Networking and Information Security


WOLAITA SODO UNIVERSITY

SCHOOL OF INFORMATICS
DEPARTMENT OF INFORMATION TECHNOLOGY

Networking and Information Security Theme

Courses Under Themes


 Data Communication and Computer Network
 System and Network Administration
 Network Device and Configuration
 Information Assurance and Security

Prepared by:
Mesele Gebre (MSc)
Akilu Elias (MSc)
Tigist Simon (MSc)

April 2015

Wolaita Sodo, Ethiopia

DATA COMMUNICATIONS
AND
COMPUTER NETWORKS

Prepared by
Mesele Gebre (MSc)

Contents
1 DATA COMMUNICATIONS and COMPUTER NETWORKS .............................................................................. 2
1.1. Introduction ....................................................................................................................................................... 1
1.2. Data Communication ......................................................................................................................................... 9
1.3. History of Computer Networks ....................................................................................................................... 10
1.4. Review Questions ............................................................................................................................................ 10
UNIT - II .................................................................................................................................................................... 11
2.1. Data Communications ..................................................................................................................................... 11
2.2. Data Transmission: .......................................................................................................................................... 11
2.3. Transmission Impairments .............................................................................................................................. 14
2.4. Data Transmission Mode ................................................................................................................................. 15
2.5. Transmission Media ........................................................................................................................................ 19
2.6. Types of Transmission Media: ............................................................................................................................ 20
2.7. Review Questions ............................................................................................................................................ 24
UNIT - III ................................................................................................................................................................... 25
3.1. Network Line Configuration ........................................................................................................................... 25
3.2. Network Devices ............................................................................................................................................. 26
3.3. Network Topologies ........................................................................................................................................ 30
3.4. Connection-Oriented and Connectionless Services ......................................................................................... 34
UNIT - IV ................................................................................................................................................................... 36
4.1. Network Protocol............................................................................................................................................. 36
4.2. TCP/IP Protocol Suite ..................................................................................................................................... 37
4.3. Four Layers of TCP/IP Model ......................................................................................................................... 38
4.4. Open System Interconnection (OSI) Reference Model ................................................................................... 41
4.5. Layers of the OSI Model ................................................................................................. 42
4.6. Network Standards and Standardization Bodies ............................................................................................. 47
4.7. Review Questions ............................................................................................................................................ 47
UNIT - V .................................................................................................................................................................... 48
5.1. LAN Technologies .......................................................................................................................................... 48
5.2. Large Networks and Wide Area Networks...................................................................................................... 49
5.3. Types of WAN Technologies .......................................................................................................................... 50
5.5. Review Questions ............................................................................................................................................ 51
UNIT - VI ................................................................................................................................................................... 52
6.1. Web Technologies ........................................................................................................................................... 52
6.2. Server-Side Programs ...................................................................................................................................... 53
6.3. Socket Programming ....................................................................................................................................... 53
6.4. Server Sockets ................................................................................................................................................. 55
6.5. Review Questions ............................................................................................................................................ 58
UNIT - VII.................................................................................................................................................................. 59
7.1. Fundamentals of Network Security ................................................................................................................. 59
7.2. Goals of Network Security .............................................................................................................................. 59
7.3. Cryptography ................................................................................................................................................... 61
7.4. Types of Cryptosystems .................................................................................................................................. 65
7.5. Firewalls .......................................................................................................................................................... 68
7.6. Virtual Private Network .................................................................................................................................. 69
7.7 Chapter Seven Review Questions .................................................................................................................... 70
2 SYSTEM AND NETWORK ADMINISTRATION ............................................................................................... 71
UNIT - I ...................................................................................................................................................................... 72
1.1 Computer System .............................................................................................................................. 72
1.2. Network Overview ........................................................................................................................... 72
1.3 Overview of the TCP/IP Protocol suites........................................................................................................... 75
1.3.1 Network Access Layer Protocols .............................................................................................................. 77
1.3.2 Transport Layer ......................................................................................................................................... 78
1.3.3 Application Layer ...................................................................................................................................... 80
1.4 Philosophy of System Administration .............................................................................................................. 80
1.4.1 What is System Administration? ............................................................................................................... 81
1.5 Review Questions ............................................................................................................................................. 85
UNIT - II .................................................................................................................................................................... 86
2. Workgroups ............................................................................................................................................................ 86
2.1. Windows Workgroups vs Homegroups and Domains .................................................................................... 87
2.1.1 Domain Controller ..................................................................................................................................... 87
2.2. Domain Controllers ......................................................................................................................................... 87
2.2.1 System requirements for a Domain Controller .......................................................................................... 88
2.3 LDAP & Windows Active Directory ......................................................................................................... 89
2.3.1 Lightweight Directory Access Protocol (LDAP) ...................................................................................... 89
2.3.2 Windows Active Directory ........................................................................................................................ 91
2.3.3 Active Directory Services.......................................................................................................................... 91
2.3.4 Logical Structure ....................................................................................................................................... 92
2.3.5 Implementation .......................................................................................................................................... 95
2.3.6 Trusting...................................................................................................................................................... 95
2.3.7 Management solutions ............................................................................................................................... 96
2.3.8 Unix integration ......................................................................................................... 96
2.4 Review Questions ............................................................................................................................................. 97
UNIT III ..................................................................................................................................................................... 98
3 USERS AND CAPABILITIES ............................................................................................................................... 98
3.1 What are File & Folder Permissions? ................................................................................................. 99
3.2. Policy Tools & Roaming Profiles.................................................................................................................... 99

3.3. Advanced Concepts I ..................................................................................................................................... 102
3.3.1 Automating Administrative Tasks – Windows Host Scripting ............................................................... 108
3.4. Advanced Concepts II ................................................................................................................................... 110
3.4.1 Proxies and Gateways ............................................................................................................................ 112
3.5. Review Questions .......................................................................................................................................... 115
UNIT- IV .................................................................................................................................................................. 116
4. Resource Monitoring & Management .............................................................................................................. 116
4.1. Monitoring System Capacity ......................................................................................................................... 117
4.1.1. What to Monitor? ................................................................................................................................... 117
4.1.2. Monitoring Tools .................................................................................................................................... 120
4.1.3. Network Printers ................................................................................................................................... 129
4.2. Remote Administration ................................................................................................................................. 131
4.2.1. Requirements to Perform Remote Administration ................................................................................. 131
4.2.2. Common Tasks/Services for which Remote Administration is Used..................................................... 131
4.2.3. Remote Desktop Solutions ..................................................................................................................... 132
4.2.4. Disadvantages of Remote Administration .............................................................................................. 134
4.3. Performance................................................................................................................................................... 134
4.4 Review Questions ........................................................................................................................................... 138
UNIT - V .................................................................................................................................................................. 139
5.1. Introduction ................................................................................................................................................... 139
5.1.1 What is Unix/Linux? ................................................................................................ 139
5.1.2. Linux Distribution ................................................................................................................................ 139
5.1.3. Unix/Linux Architecture....................................................................................................................... 140
5.1.4 Open Source ............................................................................................................................................ 140
5.1.5. Properties of Linux ............................................................................................................................... 141
5.1.6. Linux and GNU .................................................................................................................................... 142
5.1.7. About Linux Files and the File System ................................................................................................ 142
5.2. Linux Systems and Network Concepts.......................................................................................................... 145
5.2.1. What is Networking? .............................................................................................................................. 145
5.2.2 Network Configuration and Information ................................................................................................. 146
5.3. Linux User Administration ............................................................................................................................ 149
5.3.1 The Privileged root Account.................................................................................................................... 149
5.3.2. Why Users? ............................................................................................................................................ 149
5.3.3. User and Group Information................................................................................................................... 151
5.4. Linux Service/Server Administration ............................................................................................................ 158
5.4.1 Supporting a Windows Network – through SAMBA .............................................................................. 158
5.4.2. Mail Server ........................................................................................................................................... 162
5.5. Review Questions .......................................................................................................................................... 164
3 Network Device and Configuration....................................................................................................................... 165
UNIT I ...................................................................................................................................................................... 166

1 CONFIGURATION WIZARD ............................................................................................................................. 166
1.2 Network Devices ............................................................................................................................................ 166
1.3 OSI Model ...................................................................................................................................................... 166
1.4 Network device............................................................................................................................................... 174
Types of Switches............................................................................................................................................. 177
1.4.1 Media Converter ....................................................................................................... 178
1.4.2 Configuring Static VLANs ...................................................................................................................... 183
1.4.4 Automatic Discovery and Configuration Manager.................................................................................. 186
1.4.5 Wireless Mobility Configuration Menu .................................................................................................. 186
1.5. Device Schedules........................................................................................................................................... 189
1.5.1 VPN Policy Manager ............................................................................................................................... 189
UNIT -II ................................................................................................................................................................... 197
2. ROUTER AND SWITCH .................................................................................................................................... 197
2.1. Basic Configuration ....................................................................................................................................... 197
2.2. Passwords ...................................................................................................................................................... 205
2.3. Wildcard Masks ............................................................................................................................................. 206
2.4. Access Control Lists ...................................................................................................................................... 212
2.5. Remote Access .............................................................................................................................................. 214
2.6. Logging with Syslog Usage........................................................................................................................... 221
2.7. Miscellaneous ................................................................................................................................................ 226
UNIT-III: .................................................................................................................................................................. 228
3. ROUTERS ............................................................................................................................................................ 228
3.1. Router Basic Configuration ........................................................................................................................... 228
3.2. Static Routing ................................................................................................................................................ 230
3.3. Dynamic Routing........................................................................................................................................... 232
3.4. Routing Protocols Matrix .............................................................................................................................. 232
3.5. RIP ................................................................................................................................................................. 245
3.6. IGRP .............................................................................................................................................................. 248
3.7. EIGRP ........................................................................................................................................................... 250
3.8. OSPF ............................................................................................................................................................. 255
3.9. DHCP ............................................................................................................................................................ 261
3.10. NAT and PAT.............................................................................................................................................. 266
3.11. PPP .............................................................................................................................................................. 272
3.12. Frame Relay................................................................................................................................................. 273
3.13. Router on the stick121 Review Questions ................................................................................................... 281
3.13.1 Router on the stick ................................................................................................................................. 281
UNIT- IV .................................................................................................................................................................. 286
4. SWITCHES .......................................................................................................................................................... 286
4.1. Switch basic configuration ............................................................................................................................ 286
4.2. CAM Table .................................................................................................................................................... 289
4.3. Port Security .................................................................................................................................................. 290
4.4. VLANs .......................................................................................................................................................... 292
4.5. STP ................................................................................................................................................................ 298
4.6. VTP ............................................................................................................................................................... 305
4.7. Inter VLAN Communication ......................................................................................................................... 312
4.8. Miscellaneous ................................................................................................................................................ 317
4.9 REFERENCES ............................................................................................................................................... 319
4 INFORMATION ASSURANCE AND SECURITY ............................................................................................ 320
UNIT - I .................................................................................................................................................................... 321
1. Introduction .......................................................................................................................................................... 321
1.1 Information assurance process......................................................................................................... 321
1.2 Computer security .......................................................................................................................................... 322
1.2.1 Why Security? ......................................................................................................................................... 323
1.2.2 Principles of Security (Goals).................................................................................................................. 323
1.2.3 The OSI Security Architecture ................................................................................................ 324
1.3. Security Attacks, Services and Mechanisms ................................................................................................. 324
1.3.1 Security Services ..................................................................................................................................... 324
1.3.2 Security Mechanisms:.............................................................................................................................. 330
1.4. A Model for Network Security ...................................................................................................................... 330
1.5. Enterprise security ......................................................................................................................................... 332
1.6. Cyber Defense ............................................................................................................................................... 333
UNIT - II .................................................................................................................................................................. 342
2.1. Introduction ................................................................................................................................................... 342
2.2. Ecommerce Security...................................................................................................................................... 343
2.2.1. Online Security Issues Overview ........................................................................................................... 343
2.2.2 Ecommerce Security Issues ..................................................................................................................... 343
2.2.3. E-Commerce Security Tools ................................................................................................... 344
2.2.4. Purpose of Security in E-Commerce ....................................................................................... 344
2.3. What is Web Security? .................................................................................................................................. 345
2.4. Layers Involved in Web Security .................................................................................................................. 347
2.5. Public-Key Infrastructure (PKI) .................................................................................................................... 352
2.6. Enterprise information security architecture ................................................................................................. 355
2.7. An Overview of Intrusion Detection and Prevention Systems ...................................................................... 359
UNIT - III ................................................................................................................................................................. 368
3.1. Securing Private Networks ............................................................................................................................ 368
3.2. Overview of Firewall ..................................................................................................................................... 368
3.4. Introduction to Secure Network Design ........................................................................................................ 369
3.5. Designing Security into a Network ............................................................................................................... 369
3.6. Designing an Appropriate Network............................................................................................................... 370
3.7. Internal Security Practices ............................................................................................................................. 371
3.8. IPv4 and IPv6 Security .................................................................................................................................. 373
3.9. Applications of IPsec ..................................................................................................................................... 374
3.10. Transport and Tunnel Modes........................................................................................................ 378
UNIT - IV ................................................................................................................................................................. 380
4.1. Conventional Encryption principles .............................................................................................................. 380
4.2. Firewalls ........................................................................................................................................................ 404
UNIT - V .................................................................................................................................................................. 409
5.1. Viruses and Other Wildlife ............................................................................................................................ 409
5.2. A Malware Taxonomy ................................................................................................................................... 409
5.3. Viruses and Public Health ............................................................................................................................. 410
5.3.1 Viruses ...................................................................................................................................... 410
5.3.2 The history of viruses .............................................................................................................................. 410
5.3.3 Types of Viruses ...................................................................................................................................... 411
5.3.4 Executable Infectors ................................................................................................................. 411
5.3.5 Macro Viruses ......................................................................................................................................... 412
5.3.6 Virus Detection ........................................................................................................................................ 412
5.4. Computer Worms .......................................................................................................................................... 412
5.5. Trojan Horses and Backdoors ........................................................................................................ 413
5.5.1 Trojan Horses ........................................................................................................................... 413
5.5.2. Backdoors ............................................................................................................................... 414
5.5.3 Types of Trojans ....................................................................................................................... 414
5.6. Other Forms of Malicious Logic ................................................................................................................... 414
5.7. Counter measures .......................................................................................................................................... 417

1.1. Introduction
Computer networks are the basis of communication in IT. They are used in a huge variety of ways and can include
many different types of networks. A computer network is a set of computers that are connected together so that they
can share information. The earliest examples of computer networks are from the 1960s, but they have come a long
way in the half-century since then.

A computer network comprises two or more computers that are connected either by cables (wired) or WiFi
(wireless) with the purpose of transmitting, exchanging, or sharing data and resources. You build a computer
network using hardware (e.g., routers, switches, access points, and cables) and software (e.g., operating systems or
business applications).

Geographic location often defines a computer network. For example, a LAN (local area network) connects
computers in a defined physical space, like an office building, whereas a WAN (wide
area network) can connect computers across continents. The internet is the largest example of a WAN, connecting
billions of computers worldwide.

You can further define a computer network by the protocols it uses to communicate, the physical arrangement of its
components, how it controls traffic, and its purpose.
Computer networks enable communication for every business, entertainment, and research
purpose. The internet, online search, email, audio and video sharing, online commerce, livestreaming, and social
networks all exist because of computer networks.

Computer network types

As networking needs evolved, so did the computer network types that serve those needs. Here are the
most common and widely used computer network types:

 LAN (local area network): A LAN connects computers over a relatively short distance, allowing
them to share data, files, and resources. For example, a LAN may connect all the computers in an
office building, school, or hospital. Typically, LANs are privately owned and managed.
 WLAN (wireless local area network): A WLAN is just like a LAN, but connections between
devices on the network are made wirelessly.
 WAN (wide area network): As the name implies, a WAN connects computers over a wide area,
such as from region to region or even continent to continent. The internet is the largest WAN,
connecting billions of computers worldwide. You will typically see collective or distributed
ownership models for WAN management.
 MAN (metropolitan area network): MANs are typically larger than LANs but smaller than
WANs. Cities and government entities typically own and manage MANs.
 PAN (personal area network): A PAN serves one person. For example, if you have an iPhone
and a Mac, it's very likely you've set up a PAN that shares and syncs content (text messages,
emails, photos, and more) across both devices.
 SAN (storage area network): A SAN is a specialized network that provides access to block-level
storage: shared network or cloud storage that, to the user, looks and works like a storage drive
that's physically attached to a computer. (For more information on how a SAN works with block
storage, see Block Storage: A Complete Guide.)
 CAN (campus area network): A CAN is also known as a corporate area network. A CAN is
larger than a LAN but smaller than a WAN. CANs serve sites such as colleges, universities, and
business campuses.
 VPN (virtual private network): A VPN is a secure, point-to-point connection between two
network endpoints (see 'Nodes' below). A VPN establishes an encrypted channel that keeps a
user's identity and access credentials, as well as any data transferred, inaccessible to hackers.

The following are some common terms to know when discussing computer networking:

 IP address: An IP address is a unique number assigned to every device connected to a network
that uses the Internet Protocol for communication. Each IP address identifies the device's host
network and the location of the device on that network. When one device sends data to
another, the data includes a 'header' that contains the IP address of the sending device and the IP
address of the destination device.
 Nodes: A node is a connection point inside a network that can receive, send, create, or store data.
Each node requires you to provide some form of identification to receive access, like an IP
address. A few examples of nodes include computers, printers, modems, bridges, and switches. A
node is essentially any network device that can recognize, process, and transmit information to
any other network node.
 Routers: A router is a physical or virtual device that sends information contained in data packets
between networks. Routers analyze data within the packets to determine the best way for the
information to reach its ultimate destination. Routers forward data packets until they reach their
destination node.
 Switches: A switch is a device that connects other devices and manages node-to-node
communication within a network, ensuring data packets reach their ultimate destination. While a
router sends information between networks, a switch sends information between nodes in a single
network. When discussing computer networks, 'switching' refers to how data is transferred
between devices in a network.

The three main types of switching are as follows:

 Circuit switching, which establishes a dedicated communication path between nodes in a network.
This dedicated path assures the full bandwidth is available during the transmission, meaning no
other traffic can travel along that path.

 Packet switching involves breaking down data into independent components called packets which,
because of their small size, make fewer demands on the network. The packets travel through the
network to their end destination.

 Message switching sends a message in its entirety from the source node, traveling from switch to
switch until it reaches its destination node.

 Ports: A port identifies a specific connection between network devices. Each port is identified by
a number. If you think of an IP address as comparable to the address of a hotel, then ports are the
suites or room numbers within that hotel. Computers use port numbers to determine which
application, service, or process should receive specific messages (see the sketch after this list).
 Network cable types: The most common network cable types are Ethernet twisted pair, coaxial,
and fiber optic. The choice of cable type depends on the size of the network, the arrangement of
network elements, and the physical distance between devices.
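As a minimal sketch of the IP address and port ideas above (the private sample address, the example.com host name, and port 80 are illustrative assumptions, not values from the original text), Python's standard library can show how an address splits into a network part and a host part, and how an (address, port) pair identifies one service on one machine:

# Sketch only: illustrating IP addresses and ports with the Python standard library.
import ipaddress
import socket

# An address inside a network: the prefix identifies the host network,
# the remaining bits locate this particular device on that network.
iface = ipaddress.ip_interface("192.168.1.42/24")   # assumed sample address
print("device address:", iface.ip)                  # 192.168.1.42
print("host network:  ", iface.network)             # 192.168.1.0/24

# A TCP connection is identified by an (IP address, port) pair;
# port 80 conventionally belongs to a web (HTTP) server.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    peer_ip, peer_port = sock.getpeername()
    print("connected to", peer_ip, "on port", peer_port)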

Network Basic Understanding

A system of interconnected computers and computerized peripherals, such as printers, is called a computer
network. This interconnection among computers facilitates information sharing among them. Computers
may connect to each other by either wired or wireless media.

1. Network Engineering: Network engineering is a complex task that involves software, firmware,
chip-level engineering, hardware, and electric pulses. To ease network engineering, the whole
networking concept is divided into multiple layers. Each layer handles a particular task and is
independent of all the other layers, yet almost every networking task depends on all of these layers.
Layers share data between them, and they depend on each other only to take input and send output.
2. Internet:

A network of networks is called an internetwork, or simply the internet. It is the largest network in
existence on this planet. The internet connects all WANs, and it can have connections to LANs and home
networks. The internet uses the TCP/IP protocol suite and uses IP as its addressing protocol. Today the
internet is widely implemented using IPv4; because of the shortage of address space, it is gradually
migrating from IPv4 to IPv6 (see the sketch below).

The internet enables its users to share and access an enormous amount of information worldwide. It offers
services such as the WWW, FTP, email, and audio and video streaming. At a large scale, the internet works
on the client-server model.

The internet uses a very high-speed backbone of fiber optics. To interconnect the continents, fibers are laid
under the sea, known to us as submarine communication cables.
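To make the address-space point concrete, here is a small sketch (the sample addresses come from the reserved documentation ranges and are assumptions for illustration) comparing IPv4 and IPv6 with Python's standard ipaddress module:

# Sketch only: contrasting the IPv4 and IPv6 address spaces.
import ipaddress

v4 = ipaddress.ip_address("203.0.113.7")   # a 32-bit IPv4 address (documentation range)
v6 = ipaddress.ip_address("2001:db8::7")   # a 128-bit IPv6 address (documentation range)

print(v4.version, v4.max_prefixlen)        # 4 32
print(v6.version, v6.max_prefixlen)        # 6 128

# IPv4 offers 2**32 addresses, IPv6 offers 2**128, which is why the
# internet is gradually migrating from IPv4 to IPv6.
print(2 ** 32)     # 4294967296
print(2 ** 128)    # 340282366920938463463374607431768211456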

Applications of Communication & Computer Network

Computer systems and peripherals are connected to form a network. They provide numerous advantages:

 Resource sharing such as printers and storage devices


 Exchange of information by means of e-Mails and FTP
 Information sharing by using Web or Internet
 Interaction with other users using dynamic web pages
 IP phones
 Video conferences
 Parallel computing
 Instant messaging

Characteristics of a Computer Network

 Share resources from one computer to another.


 Create files and store them in one computer, access those files from the other computer(s)
connected over the network.
 Connect a printer, scanner, or a fax machine to one computer within the network and let other
computers of the network use the machines available over the network.

Following is the list of hardware required to set up a computer network.

 Network Cables
 Distributors
 Routers
 Internal Network Cards
 External Network Cards

A. Network Cables: Network cables are used to connect computers. The most commonly used cable is
Category 5 cable with RJ-45 connectors.

B. Distributors

A computer can be connected to another one via a serial port but if we need to connect many computers
to produce a network, this serial connection will not work.

The solution is to use a central body to which other computers, printers, scanners, etc. can be connected
and then this body will manage or distribute network traffic.

C. Router

A router is a type of device which acts as the central point among computers and other devices that are
part of the network. It is equipped with physical sockets called ports. Computers and other devices are connected to a
router using network cables. Nowadays, routers also come in wireless models, with which computers can be
connected without any physical cable.

D. Network Card

A network card is a necessary component of a computer, without which a computer cannot be connected
to a network. It is also known as the network adapter or Network Interface Card (NIC). Most branded
computers have a network card pre-installed. Network cards are of two types: Internal and External
Network Cards.

E. Internal Network Cards

The motherboard has a slot into which an internal network card is inserted. Internal network cards are of
two types: the first type uses a Peripheral Component Interconnect (PCI) connection, while the
second type uses Industry Standard Architecture (ISA). Network cables are required to provide network
access.

F. External Network Cards: External network cards are of two types: wireless and USB-based. A wireless network
card needs to be inserted into the motherboard; however, no network cable is required to connect to the network.

G. Universal Serial Bus (USB): A USB network card is easy to use and connects via a USB port. Computers
automatically detect the USB card and can install the drivers required to support it automatically.


Classification of Computer Networks


Computer networks are classified based on various factors. They include:

 Geographical span
 Inter-connectivity
 Administration
 Architecture

A. Geographical Span

Geographically a network can be seen in one of the following categories:

 It may span across your table, among Bluetooth-enabled devices, ranging not more than a few
meters.
 It may span across a whole building, including intermediate devices to connect all floors.
 It may span across a whole city.
 It may span across multiple cities or provinces.
 It may be one network covering the whole world.

Personal Area Network

A Personal Area Network (PAN) is the smallest network and is very personal to a user. It may include Bluetooth-
enabled devices or infrared-enabled devices. A PAN has a connectivity range of up to 10 meters. A PAN may include
a wireless computer keyboard and mouse, Bluetooth-enabled headphones, wireless printers, and TV remotes.

For example, a piconet is a Bluetooth-enabled Personal Area Network which may contain up to 8 devices connected
together in a master-slave fashion.

Local Area Network

A computer network spanning a single building and operated under a single administrative system is generally
termed a Local Area Network (LAN). Usually, a LAN covers an organization's offices, schools, colleges, or
universities. The number of systems connected in a LAN may vary from as few as two to as many as 16 million.

A LAN provides a useful way of sharing resources between end users. Resources such as printers, file servers,
scanners, and internet access are easily shared among computers.

LANs are composed of inexpensive networking and routing equipment. A LAN may contain local servers providing
file storage and other locally shared applications. It mostly operates on private IP addresses (see the sketch below)
and does not involve heavy routing. A LAN works under its own local domain and is controlled centrally.

A LAN uses either Ethernet or Token Ring technology. Ethernet is the most widely employed LAN technology and
usually uses a star topology, while Token Ring is rarely seen. A LAN can be wired, wireless, or both at once.
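As a small sketch of the private-address point above (the sample addresses are assumptions chosen for illustration; 8.8.8.8 is a well-known public DNS address), Python's standard ipaddress module can report whether an address falls in the private ranges typically used inside a LAN:

# Sketch only: which sample addresses belong to the private ranges used inside LANs.
import ipaddress

samples = ["10.0.0.5", "172.16.4.1", "192.168.1.42", "8.8.8.8"]
for text in samples:
    addr = ipaddress.ip_address(text)
    kind = "private (typical LAN address)" if addr.is_private else "public (routable on the internet)"
    print(f"{text:>15} -> {kind}")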

Metropolitan Area Network

A Metropolitan Area Network (MAN) generally expands throughout a city, such as a cable TV network. It can be
in the form of Ethernet, Token Ring, ATM, or Fiber Distributed Data Interface (FDDI).

Metro Ethernet is a service provided by ISPs that enables its users to expand their Local Area
Networks. For example, a MAN can help an organization connect all of its offices in a city.

The backbone of a MAN is high-capacity, high-speed fiber optics. A MAN works in between a Local Area Network
and a Wide Area Network and provides an uplink for LANs to WANs or the internet.

Wide Area Network

As the name suggests, a Wide Area Network (WAN) covers a wide area, which may span across provinces and
even a whole country. Generally, telecommunication networks are Wide Area Networks. These networks provide
connectivity to MANs and LANs. Since they are equipped with a very high-speed backbone, WANs use very
expensive network equipment.

A WAN may use advanced technologies such as Asynchronous Transfer Mode (ATM), Frame Relay, and
Synchronous Optical Network (SONET). A WAN may be managed by multiple administrations.

Internetwork

A network of networks is called an internetwork, or simply the internet. It is the largest network in existence on this
planet. The internet connects all WANs, and it can have connections to LANs and home networks. The internet uses
the TCP/IP protocol suite and uses IP as its addressing protocol. Today the internet is widely implemented using
IPv4; because of the shortage of address space, it is gradually migrating from IPv4 to IPv6.

The internet enables its users to share and access an enormous amount of information worldwide through services
such as the WWW, FTP, email, and audio and video streaming. At a large scale, the internet works on the
client-server model.

The internet uses a very high-speed backbone of fiber optics. To interconnect the continents, fibers are laid under the
sea, known to us as submarine communication cables.

The internet is widely used through World Wide Web services built on HTML-linked pages and accessed by client
software known as web browsers. When a user requests a page located on a web server anywhere in the world using
a web browser, the web server responds with the proper HTML page, and the communication delay is very low.

The internet serves many purposes and is involved in many aspects of life. Some of them are:

 Web sites
 E-mail
 Instant Messaging
 Blogging
 Social Media
 Marketing
 Networking
 Resource Sharing
 Audio and Video Streaming

B. Inter-Connectivity

Components of a network can be connected to each other in different ways. By connectedness we mean
either logically, physically, or both.

 Every single device can be connected to every other device on the network, making the network a mesh.
 All devices can be connected to a single medium but be geographically disconnected, creating a bus-like
structure.
 Each device is connected to its left and right peers only, creating a linear structure.
 All devices connected together to a single central device create a star-like structure.
 All devices connected arbitrarily, using all of the previous ways to connect each other, result in a hybrid
structure.

C. Administration: From an administrator's point of view, a network can be a private network, which belongs to a
single autonomous system and cannot be accessed outside its physical or logical domain, or a public network, which
can be accessed by all.

D. Network Architecture: Computer networks can be classified into various types, such as client-server, peer-
to-peer, or hybrid, depending on their architecture (a minimal client-server sketch follows the list below).

 There can be one or more systems acting as servers. The others, being clients, request the server to serve
their requests; the server takes and processes requests on behalf of the clients.
 Two systems can be connected point-to-point, or in a back-to-back fashion. They both reside at the
same level and are called peers.
 There can be a hybrid network which involves the network architecture of both of the above types.
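The following is a minimal sketch of the client-server idea from the list above; the loopback address 127.0.0.1 and port 5050 are arbitrary choices for illustration, not values from the original text.

# Sketch only: one server process answers a request from one client process.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050   # assumed loopback address and port

# Server side: bind and listen first so the client below can always connect.
srv = socket.create_server((HOST, PORT))

def handle_one_client():
    conn, _addr = srv.accept()             # wait for a client
    with conn:
        request = conn.recv(1024)          # take the client's request
        conn.sendall(b"echo: " + request)  # process it and reply

threading.Thread(target=handle_one_client, daemon=True).start()

# Client side: connect to the server, send a request, and read the reply.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"hello")
    print(cli.recv(1024).decode())         # prints: echo: hello

srv.close()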


Generally, networks are distinguished based on their geographical span. A network can be as small as
distance between your mobile phone and its Bluetooth headphone and as large as the internet itself,
covering the whole geographical world.
1.2. Data Communication
Data communications refers to the transmission of digital data between two or more computers; a computer
network, or data network, is a telecommunications network that allows computers to exchange data. The physical
connection between networked computing devices is established using either cable media or wireless media. The
best-known computer network is the Internet.

Data communications and networking are changing the way we do business and the way we live.
Business decisions have to be made ever more quickly, and the decision makers require immediate access to
accurate information. Why wait a week for that report from Germany to arrive by mail when it could appear almost
instantaneously through computer networks?
Businesses today rely on computer networks and internetworks. But before we ask how quickly we can get hooked
up, we need to know how networks operate, what types of technologies are available, and which design best fills
which set of needs.

The development of the personal computer brought about tremendous changes for business, industry, science, and
education. A similar revolution is occurring in data communications and networking. Technological advances are
making it possible for communication links to carry more and faster signals. As a result, services are evolving to
allow use of this expanded capacity. For example, established telephone services such as conference calling, call
waiting, voice mail, and caller ID have been extended.

Research in data communications and networking has resulted in new technologies. One goal is to be able to
exchange data such as text, audio, and video from all points in the world. We want to access the Internet to
download and upload information quickly and accurately and at any
time.

When we communicate, we are sharing information. This sharing can be local or remote. Between individuals,
local communication usually occurs face to face, while remote communication takes place over distance. The term
telecommunication, which includes telephony, telegraphy, and television, means communication at a distance (tele
is Greek for "far").

The word data refers to information presented in whatever form is agreed upon by the parties creating and using the
data. Data communications are the exchange of data between two devices via some form of transmission medium
such as a wire cable. For data communications to occur, the communicating devices must be part of a communication
system made up of a combination of hardware (physical equipment) and software (programs). The effectiveness of a
data communications system depends on four fundamental characteristics: delivery, accuracy, timeliness, and jitter.

1. Delivery: The system must deliver data to the correct destination. Data must be received by the intended device
or user and only by that device or user.
2. Accuracy: The system must deliver the data accurately. Data that have been altered in transmission and left
uncorrected are unusable.
3. Timeliness: The system must deliver data in a timely manner. Data delivered late are useless.
In the case of video and audio, timely delivery means delivering data as they are produced, in the same order that
they are produced, and without significant delay. This kind of delivery is called real-time transmission.
4. Jitter: Jitter refers to the variation in the packet arrival time. It is the uneven delay in the delivery of audio or video packets. For example, let us assume that video packets are sent every 30 ms. If some of the packets arrive with 30-ms delay and others with 40-ms delay, an uneven quality in the video is the result.
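To make the idea of jitter measurable, here is a minimal Python sketch (the arrival times are invented for illustration) that computes how much each packet's inter-arrival gap deviates from the nominal 30-ms sending interval:

```python
# A minimal sketch of measuring jitter: packets are sent every 30 ms,
# but arrive with varying delay. Jitter is the variation in inter-arrival time.
# The arrival times below are hypothetical values used only for illustration.

arrival_times_ms = [0, 30, 62, 90, 125, 155]   # when each packet actually arrived

# Inter-arrival gaps between consecutive packets
gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]

# Deviation of each gap from the nominal 30 ms sending interval
nominal_ms = 30
deviations = [abs(g - nominal_ms) for g in gaps]

print("gaps (ms):      ", gaps)         # e.g. [30, 32, 28, 35, 30]
print("jitter per gap: ", deviations)   # uneven gaps => visible jitter in audio/video
print("average jitter: ", sum(deviations) / len(deviations), "ms")
```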

This material should teach you the basics of Data Communication and Computer Networks (DCN) and will also take you through various advanced concepts related to them.
1.3. History of Computer Networks
Computer networking as we know it today may be said to have gotten its start with the ARPANET development in the late 1960s and early 1970s. Prior to that time there were computer vendor "networks" designed primarily to connect terminals and remote job entry stations to a mainframe. But the notion of networking between computers viewing each other as equal peers to achieve "resource sharing" was fundamental to the ARPANET design [1]. The other strong emphasis of the ARPANET work was its reliance on the then novel technique of packet switching to efficiently share communication resources among "bursty" users, instead of the more traditional message or circuit switching. Although the term "network architecture" was not yet widely used, the initial ARPANET design did have a definite structure and introduced another key concept: protocol layering, or the idea that the total communications functions could be divided into several layers, each building upon the services of the one below. The original design had three major layers: a network layer, which included the network access and switch-to-switch (IMP-to-IMP) protocols; a host-to-host layer (the Network Control Protocol, or NCP); and a "function-oriented protocol" layer, where specific applications such as file transfer, mail, speech, and remote terminal support were provided [2]. Similar ideas were being pursued in several other research projects around the world, including the Cyclades network in France [3], the National Physical Laboratory Network in England [4], and the Ethernet system [5] at Xerox PARC in the USA. Some of these projects focused more heavily on the potential for high-speed local networks such as the early 3-Mbps Ethernet. Satellite and radio channels for mobile users were also a topic of growing interest.

By 1973 it was clear to the networking vanguard that another protocol layer needed to be inserted into the protocol
hierarchy to accommodate the interconnection of diverse types of individual networks.

The basis for the network interconnection approach developing in this community was to make use of a variety of individual networks, each providing only a simple "best effort" or "datagram" transmission service. Reliable virtual circuit services would then be provided on an end-to-end basis with TCP (or a similar protocol) in the hosts. During the same time period, public data networks (PDNs) were emerging under the auspices of CCITT, aimed at providing more traditional virtual circuit types of network service via the newly defined X.25 protocol. The middle and late 1970s saw networking conferences dominated by heated debates over the relative merits of circuit versus packet switching and datagrams versus X.25 virtual circuits [8].

A computer network is a group of computers that can transmit, receive, and exchange voice, data, and video traffic. A network connection can be set up with the help of either cable or wireless media. In modern times, computer networks are very important as information technology is spreading rapidly all over the world. Networking and data communication are essential factors in the rise of information technology, since most of today's technology, including everyday gadgets, is built on networked systems. ARPANET began this networking era long ago.

1.4. Review Questions

 What is computer network?


 What is Data communication means?
 Discus on History of Computer network?
 What is the Purpose of Computer network?

UNIT - II

2.1. Data Communications


Data Communications is the transfer of data or information between a source and a receiver. The source transmits
the data and the receiver receives it. The actual generation of the information is not part of Data Communications
nor is the resulting action of the information at the receiver.
Data Communication is interested in the transfer of data, the method of transfer and the preservation of the data
during the transfer process.

The general Communication Model

Figure 2. 1 Data communication model

1. An information source, which produces a message.


2. A transmitter, which encodes the message into signals.
3. A channel, to which signals are adapted for transmission.
4. A receiver, which 'decodes' (reconstructs) the message from the signal.
5. A destination, where the message arrives.
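To make the five components concrete, the following minimal Python sketch (all names are invented for this illustration, and the channel is assumed to be ideal) walks a message through the source-transmitter-channel-receiver-destination chain:

```python
# Illustrative sketch of the general communication model:
# source -> transmitter (encode) -> channel -> receiver (decode) -> destination.

def transmitter(message: str) -> bytes:
    # Encode the message into a signal (here: UTF-8 bytes stand in for a signal).
    return message.encode("utf-8")

def channel(signal: bytes) -> bytes:
    # An ideal channel simply carries the signal; a real one may add noise or attenuation.
    return signal

def receiver(signal: bytes) -> str:
    # Decode (reconstruct) the message from the received signal.
    return signal.decode("utf-8")

message = "hello"                                      # produced by the information source
received = receiver(channel(transmitter(message)))
print(received)                                        # arrives at the destination: "hello"
```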

Activity 2.1

1. What does data transmission mean? Discuss in detail.


2. Discuss analog and digital signals.
3. How do transmission impairments happen during data transmission?

2.2. Data Transmission:


Data transmission is the process of sending digital or analog data over a communication medium to one or more
computing, network, communication or electronic devices. It enables the transfer and communication of devices in
a point-to-point, point-to-multipoint and multipoint-to-multipoint environment.

Data transmission can be analog and digital but is mainly reserved for sending and receiving digital data. It works
when a device or piece of equipment, such as a computer, intends to send a data object or file to one or multiple
recipient devices, like a computer or server. The digital data originates from the source device in the form of
discrete signals or digital bit streams. These data streams/signals are placed over a communication medium, such as
physical copper wires, wireless carriers and optical fiber, for delivery to the destination/recipient device. Moreover,
each outward signal can be baseband or pass band.
In addition to external communication, data transmission also may be internally carried to a device. For
example, the random access memory (RAM) or hard disk that sends data to a processor is also a form of
data transmission.

Analog and Digital Transmission

Analog transmission is a method of conveying voice, data, image, signal, or video information. It uses a
continuous signal varying in amplitude, phase, or another property that is in proportion to
a specific characteristic of a variable. Analog transmission could mean that the transmission is a transfer
of an analog source signal which uses an analog modulation method (or a variance of one or more
properties of high frequency periodic waveform, also known as a carrier signal). FM and AM are
examples of such a modulation. The transmission could also use no modulation at all. It is most notably
an information signal that is constantly varying.

Data transmission (also known as digital transmission or digital communications) is a literal transfer of
data over a point to point (or point to multipoint) transmission medium –such as copper wires, optical
fibers, wireless communications media, or storage media. The data that is to be transferred is often
represented as an electro-magnetic signal (such as a microwave). Digital transmission transfers messages
discretely. These messages are represented by a sequence of pulses via a line code. However, these
messages can also be represented by a limited set of wave forms that always vary. Either way, they are
represented using a digital modulation
method.

Analog transmission can be conveyed in no fewer than four ways: through a twisted pair or coax cable, through a fiber optic cable, through the air, or through water. There are, however, only two basic types of analog transmission. The first is known as amplitude modulation (or AM). This is a technique used in electronic communication and works by varying the strength (amplitude) of a transmitted signal in relation to the information that is being sent. The second is known as frequency modulation (or FM). This type of communication conveys information over a carrier wave, just as AM transmission does. However, FM communication varies the frequency of the transmitted signal.
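As a rough numerical illustration of the two schemes, not a complete transmitter, the sketch below samples an AM and an FM waveform derived from the same message signal; the carrier, message, and deviation values are arbitrary choices for the example:

```python
import math

# Illustrative only: sample an AM and an FM waveform for the same message signal.
# Frequencies and amplitudes are arbitrary values chosen for the example.
fc = 1000.0      # carrier frequency (Hz)
fm = 50.0        # message frequency (Hz)
fs = 8000.0      # sampling rate (Hz)

def message(t):
    return math.sin(2 * math.pi * fm * t)              # the information signal

def am_sample(t, m_index=0.5):
    # AM: the carrier's amplitude follows the message.
    return (1 + m_index * message(t)) * math.cos(2 * math.pi * fc * t)

def fm_sample(t, freq_dev=200.0):
    # FM: the carrier's instantaneous frequency follows the message
    # (using the integral of the sine message, which gives a -cos term).
    phase = 2 * math.pi * fc * t - (freq_dev / fm) * math.cos(2 * math.pi * fm * t)
    return math.cos(phase)

for n in range(5):
    t = n / fs
    print(f"t={t:.5f}s  AM={am_sample(t):+.3f}  FM={fm_sample(t):+.3f}")
```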

Data that is transmitted via digital transmission may be digital messages that originate from a data source (a computer or a keyboard, for example). However, this transmitted data may also come from an analog signal (a phone call or a video signal, for example). It may then be digitized into a bit stream using pulse code modulation (PCM) or even more advanced source coding schemes. The coding of the data is carried out using codec equipment. There are a number of differences between analog and digital transmission, and it is important to understand how conversions between analog and digital occur. Let's look first at the older form of transmission, analog.

A. Analog Transmission

An analog wave form (or signal) is characterized by being continuously variable along amplitude and
frequency. In the case of telephony, for instance, when you speak into a handset, there are changes in the
air pressure around your mouth. Those changes in air pressure fall onto the
handset, where they are amplified and then converted into current, or voltage fluctuations. Those
fluctuations in current are an analog of the actual voice pattern—hence the use of the term analog to
describe these signals.

When it comes to an analog circuit (what we also refer to as a voice-grade line), we also need to define the frequency band in which it operates. The human voice, for example, can typically generate frequencies from 100Hz to 10,000Hz, for a bandwidth of 9,900Hz. But the ear does not require a vast range of frequencies to elicit meaning from ordinary speech; the vast majority of sounds we make that constitute intelligible speech fall between 250Hz and 3,400Hz. So, the phone company typically allotted a total bandwidth of 4,000Hz for voice transmission.
Remember that the total frequency spectrum of twisted-pair is 1MHz. To provision a voice-grade analog circuit, bandwidth-limiting filters are put on that circuit to filter out all frequencies above 4,000Hz. That's why analog circuits can conduct only fairly low-speed data communications.
The maximum data rate over an analog facility is 33.6Kbps when there are analog loops at either end.


Analog facilities have limited bandwidth, which means they cannot support high-speed data. Another
characteristic of analog is that noise is accumulated as the signal traverses the network.
As the signal moves across the distance, it loses power and becomes impaired by factors such as moisture
in the cable, dirt on a contact, and critters chewing on the cable somewhere in the network. By the time
the signal arrives at the amplifier, it is not only attenuated, it is also impaired and noisy. One of the
problems with a basic amplifier is that it is a dumb device. All it knows how to do is to add power, so it
takes a weak and impaired signal, adds power to it, and brings it back up to its original power level. But
along with an increased signal, the amplifier passes along an increased noise level. So in an analog
network, each time a signal goes through an amplifier, it accumulates noise. After you mix together
coffee and cream, you can no longer separate them. The same concept applies in analog networks: After
you mix the signal and the noise, you can no longer separate the two, and, as a result, you end up with
very high error rates.

B. Digital Transmission

Digital transmission is quite different from analog transmission. For one thing, the signal is much simpler.
Rather than being a continuously variable wave form, it is a series of discrete pulses, representing one bits
and zero bits. Each computer uses a coding scheme that defines what combinations of ones and zeros
constitute all the characters in a character set (that is, lowercase
letters, uppercase letters, punctuation marks, digits, keyboard control functions).

How the ones and zeros are physically carried through the network depends on whether the network is
electrical or optical. In electrical networks, one bits are represented as high voltage, and zero bits are
represented as null, or low voltage. In optical networks, one bits are represented
by the presence of light, and zero bits are represented by the absence of light. The ones and zeros—the
on/off conditions—are carried through the network, and the receiving device repackages the ones and
zeros to determine what character is being represented. Because a digital signal is easier to reproduce than
an analog signal, we can treat it with a little less care in the network. Rather than use dumb amplifiers,
digital networks use regenerative repeaters, also referred to as signal regenerators. As a strong, clean,
digital pulse travels over a distance, it loses power, similar to an analog signal. The digital pulse, like an
analog signal, is eroded by impairments in the network. But the weakened and impaired signal enters the
regenerative repeater, where the repeater examines the signal to determine what was supposed to be a one
and what was supposed to be a zero. The repeater regenerates a new signal to pass on to the next point in
the network, in essence eliminating noise and thus vastly improving the error rate.
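The difference between a dumb amplifier and a regenerative repeater can be illustrated with a small sketch. The voltage levels, noise range, and decision threshold below are made up purely for illustration: the repeater decides, sample by sample, whether a one or a zero was meant and re-emits a clean pulse, while the amplifier boosts signal and noise alike.

```python
import random

# Hypothetical received samples: clean levels would be 0.0 V (zero bit) and 1.0 V (one bit),
# but attenuation and noise have distorted them along the way.
sent_bits = [1, 0, 1, 1, 0, 0, 1, 0]
received_voltages = [b * 0.6 + random.uniform(-0.2, 0.2) for b in sent_bits]

def amplify(samples, gain=1.8):
    # A "dumb" amplifier boosts signal AND noise alike.
    return [v * gain for v in samples]

def regenerate(samples, threshold=0.3):
    # A regenerative repeater decides 1 or 0 and re-emits a clean pulse,
    # discarding the accumulated noise entirely.
    return [1 if v > threshold else 0 for v in samples]

print("amplified  :", [round(v, 2) for v in amplify(received_voltages)])
print("regenerated:", regenerate(received_voltages))   # matches sent_bits (noise removed)
print("original   :", sent_bits)
```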

2.3. Transmission Impairments


The signal received may differ from the signal transmitted. The effect will degrade the signal quality for
analog signals and introduce bit errors for digital signals. There are three types of transmission
impairments: attenuation, delay distortion, and noise.

(1) Attenuation: This impairment is caused by the strength of a signal degrading with distance over a transmission link. Three considerations are related to attenuation:
 The received signal should have sufficient strength to be intelligibly interpreted by the receiver. An amplifier or a repeater is needed to boost the strength of the signal.
 A signal should be maintained at a level higher than the noise so that errors will not be generated. Again, an amplifier or a repeater can be used.
 Attenuation is an increasing function of frequency, with more attenuation at higher frequencies than at lower frequencies. An equalizer can smooth out the effect of attenuation across frequency bands, and an amplifier can amplify high frequencies more than low frequencies.
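Attenuation is commonly quantified in decibels as 10 log10(input power / output power). A small Python sketch with hypothetical power levels shows the calculation and the gain an amplifier or repeater would have to add:

```python
import math

def attenuation_db(p_in_watts: float, p_out_watts: float) -> float:
    # Attenuation in decibels: 10 * log10(input power / output power).
    return 10 * math.log10(p_in_watts / p_out_watts)

# Hypothetical link: a signal enters at 10 mW and arrives at 0.5 mW.
loss = attenuation_db(0.010, 0.0005)
print(f"link attenuation: {loss:.1f} dB")            # about 13.0 dB

# An amplifier/repeater must add roughly the same gain to restore the level.
print(f"amplifier gain needed: {loss:.1f} dB")
```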
(2) Delay distortion: The velocity of propagation of a signal through a guided medium varies with frequency; it is highest near the center frequency and falls off toward the two edges of the band. Equalization techniques can be used to smooth out delay distortion. Delay distortion is a major cause of the timing jitter problem, where the receiver clock deviates from the incoming signal in a random fashion, so that an incoming signal might arrive earlier or later than expected.

(3) Noise: This impairment occurs when an unwanted signal is inserted between transmission and reception. There are four types of noise:

 Thermal noise: This noise is a function of temperature and bandwidth. It cannot be eliminated. The thermal noise power is proportional to the temperature and the bandwidth, as shown in the equation: thermal noise N = k (Boltzmann's constant) * temperature * bandwidth.
 Intermodulation noise: This noise is caused by nonlinearity in the transmission system. Two signals at frequencies f1 and f2 can produce energy at f1 + f2 or |f1 - f2| and so affect legitimate signals at those frequencies.
 Cross talk: This type of noise is caused by electrical coupling in a nearby twisted pair or by an unwanted signal picked up by a microwave antenna. For example, sometimes when you are on the telephone, you might hear someone else's conversation due to the cross talk problem.
 Impulse noise: Irregular pulses of short duration and relatively high amplitude cause impulse noise. This noise is also caused by lightning and faults in the communication system. It is only a minor annoyance for analog data, but it is a serious problem for digital data. For example, a 0.01-second noise spike at 4800 bps destroys about 50 bits.
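As a quick sanity check of the thermal noise relation N = kTB (k is Boltzmann's constant) and of the impulse noise example above, the following sketch uses illustrative values of temperature and bandwidth:

```python
import math

k = 1.38e-23                 # Boltzmann's constant, J/K

def thermal_noise_watts(temp_kelvin: float, bandwidth_hz: float) -> float:
    # Thermal (white) noise power: N = k * T * B
    return k * temp_kelvin * bandwidth_hz

# Illustrative values: room temperature (290 K) and a 4 kHz voice channel.
n = thermal_noise_watts(290, 4000)
print(f"thermal noise power: {n:.3e} W")
print(f"in dBW: {10 * math.log10(n):.1f} dBW")

# Impulse noise example from the text: a 0.01 s spike on a 4800 bps line
# wipes out 0.01 * 4800 = 48 (about 50) bits.
print("bits destroyed:", 0.01 * 4800)
```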

Activity 2.2

 Discuss and list some examples of the half-duplex transmission mode.


 Explain in detail the parallel and serial data transmission modes.
 Discuss the unguided transmission media, using examples.
 Discuss some common guided transmission media, with examples.

2.4. Data Transmission Mode


Data Transmission mode defines the direction of the flow of information between two communication
devices. It is also called Data Communication or Directional Mode. It specifies the direction of the flow
of information from one place to another in a computer network.
In the Open System Interconnection (OSI) Layer Model, the Physical Layer is dedicated to data
transmission in the network. It mainly decides the direction of data in which the data needs to travel to
reach the receiver system or node.

So, in this module, we will learn about different data transmission modes based on the direction of
exchange, synchronization between the transmitter and receiver, and the number of bits sent
simultaneously in a computer network.

According to the Direction of Exchange of Information:

1. Simplex
Simplex is the data transmission mode in which the data can flow only in one direction, i.e., the
communication is unidirectional. In this mode, a sender can only send data but cannot receive it.
Similarly, a receiver can only receive data but cannot send it.

This transmission mode is not very popular because we cannot perform two-way communication between the sender and receiver in this mode. It is mainly used in the business field, as in sales, where no corresponding reply is required. It is similar to a one-way street. For example: radio and TV transmission, keyboard, mouse, etc.

2. Half-Duplex
Half-Duplex is the data transmission mode in which the data can flow in both directions but in one
direction at a time. It is also referred to as Semi-Duplex. In other words, each station can both transmit
and receive the data but not at the same time. When one device is sending the other can only receive and
vice-versa.

In this type of transmission mode, the entire capacity of the channel can be utilized for each direction.
Transmission lines can carry data in both directions, but the data can be sent only in one direction at a
time.

This type of data transmission mode can be used in cases where there is no need for communication in both directions at the same time. It can be used for error detection, when the sender does not send or the receiver does not receive the data properly; in such cases, the data needs to be transmitted again by the sender. For example: police radio, Internet browsers, etc.

3. Full-Duplex
Full-Duplex is the data transmission mode in which the data can flow in both directions at the same time.
It is bi-directional in nature. It is two-way communication in which both the stations can transmit and
receive the data simultaneously.

Full-Duplex mode has double bandwidth as compared to the half-duplex. The capacity of the channel is
divided between the two directions of communication. This mode is used when communication in both
directions is required simultaneously. For Example, a Telephone Network, in which both the persons can
talk and listen to each other simultaneously.

According to the synchronization between the transmitter and the receiver:

1. Synchronous
The Synchronous transmission mode is a mode of communication in which the bits are sent one after
another without any start/stop bits or gaps between them. Actually, both the sender and receiver are paced
by the same system clock. In this way, synchronization is achieved.
In the synchronous mode of data transmission, bytes are transmitted as blocks in a continuous stream of bits. Since there are no start and stop bits in the message block, it is the responsibility of the receiver to group the bits correctly. The receiver counts the bits as they arrive and groups them into eight-bit units. The receiver continuously receives the information at the same rate that the transmitter sends it, and it keeps listening to the line even when no bits are being transmitted.
In synchronous mode, the bits are sent successively with no separation between characters, so it becomes necessary to insert some synchronization elements with the message; this is called "character-level synchronization".
For example, if there are two bytes of data, say (10001101, 11001011), they will be transmitted in synchronous mode as the continuous bit stream 1000110111001011.

For example, the communication between the CPU and RAM is synchronous.

2. Asynchronous
The Asynchronous transmission mode is a mode of communication in which a start bit and a stop bit are introduced into the message during transmission. The start and stop bits ensure that the data is transmitted correctly from the sender to the receiver.
Generally, the start bit is '0' and the stop bit is '1'. Asynchronous here means 'asynchronous at the byte level', but the bits are still synchronized: the duration of each bit within a character is the same.

In the asynchronous mode of communication, data bits can be sent at any point in time. The messages are sent at irregular intervals and only one data byte can be sent at a time. This type of transmission mode is best suited for short-distance data transfer.
For example, if there are two bytes of data, say (10001101, 11001011), they will be transmitted in asynchronous mode as 0 10001101 1 and 0 11001011 1, with each byte framed by a start bit and a stop bit and possible idle time between frames.

For Example, Data input from a keyboard to the computer.
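To make the two framing styles concrete, the sketch below builds the transmitted bit streams for the two example bytes; the framing used (one '0' start bit, one '1' stop bit, no parity bit) is simply the conventional choice described above:

```python
# Build the transmitted bit streams for the two example bytes
# 10001101 and 11001011 in synchronous and asynchronous mode.

data_bytes = ["10001101", "11001011"]

# Synchronous: bytes are sent back to back as one continuous block,
# with no start/stop bits; the receiver groups bits into 8-bit units itself.
synchronous_stream = "".join(data_bytes)

# Asynchronous: each byte is framed with a start bit (0) and a stop bit (1);
# idle time may occur between frames (shown here as a single space).
asynchronous_stream = " ".join("0" + b + "1" for b in data_bytes)

print("synchronous :", synchronous_stream)    # 1000110111001011
print("asynchronous:", asynchronous_stream)   # 0100011011 0110010111
```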

According to the number of bits sent simultaneously in the network:

1. Serial
The Serial data transmission mode is a mode in which the data bits are sent serially one after the other at a
time over the transmission channel.

It needs a single transmission line for communication. The data bits are received in synchronization with
one another. So, there is a challenge of synchronizing the transmitter and receiver.

In serial data transmission, the system takes several clock cycles to transmit the data stream. In this mode,
the data integrity is maintained, as it transmits the data bits in a specific order, one after the other.
This type of transmission mode is best suited for long-distance data transfer, or for cases where the amount of data being sent is relatively small.
For Example, Data transmission between two computers using serial ports.

2. Parallel
The Parallel data transmission mode is a mode in which the data bits are sent parallelly at a time. In other
words, there is a transmission of n-bits at the same time simultaneously.

Multiple transmission lines are used in this mode of transmission, so a whole group of bits (for example, a byte) can be transmitted in a single system clock cycle. This mode of transmission is used when a large amount of data has to be sent in a short duration of time. It is mostly used for short-distance communication.
For n-bits, we need n-transmission lines. So, the complexity of the network increases but the transmission
speed is high. If two or more transmission lines are too close to each other, then there may be a chance of
interference in the data, degrading the signal quality.
For Example, Data transmission between computer and printer.
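A rough way to picture the difference is to follow a single byte: serial transmission pushes its bits one per clock tick over one line, while parallel transmission places all eight bits on eight lines in one tick. The sketch below is purely illustrative and does not model any particular interface:

```python
# Illustrative comparison of serial vs parallel transfer of one byte (0xA7).

byte_value = 0xA7
bits = [(byte_value >> i) & 1 for i in range(7, -1, -1)]   # MSB first: [1,0,1,0,0,1,1,1]

# Serial: one line, one bit per clock cycle -> 8 clock cycles for one byte.
print("serial (1 line, 8 clock ticks):")
for tick, bit in enumerate(bits):
    print(f"  tick {tick}: line0 = {bit}")

# Parallel: eight lines, all bits in the same clock cycle -> 1 clock cycle per byte.
print("parallel (8 lines, 1 clock tick):")
print("  tick 0:", {f"line{i}": bit for i, bit in enumerate(bits)})
```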

Hence, after learning the various transmission modes, we can conclude that some points need to be
considered when selecting a data transmission mode:

 Transmission Rate.
 The Distance that it covers.
 Cost and Ease of Installation.
 Resistance to environmental conditions.

This is all about the various transmission modes in a computer network.

2.5. Transmission Media


Transmission media are the communication channels used to carry data from transmitters to receivers by means of electromagnetic signals. Their main function is to carry the data in the form of bits through a Local Area Network (LAN). In data communication, a transmission medium works like a physical path between the sender and the receiver. For instance, in a copper cable network the bits travel in the form of electrical signals, whereas in a fiber network the bits travel in the form of light pulses.
The quality and characteristics of data transmission are determined by the characteristics of the medium and the signal. The properties of different transmission media are delay, bandwidth, maintenance, cost, and ease of installation.

Transmission media is a pathway that carries the information from sender to receiver. We use different
types of cables or waves to transmit data. Data is transmitted normally through electrical or
electromagnetic signals.
An electrical signal is in the form of current. An electromagnetic signal is a series of electromagnetic energy pulses at various frequencies. These signals can be transmitted through copper wires, optical fibers, the atmosphere, water, and vacuum. Different media have different properties such as bandwidth, delay, cost, and ease of installation and maintenance. Transmission media are also called communication channels.

2.6. Types of Transmission Media:
Transmission media is broadly classified into two groups.

Guided Media
This kind of transmission media is also known as wired or bounded media. In this type, the signals are transmitted directly and are confined to a narrow physical path through physical links. The main features of guided media are that it is secure, offers high speed, and is used over short distances. This kind of media is classified into three types, which are discussed below.

A). Twisted Pair Cable


It includes two separately insulated conductor wires. Normally, several pairs of such cables are packaged together in a protective cover. The insulated copper wires are arranged in a regular spiral pattern.

 The oldest, least expensive, and most commonly used media


 Reduced susceptibility to interference compared to straight parallel wires (two straight parallel wires tend to act as an antenna and pick up extraneous signals when compared to twisted pairs)
 Highly susceptible to electrical noise, interference, and 'tapping' of the signal as compared to the other guided media
 Usually used for multiplexing multiple telephone lines; also used for transmitting digital data for point-to-point links (e.g. the leased line for AAUNet)
 Arrangements of twisted pairs into groups are used for high-speed (10-100 Mbps) LANs.

This is the most frequently used type of transmission media and it is available in two types.

1. UTP (Unshielded Twisted Pair)


This UTP cable has some capacity to block interference without depending on a physical shield, and it is used in telephonic applications. The advantages of UTP are low cost, very simple installation, and high speed. The disadvantages of UTP are that it is liable to external interference, transmits over shorter distances, and has less capacity.

Types of UTP
 Category 3 cable: with 16 MHz bandwidth, used for telco voice and horizontal wiring for 10-Mbps 10Base-T Ethernet or 4-Mbps Token Ring.
 Category 4 cable: with 20 MHz bandwidth, used for 16-Mbps Token Ring.
 Category 5 cable: the single most popular flavor; with 100 MHz bandwidth, it can handle up to 100 Mbps.

2. Shielded Twisted Pair


STP cable includes a special jacket for blocking external interference. It is used in high data rate Ethernet and in the voice and data channels of telephone lines.
The main advantages of STP cable are good speed and the elimination of crosstalk.

The main disadvantages are that it is hard to manufacture and install, and it is also expensive and bulky.

B) Coaxial Cable
This cable contains an external plastic cover and it includes two parallel conductors where each conductor
includes a separate protection cover. This cable is used to transmit data in two modes like baseband mode
as well as broadband mode. This cable is widely used in cable TVs & analog TV networks.
The advantages of coaxial cable include high bandwidth, good noise immunity, low cost, and simple installation. The disadvantage of this cable is that a failure of the cable can disturb the whole network.

 Most versatile medium used in LANs, Cable TV, VCR-to-TV connections

 Noise immunity is better than twisted pair

 Less susceptible to interference and cross talk but there still is attenuation and thermal noise
problem
 Can go up to 185m (10Base2) or 500m (10Base5) without the need for an amplifier/repeater

C) Optical Fiber Cable


This cable uses the concept of light reflection through a core made of glass or plastic. The core is enclosed in a less dense layer of glass or plastic known as the cladding. This cable is used for large-volume data transmission.
The main advantages of this cable include light weight, increased capacity and bandwidth, and lower signal attenuation. The disadvantages are high cost, fragility, difficult installation and maintenance, and unidirectional transmission per fiber.

Two types of fiber optic cables


Multimode Fiber optic cable

 Fiber optic cable where the light signal travels dispersed through the core
 Core is usually 50-62.5 µm in diameter
 Maximum distance a signal travels without a repeater is 500m

Single Mode fiber

 Fiber optic cable where the light signal travels in a single mode through the core
 Core is usually less than 10 µm in diameter
 Maximum distance a signal travels without a repeater is 10km (with the appropriate modulation, up to 100km)

Unguided (wireless transmission)

In unguided media, transmission and reception are achieved by means of an antenna. It is also known as unbounded or wireless transmission media. It doesn't require any physical medium to transmit electromagnetic signals. The main features of this media are that it is less secure, the signal is transmitted through the air, and it is applicable over large distances. There are three types of unguided media, which are discussed below.

A). Radio waves


These waves are very easy to produce and can penetrate through buildings. The transmitting and receiving antennas do not need to be aligned. The frequency range of these waves is 3 kHz to 1 GHz. These waves are used in AM and FM radio for transmission. They are classified into two types, namely terrestrial and satellite.

1. Terrestrial Microwave

 Typically used where laying a cable is not practical


 Parabolic dish shaped antenna for directional and bar-like antenna for omnidirectional
transmission
 Transmits/receives electromagnetic waves in the 2-40 GHz range
 Travels in a straight line (line-of-sight propagation)
 High data rates: hundreds of Mbps
 Repeaters spaced 10-100 km apart
 Applications: telephone and data transmission, wireless LANs

2. Satellite Microwave

B) Microwaves
Microwave transmission is a line-of-sight transmission, which means the transmitting and receiving antennas need to be aligned correctly with each other. The distance covered by the signal is directly proportional to the height of the antennas. The frequency range of microwaves is 1 GHz to 300 GHz. They are extensively used in TV distribution and mobile phone communication.

C) Infrared Waves
Infrared (IR) waves are used for extremely short-distance communication, as they cannot go through obstacles; this also prevents interference between neighboring systems. The frequency range of these waves is 300GHz to 400THz. These waves are used in TV remotes, keyboards, wireless mice, printers, etc.

For short-range communication

 Remote controls for TVs, VCRs, and stereos


 Indoor wireless LANs

2.7. Review Questions


 How does data communication work?
 List and explain the guided transmission media.
 What does transmission impairment mean?
 How are analog signals transmitted?
 By what means are signals transmitted without a guided medium?
 How do satellites provide data to the ground station?

UNIT - III

3.1. Network Line Configuration


Line configuration refers to the way two or more communication devices are attached to a link. Line configuration is also referred to as connection. A link is the physical communication pathway that transfers data from one device to another. For communication to occur, two devices must be connected in some way to the same link at the same time.
There are two possible line configurations.

1. Point-to-Point.
2. Multipoint.

Point-to-Point
A point-to-point line configuration provides a dedicated link between two devices. The entire capacity of the channel is reserved for transmission between those two devices. Most point-to-point line configurations use an actual length of wire or cable to connect the two ends, but other options, such as microwave or satellite links, are also possible; an infrared remote control and a TV form a simple example.
Point-to-point network topology is considered to be one of the easiest and most conventional network topologies. It is also the simplest to establish and understand. To visualize it, one can think of a point-to-point network as two phones connected end to end for two-way communication.

Multipoint Configuration
A multipoint configuration, also known as a multidrop line configuration, is one in which more than two specific devices share a single link, so the capacity of the channel is shared. With shared capacity, there can be two possibilities in a multipoint line configuration:

 Spatial sharing: if several devices can share the link simultaneously, it is called a spatially shared line configuration.
 Temporal (time) sharing: if users must take turns using the link, it is called a temporally shared or time-shared line configuration.

Activity 3.1

 Discuss the network devices.


 What are the central devices among the network devices?
 How do you differentiate between a switch and a hub?
 Discuss in detail the functions of a router.

3.2. Network Devices


Network devices, or networking hardware, are physical devices that are required for communication and interaction between hardware on a computer network.
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over the same
network before the signal becomes too weak or corrupted so as to extend the length to which the signal
can be transmitted over the same network. An important point to be noted
about repeaters is that they do not amplify the signal. When the signal becomes weak, they copy the
signal bit by bit and regenerate it at the original strength. It is a 2 port device.
2. Hub – Hubs connect multiple computer networking devices together. A hub also acts as a repeater in
that it amplifies signals that deteriorate after traveling long distances over connecting cables. A hub is the
simplest in the family of network connecting devices because it connects LAN components with identical
protocols.
A hub can be used with both digital and analog data, provided its settings have been configured to prepare
for the formatting of the incoming data. For example, if the incoming data is in digital format, the hub
must pass it on as packets; however, if the incoming data is analog, then the hub passes it on in signal
form. Hubs do not perform packet filtering or addressing functions; they just send data packets to all
connected devices. Hubs operate at the Physical layer of the Open Systems Interconnection (OSI) model.
There are two types of hubs: simple and multiple port.
A hub is basically a multiport repeater. A hub connects multiple wires coming from different branches,
for example, the connector in star topology which connects different stations. Hubs cannot filter data, so
data packets are sent to all connected devices. In other words, the collision domain of all hosts connected through a hub remains one. Also, hubs do not have the intelligence to find out the best path for data packets, which leads to inefficiencies and wastage.

 Active Hub: These are hubs that have their own power supply and can clean, boost, and relay the signal along the network. An active hub serves both as a repeater and as a wiring center. These are used to extend the maximum distance between nodes.
 Passive Hub: These are hubs that collect wiring from nodes and power supply from the active hub. These hubs relay signals onto the network without cleaning and boosting them and cannot be used to extend the distance between nodes.
 Intelligent Hub: It works like an active hub and includes remote management capabilities. It also provides flexible data rates to network devices and enables an administrator to monitor the traffic passing through the hub and to configure each port in the hub.

3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with add on the functionality of
filtering content by reading the MAC addresses of source and destination.
It is also used for interconnecting two LANs working on the same protocol. It has a single input and
single output port, thus making it a 2 port device.

Types of Bridges

 Transparent Bridges: These are bridges in which the stations are completely unaware of the bridge's existence, i.e., whether or not a bridge is added to or deleted from the network, reconfiguration of the stations is unnecessary. These bridges make use of two processes: bridge forwarding and bridge learning.
 Source Routing Bridges:- In these bridges, routing operation is performed by the source station
and the frame specifies which route to follow. The host can discover the frame by sending a
special frame called the discovery frame, which spreads through the entire network using all
possible paths to the destination.

4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its efficiency (a large
number of ports imply less traffic) and performance. A switch is a data link layer device. The switch can
perform error checking before forwarding data, which makes it very efficient as it does not forward
packets that have errors and forward good packets selectively to the correct port only. In other words, the
switch divides the collision domain of hosts, but broadcast domain remains the same. Switches generally
have a more intelligent role than hubs. A switch is a multiport device that improves network efficiency.
The switch maintains limited routing information about nodes in the internal network, and it allows
connections to systems like hubs or routers. Strands of LANs are usually connected using switches.
Generally, switches can read the hardware addresses of incoming packets to transmit them to the
appropriate destination.

Using switches improves network efficiency over hubs or routers because of the virtual circuit capability.
Switches also improve network security because the virtual circuits are more difficult to examine with
network monitors. You can think of a switch as a device that has some of the best capabilities of routers
and hubs combined. A switch can work at either the Data Link layer
or the Network layer of the OSI model. A multilayer switch is one that can operate at both layers, which means that it can operate as both a switch and a router. A multilayer switch is a high-performance device that supports the same routing protocols as routers.
Switches can be subject to distributed denial of service (DDoS) attacks; flood guards are used to prevent
malicious traffic from bringing the switch to a halt. Switch port security is important so be sure to secure
switches: Disable all unused ports and use DHCP snooping, ARP inspection and MAC address filtering.
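The selective forwarding behaviour described above can be sketched as a learning table keyed by MAC address. Real switches do this in hardware; the addresses and port numbers below are invented for the example:

```python
# A toy model of how a switch learns MAC addresses and forwards frames selectively.
# MAC addresses and port numbers are made up for illustration.

mac_table = {}   # learned mapping: MAC address -> switch port

def handle_frame(src_mac, dst_mac, in_port, num_ports=4):
    # Learn: remember which port the source address was seen on.
    mac_table[src_mac] = in_port

    # Forward: if the destination is known, send out that port only;
    # otherwise flood out every port except the one the frame arrived on.
    if dst_mac in mac_table:
        print(f"{src_mac} -> {dst_mac}: forward out port {mac_table[dst_mac]}")
    else:
        others = [p for p in range(num_ports) if p != in_port]
        print(f"{src_mac} -> {dst_mac}: unknown destination, flood ports {others}")

handle_frame("AA:AA:AA:AA:AA:01", "AA:AA:AA:AA:AA:02", in_port=0)  # flooded
handle_frame("AA:AA:AA:AA:AA:02", "AA:AA:AA:AA:AA:01", in_port=2)  # forwarded to port 0
```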

5. Routers –
Routers help transmit packets to their destinations by charting a path through the sea of interconnected
networking devices using different network topologies. Routers are intelligent devices, and they store
information about the networks they‘re connected to. Most routers can be
configured to operate as packet-filtering firewalls and use access control lists (ACLs). Routers, in

conjunction with a channel service unit/data service unit (CSU/DSU), are also used to translate from LAN
framing to WAN framing. This is needed because LANs and WANs use different network protocols.
Such routers are known as border routers. They serve as the outside connection of a LAN to a WAN, and
they operate at the border of your network.
Router is also used to divide internal networks into two or more sub networks. Routers can also be
connected internally to other routers, creating zones that operate independently. Routers establish
communication by maintaining tables about destinations and local connections. A router contains
information about the systems connected to it and where to send requests if the destination is not known.
Routers usually communicate routing and other information using one of three standard protocols:
Routing Information Protocol (RIP), Border Gateway Protocol (BGP) or Open Shortest Path First
(OSPF).
Routers are your first line of defense, and they must be configured to pass only traffic that is authorized
by network administrators. The routes themselves can be configured as static or dynamic. If they are
static, they can only be configured manually and stay that way until changed. If they are dynamic, they
learn of other routers around them and use information about those routers to build their routing tables.

Routers are general-purpose devices that interconnect two or more heterogeneous networks. They are
usually dedicated, special-purpose computers, with separate input and output network interfaces for
each connected network. Because routers and gateways are the backbone of large computer networks like
the internet, they have special features that give them the flexibility and the ability to cope with varying
network addressing schemes and frame sizes through segmentation of big packets into smaller sizes that
fit the new network components. Each router interface has its own Address Resolution Protocol (ARP)
module, its own LAN address (network card address) and its own Internet Protocol (IP) address. The
router, with the help of a routing table, has knowledge of routes a packet could take from its source to its
destination. The routing table, like in the bridge and switch, grows dynamically. Upon receipt of a packet,
the router removes the packet headers and trailers and analyzes the IP header by determining the source
and destination addresses and data type, and noting the arrival time. It also updates the router table with
new addresses not already in the table. The IP header and arrival time information is entered in the
routing table. Routers normally work at the Network layer of the OSI model.
A router is a device like a switch that routes data packets based on their IP addresses. The router is mainly
a Network Layer device. Routers normally connect LANs and WANs together and have a dynamically
updating routing table based on which they make decisions on routing the data packets. Router divide
broadcast domains of hosts connected through it.
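The routing-table lookup described above can be sketched with Python's standard ipaddress module: the router chooses the most specific (longest) matching prefix for the destination. The prefixes and next hops below are invented for the example, and real routers use far more efficient data structures for this lookup.

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop / interface.
routing_table = {
    "10.0.0.0/8":  "next hop 192.168.1.1",
    "10.1.0.0/16": "next hop 192.168.1.2",
    "0.0.0.0/0":   "default gateway 192.168.1.254",
}

def route(destination_ip: str) -> str:
    dest = ipaddress.ip_address(destination_ip)
    # Longest-prefix match: among matching entries, pick the most specific one.
    matches = [ipaddress.ip_network(p) for p in routing_table
               if dest in ipaddress.ip_network(p)]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[str(best)]

print(route("10.1.2.3"))    # matched by 10.1.0.0/16 (more specific than /8)
print(route("10.9.9.9"))    # matched by 10.0.0.0/8
print(route("8.8.8.8"))     # falls through to the default route
```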

6. Gateway: Gateways normally work at the Transport and Session layers of the OSI model. At the
Transport layer and above, there are numerous protocols and standards from different vendors; gateways
are used to deal with them. Gateways provide translation between networking technologies such as Open
System Interconnection (OSI) and Transmission Control Protocol/Internet Protocol (TCP/IP). Because of

this, gateways connect two or more autonomous networks, each with its own routing algorithms,
protocols, topology, domain name service, and network administration procedures and policies.

Gateways perform all of the functions of routers and more. In fact, a router with added translation
functionality is a gateway. The function that does the translation between different network technologies
is called a protocol converter.

7. Modem
Modems (modulators-demodulators) are used to transmit digital signals over analog telephone lines.
Thus, digital signals are converted by the modem into analog signals of different frequencies and
transmitted to a modem at the receiving location. The receiving modem performs the reverse
transformation and provides a digital output to a device connected to a
modem, usually a computer. The digital data is usually transferred to or from the modem over a serial line
through an industry standard interface, RS-232. Many telephone companies offer DSL services, and many
cable operators use modems as end terminals for identification and recognition of home and personal
users. Modems work on both the Physical and Data Link layers.

8. Access Point
While an access point (AP) can technically involve either a wired or wireless connection, it commonly
means a wireless device. An AP works at the second OSI layer, the Data Link layer, and it can operate
either as a bridge connecting a standard wired network to wireless devices or as a router passing data
transmissions from one access point to another.

Wireless access points (WAPs) consist of a transmitter and receiver (transceiver) device used to create a
wireless LAN (WLAN). Access points typically are separate network devices with a built-in antenna,
transmitter and adapter. APs use the wireless infrastructure network mode to
provide a connection point between WLANs and a wired Ethernet LAN. They also have several ports,
giving you a way to expand the network to support additional clients. Depending on the size of the
network, one or more APs might be required to provide full coverage. Additional APs are used to allow
access to more wireless clients and to expand the range of the wireless network.
Each AP is limited by its transmission range — the distance a client can be from an AP and still obtain a
usable signal and data process speed. The actual distance depends on the wireless standard, the
obstructions and environmental conditions between the client and the AP. Higher
end APs have high-powered antennas, enabling them to extend how far the wireless signal can travel.
APs might also provide many ports that can be used to increase the network‘s size, firewall capabilities
and Dynamic Host Configuration Protocol (DHCP) service. Therefore, we get APs that are a switch,
DHCP server, router and firewall.
9. NIC
NIC or network interface card is a network adapter that is used to connect the computer to the network. It
is installed in the computer to establish a LAN. It has a unique id that is written on the chip, and it has a
connector to connect the cable to it. The cable acts as an interface between the computer and router or
modem. The NIC card is commonly described as a layer 2 device, and it works on both the physical and data link layers of the network model.

Activity 3.2

 Discuss connectionless and connection-oriented networks.
 Discuss in detail logical and physical topologies.
 Compare and contrast the four main physical topologies.

3.3. Network Topologies


Network topology refers to how various nodes, devices, and connections on your network are physically
or logically arranged in relation to each other. Think of your network as a city, and the topology as the
road map. Just as there are many ways to arrange and maintain a city—such as making sure the avenues
and boulevards can facilitate passage between the parts of town getting the most traffic—there are several
ways to arrange a network. Each has advantages and disadvantages and depending on the needs of your
company, certain arrangements can give you a greater degree of connectivity and security.
There are two approaches to network topology: physical and logical. Physical network topology, as the
name suggests, refers to the physical connections and interconnections between nodes and the network—
the wires, cables, and so forth. Logical network topology is a little more abstract and strategic, referring to
the conceptual understanding of how and why the network is arranged the way it is, and how data moves
through it.
The way a network is arranged can make or break network functionality, connectivity, and protection from downtime. The question "What is network topology?" can be answered with an explanation of the two categories of network topology.

1. Physical – The physical network topology refers to the actual connections (wires, cables, etc.) of how
the network is arranged. Setup, maintenance, and provisioning tasks require insight into the physical
network.
2. Logical – The logical network topology is a higher-level idea of how the network is set up, including
which nodes connect to each other and in which ways, as well as how data is transmitted through the
network. Logical network topology includes any virtual and cloud resources.
Effective network management and monitoring require a strong grasp of both the physical and logical
topology of a network to ensure your network is efficient and healthy.

What is Star Topology?

A star topology, the most common network topology, is laid out so every node in the network is directly
connected to one central hub via coaxial, twisted-pair, or fiber-optic cable. Acting as a server, this central
node manages data transmission—as information sent from any node on the
network has to pass through the central one to reach its destination—and functions as a repeater, which
helps prevent data loss.

Advantages of Star Topology

Star topologies are common since they allow you to conveniently manage your entire network from a single location. Because each of the nodes is independently connected to the central hub, should one go down, the rest of the network will continue functioning unaffected, making the star topology a stable and secure network layout.
Additionally, devices can be added, removed, and modified without taking the entire network offline.

Disadvantages of Star Topology

On the flip side, if the central hub goes down, the rest of the network cannot function. But if the central hub is properly managed and kept in good health, administrators shouldn't have too many issues.
The overall bandwidth and performance of the network are also limited by the central node's configuration and technical specifications, making star topologies expensive to set up and operate.

What is Bus Topology?

A bus topology orients all the devices on a network along a single cable running in a single direction from one end of the network to the other, which is why it is sometimes called a "line topology" or "backbone topology." Data flow on the network also follows the route of the cable, moving in one direction.

Advantages of Bus Topology

Bus topologies are a good, cost-effective choice for smaller networks because the layout is simple,
allowing all devices to be connected via a single coaxial or RJ45 cable. If needed, more nodes can be
easily added to the network by joining additional cables.

Disadvantages of Bus Topology

However, because bus topologies use a single cable to transmit data, they are somewhat vulnerable. If the cable experiences a failure, the whole network goes down; this can be time-consuming and expensive to restore, though it is less of an issue with smaller networks.
Bus topologies are best suited for small networks because there is only so much bandwidth, and every additional node will slow transmission speeds.

What is Ring Topology? Ring topology is where nodes are arranged in a circle (or ring). The data can
travel through the ring network in either one direction or both directions, with each device having exactly
two neighbors.

Pros of Ring Topology

Since each device is only connected to the ones on either side, when data is transmitted, the packets also
travel along the circle, moving through each of the intermediate nodes until they arrive at their
destination. If a large network is arranged in a ring topology, repeaters can be used to ensure packets
arrive correctly and without data loss.
Only one station on the network is permitted to send data at a time, which greatly reduces the risk of
packet collisions, making ring topologies efficient at transmitting data without errors.
By and large, ring topologies are cost-effective and inexpensive to install, and the intricate point-to-point connectivity of the nodes makes it relatively easy to identify issues or misconfigurations on the network.

Cons of Ring Topology

Even though it is popular, a ring topology is still vulnerable to failure without proper network management. Since the flow of data transmission moves unidirectionally between nodes along each ring, if one node goes down, it can take the entire network with it. That is why it is imperative for each of the nodes to be monitored and kept in good health. Nevertheless, even if you are vigilant and attentive to node performance, your network can still be taken down by a transmission line failure.
Additionally, the entire network must be taken offline to reconfigure, add, or remove nodes. And while that is not the end of the world, scheduling downtime for the network can be inconvenient and costly.

What is Tree Topology? The tree topology structure gets its name from how the central node functions
as a sort of trunk for the network, with nodes extending outward in a branch-like fashion. However, where
each node in a star topology is directly connected to the central hub, a tree topology has a parent-child
hierarchy to how the nodes are connected. Those connected to the central hub are connected linearly to
other nodes, so two connected nodes only share one mutual connection. Because the tree topology structure is both extremely flexible and scalable, it is often used for wide area networks to support many spread-out devices.

Pros of Tree Topology

Combining elements of the star and bus topologies allows for the easy addition of nodes and network
expansion. Troubleshooting errors on the network is also a straightforward process, as each of the
branches can be individually assessed for performance issues.

Cons of Tree Topology

As with the star topology, the entire network depends on the health of the root node in a tree topology
structure. Should the central hub fail, the various node branches will become disconnected, though
connectivity within—but not between—branch systems will remain.
Because of the hierarchical complexity and linear structure of the network layout, adding more nodes to a
tree topology can quickly make proper management an unwieldy, not to mention costly, experience. Tree
topologies are expensive because of the sheer amount of cabling required to connect each device to the
next within the hierarchical layout.

What is Mesh Topology?

A mesh topology is an intricate and elaborate structure of point-to-point connections where the nodes are interconnected. Mesh networks can be full or partial mesh. Partial mesh topologies are mostly interconnected, with a few nodes having only two or three connections, while full-mesh topologies are fully interconnected.

The web-like structure of mesh topologies offers two different methods of data transmission: routing and
flooding. When data is routed, the nodes use logic to determine the shortest distance from the source to
destination, and when data is flooded, the information is sent to all nodes within the network without the
need for routing logic.

Advantages of Mesh Topology

Mesh topologies are reliable and stable, and the complex degree of interconnectivity between nodes
makes the network resistant to failure. For instance, no single device going down can bring the network
offline.

Disadvantages of Mesh Topology

Mesh topologies are incredibly labor-intensive. Each interconnection between nodes requires a cable and
configuration once deployed, so it can also be time-consuming to set up. As with other topology

structures, the cost of cabling adds up fast, and to say mesh networks require a lot of cabling is an
understatement.

What is Hybrid Topology?

Hybrid topologies combine two or more different topology structures—the tree topology is a good
example, integrating the bus and star layouts. Hybrid structures are most commonly found in larger
companies where individual departments have personalized network topologies adapted to suit their needs
and network usage.

Advantages of Hybrid Topology

The main advantage of hybrid structures is the degree of flexibility they provide, as there are few
limitations on the network structure itself that a hybrid setup can‘t accommodate.

Disadvantages of Hybrid Topology

However, each type of network topology comes with its own disadvantages, and as a network grows in
complexity, so too does the experience and know-how required on the part of the admins to keep
everything functioning optimally. There‘s also the monetary cost to consider when creating a hybrid
network topology.

3.4. Connection-Oriented and Connectionless Services


Connection-Oriented Services

In a connection-oriented service, each packet is related to a source/destination connection. These packets


are routed along a similar path, known as a virtual circuit. Thus, it provides an end-to-end connection to
the client for reliable data transfer. It delivers information in order without
duplication or missing information. It does not congest the communication channel and the buffer of the
receiving device. The host machine requests a connection to interact and closes the connection after the
transmission of the data. Mobile communication is an example of a connection-oriented service. A
connection-oriented service is one that establishes a dedicated connection between the communicating
entities before data communication commences. It is modeled after the telephone system. To use a
connection-oriented service, the user first establishes a connection, uses it and then releases it. In

connection-oriented services, the data streams/packets are delivered to the receiver in the same order in
which they have been sent by the sender.

Connection-oriented services may be of the following types –

 Reliable Message Stream: e.g. sequence of pages


 Reliable Byte Stream: e.g. song download
 Unreliable Connection: e.g. VoIP (Voice over Internet Protocol)

Advantages of Connection-Oriented Services

 This is mostly a reliable connection.


 Congestions are less frequent.
 Sequencing of data packets is guaranteed.
 Problems related to duplicate data packets are alleviated.
 Suitable for long connections.

Disadvantages of Connection-Oriented Services

 Resource allocation is needed before communication. This often leads to under-utilized network
resources.
 The connection is slower because of the time taken to establish and release it.
 In the case of router failures or network congestions, there are no alternative ways to continue
communication.

Connectionless-Services

In connectionless service, a router treats each packet individually. The packets are routed through
different paths through the network according to the decisions made by routers. The network or
communication channel does not guarantee data delivery from the host machine to the destination
machine in connectionless service.

The data to be transmitted is broken into packets. These independent packets are called datagrams, in
analogy with telegrams.
The packets contain the address of the destination machine. Connectionless service is equivalent to the
postal system. In the postal system, a letter is put in an envelope that contains the address of the
destination. It is then placed in a letterbox.
The letter is finally delivered to the destination through the postal network. However, it is not guaranteed
to appear in the addressee's letterbox.

UNIT - IV

4.1. Network Protocol


A network protocol is a set of established rules that dictate how to format, transmit and receive data so
that computer network devices (from servers and routers to endpoints) can communicate, regardless of
the differences in their underlying infrastructures, designs or standards.
To successfully send and receive information, devices on both sides of a communication exchange must
accept and follow protocol conventions. In networking, support for protocols can be built into software,
hardware or both.
Without computing protocols, computers and other devices would not know how to engage with each
other. As a result, except for specialty networks built around a specific architecture, few networks would
be able to function, and the internet as we know it wouldn‘t exist. Virtually all network end users rely on
network protocols for connectivity.
Network protocols break larger processes into discrete, narrowly defined functions and tasks across every
level of the network. In the standard model, known as the Open Systems Interconnection (OSI) model,
one or more network protocols govern activities at each layer in the telecommunication exchange. Lower
layers deal with data transport, while the upper layers in
the OSI model deal with software and applications.
A set of cooperating network protocols is called a protocol suite. The Transmission Control
Protocol/Internet Protocol (TCP/IP) suite, which is typically used in client-server models, includes
numerous protocols across layers — such as the data link, network, transport and application
layers — working together to enable internet connectivity. These include the following:

 TCP, which uses a set of rules to exchange messages with other internet points at the information
packet level;
 User Datagram Protocol, or UDP, which acts as an alternative communication protocol to TCP and is
used to establish low-latency and loss-tolerating connections between applications and the internet;
 IP, which uses a set of rules to send and receive messages at the level of IP addresses; and
 Additional network protocols, including Hypertext Transfer Protocol (HTTP) and File Transfer
Protocol (FTP), each of which has defined sets of rules to exchange and display information.

Every packet transmitted and received over a network contains binary data. Most computing protocols
will add a header at the beginning of each packet in order to store information about the sender and the
message's intended destination. Some protocols may also include a footer at the end with additional
information. Network protocols process these headers and footers as part of the data moving among
devices in order to identify messages of their own kind.
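
To make the header/payload/footer idea concrete, here is a hedged Java sketch of a made-up, purely
illustrative "packet" format (the ToyPacket name, the field order, and the one-byte checksum are all
assumptions, not any real protocol): a header naming the sender and destination, then the payload, then a
footer carrying a simple checksum.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// A toy "protocol" for illustration only: header (sender, destination),
// then the payload, then a one-byte checksum footer.
public class ToyPacket {
    static byte[] build(String sender, String destination, String payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // Header: fixed-order fields separated by '|' (a purely hypothetical format).
        out.write((sender + "|" + destination + "|").getBytes(StandardCharsets.UTF_8));
        // Payload: the actual message body.
        byte[] body = payload.getBytes(StandardCharsets.UTF_8);
        out.write(body);
        // Footer: a very simple checksum (sum of payload bytes modulo 256).
        int sum = 0;
        for (byte b : body) sum = (sum + (b & 0xFF)) % 256;
        out.write(sum);
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] packet = build("hostA", "hostB", "hello");
        System.out.println("Packet length in bytes: " + packet.length);
    }
}

A real protocol defines its header fields bit by bit, but the principle is the same: the receiver reads the
header and footer to decide where the message came from, where it goes, and whether it arrived intact.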

Activity 4.1

 How do network protocols work?


 Discuss the logical and physical addresses of a computer network.
 Discuss the TCP/IP and UDP protocols.

4.2. TCP/IP Protocol Suite
TCP/IP is a suite of protocols that can be used to connect dissimilar brands of computers and network
devices. The largest TCP/IP network is the Internet. The Internet was developed by the U.S. DOD under
the auspices of the Defense Advanced Research Project Agency (DARPA) when DOD scientists were
faced with the problem of linking thousands of computers running
different operating systems. The Defense Advanced Research Project Agency (DARPA) is a small
organization within the Pentagon, but its impact on technology in general and on data communications in
particular has been huge. For all practical purposes, DARPA‘s programs and funding created the Internet.
You can think of the TCP/IP suite as the lifeblood of the Internet. The TCP/IP suite has become widely
adopted, because it is an open protocol standard that can be implemented on any platform regardless of
the manufacturer. In addition, it is independent of any physical network hardware. TCP/IP can be
implemented on Ethernet, X.25, and token ring, among other platforms.
Although there are different interpretations on how to describe TCP/IP within a layered model, it is
generally described as being composed of fewer layers than the seven used in the OSI model. The TCP/IP
protocol suite generally follows a four-layer architecture.

The IP portion of TCP/IP is the connectionless network layer protocol. It is sometimes called an
"unreliable" protocol, meaning that IP does not establish an end-to-end connection before transmitting
datagrams and that it contains no error detection and recovery code. The datagram is the packet format
defined by IP. IP operates across the network and data link layers of the OSI model and relies on the TCP
protocol to ensure that the data reaches its destination correctly.
The heart of the IP portion of TCP/IP is a concept called the Internet address. This is a 32-bit number
assigned to every node on the network. IP addresses are written in a dotted decimal format that
corresponds to the 32-bit binary address. Each octet is assigned a number between 0 and 255. An example
of an IP address in dotted decimal format is 12.31.80.1. This IP address
translated into a 32-bit binary number is 00001100 00011111 01010000 00000001.

An IP address is divided into two parts, a network ID and a host ID, but the format of these parts depends
on the class of the address. There are three main address classes: class A, class B, and class C. The
formats differ in the number of bits allocated to the network ID and host ID and are distinguished by the
first three bits of the 32-bit address.
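
To make the dotted-decimal notation and the classful rules concrete, the hedged Java sketch below
(IpAddressDemo is an illustrative name, not from the text) converts 12.31.80.1 into its 32-bit binary form
and infers the address class from the first octet.

public class IpAddressDemo {
    // Convert a dotted-decimal IPv4 address into its 32-bit binary string.
    static String toBinary(String dotted) {
        StringBuilder bits = new StringBuilder();
        for (String octet : dotted.split("\\.")) {
            int value = Integer.parseInt(octet);   // each octet is a number between 0 and 255
            bits.append(String.format("%8s", Integer.toBinaryString(value)).replace(' ', '0'));
        }
        return bits.toString();
    }

    // Classful addressing: the class is determined by the leading bits of the first octet.
    static char classOf(String dotted) {
        int first = Integer.parseInt(dotted.split("\\.")[0]);
        if (first < 128) return 'A';   // leading bit 0
        if (first < 192) return 'B';   // leading bits 10
        if (first < 224) return 'C';   // leading bits 110
        return 'D';                    // multicast and experimental ranges beyond class C
    }

    public static void main(String[] args) {
        String ip = "12.31.80.1";
        System.out.println(ip + " in binary: " + toBinary(ip));   // 00001100000111110101000000000001
        System.out.println("Address class: " + classOf(ip));      // Class A, since the first octet is below 128
    }
}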
The TCP portion of TCP/IP comes into operation once a packet is delivered to the correct Internet
address. In contrast to IP, which is a connectionless protocol, TCP is connection oriented. It establishes a
logical end-to-end connection between two communicating nodes or devices. TCP operates at the
transport layer of the OSI model and provides a virtual circuit service between end-user applications, with
reliable data transfer, which is lacking in the datagram-oriented IP.

Software packages that follow the TCP standard run on each machine, establish a connection to each
other, and manage the communications exchanges. TCP provides the flow control, error detection, and
sequencing of the data; looks for responses; and takes the appropriate action to replace missing data
blocks.

The end-to-end connection is established through the exchange of control information. This exchange of
information is called a three-way handshake. This handshake is necessary to establish the logical
connection and to allow the transmission of data to begin.
In its simplest form, host A would transmit to host B a segment with the synchronize (SYN) bit set.
This tells host B that host A wishes to establish a connection and informs host B of the starting sequence
number for host A. Host B sends back to host A an acknowledgment and confirms its starting sequence
number. Host A acknowledges receipt of host B‘s transmission and begins the transfer of data. Later, in
this tutorial, I will explain how this three-way handshake can be
exploited to disrupt the operation of a system.
Another important TCP/IP protocol is the user datagram protocol (UDP). Like TCP, UDP operates at the
transport layer. The major difference between TCP and UDP is that UDP is a connectionless datagram
protocol. UDP gives applications direct access to a datagram delivery service, like the service IP provides.
This allows applications to exchange data with a minimum
of protocol overhead. Figure below illustrates the hierarchical relationship between IP and TCP/UDP and
the applications that rely upon the protocols.

The UDP protocol is best suited for applications that transmit small amounts of data, where the process of
creating connections and ensuring delivery may be greater than the work of simply retransmitting the
data. Another situation where UDP would be appropriate is when an application provides its own method
of error checking and ensuring delivery.
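
As a rough sketch of how little setup a UDP exchange needs (the destination host example.com and port
9999 below are placeholders, not values from the text), the Java fragment below sends one datagram
without establishing any connection first:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSendDemo {
    public static void main(String[] args) throws Exception {
        byte[] payload = "status ping".getBytes(StandardCharsets.UTF_8);
        InetAddress target = InetAddress.getByName("example.com");   // placeholder destination
        int port = 9999;                                             // placeholder port

        try (DatagramSocket socket = new DatagramSocket()) {
            // No connection setup: the datagram is simply addressed and sent.
            DatagramPacket packet = new DatagramPacket(payload, payload.length, target, port);
            socket.send(packet);
            // Delivery is not guaranteed; an application needing reliability must add its own
            // acknowledgements and retransmissions, or use TCP instead.
        }
    }
}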

4.3. Four Layers of TCP/IP Model


In this TCP/IP tutorial, we will explain the different layers and their functionalities in the TCP/IP model:

The functionality of the TCP IP model is divided into four layers, and each includes specific protocols.
TCP/IP is a layered architecture in which each layer is defined according to a specific
function to perform. All four TCP/IP layers work collaboratively to transmit the data from one layer
to another.

 Application Layer
 Transport Layer
 Internet Layer
 Network Interface

Application Layer

The application layer interacts with application programs and is the highest level of the OSI model.
The application layer is the OSI layer closest to the end-user, which means the application layer
allows users to interact with other software applications.
Application layer interacts with software applications to implement a communicating component. The
interpretation of data by the application program is always outside the scope of the OSI model.
Examples of application-layer services include file transfer, email, remote login, etc.

The functions of the Application Layers are:

 The application layer helps you to identify communication partners, determine resource availability,
and synchronize communication.
 It allows users to log on to a remote host
 This layer provides various e-mail services
 This application offers distributed database sources and access for global information about
various objects and services.

Transport Layer

Transport layer builds on the network layer in order to provide data transport from a process on a source
system machine to a process on a destination system. It is hosted using single or multiple networks, and
also maintains the quality of service functions.
It determines how much data should be sent where and at what rate. This layer builds on the message
which is received from the application layer. It helps ensure that data units are delivered error-free and in
sequence.
Transport layer helps you to control the reliability of a link through flow control, error control, and
segmentation or de-segmentation.
The transport layer also offers an acknowledgment of the successful data transmission and sends the next
data in case no errors occurred. TCP is the best-known example of the transport layer.

Important functions of Transport Layers:

 It divides the message received from the session layer into segments and numbers them to make a
sequence.
 Transport layer makes sure that the message is delivered to the correct process on the destination
machine.
 It also makes sure that the entire message arrives without any error else it should be retransmitted.

Internet Layer

The internet layer is the second layer of the TCP/IP model. It is also known as the network
layer. The main work of this layer is to send packets from any network or computer so that they still
reach the destination irrespective of the route they take.

The Internet layer offers the functional and procedural method for transferring variable length data
sequences from one node to another with the help of various networks.
Message delivery at the network layer is not guaranteed to be reliable.
Layer-management protocols that belong to the network layer are:

1. Routing protocols
2. Multicast group management
3. Network-layer address assignment.

The Network Interface Layer

The Network Interface Layer is the lowest layer of the four-layer TCP/IP model. This layer is also called
the network access layer. It helps you to define details of how data should be sent using the network.
It also includes how bits should be signaled by the hardware devices that directly interface with a
network medium, like coaxial, fiber, or twisted-pair cables.
This layer combines the functions of the data link and physical layers defined in the OSI reference model.
It defines how the data should be sent physically through the network and is responsible for the
transmission of the data between two devices on the same network.

Most Common TCP/IP Protocols

Some of the most widely used TCP/IP protocols are:


TCP: Transmission Control Protocol is a core protocol of the Internet protocol suite that breaks a message
into TCP segments and reassembles them at the receiving side.
IP: An Internet Protocol address that is also known as an IP address is a numerical label. It is assigned to
each device that is connected to a computer network which uses the IP for communication. Its routing
function allows internetworking and essentially establishes the Internet. Combining IP with TCP
makes it possible to develop a virtual connection between a
destination and a source.
HTTP: The Hypertext Transfer Protocol is a foundation of the World Wide Web. It is used for
transferring web pages and other such resources from the HTTP server or web server to the web client or
the HTTP client. Whenever you use a web browser like Google Chrome or Firefox, you are using a web
client. The web client uses HTTP to transfer the web pages that you request from remote
servers.
SMTP: SMTP stands for Simple Mail Transfer Protocol. It is the protocol that supports e-mail; it helps
you to send messages to another e-mail address.
SNMP: SNMP stands for Simple Network Management Protocol. It is a framework which is used for
managing the devices on the internet by using the TCP/IP protocol.
DNS: DNS stands for Domain Name System. An IP address is used to uniquely identify a host's
connection to the internet, but users prefer names instead of numeric addresses; DNS translates those
names into IP addresses.
TELNET: TELNET stands for Terminal Network. It establishes the connection between the local and
remote computer. It establishes the connection in such a manner that your local system can act as a
terminal of the remote system.
FTP: FTP stands for File Transfer Protocol. It is a widely used standard protocol for transmitting files
from one machine to another.
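
To tie a few of these protocols together, the hedged Java sketch below first resolves a host name through
DNS and then issues a bare-bones HTTP GET over a TCP socket; example.com, port 80, and the exact
request lines are illustrative assumptions rather than a full-featured client.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class HttpAndDnsDemo {
    public static void main(String[] args) throws Exception {
        String host = "example.com";                       // placeholder host name

        // DNS: translate the human-readable name into an IP address.
        InetAddress address = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + address.getHostAddress());

        // HTTP: open a TCP connection to port 80 and send a minimal GET request.
        try (Socket socket = new Socket(address, 80);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {
            out.print("GET / HTTP/1.1\r\n");
            out.print("Host: " + host + "\r\n");
            out.print("Connection: close\r\n\r\n");
            out.flush();

            // Print just the status line and headers of the server's response.
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                System.out.println(line);
            }
        }
    }
}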

Advantages of the TCP/IP model

Here, are pros/benefits of using the TCP/IP model:

 It helps you to establish or set up a connection between different types of computers.


 It operates independently of the operating system.
 It supports many routing protocols.
 It enables internetworking between organizations.
 The TCP/IP model has a highly scalable client-server architecture.

Disadvantages of the TCP/IP model

Here, are few drawbacks of using the TCP/IP model:

 TCP/IP is a complicated model to set up and manage.


 The overhead of TCP/IP is higher than that of IPX (Internetwork Packet Exchange).
 In this model, the transport layer does not guarantee delivery of packets.
 Replacing protocols in TCP/IP is not easy.
 It does not clearly separate its services, interfaces, and protocols.

Activity 4.2

 Discuss the TCP/IP layers and the OSI layers.


 Why is IP addressing important for computer networks?
 Discuss the common TCP/IP protocols.
 How does data transfer from source to destination?

4.4. Open System Interconnection (OSI) Reference Model


The OSI Model is a logical and conceptual model that defines network communication used by systems
open to interconnection and communication with other systems. The Open System Interconnection (OSI
Model) also defines a logical network and effectively describes computer packet transfer by using various
layers of protocols.

Characteristics of OSI Model

Here are some important characteristics of the OSI model:

 A layer should only be created where the definite levels of abstraction are needed.
 The function of each layer should be selected as per the internationally standardized protocols.
 The number of layers should be large enough that separate functions are not put in the same
layer. At the same time, it should be small enough that the architecture does not become very
complicated.
 In the OSI model, each layer relies on the next lower layer to perform primitive functions. Every
layer should be able to provide services to the next higher layer.
 Changes made in one layer should not require changes in other layers.

Why use the OSI Model?

 Helps you to understand communication over a network


 Troubleshooting is easier by separating functions into different network layers.

 Helps you to understand new technologies as they are developed.
 Allows you to compare primary functional relationships on various network layers.

History of OSI Model

Here are essential landmarks from the history of OSI model:

 In the late 1970s, the ISO conducted a program to develop general standards and methods of
networking.
 In 1973, an Experimental Packet Switched System in the UK identified the requirement for
defining the higher-level protocols.
 In the year 1983, OSI model was initially intended to be a detailed specification of actual
interfaces.
 In 1984, the OSI architecture was formally adopted by ISO as an international standard

4.5. Layers of the OSI Model

The OSI model is a layered architecture in which each layer is defined according to a
specific function to perform. All seven layers work collaboratively to transmit the data
from one layer to another.

 The Upper Layers: These deal with application issues and are mostly implemented only in
software. The highest layer is closest to the end user. Communication between end users begins
at the application layer and is processed down through the stack.
 The Lower Layers: These layers handle activities related to data transport. The physical
layer and data link layer are implemented in both software and hardware.

Upper and lower layers further divide network architecture into seven different layers as below:

 Application
 Presentation
 Session
 Transport
 Network
 Data Link
 Physical

Physical Layer

The physical layer helps you to define the electrical and physical specifications of the data connection.
This level establishes the relationship between a device and a physical transmission medium. The
physical layer is not concerned with protocols or other such higher-layer items.
Examples of hardware in the physical layer are network adapters, Ethernet, repeaters, networking hubs,
etc

Data Link Layer

The data link layer corrects errors which can occur at the physical layer. The layer allows you to define the
protocol to establish and terminate a connection between two connected network devices.
It works with physical (MAC) addresses so that every endpoint on the link can be identified. (Logical
addressing and the routing of packets along the best path from source to destination are functions of the
network layer above it.)
The data link layer is subdivided into two types of sub layers:

1. Media Access Control (MAC) layer – It is responsible for controlling how devices in a network gain
access to the medium and are permitted to transmit data.
2. Logical Link Control (LLC) layer – This layer is responsible for identifying and encapsulating network-layer
protocols and allows you to find errors.

Important Functions of Data link Layer

 Framing, which divides the data received from the network layer into frames.
 Allows you to add a header to the frame to define the physical addresses of the source and the
destination machines.
 The frames also carry the logical addresses of the sender and receiver added at the network layer.

 It is also responsible for the delivery of the entire message from the source process to the destination
process.
 It also offers a system for error control in which it detects and retransmits damaged or lost frames.
 The data link layer also provides a mechanism to transmit data over independent networks.

Transport Layer

The transport layer builds on the network layer to provide data transport from a process on a source
machine to a process on a destination machine. It is hosted using single or multiple networks, and also
maintains the quality of service functions.
It determines how much data should be sent where and at what rate. This layer builds on the messages
which are received from the application layer. It helps ensure that data units are delivered error-free and
in sequence.
Transport layer helps you to control the reliability of a link through flow control, error control, and
segmentation or de-segmentation.
The transport layer also offers an acknowledgment of the successful data transmission and sends the next
data in case no errors occurred. TCP is the best-known example of the transport layer.

Important functions of Transport Layers

 It divides the message received from the session layer into segments and numbers them to make a
sequence.

 Transport layer makes sure that the message is delivered to the correct process on the destination
machine.
 It also makes sure that the entire message arrives without any error else it should be retransmitted.

Network Layer

The network layer provides the functional and procedural means of transferring variable length data
sequences from one node to another connected in "different networks".
Message delivery at the network layer is not guaranteed to be reliable.
Layer-management protocols that belong to the network layer are:
1. Routing protocols
2. Multicast group management
3. Network-layer address assignment.

Session Layer

The session layer controls the dialogues between computers. It helps you to establish, start, and terminate
the connections between the local and remote applications.

Important function of Session Layer

 It establishes, maintains, and ends a session.


 Session layer enables two systems to enter into a dialog
 It also allows a process to add a checkpoint to a stream of data.

Presentation Layer

The presentation layer allows you to define the form in which the data is to be exchanged between the two
communicating entities. It also helps you to handle data compression and data encryption.
This layer transforms data into the form which is accepted by the application. It also formats and encrypts
data which should be sent across the network. This layer is also known as the syntax layer.

The function of Presentation Layers

 Character code translation from ASCII to EBCDIC.


 Data compression: Allows reducing the number of bits that need to be transmitted on the
network.
 Data encryption: Helps you to encrypt data for security purposes — for example, password
encryption.
 It provides a user interface and support for services like email and file transfer.

Application Layer

The application layer interacts with application programs and is the highest level of the OSI model.
The application layer is the OSI layer closest to the end-user, which means the OSI application layer
allows users to interact with other software applications.
Application layer interacts with software applications to implement a communicating component. The
interpretation of data by the application program is always outside the scope of the OSI model.
Examples of application-layer services include file transfer, email, remote login, etc.

The functions of the Application Layers are

 The application layer helps you to identify communication partners, determine resource availability,
and synchronize communication.
 It allows users to log on to a remote host
 This layer provides various e-mail services
 This application offers distributed database sources and access for global information about
various objects and services.

Differences between OSI and TCP/IP Models

Here are some important differences between the OSI and TCP/IP models:

 The OSI model has seven layers, whereas the TCP/IP model has four.
 OSI is a generic, protocol-independent reference model, while TCP/IP was built around the standard
protocols on which the Internet runs.
 OSI clearly separates services, interfaces, and protocols; TCP/IP does not make this separation as clearly.
 In OSI, the model was defined before its protocols; in TCP/IP, the protocols came first and the model
describes them.

4.6. Network Standards and Standardization Bodies
Networking standards define the rules for data communications that are needed for interoperability of
networking technologies and processes. Standards help in creating and maintaining open markets and
allow different vendors to compete on the basis of the quality of their products while being compatible
with existing market products.
During data communication, a number of standards may be used simultaneously at the different layers.

The commonly used standards at each layer are –

 Application layer – HTTP, HTML, POP, H.323, IMAP


 Transport layer – TCP, SPX
 Network layer – IP, IPX
 Data link layer – Ethernet IEEE 802.3, X.25, Frame Relay
 Physical layer – RS-232C (cable), V.92 (modem)

Standards Organizations

Some of the noted standards organizations are

 International Standards Organization (ISO)


 International Telecommunication Union (ITU)
 Institute of Electrical and Electronics Engineers (IEEE)
 American National Standards Institute (ANSI)
 Internet Engineering Task Force (IETF)
 Electronic Industries Association (EIA)

4.7. Review Questions


 What is a network protocol?
 What is the difference between logical and physical addresses?
 What is IP addressing?
 What is the difference between the OSI reference model and the TCP/IP model?
 List and discuss at least 6 protocols under the application layer.
 What does network standardization mean?

UNIT - V

5.1. LAN Technologies


Ethernet: This technology is the most popular among all other LAN technologies on this list.
Covered under the IEEE 802.3 standard, its simplicity, low-cost investment, backward compatibility,
noise resistance, and so on make it a popular choice over others. Ethernet works on both layers –
Layer 1 and Layer 2 of the Open Systems Interconnection (OSI) model. Ethernet technology has
evolved over the years, and today it is distinguished into the following types based on their
speeds.
10 Mbit/s: This is the first iteration of this technology, which was introduced in 1983. It is also
referred to as 10Base5.
Fast Ethernet: Introduced in 1995, this Ethernet type is designed to carry 100 Mbit/s. This type
of Ethernet is covered under IEEE 802.3u standard. 100Base-TX is the most popular type of
Ethernet physical layer of Fast Ethernet. Here 100 refers to the transmission speed of 100
Mbit/s; BASE stands for baseband signaling; and T or F refers to the signal-carrying medium,
which can be a twisted pair cable or a fiber optic cable. The last character, X or 4,
refers to the line code; the X is a placeholder for TX and FX.
Gigabit Ethernet: Designed for carrying 1 gigabit or 1 billion bits per second, this standard was
introduced in 1999. Gigabit Ethernet replaced Fast Ethernet and was developed for meeting the
increasing speed requirements of Voice over IP (VoIP) and multimedia networks. 1000BASE-T
is the most popular version of Gigabit Ethernet. It is defined under IEEE 802.3ab standard.
10 Gigabit Ethernet: One of the most recent Ethernet standards, 10 Gigabit Ethernet is designed
to transfer 10 Gbits/seconds, which makes it faster than Gigabit Ethernet. This standard makes
use of fiber optic cables.
Power Over Ethernet: This standard can transmit electric power and data on the same cable.
Generally known as PoE, this standard is used to connect devices such as Internet Protocol (IP)
cameras and Voice over Internet Protocol (VoIP) phones. It makes use of Category 5 or
higher Ethernet cable. It doesn't require any external AC cord or adapter. Owing to its distinct
advantages PoE has emerged as a popular Ethernet standard over the years and is today used to
connect various types of wireless Ethernet devices.
Token Ring: This technology was developed by IBM and it uses three-byte frames to connect
computers. These three-byte frames are known as tokens, and they travel along servers or
computers forming a logical structure of ring. The token ring network has data transfer rates of 4,
16, and 100 Mbps. These networks were largely used in corporate environments, but today are
getting replaced by Ethernet.
Asynchronous Transfer Mode (ATM): It is a fast communication technique, which is cell-
based. This telecommunication standard is defined by ITU and ANSI. It is used for transferring
various types of signals in the network. One of the key advantages of ATM is that it requires no
separate overlay networks for signal transmission. ATM can connect points in close and farther
geographical locations.
ARCNET: It stands for Attached Resource Computer NETwork, which was used for connecting

microcomputers in the 1980s. It was mainly used for automation tasks in offices. This
technology is nowadays used in industrial controls.
FDDI: This stands for Fiber Distributed Data Interface and is another LAN technology in use
today. It makes use of fiber optic cables and can transmit up to 100 Mbit/s. This LAN
technology can span up to 200 km, and it uses two rings. The first ring is the primary ring
and the second acts as a backup. The primary ring has 100 Mbit/s of
capacity, and the secondary ring can carry another 100 Mbit/s, adding up to 200
Mbit/s.
Now that the role played by each of these LAN technologies is clear, it is important to
use the right type of device for achieving the desired result. Owing to the increasing use of
networking today, you can find LAN connectivity solutions from various manufacturers.
However, they differ in terms of configurations, performance, and prices. It is always
recommended to source these technology devices from trusted manufacturers like VERSITRON.
The company has been providing networking connectivity solutions, since 1958. Fiber optic
network switches, fiber optic media converters, fiber optic multiplexers, and so on are a few
popular products in its inventory. These solutions are employed in thousands of critical
applications across the globe.

5.2. Large Networks and Wide Area Networks

As described above, wide area networks are a form of telecommunication networks that can
connect devices from multiple locations and across the globe. WANs are the largest and most
expansive forms of computer networks available to date.
Wide Area Network, or WAN, is a geographically distributed network composed of local area
networks (LANs) joined into a single large network using services provided by common carriers.
Wide area networks (WANs) are commonly implemented in enterprise networking
environments in which company offices are in different cities, states, or countries or on different
continents.
WAN technologies were previously limited to expensive leased lines such as T1 lines, slow
packet-switching services such as X.25, cheap but low-bandwidth solutions such as modems, and
dial-up Integrated Services Digital Network (ISDN) connections, but this has changed
considerably in recent years. Frame relay services provide high-speed packet-switching services
that offer more bandwidth than X.25, and virtual private networks (VPNs) created using Internet
Protocol (IP) tunneling technologies enable companies to securely connect branch offices by
using the Internet as a backbone service.
Intranets and extranets provide remote and mobile users with access to company resources and
applications and provide connectivity with business partners and resellers. Wireless networking
technologies allow roaming users to access network resources by using cell-based technologies.
Digital Subscriber Line (DSL) services provide T1 speeds at much lower costs than
dedicated T1 circuits. These and other new technologies continue to evolve and proliferate,
allowing enterprise network administrators to implement and administer a highly diverse range
of WAN solutions.

Wide Area Networks disadvantages

 WAN networks are much more expensive than home or corporate intranets.

 WANs that cross international and other territorial boundaries fall under different legal
jurisdictions. Disputes can arise between governments over ownership rights and network
usage restrictions.
 Global WANs require the use of undersea network cables to communicate across
continents. Undersea cables are subject to sabotage and also unintentional breaks from
ships and weather conditions. Compared to underground landlines, undersea cables tend
to take much longer and cost much more to repair.

These networks are often established by service providers that then lease their WAN to
businesses, schools, governments or the public. These customers can use the network to relay
and store data or communicate with other users, no matter their location, as long as they have
access to the established WAN. Access can be granted via different links, such as virtual private
networks (VPNs) or lines, wireless networks, cellular networks or internet access.
For international organizations, WANs allow them to carry out their essential daily functions
without delay. Employees from anywhere can use a business‘s WAN to share data, communicate
with coworkers or simply stay connected to the greater data resource center for that organization.
Certified network professionals help organizations maintain their internal wide area
network, as well as other critical IT infrastructure.
In its simplest form, a wide-area network (WAN) is a collection of local-area networks (LANs)
or other networks that communicate with one another. A WAN is essentially a network of
networks, with the Internet being the world's largest WAN.
Today, there are several types of WANs, built for a variety of use cases that touch virtually every
aspect of modern life.

5.3. Types of WAN Technologies

Packet switching

Packet switching is a method of data transmission in which a message is broken into several
parts, called packets, that are sent independently over whatever route is optimal
for each packet and reassembled at the destination. Each packet contains a piece of the message, called the
payload, and an identifying header that includes destination and reassembly information. Checksums
carried with each packet allow the receiver to detect corruption. When verification fails, a request is
made for the packet to be re-sent.
TCP/IP protocol suite: TCP/IP is a protocol suite of foundational communication protocols
used to interconnect network devices on today‘s Internet and other computer/device networks.
TCP/IP stands for Transmission Control Protocol/Internet Protocol.
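
To make the split-and-reassemble idea behind packet switching concrete, here is a toy Java sketch (not a
real WAN protocol; the Packet record, the chunk size, and the shuffle that simulates out-of-order arrival
are all illustrative assumptions). It breaks a message into numbered packets, delivers them out of order,
and reassembles them by sequence number.

import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class PacketSwitchingDemo {
    // A toy packet: a sequence number in the "header" plus a chunk of the payload.
    record Packet(int sequence, String chunk) {}

    public static void main(String[] args) {
        String message = "Packets may take different routes through the network.";
        int chunkSize = 10;

        // Break the message into fixed-size chunks and tag each with a sequence number.
        List<Packet> packets = new ArrayList<>();
        for (int i = 0, seq = 0; i < message.length(); i += chunkSize, seq++) {
            packets.add(new Packet(seq, message.substring(i, Math.min(i + chunkSize, message.length()))));
        }

        // Simulate packets arriving out of order after taking different routes.
        Collections.shuffle(packets);

        // The receiver uses the sequence numbers in the headers to reassemble the message.
        packets.sort(Comparator.comparingInt(Packet::sequence));
        StringBuilder reassembled = new StringBuilder();
        packets.forEach(p -> reassembled.append(p.chunk()));

        System.out.println(reassembled);   // prints the original message
    }
}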

Router

A router is a networking device typically used to interconnect LANs to form a wide area network
(WAN) and as such is referred to as a WAN device. IP routers use IP addresses to determine
where to forward packets. An IP address is a numeric label assigned to each connected network
device.

Overlay network

An overlay network is a data communications technique in which software is used to create


virtual networks on top of another network, typically a hardware and cabling infrastructure. This
is often done to support applications or security capabilities not available on the underlying
network.

Packet over SONET/SDH (PoS)

Packet over SONET is a communication protocol used primarily for WAN transport. It defines
how point-to-point links communicate when using optical fiber and SONET (Synchronous
Optical Network) or SDH (Synchronous Digital Hierarchy) communication protocols.
Multiprotocol Label Switching (MPLS) is a network routing-optimization technique. It directs
data from one node to the next using short path labels rather than long network addresses, to
avoid time-consuming table lookups.
ATM (Asynchronous Transfer Mode) is a switching technique common in early data networks,
which has been largely superseded by IP-based technologies. ATM uses asynchronous
time-division multiplexing to encode data into small, fixed-size cells. By contrast, today's IP-based
Ethernet technology uses variable packet sizes for data.

5.5. Review Questions

 Discuss LAN technologies.


 Discuss LAN topologies.
 Discuss WAN technologies.

UNIT - VI

6.1. Web Technologies


Web technology is the establishment and use of mechanisms that make it possible for different computers
and devices to communicate and share resources. Web technologies are the infrastructural building blocks
of any effective computer network: a local area network, a metropolitan area network, or a wide area
network such as the Internet.
Communication on a computer network could never be as effective as it is without the plethora of web
technologies in existence.

Activity 6.1

 How do a client and a server communicate with each other?


 Discuss client, standalone, and server computers.
 Discuss web technologies.

What is the use of Web technology?

A variety of Web technologies are vital to the function and success of many businesses.
→These include online appointment scheduling programs, websites and a way for customers to chat with
representatives. Also, Web technology makes it possible for businesses to collect data on their customers
to further customize their services.

How are web technologies developed?

By using Markup Languages:

→Markup languages like HTML, CSS, and XML are part of Web technology.
→These languages tell computers in text how to format, layout and style Web pages and programs.
→Two types of markup languages include procedural markup and descriptive markup. Additional types
of languages include CGI and HTTP.

Programming Languages

Programming languages include Perl, C#, Java and Visual Basic .NET. These languages are used by Web
developers to create websites and applications. Each language has pros and cons, and most developers
know several different types to help them achieve their goals.
HTML: The Foundation of any Web Site. HTML (HyperText Mark-up Language) is the glue that holds
together every web site. Like building a house, you always build a strong foundation first. For any site,
HTML is that foundation. HTML is an open-source language (i.e. not owned by anyone), which is easy to
learn, and requires no fancy (or expensive!) packages to start using it.
All you need is something to type with, such as Windows Notepad, and a lot of time and patience.

HTML works on a 'tag' system, where each tag affects the content placed within that tag:
<TAG>What the tag affects</TAG>.
CSS(Cascading Style Sheets)
→CSS is a relatively new language, designed to expand upon the limited style properties of HTML.
→Easy to learn and implement, CSS is an excellent way to control the style of your site, such as text
styles like size, color, and font.
→CSS may also be placed inside the HTML page or in separate files.

6.2. Server-Side Programs


Throughout this course, we have been creating HTML with a text editor and saving .html files.
When these are put on a web server, they are then sent as-is to web browsers. In this case, the job of the
web server is very simple: find the file and send it out.
We have been able to modify the HTML page with code we have written, but that was all JavaScript code
that runs in the web browser. There was never any change in the HTML sent from the server to the
browser.
But if you think about many web sites you visit, this method of creating web pages can‘t be the whole
story. On Facebook, your news feed changes each time you load it: nobody is sitting there typing HTML
to update it for you. When you search on Google, you might be searching for something nobody has ever
searched for before, so there‘s no way the result can be pre-prepared.

For these sites (and many others), the HTML that is sent from the server to your browser must be
generated when you request it. There is a program on the web server that can look at your request (what
you searched for, or who you are logged in as, or where you are requesting from, or any other condition)
and create an HTML page specifically for that request.
Web pages that come from .html files on the server are called static web pages. Web pages that are
created as they are requested are called dynamic web pages.
Writing programs to create dynamic pages is server-side programming since the programs run on the web
server. The programming we have been doing in JavaScript, where the programs run in the user‘s web
browser, is called client-side programming.
We have only made static web pages in this course. That has given us a good chance to explore the basic
ideas of the web, and given us a place to put JavaScript code to learn about (client-side) programming and
do some interaction with the user.
Creating dynamic web pages requires a few more things that we won‘t be doing in detail here.
First, the web server needs to be configured to actually run a program to generate a response (instead of
just finding a file on disk). This is often the biggest barrier to exploring server-side programming: you
need a web server and need to set it up appropriately. This isn‘t terribly
difficult or expensive, but it can be a challenge for beginners.
Second, you need to be able to write programs that generate the HTTP response for the user.
This generally means creating HTML with your code. Exactly how that is done depends on the page you
need to create: it will probably involve reading information from a database, or collecting information
from some other source.
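
As a hedged sketch of what server-side generation can look like, the Java example below uses the JDK's
built-in com.sun.net.httpserver package to build an HTML page at the moment each request arrives; the
port 8080, the root path, and the timestamp content are arbitrary illustrative choices, not a prescription.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.time.LocalDateTime;

public class DynamicPageServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);   // arbitrary port

        server.createContext("/", exchange -> {
            // The HTML is generated per request, so every reload shows a new timestamp.
            String html = "<html><body><h1>Dynamic page</h1>"
                    + "<p>Generated at " + LocalDateTime.now() + "</p></body></html>";
            byte[] body = html.getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Serving dynamic pages on http://localhost:8080/");
    }
}

Reloading the page shows a different timestamp each time, which is exactly the behaviour a static .html
file sitting on disk cannot provide.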

6.3. Socket Programming


Sockets allow communication between two different processes on the same or different machines. To be
more precise, it‘s a way to talk to other computers using standard Unix file descriptors. In Unix, every I/O

action is done by writing or reading a file descriptor. A file descriptor is just an integer associated with an
open file and it can be a network connection, a text file, a terminal, or something else.

To a programmer, a socket looks and behaves much like a low-level file descriptor. This is because
commands such as read() and write() work with sockets in the same way they do with files and pipes.
Sockets were first introduced in 2.1BSD and subsequently refined into their current form with 4.2BSD.
The sockets feature is now available with most current UNIX system releases.

Where is Socket Used?

A Unix Socket is used in a client-server application framework. A server is a process that performs some
functions on request from a client. Most of the application-level protocols like FTP, SMTP, and POP3
make use of sockets to establish connection between client and server and then for exchanging data.

Socket Types

There are four types of sockets available to the users. The first two are most commonly used and the last
two are rarely used.
Processes are presumed to communicate only between sockets of the same type but there is no restriction
that prevents communication between sockets of different types.

 Stream Sockets – Delivery in a networked environment is guaranteed. If you send three items
"A, B, C" through the stream socket, they will arrive in the same order – "A, B, C". These sockets
use TCP (Transmission Control Protocol) for data transmission. If delivery is impossible, the
sender receives an error indicator. Data records do not have any boundaries.
 Datagram Sockets – Delivery in a networked environment is not guaranteed. They are
connectionless because you do not need to have an open connection as in stream sockets; you
build a packet with the destination information and send it out. They use UDP (User Datagram
Protocol).
 Raw Sockets – These provide users access to the underlying communication protocols, which
support socket abstractions. These sockets are normally datagram oriented, though their exact
characteristics are dependent on the interface provided by the protocol. Raw sockets are not
intended for the general user; they have been provided mainly for those interested in developing
new communication protocols, or for gaining access to some of the more cryptic facilities of an
existing protocol.
 Sequenced Packet Sockets – They are similar to a stream socket, with the exception that record
boundaries are preserved. This interface is provided only as a part of the Network Systems (NS)
socket abstraction, and is very important in most serious NS applications. Sequenced-packet
sockets allow the user to manipulate the Sequence Packet Protocol (SPP) or Internet Datagram
Protocol (IDP) headers on a packet or a group of packets, either by writing a prototype header
along with whatever data is to be sent, or by specifying a default header to be used with all
outgoing data, and allows the user to receive the headers on incoming packets.
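
As a brief, hedged Java illustration of the two commonly used types, a stream socket corresponds to the
Socket/ServerSocket classes (TCP) and a datagram socket to DatagramSocket (UDP); the local loopback
setup below exists only so the example is self-contained and is not taken from the text.

import java.net.DatagramSocket;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketTypesDemo {
    public static void main(String[] args) throws Exception {
        // Stream socket (TCP): connection-oriented, ordered, reliable delivery.
        // A local ServerSocket is opened just so the client Socket has something to connect to.
        try (ServerSocket listener = new ServerSocket(0);                       // 0 = any free port
             Socket stream = new Socket("localhost", listener.getLocalPort())) {
            System.out.println("Stream socket connected: " + stream.isConnected());
        }

        // Datagram socket (UDP): connectionless; each packet is addressed individually.
        try (DatagramSocket datagram = new DatagramSocket()) {
            System.out.println("Datagram socket bound to local port " + datagram.getLocalPort());
        }

        // Raw and sequenced-packet sockets are not exposed by the standard Java API;
        // they are normally used from C on systems that provide them.
    }
}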

Socket programming is a way of connecting two nodes on a network to communicate with each other.
One socket (node) listens on a particular port at an IP address, while the other socket reaches out to it to
form a connection. The server forms the listener socket while the client reaches out to the
server.
A socket is a communications connection point (endpoint) that you can name and address in a network.

Socket programming shows how to use socket APIs to establish communication links between remote
and local processes.
The processes that use a socket can reside on the same system or different systems on different networks.
Sockets are useful for both stand-alone and network applications. Sockets allow you to exchange
information between processes on the same machine or across a network, distribute
work to the most efficient machine, and they easily allow access to centralized data. Socket application
program interfaces (APIs) are the network standard for TCP/IP. A wide range of operating systems
support socket APIs. IBM® i sockets support multiple transport and networking protocols. Socket system
functions and the socket network functions are thread-safe.
Programmers who use Integrated Language Environment® (ILE) C can refer to this topic collection to
develop socket applications. You can also code to the sockets API from other ILE languages, such as
RPG.

6.4. Server Sockets


TCP server-socket programming is almost as simple as client socket programming. A single class
(ServerSocket) is used to create and manage TCP client socket connections.
The ServerSocket binds to a port and waits for new TCP client connections. When a new TCP client
connection is received, an instance of the Socket class is created by the ServerSocket instance and used to
communicate with the remote client. All of the same techniques described in the previous section can be
used with this newly created Socket instance.
The ServerSocket class provides several constructors and methods useful for binding a TCP server socket
to a local IP address and port. These constructors are used to define the local IP address, the local port,
and the connection backlog parameters to be used. The remaining methods are used to receive new TCP
connections, fine-tune various aspects of newly created Socket instances, determine the binding state, and
close the socket.
Relatively few of the constructors and methods are needed to implement basic TCP server-socket
functionality (see Example 5.5). In this example, the LineNumberReader class is used to read the TCP
client request line-by-line. It is important to note that this TCP server is single-threaded and will close or
exit upon receiving and sending one string.
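
The referenced Example 5.5 is not reproduced in this text, so the following is only a hedged Java
reconstruction of the kind of server it describes: a single-threaded ServerSocket that uses
LineNumberReader to read one line from the client, sends one reply, and then exits. The port number 5000
is an arbitrary choice.

import java.io.InputStreamReader;
import java.io.LineNumberReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SingleShotTcpServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {          // binds to port 5000 and listens
            System.out.println("Waiting for one client on port 5000...");

            // accept() blocks until a client connects, then returns a Socket for that client.
            try (Socket client = server.accept();
                 LineNumberReader in = new LineNumberReader(
                         new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {

                String request = in.readLine();                       // read the client's request line
                out.println("Server received: " + request);           // send one reply line
            }
        }
        // Single-threaded: after one request/response exchange the server simply exits.
    }
}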
Sockets are commonly used for client and server interaction. Typical system configuration places the
server on one machine, with the clients on other machines. The clients connect to the server, exchange
information, and then disconnect.
A socket has a typical flow of events. In a connection-oriented client-to-server model, the socket on the
server process waits for requests from a client. To do this, the server first establishes (binds) an address
that clients can use to find the server. When the address is established, the server waits for clients to
request a service. The client-to-server data exchange takes place when a client connects to the server
through a socket. The server performs the client‘s request and sends the reply back to the client.
The following figure shows the typical flow of events (and the sequence of issued APIs) for a connection-
oriented socket session. An explanation of each event follows the figure.

This is a typical flow of events for a connection-oriented socket:

1. The socket() API creates an endpoint for communications and returns a socket descriptor that
represents the endpoint.
2. When an application has a socket descriptor, it can bind a unique name to the socket. Servers must bind
a name to be accessible from the network.
3. The listen() API indicates a willingness to accept client connection requests. When a listen() API is
issued for a socket, that socket cannot actively initiate connection requests. The listen() API is issued after
a socket is allocated with a socket() API and the bind() API binds a name to the socket. A listen() API
must be issued before an accept() API is issued.

4. The client application uses a connect() API on a stream socket to establish a connection to the server.
5. The server application uses the accept() API to accept a client connection request. The server must
issue the bind() and listen() APIs successfully before it can issue an accept() API.

6. When a connection is established between stream sockets (between client and server), you can use any
of the socket API data transfer APIs. Clients and servers have many data transfer APIs from which to
choose, such as send(), recv(), read(), write(), and others.
7. When a server or client wants to stop operations, it issues a close() API to release any system resources
acquired by the socket.
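
For the client side of this flow, here is a hedged Java sketch covering steps 4, 6, and 7; in Java the socket()
and connect() calls are rolled into the Socket constructor, and close() happens automatically through
try-with-resources. The host and port are placeholders meant to match a running server such as the one
sketched earlier.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SimpleTcpClient {
    public static void main(String[] args) throws Exception {
        // Steps 1 and 4: create the socket and connect to the server in one call.
        try (Socket socket = new Socket("localhost", 5000);           // placeholder host and port
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))) {

            // Step 6: exchange data over the established connection.
            out.println("hello server");
            System.out.println("Reply: " + in.readLine());

        } // Step 7: try-with-resources closes the socket and releases its resources.
    }
}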

Multithreading Concepts

Multithreading is the ability of a program or an operating system process to manage its use by more than
one user at a time and to even manage multiple requests by the same user without having to have multiple
copies of the program running in the computer. Each user request for a program or system service
(and here a user can also be another program) is kept track of as a thread with a separate identity. As
programs work on behalf of the initial request for that thread and are interrupted by other requests, the
status of work on behalf of that thread is kept track of until the work is completed.

Multithreading is a CPU (central processing unit) feature that allows two or more instruction threads to
execute independently while sharing the same process resources. A thread is a self-contained sequence of
instructions that can execute in parallel with other threads that are part of the same root process.

Multithreading allows multiple concurrent tasks to be performed within a single process. When data
scientists are training machine learning algorithms, a multithreaded approach to programming can
improve speed when compared to traditional parallel multiprocessing programs.

Even though it‘s faster for an operating system (OS) to switch between threads for an active CPU task
than it is to switch between different processes, multithreading requires careful programming in order to
avoid conflicts caused by race conditions and deadlocks.
To prevent race conditions and deadlocks, programmers use locks that prevent multiple threads from
modifying the value of the same variable at the same time.
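
A minimal, hedged Java sketch of that point about locks: two threads increment a shared counter, and a
synchronized block serves as the lock that stops them from modifying the variable at the same time (the
class name and iteration count are arbitrary choices for illustration).

public class SharedCounterDemo {
    private static long counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                synchronized (lock) {      // only one thread may execute this block at a time
                    counter++;
                }
            }
        };

        Thread first = new Thread(work);
        Thread second = new Thread(work);
        first.start();
        second.start();
        first.join();                      // wait for both threads to finish
        second.join();

        // With the lock the result is always 2,000,000; without it, lost updates
        // caused by the race condition would usually make the total come up short.
        System.out.println("Final counter value: " + counter);
    }
}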
In programming, a thread maintains a list of information relevant to its execution, including the priority
schedule, exception handlers, a set of CPU registers, and stack state in the address space of its hosting
process. Threading can be useful in a single-processor system because it allows the
primary execution thread to be responsive to user input while supporting threads execute long-running
tasks in the background that do not require user intervention.

When thinking about how multithreading is done, it‘s important to separate the two concepts of parallel
and concurrent processing.
Parallel multiprocessing means the system is actually handling more than one thread at a given time.
Concurrent processing means that only one thread will be handled at a time, but the system will create
efficiencies by moving quickly between two or more threads.
Another important thing to note is that for practical purposes, computer systems set up for human users
can have parallel or concurrent systems, with the same end result – the process looks parallel to the user
because the computer is working so quickly in terms of microseconds.
The evolution of multicore systems means that there is more parallelism, which alleviates the need for
efficient concurrent processing. The development of faster and more powerful microchips and processors
on this end of the expansion of Moore‘s law is important to this type of hardware design and engineering
in general.

In addition, much of the parallel or concurrent processing is made available according to the vagaries of
the operating system. So in effect, to the human user, either parallel or concurrent process, or processes
that are mixed, are all experienced as parallelism in real-time.

Types of Multithreading

Different types of multithreading apply to various versions of operating systems and related controls that
have evolved in computing: for example, in pre-emptive multithreading, the context switch is controlled
by the operating system. Then there‘s cooperative multithreading, in which context switching is
controlled by the thread. This could lead to problems, such as deadlocks if a thread is blocked waiting for
a resource to become free.
Many other multithreading models also apply; for example, the coarse-grained, interleaved, and
simultaneous multithreading models determine how the threads are coordinated and processed. Other
options for multithreading include the many-to-many, many-to-one, and one-to-one models. Some models
use concepts like equal time slices to try to portion out
execution among threads. The type of multithreading depends on the system itself, its philosophy and its
build, and how the engineers planned multithreading functionality within it.

In the active/passive system model, one thread remains responsive to a user, and another thread works on
longer-term tasks in the background. This model is useful for promoting a system that looks parallel from
a user viewpoint, which brings us to a major point in evaluating processes like micro threading from both
ends: from the perspective of the engineer, and the perspective of the end-user.

6.5. Review Questions

 What does "server computer" mean?
 What is client-server communication?
 What is socket programming?
 Discuss the socket server concepts.
 What does multithreading mean?

UNIT - VII

7.1. Fundamentals of Network Security


Network Security deals with all aspects related to the protection of the sensitive information assets
existing on the network. It covers various mechanisms developed to provide fundamental security
services for data communication. This tutorial introduces you to several types of network vulnerabilities
and attacks followed by the description of security measures employed against them. It describes the
functioning of most common security protocols employed at different networking layers right from
application to data link layer. After going through this tutorial, you will find yourself at an intermediate
level of knowledge regarding network security.

Network security is not only concerned about the security of the computers at each end of the
communication chain; however, it aims to ensure that the entire network is secure. Network security
entails protecting the usability, reliability, integrity, and safety of network and data.
Effective network security defeats a variety of threats from entering or spreading on a network.
The primary goals of network security are Confidentiality, Integrity, and Availability.

Activity 7.1

 Why do we need to secure our network communication?
 Discuss some techniques that help to secure the network.
 Do you know about cyber security? Discuss with your classmates.

7.2. Goals of Network Security


As discussed in earlier sections, a large number of vulnerabilities exist in the network.
Thus, during transmission, data is highly vulnerable to attacks. An attacker can target the communication
channel, obtain the data, and read it or re-insert a false message to achieve his nefarious aims.

The primary goals of network security are Confidentiality, Integrity, and Availability. These three pillars
of Network Security are often represented as CIA triangle.

 Confidentiality – The function of confidentiality is to protect precious business data from
unauthorized persons. The confidentiality part of network security makes sure that the data is
available only to the intended and authorized persons.
 Integrity – This goal means maintaining and assuring the accuracy and consistency of data. The
function of integrity is to make sure that the data is reliable and is not changed by unauthorized
persons.

 Availability – The function of availability in Network Security is to make sure that the
data, network resources/services are continuously available to the legitimate users,
whenever they require it.

Achieving Network Security

Ensuring network security may appear to be very simple. The goals to be achieved seem to be
straightforward. But in reality, the mechanisms used to achieve these goals are highly complex, and
understanding them involves sound reasoning.
The International Telecommunication Union (ITU), in its recommendation on security architecture X.800,
has defined certain mechanisms to standardize the methods used to achieve network security.
Some of these mechanisms are –

 Encipherment – This mechanism provides data confidentiality services by transforming data
into forms that are not readable by unauthorized persons. This mechanism uses an encryption-decryption
algorithm with secret keys.
 Digital signatures – This mechanism is the electronic equivalent of ordinary signatures for
electronic data. It provides authenticity of the data.
 Access control – This mechanism is used to provide access control services. These mechanisms
may use the identification and authentication of an entity to determine and enforce the access
rights of the entity.

Having developed and identified various security mechanisms for achieving network security, it is
essential to decide where to apply them; both physically (at what location) and logically (at what layer of
an architecture such as TCP/IP).

Security Mechanisms at Networking Layers

Several security mechanisms have been developed in such a way that they can be deployed at a specific
layer of the OSI network layer model.

• Security at Application Layer – Security measures used at this layer are application specific. Different
types of application would need separate security measures. In order to ensure application layer security,
the applications need to be modified.
It is considered that designing a cryptographically sound application protocol is very difficult, and
implementing it properly is even more challenging. Hence, application layer security mechanisms for
protecting network communications are preferably limited to standards-based solutions that have been in
use for some time.
An example of application layer security protocol is Secure Multipurpose Internet Mail Extensions
(S/MIME), which is commonly used to encrypt e-mail messages. DNSSEC is another protocol at this
layer used for secure exchange of DNS query messages.
• Security at Transport Layer – Security measures at this layer can be used to protect the data in a
single communication session between two hosts. The most common use for transport layer security
protocols is protecting the HTTP and FTP session traffic.
The Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols are the most commonly
used for this purpose.
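As a hedged illustration of transport layer security, the Python sketch below wraps an ordinary TCP socket in TLS before sending an HTTP request. The host name is only an example, and the system's default certificate store is assumed to be available.

    import socket
    import ssl

    HOST = "www.example.com"                     # example host, for illustration only

    context = ssl.create_default_context()       # loads trusted CA certificates
    with socket.create_connection((HOST, 443)) as raw_sock:
        # Wrap the plain TCP socket; the TLS handshake happens here.
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print(tls_sock.version())            # e.g. 'TLSv1.3'
            tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
            print(tls_sock.recv(200))            # first bytes of the encrypted reply

The application code still uses ordinary send and receive calls; the encryption is added transparently at the transport layer, which is exactly why TLS can protect HTTP traffic without changing the application protocol itself.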

• Security at Network Layer – Security measures at this layer can be applied to all applications; thus,
they are not application-specific. All network communications between two hosts or networks can be
protected at this layer without modifying any application. In some environments, network layer security
protocol such as Internet Protocol Security (IPsec) provides a much better solution than transport or
application layer controls because of the difficulties in adding controls to individual applications.
However, security protocols at this layer provide less communication flexibility that may be required by
some applications.

Incidentally, a security mechanism designed to operate at a higher layer cannot provide protection for data
at lower layers, because the lower layers perform functions of which the higher layers are not aware.
Hence, it may be necessary to deploy multiple security mechanisms for enhancing the network security.
In the following chapters of the tutorial, we will discuss the security mechanisms employed at different
layers of OSI networking architecture for achieving network security.

7.3. Cryptography
Human beings have, from the earliest ages, had two inherent needs – (a) to communicate and share information and (b) to
communicate selectively. These two needs gave rise to the art of coding messages in such a way that
only the intended people could have access to the information. Unauthorized
people could not extract any information, even if the scrambled messages fell into their hands.
The art and science of concealing the messages to introduce secrecy in information security is recognized
as cryptography.
The word 'cryptography' was coined by combining two Greek words, 'kryptos' meaning hidden and
'graphein' meaning writing.

History of Cryptography

The art of cryptography is considered to have been born along with the art of writing. As civilizations evolved,
human beings got organized into tribes, groups, and kingdoms. This led to the emergence of ideas such as
power, battles, supremacy, and politics. These ideas further fueled the natural need of people to
communicate secretly with selective recipients, which in turn
ensured the continuous evolution of cryptography as well.
The roots of cryptography are found in Roman and Egyptian civilizations.

Context of Cryptography

Cryptology, the study of cryptosystems, can be subdivided into two branches –

• Cryptography
• Cryptanalysis

What is Cryptography?

Cryptography is the art and science of making a cryptosystem that is capable of providing
information security.
Cryptography deals with the actual securing of digital data. It refers to the design of mechanisms
based on mathematical algorithms that provide fundamental information security services. You
can think of cryptography as the establishment of a large toolkit containing different techniques
in security applications.

What is Cryptanalysis?

The art and science of breaking the ciphertext is known as cryptanalysis.
Cryptanalysis is the sister branch of cryptography, and the two co-exist. The cryptographic
process results in the ciphertext for transmission or storage; cryptanalysis involves the study of
cryptographic mechanisms with the intention of breaking them. Cryptanalysis is also used during the
design of new cryptographic techniques to test their security strength.

Security Services of Cryptography

The primary objective of using cryptography is to provide the following four fundamental
information security services. Let us now see the possible goals intended to be fulfilled by
cryptography.

Confidentiality

Confidentiality is the fundamental security service provided by cryptography. It is a security
service that keeps the information from an unauthorized person. It is sometimes referred to as
privacy or secrecy.
Confidentiality can be achieved through numerous means, starting from physical securing to the
use of mathematical algorithms for data encryption.

Data Integrity

It is a security service that deals with identifying any alteration to the data. The data may get
modified by an unauthorized entity intentionally or accidentally. The integrity service confirms
whether the data is intact or not since it was last created, transmitted, or stored by an authorized user.

Data integrity cannot prevent the alteration of data, but provides a means for detecting whether
data has been manipulated in an unauthorized manner.

Authentication

Authentication provides the identification of the originator. It confirms to the receiver that the
data received has been sent only by an identified and verified sender.
Authentication service has two variants –

• Message authentication identifies the originator of the message without any regard to the router or
system that has sent the message.
• Entity authentication is assurance that data has been received from a specific entity, say a
particular website. Apart from the originator, authentication may also provide assurance about
other parameters related to the data, such as the date and time of creation/transmission.

Non-repudiation

It is a security service that ensures that an entity cannot refuse the ownership of a previous
commitment or an action. It is an assurance that the original creator of the data cannot deny the
creation or transmission of the said data to a recipient or third party.
Non-repudiation is a property that is most desirable in situations where there are chances of a
dispute over the exchange of data. For example, once an order is placed electronically, a
purchaser cannot deny the purchase order, if non-repudiation service was enabled in this
transaction.

Cryptography Primitives

Cryptography primitives are nothing but the tools and techniques in Cryptography that can be
selectively used to provide a set of desired security services –

• Encryption
• Hash functions
• Message Authentication codes (MAC)
• Digital Signatures
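As a brief, hedged illustration of two of these primitives, the Python sketch below computes a hash of a message with the standard hashlib module and a message authentication code with the hmac module. The message and shared key shown are hypothetical.

    import hashlib
    import hmac

    message = b"transfer 500 birr to account 12345"    # example message
    shared_key = b"a-secret-key-known-to-both-sides"    # hypothetical shared key

    # Hash function: a fixed-length digest; any change to the message changes it.
    digest = hashlib.sha256(message).hexdigest()

    # Message Authentication Code: a keyed digest that also proves the sender
    # knew the shared key (integrity plus origin authentication).
    mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    print("SHA-256 :", digest)
    print("HMAC    :", mac)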

The following table shows the primitives that can achieve a particular security service on their
own.

A cryptosystem is an implementation of cryptographic techniques and their accompanying
infrastructure to provide information security services. A cryptosystem is also referred to as a
cipher system.

Let us discuss a simple model of a cryptosystem that provides confidentiality to the information
being transmitted. This basic model is depicted in the illustration below –

The illustration shows a sender who wants to transfer some sensitive data to a receiver in such a way that
any party intercepting or eavesdropping on the communication channel cannot extract the data.
The objective of this simple cryptosystem is that at the end of the process, only the sender and the
receiver will know the plaintext.

Components of a Cryptosystem

The various components of a basic cryptosystem are as follows –

• Plaintext. It is the data to be protected during transmission.
• Encryption Algorithm. It is a mathematical process that produces a ciphertext for any given plaintext
and encryption key. It is a cryptographic algorithm that takes plaintext and an encryption key as input and
produces a ciphertext.
• Ciphertext. It is the scrambled version of the plaintext produced by the encryption algorithm using a
specific encryption key. The ciphertext is not guarded. It flows on the public channel. It can be
intercepted or compromised by anyone who has access to the communication channel.
• Decryption Algorithm. It is a mathematical process that produces a unique plaintext for any given
ciphertext and decryption key. It is a cryptographic algorithm that takes a ciphertext and a decryption key
as input, and outputs a plaintext. The decryption algorithm essentially reverses the encryption algorithm
and is thus closely related to it.
• Encryption Key. It is a value that is known to the sender. The sender inputs the encryption key into the
encryption algorithm along with the plaintext in order to compute the ciphertext.
• Decryption Key. It is a value that is known to the receiver. The decryption key is related to the
encryption key, but is not always identical to it. The receiver inputs the decryption key into the decryption
algorithm along with the ciphertext in order to compute the plaintext.

For a given cryptosystem, a collection of all possible decryption keys is called a key space.
An interceptor (an attacker) is an unauthorized entity who attempts to determine the plaintext.
He can see the ciphertext and may know the decryption algorithm. He, however, must never know the
decryption key.
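To tie these components together, the following is a deliberately toy sketch in Python: a repeating-key XOR cipher in which the same value serves as both encryption and decryption key. It only shows the roles of plaintext, key, ciphertext, and the encryption/decryption algorithms; it is not a secure cipher, and the plaintext and key are arbitrary examples.

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # Encryption and decryption algorithm are the same operation here:
        # each byte of the data is XORed with a byte of the repeating key.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    plaintext = b"attack at dawn"       # data to be protected
    key = b"k3y"                        # value known to sender and receiver

    ciphertext = xor_cipher(plaintext, key)   # what flows on the public channel
    recovered = xor_cipher(ciphertext, key)   # receiver applies the same key

    print(ciphertext)                   # scrambled bytes
    print(recovered)                    # b'attack at dawn'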

7.4. Types of Cryptosystems
Fundamentally, there are two types of cryptosystems based on the manner in which encryption-decryption
is carried out in the system –
• Symmetric Key Encryption
• Asymmetric Key Encryption

The main difference between these cryptosystems is the relationship between the encryption and the
decryption key. Logically, in any cryptosystem, both the keys are closely associated. It is practically
impossible to decrypt the ciphertext with the key that is unrelated to the encryption key.

Symmetric Key Encryption

The encryption process where the same key is used for encrypting and decrypting the information is known
as Symmetric Key Encryption.

The study of symmetric cryptosystems is referred to as symmetric cryptography. Symmetric
cryptosystems are also sometimes referred to as secret key cryptosystems.
A few well-known examples of symmetric key encryption methods are the Data Encryption Standard
(DES), Triple-DES (3DES), IDEA, and BLOWFISH.

Prior to 1970, all cryptosystems employed symmetric key encryption. Even today, its relevance is very
high and it is being used extensively in many cryptosystems. It is very unlikely that this encryption will
fade away, as it has certain advantages over asymmetric key encryption.
The salient features of cryptosystem based on symmetric key encryption are –

• Persons using symmetric key encryption must share a common key prior to exchange of information.
• Keys are recommended to be changed regularly to prevent any attack on the system.
• A robust mechanism needs to exist to exchange the key between the communicating parties. As keys are
required to be changed regularly, this mechanism becomes expensive and cumbersome.
• In a group of n people, to enable two-party communication between any two persons, the number of
keys required for the group is n × (n – 1)/2 (a short calculation follows this list).
• The key length (number of bits) in this encryption is smaller and hence, the process of encryption-decryption
is faster than asymmetric key encryption.
• The processing power of the computer system required to run a symmetric algorithm is less.
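As a quick check of the key-count formula above, the short Python sketch below computes the number of pairwise secret keys needed for a few group sizes; the group sizes are arbitrary examples.

    def symmetric_keys_needed(n: int) -> int:
        # Every pair of people needs its own shared secret key: n * (n - 1) / 2
        return n * (n - 1) // 2

    for n in (5, 10, 100):
        print(n, "people ->", symmetric_keys_needed(n), "keys")
    # 5 people -> 10 keys, 10 people -> 45 keys, 100 people -> 4950 keys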

Challenge of Symmetric Key Cryptosystem

There are two restrictive challenges of employing symmetric key cryptography.


• Key establishment – before any communication, both the sender and the receiver need to agree on a
secret symmetric key. It requires a secure key establishment mechanism in place.
• Trust Issue – since the sender and the receiver use the same symmetric key, there is an implicit
requirement that the sender and the receiver 'trust' each other. For example, it may happen that the
receiver has lost the key to an attacker and the sender is not informed.

These two challenges are highly restraining for modern day communication. Today, people need to
exchange information with non-familiar and non-trusted parties. For example, a communication between
online seller and customer. These limitations of symmetric key encryption gave rise to asymmetric key
encryption schemes.

Asymmetric Key Encryption

The encryption process where different keys are used for encrypting and decrypting the information
is known as Asymmetric Key Encryption. Though the keys are different, they are mathematically related
and hence, retrieving the plaintext by decrypting ciphertext is feasible. The process is depicted in the
following illustration –

Asymmetric Key Encryption was invented in the 20th century to overcome the necessity of a pre-shared
secret key between communicating persons. The salient features of this encryption scheme are as follows –

• Every user in this system needs to have a pair of dissimilar keys, private key and public key. These
keys are mathematically related – when one key is used for encryption, the other can decrypt the
ciphertext back to the original plaintext.
• It requires putting the public key in a public repository and keeping the private key as a well-guarded secret. Hence,
this scheme of encryption is also called Public Key Encryption.
• Though the public and private keys of the user are related, it is computationally not feasible to find one
from the other. This is a strength of this scheme.
• When Host1 needs to send data to Host2, it obtains the public key of Host2 from the repository, encrypts
the data, and transmits it.
• Host2 uses its private key to extract the plaintext.
• The key length (number of bits) in this encryption is large and hence, the process of encryption-decryption
is slower than symmetric key encryption.
• The processing power of the computer system required to run an asymmetric algorithm is higher.
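As a hedged sketch of asymmetric encryption in code, the example below uses the third-party Python "cryptography" package (assumed to be installed) to generate an RSA key pair, encrypt with the public key, and decrypt with the private key; the message is an arbitrary example.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Receiver (Host2) generates a key pair and publishes the public key.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Sender (Host1) encrypts with the receiver's public key.
    ciphertext = public_key.encrypt(b"meet me at noon", oaep)

    # Only the holder of the private key can recover the plaintext.
    plaintext = private_key.decrypt(ciphertext, oaep)
    print(plaintext)   # b'meet me at noon'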

Symmetric cryptosystems are a natural concept. In contrast, public-key cryptosystems are quite difficult
to comprehend.

Challenge of Public Key Cryptosystem

Public-key cryptosystems have one significant challenge – the user needs to trust that the public key that
he is using in communications with a person really is the public key of that person and has not been
spoofed by a malicious third party. This is usually accomplished through a Public Key Infrastructure
(PKI) consisting of a trusted third party. The third party securely manages and attests to the authenticity of
public keys.
When the third party is requested to provide the public key for any communicating person X, they are
trusted to provide the correct public key.

The third party satisfies itself about user identity by the process of attestation, notarization, or some other
process – that X is the one and only, or globally unique, X. The most common method of making the
verified public keys available is to embed them in a certificate which is digitally signed by the trusted
third party.

Relation between Encryption Schemes

Due to the advantages and disadvantages of both systems, symmetric key and public-key
cryptosystems are often used together in practical information security systems.

Private Key

In private key cryptography, the same key (the secret key) is used for both encryption and decryption. The key is
symmetric because the single key must be copied or shared with the other party to decrypt the ciphertext. It is
faster than public key cryptography.

Public Key

In public key cryptography, two keys are used: one key is used for encryption and another key is used for decryption.
One key (the public key) is used to encrypt the plaintext and convert it into ciphertext, and the other key
(the private key) is used by the receiver to decrypt the ciphertext and read the message.
The essential difference between them is therefore that a private key scheme relies on a single shared
secret key, whereas a public key scheme uses a mathematically related pair of keys, one public and one private.

7.5. Firewalls
A firewall is a network security device that monitors incoming and outgoing network traffic and permits
or blocks data packets based on a set of security rules. Its purpose is to establish a barrier between your
internal network and incoming traffic from external sources (such as the internet) in order to block
malicious traffic like viruses and hackers.

Firewalls carefully analyze incoming traffic based on pre-established rules and filter traffic coming from
unsecured or suspicious sources to prevent attacks. Firewalls guard traffic at a computer's entry points,
called ports, which are where information is exchanged with external devices. For example, "Source
address 172.18.1.1 is allowed to reach destination 172.18.2.1 over port 22."

Think of IP addresses as houses, and port numbers as rooms within the house. Only trusted people (source
addresses) are allowed to enter the house (destination address) at all—then it's further filtered so that
people within the house are only allowed to access certain rooms
(destination ports), depending on whether they're the owner, a child, or a guest. The owner is allowed into any
room (any port), while children and guests are allowed into a certain set of rooms (specific ports).

7.6. Virtual Private Network


VPN stands for "Virtual Private Network" and describes the opportunity to establish a protected
network connection when using public networks. VPNs encrypt your internet traffic and disguise your
online identity. This makes it more difficult for third parties to track your activities online and steal data.
The encryption takes place in real time.

Activity 7.2

 What do you know about virtual private networks?
 Discuss with your classmates how a VPN works.
 List some very popular VPN servers.

How does a VPN work?

A VPN hides your IP address by letting the network redirect it through a specially configured remote
server run by a VPN host. This means that if you surf online with a VPN, the VPN server becomes the
source of your data. This means your Internet Service Provider (ISP) and other third parties cannot see
which websites you visit or what data you send and receive online. A VPN works like a filter that turns all
your data into "gibberish". Even if someone were to get their hands on your data, it would be useless.

What are the benefits of a VPN connection?

A VPN connection disguises your data traffic online and protects it from external access. Unencrypted
data can be viewed by anyone who has network access and wants to see it. With a VPN, hackers and
cyber criminals can't decipher this data.
Secure encryption: To read the data, you need an encryption key. Without one, it would take millions of
years for a computer to decipher the code in the event of a brute force attack. With the help of a VPN,
your online activities are hidden even on public networks.
Disguising your whereabouts: VPN servers essentially act as your proxies on the internet.
Because the demographic location data comes from a server in another country, your actual location
cannot be determined. In addition, most VPN services do not store logs of your activities. Some
providers, on the other hand, record your behavior but do not pass this information on to third parties.
This means that any potential record of your user behavior remains permanently hidden.
Access to regional content: Regional web content is not always accessible from everywhere.
Services and websites often contain content that can only be accessed from certain parts of the world.
Standard connections use local servers in the country to determine your location. This means that you
cannot access content at home while traveling, and you cannot access
international content from home. With VPN location spoofing, you can switch to a server in another
country and effectively "change" your location.
Secure data transfer: If you work remotely, you may need to access important files on your company‘s
network. For security reasons, this kind of information requires a secure connection.

To gain access to the network, a VPN connection is often required. VPN services connect to private
servers and use encryption methods to reduce the risk of data leakage.

7.7 Chapter Seven Review Questions


 What are network security issues?
 Discuss the three pillars of network security.
 What is cryptography?
 What does digital signature mean?
 What are symmetric key and asymmetric key encryption techniques?
 What do public key and private key mean?

SYSTEM AND NETWORK ADMINISTRATION

Prepared by
Mesele Gebre, (MSc)

Reviewed by
Tizita Obsa(MSc)

UNIT - I

1. Computer Systems & Network Overview


1.1 Computer System
Computer system is a collection of entities (hardware and software) that are designed to receive, process,
manage and present information in a meaningful format. Hardware refers to the physical, tangible
computer equipment and devices, which provide support for major functions such as input, processing
(internal storage, computation and control), output, secondary storage (for data and programs), and
communication. There are five main hardware components in a computer system: Input, Processing,
Storage, Output and Communication devices.

Computer software, also known as programs or applications, comprises the intangible components of the
computer system. It can be classified into two main classes, namely system software and application
software.

1.2. Network Overview
A network can be defined as two or more computers connected together in such a way that they can share
resources. The primary purpose of a network is to share resources, and a resource could be:

 a file,
 a folder,
 a printer,

 a disk drive,
 or just about anything else that exists on a computer.

Therefore, a computer network is simply a collection of computers or other hardware devices that are
connected together, either physically or logically, using special hardware and software, to allow them to
exchange information and cooperate. Networking is the term that describes the processes involved in
designing, implementing, upgrading, managing and otherwise working with networks and network
technologies.

There are different types of computer networks based on their respective attributes. These include:
geographical span, inter-connectivity (physical topology), administration, and architecture.

Geographical Span: based on geographical area it covers there are different types of network:

 Personal Area Network (PAN): is a network that may span across a given table, with distances
between the devices of not more than a few meters. The technology used to interconnect the devices
could be Bluetooth. These networks are called Personal Area Networks, since the devices
interconnected in these networks belong to a single person.

 Local Area Network (LAN): is a network that may span across a building, or across several
buildings within a single organization, by using intermediate devices, like switches and/or
hubs, to interconnect devices on all floors. Sometimes such kinds of networks are
 Metropolitan Area Network (MAN): is a network that may span across a whole city
interconnecting several buildings and organizations.
 Wide Area Network (WAN): is a network that may span across multiple cities, an entire
country, an entire continent, or it may even cover the whole world. For example, the Internet is
one example of a WAN.

Inter-connectivity: components of a network, including end devices and interconnecting devices, can be
connected to each other differently in some fashion. By connectedness we mean either logically,
physically or both ways. Network topology refers to the shape of a network, or the network‘s layout. It is
the geometric representation of the relationship of all the links and linking devices to one another. There
are four basic types of topologies, namely bus, star, ring and mesh topologies.

 Bus Topology: in this topology all devices are connected to a central cable, called the bus or
backbone, which is terminated at its ends (see figure 1.1 a). The purpose of the terminators is to
stop the signal from bouncing, thereby clearing the cable so that other computers can send data.
Message transmitted along the Bus is visible to all computers connected to the backbone cable.
As the message arrives at each workstation, the workstation checks the destination address
contained in the message and processes the packet if the address matches its own, or drops it otherwise. Its
advantages are ease of installation and a lower amount of cable required. Its main drawback is that
the entire network will be shut down if there is a break in the main cable.
 Star Topology: in this topology, each node is connected directly to a central device called a hub

or a switch (see figure 1.1 b). Data on a star network passes through the central device (switch) before
continuing to its destination. The central device manages and controls all functions of the network. This
configuration is common with twisted pair cable. RJ-45 Connectors are used to connect the cable to the
Network Interface Card (NIC) of each computer. Its advantages include, ease of installation and
reconfiguration, robust (ease of fault identification and isolation), link failure only affects device(s)
connected to that link, and is less expensive than mesh. Its drawbacks include more cable requirements
(than bus and ring) and single point of failure (if central device fail, the whole system will be down).

 Ring Topology: in this topology, all devices are connected to one another in the shape of a

closed loop, so that each device is connected directly to two other devices, one on either side of it (see
figure 1.1 c). Some of its advantages include, easy to install and reconfigure, less expensive (than mesh),
and performance is even despite the number of users. Its cons include, break in the ring (such as a
disabled station) can disable the entire network, and limitations on media and traffic (limitation on ring
length and number of devices).

 Mesh Topology: in this topology devices are connected with many redundant interconnections

between network nodes. In a full mesh topology (see figure 1.1 d), every node has a connection to every
other node in the network, which makes it the most expensive topology over all the other topologies. The
number of cables grows fast as the number of nodes increases, and it can be calculated by using the
general formula n(n – 1)/2, where n is the number of nodes in the network; for example, a full mesh of
10 nodes requires (10 × 9)/2 = 45 cables. It has several benefits,
such as: dedicated links between devices, robustness (a single link failure does not affect the entire
network), privacy/security (direct communication between communicating
devices), and ease of fault identification and isolation. Its drawbacks include: installation and
devices), and ease of fault identification and isolation. Its drawbacks include, installation and
reconnection are difficult (large number of cables), huge amount of cables consumes a lot of space, and it
is the most expensive of all.

Hybrid Topology: A network structure whose design contains more than one topology is said to be
Hybrid Topology. Hybrid topology inherits merits and demerits of all the incorporating topologies. As its
name indicates, this topology can be created by merging one or more of the above basic topologies.
Figure 1.1 e shows hybrid topology that made up of ring and star.

Figure 1.1. Types of computer networks based on their physical topology

Administration: From an administrator's point of view, a network can be a private network, which belongs to
a single autonomous system and cannot be accessed outside of its physical or logical domain; or a network
can be a public network, which can be accessed by anyone inside or outside of an organization.

Network Architecture: based on the architecture (where do the clients get the shared resources?),
networks can be categorized into three:

 Client-Server Architecture: There can be one or more systems acting as servers. The others, being
clients, request the server to serve their requests. Servers take and process requests on the clients' behalf.
 Peer-to-Peer (Point-to-Point): Two systems can be connected point-to-point, or in other words
in a back-to-back fashion. They both reside on the same level and are called peers.
 There can be a hybrid network which involves the network architecture of both the above types.

Figure 1.2. Client-Server (left) and Peer-to-Peer (right) network

1. Network Protocols

Protocol is a set of rules or standards that control data transmission and other interactions between
networks, computers, peripheral devices, and operating systems.

For two devices to communicate with each other, the same protocol must be used on the sending and
receiving devices. It is possible for two devices that use different protocols to communicate with each
other, but a gateway is needed in between.

1.3 Overview of the TCP/IP Protocol suites


The TCP/IP protocol suite was developed prior to the OSI model. Therefore, the layers in the TCP/IP
protocol suite do not exactly match those in the OSI model. The original TCP/IP protocol suite was

defined as having four layers: host-to-network, Internet, transport, and application layers. However, when
TCP/IP is compared to OSI, we can say that the host-to-network layer is equivalent to the combination of
the physical and data link layers. The Internet layer is equivalent to the network layer, and the application
layer is roughly doing the job of the session, presentation, and application layers with the transport layer
in TCP/IP taking care of part of the duties of the session layer.

TCP/IP is a hierarchical protocol made up of interactive modules, each of which provides a specific
functionality; however, the modules are not necessarily interdependent. Whereas the OSI model specifies
which functions belong to each of its layers, the layers of TCP/IP suite contain relatively independent
protocols that can be mixed and matched depending on the needs of the system. The term hierarchical
means that each upper-level protocol is supported by one or more lower-level protocols.

At the transport layer, TCP/IP defines three protocols: Transmission Control Protocol (TCP), User
Datagram Protocol (UDP), and Stream Control Transmission Protocol (SCTP). At the network layer, the
main protocol defined by TCP/IP is the Internetworking Protocol (IP); there are also some other
protocols that support data movement in this layer.

Figure 1.3. TCP/IP Protocol Stack

 Network Access (Physical and Data Link Layers)

The Network Access layer of the TCP/IP model corresponds with the Data Link and Physical layers of
the OSI reference model. It defines the protocols and hardware required to connect a host to a physical

network and to deliver data across it. Packets from the Internet layer are sent down the Network Access
layer for delivery within the physical network. The destination can be another host in the network, itself,
or a router for further forwarding. So the Internet layer has a view of the entire Internetwork whereas the
Network Access layer is limited to the physical layer boundary that is often defined by a layer 3 device
such as a router.

The Network Interface layer (also called the Network Access layer) is responsible for placing TCP/IP
packets on the network medium and receiving TCP/IP packets off the network medium. TCP/IP was
designed to be independent of the network access method, frame format, and medium. In this way,
TCP/IP can be used to connect differing network types. These include LAN technologies such as Ethernet
and Token Ring and WAN technologies such as X.25 and Frame Relay. Independence from any specific
network technology gives TCP/IP the ability to be adapted to new technologies such as Asynchronous
Transfer Mode (ATM).

Network Access layer uses a physical address to identify hosts and to deliver data.

 The Network Access layer PDU is called a frame. It contains the IP packet as well as a protocol

header and trailer from this layer.

 The Network Access layer header and trailer are only relevant in the physical network. When a
router receives a frame, it strips of the header and trailer and adds a new header and trailer before
sending it out the next physical network towards the destination.

The Network Access layer manages all the services and functions necessary to prepare the data for the
physical network. These responsibilities include:

 Interfacing with the computer's network adapter.
 Coordinating the data transmission with the conventions of the appropriate access method.
 Formatting the data into a unit called a frame and converting that frame into the stream of electric
or analog pulses that passes across the transmission medium.
 Checking for errors in incoming frames.
 Adding error-checking information to outgoing frames so that the receiving computer can check
the frame for errors.
 Acknowledging receipt of frames and resending frames if acknowledgment is not received.

1.3.1 Network Access Layer Protocols


The Network Access layer defines the procedures for interfacing with the network hardware and
accessing the transmission medium. Below the surface of TCP/IP‘s Network Access layer, you‘ll find an
intricate interplay of hardware, software, and transmission-medium specifications. Unfortunately, at least
for the purposes of a concise description, there are many different types of physical networks that all have
their own conventions, and any one of these physical networks can form the basis for the Network Access
layer. A few examples include:

 Ethernet
 Token ring
 FDDI
 PPP (Point-to-Point Protocol, through a modem)
 Wireless networks
 Frame Relay

The good news is that the Network Access layer is almost totally invisible to the end user. The network
adapter driver, coupled with key low-level components of the operating system and protocol software,
manages most of the tasks relegated to the Network Access layer, and a few short configuration steps are
usually all that is required of a user. These steps are becoming simpler with the improved plug-and- play
features of desktop operating systems.

 Network (Internet) Layer

At the network layer (or, more accurately, the Internetwork layer), TCP/IP supports the Internetworking
Protocol. IP, in turn, uses four supporting protocols: ARP, RARP, ICMP, and IGMP.

The Internet (Network) Layer Protocols

Internet Protocol (IP): IP essentially is the Internet layer. The other protocols found here merely exist to
support it. It is an unreliable and connectionless protocol (i.e. a best-effort delivery service). The term
best effort means that IP provides no error checking or tracking. It assumes the unreliability of the
underlying layers and does its best to get a transmission through to its destination, but with no guarantees.

 IP transports data in packets called datagrams, each of which is transported separately.
Datagrams can travel along different routes and can arrive out of sequence or be duplicated. IP
does not keep track of the routes and has no facility for reordering datagrams once they arrive at
their destination.

 Internet Control Message Protocol (ICMP): works at the Network layer and is used by IP for many
different services. ICMP is a management protocol and messaging service provider for IP. The following
are some common events and messages that ICMP relates to:

 Destination Unreachable If a router can‘t send an IP datagram any further, it uses ICMP to send
a message back to the sender, advising it of the situation.
 Buffer Full If a router‘s memory buffer for receiving incoming datagrams is full, it will use
ICMP to send out this message until the congestion abates.
 Hops Each IP datagram is allotted a certain number of routers, called hops, to pass through. If it
reaches its limit of hops before arriving at its destination, the last router to receive that datagram
deletes it. The executioner router then uses ICMP to send an obituary message, informing the
sending machine of the demise of its datagram.
 Ping (Packet Internet Groper) uses ICMP echo messages to check the physical and logical
connectivity of machines on a network.
 Traceroute Using ICMP timeouts, Traceroute is used to discover the path a packet takes as it
traverses an Internetwork.

 Address Resolution Protocol (ARP): finds the hardware address (physical or MAC address) of a host
from a known IP address. ARP interrogates the local network by sending out a broadcast asking the
machine with the specified IP address to reply with its hardware address.

 Reverse Address Resolution Protocol (RARP): discovers the identity of the IP address for diskless
machines by sending out a packet that includes its MAC address and a request for the

IP address assigned to that MAC address. A designated machine, called a RARP server, responds with
the answer, and the identity crisis is over.

1.3.2 Transport Layer


Traditionally the transport layer was represented in TCP/IP by two protocols: TCP and UDP. IP is a host-
to-host protocol, meaning that it can deliver a packet from one physical device to another. UDP and TCP
are transport level protocols responsible for delivery of a message from a process (running program) to
another process. A new transport layer protocol, SCTP, has been devised to meet the needs of some newer
applications.

The Transport Layer Protocol

 Transmission Control Protocol (TCP): TCP provides full transport-layer services to applications.
TCP is a reliable stream transport protocol. The term stream, in this context, means connection-oriented
(i.e. a connection must be established between both ends of a transmission before either of the
communicating devices can transmit data – three way handshaking). At the sending end of each
transmission, TCP divides a stream of data (that it received from the application layer) into smaller units
called segments. Each segment includes a sequence number for reordering after receipt, together with an
acknowledgment number for the segments received. Segments are carried across the Internet inside of IP

datagrams. At the receiving end, TCP collects each datagram as it comes in and the destination‘s TCP
protocol reorders the transmission based on sequence numbers.

 User Datagram Protocol (UDP): UDP is the simplest of all transport layer protocols. It is a
process-to-process protocol which does not sequence the segments and does not care in which order the
segments arrive at the destination. UDP simply sends the segments off and forgets about them. It
doesn't follow through, check up on them, or even allow for an acknowledgment of safe arrival, which amounts to
complete abandonment (i.e., it does not guarantee successful delivery of the transmitted message).

 Stream Control Transmission Protocol: SCTP provides support for newer applications such as voice
over the Internet (VoIP). It is a transport layer protocol that combines the best features of both UDP and
TCP.

NOTE: TCP for reliability and UDP for faster transfers.
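To contrast the two transport protocols in code, the Python sketch below sends a single datagram over UDP; there is no connection setup, sequencing, or acknowledgment, so delivery is not guaranteed. The address and port are hypothetical.

    import socket

    ADDR = ("127.0.0.1", 9999)     # hypothetical receiver address and port

    # Receiver: a connectionless datagram socket bound to a port.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(ADDR)

    # Sender: no connect()/handshake is needed; just send the datagram.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"status update", ADDR)

    data, source = receiver.recvfrom(1024)   # would block forever if the datagram were lost
    print(data, "from", source)

    sender.close()
    receiver.close()

Compare this with the earlier TCP sketch: TCP needed connect() and accept() before any data could flow, while UDP simply fires the datagram and moves on.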

The Port Numbers

TCP and UDP must use port numbers to communicate with the upper layers, because they're what keeps
track of different conversations crossing the network simultaneously. These port numbers identify the
source and destination application or process in the TCP segment. There are 2^16 = 65,536 ports available.

 Well-known ports: The port numbers range from 0 to 1,023.


 Registered ports: The port numbers range from 1,024 to 49,151. Registered ports are used by
applications or services that need to have consistent port assignments.
 Dynamic or private ports: The port numbers range from 49,152 to 65,535. These ports are not
assigned to any protocol or service in particular and can be used for any service or application.

If a port is closed/blocked, you cannot communicate with the computer by the protocol using that port.
For example, if port 25 is blocked you cannot send mail. Firewalls by default block all ports. You should
know the port numbers of different protocols!!

TCP Ports                          UDP Ports
Protocol      Port Number          Protocol      Port Number
Telnet        23                   SNMP          161
SMTP          25                   TFTP          69
HTTP          80                   DNS           53
FTP           21                   POP3          110
DNS           53                   DHCP          68
HTTPS         443                  NTP           123
SSH           22                   RPC           530
Table 1.1. Sample TCP and UDP port numbers from well-known category
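As a small, hedged illustration, Python's standard socket module can look up the well-known port assigned to a service name from the local services database, which is a handy way to double-check entries like those in the table above.

    import socket

    # Look up well-known port numbers for a few example services.
    for service, proto in (("http", "tcp"), ("ssh", "tcp"), ("smtp", "tcp"), ("snmp", "udp")):
        try:
            print(service, proto, socket.getservbyname(service, proto))
        except OSError:
            print(service, proto, "not found in the local services database")
    # Typically prints: http tcp 80, ssh tcp 22, smtp tcp 25, snmp udp 161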

1.3.3 Application Layer
The application layer in TCP/IP is equivalent to the combined session, presentation, and application
layers in the OSI model, and many protocols are defined at this layer.

The Process/Application Layer Protocols

 Telnet: allows a user on a remote client machine, called the Telnet client, to access the resources
of another machine, the Telnet server. Telnet makes client machine appear as though it were a
terminal directly attached to the server.
 File Transfer Protocol (FTP): is the protocol that actually lets us transfer files, and it can
accomplish this between any two machines using it. Usually users are subjected to authentication
before accessing an FTP server.
 Network File System (NFS): a protocol specializing in file sharing allowing two different types
of file systems to interoperate.
 Simple Mail Transfer Protocol (SMTP): uses a spooled, or queued, method of mail delivery.
 POP3 is used to receive mail.
 Simple Network Management Protocol (SNMP): collects and manipulates valuable network
information. This protocol stands as a watchdog over the network, quickly notifying managers of
any sudden turn of events.
 Domain Name Service (DNS): resolves hostnames—specifically, Internet names, such as
www.wcu.edu.et, to the IP address 10.6.10.3 (a short lookup example follows this list).
 Dynamic Host Configuration Protocol (DHCP): gives IP addresses to hosts. It allows easier
administration and works well in small-to-even-very large network environments.
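As a quick, hedged illustration of DNS in action, the snippet below resolves a host name to an IP address using Python's standard library; the host name is the one used in the example above, and the result depends on the resolver your system actually uses.

    import socket

    hostname = "www.wcu.edu.et"     # example name used earlier in the text

    try:
        ip_address = socket.gethostbyname(hostname)   # DNS query via the OS resolver
        print(hostname, "resolves to", ip_address)
    except socket.gaierror as err:
        print("DNS lookup failed:", err)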

1.4 Philosophy of System Administration


1. What is Network Administration?

Network Administrators Focus on Computers Working Together. A Network Administrator's main
responsibilities include installing, configuring, and supporting an organization's local area network
(LAN), wide area network (WAN), Internet systems, and/or a segment of a network system. Daily job
duties may depend on the size of a company‘s network. For example, at a smaller company, a network
administrator may be directly responsible for performing updates and maintenance on network and IT
systems, as well as overseeing network switches and setting up and monitoring a virtual private network
(VPN). However, at a larger company, responsibilities may be more broad and managerial, such as
overseeing a team of IT specialists and working with network architects to make decisions about
equipment and hardware purchases and upgrades.

Network administration involves a wide array of operational tasks that help a network to run smoothly
and efficiently. Without network administration, it would be difficult for all but the smallest networks to
maintain network operations.

The main tasks associated with network administration include:

 Design, installation and evaluation of the network
 Execution and administration of regular backups
 Creation of precise technical documentation, such as network diagrams, network cabling
documents, etc.
 Provision for precise authentication to access network resources
 Provision for troubleshooting assistance
 Administration of network security, including intrusion detection

As you can easily guess, the exact definition of ―network administration‖ is hard to pin down. In a larger
enterprise, it would more often be strictly related to the actual network. Specifically, this would include
the management and maintenance of switches, routers, firewalls, VPN gateways, etc. In smaller
companies, the network administrator is often a jack-of-all trades and involved in the configuration of
databases, installation, maintenance and upgrading of software, management of user accounts and
security groups, desktop support, and sometimes even basic software development.

A network administrator is a person who is responsible for installing, updating, and configuring network
devices. They troubleshoot and maintain network devices, and work on routers, cabling, phone systems (VoIP),
switches, and firewalls.

1.4.1 What is System Administration?


System Administrators work directly with computer hardware and software. At the most basic level, the
difference between these two roles (between system and network administrators) is that a Network
Administrator oversees the network (a group of computers connected together), while a System
Administrator is in charge of the computer systems – all the parts that make a computer function. A
Computer Systems Administrator‘s responsibilities may include software and hardware installation and
upkeep, data recovery and backup, setup, and training on user accounts and maintenance of basic security
best practices.

As with Network Administrator positions, specific daily job duties may depend on the size and scope of a
company‘s computer systems. At smaller businesses, the System Administrator may handle all IT duties,
and thus maintain and update all computers as well as ensure data security and backup. Larger
corporations may divide system administrators‘ responsibilities into more specific sub-roles, therefore
resulting in specialized positions like database administrators or security administrators.

System administration refers to the management of one or more hardware and software systems. The task
is performed by a system administrator who monitors system health, monitors and allocates system
resources like disk space, performs backups, provides user access, manages user accounts, monitors
system security and performs many other functions.

System administration is a job done by IT experts for an organization. The job is to ensure that computer
systems and all related services are working well. The duties in system administration are wide ranging
and often vary depending on the type of computer systems being maintained, although most of them share
some common tasks that may be executed in different ways.

Common tasks include installation of new hardware or software, creating and managing user accounts,
maintaining computer systems such as servers and databases, and planning and properly responding to
system outages and various other problems. Other responsibilities may include light programing or
scripting to make the system work flows easier as well as training computer users and assistants.

A system administrator, on the other hand, is a person who is responsible for the reliable configuration and
operation of computer systems, especially multi-user computers such as servers. The system administrator
ensures the uptime, performance, resources, and security of the computers, and also installs and upgrades
hardware and software components. The system administrator maintains security policies, troubleshoots
problems, installs server operating systems, and works on/with servers and vendors.

Although the specifics of being a system administrator may change from platform to platform, there are
underlying themes that do not. These themes make up the philosophy of system administration. The
themes are:

 Automate everything
 Document everything
 Communicate as much as possible
 Know your resources
 Know your users
 Know your business
 Security cannot be an afterthought
 Plan ahead
 Expect the unexpected
 Backup and disaster recovery planning
 Patching

Automate Everything

Most system administrators are outnumbered — either by their users, their systems, or both. In many
cases, automation is the only way to keep up. In general, anything done more than once should be
examined as a possible candidate for automation. Here are some commonly automated tasks:

 Free disk space checking and reporting (a small example script follows below)


 Backups
 System performance data collection
 User account maintenance (creation, deletion, etc.)
 Business specific functions (pushing new data to a Web server, running monthly/quarterly/yearly
reports, etc.)

This list is by no means complete; the functions automated by system administrators are only limited by
an administrator‘s willingness to write the necessary scripts.
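As a hedged sketch of the first item in that list, the script below uses shutil.disk_usage from the Python standard library to report free disk space and flag any filesystem below a threshold; the mount points and the 10% threshold are arbitrary examples an administrator would adapt.

    import shutil

    MOUNT_POINTS = ["/", "/home", "/var"]   # example filesystems to watch
    THRESHOLD = 0.10                        # warn when less than 10% is free

    for mount in MOUNT_POINTS:
        try:
            usage = shutil.disk_usage(mount)
        except FileNotFoundError:
            continue                        # skip mount points that do not exist
        free_ratio = usage.free / usage.total
        status = "LOW" if free_ratio < THRESHOLD else "ok"
        print(f"{mount}: {usage.free // 2**30} GiB free ({free_ratio:.0%}) [{status}]")

A script like this would typically be run from a scheduler (for example, a daily cron job) and extended to mail a report, which is exactly the kind of repetitive task worth automating.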

Document Everything

If given the choice between installing a brand-new server and writing a procedural document on
performing system backups, the average system administrator would install the new server every time.
While this is not at all unusual, you must document what you do. Many system administrators put off
doing the necessary documentation for a variety of reasons:

What should you document? Here is a partial list:

 Hardware inventory: Maintain lists of all your physical and virtual servers with the following
details:

o OS: Linux or Windows, hypervisor with versions
o RAM: DIMM slots in physical servers
o CPU: Logical and virtual CPUs
o HDD: Type and size of hard disks
o External storage (SAN/NAS): Make and model of storage with management IP address
and interface IP address
o Open ports: Ports opened at the server end for incoming traffic
o IP address: Management and interface IP address with VLANs
o Engineering appliances: e.g., Exalogic, PureApp, etc.
 Software inventory:
o Configured applications: e.g., Oracle WebLogic, IBM WebSphere Application Server,
Apache Tomcat, Red Hat JBoss, etc.

 Third-party software: Any software not shipped with the installed OS


 License details
o Maintain license counts and details for physical servers and virtual servers (VMs),
including licenses for Windows, subscriptions for Linux OS, and the license limit of
hypervisor host.
 Policies: Policies are written to formalize and clarify the relationship you have with your user
community. They make it clear to your users how their requests for resources and/or assistance
are handled. The nature, style, and method of disseminating policies to your community varies
from organization to organization.
 Procedures: Procedures are any step-by-step sequence of actions that must be taken to
accomplish a certain task. Procedures to be documented can include backup procedures, user
account management procedures, problem reporting procedures, and so on. Like automation, if a
procedure is followed more than once, it is a good idea to document it.
 Changes: A large part of a system administrator's career revolves around making changes:
configuring systems for maximum performance, tweaking scripts, modifying configuration files,
and so on. All of these changes should be documented in some fashion. Otherwise, you could find
yourself being completely confused about a change you made several months earlier. Some
organizations use more complex methods for keeping track of changes, but in many cases a
simple revision history at the start of the file being changed is all that is necessary. At a
minimum, each entry in the revision history should contain:
o The name or initials of the person making the change
o The date the change was made
o The reason the change was made
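
As a minimal illustration of such a revision history, the entries below show the kind of header one might keep at the top of a changed configuration file or script; the names, dates, and reasons are invented for this example.

    # Revision history (newest entry first)
    # 2015-04-02  M.G.  Increased the client timeout from 30 to 60 seconds after repeated user complaints
    # 2015-03-15  A.E.  Initial configuration for the new departmental web server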

Backup and disaster recovery planning

Communicate with the backup team and provide them with the data and client priorities for backup. The recommended backup schedule for production servers is:

 Incremental backups: Daily, Monday to Friday


 Full backup: Saturday and Sunday

 Disaster recovery drills: Perform restoration mock drills with the backup team once a month (preferably), or quarterly if necessary, to ensure the data can be restored in case of an issue.
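
The weekday-based schedule above can be expressed in a few lines of Python; this is only a sketch of the decision logic, and the actual backup commands are left to whatever backup tooling is in use.

    import datetime

    def backup_level(today: datetime.date) -> str:
        # weekday(): Monday == 0 ... Sunday == 6, so 5 and 6 are the weekend days
        return "full" if today.weekday() >= 5 else "incremental"

    print("Today's backup level:", backup_level(datetime.date.today()))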

Patching

Operating system patches for known vulnerabilities must be implemented promptly. There are many types
and levels of patches, including:

 Security
 Critical
 Moderate

When a patch is released, check the bug or vulnerability details to see how it applies to your system (e.g.,
does the vulnerability affect the hardware in your system?), and take any necessary actions to apply the
patches when required. Make sure to cross-verify applications‘ compatibility with patches or upgrades.
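
As a hedged sketch of what this can look like on a dnf-based Linux distribution (for example RHEL or Fedora, run with root privileges; other platforms have their own equivalents such as apt, zypper, or Windows Update):

    import subprocess

    # List pending security advisories so they can be reviewed first.
    subprocess.run(["dnf", "updateinfo", "list", "--security"], check=True)

    # Apply only the security-related updates once application compatibility is verified.
    subprocess.run(["dnf", "upgrade", "--security", "-y"], check=True)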

Server hardening

Linux:

 Set a BIOS password: This prevents users from altering BIOS settings.
 Set a GRUB password: This stops users from altering the GRUB bootloader.
 Deny root access: Rejecting root access minimizes the probability of intrusions.
 Sudo users: Make sudo users and assign limited privileges to invoke commands.
 TCP wrappers: A tool for restricting access to network services based on the connecting host. Apply a rule for the SSH daemon to allow only trusted hosts to access the server, and deny all others. Apply similar rules for other services like FTP, SSH File Transfer Protocol (SFTP), etc.
 Firewalld/iptables: Configure firewalld and iptables rules for incoming traffic to the server. Specify the particular port, source IP, and destination IP, and allow, reject, or deny ICMP requests, etc., for the public and private zones.
 Antivirus: Install antivirus software and update virus definitions regularly.
 Secure and audit logs: Check the logs regularly and when required.

 Rotate the logs: Keep logs for a limited period of time (e.g., 7 days) to maintain sufficient disk space for flawless operation (a small Python sketch follows this list).
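
In practice the logrotate utility normally handles log rotation on Linux; the Python sketch below only illustrates the idea of enforcing a 7-day retention period, and the log directory is a hypothetical example.

    import os
    import time

    LOG_DIR = "/var/log/myapp"   # hypothetical application log directory
    MAX_AGE_DAYS = 7             # matches the 7-day retention mentioned above

    cutoff = time.time() - MAX_AGE_DAYS * 86400
    for name in os.listdir(LOG_DIR):
        path = os.path.join(LOG_DIR, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)      # reclaim disk space from logs older than the cutoff
            print("removed", path)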

Windows:

 Set a BIOS password: This prevents users from altering BIOS settings.
 Antivirus: Install antivirus software and update virus definitions regularly.
 Configure firewall rules: Prevent unauthorized parties from accessing your systems.
 Deny administrator login: Limit users‘ ability to make changes that could increase your
systems‘ vulnerabilities.

Use a syslog server

Configure a syslog server in the environment to keep records of system and application logs. In the event of an intrusion or issue, the sysadmin can then check both previous and real-time logs to diagnose and resolve the problem.
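
A minimal Python sketch of an application forwarding its log records to such a central server, using the standard library's SysLogHandler (the hostname is a placeholder; UDP port 514 is the traditional syslog port):

    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)

    logger.info("Application started")   # this record is forwarded to the central syslog server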

Communicate as Much as Possible

When it comes to your users, you can never communicate too much. Be aware that small system changes
you might think are practically unnoticeable could very well completely confuse the administrative
assistant in Human Resources.

Know Your Resources

System administration is mostly a matter of balancing available resources against the people and
programs that use those resources. Therefore, your career as a system administrator will be a short and
stress-filled one unless you fully understand the resources you have at your disposal. Some of the
resources are ones that seem pretty obvious:

 System resources, such as available processing power, memory, and disk space
 Network bandwidth
 Available money in the IT budget

Security Cannot be an Afterthought

No matter what you might think about the environment in which your systems are running, you cannot
take security for granted. Even standalone systems not connected to the Internet may be at risk (although
obviously the risks will be different from a system that has connections to the outside world). Therefore,
it is extremely important to consider the security implications of everything you do. The following list
illustrates the different kinds of issues you should consider:

 The nature of possible threats to each of the systems under your care
 The location, type, and value of the data on those systems
 The type and frequency of authorized access to the systems

While you are thinking about security, do not make the mistake of assuming that possible intruders will
only attack your systems from outside of your company. Many times, the perpetrator is someone within
the company. So the next time you walk around the office, look at the people.

1.5 Review Questions


1. Discuss the different types of networks based on different criterion.
2. Discuss the layers of TCP/IP protocol stack, with a discussion of an example of at least one
protocol at each layer.
3. What is the purpose of studying system administration?
4. Write the main activities of a Network Administrator.
5. Write the main activities of a System Administrator.
6. Discuss the difference between system administration and network administration.

UNIT - II

2. Workgroups
In computer networking, a workgroup is a collection of computers on a LAN that share common resources and responsibilities. Workgroup is Microsoft's term for a peer-to-peer LAN. Windows workgroups can be found in homes, schools and small businesses. Computers running Windows OSs in the same workgroup may share files, printers, or an Internet connection. Workgroup contrasts with domain, in which computers rely on centralized authentication.

A Windows workgroup is a group of standalone computers in a peer-to-peer network. Each computer in


the workgroup uses its own local accounts database to authenticate resource access. The computers in a
workgroup also do not have a common authentication process. The default networking environment for a clean Windows installation is the workgroup.

In general, a given Windows workgroup environment can contain many computers but works best with 15 or fewer computers. As the number of computers increases, a workgroup eventually becomes very difficult to administer and should be re-organized into multiple networks or set up as a client-server network.

The computers in a workgroup are considered peers because they are all equal and share resources among
each other without requiring a server. Since the workgroup doesn‘t share a common security and resource
database, users and resources must be defined on each computer. Joining a workgroup requires all participants to use a matching workgroup name; all Windows computers (Windows 7, 8 and 10) are automatically assigned to a default group named WORKGROUP (MSHOME in Windows XP). To access shared resources on other PCs within its group, a user must know the name of the workgroup that computer belongs to, plus the username and password of an account on the remote computer.

The main disadvantages of workgroups are:

 If a user account will be used to access resources on multiple machines, that account will need to be created on each of those machines; this requires that the same username and password be used.
 The low-security protocol used for authentication between nodes.
 Desktop computers have a fixed limit of 15 or fewer connections. Note that this is in reference to connections to an individual desktop.

One of the most common mistakes when setting up a peer-to-peer network is misspelling the workgroup
name on one of the computers. For example, suppose you decide that all the computers should belong to a
workgroup named MYGROUP. If you accidentally spell the workgroup name MYGRUOP for one of the
computers, that computer will be isolated in its own workgroup. If you can‘t locate a computer on your
network, the workgroup name is one of the first things to check.

2.1. Windows Workgroups vs Homegroups and Domains
2.1.1 Domain Controller
Windows domains support client-server local networks. A specially configured computer called Domain
Controller running a Windows Server operating system serves as a central server for all clients.
Windows domains can handle more computers than workgroups due to the ability to maintain centralized
resource sharing and access control. A client PC can belong either to a workgroup or to a Windows domain, but not both. Assigning a computer to the domain automatically removes it from the workgroup.
(see section 2.2 for more on Domain Controllers)

Microsoft HomeGroup

Microsoft introduced the HomeGroup concept in Windows 7. HomeGroups are designed to simplify the management of workgroups for administrators, particularly in home networks. Instead of requiring an administrator to manually set up shared user accounts on every PC, HomeGroup security settings can be managed through one shared login.

Joining a HomeGroup does not remove a PC from its Windows workgroup; the two sharing methods co-exist. Computers running versions of Windows older than Windows 7 (like XP and Vista), however, cannot be members of HomeGroups.

Other Computer Workgroup technologies

The open source software package Samba (which uses SMB technologies) allows Apple macOS, Linux and other Unix-based systems to join existing Windows workgroups. Apple originally developed AppleTalk to support workgroups on Macintosh computers but phased out this technology in the late 2000s in favor of newer standards like SMB.

Samba is free software that provides file and print services for various Microsoft Windows clients and
can integrate with a Microsoft Windows Server domain, either as a Domain Controller (DC) or as a
domain member. As of version 4, it supports Active Directory and Microsoft Windows NT domains.
Samba runs on most Unix-like systems, such as Linux, Solaris, AIX and the BSD variants, including
Apple‘s macOS Server, and macOS client (Mac OS X 10.2 and greater). It is standard on nearly all
distributions of Linux and is commonly included as a basic system service on other Unix-based operating
systems as well. Samba is released under the terms of the GNU General Public License. The name Samba
comes from SMB (Server Message Block), the name of the proprietary protocol used by the Microsoft
Windows network file system.

2.2. Domain Controllers


A domain controller (DC) is a server computer that responds to security authentication requests within a
computer network domain. It is a network server that is responsible for allowing end devices to access
shared domain resources. It authenticates users, stores user account information and enforces security
policy for a domain. It is most commonly implemented in Microsoft Windows environments (see below
about Windows Domain), where it is the centerpiece of the Windows Active Directory service. However,

non-Windows domain controllers can be established via identity management software such as Samba
(see the last paragraph of section 2.1).

Domain controllers are typically deployed as a cluster to ensure high-availability and maximize
reliability. In a Windows environment, one domain controller serves as the Primary Domain Controller (PDC) and all other servers promoted to domain controller status in the domain serve as Backup Domain Controllers (BDCs). In Unix-based environments, one machine serves as the master domain
controller and others serve as replica domain controllers, periodically replicating database information
from the main domain controller and storing it in a read-only format.

On Microsoft Servers, a domain controller (DC) is a server computer that responds to security
authentication requests (logging in, etc.) within a Windows domain. A Windows domain is a form of a
computer network in which all user accounts, computers, printers and other security principals, are
registered with a central database located on one or more clusters of central computers known as domain
controllers. A domain is a concept introduced in Windows NT whereby a user may be granted access to
a number of computer resources with the use of a single username and password combination. You must set up at least one Domain Controller in every Windows domain. Figure 2.1 shows the Domain Controller in a Windows domain.

Figure 2.1. Domain Controller

Windows Server can be one of three kinds: Active Directory "domain controllers" (ones that provide identity and authentication), Active Directory "member servers" (ones that provide complementary services such as file repositories and schema) and Windows Workgroup "stand-alone servers". The term "Active Directory Server" is sometimes used by Microsoft as synonymous with "Domain Controller", but the term is discouraged.

2.2.1 System requirements for a Domain Controller


This section outlines the minimum hardware requirements to run the latest Windows Server available at the time this resource was prepared (i.e., Windows Server 2022). If your computer does not meet the minimum requirements, you will not be able to install the server correctly. Actual requirements will vary based on
your system configuration and the applications and features you install.

Processor

Processor performance depends not only on the clock frequency of the processor, but also on the number
of processor cores and the size of the processor cache. The following are the minimum processor
requirements for the product:

 1.4 GHz 64-bit processor


 Compatible with x64 instruction set

RAM

The following are the estimated minimum RAM requirements for the product:

 512 MB (2 GB for Server with Desktop Experience installation option)

Storage controller and disk space requirements

Computers that run Windows Server must include a storage adapter that is compliant with the PCI
Express architecture specification. Persistent storage devices on servers classified as hard disk drives must
not be PATA. Windows Server does not allow ATA/PATA/IDE/EIDE for boot, page, or data drives. The
estimated minimum disk space requirement for the system partition is 32 GB.

Network adapter requirements

Network adapters used with this latest release should include an Ethernet adapter capable of at least 1
gigabit per second throughput.

The following is a list of minimum system requirements for older versions of Windows Servers:

Component                Windows Server 2003 (32-bit)         Windows Server 2008 (32-bit)               Windows Server 2008 R2 (64-bit)
Computer and processor   Computer with a 133-MHz processor    Computer with a minimum 1 GHz processor    x64, 1.4 GHz if single core, 1.3 GHz if multi-core
Memory                   128 MB RAM                           512 MB RAM                                 512 MB RAM
Hard disk                1.5 GB available hard-disk space     20 GB available hard-disk space            32 GB available hard-disk space
Table 2.1. System requirements for a domain controller

2.3. LDAP & Windows Active Directory


2.3.1 Lightweight Directory Access Protocol (LDAP)
The Lightweight Directory Access Protocol (LDAP) is an open, vendor-neutral, industry standard
application protocol for accessing and maintaining distributed directory information services over an
Internet Protocol (IP) network. Directory services play an important role in developing intranet and
Internet applications by allowing the sharing of information about users, systems, networks, services, and
applications throughout the network. As examples, directory services may provide any organized set of

records, often with a hierarchical structure, such as a corporate email directory. Similarly, a telephone
directory is a list of subscribers with an address and a phone number.

A common use of LDAP is to provide a central place to store usernames and passwords. This allows
many different applications and services to connect to the LDAP server to validate users.

In the early engineering stages of LDAP, it was known as Lightweight Directory Browsing Protocol, or
LDBP. It was renamed with the expansion of the scope of the protocol beyond directory browsing and
searching, to include directory update functions. It was given its Lightweight name because it was not as
network intensive as its predecessors and thus was more easily implemented over the Internet due to its
relatively modest bandwidth usage.

LDAP has influenced subsequent Internet protocols, including later versions of X.500, XML Enabled
Directory (XED), Directory Service Markup Language (DSML), Service Provisioning Markup Language
(SPML), and the Service Location Protocol (SLP). It is also used as the basis for Microsoft‘s Active
Directory.

Protocol overview

A client starts an LDAP session by connecting to an LDAP server, called a Directory System Agent
(DSA), by default on TCP and UDP port 389, or on port 636 for LDAPS (LDAP over TLS/SSL, see
below). The client then sends an operation request to the server, and a server sends responses in return.
With some exceptions, the client does not need to wait for a response before sending the next request, and
the server may send the responses in any order. All information is transmitted using Basic Encoding
Rules (BER).

The client may request the following operations:

 StartTLS – use LDAPv3 Transport Layer Security (TLS) extension for a secure connection
 Bind – authenticate and specify LDAP protocol version
 Search – search for and/or retrieve directory entries
 Compare – test if a named entry contains a given attribute value
 Add a new entry
 Delete an entry
 Modify an entry
 Modify Distinguished Name (DN) – move or rename an entry
 Abandon – abort a previous request
 Extended Operation – generic operation used to define other operations
 Unbind – close the connection (not the inverse of Bind)
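
As a sketch of the Bind, Search, and Unbind operations listed above, using the third-party Python ldap3 package (the server name, credentials, and directory entries are placeholders for illustration):

    from ldap3 import Server, Connection, ALL

    server = Server("ldap.example.com", port=389, get_info=ALL)
    conn = Connection(server, user="cn=admin,dc=example,dc=com", password="secret")

    if conn.bind():                                    # Bind: authenticate to the DSA
        conn.search(search_base="dc=example,dc=com",   # Search: retrieve directory entries
                    search_filter="(uid=jsmith)",
                    attributes=["cn", "mail"])
        for entry in conn.entries:
            print(entry)
        conn.unbind()                                  # Unbind: close the connection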

A common alternative method of securing LDAP communication is using an SSL tunnel. The default port
for LDAP over SSL is 636. The use of LDAP over SSL was common in LDAP Version 2 (LDAPv2) but
it was never standardized in any formal specification. This usage has been deprecated along with
LDAPv2, which was officially retired in 2003.

The protocol provides an interface with directories as follows:

 An entry consists of a set of attributes.


 An attribute has a name (an attribute type or attribute description) and one or more values.

 Each entry has a unique identifier: its Distinguished Name (DN). This consists of its Relative
Distinguished Name (RDN), constructed from some attribute(s) in the entry, followed by the
parent entry‘s DN. Think of the DN as the full file path and the RDN as its relative filename in
its parent folder (e.g. if /foo/bar/myfile.txt were the DN, then myfile.txt would be the RDN).

A DN may change over the lifetime of the entry, for instance, when entries are moved within a tree. To
reliably and unambiguously identify entries, a UUID might be provided in the set of the entry‘s
operational attributes.
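
To make the DN/RDN relationship concrete, the naive Python split below separates a hypothetical DN into its RDN and its parent DN; note that a real parser must also handle commas that are escaped inside attribute values.

    dn = "cn=John Doe,ou=Sales,dc=example,dc=com"   # hypothetical entry
    rdn, parent_dn = dn.split(",", 1)               # naive split; ignores escaped commas

    print("RDN:      ", rdn)         # cn=John Doe
    print("Parent DN:", parent_dn)   # ou=Sales,dc=example,dc=com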

2.3.2 Windows Active Directory


Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. It is
included in most Windows Server operating systems as a set of processes and services. Initially, it was
used only for centralized domain management. However, it eventually became an umbrella title for a
broad range of directory-based identity-related services.

A server running the Active Directory Domain Service (AD DS) role is called a domain controller. It
authenticates and authorizes all users and computers in a Windows domain type network, assigning and
enforcing security policies for all computers, and installing or updating software. For example, when a
user logs into a computer that is part of a Windows domain, Active Directory checks the submitted
password and determines whether the user is a system administrator or normal user. Also, it allows
management and storage of information, provides authentication and authorization mechanisms, and
establishes a framework to deploy other related services: Certificate Services, AD Federation Services,
Lightweight Directory Services, and Rights Management Services. Active Directory uses LDAP versions
2 and 3, Microsoft‘s version of Kerberos, and DNS.

Microsoft previewed Active Directory in 1999, released it first with Windows 2000 Server edition, and
revised it to extend functionality and improve administration in Windows Server 2003. Additional
improvements came with subsequent versions of Windows Server. In Windows Server 2008, additional
services were added to Active Directory, such as Active Directory Federation Services. The part of the
directory in charge of management of domains, which was previously a core part of the operating system,
was renamed Active Directory Domain Services (AD DS) and became a server role like the others. Active Directory became the umbrella title for a broader range of directory-based services; everything related to identity was brought under Active Directory's banner.

2.3.3 Active Directory Services


Active Directory Services consist of multiple directory services. The best known is Active Directory
Domain Services, commonly abbreviated as AD DS or simply AD.

Domain Services (DS)

AD DS is the foundation stone of every Windows domain network. It stores information about members
of the domain, including devices and users, verifies their credentials and defines their access rights. The
server running this service is called a domain controller. A domain controller is contacted when a user
logs into a device, accesses another device across the network, or runs a line-of-business Metro-style app
sideloaded into a device.

Other Active Directory services (excluding LDS, which is discussed below) as well as most of Microsoft
server technologies rely on or use Domain Services; examples include Group Policy, Encrypting File
System, BitLocker, Domain Name Services, Remote Desktop Services, Exchange Server and SharePoint
Server.

Lightweight Directory Services (LDS)

Active Directory Lightweight Directory Services, formerly known as AD Application Mode (ADAM), is
an implementation of LDAP protocol for AD DS. AD LDS runs as a service on Windows Server. AD
LDS shares the code base with AD DS and provides the same functionality, including an identical API,
but does not require the creation of domains or domain controllers. It provides a Data Store for storage of
directory data and a Directory Service with an LDAP Directory Service Interface. Unlike AD DS,
however, multiple AD LDS instances can run on the same server.

Certificate Services (CS)

AD Certificate Services (AD CS) establishes an on-premises public key infrastructure. It can create,
validate and revoke public key certificates for internal uses of an organization. These certificates can be
used to encrypt files, emails, and network traffic (when used by virtual private networks or IPSec
protocol). AD CS requires an AD DS infrastructure.

Federation Services (FS)

AD Federation Services (AD FS) is a single sign-on service. With an AD FS infrastructure in place, users
may use several web-based services (e.g. Internet forum, blog, online shopping, webmail) or network
resources using only one set of credentials stored at a central location, as opposed to having to

be granted a dedicated set of credentials for each service. AD FS's purpose is an extension of that of AD DS: the latter (AD DS) enables users to authenticate with and use the devices that are part of the same network, using one set of credentials. The former (AD FS) enables them to use the same set of credentials in a different network.

As the name suggests, AD FS works based on the concept of federated identity. AD FS requires an AD
DS infrastructure, although its federation partner may not.

Rights Management Services (RMS)

AD Rights Management Services (AD RMS) is a server software for information rights management
shipped with Windows Server. It uses encryption and a form of selective functionality denial for limiting
access to documents such as corporate e-mails, Microsoft Word documents, and web pages, and the
operations authorized users can perform on them.

2.3.4 Logical Structure


As a directory service, an Active Directory instance consists of a database and corresponding executable
code responsible for servicing requests and maintaining the database. The executable part, known as
Directory System Agent, is a collection of Windows services and processes that run on Windows 2000
and later. Objects in Active Directory databases can be accessed via LDAP, ADSI (a component object
model interface), messaging API and Security Accounts Manager services.

Figure 2.2. Sample Network Diagram to indicate a Domain

Objects

Active Directory structures are arrangements of information about objects. The objects fall into two broad
categories: resources (e.g., printers) and security principals (user or computer accounts and groups).
Security principals are assigned unique security identifiers (SIDs).

Each object represents a single entity—whether a user, a computer, a printer, or a group—and its
attributes. Certain objects can contain other objects. An object is uniquely identified by its name and has a
set of attributes—the characteristics and information that the object represents— defined by a schema,
which also determines the kinds of objects that can be stored in Active Directory.

The schema object lets administrators extend or modify the schema when necessary. However, because
each schema object is integral to the definition of Active Directory objects, deactivating or changing these
objects can fundamentally change or disrupt a deployment. Schema changes automatically

propagate throughout the system. Once created, an object can only be deactivated—not deleted. Changing
the schema usually requires planning.

Forests, trees, and domains

The Active Directory framework that holds the objects can be viewed at a number of levels. The forest,
tree, and domain are the logical divisions in an Active Directory network.

Within a deployment, objects are grouped into domains. The objects for a single domain are stored in a
single database (which can be replicated). Domains are identified by their DNS name structure, the
namespace. A domain is defined as a logical group of network objects (computers, users, devices) that
share the same Active Directory database.

A tree is a collection of one or more domains and domain trees in a contiguous namespace, and is linked
in a transitive trust hierarchy.

At the top of the structure is the forest. A forest is a collection of trees that share a common global
catalog, directory schema, logical structure, and directory configuration. The forest represents the security
boundary within which users, computers, groups, and other objects are accessible.

Organizational Units

The objects held within a domain can be grouped into organizational units (OUs). OUs can provide
hierarchy to a domain, ease its administration, and can resemble the organization‘s structure in managerial
or geographical terms. Microsoft recommends using OUs rather than domains for structure and to
simplify the implementation of policies and administration. The OU is the recommended level at which to
apply group policies, which are Active Directory objects formally named group policy objects (GPOs),
although policies can also be applied to domains or sites (see below). The OU is the level at which
administrative powers are commonly delegated, but delegation can be performed on individual objects or
attributes as well.

Organizational units (OUs) do not each have a separate namespace. As a consequence, for compatibility with legacy NetBIOS implementations, user accounts with an identical account name are not allowed within the same domain even if the account objects are in separate OUs. This is because the account name, a user object attribute, must be unique within the domain. However, two users in different OUs can have the same common name (CN), the name under which they are stored in the directory itself, such as "fred.staff-ou.domain" and "fred.student-ou.domain", where "staff-ou" and "student-ou" are the OUs.

Note:
The reason for lack of duplicate names through hierarchical directory placement is that Microsoft
primarily relies on the principles of NetBIOS (i.e. a flat-namespace method of network object
management). Allowing for duplication of object names in the directory, or completely removing the use
of NetBIOS names, would prevent backward compatibility with legacy software and equipment.
However, disallowing duplicate object names in this way is a violation of the LDAP RFCs on which
Active Directory is supposedly based.

As the number of users in a domain increases, the duplicate naming issue becomes even more complicated. Workarounds include adding a digit to the end of the username. Alternatives include creating a separate ID system of unique user ID numbers to use as account names in place of actual users' names, and allowing users to nominate their preferred word sequence within an acceptable use policy.

Because duplicate usernames cannot exist within a domain, account name generation poses a significant
challenge for large organizations that cannot be easily subdivided into separate domains, such as students
in a public school system or university who must be able to use any computer across the network.

Physical Structure

Sites are physical (rather than logical) groupings defined by one or more IP subnets. AD also holds the
definitions of connections, distinguishing low-speed (e.g., WAN, VPN) from high-speed (e.g., LAN)
links. Site definitions are independent of the domain and OU structure and are common across the forest.
Sites are used to control network traffic generated by replication and also to refer clients to the nearest
domain controllers (DCs).

Physically, the Active Directory information is held on one or more peer domain controllers (DCs). Each
DC has a copy of the AD. Servers joined to AD that are not domain controllers are called Member
Servers. A subset of objects in the domain partition replicate to domain controllers that are configured as
global catalogs. Global catalog (GC) servers provide a global listing of all objects in the

forest. Global Catalog servers replicate to themselves all objects from all domains and, hence, provide a
global listing of objects in the forest. However, to minimize replication traffic and keep the GC‘s database
small, only selected attributes of each object are replicated. This is called the partial attribute set (PAS).

Replication

Active Directory synchronizes changes using multi-master replication. Replication by default is 'pull' rather than 'push', meaning that replicas pull changes from the server where the change was effected. The
Knowledge Consistency Checker (KCC) creates a replication topology of site links using the defined
sites to manage traffic. Intra-site replication is frequent and automatic as a result of change notification,
which triggers peers to begin a pull replication cycle. Inter-site replication intervals are typically less
frequent and do not use change notification by default, although this is configurable and can be made
identical to intra-site replication. Replication of Active Directory uses Remote Procedure Calls (RPC)
over IP (RPC/IP).

2.3.5 Implementation
In general, a network utilizing Active Directory has more than one licensed Windows server computer.
Backup and restore of Active Directory is possible for a network with a single domain controller, but
Microsoft recommends more than one domain controller to provide automatic failover protection of the
directory. Domain controllers are also ideally single-purpose for directory operations only, and should not
run any other software or role.

Certain Microsoft products such as SQL Server and Exchange can interfere with the operation of a
domain controller, necessitating isolation of these products on additional Windows servers. Combining
them can make configuration or troubleshooting of either the domain controller or the other installed
software more difficult. A business intending to implement Active Directory is therefore recommended to
purchase a number of Windows server licenses, to provide for at least two separate domain controllers,
and optionally, additional domain controllers for performance or redundancy, a separate file server, a
separate Exchange server, a separate SQL Server, and so forth to support the various server roles.

Physical hardware costs for the many separate servers can be reduced through the use of virtualization,
although for proper failover protection, Microsoft recommends not running multiple virtualized domain
controllers on the same physical hardware.

2.3.6 Trusting
To allow users in one domain to access resources in another, Active Directory uses trusts. Trusts inside a
forest are automatically created when domains are created. The forest sets the default boundaries of trust,
and implicit, transitive trust is automatic for all domains within a forest.

Terminology

 One-way trust: One domain allows access to users on another domain, but the other domain does not allow access to users on the first domain.
 Two-way trust: Two domains allow access to users on both domains.
 Trusted domain: The domain that is trusted; whose users have access to the trusting domain.
 Transitive trust: A trust that can extend beyond two domains to other trusted domains in the forest.
 Intransitive trust: A one-way trust that does not extend beyond two domains.
 Explicit trust: A trust that an admin creates. It is not transitive and is one way only.
 Cross-link trust: An explicit trust between domains in different trees or in the same tree when a descendant/ancestor (child/parent) relationship does not exist between the two domains.
 Shortcut: Joins two domains in different trees. Transitive, one- or two-way.
 Forest trust: Applies to the entire forest. Transitive, one- or two-way.
 Realm: Can be transitive or nontransitive (intransitive), one- or two-way.
 External: Connects to other forests or non-AD domains. Nontransitive, one- or two-way.
 PAM trust: A one-way trust used by Microsoft Identity Manager from a (possibly low-level) production forest to a (Windows Server 2016 functionality level) 'bastion' forest, which issues time-limited group memberships.

2.3.7 Management solutions


Microsoft Active Directory management tools include:

 Active Directory Administrative Center (introduced with Windows Server 2012 and above)
 Active Directory Users and Computers
 Active Directory Domains and Trusts
 Active Directory Sites and Services
 ADSI Edit
 Local Users and Groups
 Active Directory Schema snap-ins for Microsoft Management Console (MMC)
 SysInternals ADExplorer

These management tools may not provide enough functionality for efficient workflow in large
environments. Some third-party solutions extend the administration and management capabilities. They
provide essential features for more convenient administration processes, such as automation, reports,
integration with other services, etc.

2.3.8 Unix integration


Varying levels of interoperability with Active Directory can be achieved on most Unix-like operating
systems (including Unix, Linux, Mac OS X or Java and Unix-based programs) through standards-
compliant LDAP clients, but these systems usually do not interpret many attributes associated with
Windows components, such as Group Policy and support for one-way trusts.

Third parties offer Active Directory integration for Unix-like platforms, including:

 PowerBroker Identity Services – allows a non-Windows client to join Active Directory
 ADmitMac (Thursby Software Systems)
 Samba – can act as a domain controller

Administration (querying, modifying, and monitoring) of Active Directory can be achieved via many
scripting languages, including PowerShell, VBScript, JScript/JavaScript, Perl, Python, and Ruby. Free
and non-free AD administration tools can help to simplify and possibly automate AD management tasks.
Since October 2017 Amazon AWS offers integration with Microsoft Active Directory.

2.4 Review Questions


 Discuss the difference between Workgroup and HomeGroup.
 What are the system requirements of a domain controller?
 Discuss some of the Active Directory services.
 To allow a user from one domain to use services in another domain, Active Directory uses trusts. Discuss the different terminologies used in trusting.
 Discuss the difference between forests, trees and domains.
 Discuss the logical and physical structure of domains.

UNIT III

3 USERS AND CAPABILITIES


A user account is a collection of settings and information that tells Windows which files and folders you can access, what you can do on your computer, what your preferences are, and what network resources you can access when connected to a network.

The user account allows you to authenticate to Windows or any other operating system so that you are
granted authorization to use them. Multi-user operating systems such as Windows don‘t allow a user to
use them without having a user account.

A user account in Windows is characterized by the following attributes:

 User name: the name you are giving to that account.


 Password: the password associated with the user account (in Windows 7 or older versions you
can also use blank passwords).
 User group: a collection of user accounts that share the same security rights and permissions. A
user account must be a member of at least one user group.
 Type: all user accounts have a type which defines their permissions and what they can do in
Windows.

Administrator: The ―Administrator‖ user account has complete control over the PC. He or she can
install anything and make changes that affect all users of that PC.

Standard: The ―Standard‖ user account can only use the software that‘s already installed by the
administrator and change system settings that don‘t affect other users.

Guest: The ―Guest‖ account is a special type of user account that has the name Guest and no password.
This is only for users that need temporary access to the PC. This user can only use the software that‘s
already installed by the administrator and cannot make any changes to system settings.

All user accounts have specific capabilities, privileges, and rights. When you create a user account, you
can grant the user specific capabilities by making the user a member of one or more groups. This gives
the user the capabilities of these groups. You then assign additional capabilities by making a user a
member of the appropriate groups or withdraw capabilities by removing a user from a group.

An important part of an administrator‘s job is being able to determine and set permissions, privileges, and
logon rights as necessary. Although you can‘t change a group‘s built-in capabilities, you can change a
group‘s default privileges and logon rights. For example, you could revoke network access to a computer
by removing a group‘s right to access the computer from the network.

3.1 What are File & Folder Permissions?
Permissions are a method for assigning access rights to specific user accounts and user groups. Through
the use of permissions, Windows defines which user accounts and user groups can access which files and
folders, and what they can do with them. To put it simply, permissions are the operating system‘s way of
telling you what you can or cannot do with a file or folder.

On the Windows operating system, to learn the permissions of any folder, right-click on it and select "Properties." In the Properties window, go to the Security tab. In the "Group or user names" section you will see all the user accounts and user groups that have permissions to that folder. If you select a group or a user account, you can then see its assigned permissions in the "Permissions for Users" section.

In Windows, a user account or a user group can receive one of the following permissions to any file or
folder:

 Read: allows the viewing and listing of a file or folder. When viewing a folder, you can view all
its files and subfolders.

 Write: allows writing to a file or adding files and subfolders to a folder.


 List folder contents: this permission can be assigned only to folders. It permits the viewing and
listing of files and subfolders, as well as executing files that are found in that folder.
 Read & execute: permits the reading and accessing of a file‘s contents as well as its execution.
When dealing with folders, it allows the viewing and listing of files and subfolders, as well as the
execution of files.
 Modify: when dealing with files, it allows their reading, writing and deletion. When dealing with
folders, it allows the reading and writing of files and subfolders, plus the deletion of the folder.
 Full control: it allows reading, writing, changing and deleting of any file and subfolder.

Generally, files inherit the permissions of the folder where they are placed, but users can also define specific permissions that are assigned only to a specific file. To make your computing life simpler, it is best to edit permissions only at a folder level.
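
Folder-level permissions such as these can also be granted from a script. The sketch below drives the built-in icacls utility through Python's subprocess module; the folder and group names are invented for illustration, and the exact permission string should be adapted to your own requirements.

    import subprocess

    folder = r"C:\Reports"   # hypothetical shared folder
    group = "SalesUsers"     # hypothetical local or domain group

    # Grant the group Modify (M) rights, inherited by subfolders (CI) and files (OI).
    subprocess.run(["icacls", folder, "/grant", f"{group}:(OI)(CI)M"], check=True)

    # Display the resulting access control list for verification.
    subprocess.run(["icacls", folder], check=True)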

Assigning User Rights

The most efficient way to assign user rights is to make the user a member of a group that already has the
right. In some cases, however, you might want a user to have a particular right but not have all the other
rights of the group. One way to resolve this problem is to give the user the rights directly. Another way to
resolve this is to create a special group for users that need the right. This is the approach used with the
Remote Desktop Users group, which was created by Microsoft to grant the Allow Logon Through Terminal Services right to groups of users.

You assign user rights through the Local Policies node of Group Policy. Local policies can be set on a
per-computer basis using a computer‘s local security policy or on a domain or OU basis through an
existing group policy for the related domain or OU. When you do this, the local policies apply to all
accounts in the domain or OU.

3.2. Policy Tools & Roaming Profiles


 What is a roaming profile?

A Windows profile is a set of files that contains all settings of a user including per-user configuration
files and registry settings. In an Active Directory or NT4 domain, you can configure a user's profile to be stored on a server. This enables the user to log on to different Windows domain members and use the
same settings.

When using roaming user profiles, a copy of the profile is downloaded from the server to the Windows domain member when a user logs in. Until the user logs out, all settings are stored and updated in the
local copy. During the log out, the profile is uploaded to the server.

Assigning a Roaming Profile to a User

Depending on the Windows version, Windows uses different folders to store the roaming profile of a user.
However, when you set the profile path for a user, you always set the path to the folder without any
version suffix. For example:

\\server\profiles\user_name

A roaming user profile is a file synchronization concept in the Windows NT family of operating systems
that allows users with a computer joined to a Windows domain to log on to any computer on the same
domain and access their documents and have a consistent desktop experience, such as applications
remembering toolbar positions and preferences, or the desktop appearance staying the

same, while keeping all related files stored locally so as not to depend continuously on a fast and reliable network connection to a file server.

All Windows operating systems since Windows NT 3.1 are designed to support roaming profiles.
Normally, a standalone computer stores the user‘s documents, desktop items, application preferences, and
desktop appearance on the local computer in two divided sections, consisting of the portion that could
roam plus an additional temporary portion containing items such as the web browser cache. The Windows
Registry is similarly divided to support roaming; there are System and Local Machine hives that stay on
the local computer, plus a separate User hive (HKEY CURRENT USER) designed to be able to roam
with the user profile.

When a roaming user is created, the user‘s profile information is instead stored on a centralized file server
accessible from any network-joined desktop computer. The login prompt on the local computer checks to
see if the user exists in the domain rather than on the local computer; no preexisting account is required
on the local computer. If the domain login is successful, the roaming profile is copied from the central file
server to the desktop computer, and a local account is created for the user.

When the user logs off from the desktop computer, the user‘s roaming profile is merged from the local
computer back to the central file server, not including the temporary local profile items. Because this is a
merge and not a move/delete, the user‘s profile information remains on the local computer in addition to
being merged to the network.

When the user logs in on a second desktop computer, this process repeats, merging the roaming profile
from the server to the second desktop computer, and then merging back from the desktop to the server
when the user logs off.

When the user returns to the first desktop computer and logs in, the roaming profile is merged with the
previous profile information, replacing it. If profile caching is enabled, the server is capable of merging
only the newest files to the local computer, reusing the existing local files that have not changed since the
last login, and thereby speeding up the login process.

Windows stores information about a particular user in a so-called profile. Some examples of the sort of
data that gets stored in a profile are (N.B. this list is not exhaustive):

 Application data and settings


 The "Documents"/"My Documents" folder
 The ―Downloads‖ folder, which is where your internet browser may save to by default
 Files stored on your Desktop
 Directories you create under c:\users\[your-username]

Members of some groups in the department have a roaming profile. This means that the master copy of
the profile is stored on a fileserver. When you log in to a Windows computer, the contents of your profile
will be synchronized from the fileserver to the local computer. When you log out of the computer, any
changes to the profile are then synchronized back to the server. Instructions for checking whether or not
you have a roaming profile are available.

There are two main reasons why a roaming profile might be useful in the department. Firstly, because the
contents of the profile are stored centrally, whenever you log on to any computer in the department you
will have the same application data and settings (e.g., internet browser bookmarks, preferences in
Microsoft Office etc.).

Secondly, because the master copy of your roaming profile is stored on a Departmentally-managed
fileserver, all data stored within it is automatically backed up.

What are the main differences between roaming and local profiles?

Windows roaming and local profiles are similar in that they both store Windows user settings and data. A
local profile is one that is stored directly on the computer. The main advantage to using a local profile is
that the profile is accessible even when the computer is disconnected from the network. A major
drawback of a local profile is that the user profile data is not being automatically backed up by the server.
Since most users rarely back up their computers, if a hard drive fails, any data that is stored within local
profiles on that machine would be lost.

Roaming profiles are stored on a server and can be accessed by logging into any computer on the
network. In a roaming profile, when a user logs onto the network, his/her profile is copied from the server
to the user‘s desktop. When the user logs off of their computer, the profile (including any

changes that the user might have made) is copied back to the server. A major drawback of roaming
profiles is that they can slow down the network. Windows user profiles often become very large as the
user profile data continues to grow. If you have a large roaming profile, the login and logoff times may
take a significant amount of time.

The solution to this problem is to use folder redirection with roaming profiles. Folder redirection allows
specific folders (such as the Desktop and Documents folder) to be permanently stored on the server.
Doing so eliminates the need for the redirected folder to be copied as a part of the logon and logoff
processes.

In summary, for a hassle-free network experience one should choose the default local profile. However, if you need roaming profiles enabled, configuring and deploying them together with folder redirection gives you the best of both worlds.

3.3. Advanced Concepts I

 The Registry

The Windows Registry is a hierarchical database that stores low-level settings for the Microsoft
Windows operating system and for applications that opt to use the registry. The kernel, device drivers,
services, Security Accounts Manager, and user interface can all use the registry. The registry also allows
access to counters for profiling system performance.

In other words, the registry or Windows Registry contains information, settings, options, and other values
for programs and hardware installed on all versions of Microsoft Windows operating systems. For
example, when a program is installed, a new subkey containing settings such as a program‘s location, its
version, and how to start the program, are all added to the Windows Registry.

When introduced with Windows 3.1, the Windows Registry primarily stored configuration information
for COM-based components. Windows 95 and Windows NT extended its use to rationalize and centralize
the information in the profusion of INI files, which held the configurations for individual programs, and
were stored at various locations. It is not a requirement for Windows applications to use the Windows
Registry. For example, .NET Framework applications use XML files for configuration, while portable
applications usually keep their configuration files with their executables.

Prior to the Windows Registry, .INI files stored each program‘s settings as a text file or binary file, often
located in a shared location that did not provide user-specific settings in a multi-user scenario. By
contrast, the Windows Registry stores all application settings in one logical repository (but a number of
discrete files) and in a standardized form. According to Microsoft, this offers several advantages over

.INI files. Since file parsing is done much more efficiently with a binary format, it may be read from or
written to more quickly than a text INI file. Furthermore, strongly typed data can be stored in the registry,
as opposed to the text information stored in .INI files. This is a benefit when editing keys manually using
regedit.exe, the built-in Windows Registry Editor. Because user-based registry settings are loaded from a
user-specific path rather than from a read-only system location, the registry allows multiple users to share
the same machine, and also allows programs to work for less privileged users. Backup and restoration is
also simplified as the registry can be accessed over a network connection for remote
management/support, including from scripts, using the standard set of APIs, as long as the Remote
Registry service is running and firewall rules permit this.

Keys and values

The registry contains two basic elements: keys and values. Registry keys are container objects similar to
folders. Registry values are non-container objects similar to files. Keys may contain values and subkeys.
Keys are referenced with a syntax similar to Windows‘ path names, using backslashes to indicate levels
of hierarchy. Keys must have a case insensitive name without backslashes.

The hierarchy of registry keys can only be accessed from a known root key handle (which is anonymous
but whose effective value is a constant numeric handle) that is mapped to the content of a registry key
preloaded by the kernel from a stored "hive", or to the content of a subkey within another root key, or
mapped to a registered service or DLL that provides access to its contained subkeys and values.

There are seven predefined root keys, traditionally named according to their constant handles defined in
the Win32 API, or by synonymous abbreviations (depending on applications):

 HKEY_LOCAL_MACHINE or HKLM
 HKEY_CURRENT_CONFIG or HKCC
 HKEY_CLASSES_ROOT or HKCR
 HKEY_CURRENT_USER or HKCU
 HKEY_USERS or HKU
 HKEY_PERFORMANCE_DATA (only in Windows NT, but invisible in the Windows Registry Editor)
 HKEY_DYN_DATA (only in Windows 9x, and visible in the Windows Registry Editor)

Like other files and services in Windows, all registry keys may be restricted by access control lists
(ACLs), depending on user privileges, or on security tokens acquired by applications, or on system
security policies enforced by the system (these restrictions may be predefined by the system itself, and
configured by local system administrators or by domain administrators). Different users, programs,
services or remote systems may only see some parts of the hierarchy or distinct hierarchies from the same
root keys.

Registry values are name/data pairs stored within keys. Registry values are referenced separately from
registry keys. Each registry value stored in a registry key has a unique name whose letter case is not
significant. The Windows API functions that query and manipulate registry values take value names
separately from the key path and/or handle that identifies the parent key. Registry values may contain

backslashes in their names, but doing so makes them difficult to distinguish from their key paths when
using some legacy Windows Registry API functions (whose usage is deprecated in Win32).
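
A short Python sketch of querying a registry value through the Windows API, using the standard-library winreg module (Windows only; the key shown is a well-known system key, and the printed edition string will vary from machine to machine):

    import winreg   # Windows-only standard-library module

    key_path = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        product, value_type = winreg.QueryValueEx(key, "ProductName")
        print("Windows edition:", product)
        print("Value type:", value_type)   # 1 == winreg.REG_SZ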

The terminology is somewhat misleading, as each registry key is similar to an associative array, where
standard terminology would refer to the name part of each registry value as a ―key‖. The terms are a
holdover from the 16-bit registry in Windows 3, in which registry keys could not contain arbitrary
name/data pairs, but rather contained only one unnamed value (which had to be a string). In this sense, the
Windows 3 registry was like a single associative array, in which the keys (in the sense of both ‗registry
key‘ and ‗associative array key‘) formed a hierarchy, and the registry values were all strings. When the
32-bit registry was created, so was the additional capability of creating multiple named values per key,
and the meanings of the names were somewhat distorted. For compatibility with the previous behavior,
each registry key may have a ―default‖ value, whose name is the empty string.

Each value can store arbitrary data with variable length and encoding, but which is associated with a
symbolic type (defined as a numeric constant) defining how to parse this data. The standard types are:

The table lists the type ID, the symbolic type name, and the meaning and encoding of the data stored in the registry value:

0   REG_NONE: No type (the stored value, if any)
1   REG_SZ: A string value, normally stored and exposed in UTF-16LE (when using the Unicode version of Win32 API functions), usually terminated by a NUL character
2   REG_EXPAND_SZ: An "expandable" string value that can contain environment variables, normally stored and exposed in UTF-16LE, usually terminated by a NUL character
3   REG_BINARY: Binary data (any arbitrary data)
4   REG_DWORD / REG_DWORD_LITTLE_ENDIAN: A DWORD value, a 32-bit unsigned integer (numbers between 0 and 4,294,967,295 [2^32 - 1]), little-endian
5   REG_DWORD_BIG_ENDIAN: A DWORD value, a 32-bit unsigned integer (numbers between 0 and 4,294,967,295 [2^32 - 1]), big-endian
6   REG_LINK: A symbolic link (UNICODE) to another registry key, specifying a root key and the path to the target key
7   REG_MULTI_SZ: A multi-string value, which is an ordered list of non-empty strings, normally stored and exposed in Unicode, each one terminated by a null character, the list being normally terminated by a second null character
8   REG_RESOURCE_LIST: A resource list (used by the Plug-n-Play hardware enumeration and configuration)
9   REG_FULL_RESOURCE_DESCRIPTOR: A resource descriptor (used by the Plug-n-Play hardware enumeration and configuration)
10  REG_RESOURCE_REQUIREMENTS_LIST: A resource requirements list (used by the Plug-n-Play hardware enumeration and configuration)
11  REG_QWORD / REG_QWORD_LITTLE_ENDIAN: A QWORD value, a 64-bit integer (either big- or little-endian, or unspecified), introduced in Windows 2000

Table 3.1. List of Standard Registry value types
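
The value types in Table 3.1 map directly onto constants exposed by Python's winreg module. The sketch below creates a hypothetical key under HKEY_CURRENT_USER (so no administrator rights are needed) and stores one REG_SZ and one REG_DWORD value; the key and value names are invented for illustration.

    import winreg   # Windows-only standard-library module

    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\ExampleApp") as key:
        winreg.SetValueEx(key, "InstallPath", 0, winreg.REG_SZ, r"C:\Program Files\ExampleApp")
        winreg.SetValueEx(key, "LaunchCount", 0, winreg.REG_DWORD, 42)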

When an administrator runs the command regedit, pre-defined keys called root keys, high-level keys or
HKEYS display in the left pane of the Registry Editor window. A pre-defined key and its nested subkeys
are collectively called a hive.

An application must open a key before it can add data to the registry, so having pre-defined keys that are
always open helps an application navigate the registry. Although pre-defined keys cannot be changed,
subkeys can be modified or deleted as long as the user has permission to do so and the subkey is not
located directly under a high-level key.

Before making any changes to registry keys, however, Microsoft strongly recommends that the registry be
backed up and that the end user only change values in the registry that they understand or have been told
to change by a trusted advisor. Keys and subkeys are referred to with a syntax that is similar to Windows
path names, using backslashes to indicate levels in the hierarchy. Incorrect edits to the registry can make
the computer inoperable.
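
A small, hedged VBScript sketch of this path syntax in practice, using the RegRead, RegWrite and RegDelete methods of the WScript.Shell object. The key HKCU\Software\ExampleVendor\ExampleApp and its values are hypothetical names used only for illustration:

' readwrite_reg.vbs - minimal sketch of reading and writing a registry value.
Dim shell
Set shell = CreateObject("WScript.Shell")

' Create (or overwrite) a REG_SZ value; intermediate keys are created as needed.
shell.RegWrite "HKCU\Software\ExampleVendor\ExampleApp\InstallDir", "C:\ExampleApp", "REG_SZ"

' Create a REG_DWORD value.
shell.RegWrite "HKCU\Software\ExampleVendor\ExampleApp\RunCount", 1, "REG_DWORD"

' Read a value back. A path ending in a backslash would refer to a key's default value instead.
WScript.Echo shell.RegRead("HKCU\Software\ExampleVendor\ExampleApp\InstallDir")

' Remove the value, then the (now nearly empty) key; a trailing backslash means "key".
shell.RegDelete "HKCU\Software\ExampleVendor\ExampleApp\InstallDir"
shell.RegDelete "HKCU\Software\ExampleVendor\ExampleApp\"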

Root keys

The keys at the root level of the hierarchical database are generally named by their Windows API
definitions, which all begin with "HKEY". They are frequently abbreviated to a three- or four-letter short
name starting with "HK" (e.g. HKCU and HKLM). Technically, they are predefined handles (with known
constant values) to specific keys that are either maintained in memory, or stored in hive files in the
local filesystem and loaded by the system kernel at boot time and then shared (with various access rights)
between all processes running on the local system, or loaded and mapped in all processes started in a user
session when the user logs on to the system.

The registry is a hierarchical database where information is presented on a number of levels. Hive keys
are on the first level. There are seven hive keys, as we discussed previously. Registry keys are on the
second level, subkeys are on the third, and then come values, much as a hierarchical file system is
organized into drives, directories, subdirectories and files.

The HKEY_LOCAL_MACHINE (local machine-specific configuration data) and


HKEY_CURRENT_USER (user-specific configuration data) nodes have a similar structure to each other;
user applications typically look up their settings by first checking for them in
"HKEY_CURRENT_USER\Software\Vendor's name\Application's name\Version\Setting name", and if
the setting is not found, they look instead in the same location under the HKEY_LOCAL_MACHINE
key. However, the converse may apply for administrator-enforced policy settings, where HKLM may
take precedence over HKCU. The Windows Logo Program has specific requirements for where different
types of user data may be stored, and requires that the concept of least privilege be followed so that
administrator-level access is not required to use an application.
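
As a hedged sketch of that per-user-then-per-machine lookup order, the fragment below tries HKCU first and falls back to HKLM; the vendor, application and setting names are hypothetical:

' lookup_setting.vbs - check HKCU first, then fall back to HKLM (sketch only).
Dim shell, setting
Set shell = CreateObject("WScript.Shell")

On Error Resume Next
setting = shell.RegRead("HKCU\Software\ExampleVendor\ExampleApp\1.0\Theme")
If Err.Number <> 0 Then          ' value not present for this user, try the machine-wide key
    Err.Clear
    setting = shell.RegRead("HKLM\Software\ExampleVendor\ExampleApp\1.0\Theme")
End If
If Err.Number <> 0 Then          ' not configured anywhere, fall back to a built-in default
    setting = "default"
End If
On Error GoTo 0

WScript.Echo "Theme = " & setting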

HKEY_CLASSES_ROOT (HKCR)

This key contains several subkeys with information about the extensions of all registered file types and
about COM servers. This information is necessary for opening files with a double-click, or for drag-and-drop
operations. In addition, the HKEY_CLASSES_ROOT key provides combined data to applications that were
created for earlier versions of Windows.

HKEY_CURRENT_USER (HKCU)

This key stores settings that are specific to the currently logged-in user (Windows Start menu, desktop,
etc.). Its subkeys store information about environment variables, program groups, desktop settings, screen
colors, network connections, printers and additional application settings. This information is gathered
from the Security ID (SID) subkey of HKEY_USERS for the current user. In fact, this key stores all
information related to the profile of the user who is currently working with Windows.

HKEY_LOCAL_MACHINE (HKLM)

Abbreviated HKLM, HKEY_LOCAL_MACHINE stores settings that are specific to the local computer.

The key located by HKLM is actually not stored on disk, but maintained in memory by the system kernel
in order to map all the other subkeys. Applications cannot create any additional subkeys. On Windows
NT, this key contains four subkeys, "SAM", "SECURITY", "SYSTEM", and "SOFTWARE",
that are loaded at boot time within their respective files located in the %SystemRoot%\System32\config
folder. A fifth subkey, "HARDWARE", is volatile and is created dynamically, and as such is not stored
in a file (it exposes a view of all the currently detected Plug-and-Play devices). On Windows Vista and
above, a sixth and seventh subkey, "COMPONENTS" and "BCD", are mapped in memory by the kernel
on demand and loaded from %SystemRoot%\system32\config\COMPONENTS or from the boot
configuration data, \boot\BCD, on the system partition.

 The ―HKLM\SAM‖ key usually appears as empty for most users (unless they are granted access
by administrators of the local system or administrators of domains managing the local system). It
is used to reference all ―Security Accounts Manager‖ (SAM) databases for all domains into which
the local system has been administratively authorized or configured (including the local domain
of the running system, whose SAM database is stored in a subkey also named ―SAM‖: other
subkeys will be created as needed, one for each supplementary domain). Each SAM database
contains all builtin accounts (mostly group aliases) and configured accounts (users, groups and
their aliases, including guest accounts and administrator accounts) created and configured on the
respective domain. For each account in that domain, it notably contains the user name which can
be used to log on that domain, the internal unique user identifier in the domain, a cryptographic
hash of each user‘s password for each enabled authentication protocol, the location of storage of
their user registry hive, various status flags (for example if the account can be enumerated and be
visible in the logon prompt screen), and the list of domains (including the local domain) into
which the account was configured.
 The "HKLM\SECURITY" key usually appears empty for most users (unless they are granted
access by users with administrative privileges) and is linked to the Security database of the
domain into which the current user is logged on (if the user is logged on to the local system domain, this
key will be linked to the registry hive stored by the local machine and managed by local system
administrators or by the builtin "System" account and Windows installers). The kernel will access it to
read and enforce the security policy applicable to the current user and all applications or operations
executed by this user. It also contains a "SAM" subkey which is dynamically linked to the SAM database
of the domain onto which the current user logged on.

 The ―HKLM\SYSTEM‖ key is normally only writable by users with administrative privileges on
the local system. It contains information about the Windows system setup, data for the secure
random number generator (RNG), the list of currently mounted devices containing a filesystem,
several numbered ―HKLM\SYSTEM\Control Sets‖ containing alternative configurations for
system hardware drivers and services running on the local system (including the currently used
one and a backup), a ―HKLM\SYSTEM\Select‖ subkey containing the status of these Control
Sets, and a ―HKLM\SYSTEM\CurrentControlSet‖ which is dynamically linked at boot time to
the Control Set which is currently used on the local system. Each configured Control Set
contains:
o an ―Enum‖ subkey enumerating all known Plug-and-Play devices and associating them
with installed system drivers (and storing the device-specific configurations of these
drivers),
o a ―Services‖ subkey listing all installed system drivers (with non device-specific
configuration, and the enumeration of devices for which they are instantiated) and all
programs running as services (how and when they can be automatically started),
o a ―Control‖ subkey organizing the various hardware drivers and programs running as
services and all other system-wide configuration,

o a ―Hardware Profiles‖ subkey enumerating the various profiles that have been tuned
(each one with ―System‖ or ―Software‖ settings used to modify the default profile, either
in system drivers and services or in the applications) as well as the ―Hardware
Profiles\Current‖ subkey which is dynamically linked to one of these profiles.

 The ―HKLM\SOFTWARE‖ subkey contains software and Windows settings (in the default
hardware profile). It is mostly modified by application and system installers. It is organized by
software vendor (with a subkey for each), but also contains a ―Windows‖ subkey for some
settings of the Windows user interface, a ―Classes‖ subkey containing all registered associations
from file extensions, MIME types, Object Classes IDs and interfaces IDs (for OLE, COM/DCOM
and ActiveX), to the installed applications or DLLs that may be handling these types on the local
machine (however these associations are configurable for each user, see below), and a ―Policies‖
subkey (also organized by vendor) for enforcing general usage policies on applications and
system services (including the central certificates store used for authenticating, authorizing or
disallowing remote systems or services running outside the local network domain).
 The ―HKLM\SOFTWARE\Wow6432Node‖ key is used by 32-bit applications on a 64-bit
Windows OS, and is equivalent to but separate from ―HKLM\SOFTWARE‖. The key path is
transparently presented to 32-bit applications by WoW64 as HKLM\SOFTWARE (in a similar
way that 32-bit applications see %SystemRoot%\Syswow64 as %SystemRoot%\System32)

HKEY_USERS (HKU)

While the HKEY_CURRENT_USER key stores the settings of the current user, this key stores Windows
settings for all users. Its subkeys contain information about all user profiles, and one of the subkeys
always corresponds to the HKEY_CURRENT_USER key (via the Security ID (SID) parameter of the
user). Another subkey, HKEY_USERS\DEFAULT, stores information about system settings at the
moment before the start of the current user session.

HKEY_CURRENT_CONFIG (HKCC)

This key stores information about the hardware profile used by the local computer at system startup.
Hardware profiles allow selecting the device drivers to use for a given session.

HKEY_PERFORMANCE_DATA

This key provides runtime access to performance data supplied by either the NT kernel itself or by
running system drivers, programs and services that provide performance data. This key is not stored in
any hive and is not displayed in the Registry Editor, but it is visible through the registry functions in the
Windows API, in a simplified view via the Performance tab of the Task Manager (only for a few
performance figures on the local system), or via more advanced tools (such as the Performance
Monitor or the Performance Analyzer, which allow collecting and logging these data, including from
remote systems).

HKEY_DYN_DATA

This key is used only on Windows 95, Windows 98 and Windows ME. It contains information about
hardware devices, including Plug and Play and network performance statistics. The information in this
hive is also not stored on the hard drive. The Plug and Play information is gathered and configured at
startup and is stored in memory.

3.3.1 Automating Administrative Tasks – Windows Host Scripting
The Microsoft Windows Script Host (WSH) (formerly named Windows Scripting Host) is an automation
technology for Microsoft Windows operating systems that provides scripting abilities comparable to
batch files, but with a wider range of supported features. This tool was first provided on Windows 95
(after build 950a) as an optional component on the installation discs, configurable and installable by means
of the Control Panel. Windows Script Host is distributed and installed by default on Windows 98 and
later versions of Windows. It is also installed if Internet Explorer 5 (or a later version) is installed.
Beginning with Windows 2000, the Windows Script Host became available for use with user login
scripts.

It is language-independent in that it can make use of different Active Scripting language engines. By
default, it interprets and runs plain-text JScript (.JS & .JSE files) and VBScript (.VBS & .VBE files).

Users can install different scripting engines to enable them to script in other languages, for instance
PerlScript. The language independent filename extension WSF can also be used. The advantage of the
Windows Script File (.WSF) is that it allows multiple scripts (―jobs‖) as well as a combination of
scripting languages within a single file.

WSH engines include various implementations for the Rexx, BASIC, Perl, Ruby, Tcl, PHP, JavaScript,
Delphi, Python, XSLT, and other languages.

Usage

Windows Script Host may be used for a variety of purposes, including logon scripts, administration and
general automation. Microsoft describes it as an administration tool. WSH provides an environment for
scripts to run – it invokes the appropriate script engine and provides a set of services and objects for the
script to work with. These scripts may be run in GUI mode (WScript.exe) or command line mode
(CScript.exe), or from a COM object (wshom.ocx), offering flexibility to the user for interactive or non-
interactive scripts. Windows Management Instrumentation is also scriptable by this means.
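
As a hedged sketch of the kind of logon/administration script mentioned above, the fragment below uses the built-in WScript.Network and WScript.Shell objects; the file-server share name is a hypothetical example:

' logon.vbs - sketch of a simple administrative/logon script.
Dim net, shell
Set net   = CreateObject("WScript.Network")
Set shell = CreateObject("WScript.Shell")

WScript.Echo "Logging on user " & net.UserName & " on " & net.ComputerName

' Map a network drive (the share name is hypothetical).
On Error Resume Next
net.MapNetworkDrive "Z:", "\\fileserver\shared"
If Err.Number <> 0 Then WScript.Echo "Could not map drive: " & Err.Description
On Error GoTo 0

' Run a program without waiting for it to finish (1 = normal window, False = do not wait).
shell.Run "notepad.exe", 1, False

Run with CScript for console output (cscript //nologo logon.vbs) or with WScript for message boxes.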

The WSH, the engines, and related functionality are also listed as objects which can be accessed and
scripted and queried by means of the VBA and Visual Studio object explorers and those for similar tools
like the various script debuggers, e.g. Microsoft Script Debugger, and editors.

WSH implements an object model which exposes a set of Component Object Model (COM) interfaces.
So in addition to ASP, IIS, Internet Explorer, CScript and WScript, the WSH can be used to automate and
communicate with any Windows application with COM and other exposed objects, such as using
PerlScript to query Microsoft Access by various means including various ODBC engines and SQL,
ooRexxScript to create what are in effect Rexx macros in Microsoft Excel, Quattro Pro, Microsoft Word,
Lotus Notes and any of the like, the XLNT script to get environment variables and print them in a new
TextPad document, and so on.

The VBA functionality of Microsoft Office, Open Office (as well as Python and other installable macro
languages) and Corel WordPerfect Office is separate from WSH engines although Outlook 97 uses
VBScript rather than VBA as its macro language.

VBScript, JScript, and some third-party engines have the ability to create and execute scripts in an
encoded format which prevents editing with a text editor; the file extensions for these encoded scripts
are .vbe, .jse and others of that type.

Unless otherwise specified, any WSH scripting engine can be used with the various Windows server
software packages to provide CGI scripting. The current versions of the default WSH engines and all or
most of the third party engines have socket abilities as well; as a CGI script or otherwise, PerlScript is the
choice of many programmers for this purpose and the VBScript and various Rexx-based engines are also
rated as sufficiently powerful in connectivity and text-processing abilities to also be useful. The same
applies to file access and processing: the earliest WSH engines for VBScript and JScript lack these
abilities because the base languages did not provide them, whilst PerlScript, ooRexxScript, and the others
have had them from the beginning.

Any scripting language installed under Windows can be accessed externally: PerlScript, PythonScript,
VBScript and the other available engines can be used to access databases (Lotus Notes, Microsoft Access,
Oracle Database, Paradox), spreadsheets (Microsoft Excel, Lotus 1-2-3, Quattro Pro) and other tools such
as word processors, terminal emulators and command shells. This is accomplished by means of the WSH,
so any language can be used as long as its engine is installed.

Examples

The first example is very simple; it shows some VBScript which uses the root WSH COM object
"WScript" to display a message with an 'OK' button. Upon launching this script, the CScript or WScript
engine would be called and the runtime environment provided. Content of a file hello0.vbs (save the file
as 'hello0.vbs'):

WScript.Echo "Hello world"
WScript.Quit

WSH programming can also use the JScript language. Content of a file hello1.js (save the file as
'hello1.js'):

WSH.Echo("Hello world");
WSH.Quit();

Or, code can be mixed in one WSF file, such as VBScript and JScript, or any other. Content of a file
hello2.wsf (save the file as 'hello2.wsf'):

<job>
   <script language="VBScript">
      MsgBox "hello world (from vb)"
   </script>
   <script language="JScript">
      WSH.echo("hello world (from js)");
   </script>
</job>

Security Concerns

Windows applications and processes may be automated using a script in Windows Script Host. Viruses
and malware could be written to exploit this ability. Thus, some suggest disabling it for security reasons.
Alternatively, antivirus programs may offer features to control .vbs and other scripts which run in the
WSH environment.

Since version 5.6 of WSH, scripts can be digitally signed programmatically using the Scripting.Signer
object in a script itself, provided a valid certificate is present on the system. Alternatively, the signcode
tool from the Platform SDK, which has been extended to support WSH filetypes, may be used at the
command line.
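
A minimal sketch of programmatic signing with the Scripting.Signer object mentioned above. The script path and the certificate's friendly name ("My Code Signing Cert") are assumptions, and a suitable code-signing certificate must already be installed in the user's personal ("My") store:

' sign_script.vbs - sketch: sign another script with an installed certificate.
Dim signer
Set signer = CreateObject("Scripting.Signer")

' SignFile(file, certificate name [, certificate store]); the names here are hypothetical.
signer.SignFile "C:\Scripts\myscript.vbs", "My Code Signing Cert", "My"

' VerifyFile returns True when the signature is valid; the second argument controls
' whether the user is prompted about untrusted publishers.
If signer.VerifyFile("C:\Scripts\myscript.vbs", False) Then
    WScript.Echo "Signature verified."
Else
    WScript.Echo "Signature missing or not trusted."
End If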

By using Software Restriction Policies introduced with Windows XP, a system may be configured to
execute only those scripts which are stored in trusted locations, have a known MD5 hash, or have been
digitally signed by a trusted publisher, thus preventing the execution of untrusted scripts.

3.4. Advanced Concepts II


 Routing and NAT

Routing refers to establishing the routes that data packets take on their way to a particular destination.
This term can be applied to data traveling on the Internet, over 3G or 4G networks, or over similar

networks used for telecom and other digital communications setups. Routing can also take place within
proprietary networks.

In general, routing involves the network topology, or the setup of hardware, that can effectively relay
data. Standard protocols help to identify the best routes for data and to ensure quality transmission.
Individual pieces of hardware such as routers are referred to as ―nodes‖ in the network. Different
algorithms and protocols can be used to figure out how to best route data packets, and which nodes should
be used. For example, some data packets travel according to a distance vector model that primarily uses
distance as a factor, whereas others use Link-State Protocol, which involves other aspects of a ―best path‖
for data.

Data packets are also made to give networks information. Headers on packets provide details about origin
and destination. Standards for data packets allow for conventional design, which can help with future
routing methodologies. As the world of digital technology evolves, routing will also evolve according to
the needs and utility of a particular network.

In internetworking, routing is the process of moving a packet of data from source to destination. It is usually
performed by a dedicated device called a router. Routing is a key feature of the Internet because it enables
messages to pass from one computer to another and eventually reach the target machine. Each
intermediary computer performs routing by passing along the message to the next computer. Part of this
process involves analyzing a routing table to determine the best path.
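
To make the idea of a routing table concrete, here is a hedged VBScript/WMI sketch that dumps the local IPv4 routing table by querying the Win32_IP4RouteTable class read-only (output columns trimmed for brevity):

' dump_routes.vbs - sketch: list the local IPv4 routing table via WMI.
Dim wmi, routes, route
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set routes = wmi.ExecQuery("SELECT Destination, Mask, NextHop, Metric1 FROM Win32_IP4RouteTable")

WScript.Echo "Destination      Mask             NextHop          Metric"
For Each route In routes
    WScript.Echo route.Destination & "  " & route.Mask & "  " & _
                 route.NextHop & "  " & route.Metric1
Next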

Routing is often confused with bridging, which performs a similar function. The principal difference
between the two is that bridging occurs at a lower level and is therefore more of a hardware function
whereas routing occurs at a higher level where the software component is more important. And because
routing occurs at a higher level, it can perform more complex analysis to determine the optimal path for
the packet.

Network Address Translation (NAT)

NAT translates the IP addresses of computers in a local network to a single IP address. This address is
often used by the router that connects the computers to the Internet. The router can be connected to a DSL
modem, cable modem, T1 line, or even a dial-up modem. When other computers on the Internet
attempt to access computers within the local network, they only see the IP address of the router. This adds
an extra level of security, since the router can be configured as a firewall, only allowing authorized
systems to access the computers within the network.

Once a system from outside the network has been allowed to access a computer within the network, the IP
address is then translated from the router‘s address to the computer‘s unique address. The address is
found in a ―NAT table‖ that defines the internal IP addresses of computers on the network. The NAT
table also defines the global address seen by computers outside the network. Even though each computer
within the local network has a specific IP address, external systems can only see one IP address when
connecting to any of the computers within the network.

To simplify, network address translation makes computers outside the local area network (LAN) see only
one IP address, while computers within the network can see each system‘s unique address. While this aids
in network security, it also limits the number of IP addresses needed by companies and organizations.
Using NAT, even large companies with thousands of computers can use a single IP address for
connecting to the Internet. Now that‘s efficient.

Network Address Translation (NAT) is the process where a network device, usually a firewall, assigns a
public address to a computer (or group of computers) inside a private network. The main use of NAT is to
limit the number of public IP addresses an organization or company must use, for both economy and
security purposes.

The most common form of network translation involves a large private network using addresses in a
private range (10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, or 192.168.0.0 to

192.168.255.255). The private addressing scheme works well for computers that only have to access
resources inside the network, like workstations needing access to file servers and printers. Routers inside
the private network can route traffic between private addresses with no trouble. However, to access
resources outside the network, like the Internet, these computers have to have a public address in order for
responses to their requests to return to them. This is where NAT comes into play.
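
The private ranges above are easy to test for in a script. The helper below is a small sketch (IPv4 only, with only basic input validation):

' is_private.vbs - sketch: test whether an IPv4 address is in an RFC 1918 private range.
Function IsPrivateIPv4(ip)
    Dim parts, a, b
    IsPrivateIPv4 = False
    parts = Split(ip, ".")
    If UBound(parts) <> 3 Then Exit Function
    a = CLng(parts(0))
    b = CLng(parts(1))
    If a = 10 Then IsPrivateIPv4 = True                           ' 10.0.0.0/8
    If a = 172 And b >= 16 And b <= 31 Then IsPrivateIPv4 = True  ' 172.16.0.0/12
    If a = 192 And b = 168 Then IsPrivateIPv4 = True              ' 192.168.0.0/16
End Function

WScript.Echo IsPrivateIPv4("192.168.1.10")   ' True:  needs NAT to reach the Internet
WScript.Echo IsPrivateIPv4("8.8.8.8")        ' False: publicly routable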

Internet requests that require Network Address Translation (NAT) are quite complex but happen so
rapidly that the end user rarely knows it has occurred. A workstation inside a network makes a request to
a computer on the Internet. Routers within the network recognize that the request is not for a

resource inside the network, so they send the request to the firewall. The firewall sees the request from the
computer with the internal IP. It then makes the same request to the Internet using its own public address,
and returns the response from the Internet resource to the computer inside the private network. From the
perspective of the resource on the Internet, it is sending information to the address of the firewall. From
the perspective of the workstation, it appears that communication is directly with the site on the Internet.
When NAT is used in this way, all users inside the private network that access the Internet share the same
public IP address. That means only one public address is needed for hundreds
or even thousands of users.

Most modern firewalls are stateful – that is, they are able to set up the connection between the internal
workstation and the Internet resource. They can keep track of the details of the connection, like ports,
packet order, and the IP addresses involved. This is called keeping track of the state of the connection. In
this way, they are able to keep track of the session composed of communication between the workstation
and the firewall, and the firewall with the Internet. When the session ends, the firewall discards all of the
information about the connection.

There are other uses for Network Address Translation (NAT) beyond simply allowing workstations with
internal IP addresses to access the Internet. In large networks, some servers may act as Web servers and
require access from the Internet. These servers are assigned public IP addresses on the firewall, allowing
the public to access the servers only through that IP address. However, as an additional layer of security,
the firewall acts as the intermediary between the outside world and the protected internal network.
Additional rules can be added, including which ports can be accessed at that IP address. Using NAT in
this way allows network engineers to more efficiently route internal network traffic to the same resources,
and allow access to more ports, while restricting access at the firewall. It also allows detailed logging of
communications between the network and the outside world.

Additionally, NAT can be used to allow selective access to the outside of the network, too. Workstations
or other computers requiring special access outside the network can be assigned specific external IPs
using NAT, allowing them to communicate with computers and applications that require a unique public
IP address. Again, the firewall acts as the intermediary, and can control the session in both directions,
restricting port access and protocols.

NAT is a very important aspect of firewall security. It conserves the number of public addresses used
within an organization, and it allows for stricter control of access to resources on both sides of the
firewall.

3.4.1 Proxies and Gateways


What is proxy server?

A proxy server acts as a gateway between you and the Internet. It‘s an intermediary server separating end
users from the websites they browse. Proxy servers provide varying levels of functionality, security, and
privacy depending on your use case, needs, or company policy.

Modern proxy servers do much more than forwarding web requests, all in the name of data security and
network performance. Proxy servers act as a firewall and web filter, provide shared network connections,
and cache data to speed up common requests. A good proxy server keeps users and the internal network
protected from the bad stuff that lives out in the wild Internet. Lastly, proxy servers can provide a high
level of privacy.

A proxy server is a bridge between you and the rest of the Internet. Normally, when you use your browser
to surf the Internet, you‘ll connect directly to the website you‘re visiting. Proxies communicate with
websites on your behalf.

When you use a proxy, your browser first connects to the proxy, and the proxy forwards your traffic to
the website. That‘s why proxy servers are also known as ―forward proxies.‖ A proxy will also receive the
website‘s response and send it back to you.

In everyday use, the word "proxy" refers to someone who is authorized to take an action on your behalf,
such as voting in an important meeting that you can't attend. A proxy server fills the same role,
but online. Instead of you communicating directly with the websites you visit, a proxy steps in to
handle that relationship for you.
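
On a Windows client, the per-user proxy configuration lives in the registry, so it can be inspected with the scripting techniques covered earlier. A hedged sketch follows; the ProxyEnable and ProxyServer values under HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings are the usual WinINet locations, but they may be absent if no proxy was ever configured:

' show_proxy.vbs - sketch: read the current user's WinINet proxy settings.
Dim shell, base, enabled, server
Set shell = CreateObject("WScript.Shell")
base = "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\"

On Error Resume Next                             ' the values may not exist
enabled = shell.RegRead(base & "ProxyEnable")    ' REG_DWORD: 1 = proxy on
server  = shell.RegRead(base & "ProxyServer")    ' e.g. "proxy.example.com:8080" (hypothetical)
On Error GoTo 0

If enabled = 1 Then
    WScript.Echo "Forward proxy in use: " & server
Else
    WScript.Echo "No proxy configured; browsing directly."
End If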

What does a proxy server do, exactly?

As your intermediary on the web, proxy servers have many useful roles. Here‘s a few of the primary uses
for a proxy server:

 Firewalls: A firewall is a type of network security system that acts as a barrier between a
network and the wider Internet. Security professionals configure firewalls to block unwanted
access to the networks they are trying to protect, often as an anti-malware or anti-hacking
countermeasure. A proxy server between a trusted network and the Internet is the perfect place to
host a firewall designed to intercept and either approve or block incoming traffic before it reaches
the network.
 Content filters: Just as proxy servers can regulate incoming connection requests with a
firewall, they can also act as content filters by blocking undesired outgoing traffic.
Companies may configure proxy servers as content filters to prevent employees from
accessing blocked websites while at work.
 Bypassing content filters: That's right — you can outsmart a proxy with another proxy.
If your company's proxy has blocked your favorite website, but it hasn't blocked access
to your personal proxy server or favorite web-based proxy, you can access your proxy
and use it to reach the websites you want.
 Caching: Caching refers to the temporary storage of frequently accessed data, which
makes it easier and faster to access it again in the future. Proxies can cache websites so
that they'll load faster than if you were to send your traffic all the way through the
Internet to the website's server. This reduces latency — the time it takes for data to travel
through the Internet.
 Security: In addition to hosting firewalls, proxy servers can also enhance security by
serving as the singular public face of the network. From an outside point of view, all the
network's users are anonymous, hidden behind the proxy's IP address. If a hacker wants
to access a specific device on a network, it'll be a lot harder for them to find it.
 Sharing Internet connections: Businesses or even homes with a single Internet
connection can use a proxy server to funnel all their devices through that one connection.
Using a Wi-Fi router and wireless-capable devices is another solution to this issue.

What is a Gateway

A gateway is a node (router) in a computer network, a key stopping point for data on its way to or from
other networks. Thanks to gateways, we are able to communicate and send data back and forth. The
Internet wouldn‘t be any use to us without gateways (as well as a lot of other hardware and software).

In a workplace, the gateway is the computer that routes traffic from a workstation to the outside network
that is serving up the Web pages. For basic Internet connections at home, the gateway is the Internet
Service Provider that gives you access to the entire Internet.

A node is simply a physical place where the data stops for either transporting or reading/using. (A
computer or modem is a node; a computer cable isn‘t.) Here are a few node notes:

 On the Internet, the node that‘s a stopping point can be a gateway or a host node.
o A computer that controls the traffic your Internet Service Provider (ISP) receives is a
node.
o If you have a wireless network at home that gives your entire family access to the
Internet, your gateway is the modem (or modem-router combo) your ISP provides so you
can connect to their network. On the other end, the computer that controls all of the data
traffic your Internet Service Provider (ISP) takes and sends out is itself a node.

o When a computer-server acts as a gateway, it also operates as a firewall and a proxy
server. A firewall keeps out unwanted traffic and outsiders off a private network. A proxy
server is software that ―sits‖ between programs on your computer that you use (such as a
Web browser) and a computer server—the computer that serves your network. The proxy
server‘s task is to make sure the real server can handle your online data requests.

A gateway is a hardware device that acts as a ―gate‖ between two networks. It may be a router, firewall,
server, or other device that enables traffic to flow in and out of the network. While a gateway protects the
nodes within a network, it is also a node itself. The gateway node is considered to be on the "edge" of the
network as all data must flow through it before coming in or going out of the network. It may also
translate data received from outside networks into a format or protocol recognized by devices within the
internal network. A router is a common type of gateway used in home networks. It allows computers
within the local network to send and receive data over the Internet. A firewall is a more
advanced type of gateway, which filters inbound and outbound traffic, disallowing incoming data from
suspicious or unauthorized sources. A proxy server is another type of gateway that uses a combination of
hardware and software to filter traffic between two networks.

A gateway is a network node used in telecommunications that connects two networks with different
transmission protocols together. Gateways serve as an entry and exit point for a network as all data must
pass through or communicate with the gateway prior to being routed. In most IP-based networks, the only
traffic that does not go through at least one gateway is traffic flowing among nodes on the same local area
network (LAN) segment. The term default gateway or network gateway may also be used to describe the
same concept.
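
A hedged WMI sketch that prints each IP-enabled adapter's default gateway; Win32_NetworkAdapterConfiguration exposes DefaultIPGateway as an array, and it may be Null on adapters with no gateway set:

' show_gateway.vbs - sketch: list default gateways of IP-enabled adapters.
Dim wmi, nics, nic
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set nics = wmi.ExecQuery("SELECT Description, IPAddress, DefaultIPGateway " & _
                         "FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = True")

For Each nic In nics
    WScript.Echo nic.Description
    If Not IsNull(nic.IPAddress) Then WScript.Echo "  IP:      " & Join(nic.IPAddress, ", ")
    If Not IsNull(nic.DefaultIPGateway) Then
        WScript.Echo "  Gateway: " & Join(nic.DefaultIPGateway, ", ")
    Else
        WScript.Echo "  Gateway: (none configured)"
    End If
Next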

The primary advantage of using a gateway in personal or enterprise scenarios is simplifying Internet
connectivity into one device. In the enterprise, a gateway node can also act as a proxy server and a
firewall. Gateways can be purchased through popular technology retailers, such as Best Buy, or rented
through an Internet service provider.

How gateways work

All networks have a boundary that limits communication to devices that are directly connected to them.
Because of this, if a network wants to communicate with devices, nodes or networks outside of that
boundary, it requires the functionality of a gateway. A gateway is often characterized as being the
combination of a router and a modem.

The gateway is implemented at the edge of a network and manages all data that is directed internally or
externally from that network. When one network wants to communicate with another, the data packet is
passed to the gateway and then routed to the destination through the most efficient path. In addition to
routing data, a gateway will also store information about the host network's internal paths and the paths of
any additional networks that are encountered.

Gateways are basically protocol converters, facilitating compatibility between two protocols and
operating on any layer of the open systems interconnection (OSI) model.

Types of gateways

Gateways can take several forms and perform a variety of tasks. Examples of this include:

 Web application firewalls: This type filters traffic to and from a web server and looks at
application-layer data.
 Cloud storage gateways: This type translates storage requests into the API calls of various cloud
storage services. It allows organizations to integrate storage from a private cloud into
applications without migrating into a public cloud.
 API, SOA or XML gateways: This type manages traffic flowing into and out of a
service, microservices-oriented architecture or XML-based web service.
 IoT gateways: This type aggregates sensor data from devices in an IoT environment,
translates between sensor protocols and processes sensor data before sending it onward.
 Media gateways: This type converts data from the format required for one type of
network to the format required for another.
 Email security gateways: This type prevents the transmission of emails that break
company policy or would transfer information with malicious intent.
 VoIP trunk gateways: This type facilitates the use of plain old telephone service
equipment, such as landline phones and fax machines, with a voice over IP (VoIP)
network.

Additionally, a service provider may develop its own gateways for use by its customers. For instance,
Amazon Web Services (AWS) offers a gateway service that allows a developer to connect non-AWS
applications to AWS back-end resources.

3.5. Review Questions

 Explain the structure of the Windows registry.
 List some well-known routing protocols.
 What are the benefits of NAT to an organization? Discuss the different ways used to
implement NAT.

UNIT - IV

4. Resource Monitoring & Management


As stated earlier, a great deal of system administration revolves around resources and their
efficient use. By balancing various resources against the people and programs that use those
resources, you waste less money and make your users as happy as possible. However, this leaves
two questions:

i. What are resources?

ii. How is it possible to know what resources are being used (and to what extent)?

The purpose of this chapter is to enable you to answer these questions by helping you to learn
more about resources and how they can be monitored.

Before you can monitor resources, you first have to know what resources there are to monitor.
All systems have the following resources available:

 CPU power
 Bandwidth
 Memory
 Storage

These resources have a direct impact on system performance, and therefore, on your users‘ productivity
and happiness. At its simplest, resource monitoring is nothing more than obtaining information
concerning the utilization of one or more system resources.

However, it is rarely this simple. First, one must take into account the resources to be monitored. Then it
is necessary to examine each system to be monitored, paying particular attention to each system‘s
situation.

The systems you monitor fall into one of two categories:

 The system is currently experiencing performance problems at least part of the time and you
would like to improve its performance.
 The system is currently running well and you would like it to stay that way.

The first category means you should monitor resources from a system performance perspective,
while the second category means you should monitor system resources from a capacity planning
perspective.

Because each perspective has its own unique requirements, the following sections explore each
category in more depth.

System Performance Monitoring

System performance monitoring is normally done in response to a performance problem. Either the
system is running too slowly, or programs (and sometimes even the entire system) fail to run at all. In
either case, performance monitoring is normally done as the first and last steps of a three-step process:

i. Monitoring to identify the nature and scope of the resource shortages that are causing the
performance problems
ii. The data produced from monitoring is analyzed and a course of action (normally performance
tuning and/or the procurement of additional hardware) is taken to resolve the problem
iii. Monitoring to ensure that the performance problem has been resolved

Because of this, performance monitoring tends to be relatively short-lived in duration and more detailed
in scope.

4.1. Monitoring System Capacity


Monitoring system capacity is done as part of an ongoing capacity planning program. Capacity planning
uses long-term resource monitoring to determine rates of change in the utilization of system resources.
Once these rates of change are known, it becomes possible to conduct more accurate long- term planning
regarding the procurement of additional resources.

Monitoring done for capacity planning purposes is different from performance monitoring in two ways:

i. The monitoring is done on a more-or-less continuous basis

ii. The monitoring is usually not as detailed

The reason for these differences stems from the goals of a capacity planning program. Capacity planning
requires a ―big picture‖ viewpoint; short-term or anomalous resource usage is of little concern. Instead,
data is collected over a period of time, making it possible to categorize resource utilization in terms of
changes in workload. In more narrowly-defined environments, (where only one application is run, for
example) it is possible to model the application‘s impact on system resources. This can be done with
sufficient accuracy to make it possible to determine, for example, the impact of 5 more customer service
representatives running the customer service application during the busiest time of the day.

4.1.1. What to Monitor?


As stated earlier, the resources present in every system are CPU power, bandwidth, memory, and storage.
At first glance, it would seem that monitoring would need only consist of examining these four different
things.

Unfortunately, it is not that simple. For example, consider a disk drive. What things might you want to
know about its performance?

 How much free space is available?
 How many I/O operations on average does it perform each second?

 How long on average does it take each I/O operation to be completed?


 How many of those I/O operations are reads? How many are writes?
 What is the average amount of data read/written with each I/O?

There are more ways of studying disk drive performance; these points have only scratched the surface.
The main concept to keep in mind is that there are many different types of data for each resource.

The following subsections explore the types of utilization information that would be helpful for each of
the major resource types.

In its most basic form, monitoring CPU power can be no more difficult than determining if CPU
utilization ever reaches 100%. If CPU utilization stays below 100%, no matter what the system is doing,
there is additional processing power available for more work.

However, it is a rare system that does not reach 100% CPU utilization at least some of the time. At that
point it is important to examine more detailed CPU utilization data. By doing so, it becomes possible to
start determining where the majority of your processing power is being consumed. Here are some of the
more popular CPU utilization statistics:

 User Versus System
 Context Switches
 Interrupts
 Runnable Processes

A process may be in different states. For example, it may be:

 Waiting for an I/O operation to complete
 Waiting for the memory management subsystem to handle a page fault

In these cases, the process has no need for the CPU.

However, eventually the process state changes, and the process becomes runnable. As the name
implies, a runnable process is one that is capable of getting work done as soon as it is scheduled
to receive CPU time. However, if more than one process is runnable at any given time, all but
one (assuming a single-processor computer system) of the runnable processes must wait for their
turn at the CPU. By monitoring the number of runnable processes, it is possible to determine
how CPU-bound your system is.
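
A hedged sketch of sampling two of the statistics just described: overall CPU utilization (Win32_Processor.LoadPercentage) and the number of runnable processes waiting for the CPU (the ProcessorQueueLength counter of Win32_PerfFormattedData_PerfOS_System):

' cpu_sample.vbs - sketch: one-shot sample of CPU load and processor queue length.
Dim wmi, cpus, cpu, sys, s
Set wmi = GetObject("winmgmts:\\.\root\cimv2")

Set cpus = wmi.ExecQuery("SELECT LoadPercentage FROM Win32_Processor")
For Each cpu In cpus
    WScript.Echo "CPU load: " & cpu.LoadPercentage & "%"
Next

' Runnable processes waiting for the CPU (a point-in-time sample, not an average).
Set sys = wmi.ExecQuery("SELECT ProcessorQueueLength FROM Win32_PerfFormattedData_PerfOS_System")
For Each s In sys
    WScript.Echo "Processor queue length: " & s.ProcessorQueueLength
Next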

Other performance metrics that reflect an impact on CPU utilization tend to include different
services the operating system provides to processes. They may include statistics on memory
management, I/O processing, and so on. These statistics also reveal that, when system
performance is monitored, there are no boundaries between the different statistics. In other
words, CPU utilization statistics may end up pointing to a problem in the I/O subsystem, or
memory utilization statistics may reveal an application design flaw.

Therefore, when monitoring system performance, it is not possible to examine any one statistic
in complete isolation; only by examining the overall picture is it possible to extract meaningful
information from any performance statistics you gather.

Monitoring bandwidth is more difficult than the other resources described here. The reason for
this is due to the fact that performance statistics tend to be device-based, while most of the places
where bandwidth is important tend to be the buses that connect devices. In those instances where
more than one device shares a common bus, you might see reasonable statistics for each device,
but the aggregate load those devices place on the bus would be much greater.

Another challenge to monitoring bandwidth is that there can be circumstances where statistics
for the devices themselves may not be available. This is particularly true for system expansion
buses and datapaths. However, even though 100% accurate bandwidth-related statistics may not
always be available, there is often enough information to make some level of analysis possible,
particularly when related statistics are taken into account.

Some of the more common bandwidth-related statistics are:

 Bytes received/sent: Network interface statistics provide an indication of the bandwidth
utilization of one of the more visible buses — the network.
 Interface counts and rates: These network-related statistics can give indications of
excessive collisions, transmit and receive errors, and more. Through the use of these
statistics (particularly if the statistics are available for more than one system on your
network), it is possible to perform a modicum of network troubleshooting even before the
more common network diagnostic tools are used.
 Transfers per Second: Normally collected for block I/O devices, such as disk and
high-performance tape drives, this statistic is a good way of determining whether a
particular device's bandwidth limit is being reached. Due to their electromechanical
nature, disk and tape drives can only perform so many I/O operations every second; their
performance degrades rapidly as this limit is reached.
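
A minimal WMI sketch of the bytes received/sent statistic from the list above, using the formatted Tcpip_NetworkInterface performance counters (the values are per-second rates at the moment of sampling):

' net_sample.vbs - sketch: per-interface send/receive rates from formatted counters.
Dim wmi, nics, nic
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set nics = wmi.ExecQuery("SELECT Name, BytesReceivedPersec, BytesSentPersec " & _
                         "FROM Win32_PerfFormattedData_Tcpip_NetworkInterface")

For Each nic In nics
    WScript.Echo nic.Name & ": in " & nic.BytesReceivedPersec & _
                 " B/s, out " & nic.BytesSentPersec & " B/s"
Next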

If there is one area where a wealth of performance statistics can be found, it is in the area of
monitoring memory utilization. Due to the inherent complexity of today‘s demand-paged virtual
memory operating systems, memory utilization statistics are many and varied. It is here that the
majority of a system administrator‘s work with resource management takes place.

The following statistics represent a cursory overview of commonly-found memory management statistics:

 Page Ins/Page Outs: These statistics make it possible to gauge the flow of pages from system
memory to attached mass storage devices (usually disk drives). High rates for both of these
statistics can mean that the system is short of physical memory and is thrashing, or spending more
system resources on moving pages into and out of memory than on actually running applications.
 Active/Inactive Pages: These statistics show how heavily memory-resident pages are
used. A lack of inactive pages can point toward a shortage of physical memory.
 Free, Shared, Buffered, and Cached Pages: These statistics provide additional detail over the
more simplistic active/inactive page statistics. By using these statistics, it is possible to determine
the overall mix of memory utilization.
 Swap Ins/Swap Outs: These statistics show the system's overall swapping behavior.
Excessive rates here can point to physical memory shortages.
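
A hedged sketch that samples a few of these memory statistics through the formatted PerfOS_Memory counters (page-in/page-out rates and available memory; the property names are taken from the standard counter set):

' mem_sample.vbs - sketch: page-in/page-out rates and available memory.
Dim wmi, stats, m
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Set stats = wmi.ExecQuery("SELECT PagesInputPersec, PagesOutputPersec, AvailableMBytes " & _
                          "FROM Win32_PerfFormattedData_PerfOS_Memory")

For Each m In stats
    WScript.Echo "Page ins/sec:  " & m.PagesInputPersec
    WScript.Echo "Page outs/sec: " & m.PagesOutputPersec
    WScript.Echo "Available MB:  " & m.AvailableMBytes
Next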

Monitoring storage normally takes place at two different levels:

 Monitoring for sufficient disk space
 Monitoring for storage-related performance problems

The reason for this is that it is possible to have dire problems in one area and no problems
whatsoever in the other. For example, it is possible to cause a disk drive to run out of disk space
without once causing any kind of performance-related problems. Likewise, it is possible to have
a disk drive that has 99% free space, yet is being pushed past its limits in terms of performance.

However, it is more likely that the average system experiences varying degrees of resource shortages in
both areas. Because of this, it is also likely that — to some extent — problems in one area impact the
other. Most often this type of interaction takes the form of poorer and poorer I/O performance as a disk
drive nears 0% free space although, in cases of extreme I/O loads, it might be possible to slow I/O
throughput to such a level that applications no longer run properly.

In any case, the following statistics are useful for monitoring storage:

 Free Space: Free space is probably the one resource all system administrators watch closely; it
would be a rare administrator that never checks on free space (or has some automated way of
doing so).
 File System-Related Statistics: These statistics (such as number of files/directories,
average file size, etc.) provide additional detail over a single free space percentage. As such, these
statistics make it possible for system administrators to configure the system to give the best
performance, as the I/O load imposed by a file system full of many small files is not the same as
that imposed by a file system filled with a single massive file.
 Transfers per Second: This statistic is a good way of determining whether a particular device's
bandwidth limitations are being reached.
 Reads/Writes per Second: A slightly more detailed breakdown of transfers per second,
these statistics allow the system administrator to more fully understand the nature of the
I/O loads a storage device is experiencing. This can be critical, as some storage
technologies have widely different performance characteristics for read versus write
operations.
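
A hedged sketch covering two of these storage statistics: free space per local disk (Win32_LogicalDisk) and transfers per second (the formatted PerfDisk_LogicalDisk counters):

' disk_sample.vbs - sketch: free space and transfer rates for local disks.
Dim wmi, disks, d, perf, p
Set wmi = GetObject("winmgmts:\\.\root\cimv2")

' DriveType = 3 restricts the query to local fixed disks.
Set disks = wmi.ExecQuery("SELECT DeviceID, FreeSpace, Size FROM Win32_LogicalDisk WHERE DriveType = 3")
For Each d In disks
    WScript.Echo d.DeviceID & " free: " & FormatNumber(d.FreeSpace / 1073741824, 1) & _
                 " GB of " & FormatNumber(d.Size / 1073741824, 1) & " GB"
Next

Set perf = wmi.ExecQuery("SELECT Name, DiskTransfersPersec, DiskReadsPersec, DiskWritesPersec " & _
                         "FROM Win32_PerfFormattedData_PerfDisk_LogicalDisk")
For Each p In perf
    WScript.Echo p.Name & " transfers/sec: " & p.DiskTransfersPersec & _
                 " (reads " & p.DiskReadsPersec & ", writes " & p.DiskWritesPersec & ")"
Next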

4.1.2. Monitoring Tools


As your organization grows, so does the number of servers, devices, and services you depend on. The
term system covers all of the computing resources of your organization. Each element in the system
infrastructure relies on underlying services or provides services to components that are closer to the user.

In networking, it is typical to think of a system as a layered stack. User software sits at the top of the
stack and system applications and services on the next layer down. Beneath the services and applications,
you will encounter operating systems and firmware. The performance of software elements needs to be
monitored as an application stack.

Users will notice performance problems with the software that they use, but those problems rarely arise
within that software. All layers of the application stack need to be examined to find the root cause of
performance issues. You need to head off problems with real-time status monitoring before they occur.
Monitoring tools help you spot errors and service failures before they start to impact users.

The system stack continues on below the software. Hardware issues can be prevented through hardware
monitoring. You will need to monitor servers, network devices, interface performance, and network link
capacity. You need to monitor many different types of interacting system elements to keep your IT
services running smoothly.

Why do System Performance Monitoring?

Knowing whether a computer has issues is fairly straightforward when the computer is right in front of
you. Knowing what’s causing the problem? That’s harder. But a computer sitting by itself is not as
useful as it could be. Even the smallest small-office/home-office network has multiple nodes: laptops,
desktops, tablets, WiFi access points, Internet gateway, smartphones, file servers and/or media servers,
printers, and so on. That means you are in charge of ―infrastructure‖ rather than just ―equipment.‖ Any
component might start misbehaving and could cause issues for the others.

You most likely rely on off-premises servers and services, too. Even a personal website raises the nagging
question, ―Is my site still up?‖ And when your ISP has problems, your local network‘s usefulness suffers.
You need an activity monitor. Organizations rely more and more on servers and services hosted in the
cloud: SaaS applications (email, office apps, business packages, etc); file storage; cloud hosting for your
own databases and apps; and so on. This requires sophisticated monitoring capabilities that can handle
hybrid environments.

Bandwidth monitoring tools and NetFlow and sFlow based traffic analyzers help you stay aware of the
activity, capacity, and health of your network. They allow you to watch traffic as it flows through routers
and switches, or arrives at and leaves hosts.

But what of the hosts on your network, their hardware, and the services and applications running there?
Monitoring activity, capacity, and health of hosts and applications is the focus of system monitoring.

System Monitoring Software Essentials

In order to keep your system fit for purpose, your monitoring activities need to cover the following
priorities:

 Acceptable delivery speeds
 Constant availability
 Preventative maintenance
 Software version monitoring and patching
 Intrusion detection
 Data integrity
 Security monitoring
 Attack mitigation
 Virus prevention and detection

Lack of funding may cause you to compromise on monitoring completeness. The expense of
monitoring can be justified because it:

 reduces user/customer support costs
 prevents loss of income caused by system outages or attack vulnerability
 prevents data leakage leading to litigation
 prevents hardware damage and loss of business-critical data

Minimum system monitoring software capabilities

A more sophisticated system monitoring package provides a much broader range of capabilities,
such as:

 Monitoring multiple servers. Handling servers from various vendors running various operating
systems. Monitoring servers at multiple sites and in cloud environments.
 Monitoring a range of server metrics: availability, CPU usage, memory usage, disk space,
response time, and upload/download rates. Monitoring CPU temperature and power supply
voltages.
 Monitoring applications. Using deep knowledge of common applications and services to
monitor key server processes, including web servers, database servers, and application stacks.
 Automatically alerting you of problems, such as servers or network devices that are overloaded
or down, or worrisome trends. Customized alerts that can use multiple methods to contact you –
email, SMS text messages, pager, etc.
 Triggering actions in response to alerts, to handle certain classes of problems automatically.
 Collecting historical data about server and device health and behavior.

 Displaying data. Crunching the data and analyzing trends to display illuminating visualizations
of the data.
 Reports. Besides displays, generating useful predefined reports that help with tasks like
forecasting capacity, optimizing resource usage, and predicting needs for maintenance and
upgrades.
 Customizable reporting. A facility to help you create custom reports.
 Easy configurability, using methods like auto-discovery and knowledge of server and
application types.
 Non-intrusive: imposing a low overhead on your production machines and services. Making
smart use of agents to offload monitoring where appropriate.
 Scalability: Able to grow with your business, from a small or medium business (SMB) to a large
enterprise.

Task Manager (old name Windows Task Manager) is a task manager, system monitor, and startup
manager included with all versions of Microsoft Windows since Windows NT 4.0 and Windows 2000.

Windows Task Manager provides information about computer performance and shows detailed
information about the programs and processes running on the computer, including the names of running
processes, CPU load, commit charge, I/O details, logged-in users, and Windows services; if connected to
a network, you can also view the network status and quickly see how the network is performing.

Microsoft improves the task manager between each version of Windows, sometimes quite dramatically.
Specifically, the task managers in Windows 10 and Windows 8 are very different from those in Windows
7 and Windows Vista, and the task managers in Windows 7 and Vista are very different from those in
Windows XP. A similar program called Tasks exists in Windows 98 and Windows 95.

How to Open the Task Manager

Starting Task Manager is often a source of confusion. Here we list some easy and quick ways to open it.
Some of them might come in handy if you don't know how to open Task Manager or if you can't open it
the way you're used to.

You are probably familiar with pressing Ctrl+Alt+Delete on your keyboard. Before Windows Vista was
released, this shortcut brought you directly to Task Manager. Starting with Windows Vista, pressing
Ctrl+Alt+Delete leads to the Windows Security screen, which provides options for locking your PC,
switching users, signing out, changing a password, and running Task Manager. The quickest way to start
Task Manager is to press Ctrl+Shift+Esc, which takes you directly to it.

If you prefer using a mouse over a keyboard, one of the quickest ways to launch Task Manager is to
right-click on any blank area on the taskbar and select Task Manager. It takes just two clicks.

You can also run Task Manager by hitting Windows+R to open the Run box, typing taskmgr and
then hitting Enter or clicking OK. In fact, you can also open Task Manager from the Start menu, from
Windows Explorer, or by creating a shortcut, but the four convenient ways listed here are usually enough.

Figure 4.1. How to Start Task Manager

Explanation of the Tabs in Task Manager

Now we are going to discuss all the useful tabs you can find in the Task Manager nowadays, mostly in
Windows 8 and Windows 10.

Figure 4.2. Sample Screen Shot of a Task Manager Window

Processes: The Processes tab contains a list of all running programs and applications on your computer
(listed under Apps), as well as any background processes and Windows processes that are running.

In this tab, you can close running programs, see how each program uses your computer resources, and
more.

The Processes tab is available in all versions of Windows. Starting with Windows 8, Microsoft has
combined the Applications and Processes tab into the Processes tab, so Windows 8/10 displays all
running programs in addition to processes and services.

Performance: The Performance tab is available in all versions of Windows and gives a summary of
what's going on, overall, with your major hardware components, including CPU, memory, disk drive,
Wi-Fi, and network usage. It displays how much of the computer's available system resources are being
used, so you can check this valuable information at a glance.

App History: The App History tab displays the CPU usage and network utilization that each Windows
app has used from the date listed on the screen until the time you enter Task Manager. App History is
only available in Task Manager in Windows 10 and Windows 8.

Startup: The Startup tab shows every program that is launched automatically each time you start your
computer, along with several important details about each program, including the Publisher, Status, and
Startup impact, which is the most valuable information: it shows an impact rating of high, medium or low.

This tab is great for identifying and then disabling programs that you don't need to run
automatically. Disabling Windows auto-start programs is a very simple way to speed up your computer.
The Startup tab is only available in Task Manager in Windows 10 and Windows 8.

Users: The Users tab shows the users currently signed in to the computer and the processes running
within each session. The Users tab is available in all Windows versions of Task Manager but only shows
the processes that each user is running in Windows 10 and Windows 8.

Details: The Details tab contains full details of each process running on your computer. The information
provided in this tab is useful during advanced troubleshooting. The Details tab is available in Task Manager
in Windows 10 and Windows 8; in earlier versions of Windows, the Processes tab offered similar features.

Services: The Services tab is available in Task Manager in Windows 10, 8, 7, and Vista and shows all of
the Windows Services currently running on the computer, with a Description and Status. The status is
Running or Stopped, and you can change it.

What to Do in the Task Manager?

Task Manager gives you some limited control over running tasks, such as setting process priorities and
processor affinity, starting and stopping services, and forcibly terminating processes.

Well, one of the most common things done in Task Manager is to use End Task to prevent programs from
running. If a program no longer responds, you can select End Task from the Task Manager to close the
program without restarting the computer.
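
The same thing can also be done from a command prompt, which is handy for scripting or when the GUI itself is unresponsive. A minimal sketch using the built-in tasklist and taskkill utilities (the process name and PID below are only examples):

rem List running processes and filter for a name
tasklist | findstr /i notepad

rem Force-terminate a process by image name
taskkill /IM notepad.exe /F

rem Force-terminate a process by its process ID
taskkill /PID 4321 /F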

Resource Monitor (Resmon) is a system application included in Windows Vista and later versions of
Windows that allows users to look at the presence and allocation of resources on a computer. This
application allows administrators and other users to determine how system resources are being used by a
particular hardware setup.

How to start Resource Monitor

Users and administrators have several options to start Resource Monitor. It is included in several versions
of Windows, and some options to start the tool are only available in select versions of the operating
system.

The first two methods should work on all versions of Windows that are supported by Microsoft.

1. Windows-R to open the run box. Type resmon.exe, and hit the Enter-key.
2. Windows-R to open the run box. Type perfmon.exe /res, and hit the Enter-key.
3. On Windows 10: Start → All Apps → Windows Administrative Tools → Resource Monitor
4. Old Windows: Start → All Programs → Accessories → System Tools → Resource Monitor
5. Open Task Manager with Ctrl+Shift+Esc→ Performance tab, click open Resource Monitor.

Figure 4.3. Opening Resource Monitor from Task Manager

The Resource Monitor interface looks the same on Windows 7, 8.1 and 10. The program uses tabs to
separate data and loads an overview when you start it. Overview, CPU, Memory, Disk, and Network are
the five tabs of the program, and each lists the processes that use the corresponding resource.

The sidebar displays graphs that highlight the CPU, Disk, Network, and Memory use over a period of 60
seconds.

You can hide and show elements with a click on the arrow icon in title bars. Another option that you have
to customize the interface is to move the mouse cursor over dividers in the interface to drag the visible
area. Use it to increase or decrease the visible area of the element.

You may want to hide the graphs, for instance, to make more room for more important data, and run the
Resource Monitor window at as large a size as possible.

The overview tab is a good starting point, as it gives you an overview of the resource usage. It highlights
CPU and memory usage, disk utilization, and network use in real-time.

Each particular listing offers a wealth of information. The CPU box lists process names and IDs, the
network box IP addresses and data transfers, the memory box hard faults, and the disk box read and write
operations.

One interesting option is to select one or more processes under CPU in order to apply filters to the Disk,
Network and Memory tabs.

If you select a particular process under CPU, Resource Monitor lists the disk, network and memory usage
of that process only in its interface. This is one of the differences from Task Manager, which offers no
comparable filtering.

Figure 4.4. Sample Screen Shot of Resource Monitor

Monitor CPU Usage with Resource Monitor

You need to switch to the CPU tab if you want to monitor CPU utilization in detail. You find the
processes listing of the overview page there, and also the three new listings Services, Associated Handles
and Associated Modules.

You can filter by processes to display data only for those processes. This is quite handy, as it is a quick
way to see links between processes, services and other files on the system. Note that the graphs are
different from the ones displayed before: the graphs on the CPU tab list the usage of each core, Service
CPU usage, and total CPU usage.

Associated Modules lists files such as dynamic link libraries that are used by a process. Associated
Handles point to system resources such as files or Registry values. This is very specific information, but
it is useful at times: you can run a search for handles, for instance, to find out why you can't delete a file
at that point in time.

Resource Monitor gives you some control over processes and services on the CPU tab. Right-click on any
process to display a context menu with options to end the selected process or entire process tree, to
suspend or resume processes, and to run a search online.

The Services context menu is limited to starting, stopping and restarting services, and to search online for
information.

Processes may be displayed using colors: a red process indicates that it is not responding, and a blue one
that it is suspended.

Memory in Resource Monitor

The Memory tab lists processes just like the CPU tab does, but with a focus on memory usage. It also
features a physical memory view that visualizes the distribution of memory on the Windows machine.

If this is your first time accessing the information, you may be surprised that quite a bit of memory may
be hardware reserved. The graphs highlight the used physical memory, the commit charge, and the hard
faults per second. Each process is listed with its name and process ID, the hard faults, and various
memory related information.

 Commit: Amount of virtual memory reserved by the operating system for the process.
 Working Set: Amount of physical memory currently in use by the process.
 Shareable: Amount of physical memory in use by the process that can be shared with other
processes.
 Private: Amount of physical memory in use by the process that cannot be used by other
processes.

Disk Activity information

The Disk tab of the Windows Resource Monitor lists the disk activity of processes and storage
information. It visualizes the disk usage in total and for each running process. You get a reading of each
process's disk read and write activity, and can use the filtering options to filter by a particular process or
several processes.

The Storage listing at the bottom lists all available drives, the available and total space on each drive, as
well as the active time. The graphs visualize the disk queue length, which indicates the number of pending
requests for that particular disk and is a good way to find out whether disk performance can keep up with
I/O operations.

Network Activity in Resource Monitor

The Network tab lists network activity, TCP connections and listening ports. It lists network activity of
any running process in detail. It is useful, as it tells you right away if processes connect to the Internet.

You do get TCP connection listings that highlight remote servers that processes connect to, the bandwidth
use, and the local listening ports.
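
If you prefer the command line, roughly the same connection information can be gathered with the built-in netstat utility. The output below is abbreviated and the addresses, ports and PIDs are only examples:

C:\> netstat -ano
  Proto  Local Address          Foreign Address        State           PID
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING       912
  TCP    192.168.1.10:49712     203.0.113.10:443       ESTABLISHED     4321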

Bandwidth

Bandwidth describes the maximum data transfer rate of a network. It measures how much data can be sent
over a specific connection in a given amount of time. For example, a gigabit Ethernet connection has a
bandwidth of 1,000 Mbps (125 megabytes per second). An Internet connection via cable modem may
provide 25 Mbps of bandwidth.

While bandwidth is used to describe network speeds, it does not measure how fast bits of data move from
one location to another. Since data packets travel over electronic or fiber optic cables, the speed of each
bit transferred is negligible. Instead, bandwidth measures how much data can flow through a specific
connection at one time.

When visualizing bandwidth, it may help to think of a network connection as a tube and each bit of data
as a grain of sand. If you pour a large amount of sand into a skinny tube, it will take a long time for the
sand to flow through it. If you pour the same amount of sand through a wide tube, the sand will finish
flowing through the tube much faster. Similarly, a download will finish much faster when you have a
high-bandwidth connection rather than a low-bandwidth connection.
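
As a rough, idealized calculation (ignoring protocol overhead and other traffic on the line), the time to download a file is its size divided by the bandwidth. For a 500 MB file, for example:

500 MB × 8 bits per byte = 4,000 megabits
4,000 Mb ÷ 25 Mbps (cable modem) = 160 seconds (about 2.7 minutes)
4,000 Mb ÷ 1,000 Mbps (gigabit Ethernet) = 4 seconds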

Data often flows over multiple network connections, which means the connection with the smallest
bandwidth acts as a bottleneck. Generally, the Internet backbone and connections between servers have
the most bandwidth, so they rarely serve as bottlenecks. Instead, the most common Internet bottleneck is
your connection to your ISP.

Bandwidth vs. Speed

Internet speed is a major concern for any Internet user. Even though Internet speed and data transfer mostly
revolve around bandwidth, your actual Internet speed can differ from what the bandwidth figure suggests.
What tends to make it complicated is that the terms bandwidth, speed, and bandwidth speed are used
interchangeably, but they are actually different things. Most people refer to speed as how long it takes to
upload and download files, videos, livestreams, and other content.

Bandwidth is the size of the pipe or the overall capacity for data. Keep in mind that you could have great
bandwidth and not so great speed if your end system, your network, can‘t handle all of the flow of
information.

The key is making sure everything matches up. If you want to know more about your Internet
performance, you can use an Internet speed test. This can help you see whether your Internet service
provider is delivering the connection you are expecting, or whether there are problems at the network
level with handling the data.

Network bandwidth

Bandwidth usage can also be tracked with a network bandwidth monitor. Network bandwidth is a fixed
commodity, and there are several ways to make the most of it. First, you can control the data flow in your
Internet connection, that is, streamline data from one point to another. Second, you can optimize data so
that it consumes less of the bandwidth that is allocated.

In summary, bandwidth is the amount of information an Internet connection can handle in a given
period. An Internet connection operates much faster or slower depending on whether the bandwidth is
large or small. With a larger bandwidth, data transmission is much faster than over an Internet
connection with a lower bandwidth.

4.1.3. Network Printers


Network printing allows us to use printing resources efficiently. With network printing we first connect
all of our workstations to a network and then we implement a network printer. In general there are two
ways this can be done. In the first method we take a regular printer and plug it into the back of one of the
PCs. In the picture below that PC is named Workstation 1. Then we share that printer on the network by
going to the printer properties in Windows.

Figure 4.5. Sample Shared Printer through a workstation

In this configuration other hosts on the network can send their print jobs through the network to
Workstation 1, which then sends the print job to the print device. This is the cheaper method, but we
depend on Workstation 1, which has to be turned on all the time. If someone is using that computer,
then we depend on that person too. This method is used in home or small office scenarios. To connect to
the shared printer we can use a UNC path in the format: \\computername\sharename.

UNC (Universal Naming Convention) path is a standard for identifying servers, printers and other
resources in a network. It uses double forward slashes (on Unix and Linux) or double backslashes (on
Windows) to precede the name of the computer:
//servername/path (Unix)
\\servername\path (DOS/Windows)

In second method we implement the type of printer that has its own network interface installed (either
wired or wireless). This way we can connect our printer directly to the network so the print jobs can be
sent from workstations directly to that network printer.

Figure 4.6. Shared printer with its own dedicated NIC (Network Interface Card)

The print job doesn't have to go through a workstation as in the first case. To connect to a network-
attached printer we can create a printer object using a TCP/IP port. We use the IP address and port name
information to connect to the printer.
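
As a sketch only (the computer name, share name, IP address and driver name below are made up, and the cmdlets assume a recent Windows client with the PrintManagement PowerShell module), connecting to each kind of printer might look like this:

# First method: connect to a printer shared by Workstation 1
Add-Printer -ConnectionName "\\Workstation1\OfficePrinter"

# Second method: create a TCP/IP port for a network-attached printer, then add it
Add-PrinterPort -Name "IP_192.168.1.50" -PrinterHostAddress "192.168.1.50"
Add-Printer -Name "LobbyPrinter" -DriverName "Generic / Text Only" -PortName "IP_192.168.1.50"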

Print Port

When a client needs to send a print job to the network printer, the client application formats the print job
and sends it to the print driver. Just as with a traditional print job, it is saved in the spool on the local
workstation. Then the job is sent from the spool to the printer. In a traditional setup the computer sends
the job through a parallel or USB cable to the printer. In the network printing setup, the job is redirected:
it goes out through the network card, across the network, and then arrives at the destination network
printer.

Drivers

Each network host that wants to use the network printer must have the corresponding printer driver
installed. When we share a printer in Windows, the current printer driver is automatically delivered to
clients that connect to the shared printer. If the client computers run a different version of Windows, we
can add the necessary printer drivers to the printer object. To add drivers for network users we can use the
‗Advanced‘ and ‗Sharing‘ tab in printer properties.

Print Server

An important component of any network printer that we have is the print server. The print server manages
the flow of documents sent to the printer. Using a print server lets us customize when and how documents
print. There are different types of print servers. In the first scenario, where we have attached an ordinary
printer to our workstation, the printer has no print server hardware built in. In this case the operating
system running on Workstation 1 functions as a print server. It receives the jobs from the other clients,
saves them locally in a directory on the hard drive and spools them off to the printer one at a time as the
printer becomes ready. The computer can fill other roles on the network in addition to being the print
server. Most operating systems include print server software.

Some printers, like our printer from the second scenario, have a built in print server that‘s integrated into
the hardware of the printer itself. It receives the print jobs from the various clients, queues them up, gives
them priority and sends them on through the printing mechanism as it becomes available. We often refer
to this type of print server as internal print server. We use special management software to connect to this
kind of print server and manage print jobs.

Print servers can also be implemented in another way. We can purchase an external print server. The
external print server has one interface that connects to the printer (parallel or USB interface), and it also
has a network jack that plugs into our HUB or switch. It provides all the print server functions, but it's
all built into the hardware of the print server itself. So, when clients send a job to the printer, the jobs are
sent through the network to the hardware print server which then formats, prioritizes, saves them in the
queue, and then spools them off to the printer one at a time as the printer becomes available. Different
operating systems implement servers in different ways, and different external or internal print servers also
function in different ways. Because of that we need to check our documentation to see how to set it up
with our specific hardware or software.

4.2. Remote Administration
Remote administration is an approach being followed to control either a computer system or a network or
an application or all three from a remote location. Simply put, Remote administration refers to any
method of controlling a computer from a remote location. A remote location may refer to a computer in
the next room or one on the other side of the world. It may also refer to both legal and illegal remote
administration. Generally, remote administration is adopted when it is difficult or impractical for a person
to be physically present and do administration at a system's terminal.

4.2.1. Requirements to Perform Remote Administration


Internet connection

One of the fundamental requirements to perform remote administration is network connectivity. Any
computer with an Internet connection, TCP/IP or on a Local Area Network can be remotely administered.

For non-malicious administration, the user must install or enable server software on the host system in
order to be viewed. Then the user/client can access the host system from another computer using the
installed software.

Usually, both systems should be connected to the Internet, and the IP address of the host/server system
must be known. Remote administration is therefore less practical if the host uses a dial-up modem,
which is not constantly online and often has a Dynamic IP.

Connecting: When the client connects to the host computer, a window showing the Desktop of the host
usually appears. The client may then control the host as if he/she were sitting right in front of it.

Windows has a built-in remote administration package called Remote Desktop Connection. A free
cross-platform alternative is VNC, which offers similar functionality.

4.2.2. Common Tasks/Services for which Remote Administration is Used


Generally, remote administration is needed for user management, file system management, software
installation/configuration, network management, network security/firewalls, VPN, infrastructure
design, network file servers, auto-mounting, kernel optimization/recompilation, and so on. The following
are some of the tasks/services for which remote administration may need to be done:

 General
o Controlling one‘s own computer from a remote location (e.g. to access the software or
data on a personal computer from an Internet café).
 ICT Infrastructure Management
o Remote administration essentially needed to administer the ICT infrastructure such as the
servers, the routing and switching components, the security devices and other such
related.
 Shutdown
o Shutting down or rebooting a computer over a network.
 Accessing Peripherals

o Using a network device, like a printer

o retrieving streaming data, much like a CCTV system.
 Modifying
o Editing another computer‘s Registry settings,
o remotely connect to another machine to troubleshoot issues
o modifying system services,
o installing software on another machine,
o modifying logical groups.
 Viewing
o remotely run a program or copy a file
o remotely assisting others,
o supervising computer or Internet usage (monitor the remote computers activities)
o access to a remote system‘s ―Computer Management‖ snap-in.
 Hacking
o Computers infected with malware, such as Trojans, sometimes open back doors into
computer systems which allow malicious users to hack into and control the computer.
Such users may then add, delete, modify or execute files on the computer to their own
ends.

4.2.3. Remote Desktop Solutions


Most people who are used to a Unix-style environment know that a machine can be reached over the
network at the shell level using utilities like telnet or ssh. And some people realize that X Windows
output can be redirected back to the client workstation. But many people don‘t realize that it is easy to use
an entire desktop over the network. The following are some of proprietary and open source applications
that can be used to achieve this.

SSH (Secure Shell): Secure Shell (SSH) is a cryptographic network protocol for secure data
communication between two networked computers; it connects, via a secure channel over an insecure
network, a server and a client (running SSH server and SSH client programs, respectively). The protocol
specification distinguishes between two major versions, referred to as SSH-1 and SSH-2.

The best-known application of the tool is for access to shell accounts on Unix-like operating systems
(GNU/Linux, OpenBSD, FreeBSD), but it can also be used in a similar fashion for accounts on Windows.

SSH is generally used to log into a remote machine and execute commands. It also supports tunneling and
forwarding of TCP ports and X11 connections, and it can transfer files using the associated SSH File
Transfer Protocol (SFTP) or Secure Copy (SCP) protocols. SSH uses the client-server model.

SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of
exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path
over the Internet, through a firewall to a virtual machine.
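
A few typical SSH invocations are shown below as a sketch; the user name and host name are placeholders:

# Log in to a remote machine and get an interactive shell
ssh admin@server.example.com

# Run a single command remotely and return
ssh admin@server.example.com uptime

# Copy a file to the remote machine over the secure channel (SCP)
scp backup.tar.gz admin@server.example.com:/var/backups/

# Tunnel: forward local port 8080 to port 80 on the remote machine
ssh -L 8080:localhost:80 admin@server.example.com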

OpenSSH (OpenBSD Secure Shell): OpenSSH is a tool providing encrypted communication sessions
over a computer network using the SSH protocol. It was created as an open source alternative to the
proprietary Secure Shell software suite offered by SSH Communications Security.

Telnet: Telnet is used to connect to a remote computer over a network. It provides a bidirectional
interactive text-oriented communication facility using a virtual terminal connection on the Internet or on
local area networks. Telnet provides a command-line interface on a remote host. Most network equipment and
operating systems with a TCP/IP stack support a Telnet service for remote configuration (including
systems based on Windows NT). Telnet is used to establish a connection to Transmission Control
Protocol (TCP) on port number 23, where a Telnet server application (telnetd) is listening.

Experts in computer security recommend that the use of Telnet for remote logins should be
discontinued under all normal circumstances, for the following reasons:

 Telnet, by default, does not encrypt any data sent over the connection (including passwords), and
so it is often practical to eavesdrop on the communications and use the password later for
malicious purposes; anybody who has access to a router, switch, hub or gateway located on the
network between the two hosts where Telnet is being used can intercept the packets passing by
and obtain login, password and whatever else is typed with a packet analyzer.
 Most implementations of Telnet have no authentication that would ensure communication is
carried out between the two desired hosts and not intercepted in the middle.
 Several vulnerabilities have been discovered over the years in commonly used Telnet daemons.

rlogin: rlogin is a utility for Unix-like computer operating systems that allows users to log in on another
host remotely over the network, communicating through TCP port 513.

rlogin has several serious security problems: all information, including passwords, is transmitted
unencrypted, and rlogin is vulnerable to interception. Due to these serious security problems, rlogin was
rarely used across untrusted networks (like the public Internet) and even in closed networks.

Rsh: The remote shell (rsh) can connect to a remote host across a computer network. The remote system to
which rsh connects runs the rsh daemon (rshd). The daemon typically uses the well-known Transmission
Control Protocol (TCP) port number 514. From a security point of view, it is not recommended.

VNC (Virtual Network Computing): VNC is a remote display system which allows the user to view the
desktop of a remote machine anywhere on the Internet. It can also be directed through SSH for security.

Install the VNC server on the remote computer (the server) and the VNC client on the local PC. Setup is
extremely easy and the server is very stable. On the client side, set the resolution and connect to the IP
address of the VNC server.

FreeNX: FreeNX allows you to access a desktop from another computer over the Internet. You can use it
to log in graphically to a desktop from a remote location. One example of its use would be to have a FreeNX
server set up on a home computer, and to log in graphically to that home computer from a work computer
using a FreeNX client.

Wireless Remote Administration: Remote administration software has recently started to appear on
wireless devices such as the BlackBerry, Pocket PC, and Palm devices, as well as some mobile phones.

Generally these solutions do not provide the full remote access seen on software such as VNC or
Terminal Services, but do allow administrators to perform a variety of tasks, such as rebooting computers,
resetting passwords, and viewing system event logs, thus reducing or even eliminating the need for
system administrators to carry a laptop or be within reach of the office.

AetherPal and Netop are some of the tools used for full wireless remote access and administration on
Smartphone devices.

Wireless remote administration is usually the only method to maintain man-made objects in space.

Remote Desktop Connection (RDC)

Remote Desktop Connection (RDC) is a Microsoft technology that allows a local computer to connect to
and control a remote PC over a network or the Internet. It is done through a Remote Desktop Service
(RDS) or a terminal service that uses the company‘s proprietary Remote Desktop Protocol (RDP).
Remote Desktop Connection is also known simply as Remote Desktop.

Typically, RDC requires the remote computer to enable RDS and to be powered on. The connection is
established when a local computer requests a connection to a remote computer using RDC-enabled
software. On authentication, the local computer has full or restricted access to the remote computer.
Besides desktop computers, servers and laptops, RDC also supports connecting to virtual machines. This
technology was introduced in Windows XP.
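
On the client side, the RDC program can also be started from the Run box or a command prompt with mstsc.exe; the host name below is only an example:

rem Connect to a specific remote computer
mstsc /v:server01.example.com

rem Connect in full-screen mode
mstsc /v:server01.example.com /f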

Alternatively referred to as remote administration, remote admin is a way to control another computer
without physically being in front of it. Below are examples of how remote administration could be used.

 Remotely run a program or copy a file.


 Remotely connect to another machine to troubleshoot issues.
 Remotely shutdown a computer.
 Install software to another computer.
 Monitor the remote computers activity.

Remote Admin allows system administrators or support personnel to remotely access Officelinx Admin
from their own workstation, eliminating the need to be in front of the server in order to perform
administrative functions.

4.2.4. Disadvantages of Remote Administration


Remote administration has several disadvantages alongside its advantages. The first and foremost
disadvantage is security. Generally, certain ports must be open at the server level to allow remote
administration, and hackers and attackers can take advantage of these open ports to compromise the
system. It is advised that remote administration be used only in emergency or essential situations. In
normal situations, it is ideal to block the ports to avoid remote administration.
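
For example, on a Windows machine the port used by Remote Desktop (TCP 3389) can be closed with the built-in firewall when remote administration is not needed; treat the rule name below as an example:

rem Block inbound Remote Desktop connections
netsh advfirewall firewall add rule name="Block RDP" dir=in action=block protocol=TCP localport=3389

rem Remove the rule again when remote administration is required
netsh advfirewall firewall delete rule name="Block RDP"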

4.3. Performance
 Redundant Array of Inexpensive (or Independent) Disks (RAID)

RAID is a data storage virtualization technology that combines multiple physical disk drive components
into one or more logical units for the purposes of data redundancy, performance improvement, or
both. This was in contrast to the previous concept of highly reliable mainframe disk drives referred to as
Single Large Expensive Disk (SLED).

Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the
required level of redundancy and performance. The different schemes, or data distribution layouts, are
named by the word ―RAID‖ followed by a number, for example RAID 0 or RAID 1. Each scheme, or
RAID level, provides a different balance among the key goals: reliability, availability, performance,
and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read
errors, as well as against failures of whole physical drives.

4.3.1.1. Standard levels

Originally, there were five standard levels of RAID, but many variations have evolved, including several
nested levels and many non-standard levels (mostly proprietary). RAID levels and their associated data
formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID
Disk Drive Format (DDF) standard:

RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned volume, the capacity of
a RAID 0 volume is the same; it is the sum of the capacities of the drives in the set. But because striping
distributes the contents of each file among all drives in the set, the failure of any drive causes the entire
RAID 0 volume and all files to be lost. In comparison, a spanned volume preserves the files on the
unfailing drives. The benefit of RAID 0 is that the throughput of read and write operations to any file is
multiplied by the number of drives because, unlike spanned volumes, reads and writes are done
concurrently.

The cost is increased vulnerability to drive failures: since the failure of any drive in a RAID 0 setup causes
the entire volume to be lost, the average failure rate of the volume rises with the number of attached drives.

Figure 4.1. RAID 0 setup


NOTES: In data storage, data striping is the technique of segmenting logically sequential data, such as a file, so that
consecutive segments are stored on different physical storage devices. It is useful when a processor requests data more
quickly than a single storage device can provide it. By spreading segments across multiple devices which can be
accessed concurrently, total data throughput is increased.
In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real
time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete
logical representation of separate volume copies.
Parity stripe or parity disk in a RAID array provides error-correction. Parity bits are written at the rate of one
parity bit per n bits, where n is the number of disks in the array. When a read error occurs, each bit in the error
region is recalculated from its set of n bits. In this way, using one parity bit creates "redundancy" for a region from
the size of one bit to the size of one disk.

RAID 1 consists of data mirroring, without parity or striping . Data is written identically to two or
more drives, thereby producing a ―mirrored set‖ of drives. Thus, any read request can be serviced by any
drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that
accesses the data first (depending on its seek time and rotational latency), improving performance.
Sustained read throughput, if the controller or software is optimized for it, approaches the sum of
throughputs of every drive in the set, just as for RAID 0.

Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write
throughput is always slower because every drive must be updated, and the slowest drive limits the write
performance. The array continues to operate as long as at least one drive is functioning.

Figure 4.2. RAID 1 setup

Figure 4.3. RAID 2 setup

RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is
synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity
is calculated across corresponding bits and stored on at least one parity drive. This level is of historical
significance only; as of 2014 it is not used by any commercially available system.

RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized
and data is striped such that each sequential byte is on a different drive. Parity is calculated across
corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is
not commonly used in practice. The following figure shows a RAID 3 setup of 6-byte blocks and two
parity bytes, shown are blocks of data in different colors.

Figure 4.4. RAID 3 setup

RAID 4 consists of block-level striping with dedicated parity. The main advantage of RAID 4 over
RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole
group of data drives, while in RAID 4 one I/O read operation does not have to spread across all data
drives. As a result, more I/O operations can be executed in parallel, improving the performance of small
transfers. The figure below shows a setup of RAID 4 with dedicated parity disk with each color
representing the group of blocks in the respective parity block (a strip).

Figure 4.5. RAID 4 setup

RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is
distributed among the drives, requiring all drives but one to be present to operate. Upon failure of a
single drive, subsequent reads can be calculated from the distributed parity such that no data is lost.
RAID 5 requires at least three disks. Like all single-parity concepts, large RAID 5 implementations are
susceptible to system failures because of trends regarding array rebuild time and the chance of drive
failure during rebuild. Rebuilding an array requires reading all data from all disks, opening a chance for a
second drive failure and the loss of the entire array. The figure below shows a RAID 5 layout, with each
color representing a group of data blocks and its associated parity block (a stripe).

Figure 4.6. RAID 5 layout

RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault
tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-
availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four
disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the
failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and
manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the
drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of
RAID 5. RAID 10 also minimizes these problems. The figure below shows a RAID 6 setup, which is
identical to RAID 5 other than the addition of a second parity block.

Figure 4.7. RAID 6 setup
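
To make the trade-offs concrete: with four 2 TB drives, RAID 0 gives 8 TB with no redundancy, two mirrored pairs (RAID 1+0) give 4 TB, RAID 5 gives 6 TB (one drive's worth of parity), and RAID 6 gives 4 TB (two drives' worth of parity). On Linux, software RAID is typically managed with mdadm; the sketch below assumes three spare, empty disks (/dev/sdb, /dev/sdc and /dev/sdd are placeholders for your own devices):

# Create a three-disk RAID 5 array
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Watch the array build and inspect its state
cat /proc/mdstat
mdadm --detail /dev/md0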

4.4 Review Questions


1. Discuss why we need resource monitoring in our infrastructure, and what are the resources that
we are going to monitor.
2. Discuss the different kinds of resource monitoring tools that are already available in Windows
operating systems.
3. Besides the free and already available resource monitoring and management tools mentioned
above, discuss some of other well-known free and commercial tools available for system
administrators.
4. Why is remote administration needed? Explain.
5. List the different network clients.
6. What are the different remote administration tools?

UNIT - V

5.1. Introduction
5.1.1What is Unix/Linux?
The Unix OS is a set of programs that act as a link between the computer and the user. The computer
programs that allocate the system resources and coordinate all the details of the computer's internals are
called the operating system or the kernel. Users communicate with the kernel through a program known
as the shell. The shell is a command line interpreter; it translates commands entered by the user and
converts them into a language that is understood by the kernel.

Linux is a family of open-source Unix-like operating systems based on the Linux kernel. It
was initially released by Linus Torvalds on September 17, 1991. It is a free and open-source operating
system, and its source code can be modified and distributed by anyone, commercially or non-
commercially, under the GNU General Public License (GNU GPL).

Initially, Linux was created for personal computers and gradually it was used in other machines like
servers, mainframe computers, supercomputers, etc. Nowadays, Linux is also used in embedded systems
like routers, automation controls, televisions, digital video recorders, video game consoles, smartwatches,
etc. The biggest success of Linux is Android (operating system) which is based on the Linux kernel that is
running on smartphones and tablets. Due to android OS, Linux has the largest installed base of all
general-purpose operating systems. Linux is generally packaged in a Linux distribution.

5.1.2. Linux Distribution


A Linux distribution is an operating system made up of a collection of software built around the Linux
kernel; in other words, a distribution contains the Linux kernel plus supporting libraries and software.
You get a Linux-based operating system by downloading one of the Linux distributions, which are
available for different types of devices such as embedded devices and personal computers.
More than 600 Linux distributions exist; some of the popular ones are:

 MX Linux
 Manjaro
 Linux Mint
 Elementary
 Ubuntu
 Debian
 Solus
 Fedora
 OpenSUSE
 Arch Linux
 Kubuntu

5.1.3. Unix/Linux Architecture


Here is a basic block diagram of a Unix system.

Figure 5.1. Block diagram of Unix system

The main concepts that unite all the versions of Unix are the following four basics:

 Kernel: The kernel is the heart of the operating system. It interacts with the hardware and performs
most of the tasks, like memory management, task scheduling and file management.
 Shell: The shell is the utility that processes your requests. When you type in a command at your
terminal, the shell interprets the command and calls the program that you want. The shell uses
standard syntax for all commands. C Shell, Bourne Shell and Korn Shell are the most famous
shells which are available with most of the Unix variants.
 Commands and Utilities: There are various commands and utilities which you can make use of
in your day-to-day activities. cp, mv, cat and grep, etc. are a few examples of commands and
utilities; a short terminal session using some of them appears after this list. There are over 250
standard commands plus numerous others provided through 3rd-party software. All the commands
come along with various options.
 Files and Directories: All the data of Unix is organized into files. All files are then organized
into directories. These directories are further organized into a tree-like structure called the
filesystem.
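
A short, self-explanatory terminal session with some of these commands (the file and directory names are just examples):

$ mkdir -p backup                       # make a backup directory if it does not exist
$ cp notes.txt backup/notes.txt         # copy a file into it
$ mv backup/notes.txt backup/notes.old  # rename (move) the copy
$ cat notes.txt                         # print the file's contents on the terminal
$ grep -i linux notes.txt               # search the file for a pattern, ignoring case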

5.1.4 Open Source


The idea behind Open Source software is rather simple: when programmers can read, distribute and
change code, the code will mature. People can adapt it, fix it, debug it, and they can do it at a speed that
dwarfs the performance of software developers at conventional companies. This software will be more
flexible and of a better quality than software that has been developed using the conventional channels,
because more people have tested it in more different conditions than the closed software developer ever
can.

While Linux is probably the most well-known Open Source initiative, there is another project that
contributed enormously to the popularity of the Linux operating system. This project is called SAMBA,
and its achievement is the reverse engineering of the Server Message Block (SMB)/Common Internet File
System (CIFS) protocol used for file- and print-serving on PC-related machines, natively supported by
MS Windows NT and OS/2, and Linux. Packages are now available for almost every system and provide
interconnection solutions in mixed environments using MS Windows protocols: Windows-compatible (up
to and including WinXP) file- and print-servers.

5.1.5. Properties of Linux


Linux Pros

A lot of the advantages of Linux are a consequence of Linux‘ origins, deeply rooted in UNIX, except for
the first advantage, of course:

 Linux is free: If you want to spend absolutely nothing, Linux can be downloaded in its entirety
from the Internet completely for free. No registration fees, no costs per user, free updates, and
freely available source code in case you want to change the behavior of your system. The license
commonly used is the GNU Public License (GPL), and it says anybody who may want to do so,
has the right to change Linux and eventually to redistribute a changed version, on the one
condition that the code is still available after redistribution. In practice, you are free to

grab a kernel image, for instance to add support for Amharic voice recognition and sell your new code, as
long as your customers can still have a copy of that code.

 Linux is portable to any hardware platform: A vendor who wants to sell a new type of
computer and who doesn‘t know what kind of OS his new machine will run (say the CPU in your
car or washing machine), can take a Linux kernel and make it work on his hardware, because
documentation related to this activity is freely available.
 Linux was made to keep on running: a Linux system expects to run without rebooting all the
time. That is why a lot of tasks are being executed at night or scheduled automatically for other
calm moments, resulting in higher availability during busier periods and a more balanced use of
the hardware. This property allows for Linux to be applicable also in environments where people
don‘t have the time or the possibility to control their systems night and day.
 Linux is secure and versatile: The security model used in Linux is based on the UNIX idea of
security, which is known to be robust and of proven quality. But Linux is not only fit for use as a
fort against enemy attacks from the Internet: it will adapt equally to other situations, utilizing the
same high standards for security. Your development machine or control station will be as secure
as your firewall.
 Linux is scalable: From a Palmtop with 2 MB of memory to a petabyte storage cluster with
hundreds of nodes: add or remove the appropriate packages and Linux fits all. You don‘t need a
supercomputer anymore, because you can use Linux to do big things using the building blocks
provided with the system. If you want to do little things, such as making an operating system for
an embedded processor or just recycling your old 486, Linux will do that as well.
 The Linux OS and most Linux applications have very short debug-times: Because Linux has
been developed and tested by thousands of people, both errors and people to fix them are usually
found rather quickly. It sometimes happens that there are only a couple of hours between
discovery and fixing of a bug.

Linux Cons

 There are far too many different distributions: At first glance, the amount of Linux
distributions can be frightening, or ridiculous, depending on your point of view. But it also means
that everyone will find what he or she needs. You don't need to be an expert to find a suitable
release.

 Linux is not very user friendly and confusing for beginners: It must be said that Linux, at
least the core system, is less user-friendly to use than MS Windows and certainly more difficult
than MacOS, but… In light of its popularity, considerable effort has been made to make Linux
even easier to use, especially for new users. More information is being released daily to help fill
the gap in documentation available to users at all levels.
 Is an Open Source product trustworthy? How can something that is free also be reliable?
Linux users have the choice whether to use Linux or not, which gives them an enormous
advantage compared to users of proprietary software, who don‘t have that kind of freedom. After
long periods of testing, most Linux users come to the conclusion that Linux is not only as good,
but in many cases better and faster than the traditional solutions.

5.1.6. Linux and GNU


Although there are a large number of Linux implementations, you will find a lot of similarities in the
different distributions. Linux may appear different depending on the distribution, your hardware and
personal taste, but the fundamentals on which all graphical and other interfaces are built, remain the same.
The Linux system is based on GNU tools (Gnu‘s Not UNIX), which provide a set of standard ways to
handle and use the system.

All GNU tools are open source, so they can be installed on any system. Most distributions offer pre-
compiled packages of most common tools, such as RPM packages on RedHat and Debian packages (also
called deb or dpkg) on Debian, so you needn‘t be a programmer to install a package on your system.
However, if you are and like doing things yourself, you will enjoy Linux all the better, since most
distributions come with a complete set of development tools, allowing installation of new software purely
from source code. This setup also allows you to install software even if it does not exist in a pre-packaged
form suitable for your system.

The Linux kernel (the bones of your system) is not part of the GNU project but uses the same license
as GNU software. A great majority of utilities and development tools (the meat of your system), which
are not Linux-specific, are taken from the GNU project. Because any usable system must contain both the
kernel and at least a minimal set of utilities, some people argue that such a system should be called a
GNU/Linux system.

5.1.7. About Linux Files and the File System


A simple description of the UNIX system, also applicable to Linux, is this: ―On a UNIX system,
everything is a file; if something is not a file, it is a process.‖ This statement is true because there are
special files that are more than just files (named pipes and sockets, for instance), but to keep things
simple, saying that everything is a file is an acceptable generalization. A Linux system, just like UNIX,
makes no difference between a file and a directory, since a directory is just a file containing names of
other files. Programs, services, texts, images, and so forth, are all files. Input and output devices, and
generally all devices, are considered to be files, according to the system.

Sorts of Files

Most files are just files, called regular files; they contain normal data, for example text files, executable
files or programs, input for or output from a program and so on. The -l option to ls displays the file type,
using the first character of each output line:

hello@it4th:~$ ls -l
drwxrwxr-x 6 hello hello 4096 Jul 19 18:25 Android
-rw-rw-r-- 1 hello hello  260 Nov 12 11:39 hello.vbs

The following table gives an overview of the characters determining the file type:

Symbol Meaning
-       Regular file
d       Directory
l       Link
c       Special file
s       Socket
p       Named pipe
b       Block device

Linux File System

For convenience, the Linux file system is usually thought of in a tree structure as shown below:

Figure 5.2. Linux file system layout

This is a layout from a sample Linux system. Depending on the system administrator, the operating
system and the mission of the UNIX machine, the structure may vary, and directories may be left out or
added at will. The names are not even required; they are only a convention.

The tree of the file system starts at the trunk or slash, indicated by a forward slash (/). This directory,
containing all underlying directories and files, is also called root directory or ―the root‖ of the file
system.

Directory Content
/bin Common programs, shared by the system, the system administrator and the users.
/boot The startup files and the kernel, vmlinuz. In some recent distributions also grub data. Grub is the
GRand Unified Bootloader and is an attempt to get rid of the many different boot-loaders we know today.
/dev Contains references to all the CPU peripheral hardware, which are represented as files with special
properties.
/etc Most important system configuration files are in /etc, this directory contains data similar to those in
the Control Panel in Windows.
/home Home directories of the common users.
/lib Library files, includes files for all kinds of programs needed by the system and the users.
/lost+found Every partition has a lost+found directory; files that were saved during failures end up here.
/misc For miscellaneous purposes.
/mnt Standard mount point for external file systems, e.g. a CD-ROM or a digital camera.
/opt Typically contains extra and third party software.
/proc Virtual file system containing system resources information. You can type the man proc command in a
terminal to see more information about the meaning of the files in proc.
/root The administrative user‘s home directory. Mind the difference between /, the root directory and /root,
the home directory of the root user.
/sbin Programs for use by the system and the system administrator.
/tmp Temporary space to be used by the system, and its contents will be cleaned upon reboot, so don‘t use
this for saving any work!
/usr Programs, libraries, documentation etc. for all user-related programs.
/var Storage for all variable and temporary files created by users, such as log files, temporary files
downloaded from the Internet, or to keep an image of a CD before burning it.

Table 5.1. Subdirectories of the root directory

Absolute/Relative Pathnames

Directories are arranged in a hierarchy with root (/) at the top. The position of any file within the
hierarchy is described by its pathname. Elements of a pathname are separated by a single / (forward
slash). A pathname is absolute, if it is described in relation to root, thus absolute pathnames always
begin with a / (forward slash). Following are some examples of absolute filenames:

/etc/passwd

/home/hello/programming/notes

A pathname can also be relative to your current working directory. Relative pathnames never begin
with /. Relative to user hello‘s home directory, some pathnames might look like this:

programming/notes
personal/reserved

To determine where you are within the filesystem hierarchy at any time, enter the command pwd to print
the current working directory:

$pwd

/home/hello/Desktop
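
The cd command accepts both forms; the directories below are the example paths used above:

$ cd /home/hello/programming/notes    # absolute: the path starts at the root (/)
$ cd ../../personal/reserved          # relative: the path starts at the current directory
$ pwd
/home/hello/personal/reserved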

NOTE: There are two kinds of major partitions on a Linux system:


Data partition: normal Linux system data, including the root partition containing all the data to start up and run the system; and
Swap partition: expansion of the computer's physical memory, extra memory on hard disk.

The file system in reality

For most users and for most common system administration tasks, it is enough to accept that files and
directories are ordered in a tree-like structure. The computer, however, doesn‘t understand a thing about
trees or tree-structures. Every partition has its own file system. By imagining all those file systems
together, we can form an idea of the tree-structure of the entire system, but it is not as simple as that. In a
file system, a file is represented by an inode, a kind of serial number containing information about the
actual data that makes up the file: to whom this file belongs, and where is it located on the hard disk.
Every partition has its own set of inodes; throughout a system with multiple partitions, files with the same
inode number can exist.

Each inode describes a data structure on the hard disk, storing the properties of a file, including the
physical location of the file data. When a hard disk is initialized to accept data storage, usually during
the initial system installation process or when adding extra disks to an existing system, a fixed number of
inodes per partition is created. This number will be the maximum amount of files, of all types (including
directories, special files, links etc.) that can exist at the same time on the partition. We typically count on
having 1 inode per 2 to 8 kilobytes of storage.

At the time a new file is created, it gets a free inode. In that inode is the following information:

 Owner and group owner of the file


 File type (regular, directory, …)
 Permissions on the file
 Date and time of creation, last read and change
 Date and time this information has been changed in the inode
 Number of links to this file
 File size
 An address defining the actual location of the file data.

The only information not included in an inode, is the file name and directory. These are stored in the
special directory files. By comparing file names and inode numbers, the system can make up a tree-
structure that the user understands. Users can display inode numbers using the -i option to ls (ls -i). The
inodes have their own separate space on the disk.
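
For example (the inode numbers shown here are just sample values and will differ on your system):

$ ls -i /etc/hosts
 917529 /etc/hosts
$ stat -c '%i  %n' /etc/hosts      # GNU stat can print the inode (%i) and the file name (%n)
917529  /etc/hosts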

5.2. Linux Systems and Network Concepts


5.2.1. What is Networking?
A network consists of multiple machines (computers) that are connected together and share all kinds of
information with each other. The connections in the network can be made through waves and signals or
through wires, depending on which is most convenient for the work and the type of information that
needs to be shared.

In a network, multiple machines (hosts) are connected to a communication subnet that allows dialog
between them. They can communicate in two basic ways:

 Through point-to-point channels (PPP)


 Through broadcast channels

When we talk about network architecture, we are talking about the set of layers and protocols of a
computer network.

5.2.2 Network Configuration and Information


5.2.2.1. Configuration of network interfaces

All the big, user-friendly Linux distributions come with various graphical tools, allowing for easy setup of
the computer in a local network, for connecting it to an Internet Service Provider or for wireless access.
These tools can be started up from the command line or from a menu:

 Ubuntu: configuration is done by selecting System→Administration→Networking.
 RedHat Linux comes with redhat-config-network, which has both a graphical and a text mode interface.
 SuSE's YaST or YaST2 is an all-in-one configuration tool.
 Mandrake/Mandriva comes with a Network and Internet Configuration Wizard, which is preferably started up from Mandrake's Control Center.
 On Gnome systems: gnome-network-preferences.

Your system documentation provides plenty of advice and information about availability and use of tools.

Information that you will need to provide:

 For connecting to the local network, i.e. with your home computers, or at work: hostname,
domainname and IP address. If you want to set up your own network, best do some more reading
first. At work, this information is likely to be given to your computer automatically when you
boot up. When in doubt, it is better not to specify any information than making it up.
 For connecting to the Internet: username and password for your ISP, and the telephone number
when using a modem. Your ISP usually automatically assigns you an IP address and all
the other things necessary for your Internet applications to work.

5.2.2.2. Network configuration files

The graphical helper tools edit a specific set of network configuration files, using a couple of basic
commands. The exact names of the configuration files and their location in the file system is largely
dependent on your Linux distribution and version. However, a couple of network configuration files are
common on all UNIX systems:

/etc/hosts

The /etc/hosts file always contains the localhost IP address, 127.0.0.1, which is used for interprocess communication. Never remove this line! It sometimes also contains addresses of additional hosts, which can be contacted without using an external naming service such as DNS (the Domain Name System).

A sample hosts file for a small home network:

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain   localhost
192.168.52.10   wsu.edu.et              WSU
Read more in man hosts.

/etc/resolv.conf

The /etc/resolv.conf file configures access to a DNS server. This file contains your domain name and the
name server(s) to contact:

search mylan.com
nameserver 193.134.20.4


Read more in the resolv.conf man page.

/etc/nsswitch.conf

The /etc/nsswitch.conf file defines the order in which to contact different name services. For Internet use,
it is important that dns shows up in the ―hosts‖ line:

$ grep hosts /etc/nsswitch.conf
hosts:      files dns

This instructs your computer to look up hostnames and IP addresses first in the /etc/hosts file, and to
contact the DNS server if a given host does not occur in the local hosts file. Other possible name services
to contact are LDAP, NIS and NIS+.

More in man nsswitch.conf.

5.2.2.3. Network configuration commands

The ip Command

The distribution-specific scripts and graphical tools are front-ends to ip (or ifconfig and route on older
systems) to display and configure the kernel‘s networking configuration. The ip command is used for
assigning IP addresses to interfaces, for setting up routes to the Internet and to other networks, for
displaying TCP/IP configurations etcetera. The following commands show IP address and routing
information:
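For example, on most current systems the following two invocations display this information (output omitted here; interface names and addresses will vary from machine to machine):

$ ip addr show     # interface addresses: look for the "inet" and "ether" fields
$ ip route show    # the kernel routing table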

Things to note:

 Two network interfaces are shown, even on a system that has only one network interface card:
o "lo" is the local loop, used for internal network communication. Don't ever change the local loop configuration, or your machine will start malfunctioning!
o "eth0" is a common name for a real interface. Wireless interfaces are usually named "wlan0"; modem interfaces "ppp0", but there might be other names as well.
 IP addresses, marked with "inet":
o the local loop always has 127.0.0.1,
o the physical interface can have any other combination.
 The hardware address of your interface, which might be required as part of the authentication procedure to connect to a network, is marked with "ether". The local loop has 6 pairs of all zeros; a physical interface has 6 pairs of hexadecimal characters, of which the first 3 pairs are vendor-specific.

The ifconfig Command

While ip is the more modern way to configure a Linux system, ifconfig is still very popular. Use it without options to display network interface information:
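A typical call might look like this (the output is abbreviated and purely illustrative; your addresses will differ):

$ ifconfig
eth0    Link encap:Ethernet  HWaddr 00:0c:29:3b:aa:12
        inet addr:192.168.42.15  Bcast:192.168.42.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MULTICAST  MTU:1500
lo      Link encap:Local Loopback
        inet addr:127.0.0.1  Mask:255.0.0.0
        UP LOOPBACK RUNNING  MTU:65536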

Here, too, we note the most important aspects of the interface configuration:

 The IP address is marked with "inet addr".
 The hardware address follows the "HWaddr" tag.

Both ifconfig and ip display more detailed configuration information and a number of statistics about each interface and, maybe most important, whether it is "UP" and "RUNNING".

Network Interface Names

On a Linux machine, the device name lo or the local loop is linked with the internal 127.0.0.1 address.
The computer will have a hard time making your applications work if this device is not present; it is
always there, even on computers which are not networked.

The first ethernet device, eth0 in the case of a standard network interface card, points to your local LAN
IP address. Normal client machines only have one network interface card. Routers, connecting networks
together, have one network device for each network they serve. If you use a modem to connect to the
Internet, your network device will probably be named ppp0.

There are many more names, for instance for Virtual Private Network interfaces (VPNs), and multiple
interfaces can be active simultaneously, so that the output of the ifconfig or ip commands might become
quite extensive when no options are used. Even multiple interfaces of the same type can be active. In that case, they are numbered sequentially: the first will get the number 0, the second will get a suffix of 1, the third will get 2, and so on. This is the case on many application servers, on machines
which have a failover configuration, on routers, firewalls and many more.

Checking the Host Configuration with netstat

Apart from the ip command for displaying the network configuration, there's the common netstat command, which has a lot of options and is generally useful on any UNIX system. Routing information can be displayed with the -nr option to the netstat command:
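An illustrative routing table, matching the description that follows (addresses are examples only):

$ netstat -nr
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.42.1    0.0.0.0         UG        0 0          0 eth0
192.168.42.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0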

When this machine tries to contact a host that is on another network than its own, indicated by the line
starting with 0.0.0.0, it will send the connection requests to the machine (router) with IP address
192.168.42.1, and it will use its primary interface, eth0, to do this.

Hosts that are on the same network, the line starting with 192.168.42.0, will also be contacted through the
primary network interface, but no router is necessary, the data are just put on the network.

Machines can have much more complicated routing tables than this one, with lots of different "Destination-Gateway" pairs to connect to different networks. If you have the occasion to connect to an application server, for instance at work, it is most instructive to check the routing information.

The host Command: To display information on hosts or domains, use the host command:

The ping Command: To check if a host is alive, use ping. If your system is configured to send more than
one packet, interrupt ping with the Ctrl+C key combination:

The traceroute Command: To check the route that packets follow to a network host, use the traceroute
command:
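Typical invocations of the last three commands look like this (the host name reuses the earlier /etc/hosts example; output is omitted):

$ host wsu.edu.et          # look up the IP address (and other DNS records) of a host
$ ping -c 4 wsu.edu.et     # send 4 echo requests and report round-trip times
$ traceroute wsu.edu.et    # list the routers a packet passes on its way to the host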

5.3. Linux User Administration


5.3.1 The Privileged root Account
For many tasks, the system administrator needs special privileges. Accordingly, he can make use of a
special user account called root. As root, a user is the so-called super user. In brief: He may do
anything.

The normal file permissions and security precautions do not apply to root. The root account has nearly unbounded access (unlimited privileges) to all data, devices and system components, and can institute system changes that all other users are prohibited from making by the Linux kernel's security mechanisms. This means that, as root, you can change every file on the system no matter who it belongs to.

Obtaining Administrator Privileges

There are two ways of obtaining administrator privileges:

1. You can log in as user root directly. After entering the correct root password you will obtain a
shell with administrator privileges. However, you should avoid logging in to the GUI as root,
since then all graphical applications would run with root privileges, which is not necessary and
can lead to security problems. Nor should direct root logins be allowed across the network.
2. You can, from a normal shell, use the su command to obtain a new shell with administrator privileges. su, like login, asks for a password and opens the root shell only after the correct root password has been input, as in the example below. In GUIs like KDE there are similar methods.
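A short sketch of the second method (the # prompt indicates the resulting root shell):

$ su -
Password:        # type the root password; it is not echoed
# whoami
root
# exit
$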

5.3.2. Why Users?


Computers used to be large and expensive, but today an office workplace without its own PC (―personal
computer‖) is nearly inconceivable, and a computer is likely to be encountered in most domestic ―dens‖
as well. And while it may be sufficient for a family to agree that Dad, Mom and the kids will put their
files into different directories, this will no longer do in companies or universities — once shared disk
space or other facilities are provided by central servers accessible to many users, the computer system
must be able to distinguish between different users and to assign different access rights to them. After all,
Ms Chaltu from the ICT department has as little business looking at the company's payroll data as Mr Abebe from Human Resources has accessing the detailed plans for next year's products. And a measure of privacy may be desired even at home: some adult content should not be open to prying eyes as a matter of course.

The second reason for distinguishing between different users follows from the fact that various aspects of
the system should not be visible, much less changeable, without special privileges. Therefore Linux
manages a separate user identity (root) for the system administrator, which makes it possible to keep
information such as users‘ passwords hidden from ―common‖ users. The bane of older Windows
systems—programs obtained by e-mail or indiscriminate web surfing that then wreak havoc on the entire
system—will not plague you on Linux, since anything you can execute as a common user will not be in a
position to wreak system-wide havoc.

Therefore, generally when a computer is used by many people it is usually necessary to differentiate
between the users, for example, so that their private files can be kept private. This is important even if the
computer can only be used by a single person at a time, as with most microcomputers. Thus, each user is
given a unique username, and that name is used to log in.

Linux distinguishes between different users by means of different user accounts. The common
distributions typically create two user accounts during installation, namely root for administrative tasks
and another account for a ―normal‖ user. You (as the administrator) may add more accounts later, or, on
a client PC in a larger network, they may show up automatically from a user account database stored
elsewhere.

Under Linux, every user account is assigned a unique number, the so-called user ID (or UID, for short).
Every user account also features a textual user name (such as root or abebe) which is easier to remember
for humans. In most places where it counts—e. g., when logging in, or in a list of files and their owners—
Linux will use the textual name whenever possible.

 Users and Groups

To work with a Linux computer you need to log in first. This allows the system to recognise you and to
assign you the correct access rights (of which more later). Everything you do during your session (from
logging in to logging out) happens under your user account. In addition, every user has a home directory,
where only they can store and manage their own files, and where other users often have no read
permission and very emphatically no write permission. (Only the system administrator – root – may read
and write all files.)

Several users who want to share access to certain system resources or files can form a group. Linux
identifies group members either fixedly by name or transiently by a login procedure similar to that for
users. Groups have no ―home directories‖ like users do, but as the administrator you can of course create
arbitrary directories meant for certain groups and having appropriate access rights.

Groups, too, are identified internally using numerical identifiers (―group IDs‖ or GIDs).

You might be bothered (and rightfully so!) by the fact that this somewhat sensitive information is
apparently made available on a casual basis to arbitrary system users. If you (as administrator) want to
protect your users‘ privacy better than your Linux distribution does by default, you can use the

$ chmod o-r /var/log/wtmp    # make sure that you have admin privileges

command to remove general read permissions from the file that the last command consults for the telltale data. Users without administrator privileges then get to see something like:

$ last
last: /var/log/wtmp: Permission denied

People and Pseudo-Users

Besides ―natural‖ persons—the system‘s human users—the user and group concept is also used to
allocate access rights to certain parts of the system. This means that, in addition to the personal accounts
of the ―real‖ users like you, there are further accounts that do not correspond to actual human users but
are assigned to administrative functions internally. They define functional ―roles‖ with their own accounts
and groups.

After installing Linux, you will find several such pseudo-users and groups in the /etc/passwd and

/etc/group files. The most important role is that of the root user (which you know) and its eponymous
group. The UID and GID of root are 0 (zero).

root ’s privileges are tied to UID 0; GID 0 does not confer any additional access privileges.

Further pseudo-users belong to certain software systems (e. g., news for Usenet news using INN, or
postfix for the Postfix mail server) or certain components or devices (such as printers, tape or floppy
drives). You can access these accounts, if necessary, like other user accounts via the su command. These pseudo-users are helpful as file or directory owners, in order to fit the access rights tied to file ownership to special requirements without having to use the root account. The same applies to groups; the members of the disk group, for example, have block-level access to the system's disks.

5.3.3. User and Group Information


The /etc/passwd File

The /etc/passwd file is the system user database. There is an entry in this file for every user on the
system—a line consisting of attributes like the Linux user name, ―real‖ name, etc. After the system is first
installed, the file contains entries for most pseudo-users.

⟨user name⟩ : ⟨password⟩ : ⟨UID⟩ : ⟨GID⟩ : ⟨GECOS⟩ : ⟨home directory⟩ : ⟨shell⟩
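A hypothetical entry for a regular user might look like this (all values are made up for illustration):

abebe:x:1000:1000:Abebe Kebede,Room 12:/home/abebe:/bin/bash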

 User name: This name should consist of lowercase letters and digits; the first character should be
a letter. Unix systems often consider only the first eight characters—Linux does not have this
limitation but in heterogeneous networks you should take it into account.

NB: Resist the temptation to use special characters in user names, even if the system lets you do so—not
all tools that create new user accounts are picky, and you could of course edit /etc/passwd by hand. What
seems to work splendidly at first glance may lead to problems elsewhere later. You should also stay away
from user names consisting of only uppercase letters or only digits.

 Password: Traditionally, this field contains the user‘s encrypted password. Today, most Linux
distributions use ―shadow passwords‖; instead of storing the password in the publicly readable

/etc/passwd file, it is stored in /etc/shadow, which can only be accessed by the administrator and some privileged programs. In /etc/passwd, an "x" in this field calls attention to this circumstance. Every user can avail himself of the passwd program to change his password.

 UID: The numerical user identifier, a number between 0 and 2^32 − 1. By convention, UIDs from 0 to 99 are reserved for the system, UIDs from 100 to 499 are for use by software packages if they need pseudo-user accounts. With most popular distributions, real users' UIDs start from 500 (or 1000).
o Precisely because the system differentiates between users not by name but by UID, the kernel treats two accounts as completely identical if they contain different user names but the same UID, at least as far as the access privileges are concerned.

 GID: The GID of the user‘s primary group after logging in. By virtue of the assignment in

/etc/passwd, every user must be a member of at least one group. The user‘s secondary groups (if
applicable) are determined from entries in the /etc/group file.

Many distros, such as Red Hat or Debian GNU/Linux, create a new group whenever a new account is created, with the GID equalling the account's UID. The idea behind this is to allow more sophisticated assignments of rights than with the approach that puts all users into the same group, users. Consider the following situation: Abu is the personal assistant of CEO Kebede (user names Abu and Kebe respectively). Abu sometimes needs to access files stored inside Kebe's home directory that other users should not be able to get at. The method used by Red Hat, Debian & co., "one group per user", makes it straightforward to put user Abu into group Kebe and to arrange for Kebe's files to be readable for all group members (the default case) but not others (see the sketch after this field list). With the "one group for everyone" approach it would have been necessary to introduce a new group completely from scratch, and to reconfigure the Abu and Kebe accounts accordingly.

 GECOS: This is the comment field, also known as the "GECOS field". GECOS stands for "General Electric Comprehensive Operating System" and has nothing whatever to do with Linux, except that in the early days of Unix this field was added to /etc/passwd in order to keep compatibility data for a GECOS remote job entry service.
o This field contains various bits of information about the user, in particular his "real" name and optional data such as the office number or telephone number. This information is used by programs such as mail.

 home directory: This directory is that user‘s personal area for storing his own files. A newly
created home directory is by no means empty, since a new user normally receives a number of
―profile‖ files as his basic equipment. When a user logs in, his shell uses his home directory as its
current directory, i.e., immediately after logging in the user is deposited there.
 shell: The name of the program to be started by login after successful authentication — this is
usually a shell. The seventh field extends through the end of the line.
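A minimal sketch of the Abu/Kebe scenario described under the GID field above (hypothetical names; run as root):

# usermod -aG kebe abu              # make abu a secondary member of kebe's private group
# chmod -R g+rX,o-rwx /home/kebe    # group members may read, everyone else is shut out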

Some of the fields shown here may be empty. Absolutely necessary are only the user name, UID, GID
and home directory. For most user accounts, all the fields will be filled in, but pseudo-users might use
only part of the fields.

NB: as an administrator you should not edit /etc/passwd by hand. There are a number of programs that will help you create and maintain user accounts.

The /etc/shadow File

For security, nearly all current Linux distributions store encrypted user passwords in the /etc/shadow file
(―shadow passwords‖). This file is unreadable for normal users; only root may write to it, while members
of the shadow group may read it in addition to root . If you try to display the file as a normal user an
error occurs. Use of /etc/shadow is not mandatory but highly recommended.

This file contains one line for each user, with the following format:

⟨user name⟩:⟨password⟩:⟨change⟩:⟨min⟩:⟨max⟩:⟨warn⟩:⟨grace⟩:⟨lock⟩:⟨reserved⟩
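A hypothetical entry (the password hash is abbreviated; all values are made up):

hello:$6$Qz7...Xq:18972:0:99999:7:::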

Here is the meaning of the individual fields:

 user name: This must correspond to an entry in the /etc/passwd file. This field joins the two
files.

 password: The user's encrypted password. An empty field generally means that the user can log in without a password. An asterisk or an exclamation point prevents the user in question from logging in. It is common to lock users' accounts without deleting them entirely by placing an asterisk or exclamation point at the beginning of the corresponding password.

You might think that if passwords are encrypted they can also be decrypted again. This would open all of the system's accounts to a clever cracker who manages to obtain a copy of /etc/shadow. However, in reality this is not the case, since password "encryption" is a one-way street. It is impossible to recover the decrypted representation of a Linux password from the "encrypted" form because the method used for encryption prevents this. The only way to "crack" the encryption is by encrypting likely passwords and checking whether they match what is in /etc/shadow.

 change: The date of the last password change, in days since 1 January 1970.

 min: The minimal number of days that must have passed since the last password change before
the password may be changed again.
 max: The maximal number of days that a password remains valid without having to be changed.
After this time has elapsed the user must change his password.
 warn: The number of days before the expiry of the ⟨max⟩ period that the user will be warned
about having to change his password. Generally, the warning appears when logging in.

 grace: The number of days, counting from the expiry of the ⟨max⟩ period, after which the
account will be locked if the user doesn‘t change his password. (During the time from expiry of
⟨max⟩ period and the expiry of this grace period the user may log in but must immediately
change his password.)

 lock: The date on which the account will be definitively locked, again in days since 1 January
1970.

The /etc/group File

By default, Linux keeps group information in the /etc/group file. This file contains a one-line entry for each group in the system, consisting of fields separated by colons (:). More precisely, /etc/group contains four fields per line.

⟨group name⟩ : ⟨password⟩ : ⟨GID⟩ : ⟨members⟩
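A hypothetical entry:

project:x:2000:abu,hello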

Their meaning is as follows:

 group name: The name of the group, for use in directory listings, etc.

 password: An optional password for this group. This lets users who are not members of the group according to /etc/group assume membership of the group using newgrp. A "*" as an invalid character prevents normal users from changing to the group in question. An "x" refers to the separate password file /etc/gshadow.
 GID: The group‘s numerical group identifier.

 members: A comma-separated list of user names. This list contains all users who have this group
as a secondary group, i.e., who are members of this group but have a different value in the GID
field of their /etc/passwd entry. (Users with this group as their primary group may also be listed
here but that is unnecessary.)
5.3.4. Managing User Accounts and Group Information

After a new Linux distribution has been installed, there is often just the root account for the system
administrator and the pseudo-users‘ accounts. Any other user accounts must be created first (and most
distributions today will gently but firmly nudge the installing person to create at least one ―normal‖ user
account).

As the administrator, it is your job to create and manage the accounts for all required users (real and
pseudo). To facilitate this, Linux comes with several tools for user management. With them, this is mostly
a straightforward task, but it is important that you understand the background.

Creating User Accounts

The procedure for creating a new user account is always the same (in principle) and consists of the
following steps:

1. You must create entries in the /etc/passwd (and possibly /etc/shadow) files.
2. If necessary, you must create an entry (or several) in the /etc/group file.
3. You must create the home directory, copy a basic set of files into it, and transfer ownership of the lot to the new user.
4. If necessary, you must enter the user in further databases, e.g. for disk quotas, database access privilege tables and special applications.

All files involved in adding a new account are plain text files. You can perform each step manually
using a text editor. However, as this is a job that is as tedious as it is elaborate, it behooves you to let the
system help you, by means of the useradd program.

Refer to the lab manual and the useradd man page for details on how to create new users; a short sketch follows.
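A sketch only (option behaviour varies slightly between distributions; check the man page):

# useradd -m -c "Abebe Kebede" -s /bin/bash abebe    # -m also creates the home directory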

After a new user has been created using useradd, the new account is not yet accessible; the system administrator must first set up a password.

The passwd Command

The passwd command is used to set up passwords for users. If you are logged in as root , then

$ passwd new_user

asks for a new password for new_user (You must enter it twice as it will not be echoed to the screen).

The passwd command is also available to normal users, to let them change their own passwords
(changing other users‘ passwords is root ‘s prerogative):

$ passwd
Changing password for hello.
(current) UNIX password:      # just type; it will not be echoed to the screen
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

On the side, passwd serves to manage various settings in /etc/shadow. For example, you can look at a
user‘s ―password state‖ by calling the passwd command with the -S option:

$ passwd -S hello
hello P 12/11/2021 0 99999 7 1

In the above output, the first field is the user name, followed by the password state ('PS' or 'P' if a password is set, 'LK' or 'L' for a locked account, and 'NP' for an account with no password at all). The remaining fields are the date of the last password change (third), the minimum and maximum intervals for changing the password (fourth and fifth), the expiry warning interval (sixth), and the "grace period" before the account is locked completely after the password has expired (last), respectively.

You can change some of these settings by means of passwd options. Here are a few examples:

$ passwd -l hello      # lock the account
$ passwd -u hello      # unlock the account
$ passwd -n 7 hello    # password may be changed at most once every 7 days
$ passwd -x 30 hello   # password must be changed at least every 30 days
$ passwd -w 3 hello    # warn 3 days before the password expires

Changing the remaining settings in /etc/shadow requires the chage command:

$ chage -E 2021-12-21 hello    # lock the account from 21 December 2021
$ chage -E -1 hello            # cancel the expiry date
$ chage -I 7 hello             # grace period of 1 week from password expiry
$ chage -m 7 hello             # like passwd -n
$ chage -M 7 hello             # like passwd -x
$ chage -W 3 hello             # like passwd -w

You cannot retrieve a clear-text password even if you are the administrator. Even checking /etc/shadow
doesn‘t help, since this file stores all passwords already encrypted. If a user forgets their password, it is
usually sufficient to reset their password using the passwd command.

Should you have forgotten the root password and not be logged in as root by any chance, your last option is to boot Linux to a shell, or to boot from a rescue disk or CD. After that, you can use an editor to clear the ⟨password⟩ field of the root entry in /etc/passwd.

Deleting User Accounts

To delete a user account, you need to remove the user‘s entries from /etc/passwd and /etc/shadow, delete
all references to that user in /etc/group, and remove the user‘s home directory as well as all other files
created or owned by that user. If the user has, e.g., a mail box for incoming messages in

/var/mail, that should also be removed.

There is a suitable command to automate these steps. The userdel command removes a user account
completely. Its syntax:

userdel [-r] ⟨user name⟩

The -r option ensures that the user‘s home directory (including its content) and his mail box in

/var/mail will be removed; other files belonging to the user, e.g. crontab files, must be deleted manually. A quick way to locate and remove files belonging to a certain user is the

find / -uid ⟨UID⟩ -delete

command. Without the -r option, only the user information is removed from the user database; the home
directory remains in place.

Changing User Accounts and Group Assignment

User accounts and group assignments are traditionally changed by editing the /etc/passwd and

/etc/group files. However, many systems contain commands like usermod and groupmod for the same
purpose, and you should prefer these since they are safer and—mostly—more convenient to use.

The usermod program accepts mostly the same options as useradd, but changes existing user accounts
instead of creating new ones. For example, with

usermod -g ⟨group⟩ ⟨user name⟩

you could change a user‘s primary group.

Caution! If you want to change an existing user account's UID, you could edit the ⟨UID⟩ field in /etc/passwd directly. However, you should at the same time transfer that user's files to the new UID using chown: "chown -R hello /home/hello" re-confers ownership of all files below user hello's home directory to user hello, after you have changed the UID for that account. If "ls -l" displays a numerical UID instead of a textual name, this implies that there is no user name for the UID of these files. You can fix this using chown.

Changing User Information Directly— vipw

The vipw command invokes an editor (vi or a different one) to edit /etc/passwd directly. At the same
time, the file in question is locked in order to keep other users from simultaneously changing the file
using, e. g., passwd (which changes would be lost).

Creating, Changing and Deleting Groups

Like user accounts, you can create groups using any of several methods. The ―manual‖ method is much
less tedious here than when creating new user accounts: Since groups do not have home directories, it is
usually sufficient to edit the /etc/group file using any text editor, and to add a suitable new line. When
group passwords are used, another entry must be added to /etc/gshadow.

Incidentally, there is nothing wrong with creating directories for groups. Group members can place the
fruits of their collective labour there. The approach is similar to creating user home directories, although
no basic set of configuration files needs to be copied.

For group management, there are, by analogy to useradd, usermod, and userdel, the groupadd, groupmod, and groupdel programs, which you should use instead of editing /etc/group and /etc/gshadow directly. With groupadd you can create new groups simply by giving the correct command
parameters:

groupadd [-g ⟨GID⟩] ⟨group name⟩

The -g option allows you to specify a given group number, which is a positive integer. The values up to
99 are usually reserved for system groups. If -g is not specified, the next free GID is used.

You can edit existing groups with groupmod without having to write to /etc/group directly:

groupmod [-g ⟨GID⟩] [-n ⟨name⟩] ⟨group name⟩

The “-g ⟨GID⟩” option changes the group‘s GID. Unresolved file group assignments must be adjusted
manually. The ―-n ⟨name⟩‖ option sets a new name for the group without changing the GID; manual
adjustments are not necessary.

There is also a tool to remove group entries. This is unsurprisingly called groupdel:

groupdel ⟨group name⟩
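Putting the three commands together (group names and GID are hypothetical):

# groupadd -g 2000 project        # create the group with GID 2000
# groupmod -n research project    # rename it to "research"
# groupdel research               # and remove it again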

5.4. Linux Service/Server Administration
5.4.1 Supporting a Windows Network – through SAMBA
5.4.1.1 What Samba is All About [From samba.org]

The commercialization of the Internet over the past few years has created something of a modern melting
pot. It has brought business-folk and technologists closer together than was previously thought possible.
As a side effect, Windows and Unix systems have been invading each others‘ turf, and people expect that
they will not only play together nicely, but that they will share.

A lot of emphasis has been placed on peaceful coexistence between Unix and Windows. The Usenix
Association (http://www.usenix.org/) has even created an annual conference around this theme.
Unfortunately, the two systems come from very different cultures and they have difficulty getting along
without mediation, and that, of course, is Samba's job. Samba runs on Unix platforms, but speaks to Windows clients like a native. It allows a Unix system to move into a Windows "Network Neighborhood"
without causing a stir. Windows users can happily access file and print services without knowing or
caring that those services are being offered by a Unix host.

All of this is managed through a protocol suite which is currently known as the ―Common Internet File
System―, or CIFS. This name was introduced by Microsoft, and provides some insight into their hopes
for the future. At the heart of CIFS is the latest incarnation of the Server Message Block (SMB) protocol,
which has a long and tedious history. Samba is an open source CIFS implementation, and is available for
free from the http://samba.org/ mirror sites.

Samba and Windows are not the only ones to provide CIFS networking. OS/2 supports SMB file and
print sharing, and there are commercial CIFS products for Macintosh and other platforms (including
several others for Unix). Samba has been ported to a variety of non-Unix operating systems, including
VMS, AmigaOS, & NetWare. CIFS is also supported on dedicated file server platforms from a variety of
vendors. In other words, this stuff is all over the place.

5.4.1.2. General Overview

SMB (Server Message Block) is the protocol used by Windows systems to share files and printers across
a network, just like the NFS and LPR protocols are used by Unix systems. Any time you use the Network
Neighborhood, My Network Places, or map network drive features of Windows, the SMB protocol is
being used. Because it is the standard method of file sharing on Windows systems, it has become the most
commonly used method of sharing files on local networks.

Even though SMB is thought of as a Windows protocol, it was originally developed by IBM and has been
implemented by many different companies and in many products. These days it is often referred to as
CIFS (the Common Internet File System), even though the protocol itself has not changed. In fact, many
ancient clients will still be able to access modern SMB servers like Samba.

An SMB server is a system that has files or printers that it wants to allow other hosts access to. An SMB
client is a system that wants to read or write files on a server, or print to a server‘s printer. A single
system can be both a client and a server, and all releases of Windows from 95 onwards include software for these purposes. However, on a typical organization's network there is a single large server system and
many smaller clients that access files on it.

Every host that uses the SMB protocol has a hostname, which is typically the same as its DNS name. A
server host can have multiple shares, each of which has a unique name and corresponds to a directory or
local printer on the server system. Shares are referred to using the \\hostname\sharename notation, such
as \\WCU\documents. On Windows clients, file shares are normally mapped to drive letters such as S: so
that they can be more easily referred to. All Windows applications can read and write files on a server in
exactly the same way that they would for local files.

Shared printers accessed by a client are not assigned a drive letter, but may be connected to a fake printer
port such as lpt2:. Clients can send jobs to the printer, view those that are currently waiting to be printed
and cancel jobs submitted by the same user. Unlike the Unix LPR protocol, clients using a remote printer
must have the appropriate driver installed, and must send data to the server in the format that the printer
actually accepts.

Fortunately, it is possible for Linux and Unix systems to participate in SMB file and printer sharing as
well. The software that makes this all possible is called Samba, a completely free re-implementation of
the SMB protocol for Unix systems. Samba has been available and under development for many years,
ever since the SMB protocol first started to be used on DOS systems. It allows a Unix system to do as
good a job of serving Windows clients as a real Windows server would – in fact, some would say that it is
even better.

Samba uses two daemon processes, named smbd and nmbd. The first handles actual file or printer share
requests from clients, while the second responds to SMB name lookup requests. Both daemons

use the smb.conf configuration file, which is usually found in the /etc directory. Any change made to this
file (either manually or by using Webmin) will be immediately detected by both daemons, and will take
effect at once. Unlike most other Unix server processes, they do not need to be signaled to re-read the
configuration file if it changes.

Unfortunately, there are some complexities that arise when sharing files between Unix and Windows
systems. The SMB protocol has no support for concepts such as file ownership or permissions, at least not
in the form that they exist on Unix systems. NTFS filesystem access control lists (used on Windows NT,
2000, XP and Vista) are supported instead, which are incompatible with normal Unix permissions. Samba
does have some support for them, but setting it up is complex and not covered in this page.

The SMB protocol supports authentication, so that clients can be forced to provide a valid username and
password to the server before they can access a share. The Samba server uses the standard Unix user
database to validate clients, although actual Unix passwords cannot be used (for reasons explained later).
When a client logs in to a Samba server, it accesses files with the permissions of the Unix user that it
authenticated as – just as an FTP client would. This means that all the normal file permission and
ownership rules apply.

Samba can be compiled on every version of Unix supported by Webmin, and has the same features on all
of them. This means that the module‘s user interface is the same as well, although differences in the
default configuration may cause some features to be initially inaccessible.

5.4.1.3. What Samba Does

Samba consists of two key programs (see above for detail), plus a bunch of other stuff that we‘ll get to
later. The two key programs are smbd and nmbd. Their job is to implement the four basic modern-day
CIFS services, which are:

1. File & print services


2. Authentication and Authorization
3. Name resolution
4. Service announcement (browsing)

File and print services are, of course, the cornerstone of the CIFS suite. These are provided by smbd,
the SMB Daemon. Smbd also handles ―share mode‖ and ―user mode‖ authentication and

authorization. That is, you can protect shared file and print services by requiring passwords. In share
mode, the simplest and least recommended scheme, a password can be assigned to a shared directory or
printer (simply called a ―share‖). This single password is then given to everyone who is allowed to use the
share. With user mode authentication, each user has their own username and password and the System
Administrator can grant or deny access on an individual basis.

The Windows NT Domain system provides a further level of authentication refinement for CIFS. The
basic idea is that a user should only have to log in once to have access to all of the authorized services on
the network. The NT Domain system handles this with an authentication server, called a Domain
Controller. An NT Domain (which should not be confused with a Domain Name System (DNS)
Domain) is basically a group of machines which share the same Domain Controller.

The NT Domain system deserves special mention because, until the release of Samba version 2, only
Microsoft owned code to implement the NT Domain authentication protocols. With version 2, Samba
introduced the first non-Microsoft-derived NT Domain authentication code. The eventual goal, of course, is to completely mimic a Windows NT Domain Controller.

The other two CIFS pieces, name resolution and browsing, are handled by nmbd. These two services
basically involve the management and distribution of lists of NetBIOS names.

Name resolution takes two forms: broadcast and point-to-point. A machine may use either or both of
these methods, depending upon its configuration. Broadcast resolution is the closest to the original
NetBIOS mechanism. Basically, a client looking for a service named Trillian will call out ―Yo! Trillian!
Where are you?‖, and wait for the machine with that name to answer with an IP address. This can
generate a bit of broadcast traffic (a lot of shouting in the streets), but it is restricted to the local LAN so it
doesn‘t cause too much trouble.

The other type of name resolution involves the use of an NBNS (NetBIOS Name Service) server.
(Microsoft called their NBNS implementation WINS, for Windows Internet Name Service, and that
acronym is more commonly used today.) The NBNS works something like the wall of an old fashioned
telephone booth. Machines can leave their name and number (IP address) for others to see.

Hi, I‘m node Gaga. Call me for a good time! 192.168.100.101

It works like this: The clients send their NetBIOS names & IP addresses to the NBNS server, which keeps
the information in a simple database. When a client wants to talk to another client, it sends the other
client‘s name to the NBNS server. If the name is on the list, the NBNS hands back an IP address. You‘ve
got the name, look up the number.

Clients on different subnets can all share the same NBNS server so, unlike broadcast, the point-to-point
mechanism is not limited to the local LAN. In many ways the NBNS is similar to the DNS, but the NBNS
name list is almost completely dynamic and there are few controls to ensure that only authorized clients
can register names. Conflicts can, and do, occur fairly easily.

Finally, there's browsing. This is a whole 'nother kettle of worms (a difficult situation), but Samba's nmbd handles it anyway. This is not the web browsing we know and love, but a browsable list of services (file and print shares) offered by the computers on a network.

On a LAN, the participating computers hold an election to decide which of them will become the Local
Master Browser (LMB). The ―winner‖ then identifies itself by claiming a special NetBIOS name (in
addition to any other names it may have). The LMB's job is to keep a list of available services, and it is this list that appears when you click on the Windows "Network Neighborhood" icon.

In addition to LMBs, there are Domain Master Browsers (DMBs). DMBs coordinate browse lists across
NT Domains, even on routed networks. Using the NBNS, an LMB will locate its DMB to exchange and
combine browse lists. Thus, the browse list is propagated to all hosts in the NT Domain. Unfortunately,
the synchronization times are spread apart a bit. It can take more than an hour for a change on a remote
subnet to appear in the Network Neighborhood.

Other Stuff

Samba comes with a variety of utilities. The most commonly used are:

 smbclient: A simple SMB client, with an interface similar to that of the FTP utility. It can be
used from a Unix system to connect to a remote SMB share, transfer files, and send files to
remote print shares (printers).

 nmblookup: A NetBIOS name service client. Nmblookup can be used to find NetBIOS names on
a network, look up their IP addresses, and query a remote machine for the list of names the machine believes it owns.
 swat: The Samba Web Administration Tool. Swat allows you to configure Samba remotely,
using a web browser.

There are more, of course, but describing them would require explaining even more bits and pieces of
CIFS, SMB, and Samba. That‘s where things really get tedious, so we‘ll leave it alone for now.

5.4.1.5. SMB Filesystems for Linux

One of the cool things that you can do with a Windows box is use an SMB file share as if it were a hard
disk on your own machine. The N: drive can look, smell, feel, and act like your own disk space, but it‘s
really disk space on some other computer somewhere else on the network.

Linux systems can do this too, using the smbfs filesystem. Built from Samba code, smbfs (which stands
for SMB Filesystem) allows Linux to map a remote SMB share into its directory structure. So, for
example, the /mnt/zarquon directory might actually be an SMB share, yet you can read, write, edit, delete,
and copy the files in that directory just as you would local files.
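As a sketch, mounting such a share could look like this (server and share names are hypothetical; on current kernels the smbfs type has been superseded by cifs, so the second form is the one you are more likely to use today):

# mount -t smbfs //server/docs /mnt/zarquon -o username=abebe    # classic smbfs form
# mount -t cifs  //server/docs /mnt/zarquon -o username=abebe    # modern cifs equivalent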

The smbfs is nifty, but it only works with Linux. In fact, it‘s not even part of the Samba suite. It is
distributed with Samba as a courtesy and convenience. A more general solution is the new smbsh (SMB
shell). This is a cool gadget. It is run like a Unix shell, but it does some funky fiddling with calls to Unix
libraries. By intercepting these calls, smbsh can make it look as though SMB shares are mounted. All of
the read, write, etc. operations are available to the smbsh user. Another feature of smbsh is that it works
on a per-user, per shell basis, while mounting a filesystem is a system-wide operation. This allows for
much finer-grained access controls.

5.4.1.6. Setup and Management [Configurations will be covered in Lab]

Samba is configured using the smb.conf file. This is a simple text file designed to look a lot like those

*.ini files used in Windows. The goal, of course, is to give network administrators familiar with Windows
something comfortable to play with. Over time, though, the number of things that can be configured in
Samba has grown, and the percentage of Network Admins willing to edit a Windows *.ini file has shrunk.
For some people, that makes managing the smb.conf file a bit daunting.

Still, learning the ins and outs of smb.conf is a worthwhile penance. Each of the smb.conf variables has a purpose, and a lot of fine tuning can be accomplished. The file structure and contents are fully documented, so as to give administrators a running head start, and smb.conf can be manipulated using swat, which at least makes it nicer to look at.
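A minimal smb.conf sketch, just to show the *.ini-style layout (workgroup, share name, path and user are made up; a real deployment needs more thought):

[global]
   workgroup = WORKGROUP
   security = user

[documents]
   path = /srv/samba/documents
   read only = no
   valid users = abebe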

5.4.2. Mail Server


Electronic mail (email or e-mail) is a method of exchanging messages (―mail‖) between people using
electronic devices. Email was thus conceived as the electronic (digital) version of, or counterpart to, mail,
at a time when ―mail‖ meant only physical mail (hence e- + mail). Email later became a ubiquitous (very
widely used) communication medium, to the point that in current use, an e-mail address is often treated as
a basic and necessary part of many processes in business, commerce, government, education,
entertainment, and other spheres of daily life in most countries. Email is the medium, and each message
sent therewith is called an email (mass/count distinction).

5.4.2.1. Operation

The following is a typical sequence of events that takes place when sender Alice transmits a message
using a mail user agent (MUA) addressed to the email address of the recipient, Bob.

1. The MUA formats the message in email format and uses the submission protocol, a profile of the Simple Mail Transfer Protocol (SMTP), to send the message content to the local mail submission agent (MSA), in this case smtp.a.org.
2. The MSA determines the destination address provided in the SMTP protocol (not from the message header), in this case bob@b.org, which is a fully qualified domain address (FQDA). The part before the @ sign is the local part of the address, often the username of the recipient, and the part after the @ sign is a domain name. The MSA resolves the domain name to determine the fully qualified domain name of the mail server in the Domain Name System (DNS).
3. The DNS server for the domain b.org (ns.b.org) responds with any MX records listing the mail exchange servers for that domain, in this case mx.b.org, a message transfer agent (MTA) server run by the recipient's ISP.
4. smtp.a.org sends the message to mx.b.org using SMTP. This server may need to forward the message to other MTAs before the message reaches the final message delivery agent (MDA).
5. The MDA delivers it to the mailbox of user bob.
6. Bob's MUA picks up the message using either the Post Office Protocol (POP3) or the Internet Message Access Protocol (IMAP).
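Step 4 above, the SMTP hand-over between smtp.a.org and mx.b.org, boils down to a dialogue of roughly this shape (a simplified sketch; Alice's address is assumed to be alice@a.org, and real sessions add authentication, TLS and further headers):

S: 220 mx.b.org ESMTP
C: HELO smtp.a.org
S: 250 mx.b.org
C: MAIL FROM:<alice@a.org>
S: 250 OK
C: RCPT TO:<bob@b.org>
S: 250 OK
C: DATA
S: 354 End data with <CR><LF>.<CR><LF>
C: (message header and body)
C: .
S: 250 OK: queued
C: QUIT
S: 221 Bye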

Message format

The basic Internet message format used for email is defined by RFC 5322, with encoding of non-ASCII
data and multimedia content attachments defined in RFC 2045 through RFC 2049, collectively called
Multipurpose Internet Mail Extensions or MIME. The extensions in International email apply only to
email. RFC 5322 replaced the earlier RFC 2822 in 2008; RFC 2822 had in turn replaced RFC 822 in 2001, which had been the standard for Internet email for decades. Published in 1982, RFC 822 was based on the earlier RFC 733 for the ARPANET.

Internet email messages consist of two sections, ‗header‗ and ‗body‗. These are known as ‗content‗. The
header is structured into fields such as From, To, CC, Subject, Date, and other information about the
email. In the process of transporting email messages between systems, SMTP communicates delivery
parameters and information using message header fields. The body contains the message, as unstructured
text, sometimes containing a signature block at the end. The header is separated from the body by a blank
line.
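A minimal example of this layout (the addresses reuse the Alice/Bob scenario above; the Message-ID is made up):

From: Alice <alice@a.org>
To: Bob <bob@b.org>
Subject: Meeting on Friday
Date: Sat, 11 Dec 2021 09:30:00 +0300
Message-ID: <20211211093000.1234@a.org>

Hi Bob,

see you at 10 o'clock.

--
Alice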

Message header

RFC 5322 specifies the syntax of the email header. Each email message has a header, comprising a
number of fields. Each field has a name (―field name‖), followed by the separator character ―:‖, and a
value (―field body‖ or ―header field body‖).

The message header must include at least the following fields:

 From: The email address, and, optionally, the name of the author(s). In some email clients this is changeable only through account settings.
 Date: The local time and date the message was written.
 To: The email address(es), and optionally name(s) of the message‘s recipient(s). Indicates
primary recipients (multiple allowed), for secondary recipients see Cc: and Bcc: below.
 Subject: A brief summary of the topic of the message. Certain abbreviations are commonly used
in the subject, including ―RE:‖ and ―FW:‖.
 Cc (Carbon copy): Many email clients mark email in one‘s inbox differently depending on
whether they are in the To: or Cc: list.
 Bcc (Blind carbon copy): addresses are usually only specified during SMTP delivery, and not
usually listed in the message header.
 Content-Type: Information about how the message is to be displayed, usually a MIME type.
 Precedence: commonly with values "bulk", "junk", or "list"; used to indicate that automated "vacation" or "out of office" responses should not be returned for this mail, e.g. to prevent vacation notices from being sent to all other subscribers of a mailing list. Sendmail uses this field to affect prioritization of queued email, with "Precedence: special-delivery" messages delivered sooner. With modern high-bandwidth networks, delivery priority is less of an issue than it was.
 Message-ID: Also an automatically generated field, used to prevent multiple deliveries and for reference in In-Reply-To: (see below).

 In-Reply-To: Message-ID of the message this is a reply to. Used to link related messages
together. This field only applies to reply messages.
 References: Message-ID of the message this is a reply to, and the message-id of the message the
previous reply was a reply to, etc.
 Reply-To: The address that should be used to reply to the message.
 Sender: Address of the sender acting on behalf of the author listed in the From: field.
 Archived-At: A direct link to the archived form of an individual email message.

5.5. Review Questions

1. Discuss the history, controversy, war … between open-source and closed-source software.
2. What are the advantages and disadvantages of Linux?
3. Discuss what makes Linux and GNU different from each other, and what makes them one.
4. Can we say Linux and Unix are the same? If they are, then how? If they are not, then why?

Network Device and Configuration

Prepared by
Aklilu Elias (MSc)

Reviewed by
Alemayehu Dereje

UNIT I
1 CONFIGURATION WIZARD
While the configuration wizard is an easy way to display complex configuration options, it does
rely on the user having a basic understanding of the software component.

1.2 Network Devices


Computer networking devices are units that mediate data in a computer network and are also
called network equipment. Units that are the last receiver or generate data are called hosts or data
terminal equipment.
Network Models

An ISO standard that covers all aspects of network communications is the Open Systems Interconnection (OSI) model. It was developed by the International Organization for Standardization (ISO) and was first introduced in the late 1970s. It is a model for a computer protocol architecture and a framework for developing protocol standards.

1.3 OSI Model


The OSI Model (Open Systems Interconnection Model) is a conceptual framework used to
describe the functions of a networking system. The OSI model characterizes computing
functions into a universal set of rules and requirements in order to support interoperability
between different products and software. It comprises seven layers.

Advantages:

 Network communication is broken into smaller, more manageable parts.


 Allows different types of network hardware and software to communicate with each other.
 All layers are independent and changes in one layer do not affect other layers.
 Easier to understand network communication.

Why layered communication?

 To reduce the complexity of the communication task by splitting it into several smaller, layered tasks
 assists in protocol design
 changes in one layer do not affect other layers
 provides a common language

Figure 1.1 OSI model
Layer 1: Physical Layer
The lowest layer of the OSI Model is concerned with electrically or optically transmitting raw
unstructured data bits across the network from the physical layer of the sending device to the
physical layer of the receiving device. It can include specifications such as voltages, pin layout,
cabling, and radio frequencies. At the physical layer, one might find ―physical‖ resources such as
network hubs, cabling, repeaters, network adapters or modems.

 Define physical characteristics of network. E.g. wires, connector, voltages, data rates,
Asynchronous, Synchronous Transmission.

 Handles bit stream or binary transmission.
 Used to maintain, activate and deactivate physical link.
 For the receiver, it reassembles the bits and sends them to the upper layer as frames.
 For the sender, it converts frames into a bit stream and sends it on the transmission medium.

Layer 2: Data Link


At the data link layer, directly connected nodes are used to perform node-to-node data transfer
where data is packaged into frames. The data link layer also corrects errors that may have
occurred at the physical layer. The data link layer encompasses two sub-layers of its own. The
first, media access control (MAC), provides flow control and multiplexing for device
transmissions over a network. The second, the logical link control (LLC), provides flow and
error control over the physical medium as well as identifies line protocols.

 Packages raw bits from the physical layer into FRAMES.


 The data link layer provides reliable transit of data across a physical link by using Media Access Control (MAC) addresses: a source and a destination address (the address of the device that connects one network to the next).
 Flow Control: Prevent overwhelming of Receiving Node.
 Error Control: Through Trailer
 Access Control: Which device to have control
 Data Link LAN specifications: Fast Ethernet, Token Ring, FDDI.
 Data Link WAN specifications are: Frame Relay, PPP, X.25.
 Bridges and Switches operate at this layer

Sub layers of Layer 2

 Logical link layer (LLC)


o Used for communication with upper layers
o Error correction
o Flow control
 Media Access Control (MAC)
o Access to physical medium

o Header and trailer
o Trailer: The trailer typically includes a frame check sequence (FCS), which is used to perform
error detection.

Layer 3: Network
The network layer is responsible for receiving frames from the data link layer, and delivering
them to their intended destinations based on the addresses contained inside the frame.
The network layer finds the destination by using logical addresses, such as IP (internet protocol).
At this layer, routers are a crucial component used to quite literally route information where it
needs to go between networks.

 Defines source to destination delivery of packets across NWs.


 Defines logical addressing and best path determination.
 Treat each packet independently
 Defines how routing works and how routes are learned
 Converts frames to packets
 Routed protocols ( encapsulate data into packets) and Routing protocols (create routing tables)
work on this layer
 Examples of Routed protocols are: IP, IPX, AppleTalk and Routing protocols are OSPF,
IGRP/EIGRP, RIP, BGP
 Routers operate at Layer 3.

Layer 4: Transport
The transport layer manages the delivery and error checking of data packets. It regulates the size,
sequencing, and ultimately the transfer of data between systems and hosts. One of the most
common examples of the transport layer is TCP or the Transmission Control Protocol.

 It regulates information flow to ensure process-to-process connectivity between host applications reliably and accurately
 Adds service point address or Port address
 Segmentation & Re-assembly: SEGMENTS data from sending node and reassembles data on
receiving node
 Flow control / Error control at Source to destination level

 Connection oriented transport service ensures that data is delivered error free, in sequence with
no losses or duplications
 Establishes, maintains and terminates virtual circuits
 Connection oriented / Connectionless:

TCP (Reliable, provides guaranteed delivery),


UDP (Unreliable, less overhead, reliability can be provided by the Application layer)

Provides multiplexing: the support of different flows of data to different applications on the same
host

Layer 5: Session
The session layer controls the conversations between different computers. A session or
connection between machines is set up and managed at layer 5. Session layer services also
include authentication and reconnections.

 The session layer defines how to start, control and end conversations (called sessions) between
applications
 Establishes dialog control between the two computers in a session, regulating which side
transmits, plus when and how long it transmits (Full duplex)
 Synchronization: Allows processes to add check points. E.g. Insert check point at every 100 page
of 2000 page file to ensure that each 100-page unit is received & acknowledged
 Transmits Data

Layer 6: Presentation
The presentation layer formats or translates data for the application layer based on the syntax or
semantics that the application accepts. Because of this, it is at times also called the syntax layer.
This layer can also handle the encryption and decryption required by the application layer.

 Presentation layer is concerned with the syntax and semantics of the information exchanged
between two systems.
 This layer is primarily responsible for the translation, encryption and compression of data.
 Defines coding and conversion functions

 This layer also manages security issues by providing services such as data encryption and data
compression
 Examples of these formats and schemes are: MPEG, QuickTime, ASCII, EBCDIC, GIF, TIFF,
JPEG

Layer 7: Application
At this layer, both the end user and the application layer interact directly with the software
application. This layer sees network services provided to end-user applications such as a web
browser or Office 365. The application layer identifies communication partners, determines resource
availability, and synchronizes communication.

 The application layer is responsible for providing services to the user


 Closest to the user and provides user interface
 Establishes the availability of intended communication partners
 Examples of Application layer protocols are: Telnet, SMTP, FTP, SNMP

Layer 1 Vs Layer 2

Layer 1 cannot communicate with the upper layers; Layer 2 does this using the LLC sublayer.
Layer 1 cannot identify individual computers; Layer 2 uses an addressing process (MAC addresses).
Layer 1 can only describe a stream of bits; Layer 2 uses framing to organize the bits.

5.9 Data Encapsulation


Data Encapsulation is the process of adding a header (and, at the data link layer, a trailer) to wrap the
data as it flows down the OSI model. The 5 Steps of Data Encapsulation are:

1. The Application, Presentation and Session layers create DATA from users' input.
2. The Transport layer converts the DATA to SEGMENTS
3. The NW layer converts the Segments to Packets (datagram)
4. The Data Link layer converts the PACKETS to FRAMES

5. The Physical layer converts the FRAMES to BITS.

Some application layer protocols and their functions


Simple Mail Transfer Protocol (SMTP)

 Governs the transmission of mail messages and attachments


 SMTP is used in the case of outgoing messages
 More powerful protocols such as POP3 and IMAP4 are needed and available to manage
incoming messages
 POP3(Post Office Protocol version 3) is the older protocol
 IMAP4(Internet Mail Access Protocol version 4) is the more advanced protocol

Telnet:

 It allows a user on a remote client machine, called the Telnet client, to access the resources of
another machine, the Telnet server, in order to access a command-line interface.

File Transfer Protocol (FTP)

 File Transfer Protocol (FTP) actually lets us transfer files, and it can accomplish this between
any two machines using it.
 FTP‘s functions are limited to listing and manipulating directories, typing file contents, and
copying files between hosts.

Simple Network Management Protocol (SNMP)

 Simple Network Management Protocol (SNMP) collects and manipulates valuable network
information.

Hypertext Transfer Protocol (HTTP)

 It‘s used to manage communications between web browsers and web servers and opens the right
resource when you click a link, wherever that resource may actually reside.

Hypertext Transfer Protocol Secure (HTTPS)

 Hypertext Transfer Protocol Secure (HTTPS) is also known as Secure Hypertext Transfer
Protocol. It uses Secure Sockets Layer (SSL).

Domain Name Service (DNS)

 Domain Name Service (DNS)resolves hostnames—specifically, Internet names, such as


www.wcu.edu.et

Dynamic Host Configuration Protocol (DHCP)

 Dynamic Host Configuration Protocol (DHCP)assigns IP addresses to hosts dynamically.


 It allows for easier administration and works well in small to very large network environments.

Some transport layer protocols and their functions

TCP (Transmission Control Protocol)

 TCP: takes large blocks of information from an application and breaks them into segments.
 It is Connection oriented means that a virtual connection is established before any user data is
transferred. (handshake)

User Datagram Protocol (UDP)

 UDP does not sequence the segments and does not care about the order in which the segments
arrive at the destination.
 UDP just sends the segments off and forgets about them.

Table 1. 1 Well-Known TCP Port Numbers

1.4 Network device


Hub

Hubs connect computers together in a star topology network. Due to their design, they increase
the chances for collisions. Hubs operate in the physical layer of the OSI model and have no
intelligence. Hubs flood incoming packets to all ports all the time. For this reason, if a network is
connected using hubs, the chances of a collision increase linearly with the number of computers
(assuming equal bandwidth use).

Hubs cannot filter data so data packets are sent to all connected devices/computers and do not
have intelligence to find out best path for data packets. This leads to inefficiencies and wastage.

Bridge

In telecommunication networks, a bridge is a product that connects a local area network (LAN)
to another local area network that uses the same protocol. Having a single incoming and a single
outgoing port, and filtering traffic on the LAN by looking at MAC addresses, a bridge is more
complex than a hub. A bridge looks at the destination of the packet before forwarding it, unlike a hub,
and it restricts transmission onto the other LAN segment if the destination is not found there. A bridge
works at the data-link level of a network, copying a data frame from one network to the
next network along the communications path. It is used to connect two subnetworks that use
compatible protocols, and it combines two LANs to form an extended LAN. The main
difference between a bridge and a repeater is that the bridge has filtering capability.

 Transparent Bridges: It is also called learning bridges. Bridge construct its table of terminal
addresses on its own as it connects the two LANs, learning the location of each source address to
build the table, so it is self-updating. It is a plug-and-play bridge. A transparent bridge is invisible
to the other devices on the network and only performs the function of blocking or
forwarding data based on MAC addresses. A MAC address may also be referred to as a hardware
address or physical address. These addresses are used to build tables and make decisions regarding
whether a frame should be forwarded and where it should be forwarded.
 Source Routing Bridge: Source-route Bridges were designed by IBM for use on Token ring
networks. The SR Bridge derives the entire route of the frame embedded within the frame. This
allows the Bridge to make specific decision about how the frame should be forwarded through
the network; the sending station indicates to the bridges the route that the frames should take. This type of
bridge is also used to prevent looping problems.
 Translational Bridge: Translational Bridges are useful to connect segments running at different
speeds or using different protocols such as token Ring and Ethernet networks. Depending on the
direction of travel, a Translational Bridge can add or remove information and fields from frame
as needed.

Repeater

A repeater is an electronic device that receives a signal and retransmits it at a higher level and/or

higher power, or onto the other side of an obstruction, so that the signal can cover longer
distances without degradation. Because repeaters work with the actual physical signal, and do
not attempt to interpret the data being transmitted, they operate on the physical layer, the first
layer of the OSI model. Repeaters are majorly employed in long distance transmission to reduce
the effect of attenuation. It is important to note that repeaters do not amplify the original signal
but simply regenerate it.

Modem

Modem (from modulator-demodulator) is a device that turns the digital 1s and 0s of a personal
computer into analog signals (sounds) that can be transmitted over telephone lines, and back again at the receiving end.

NIC (Network Interface Card)

A network interface card is a computer hardware component designed to allow computers to


communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2
(data link layer) device, as it provides physical access to a networking medium and provides a
low-level addressing system through the use of MAC addresses. It allows users to connect to
each other either by using cables or wirelessly. Most motherboards today come equipped with a
network interface card in the form of a controller, with the hardware built into the board itself,
eliminating the need for a standalone card.

Switch

A switch, when compared to a bridge, has multiple ports. Switches can perform error checking
before forwarding data, which makes them very efficient, since they do not forward packets that
contain errors and they forward good packets selectively to the correct devices only. Switches can
support both layer 2 (based on MAC address) and layer 3 (based on IP address) forwarding,
depending on the type of switch. Usually large networks use switches instead of hubs to connect
computers within the same subnet.

 A switch operates in the layer 2, i.e. data link layer of the OSI model.
 It is an intelligent network device that can be conceived as a multiport network bridge.

 It uses MAC addresses (addresses of medium access control sublayer) to send data packets to
selected destination ports.
 It uses packet switching technique to receive and forward data packets from the source to the
destination device.
 It supports unicast (one-to-one), multicast (one-to-many) and broadcast (one-to-all)
communications.
 Transmission mode is full duplex, i.e. communication in the channel occurs in both the
directions at the same time. Due to this, collisions do not occur.
 Switches are active devices, equipped with network software and network management
capabilities.
 Switches can perform some error checking before forwarding data to the destined port.
 The number of ports is higher – 24/48.

Types of Switches
There are variety of switches that can be broadly categorized into 4 types:

 Unmanaged Switch − These are inexpensive switches commonly used in home networks and
small businesses. They can be set up by simply plugging in to the network, after which they
instantly start operating. When more devices need to be added, more switches are simply added
by this plug and play method. They are referred to as unmanaged since they do not require to be
configured or monitored. Unmanaged switches are generally made as plug-and-play devices and
require little to no special installation beyond an Ethernet cable. The setup of this type of switch
relies on auto-negotiation between Ethernet devices to enable communication between them. The
switch will automatically determine the best data rate to use, switching between full-duplex
mode (where data is received or transmitted in two directions at the same time) or half-duplex
mode (where data is received or transmitted two ways but only one direction at a time).
 Managed Switch − These are costly switches that are used in organisations with large and
complex networks, since they can be customized to augment the functionalities of a standard
switch. The augmented features may be QoS (Quality of Service) like higher security levels,
better precision control and complete network management. Despite their cost, they are preferred
in growing organizations due to their scalability and flexibility. Simple Network Management
Protocol (SNMP) is used for configuring managed switches. A managed switch is exactly what it

sounds like—a switch that requires some oversight by a network administrator. This type of
switch gives you total control over the traffic accessing your network while allowing you to
custom-configure each Ethernet port so you get maximum efficiency over data transfers on the
network. Managed switches are also typically the best network switches to support the Gigabit
standard of Ethernet rather than traditional Fast Ethernet.
 LAN Switch − Local Area Network (LAN) switches connect devices in the internal LAN of an
organization. They are also referred to as Ethernet switches or data switches. These switches are
particularly helpful in reducing network congestion or bottlenecks. They allocate bandwidth in a
manner so that there is no overlapping of data packets in a network.
 PoE Switch − Power over Ethernet (PoE) switches are used in PoE Gigabit Ethernet networks. PoE
technology combines data and power transmission over the same cable so that devices connected
to it can receive both electricity as well as data over the same line. PoE switches offer greater
flexibility and simplify the cabling connections. A PoE switch distributes power over the
network to different devices. This means any device on the network, from PCs to IP cameras and
smart lighting systems, can function without the need to be near an AC access point or router,
because the PoE switch sends both data and power to the connected devices.

1.4.1Media Converter
A media converter, in the context of network hardware, is a cost-effective and flexible device
intended to implement and optimize fiber links in every kind of network. Among media
converters, the most often used type is a device that works as a transceiver, which converts the
electrical signal utilized in copper unshielded twisted pair (UTP) network cabling to light waves
used for fiber optic cabling. Fiber optic connectivity is essential when the distance
between two network devices is greater than the transmission distance of copper cabling.

The copper-to-fiber conversion carried out by a media converter allows two network devices
having copper ports to be connected across long distances by means of fiber optic cabling. Media
converters are available as Physical Layer or Layer 2 switching devices, and can provide rate-
switching and other advanced switching features like VLAN tagging. Media converters are
typically protocol specific and are available to support a wide variety of network types and data
rates.

Media converters can also convert between wavelengths for Wavelength Division Multiplexing
(WDM) applications. Deployed in Enterprise, Government, Data Center, and Telecom Fiber to
the x networks, media converters have become the Swiss army knife of networking to enable
connectivity and fiber distance extension.

The Benefits of Media Converters


Network complexity, demanding applications, and the growing number of devices on the
network are driving network speeds and bandwidth requirements higher and forcing longer
distance requirements within the Local Area Network (LAN). Media converters present solutions
to these problems, by allowing the use of fiber when it is needed, and integrating new equipment
into existing cabling infrastructure. Media converters provide seamless integration of copper and
fiber, and different fiber types in Enterprise LAN networks. They support a wide variety of
protocols, data rates and media types to create a more reliable and cost-effective network.

Figure 1. 2 Multi-mode media converter

Configuring Basic Settings

Setting the Hostname

A Cisco switch by default has the hostname "Switch". To change this name, follow the instructions
below (for example, in Cisco Packet Tracer):

1. Click on the switch. A popup window will be opened.
2. Go to the CLI tab in the popup window.
3. Click in the command box.
4. Press "Enter".
5. To enter privileged EXEC mode, give the following command: enable
6. To enter global configuration mode, give the following command: configure terminal
7. To change the hostname, give the following command: hostname <new-name>
8. To save the configuration, give the following command: do write memory
9. To exit configuration mode, give the following command: exit
10. To exit privileged EXEC mode, give the following command: exit
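
Putting these steps together, a minimal sketch of the whole session is shown below; the new hostname SW1 is only an illustrative value:

Switch> enable
Switch# configure terminal
Switch(config)# hostname SW1
SW1(config)# do write memory
SW1(config)# exit
SW1# exit

Notice that the prompt changes from Switch to SW1 as soon as the hostname command is accepted.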

Set or change password of cisco switch in cisco packet tracer

A Cisco switch by default has no password. To set a password or change a previous password,
follow the instructions below: click on the switch; a popup window will be opened. Go to the CLI
tab in the popup window, click in the command box and press "Enter". Then give the command
enable to enter privileged EXEC mode and continue with the password commands from there.
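
As a minimal sketch, the commands below set an enable (privileged EXEC) secret and a console line password; the password strings are only placeholders and should be replaced with strong passwords of your own:

Switch> enable
Switch# configure terminal
Switch(config)# enable secret MyEnablePass123
Switch(config)# line console 0
Switch(config-line)# password MyConsolePass123
Switch(config-line)# login
Switch(config-line)# end
Switch# write memory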

Configuring Command-Line Access


To configure parameters to control access to the router, perform the following steps.

Step 1. configure terminal
    Purpose: Enters global configuration mode.
    Example: Router# configure terminal

Step 2. line [ aux | console | tty | vty ] line-number
    Purpose: Enters line configuration mode, and specifies the type of line.
    Example: Router(config)# line console 0

Step 3. password password
    Purpose: Specifies a unique password for the console terminal line.
    Example: Router(config-line)# password 5dr4Hepw3

Step 4. login
    Purpose: Enables password verification at the terminal login session.
    Example: Router(config-line)# login

Step 5. exec-timeout minutes [ seconds ]
    Purpose: Sets the interval that the EXEC command interpreter waits until user input is detected.
    The default is 10 minutes. You can also optionally add seconds to the interval value.
    Example: Router(config-line)# exec-timeout 5 30

Step 6. line [ aux | console | tty | vty ] line-number
    Purpose: Specifies a virtual terminal for remote console access.
    Example: Router(config-line)# line vty 0 4

Step 7. password password
    Purpose: Specifies a unique password for the virtual terminal line.
    Example: Router(config-line)# password aldf2ad1

Step 8. login
    Purpose: Enables password verification at the virtual terminal login session.
    Example: Router(config-line)# login

Step 9. end
    Purpose: Exits line configuration mode, and returns to privileged EXEC mode.
    Example: Router(config-line)# end
             Router#
SUMMARY STEPS

1. configure terminal
2. line [ aux | console | tty | vty ] line-number
3. password password
4. login
5. exec-timeout minutes [ seconds ]
6. line [ aux | console | tty | vty ] line-number
7. password password
8. login
9. end

View VLANs by Device and Port

 VLANs are assigned to individual switch ports.


 Ports can be statically assigned to a single VLAN or dynamically assigned to a single VLAN.
 All ports are assigned to VLAN 1 by default
 Ports are active only if they are assigned to VLANs that exist on the switch.
 Static port assignments are performed by the administrator and do not change unless modified by
the administrator, whether the VLAN exists on the switch or not.
 Dynamic VLANs are assigned to a port based on the MAC address of the device plugged into a
port.
 Dynamic VLAN configuration requires a VLAN Membership Policy Server (VMPS) client,
server, and database to operate properly.
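
To view VLAN-to-port assignments on a Cisco IOS switch, the show vlan brief command can be used from privileged EXEC mode. The output sketched below is illustrative only; the VLAN names and port lists will differ on your switch:

Switch# show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------
1    default                          active    Fa0/3, Fa0/4, Fa0/5
5    finance                          active    Fa0/1
9    marketing                        active    Fa0/2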

1.4.2 Configuring Static VLANs


On a Cisco switch, ports are assigned to a single VLAN. These ports are referred to as access
ports and provide a connection for end users or node devices, such as a router or server. By
default, all devices are assigned to VLAN 1, known as the default VLAN. After creating a
VLAN, you can manually assign a port to that VLAN and it will be able to communicate only
with or through other devices in the VLAN. Configure the switch port for membership in a given
VLAN as follows:

To change the VLAN for a COS device, use the set vlan command, followed by the VLAN
number, and then the port or ports that should be added to that VLAN. VLAN assignments such
as this are considered static because they do not change unless the administrator changes the
VLAN configuration.

For the IOS device, you must first select the port (or port range for integrated IOS) and then use

the switchport access vlan command followed by the VLAN number.
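
For example, the following IOS commands create a VLAN and statically assign an access port to it; the VLAN number, VLAN name and interface are illustrative values:

Switch(config)# vlan 10
Switch(config-vlan)# name Sales
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# end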

Configuring Dynamic VLANs

Although static VLANs are the most common form of port VLAN assignments, it is possible to
have the switch dynamically choose a VLAN based on the MAC address of the device connected
to a port. To achieve this, you must have a VTP database file, a VTP server, a VTP client switch,
and a dynamic port. After you have properly configured these components, a dynamic port can
choose the VLAN based on whichever device is connected to that port.

Configuring a VLAN based on ports allows PCs in the VLAN to communicate with each other.
Application Environment: A company has multiple departments located in different buildings.
For service security, it is required that employees in one department be able to communicate with
each other, whereas employees in different departments be prohibited from communicating with
each other. Devices are connected on the network as shown in the following figure. Add the ports connecting devices
to PCs of the financial department to VLAN 5 and ports connecting devices to PCs of the
marketing department to VLAN 9. This configuration prevents employees in financial and
marketing departments from communicating with each other.

Configure links between CE and PE as trunk links to allow frames from VLAN 5 and VLAN 9
to pass through, allowing employees of the same department but different buildings to
communicate with each other. By configuring port-based VLANs on the PE, CE1, and CE2,
employees in the same department can communicate with each other, whereas employees in
different departments cannot.

Figure 1. 3
Networking diagram for configuring a VLAN based on ports
Pre-configuration Tasks

Before configuring a VLAN based on ports, complete the following task: Connecting ports and
configuring physical parameters of the ports, ensuring that the ports are physically Up.
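
With those tasks done, a minimal sketch of this port-based VLAN design on one of the access devices, expressed in the Cisco IOS syntax used elsewhere in this chapter, might look like the following; the device name and interface numbers are assumptions, since they depend on the networking diagram:

CE1(config)# vlan 5
CE1(config-vlan)# exit
CE1(config)# vlan 9
CE1(config-vlan)# exit
CE1(config)# interface FastEthernet0/1
CE1(config-if)# switchport mode access
CE1(config-if)# switchport access vlan 5
CE1(config-if)# exit
CE1(config)# interface FastEthernet0/2
CE1(config-if)# switchport mode access
CE1(config-if)# switchport access vlan 9
CE1(config-if)# exit
CE1(config)# interface GigabitEthernet0/1
CE1(config-if)# switchport mode trunk
CE1(config-if)# switchport trunk allowed vlan 5,9

The first access port carries the financial department's VLAN 5, the second carries the marketing department's VLAN 9, and the trunk toward the PE carries both VLANs so that members of the same department in different buildings can still communicate.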

Configuration Procedures

Figure 1. 4 Procedure of configuring a VLAN based on ports


After a VLAN profile is created, assign it to switches, aggregation devices in a Junos Fusion

fabric, Virtual Chassis Fabric, members of Layer 3 Fabric, or members of custom groups. You
must have one or more existing VLAN profiles, either user-configured or system-created, before
you can assign a VLAN profile to a switch, or member of a custom group or port group.

1.4.4 Automatic Discovery and Configuration Manager


Configuration management is a process closely linked to change management, which is also
called configuration control. Any system that needs to be controlled closely and run with good
reliability, maintainability and performance benefits greatly from configuration management,
i.e., the management of system information and system changes. Configuration management can
extend life, reduce cost, reduce risk, and even correct defects. It should be applied over the life
cycle of a system in order to provide visibility and control of its performance as well as its
functional and physical attributes.

In Configuration Manager 2012, the discovery of users, groups and devices has been improved
since Configuration Manager 2007. The discovery feature in Configuration Manager 2012
enables you to identify computer and user resources that can be managed with Configuration
Manager. You are able to configure the discovery of resources on different levels in the
Configuration Manager 2012 hierarchy.

Active Directory Forest Discovery

The Active Directory Forest Discovery is a new discovery method in Configuration Manager
2012 that allows the discovery of Active Directory Forest where the site servers reside and any
trusted forest. With this discovery method, you are able to automatically create the Active
Directory or IP subnet boundaries that are within the discovered Active Directory Forests.

Active Directory Forest Discovery can be configured on Central Administration Sites and
Primary Sites.

1.4.5 Wireless Mobility Configuration Menu


A Mobility Domain enables users to roam geographically across the system while maintaining
data sessions and VLAN or subnet membership, including IP address, regardless of connectivity
to the network backbone. As users move from one area of a building or campus to another, client
associations with servers or other resources remain the same.

The clustering functionality ensures mobility across an entire wireless network. With clustering,
you can effortlessly create logical groups of controllers and access points, which share network
and user information in a proactive manner for continuous and uninterrupted support. You can
create a mobility domain using the Create Mobility Domain window from the Network Director
user interface.

A Mobility Group is a group of Wireless LAN Controllers (WLCs) in a network with the same
Mobility Group name. These WLCs can dynamically share context and state of client devices,
WLC load information, and can forward data traffic among them, which enables inter-controller
wireless LAN roam and controller redundancy. Before you add controllers to a mobility group,

you must verify that certain requirements are met for all controllers that are to be included in the
group.

A Mobility Group is configured manually. The IP and MAC address of the Wireless LAN
Controllers (WLCs) that belong to the same Mobility Group are configured on each of the WLCs
individually. Mobility Groups can be configured either through the CLI or through the GUI.
Mobility Groups can also be configured with the Prime Infrastructure (PI). This alternative
method comes in handy when a large number of WLCs is deployed. Note that a WLC can be
configured in only one Mobility Group.

A Mobility Group can include up to 24 WLCs of any type. The number of access points
supported in a Mobility Group is bound by the number of WLCs and WLC types in the group.
For example, if a controller supports 6000 access points, a mobility group that consists of 24
such controllers supports up to 144,000 access points (24 * 6000 = 144,000 access points).

You can add mobility members that are part of a different Mobility Group into the
mobility list; this list is used for mobility anchors, which can anchor clients within a different Mobility Group.
There can be up to 72 members in the list, with up to 24 in the same Mobility Group.

In a mobility list, the below combinations of mobility groups and members are allowed:

 3 mobility groups with 24 members in each group

 12 mobility groups with 6 members in each group
 24 mobility groups with 3 members in each group
 72 mobility groups with 1 member in each group

Configuring Mobility Groups (Cisco Wireless LAN Controllers)

To add an entry to a controller mobility configuration using the GUI, go to CONTROLLER >
Mobility Management > Mobility Groups, and click on New. Here you enter the MAC address
and IP address of the controller management interface you are adding along with the mobility
group name of that controller.

Mobility, or roaming, is a wireless LAN client‘s ability to maintain its association seamlessly
from one access point to another securely and with as little latency as possible.

A mobility group is a set of controllers, identified by the same mobility group name, that enables
seamless roaming for wireless clients. By creating a mobility group, we can enable multiple
controllers in a network to dynamically share information and forward data traffic when inter-
controller or inter-subnet roaming occurs. Controllers in the same mobility group can share the
context and state of client devices as well as their list of access points so that they do not
consider each other‘s access points as rogue devices.

Wireless access point

A wireless access point (WAP or AP) is a device that allows wireless communication devices to
connect to a wireless network using Wi-Fi, Bluetooth or related standards. The WAP usually
connects to a wired network, and can relay data between the wireless devices (such as computers
or printers) and wired devices on the network.

Basic Firewall

A firewall is a part of a computer system or network that is designed to block
unauthorized access while permitting outward communication. It is also a device or set of
devices configured to permit, deny, encrypt, decrypt, or proxy all computer traffic between
different security domains based upon a set of rules and other criteria.

Routers

 A router, like a switch forwards packets based on address.
 Usually, routers use the IP address to forward packets, which allows the network to go across
different protocols.
 Routers forward packets based on software while a switch (Layer 3 for example) forwards using
hardware called ASIC (Application Specific Integrated Circuits).
 Routers support different WAN technologies but switches do not.
 Besides, wireless routers have access point built in.
 The most common home use for routers is to share a broadband internet connection.
 As the router has a public IP address which is shared with the network, when data comes through
the router, it is forwarded to the correct computer.

1.5. Device Schedules


 Owing to the increasing need for massive data analysis and model training at the network
edge, as well as the rising concerns about the data privacy, a new distributed training
framework called federated learning (FL) has emerged. In each iteration of FL (called
round), the edge devices update local models based on their own data and contribute to
the global training by uploading the model updates via wireless channels. Due to the
limited spectrum resources, only a portion of the devices can be scheduled in each round.
 In order to take a backup of your device configurations, you need to first discover your
devices using Network Configuration Manager. The tool also allows you to add devices
in bulk. Once the devices are discovered, you can proceed to scheduling network
backups. Device configurations need to be backed up often in order to maintain a
repository of backups ready to be restored in case of emergencies. In large enterprises
with a large number of devices, the task of taking device configuration backups
becomes a huge, mundane job that takes up most of an admin's time. Being able to
schedule configuration backups frees up a network admin's time for
productivity-enhancing tasks.

1.5.1 VPN Policy Manager


A virtual private network (VPN) is a private data network connection that makes use of the
public telecommunications infrastructure, maintaining privacy through the use of a tunneling

protocol and security procedures. Using a virtual private network involves maintaining privacy
through the use of authorization, authentication, and encryption controls that encrypt data before
sending it through the public network and decrypting it at the receiving end. In a site-to-site
configuration, a VPN can be contrasted with a system of owned or leased lines that can only be
used by one company. In a remote user configuration, a VPN can be contrasted to a privately
managed remote access system (e.g. dial-up). The concept of the VPN is to give the agency the
same capabilities at much lower costs by using the shared public infrastructure rather than a
private one. However, VPN links are considered to be less trusted than dedicated, private
connections; therefore, this policy sets forth the security requirements for VPN connections to
the State‘s network.

VPN‘s enable an organization to use public networks such as the internet, to provide a secure
connection among the organization‘s wide area network. Customers can use VPN‘s to connect an
enterprise Intranet to a wide area network comprised of partners, customers, resellers and
suppliers. Traditionally, businesses have relied on private 56-Kbps or T-1 leased lines to connect
remote offices together. Leased lines are expensive to install and maintain. For small companies,
the cost is just too high. Using the internet as a backbone, a VPN can securely and cost-
effectively connect all of a company's offices, telecommuters, mobile workers, customers,
partners and suppliers.

Overview of how it Works

 Two connections – one is made to the Internet and the second is made to the VPN.
 Datagrams – contains data, destination and source information.
 Firewalls – VPNs allow authorized users to pass through the firewalls.
 Protocols – protocols create the VPN tunnels.

VPN Gateway and Tunnels

A VPN gateway is a network device that provides encryption and authentication service to a
multitude of hosts that connect to it. From the outside (internet), all communications addressed to
inside hosts flow through the gateway. There are two types of endpoint VPN tunnels:

 Computer to gateway

For remote access: generally set up for a remote user to connect to a corporate LAN

 Gateway to Gateway

This is a typical enterprise-to-enterprise configuration. The two gateways communicate with


each other

Figure 1. 5 Types of endpoint VPN tunnels

Element Manager
Importance of Managing Network Devices

 Configuration Management
 Performance Management
 Fault Management

Common ways to analyze the configuration, Performance and Faults on a Cisco Device

 CLI (Command Line Interface)


 SNMP (Simple Network Management Protocol)
 CiscoView

Using SNMP and CiscoView:

 A user can define a VTP domain,


 Configure devices as VTP servers, clients, or transparent devices in the domain,

 Create VLANs within the domain,
 Assign ports to a VLAN, and view the ports assigned to a VLAN.

Figure 1. 6 Access a device using CiscoView

CLI Configuration Manager


Configuration Manager can be run from a command line. You want to run the Configuration

Manager from the command line as opposed to using the graphical user interface because of the
following reasons:

 You want to automate the configuration of the software.


 Your site wants the command-line version run for security reasons.
 You want to create a script to set up your system and then allow a user to run the script.

You begin by generating the configuration XML files that define the application server, the
profile type, and the XML file path. You then edit the files to enter values for your environment.

Understanding Cisco IOS Command Line Modes

Cisco Command Line Interface (CLI) is the main interface where we will interact with Cisco
IOS devices. CLI is accessible directly via console cable or remotely via methods such as
Telnet/SSH. From here, we can do things such as monitoring device status or changing
configuration. Cisco has divided its CLI into several different modes. Understanding Cisco IOS
Command Line Modes is essential because each mode has its own set of commands. Cisco has at
least three main command line modes: user EXEC mode, privileged EXEC mode, and global
configuration mode. Of course, there are other more specific modes such as interface
configuration mode, extended ACL configuration mode, routing/VLAN configuration mode, etc.

User EXEC mode

By default this is where we begin the session with our Cisco IOS devices (unless a specific
privilege level has been granted to our user account). The characteristics of user EXEC mode
are:

 Indicated by a right angle bracket sign (">") next to the device hostname.

 Contains commands that we can use to test device/network configuration such as ping and
traceroute.
 A limited set of commands that do not change the device configuration, such as the show and
clear commands, is available.
 We can connect to other devices from user EXEC mode by using telnet or ssh.
 To protect user EXEC mode, we can create a username and password combination on the device.
 Issuing exit command here will disconnect the session.

The flowchart below shows the position of each mode relative to the other modes.

Figure 1. 7 Cisco IOS Command Line Modes


Privileged EXEC mode

Basically, privileged EXEC mode contains the complete command of what we got in user EXEC
mode. In this mode, we still cannot do any configuration changes. However, the configuration
mode can only be accessed from privileged EXEC mode. Privileged EXEC mode is activated
after we use command enable on user EXEC mode.
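
As a short sketch, the session below shows how the prompt changes when moving between user EXEC and privileged EXEC mode; the Password: prompt appears only if an enable password or secret has been configured:

Switch> enable
Password:
Switch# disable
Switch>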

Below are the characteristics of privileged EXEC mode:

 Indicated by a hash sign ("#") next to the device hostname


 All commands that are available in user EXEC mode are available here too
 A more complete set of commands under the show and clear commands is available here. For
example, in user EXEC mode there is no show running-config under the show command, but in
privileged EXEC mode it exists.
 Unless the user account that we used has specific privilege level assigned to it, by default it will
get the highest privilege level, which is level 15.
 Privileged EXEC mode can be protected using an enable password.
 Issuing disable command here will bring us back to the user EXEC mode.
 Issuing exit command here will disconnect the session.

Global configuration mode

This is where the real configurations are done. We can enter global configuration mode from
privileged EXEC mode by using command configure terminal. From here we can do changes on
the global device configuration such as hostname, domain-name, creating user accounts, etc; or
we can enter more specific configuration within global configuration mode and make changes
such as IP address interface, access-list, DHCP, policy, etc.

Some characteristics of global configuration mode are:

 Indicated by the device hostname prompt, followed by the word "config" inside brackets and then a
hash sign ("#").
 All commands from EXEC mode can be used here by adding a word do before the command
that we want to execute, for example if we want to use show running-config in global
configuration mode we have to type it as do show running-config.

 Despite the fact that we can change the configuration within global configuration mode, if we want to save
the configuration we have to do it by exiting back to privileged EXEC mode and issuing the command
write memory or copy running-config startup-config from there (however, these two
commands can also be used from within global configuration mode by adding a do prefix to the
command, as explained in the previous point).
 Global configuration mode can be protected by assigning a custom privilege level to the user
account then set allowed commands and block the rest, thus limiting the configuration capability.
 Issuing exit here will bring us back to the privileged EXEC mode.

To change a device configuration, you need to enter the global configuration mode. This mode
can be accessed by typing configure terminal (or conf t, the abbreviated version of the command)
from the enable mode. The prompt for this mode is hostname(config). Global configuration
mode commands are used to configure a device. You can set a hostname, configure
authentication, set an IP address for an interface, etc. From this mode, you can also access
submodes, for example the interface mode, from where you can configure interface options. You
can get back to a privileged EXEC mode by typing the end command. You can also type CTRL
+ C to exit the configuration mode.

Submode Commands

A global configuration mode contains many sub-modes. For example, if you want to configure
an interface you have to enter that interface configuration mode. Each submode contains only
commands that pertain to the resource that is being configured. To enter the interface
configuration mode you need to specify which interface you would like to configure. This is
done by using the interface INTERFACE_TYPE/INTERFACE_NUMBER global configuration
command, where INTERFACE_TYPE represents the type of an interface (Ethernet,
FastEthernet, Serial…) and INTERFACE_NUMBER represents the interface number, since
Cisco devices usually have more than one physical interface. Once inside the interface
configuration mode, you can get a list of available commands by typing the "?" character. Each
submode has its own prompt.
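
For instance, a minimal sketch of entering the interface submode and assigning an IP address looks like the following; the interface name and address are illustrative values:

Router# configure terminal
Router(config)# interface FastEthernet0/0
Router(config-if)# ip address 192.168.1.1 255.255.255.0
Router(config-if)# no shutdown
Router(config-if)# end
Router#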

UNIT - II

2. ROUTER AND SWITCH


2.1. Basic Configuration
Routers: are small electronic devices that join multiple computer networks together via either
wired or wireless connections.

Figure 2. 1 Routers
Both Router and Switch are the connecting devices in networking. The main objective of router
is to connect various networks simultaneously and it works in network layer, whereas the main
objective of switch is to connect various devices simultaneously and it works in data link layer.

Switch connects multiple devices to create a network; a router connects multiple switches, and
their respective networks, to form an even larger network. These networks may be in a single
location or across multiple locations. Here is the arrangement of switch and router.

Figure 2. 2 The arrangement of switch and router
The difference between router and switch.

1. The main objective of a router is to connect various networks simultaneously, while the main
objective of a switch is to connect various devices simultaneously.
2. A router works in the network layer, while a switch works in the data link layer.
3. A router is used by LANs as well as MANs, while a switch is used only by LANs.
4. Through a router, data is sent in the form of packets, while through a switch, data is sent in the
form of packets and frames.
5. There are fewer collisions in a router, while no collisions take place in a full-duplex switch.
6. A router is compatible with NAT, while a switch is not compatible with NAT.
7. The types of routing are adaptive and non-adaptive routing, while the types of switching are
circuit, packet and message switching.
A network switch (also called switching hub, bridging hub, officially MAC bridge) is a computer
networking device that connects devices together on a computer network, by using packet
switching to receive, process and forward data to the destination device. Unlike less advanced
network hubs, a network switch forwards data only to one or multiple devices that need to
receive it, rather than broadcasting the same data out of each of its ports.

A network switch is a multiport network bridge that uses hardware addresses to process and
forward data at the data link layer (layer 2) of the OSI model. Switches can also process data at
the network layer (layer 3) by additionally incorporating routing functionality that most
commonly uses IP addresses to perform packet forwarding; such switches are commonly known
as layer-3 switches or multilayer switches. Besides the most commonly used Ethernet switches, switches exist for
various types of networks, including Fibre Channel, Asynchronous Transfer Mode, and
InfiniBand.

Figure 2. 3 Switch
How Routers Work

In technical terms, a router is a Layer 3 network gateway device, meaning that it connects two or
more networks and that the router operates at the network layer of the OSI model. Routers
contain a processor (CPU), several kinds of digital memory, and input-output (I/O) interfaces.
They function as special-purpose computers that do not require a keyboard or display.

The router‘s memory stores an embedded operating system (O/S). Compared to general-purpose

OS products like Microsoft Windows or Apple Mac OS, router operating systems limit what
kind of applications can be run on them and also need much smaller amounts of storage space.

Examples of popular router operating systems include Cisco Internetwork Operating System
(IOS) and DD-WRT. These operating systems are manufactured into a binary firmware image
and are commonly called router firmware.

By maintaining configuration information in a part of memory called the routing table, routers
also can filter both incoming and outgoing traffic based on the addresses of senders and
receivers.

Types of Routers and Routing Devices

A class of portable Wi-Fi routers called travel routers is marketed to people and families who
want to use the functions of a personal router at other locations besides home. Routing devices
called mobile hotspots that share a mobile (cellular) Internet connection with Wi-Fi clients are
also available. Many mobile hotspot devices only work with certain brands of cell service.

How to access a router

1. Connect the router to the modem: The router sends the modem‘s internet connection to any
computer connected to its network. Ensure that both the modem and the router have their power
cables plugged in. Plug one end of a network cable into your modem. Plug the other end into the
port on your router labeled Internet, WAN, or WLAN. The labels will vary depending on the
type of router you have.
2. Install the software: Depending on the router brand and model, you may or may not receive
software to install on the computer. This software is typically an interface to connect to the
router and adjust the settings, though it is not required.
3. Connect your computer to the router: You can do this either through an Ethernet cable or over
Wi-Fi. If this is your first time setting the router up, then connect your computer via Ethernet so
that you can configure the wireless network.

Typically, the Ethernet ports on the Router are labeled 1, 2, 3, 4, etc. but any port not labeled

―WAN,‖ ―WLAN,‖ or ―Internet‖ will work. Connect the other end of the cable to the Ethernet
port on your computer.

Routing Mechanisms

How does a router learn about paths (routes) to destinations? There are several routing
mechanisms that may be used as input sources to assist a router in building its route table.
Typically, routers use a combination of the following routing methods to build a router‘s route
table:

 Directly connected interface


 Static
 Default
 Dynamic

Although there are specific advantages and disadvantages for implementing them, they are not
mutually exclusive.

Directly Connected Interface


These routes come from the active router interfaces. Routers add a directly connected route when
an interface is configured with an IP address and is activated. Directly connected interfaces are
routes that are local to the router. That is, the router has an interface directly connected to one or
more networks or subnets. These networks are inherently known through the routers configured
interface attached to that network. These networks are immediately recognizable and traffic
directed to these networks can be forwarded without any help from routing protocols.

Static Routing

Static routing is a type of network routing technique. Static routing is not a routing protocol;
instead, it is the manual configuration and selection of a network route, usually managed by the
network administrator. It is employed in scenarios where the network parameters and
environment are expected to remain constant. Static routing is only optimal in a few situations.
Network degradation, latency and congestion are inevitable consequences of the non-flexible

nature of static routing because there is no adjustment when the primary route is unavailable.

Static routes are routes to destination hosts or networks that an administrator has manually
entered into the router‘s route table. Static routes define the IP address of the next hop router and
local interface to use when forwarding traffic to a particular destination.

Because this type of route has a static nature, it does not have the capability of adjusting to
changes in the network. If the router or interface defined fails or becomes unavailable, the route
to the destination fails.

This type of routing method has the advantage of eliminating all traffic related to routing
updates. Static routing tends to be ideal where the link is temporary or bandwidth is an issue, so
you want to use this method for dial-up networks or point-to-point WAN links. You can
implement static routes in conjunction with other routing methods to provide routes to
destinations across dial backup links when primary links implementing dynamic routing
protocols have failed.

Designing an entire network with only this method is highly impractical, because you would have to enter a static route
on every router for each network it is not directly attached to. In
addition, if a link or a router within the internetwork fails or is added, you would have to
reconfigure each router, removing the failed route or adding a new route. Meanwhile, routers
obviously cannot forward traffic to that destination because the original path has become invalid.
Static routing can have an extreme amount of overhead in the form of intense administrative
hours spent getting the network up and keeping it going.

You would want to implement static routes only in very small networks, with perhaps as few as 10
to 15 links total. Even then, dynamic routes offer so much more versatility.

Static routes conserve bandwidth because they do not cause routers to generate route update
traffic; however, they tend to be time consuming because a system administrator has to manually
update routes when changes occur in the network.

Static routes are also ideal for a stub network providing a single dedicated point-to- point WAN

connection outside the network to an upstream ISP (Internet Service Provider) providing Internet
access.

Configuring Static Routing

Example: consider the IP addresses of the router interfaces connecting Routers B and C to Router A,
and Router A's interface types, such as S0 and S1 (Serial 0 and Serial 1), which indicate the specific
interfaces on Router A connected to these links.
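
As a sketch with assumed addressing (the actual addresses depend on the figure for this example), the static routes on Router A might look like the following, where the network behind Router B is reached through Serial 0 and the network behind Router C through Serial 1:

RouterA(config)# ip route 172.16.2.0 255.255.255.0 172.16.12.2
RouterA(config)# ip route 172.16.3.0 255.255.255.0 172.16.13.2

Each command names the destination network, its subnet mask, and the next-hop IP address of the neighbouring router's serial interface.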

Default

In computer networking, the default route is a configuration of the Internet Protocol (IP) that
establishes a forwarding rule for packets when no specific address of a next-hop host is available
from the routing table or other routing mechanisms. Or the route that takes effect when no other
route is available for an IP destination address. If a packet is received on a routing device, the
device first checks to see if the IP destination address is on one of the device‘s local subnets. If
the destination address is not local, the device checks its routing table. If the remote destination
subnet is not listed in the routing table, the packet is forwarded to the next hop toward the
destination using the default route. The default route generally has a next-hop address of another
routing device, which performs the same process. The process repeats until a packet is delivered
to the destination.

The default route is generally the address of another router, which treats the packet the same
way: if a route matches, the packet is forwarded accordingly; otherwise, the packet is forwarded
to the default route of that router. The route evaluation process in each router uses the longest
prefix match method to obtain the most specific route. The network with the longest subnet mask
or network prefix that matches the destination IP address is the next-hop network gateway. The
process repeats until a packet is delivered to the destination host, or earlier along the route, when
a router has no default route available and cannot route the packet otherwise. In the latter case,
the packet is dropped and an ICMP Destination Unreachable message may be returned. Each
router traversal counts as one hop in the distance calculation for the transmission path.
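
On a Cisco router, the default route is typically configured as a static route to 0.0.0.0 with a 0.0.0.0 mask; the next-hop address below is an illustrative value:

Router(config)# ip route 0.0.0.0 0.0.0.0 203.0.113.1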

Dynamic Routing

Dynamic routing is a networking technique that provides optimal data routing. Unlike static
routing, dynamic routing enables routers to select paths according to real-time logical network
layout changes. In dynamic routing, the routing protocol operating on the router is responsible
for the creation, maintenance and updating of the dynamic routing table. In static routing, all
these jobs are manually done by the system administrator.
Dynamic routing uses multiple algorithms and protocols. The most popular are Routing
Information Protocol (RIP) and Open Shortest Path First (OSPF).

Typically, dynamic routing protocol operations can be explained as follows:

1. The router delivers and receives the routing messages on the router interfaces.
2. The routing messages and information are shared with other routers, which use exactly the same
routing protocol.
3. Routers swap the routing information to discover data about remote networks.
4. Whenever a router finds a change in topology, the routing protocol advertises this topology
change to other routers.

Dynamic routing is easy to configure on large networks and is more intuitive at selecting the best
route, detecting route changes and discovering remote networks. However, because routers share
updates, they consume more bandwidth than in static routing; the routers‘ CPUs and RAM may
also face additional loads because of routing protocols. Finally, dynamic routing is less secure
than static routing.

Benefits of Router

 Because a router separates collision domains, network traffic can be reduced.


 Because a router separates broadcast domains, network traffic can be reduced.
 It uses MAC addresses and IP addresses to choose the best route across a network.
 Easy to connect to the wired or wireless network.
 Highly secured with a password.
 No loss of information.
 It can connect to different network architecture such as Ethernet cable, Wi-Fi, WLAN.

 The wireless router is easy to connect to the internet for a laptop or pc. No need to worry about a
bunch of wires.

2.2. Passwords
The Wi-Fi network password is the password that you use to join the wireless network. Unless
you have set your network to "No security", you are already using a Wi-Fi network password. If
you are using the router's default password or a memorable one that you have set yourself, then
you should consider replacing this password with a strong password. Remember that we do not
consider the router's default password to be strong.

The router's admin account is the account that you use to log into the router to make
configuration changes. Your router shipped with a default "admin" password. To upgrade the
"admin" account with a strong password, go to the ADMINISTRATION section. It is usually the
first option. On most routers, you can also change the administrator user name from admin. This
will make things harder for hackers, but if you‘ve got a 12 character strong password in place,
then having people be able to guess what your admin user id is called isn‘t going to do them
much good.

Reason why you should change your router‘s default password

The first reason is router‘s default password is printed on the side of the router, and so is on
display to anyone within touching distance of the unit. As we already know, sharing passwords
or leaving passwords in sight is absolutely not good security practice. Given that our highly
important network password is on view to everyone walking past it, we need to change the
default as a matter of urgency.

Second, in creating network passwords, router manufacturers often prefer convenience over
security. This means that they will prefer shorter passwords over longer ones. Shorter passwords
are easier to discover (by a method known as brute-force) than longer ones. Whilst all router
manufacturers have different standards for default password generation (and some may be
secure), we will always err on the side of caution and assume that the default Wi-Fi network
password is going to be easy for attackers to discover, and replace it with something strong and
unique.

2.3. Wildcard Masks
A wildcard mask is a mask of bits that indicates which parts of an IP address are available for
examination. A wildcard Mask matches everything in the network portion of an IP address with a
Zero. Also note the two rules: 0-bit = match and 1-bit = ignore. Wildcard mask can be used to
target a specific host/IP address, entire network, subnet, or even a range of IP addresses. For
example, if we wanted to target a specific host, every bit in the host's IP address must match. As
mentioned earlier, a 0-bit is a match and a 1-bit means ignore; therefore, the wildcard
mask to target a single host is 0.0.0.0.

Example: To target a specific network every bit in the ‗Network‘ portion of the IP address must
be a match. Therefore, if a class C network has the IP 192.168.1.0 the wildcard Mask would be
0.0.0.255. From these examples, you can see the benefit of using Wildcard Masks. They are
different from subnet masks in that they can target a specific Host, specific IP address, specific
Network, specific Subnet, or a range of IP addresses. They can even target all even or all odd
networks. All of these capabilities make the Wildcard Mask much more flexible than the Subnet
mask.

A wildcard mask can be thought of as an inverted subnet mask. For example, a subnet mask of
255.255.255.0 (binary equivalent = 11111111.11111111.11111111.00000000) inverts to a
wildcard mask of 0.0.0.255 (binary equivalent = 00000000.00000000.00000000.11111111).

What is the wildcard mask in an access list in networking?

Wildcard mask is a 32-bit quantity used in conjunction with an IP address to determine which
bits in an IP address should be ignored when comparing that address with another IP address.
Wildcard masks use the following rules to match binary 1s and 0s:

 Wildcard mask bit 0: Match the corresponding bit value in the address.
 Wildcard mask bit 1: Ignore the corresponding bit value in the address.

Wildcard masking for access lists operates differently from an IP subnet mask. A zero in a bit
position of the access list mask indicates that the corresponding bit in the address must be
checked; a one in a bit position of the access list mask indicates that the corresponding bit in the
address is not "interesting" and can be ignored.

Wildcard subnet mask is used on the following occasions:

 Defining subnet in ACL


 Defining subnet member in OSPF area
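
As a minimal sketch of both uses (the subnet 192.168.24.0/24 matches the example later in this
section, while the ACL number, OSPF process ID, and area are hypothetical):

Router(config)# access-list 10 permit 192.168.24.0 0.0.0.255
Router(config)# router ospf 1
Router(config-router)# network 192.168.24.0 0.0.0.255 area 0

In both commands, 0.0.0.255 is the wildcard (inverse) mask of the /24 subnet.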

Counter Example

let‘s say you have the following subnet.

192.168.24.0/24
or 192.168.24.0 with 255.255.255.0 subnet mask

The binary format of the subnet mask is the following


11111111.11111111.11111111.00000000

In binary arithmetic, inverting a number means "flipping" each bit from one state to the other
(i.e., from "on" to "off", from "1" to "0", and vice versa).

The inverse of the subnet mask in binary format is then the following
00000000.00000000.00000000.11111111

In decimal format, the inverse subnet mask looks like this 0.0.0.255

When you know, remember, or can count the quantity of IP addresses or IP subnets within a
certain VLSM network, you should be able to quickly deduce what the wildcard or inverse subnet
mask in question looks like. This way, you can skip the binary arithmetic and use straightforward
decimal arithmetic to get the result much more quickly and simply.

This FAQ presents two quick ways of finding the wildcard or inverse subnet mask using simple
decimal calculation of the quantity of available IP addresses or IP subnets within a certain IPv4
VLSM network. The two ways are listed below.

255 Octet Subtraction

This is one way of doing the simple calculation. Note that when we do the binary inverse, we do it
octet by octet. Each octet holds a number from 0 to 255. To quickly find the inverse subnet mask,
you can subtract each given octet from 255 and use the result.

Here are illustrations


Example #1: 255.255.255.0
255 – 255 = 0
255 – 0 = 255

    255.255.255.255
  - 255.255.255.  0
  -----------------
      0.  0.  0.255

Inverse of /24 is 0.0.0.255

Example #2: 255.255.255.224
255 – 255 = 0
255 – 224 = 31

    255.255.255.255
  - 255.255.255.224
  -----------------
      0.  0.  0. 31

Inverse is 0.0.0.31

Example #3: 255.255.255.252
255 – 255 = 0
255 – 252 = 3

    255.255.255.255
  - 255.255.255.252
  -----------------
      0.  0.  0.  3

Inverse is 0.0.0.3

Host Number: This is another way of finding the inverse subnet mask. In /24 or smaller subnets,
only the last octet indicates the number of unique IP addresses that exist within the subnet in
question. For /30 specifically, the last octet indicates four unique IP addresses, numbered 0 to 3.
Take the last of those numbers and apply it to the inverse subnet mask.

As for the first three octets, they "automatically" convert to 0, since only the last octet "matters"
from the number-of-IP-addresses perspective in /24 or smaller subnets.

Here are illustrations


Example #1: 252 —> four IP addresses, from 0 to 3
Inverse is 0.0.0.3
Example #2: last octet: 224 —> 32 IP addresses, from 0 to 31
Inverse is 0.0.0.31

Working with Subnet Larger than /24

When you have a subnet larger than /24, you need to consider other octets in addition to the last
one. Using the second method (the Host Number), you apply the last number of each octet to the
inverse.

Keep in mind that the basic concept of the Class C subnet calculation (/24 or smaller subnets) also
applies to Class B (/16 down to /23) and Class A (/8 down to /15) subnet calculations. While the
first three octets in a Class C calculation are always constant and only the last octet changes (as
shown above), in a Class B calculation the first two octets and the last octet are constant and only
the third octet changes. Similarly, in a Class A calculation the first octet and the last two octets
are constant and only the second octet changes.

Here are illustrations

Example #1: 255.255.254.0


3rd octet: 254 —> two /24 subnets, from 0 to 1
4th octet: 0 —> 256 IP addresses, from 0 to 255

Inverse is 0.0.1.255

Example #2: 255.255.248.0


3rd octet: 248 —> eight /24 subnets, from 0 to 7
4th octet: 0 —> 256 IP addresses, from 0 to 255

Inverse is 0.0.7.255

Example #3: 192.0.0.0


1st octet: 192 —> sixty four /8 subnets, from 0 to 63
2nd, 3rd, 4th octets: 0 —> 0 to 255

Inverse is 63.255.255.255

Note that the constants in the Class A and Class B subnet calculations are slightly different from
those in the Class C calculation. The constants in the Class C calculation, which are the first three
octets, are all 0. In the Class B calculation, the constants are 0 for the first two octets while the
last octet is a constant 255. In the Class A calculation, the constant is 0 for the first octet while
the last two octets are a constant 255.

Table 2. 2 Examples of Wildcard Masks

Table 2. 3 List of wildcard mask
Quizzes

Activity 2.6.

2.4. Access Control Lists


An access control list (ACL) is a list of rules that specifies which users or systems are granted or
denied access to a particular object or system resource. Access control lists are also installed in
routers or switches, where they act as filters, managing which traffic can access the network.

It is a table that tells a computer operating system which access rights each user has to a
particular system object, such as a file directory or individual file. Each object has a security
attribute that identifies its access control list. The list has an entry for each system user with
access privileges. The most common privileges include the ability to read a file (or all the files in
a directory), to write to the file or files, and to execute the file (if it is an executable file or program).

Access control lists are used throughout many IT security policies, procedures, & technologies.
An access control list is a list of objects; each entry describes the subjects that may access that
object. Any access attempt by a subject to an object that does not have a matching entry on the
ACL will be denied. Technologies like firewalls, routers, and any border technical access device
are dependent upon access control lists in order to properly function. One thing to consider when
implementing an access control list is to plan for and implement a routine update procedure for
those access control lists.

Access Lists perform several functions within a router, including:

 Implement security / access procedures


 Act as a protocol "firewall"

Use Access Lists:

 Deny traffic you do not want based on packet tests (for example, addressing or traffic type)
 Identify packets for priority or custom queuing
 Restrict or reduce the contents of routing updates
 Provide IP traffic dynamic access control with enhanced user authentication using the lock-and-
key feature
 Identify packets for encryption
 Identify Telnet access allowed to the router virtual terminals
 Specify packet traffic for dial-in remote sites using dial-on-demand routing (DDR)

Types of IPv4 ACLs


Standard and Extended ACLs

The previous sections describe the purpose of ACLs as well as guidelines for ACL creation. This
section covers standard and extended ACLs and named and numbered ACLs, and it provides
examples of placement of these ACLs.
There are two types of IPv4 ACLs:

 Standard ACLs: These ACLs permit or deny packets based only on the source IPv4 address.
 Extended ACLs: These ACLs permit or deny packets based on the source IPv4 address and
destination IPv4 address, protocol type, source and destination TCP or UDP ports, and more.

The following example shows how to create a standard ACL. In this example, ACL 10 permits
hosts on the source network 192.168.10.0/24. Because of the implied "deny any" at the end, all
traffic except for traffic coming from the 192.168.10.0/24 network is blocked with this ACL.

R1(config)# access-list 10 permit 192.168.10.0 0.0.0.255


R1(config)#

The following example shows, the extended ACL 100 permits traffic originating from any host
on the 192.168.10.0/24 network to any IPv4 network if the destination host port is 80 (HTTP).

R1(config)# access-list 100 permit tcp 192.168.10.0 0.0.0.255 any eq www


R1(config)#

Notice that the standard ACL 10 is only capable of filtering by source address, while the
extended ACL 100 is filtering on the source and destination Layer 3 and Layer 4 protocol (for
example, TCP) information.
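
An ACL has no effect until it is applied to an interface or another feature. As a minimal, hedged
sketch (the interface name and direction are assumptions for illustration), extended ACL 100
could be applied inbound on a LAN-facing interface as follows:

R1(config)# interface GigabitEthernet0/0
R1(config-if)# ip access-group 100 in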

2.5. Remote Access


Remote access allows logging into a system as an authorized user without being physically
present at its keyboard. Remote access is commonly used on corporate computer networks but
can also be utilized on home networks.
Remote working is the practice of completing your normal daily working life away from the
office, using some form of technology and an internet connection. Commonly, this means
working from home, often with a laptop employed to remotely connect to key systems, which
may be in the office or hosted in the cloud.

Remote working is not typically limited by location. Assuming the user has access to technology
and an internet connection, they could remotely work from anywhere; a partner‘s office, for
example, or a roadside service station whilst away from the office on another job.

When an organization needs to provide employees or third parties remote access to its network,
there are a number of solutions available.

1. VPNs: Virtual Private Networks

When employees need to remotely access their company files, a virtual private network (VPN) is
often the tool of choice. VPNs are designed to give employees the online privacy and anonymity
they (and their company) require, by turning a public internet connection into a private network.
VPN software creates a ―data tunnel‖ between the corporate network and an exit node in another
location (such as your workplace), which may be anywhere in the world. In other words, VPNs
provide a kind of telepresence: they can make it seem as if you are at the office on your company
machine with all of your applications and files at your disposal. To fully achieve its goals, a VPN
must accomplish two important tasks:

 Create the connection or tunnel; and


 Protect that connection, so that your files (and your company‘s network) will not be
compromised.

VPNs achieve this second step by encrypting data, these encryption and masking features help
protect your online activities and keep them anonymous. Since VPN services establish secure
and encrypted connections, an organization‘s employees can get the remote access they need
with greater privacy than the public internet. However, let us face it: using an unsecured Wi-Fi
network is simply not an option for company users, because your private information would be
totally exposed to anyone snooping on that network.

For all these reasons, VPNs have become a popular option for companies who need to give their
employees remote access, but want to provide online security and privacy.

The risks and drawbacks of VPNs

VPNs are certainly an improvement over using unprotected methods to remotely access an
organization‘s network, and in certain business environments, they can provide a useful service.
However, VPNs carry a number of drawbacks and inherent risks. Let us examine a few of the
major issues.

 Not optimal for remote vendor access: While VPNs may be good for giving remote access to
internal employees, it is not the optimal solution for three crucial tasks: identifying, controlling,
and auditing third-party vendors. VPNs simply do not have the degree of granular control needed
to properly monitor or restrict where a vendor can go and what they can do on a company‘s
network.
 VPNs are exploited in major data breaches: A note of caution for those thinking of using VPNs:
their reputation has suffered a major blow due to their implication in a number of serious data
breaches. National news stories have reported on how hackers exploited VPNs to cause data
breaches at several major companies. For example, in the case of data breaches suffered by
Home Depot and Target, malicious actors apparently stole VPN credentials, giving them access
to company networks, and the hackers also obtained an administrative credential. This
combination let them infiltrate and move through company networks.
 VPNs are leveraged in multi-stage cyberattacks: Hackers have also exploited VPNs in prolonged
multi-stage cyberattacks, and security alerts have specifically named VPNs as a major initial
access point for hackers.
 Audit and compliance risks: Another drawback to using VPNs for remote access: they may
expose organizations to compliance or regulatory risk. As cyberattacks have become more
costly, sophisticated, and frequent, some policy-making groups have imposed tougher standards
on their auditing processes and regulators are asking tougher questions about third-party access
methods. Many remote access tools such as VPNs may not be able to provide the level of audit
detail required and fail to meet these higher standards.

2. Desktop Sharing

Desktop sharing is another way organizations can provide remote access to users. These software
tools can provide real-time sharing of files, presentations, or applications with coworkers,
vendors, or other clients. There are many applications made possible by desktop sharing

including remote support, webinars, and online conferences with audio and visual content
(presentation sharing), and real-time global collaboration on projects.
Another application of desktop sharing is remote login for workers who need access to their
work computers from any Web-connected device (desktop, laptop, phone, or tablet).

Limitations of desktop sharing

Like VPNs, desktop sharing software tools come with a number of drawbacks. First, there are
authentication risks. Anyone, anywhere, can log into a desktop sharing tool if they have the
credentials, meaning they have access to the whole network as if they are in the building. During
a remote support session, if an employee surrenders control of their machine to a remote rep
whose account has been compromised, your company‘s internal sensitive files could become
visible to bad actors and used for nefarious purposes.
Second, desktop sharing tools are not the best solution for supporting enterprise environments.
While these tools can be utilized to provide desktop support and handle helpdesk tasks, they
typically do not have the security and functionality required for complex enterprise remote
support such as server or application maintenance. They often lack the strict security controls
(logging and audit) that enterprises in highly regulated industries need. In addition, while
desktop sharing can be useful for end-user support, there are additional tools and protocols
needed when supporting servers, databases, and other enterprise applications.

3. PAM: Privileged Access Management

To go beyond VPNs and desktop sharing, you need an alternative that can manage identities
more closely than general IAM technologies such as Active Directory. This is especially true of the
privileged or admin accounts used for many enterprise-level support tasks. In order to securely
manage credentials for privileged accounts, a better solution was developed: Privileged Access
Management, or PAM.

PAM is a set of tools and technologies that can be used to secure, control, and monitor access via
privileged accounts to an organization‘s resources. The most effective PAM solutions address
several areas of information security defense, such as advanced credential security, systems, and
data access control, credential obfuscation and user activity monitoring. Ensuring continuous

oversight of these target areas helps lower the threat of unauthorized network access, and makes
it easier for IT managers to uncover suspicious activity on the network.

Best practices in PAM indicate that least privilege protocols should be enforced, where users
only have access to the specific limited resources they need, rather than free rein to roam the
entire network. In addition, network managers should be able to restrict or expand user access as
needed, in real-time.

4. VPAM: Vendor Privileged Access Management

Many organizations need to provide privileged accounts to two types of users: internal users
(employees) and external users (technology vendors and contractors). However, organizations
that use vendors or contractors must protect themselves against potential threats from these
sources. External users pose a unique threat because network managers cannot control the
security best practices of their vendor partners; they can only protect against risky user behavior.

Vendor privileged access management (VPAM) refers to solutions that address the risks posed
from these external vendors and contractors, which are unique to third-party remote access users.
As the name implies, VPAM is related to PAM – but there are key differences. Traditional PAM
solutions are designed to manage internal privileged accounts, based on the reasonable
assumption that admins know the identity and employment status of each person accessing the
network. However, this is not the case with third-party users, and so VPAM solutions use multi-
factor authentication to provide an extra layer of protection.

In general, network managers and admins must be able to identify and authenticate external users
via more advanced VPAM methods that can confirm these users are connected to active vendor
employee accounts. A strong, effective VPAM solution will be able to continuously monitor
vendor user activity, using detailed tracking to provide optimal protection against unauthorized
use.

Both PAM and VPAM have the same overall goal: maintaining network security for all users
who have advanced permissions, whether they be internal or external.

Remote Desktop

The most sophisticated form of remote access enables users on one computer to see and interact
with the actual desktop user interface of another computer. Setting up remote desktop support
involves configuring software on both the host (local computer controlling the connection) and
target (remote computer being accessed). When connected, this software opens a window on the
host system containing a view of the target‘s desktop.

Advantages
Remote work is not all bad, of course. If it were, remote working would not be as popular as it is
becoming among workers and employers alike. Here are some advantages of remote working to
keep in mind:

1. Flexibility
Whilst the lack of routine was listed as a "Disadvantage" because some people struggle to get
motivated, others in fact thrive on it. For example, with a flexible work schedule, parents are able
to take their kids to and from school, which relieves reliance on childcare. In addition to this, there
are people who just perform more efficiently when they are in charge of their own schedule.
2. Less costly
Working at home part or full- time means less or no commuting, which means less money spent
on transport costs. Many remote workers will also make their lunch and coffee at home. Another
way remote workers save money is on their wardrobe! After all, unless you do video
conferencing, no one says you cannot wear informal clothes all day – no need for a work
wardrobe.
3. Work at your own pace.
Most people working remotely can choose how and when to work on projects, as long as they
deliver them by the deadline. They can take breaks, or push through and complete a task all in
one go. While self-discipline is necessary when it comes to working from home, several studies
have shown that at-home workers tend to be more productive than their on-site counterparts.
4. Fewer sick days
How many times have you decided not to go to the office because you have a bad cold or a sore

throat? When you are at home, you can take care of yourself and still get work done. Researchers
have found that remote workers are off ill less frequently than on-site workers.
5. Technological advances make so much possible
Working in an office means that you can hand in reports and communicate with colleagues
without any delays. Nowadays, though, the huge choice of ways to stay connected by phone and
internet ensures that speed is not an issue, even if you are working remotely. 4G networks are
available in many countries, and coverage areas are increasing every day. In addition to making it
possible to upload and send data four times faster than with 3G, 4G allows for easy transfer of
higher definition videos and images. If security is an issue, 4G is also the best option for keeping
documents and other information safe. Software such as Microsoft Office 365 also helps, letting
you collaborate with colleagues on documents in real time, even if one of you is in the office and
the other at home.

Disadvantages

1. Lack of routine
Not all remote work is the same; some people do have to follow a schedule and check in with
their employer at key times. For those in results-based areas like freelancing though, there are no
rules about when, for example, you have to get out of bed. There probably are not meetings to go
to either, and if there are, these kinds of home workers can often dial in to participate if they
want. It may sound like a dream, but some remote workers can struggle with the lack of a
schedule, finding it difficult to feel motivated or work efficiently.
2. No workplace social life
Even if you are interacting with clients or co-workers virtually, it is not the same as banter in the
office or getting lunch together. Remote workers often report feeling a little isolated. This is why
many of them prefer to come into the office at least a few days a week, or possibly do their work
in public places like coffee shops.
3. The challenge of the work/life balance.
You would think that remote working would make it easier to devote time to your personal life.
However, when you do not have specific hours or a clear separation of workplace and home
space, it can be hard to ―switch off‖ and stop thinking about work you have to do, or stop
constantly checking your phone or inbox.

4. Distractions
Then again, while some struggle with ―switching off‖ from work, others have trouble switching
on. Working at home means all the distractions of your personal life, good and bad. If you have
small children around, they may demand your attention. On the other hand, the TV may just beg
you to watch it for an hour or two. Without the filters that many workplaces put up, remote
workers have constant access to time- wasting websites, personal emails and social networks –
which can be deadly for productivity.
5. Complete dependence on technology
Since you are not face-to-face with colleagues or clients when you are a remote worker, you
have to make sure you are easily reachable by email, phone and other platforms your office or
external contacts may use (Skype for Business, Dropbox, etc.). Luckily, there are many solutions
to help you communicate and transfer data quickly. A 4G network, for example, offers incredible
speed, security and image definition.

Remote Access to Files

Basic remote network access allows files to be read from and written to the target, even without
remote desktop capability in place. Virtual Private Network (VPN) technology provides remote
login and file access functionality across wide area networks (WANs). A VPN requires client
software be present on host systems and VPN server technology installed on the target network.

2.6. Logging with Syslog Usage


Syslog is a protocol that allows a machine to send event notification messages across IP
networks to event message collectors – also known as Syslog Servers or Syslog Daemons. In
other words, a machine or a device can be configured in such a way that it generates a Syslog
Message and forwards it to a specific Syslog Daemon (Server).

Syslog messages are based on the User Datagram Protocol (UDP) type of Internet Protocol (IP)
communications. Syslog messages are received on UDP port 514. Syslog message text is
generally no more than 1024 bytes in length. Since the UDP type of communication is
connectionless, the sending host receives no acknowledgement of receipt and there is no
retransmission. If a UDP packet gets lost due to congestion on the network or due to resource
unavailability, it will simply get lost – no one would know about it.
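
As a minimal, hedged sketch of putting this to use, a Cisco IOS device could be pointed at a
Syslog Daemon as follows (the server address 192.168.1.50 is a hypothetical example):

Router(config)# logging host 192.168.1.50
Router(config)# logging trap informational
Router(config)# service timestamps log datetime msec

The logging trap command limits which Severity levels are forwarded to the server, and the
timestamps service makes the HEADER timestamps more useful.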

Syslog Daemon
A Syslog Daemon or Server is an entity that listens for the Syslog messages that are sent to it.
You cannot configure a Syslog Daemon to ask a specific device to send it Syslog messages. If
a specific device has no ability to generate Syslog messages, then a Syslog Daemon cannot do
anything about it. To make this clear, you can think of a Syslog Server or Syslog Daemon as a TV
that can only display the program currently being broadcast on a specific channel: you cannot ask
the station to send a different program on that channel.

Format of a Syslog Packet


The full format of a Syslog message seen on the wire has three distinct parts.

1. PRI
2. HEADER
3. MSG.
The total length of the packet cannot exceed 1,024 bytes, and there is no minimum length

1. PRI

The Priority part is a number that is enclosed in angle brackets. This represents both the Facility
and Severity of the message. This number is an eight-bit number. The first 3 least significant bits
represent the Severity of the message (with 3 bits you can represent 8 different Severities) and
the other 5 bits represent the Facility of the message. You can use the Facility and the Severity
values to apply certain filters on the events in the Syslog Daemon.

Note that the Syslog Daemon does not generate these Priority and Facility values; they are
generated by the applications on which the event occurs. The codes for Severity and Facility
follow. Please note that the codes written below are the recommended codes that applications
should generate in the specified situations. You cannot, however, be 100% sure that the code sent
by the application really is the correct one. For example, an application can generate a numerical
code for severity of 0 (Emergency) when it should have generated 4 (Warning) instead. The
Syslog Daemon cannot do anything about it; it will simply receive the message as it is.

a) Severity Codes: The Severity code is the severity of the message that has been generated.

Table 2. 4 The codes for Severity
b) Facility Codes
The facility is the application or operating system component that generates a log message.
Following are the codes for facility:

Table 2. 5 Facility Codes
Calculating Priority Value

The Priority value is calculated by first multiplying the Facility number by 8 and then adding the
numerical value of the Severity. For example, a kernel message (Facility=0) with a Severity of
Emergency (Severity=0) would have a Priority value of 0 (0 × 8 + 0). Also, a "local use 4" message
(Facility=20) with a Severity of Notice (Severity=5) would have a Priority value of 165 (20 × 8 + 5).
In the PRI part of a Syslog message, these values would be placed between the angle brackets as
<0> and <165> respectively.
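
As a further worked example (the facility and severity here are chosen arbitrarily for illustration),
a "local use 7" message (Facility=23) with a Severity of Error (Severity=3) would have a Priority
value of 23 × 8 + 3 = 187, carried in the PRI part as <187>.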

2. Header
The HEADER part contains the following things:
a) Timestamp: The timestamp is the date and time at which the message was generated. Be
warned that this timestamp is taken from the system time; if the system time is not correct, you
might get a packet with a totally incorrect timestamp.
b) Hostname or IP address of the device.

3. MSG
The MSG part will fill the remainder of the Syslog packet. This will usually contain some
additional information of the process that generated the message, and then the text of the
message. The MSG part has two fields:
a) TAG field
b) CONTENT field
The value in the TAG field will be the name of the program or process that generated the
message. The CONTENT contains the details of the message.

Some Important Points

 As mentioned above, since Syslog protocol is UDP based, it is unreliable. It does not guarantee
you the delivery of the messages. They may either be dropped through network congestion, or
they may be maliciously intercepted and discarded.
 As mentioned above, since each process, application and operating system was written somewhat
independently, there is little uniformity to the content of syslog messages. For this reason, no
assumption is made upon the formatting or contents of the messages. The protocol is simply
designed to transport these event messages.

 The receiver of a Syslog packet will not be able to ascertain that the message was indeed sent
from the reported sender.
 One possible problem associated with the above-mentioned point is one of authentication. A
misconfigured machine may send syslog messages to a Syslog Daemon representing itself as
another machine. The administrative staff may become confused because the status of the
supposed sender of the messages may not be accurately reflected in the received messages.
 Another problem associated with point 2 is that an attacker may start sending fake messages
indicating a problem on some machine. This may get the attention of the system administrators
who will spend their time investigating the alleged problem. During this time, the attacker may
be able to compromise a different machine, or a different process on the same machine.
 The Syslog protocol does not ensure ordered delivery of packets.
 An attacker may record a set of messages that indicate normal activity of a machine. Later, that
attacker may remove that machine from the network and replay the syslog messages to the
Daemon.

2.7. Miscellaneous
Intrusion Notification

In some situations, you might want to configure the switch to send a notification to an SNMP
NMS when MAC addresses are learned by the system or deleted from the CAM table. An
example of where this might be used could be a switch that is deployed in a particularly
restrictive security zone in the network, like an R&D lab or a DMZ, and where you want to
determine if there is anomalous MAC address learning behavior in that part of the network. The
following command enables this feature:

Catalyst1(config)#mac-address-table notification

Only dynamic and secure MAC addresses generate a MAC address notification. Traps are not
sent for self, multicast, or other static addresses.

Switch Security Best Practices


Cisco makes the following recommendations for switch security best practices:

 Secure management: Think security for switch management. Use SSH, a dedicated management
VLAN, OOB, and so on as much as possible.
 Native VLAN: Always use a dedicated VLAN ID for trunk ports and avoid using VLAN 1 at all.
 User ports: Non-trunking. (Cisco VoIP phones being the exception. See Chapter 9, ―Introduction
to Endpoint, SAN, and Voice Security.‖)
 Port security: Use for access ports whenever possible (see the sketch after this list).
 SNMP: Limit to the management VLAN if possible and treat community strings like superuser
passwords. (See Chapter 4, ―Implementing Secure Management and Hardening the Router,‖ for
more information.)
 STP attacks: Use BPDU guard and root guard.
 CDP: Only use if necessary. CDP should be left on for switch ports connected to VoIP phones.
An attacker can learn much from CDP advertisements.
 Unused ports: Disable and put them in an unused VLAN for extra security.
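
As an illustration of the port security recommendation above, the following is a minimal, hedged
sketch for a single access port (the interface name, the maximum of two MAC addresses, and the
restrict violation mode are assumptions for the example):

Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport port-security
Switch(config-if)# switchport port-security maximum 2
Switch(config-if)# switchport port-security violation restrict
Switch(config-if)# switchport port-security mac-address sticky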

UNIT-III:
3. ROUTERS
3.1. Router Basic Configuration
A Router is a computer, just like any other computer including a PC. Routers have many of the
same hardware and software components that are found in other computers including:

 CPU
 RAM
 ROM
 Operating System

Figure 3. 1 1841 Integrated Services Router


The router is the basic backbone of the Internet. The main function of a router is to connect two or
more networks and forward packets from one network to another. A router connects multiple
networks, which means that it has multiple interfaces that each belong to a different IP
network. When a router receives an IP packet on one interface, it determines which interface to
use to forward the packet toward its destination. The interface that the router uses to forward the
packet may be on the network of the final destination of the packet (the network with the destination
IP address of this packet), or it may be on a network connected to another router that is used to
reach the destination network. A router uses IP to forward packets from the source network to
the destination network; the packets must include an identifier for both the source and
destination networks. A router uses the IP address of the destination network to deliver a packet
to the correct network. When the packet arrives at a router connected to the destination network,
the router uses the IP address to locate the specific computer on the network.

Figure 3. 2 Router connects two network



Routing and Routing Protocols

The primary responsibility of a router is to direct packets destined for local and remote networks
by:

 Determining the best path to send packets


 Forwarding packets toward their destination

The router uses its routing table to determine the best path to forward the packet. When the
router receives a packet, it examines its destination IP address and searches for the best match
with a network address in the router's routing table. The routing table also includes the interface
to be used to forward the packet. Once a match is found, the router encapsulates the IP packet
into the data link frame of the outgoing interface and forwards it toward its destination.

3.2. Static Routing


Static routes are configured manually; network administrators must add and delete static routes
to reflect any network topology changes. In a large network, the manual maintenance of routing
tables could require a lot of administrative time. On small networks with few possible changes,
static routes require very little maintenance. Static routing is not as scalable as dynamic routing
because of the extra administrative requirements. Even in large networks, static routes that are
intended to accomplish a specific purpose are often configured in conjunction with a dynamic
routing protocol.

When to use static Routing

A network consists of only a few routers. Using a dynamic routing protocol in such a case does
not present any substantial benefit. On the contrary, dynamic routing may add more
administrative overhead.

A network is connected to the Internet only through a single ISP. There is no need to use a
dynamic routing protocol across this link because the ISP represents the only exit point to the
Internet.

A large network is configured in a hub-and-spoke topology. A hub-and-spoke topology consists


of a central location (the hub) and multiple branch locations (spokes), with each spoke having
only one connection to the hub. Using dynamic routing would be unnecessary because each
branch has only one path to a given destination through the central location.
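
As a hedged illustration of the single-ISP case above, a single default static route pointing at the
ISP is often all that is required, and a specific remote network can be added the same way (the
next-hop addresses below are documentation examples, not real ISP addresses):

Router(config)# ip route 0.0.0.0 0.0.0.0 203.0.113.1
Router(config)# ip route 192.168.2.0 255.255.255.0 10.0.0.2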

Connected Routes

Networks that are directly connected to the router are called connected routes and do not need to
be configured on the router for routing; the router routes them automatically.

Dynamic Routes: A dynamic routing protocol uses routes that the protocol adjusts automatically
for topology or traffic changes.

Non-adaptive routing algorithm: When a router uses a non-adaptive routing algorithm, it consults
a static table in order to determine to which computer it should send a packet. This is in contrast
to an adaptive routing algorithm, which bases its decisions on data that reflects current traffic
conditions. (Also called a static route.)

Adaptive routing algorithm: When a router uses an adaptive routing algorithm to decide the next
computer to which to transfer a packet, it examines the traffic conditions in order to determine a
route that is as near optimal as possible. For example, it tries to pick a route that involves
communication lines with light traffic. This strategy is in contrast to a non-adaptive routing
algorithm. (Also called a dynamic route.)

Figure 3. 3 Imagine maintaining static routing configurations

3.3. Dynamic Routing


Dynamic routing is a technique in which a router learns routing information without an
administrator's help and adds the best route to its routing table. A router running a dynamic
routing protocol adds the best route to its routing table and can also determine another path if the
primary route goes down. Dynamic routing is also a networking technique that provides optimal
data routing: unlike static routing, dynamic routing enables routers to select paths according to
real-time logical network layout changes.

At the dynamic routing section, we will discuss the implementation of RIPv1, RIPv2, EIGRP,
and Single-Area OSPF.

3.4. Routing Protocols Matrix


Routing Protocol:

A routing protocol is the communication used between routers. A routing protocol allows routers
to share information about networks and their proximity to each other. Routers use this
information to build and maintain routing tables.

Autonomous System: An AS is a collection of networks under a common administration that
share a common routing strategy. To the outside world, an AS is viewed as a single entity. The
AS may be run by one or more operators while it presents a consistent view of routing to the
external world. The American Registry of Internet Numbers (ARIN), a service provider, or an
administrator assigns a 16-bit identification number to each AS.

Figure 3. 4 IGP vs Routing Protocols
Dynamic Routing Protocol:

1. Interior Gateway protocol (IGP)


I. Distance Vector Protocol
II. Link State Protocol
2. Exterior Gateway Protocol (EGP)

Interior gateway protocol (IGP): Within one Autonomous System.
Exterior gateway protocol (EGP): Between Autonomous Systems. Example: BGP (Border
Gateway Protocol).

Metric:

There are cases when a routing protocol learns of more than one route to the same destination.
To select the best path, the routing protocol must be able to evaluate and differentiate between
the available paths. For this purpose a metric is used. A metric is a value used by routing
protocols to assign costs to reach remote networks. The metric is used to determine which path is
most preferable when there are multiple paths to the same remote network.

Each routing protocol uses its own metric. For example, RIP uses hop count, EIGRP uses a
combination of bandwidth and delay, and Cisco‘s implementation of OSPF uses bandwidth.

Basic Router Configuration

Interface Port Labels

Table 3.1 lists the interfaces supported for each router and their associated port labels on the
equipment.

Table 3. 1 Supported Interfaces and Associated Port Labels by Cisco Router


Viewing the Default Configuration

When you first boot up your Cisco router, some basic configuration has already been performed.
All of the LAN and WAN interfaces have been created, console and VTY ports are configured,
and the inside interface for Network Address Translation has been assigned. Use the show
running-config command to view the initial configuration, as shown in Example 1-1.

Router# show running-config


Building configuration…
Current configuration : 1090 bytes

!
version 12.3
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname Router
!
boot-start-marker
boot-end-marker
!
no aaa new-model
ip subnet-zero
!
ip cef
ip ips po max-events 100
no ftp-server write-enable
!
interface FastEthernet 0
no ip address
shutdown
!
interface FastEthernet 1
no ip address
shutdown
!
interface FastEthernet 2
no ip address
shutdown
!

interface FastEthernet 3
no ip address
shutdown
!
interface FastEthernet 4
no ip address
duplex auto
speed auto
!
interface Dot11Radio0
no ip address
shutdown
speed basic-1.0 basic-2.0 basic-5.5 6.0 9.0 basic-11.0 12.0 18.0 24.0 36.0 48.0 54.0
rts threshold 2312
station-role root
!
interface Vlan1
no ip address
!
ip classless
!
no ip http server
no ip http secure-server
!
control-plane
!
line con 0
no modem enable
transport preferred all
transport output all
line aux 0

transport preferred all
transport output all
line vty 0 4
login
transport preferred all
transport input all
transport output all
!
End

Information Needed for Configuration

You need to gather some or all of the following information, depending on your planned network
scenario, prior to configuring your network

 If you are setting up an Internet connection, gather the following information:


o Point-to-Point Protocol (PPP) client name that is assigned as your login name
o PPP authentication type: Challenge Handshake Authentication Protocol (CHAP) or Password
Authentication Protocol (PAP)
o PPP password to access your Internet service provider (ISP) account
o DNS server IP address and default gateways
 If you are setting up a connection to a corporate network, you and the network administrator
must generate and share the following information for the WAN interfaces of the routers:
o PPP authentication type: CHAP or PAP
o PPP client name to access the router
o PPP password to access the router
 If you are setting up IP routing:
o Generate the addressing scheme for your IP network.
o Determine the IP routing parameter information, including IP address, and ATM permanent
virtual circuits (PVCs). These PVC parameters are typically virtual path identifier (VPI), virtual
circuit identifier (VCI), and traffic shaping parameters.

o Determine the number of PVCs that your service provider has given you, along with their VPIs
and VCIs.
o For each PVC, determine the type of AAL5 encapsulation supported. It can be one of the
following:
o AAL5SNAP: This can be either routed RFC 1483 or bridged RFC 1483. For routed RFC 1483, the
service provider must provide you with a static IP address. For bridged RFC 1483, you may use
DHCP to obtain your IP address, or you may obtain a static IP address from your service provider.
o AAL5MUX PPP: With this type of encapsulation, you need to determine the PPP-related
configuration items.
 If you plan to connect over an ADSL or G.SHDSL line, order the appropriate line from your
public telephone service provider. For ADSL lines, ensure that the ADSL signaling type is
DMT (also called ANSI T1.413) or DMT Issue 2. For G.SHDSL lines, verify that the line
conforms to the ITU G.991.2 standard and supports Annex A (North America) or Annex B (Europe).

Once you have collected the appropriate information, you can perform a full configuration on
your router, beginning with the tasks in the "Configuring Basic Parameters" section.

Configure Global Parameters

Perform these steps to configure selected global parameters for your router:

Table 3. 2 For complete information on the global parameter commands
Configure Fast Ethernet LAN Interfaces

The Fast Ethernet LAN interfaces on your router are automatically configured as part of the
default VLAN and as such, they are not configured with individual addresses. Access is afforded
through the VLAN. You may assign the interfaces to other VLANs if desired.

Configure WAN Interfaces

The Cisco 851 and Cisco 871 routers each have one Fast Ethernet interface for WAN connection.
The Cisco 857, Cisco 877, and Cisco 878 routers each have one ATM interface for WAN
connection.
Based on the router model you have, configure the WAN interface(s) using one of the following
procedures:

 Configure the Fast Ethernet WAN Interface
 Configure the ATM WAN Interface

Configure the Fast Ethernet WAN Interface

This procedure applies only to the Cisco 851 and Cisco 871 router models. Perform these steps
to configure the Fast Ethernet interface, beginning in global configuration mode:

Table 3. 3 Configure the Fast Ethernet WAN Interface


Configure the ATM WAN Interface

This procedure applies only to the Cisco 857, Cisco 876, Cisco 877 and Cisco 878 models.
Perform these steps to configure the ATM interface, beginning in global configuration mode:

Table 3. 4 Configure the ATM WAN Interface
Configuring a Loopback Interface

The loopback interface acts as a placeholder for the static IP address and provides default routing
information

Table 3. 5 Configuring a Loopback Interface
Configuration Example

The loopback interface in this sample configuration is used to support Network Address
Translation (NAT) on the virtual-template interface. This configuration example shows the
loopback interface configured on the Fast Ethernet interface with an IP address of
10.10.10.100/24, which acts as a static IP address. The loopback interface points back to virtual-
template1, which has a negotiated IP address.
!
interface loopback 0
ip address 10.10.10.100 255.255.255.0 (static IP address)
ip nat outside
!
interface Virtual-Template1
ip unnumbered loopback0
no ip directed-broadcast
ip nat outside
!

Verifying Your Configuration
To verify that you have properly configured the loopback interface, enter the show interface
loopback command. You should see verification output similar to the following example.
Router# show interface loopback 0
Loopback0 is up, line protocol is up
Hardware is Loopback
Internet address is 10.10.10.100/24
MTU 1514 bytes, BW 8000000 Kbit, DLY 5000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation LOOPBACK, loopback not set
Last input never, output never, output hang never
Last clearing of ―show interface‖ counters never
Queueing strategy: fifo
Output queue 0/0, 0 drops; input queue 0/75, 0 drops
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec,
0 packets/sec 0 packets input, 0 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
0 packets output, 0 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out

Another way to verify the loopback interface is to ping it:


Router# ping 10.10.10.100
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.10.10.100, timeout is 2 seconds:
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms

Configuring Command-Line Access to the Router

Perform these steps to configure parameters to control access to the router, beginning in global
configuration mode.

Table 3. 6 Configuring Command-Line Access to the Router

Configuration Example

The following configuration shows the command-line access commands. You do not need to
input the commands marked ―default.‖ These commands appear automatically in the
configuration file generated when you use the show running-config command.
!
line con 0
exec-timeout 10 0
password 4youreyesonly
login
transport input none (default)
stopbits 1 (default)
line vty 0 4
password secret
login
!

Quizzes

Activity 3.2.

Activity 3.3.

3.5. RIP
How to Configure RIPv1 and RIPv2 in Cisco Routers

When would you need this: When you need to implement a routing protocol for a small network
and you need the configuration to be simple. Routing Information Protocol is the simplest that it
can get.
Special Requirements: None.

1. The first thing to do is to enable the RIP protocol on the router: Router(config)#router rip

2. Identify the networks to be advertised using the 'network' command. Using this command, you
need to identify only the networks that are directly connected to the router: Router(config-
router)#network network-id. If the network is subnetted, you only need to write the main (classful)
network address, without writing the individual subnets. For example, if you have the following
subnets connected to the router (172.16.0.0/24, 172.16.1.0/24, and 172.16.2.0/24), you can put them
all in a single 'network' command like this: Router(config-router)#network 172.16.0.0. The router is
intelligent enough to figure out which subnets are connected to it.
3. If you need to adjust the timers (update, invalid, hold down, and flush timers), use the 'timers
basic' command. All four parameters of this command, update, invalid, hold down, and flush
timer consecutively, are in seconds: Router(config-router)#timers basic 30 180 180 240. The
example above uses the default values of the RIP timers. Remember to keep the relative
proportions of the timer values; always keep them as (n, 6n, 6n, 8n). If, for example, you set the
update timer to 40, you need to make the other timers 240, 240, and 320 consecutively. It is highly
recommended that you keep the timers at their default values.
4. You will need to stop the updates from being broadcasted to the Internet, if one of the router
interfaces is connected to the Internet. For this purpose, use the ‗passive interface‘ command.
This command prevents the interface from forwarding any RIP broadcasts, but keeps the
interface listening to what others are saying in RIP. Router (config router)#passive-interface
interface-type interface-number where interface-type is the type of the interface, such as Serial,
Fast Ethernet, or Ethernet. Interface-number is the number of the interface such as 0/0 or 0/1/0
5. RIP, by nature, sends updates as broadcast. If the router is connected through non-broadcast
networks (like Frame Relay), you will need to tell RIP to send the updates on this network as
unicast. This is achieved by the ‗neighbor‘ command: Router (config-router)#neighbor neighbor-
address where neighbor-address is the IP address of the neighbor.
6. Cisco‘s implementation of RIP Version 2 supports authentication, key management, route
summarization, classless inter-domain routing (CIDR), and variable-length subnet masks
(VLSMs). By default, the router receives RIP Version 1 and Version 2 packets, but sends only
Version 1 packets. You can configure the router to receive and send only Version 2 packets. To
do so, use the ‗version‘ command: Router (config-router)#version 2 If you like to stick to version
one, just replace the 2 in the command above with 1. Furthermore, you can control the versions
of the updates sent and received on each interface to have more flexibility in support of both

versions. This is achieved by the ‗ip rip send version‘ and ‗ip rip receive version‘ commands:
Router (config-if)#ip rip send version 2
Router (config-if)#ip rip receive version 1
7. Check the RIP configuration using these commands:
Router#show ip route
Router#show ip protocols
Router#debug ip rip
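
Putting steps 1 to 7 together, a minimal, hedged RIPv2 sketch might look like the following (the
172.16.0.0 network and the Internet-facing Serial0/0/0 interface are assumptions for illustration):

Router(config)# router rip
Router(config-router)# version 2
Router(config-router)# network 172.16.0.0
Router(config-router)# passive-interface Serial0/0/0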

How to Configure RIPng for IPv6

When would you need this: When you want to implement a simple routing protocol for a small-
to-medium sized IPv6 network.
Special Requirements: None.

1. Enable IPv6 routing:


Router (config)#ipv6 unicast-routing
2. Enable RIPng process: Router (config) #ipv6 router rip process-name where the process-name
can be any unique process name you select. The process name is of local significance, i.e., you
do not need to use the same process name on all the routers participating in the RIPng process.
3. On the interface you want to participate in the RIPng process, assign an IP address: Router
(config-if) #ipv6 address ipv6-address/prefix length where ipv6-address is the IPv6 address you
want to assign to this interface. Prefix-length is the IPv6 prefix length of the network this
interface is connected to. If you do not wish to assign an IPv6 address to the interface, you can
enable the IPv6 operation on the interface and let it create its own link-local address using the
following command:
Router (config-if)#ipv6 enable
4. Enable the RIPng process on the interface: Router (config-if)#ipv6 rip process-name enable
where the process-name should be the process name that you selected in step 2. Repeat this step
on every interface you want to take part in the RIPng routing process.
5. For troubleshooting, use the following commands:
Router#show ipv6 route
Router#show ipv6 route rip

Router#show ipv6 protocols
Router#show ipv6 rip
Router#show ipv6 rip next-hops
Router#debug ipv6 rip
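
Putting these steps together, a minimal, hedged sketch (the process name RIPNG-1, the interface,
and the 2001:db8:1::/64 documentation prefix are assumptions for illustration):

Router(config)# ipv6 unicast-routing
Router(config)# ipv6 router rip RIPNG-1
Router(config-rtr)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ipv6 address 2001:db8:1::1/64
Router(config-if)# ipv6 rip RIPNG-1 enable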

3.6. IGRP
How to Configure IGRP (Interior Gateway Routing Protocol)

Interior Gateway Routing Protocol (IGRP) is a distance vector interior routing protocol (IGP)
invented by Cisco. It is used by routers to exchange routing data within an autonomous system.
IGRP is a proprietary protocol. IGRP was created in part to overcome the limitations of RIP
(maximum hop count of only 15, and a single routing metric) when used within large networks.

IGRP supports multiple metrics for each route, including bandwidth, delay, load, MTU, and
reliability; to compare two routes these metrics are combined together into a single metric, using
a formula which can be adjusted through the use of pre-set constants. The maximum hop count
of IGRP-routed packets is 255 (default 100), and routing updates are broadcast every 90 seconds
(by default).

IGRP is considered a classful routing protocol. Because the protocol has no field for a subnet
mask, the router assumes that all sub network addresses within the same Class A, Class B, or
Class C network have the same subnet mask as the subnet mask configured for the interfaces in
question. This contrasts with classless routing protocols that can use variable length subnet
masks. Classful protocols have become less popular as they are wasteful of IP address space.

In order to address the issues of address space and other factors, Cisco created EIGRP (Enhanced
Interior Gateway Routing Protocol). EIGRP adds support for VLSM (variable length subnet
mask) and adds the Diffusing Update Algorithm (DUAL) in order to improve routing and
provide a loopless environment. EIGRP has completely replaced IGRP, making IGRP an
obsolete routing protocol. In Cisco IOS versions 12.3 and greater, IGRP is completely
unsupported. In the new Cisco CCNA curriculum (version 4), IGRP is mentioned only briefly, as
an ―obsolete protocol‖.

The IGRP protocol allows a number of gateways to coordinate their routing. Its goals are the
following:

 Stable routing even in very large or complex networks. No routing loops should occur, even as
transients.
 Fast response to changes in network topology.
 Low overhead. That is, IGRP itself should not use more bandwidth than what is actually needed
for its task.
 Splitting traffic among several parallel routes when they are of roughly equal desirability
 Taking into account error rates and level of traffic on different paths.

A very simple configuration of IGRP can be:


Router A
RouterA# conf t
RouterA(config)# interface eth0
RouterA(config-if)# ip address 70.0.0.1 255.0.0.0
RouterA(config-if)# exit
RouterA(config)# interface serial0
RouterA(config-if)# ip address 20.30.40.2 255.255.255.252
RouterA(config-if)# exit
RouterA(config)# router igrp 1
RouterA(config-router)# redistribute connected
RouterA(config-router)# network 20.0.0.0
RouterA(config-router)# network 70.0.0.0
RouterA(config-router)# network 71.0.0.0

Router B

RouterB# conf t
RouterB(config)# interface eth0
RouterB(config-if)# ip address 71.0.0.1 255.0.0.0
RouterB(config-if)# exit

RouterB(config)# interface serial0
RouterB(config-if)# ip address 20.30.40.1 255.255.255.252
RouterB(config-if)# exit
RouterB(config)# router igrp 1
RouterB(config-router)# redistribute connected
RouterB(config-router)# network 20.0.0.0
RouterB(config-router)# network 70.0.0.0
RouterB(config-router)# network 71.0.0.0

A few other commands might come in useful. The variance 2 command can be used to configure
IGRP to load balance across paths whose metrics are within twice the best metric (unequal-cost
load balancing). The command passive-interface eth0 disables IGRP from sending updates out of
eth0.

Testing

router# debug ip igrp events

Only shows the sending or receiving of IGRP packets and the number of routes in each update. It
does not show the routes that are advertised.

router# debug ip igrp transactions

Same as debug ip igrp events, but it also shows the routes that are advertised.

router# show ip route

As with debugging any routing problem, look at the routing table. Is there a static route that takes
precedence?

router# show ip interface brief

This command is always useful to quickly verify which links are up and which are not.

3.7. EIGRP
How to Configure EIGRP on a Cisco Router

When would you need this: When you are implementing a routing protocol on a large
Internetwork and all the networking devices involved are Cisco devices or devices supporting
EIGRP.

Special Requirements: EIGRP is a Cisco proprietary protocol. So, either all the routers in the
Internetwork must be Cisco routers, or the routers should be EIGRP capable.

Before we start, if you have not set the bandwidth of the interfaces, set them now. For correct
routing decisions, you need to set the bandwidth for the serial interfaces depending on the WAN
technologies that you are using. This is done using the following command on each serial
interface:

Router (config-if) #bandwidth bandwidth

Where bandwidth is the bandwidth of the WAN connection in kilobits per second.
Next, you can start configuring EIGRP as in the following steps:

1. Enable EIGRP on the router with the command, Router (config)#router eigrp autonomous-
system where autonomous-system is the autonomous system number. The same autonomous-
system number must be used for all the routers that you want to exchange routing information.
2. Instruct the router to advertise the networks that are directly connected to it. Router (config-
router) #network network-address where network-address is the network address of a network
that is directly connected to the router. Repeat this step for each network that is directly
connected to the specific router that you are configuring. For sub netted networks, remember that
you need only to write the original network address of a group of subnets and the router will
automatically identify the subnets. For example, if the router is connected to the networks,
172.16.1.0/24, 172.16.2.0/24, and 172.16.3.0/24, you will need to do one ‗network‘ command
with the address 172.16.0.0.
3. By default, EIGRP packets consume a maximum of 50% of the link bandwidth, as configured
with the ‗bandwidth‘ interface configuration command. You might want to change that value if a
different level of link utilization is required or if the configured bandwidth does not match the
actual link bandwidth (it may have been configured to influence route metric calculations). Use
the following command to set the percentage of bandwidth to be used on each interface separately:
Router (config-if)#ip bandwidth-percent eigrp autonomous-system bandwidth-percentage
Where autonomous-system is the autonomous system number and bandwidth-percentage is the percentage of bandwidth to be used.
4. You can change the intervals of the hello packets and the hold-down timer on each interface using the commands:
Router (config-if)#ip hello-interval eigrp autonomous-system time
where autonomous-system is the autonomous system number and time is the new hello packet interval in seconds.
Router (config-if)#ip hold-time eigrp autonomous-system time
where autonomous-system is the autonomous system number and time is the new hold-down time in seconds.
5. Check your configuration after configuring all the routers in the internetwork using the following commands:
To display information about the interfaces configured for EIGRP:
Router#show ip eigrp interfaces interface-type autonomous-system
To display the EIGRP discovered neighbors:
Router#show ip eigrp neighbors
To display the EIGRP topology table for a given process:
Router#show ip eigrp topology autonomous-system
Or
Router#show ip eigrp topology network-address subnet-mask
To display the number of packets sent and received for all or a specified EIGRP process:
Router#show ip eigrp traffic autonomous-system
where interface-type is the interface type, autonomous-system is the autonomous system number, and network-address and subnet-mask are the network address and subnet mask.
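As a minimal end-to-end sketch (not taken from any real network; the interface numbers, addresses, and autonomous system number 100 are illustrative), assume R1 and R2 are joined by a serial link in 10.0.0.0/30 and each has a LAN, 192.168.1.0/24 and 192.168.2.0/24 respectively:

R1(config)#interface serial0/0
R1(config-if)#ip address 10.0.0.1 255.255.255.252
R1(config-if)#bandwidth 1544
R1(config-if)#exit
R1(config)#router eigrp 100
R1(config-router)#network 10.0.0.0
R1(config-router)#network 192.168.1.0

R2(config)#interface serial0/0
R2(config-if)#ip address 10.0.0.2 255.255.255.252
R2(config-if)#bandwidth 1544
R2(config-if)#exit
R2(config)#router eigrp 100
R2(config-router)#network 10.0.0.0
R2(config-router)#network 192.168.2.0

Once both sides are up, R1#show ip eigrp neighbors should list R2, and R1#show ip route should show 192.168.2.0/24 learned through EIGRP (marked with a D).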

How to Configure EIGRP Metrics on a Cisco Router

Although it is not recommended, if you need to change the way the metrics of the routes are
calculated, you can set them using the command: Router (config-router) #metric weights type-of-
service K1 K2 K3 K4 K5
Where type-of-service is the type of service index and the values of K1-K5 are weighting constants used to calculate the composite metric:

metric = [K1 * bandwidth + (K2 * bandwidth) / (256 - load) + K3 * delay] * [K5 / (reliability + K4)]

The default values are K1 = K3 = 1 and K2 = K4 = K5 = 0, and when K5 = 0 the reliability term is left out, so the formula reduces to

metric = K1 * bandwidth + (K2 * bandwidth) / (256 - load) + K3 * delay

which, with the default values, is simply bandwidth + delay.

It is highly recommended that you leave the metric in the default values unless you are a highly
experienced network designer.
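As a worked example under the default K values (the link figures are assumptions chosen for illustration): the bandwidth term is 10,000,000 divided by the lowest bandwidth along the path in kbps, the delay term is the total delay along the path expressed in tens of microseconds, and EIGRP scales the sum by 256. For a path whose slowest link is a 1544 kbps T1 with a total delay of 20,000 microseconds, bandwidth = 10,000,000 / 1544 = 6476 and delay = 20,000 / 10 = 2000, so the metric is 256 * (6476 + 2000) = 2,169,856.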

How to Configure EIGRP for IPv6 on a Cisco Router

When would you need this: When you are implementing a routing protocol on a large IPv6
Internetwork and all the networking devices involved are Cisco devices or devices supporting
EIGRP.

Special Requirements: EIGRP is a Cisco proprietary protocol. So, either all the routers in the
Internetwork must be Cisco routers, or the routers should be EIGRP capable.

1. Enable IPv6 routing on the router:


Router (config)#ipv6 unicast-routing
2. Enable EIGRP on the router:
Router (config) #ipv6 router eigrp autonomous-system number
Where autonomous-system-number is the number of the autonomous system in which this
EIGRP process will run. Remember to use the same autonomous system number in all the
routers that you want to exchange routing information.
3. Enable IPv6 on the interface you want to participate in the EIGRP process:
Router (config-if)#ipv6 enable
Using the 'ipv6 enable' command will inform the router to create a link-local IPv6 address for this interface. If you want to use a different IPv6 address, you can use the following command instead of 'ipv6 enable':
Router (config-if)#ipv6 address ipv6-address/prefix-length

Where ipv6-address is the IPv6 address you want to assign to this interface. Prefix-length is the
prefix length for the IPv6 address.
4. Enable EIGRP on the interface connected to other EIGRP-enabled routers by identifying the
autonomous system number this interface will be part of:
Router (config-if) #ipv6 eigrp autonomous-system-number where autonomous-system-number is
the number of the autonomous system in which this EIGRP process will run. Remember to use
the same autonomous system number used in steps 2 and 4.
5. If you want to manually set up the RouterID to control the internal process of EIGRP, you can
use the following optional steps:
Router (config)#ipv6 router eigrp autonomous-system-number
Remember to use the same autonomous system number used in steps 2 and 4.
Router (config-router)#eigrp router-id router-id
where router-id is the RouterID used in the EIGRP process. The RouterID is formatted as an IPv4 address even if you are using EIGRP for IPv6 networks.
Router (config-router)#exit
6. By default, EIGRP packets consume a maximum of 50% of the link bandwidth, as configured
with the ‗bandwidth‘ interface configuration command. You might want to change that value if a
different level of link utilization is required or if the configured bandwidth does not match the
actual link bandwidth (it may have been configured to
influence route metric calculations). Use the following command to set the percentage of
bandwidth to be used on each interface separately:
Router (config-if) #ipv6 bandwidth-percent eigrp autonomous-system-number bandwidth
percentage
Where autonomous-system-number is the number of the autonomous system in which this
EIGRP process will run. Bandwidth-percentage is the percentage of bandwidth to be used.
7. To troubleshoot, use the following commands:
Router#show ipv6 eigrp autonomous-system-number
Router#show ipv6 eigrp interface interface-type interface-number
Router#show ipv6 eigrp interface interface-type interface-number autonomous-system-number
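A minimal sketch of one side of an EIGRP for IPv6 link (the interface name, addresses, RouterID, and autonomous system number 100 are illustrative):

R1(config)#ipv6 unicast-routing
R1(config)#ipv6 router eigrp 100
R1(config-router)#eigrp router-id 1.1.1.1
R1(config-router)#exit
R1(config)#interface gigabitethernet0/0
R1(config-if)#ipv6 address 2001:DB8:1::1/64
R1(config-if)#ipv6 eigrp 100

The neighbouring router is configured the same way with its own addresses and the same autonomous system number. On some IOS versions the IPv6 EIGRP process starts in a shutdown state, in which case no shutdown must also be entered under the routing process. R1#show ipv6 eigrp neighbors then verifies the adjacency.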

EIGRP Implementation Notes

1. If you are using discontiguous networks, which is mostly the case, you should turn off auto-summarization under the EIGRP routing process using the following command: Router (config-router)#no auto-summary.
2. You can set manual summary addresses on an interface using the following command: Router (config-if)#ip summary-address eigrp autonomous-system summarized-network summary-subnet-mask
where autonomous-system is the autonomous system number, summarized-network is the network address expressing the summary of multiple networks, and summary-subnet-mask is the subnet mask for the summarized address.
3. When you are using non-broadcast networking technologies such as Frame Relay and SMDS,
you will need to turn off split-horizon to let EIGRP perform efficiently and effectively.
Router (config-if)#no ip split-horizon eigrp autonomous-system
where autonomous-system is the autonomous system number.
4. To clear the neighbour table, use the command:
Router#clear ip eigrp neighbors

Quizzes

Activity 3.4.

3.8. OSPF
When would you need this: When you need to set up dynamic routing with Cisco and non-Cisco
routers.

Special Requirements: None.

OSPF is one of the most widely used dynamic routing protocols. Cisco‘s version of OSPF is
compatible with non-Cisco routers. Single-area OSPF is suitable for small-to-medium
internetworks. An area is a logical grouping of routers running OSPF. All routers in the same
area share the same topology database. Multiple-Area OSPF is used for large networks to
prevent their topology databases from becoming out of the capability of the router. Single-area
OSPF configuration is as follows:

1. Since OSPF best route calculations rely solely on bandwidth, you need to set up the bandwidth
of the serial interface involved in the routing process using the following command on the interface:
Router(config-if)#bandwidth bandwidth
Where: bandwidth is the bandwidth of the connection in kilobits per second. Remember that this
command does not change the actual bandwidth. It only changes the bandwidth value being used
by the routing protocol for the purpose of best path calculation.
2. Instruct the router to activate the OSPF routing process:
Router (config)#router ospf process-number Where: process-number is the process number of
OSPF. This process number is of local significance. It does not have to be the same on all
routers.
3. Instruct the router to advertise the directly connected networks:
Router(config-router)#network network-address wildcard-mask area 0
Where: network-address is the network address of a directly connected network and wildcard-mask is the wildcard mask of the network address. Since we are setting up single-area OSPF, we will always use 'area 0'.
4. Repeat step 3 for every network that is directly connected to the router. If you finished the first
four steps on all the routers involved in the process, everything should work just fine.
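A minimal sketch for two routers sharing a /30 serial link, each with its own LAN (the addresses, interface numbers, and process numbers are illustrative):

R1(config)#interface serial0/0
R1(config-if)#ip address 10.0.0.1 255.255.255.252
R1(config-if)#bandwidth 1544
R1(config-if)#exit
R1(config)#router ospf 1
R1(config-router)#network 10.0.0.0 0.0.0.3 area 0
R1(config-router)#network 192.168.1.0 0.0.0.255 area 0

R2(config)#interface serial0/0
R2(config-if)#ip address 10.0.0.2 255.255.255.252
R2(config-if)#bandwidth 1544
R2(config-if)#exit
R2(config)#router ospf 2
R2(config-router)#network 10.0.0.0 0.0.0.3 area 0
R2(config-router)#network 192.168.2.0 0.0.0.255 area 0

The process numbers (1 and 2) are deliberately different to show that they are only locally significant, while the area number must match on both ends. R1#show ip ospf neighbor should show R2 in the FULL state once the adjacency forms.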

If you want to do more configurations, there are a few optional advanced steps to go through:

1. To change the selection process of the DR (Designated Router) and BDR (Backup Designated
Router), use the following command to change the router‘s OSPF priority on a certain interface:
Router(config-if)#ip ospf priority priority
Where: priority is the priority (0–255). The router with the highest priority becomes the DR. A
priority of 0 means that this router will never be elected as DR.
2. To restart the whole process of DR and BDR elections, use the command: Router#clear ip ospf
process *
3. To change the cost of a certain link in the OSPF process, use the following command:
Router(config-if)#ip ospf cost suggested-cost
Where: suggested-cost is the suggested cost (0–65,535).

For troubleshooting, you can use the following commands:

1. To show the OSPF processes information:
Router#show ip ospf
2. To show the OSPF database of the topology:
Router#show ip ospf database
3. To show the OSPF operation on the interfaces:
Router#show ip ospf interface
4. To show the OSPF neighbors table
Router#show ip ospf neighbor
5. To debug all the OSPF process events:
Router#debug ip ospf events

How to Configure Single-Area OSPFv3 for IPv6 on a Cisco Router


When would you need this: When you need to set up dynamic routing with Cisco and non-Cisco
routers Special Requirements: None.

1. Enable IPv6 routing on the router: Router (config)#ipv6 unicast-routing


2. Since OSPF best route calculations rely solely on bandwidth, you need to set up the bandwidth
of the serial interface involved in the routing process using the following command on the
interface: Router(config-if)#bandwidth bandwidth
Where: bandwidth is the bandwidth of the connection in kilobits per second. Remember that this
command does not change the actual bandwidth. It only changes the bandwidth value being used
by the routing protocol for the purpose of best path calculation.
3. Instruct the router to activate the OSPF routing process: Router(config)#ipv6 router ospf process-
number
Where process-number is the process number of OSPF. This process number is of local
significance. It does not have to be the same on all routers.
4. Enable the OSPF process on each interface you want to participate in OSPF:
Router(config)#interface interface-type interface-number
Router(config-if)#ipv6 enable
Router(config-if)#ipv6 ospf process-number area 0
Where interface-type and interface-number are the type and number of the interface. Process-
number is the process number of OSPF identified in step 3. Since we are setting up single-area OSPF, we will always use 'area 0'. Using the 'ipv6 enable' command will inform the router to
create a link-local IPv6 address for this interface.
If you want to use a different IPv6 address, you can use the following command instead of ‗ipv6
enable‘:
Router(config-if)#ipv6 address ipv6-address/prefix length
Where ipv6-address is the IPv6 address you want to assign to this interface. Prefix-length is the
prefix length for the IPv6 address.
5. Repeat step 4 for every interface that is directly connected to the router. If you have finished these steps on all the routers involved in the process, everything should work just fine.
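A minimal sketch of one router in a single-area OSPFv3 setup (the interface name, addresses, RouterID, and process number are illustrative):

R1(config)#ipv6 unicast-routing
R1(config)#ipv6 router ospf 1
R1(config-rtr)#router-id 1.1.1.1
R1(config-rtr)#exit
R1(config)#interface gigabitethernet0/0
R1(config-if)#ipv6 address 2001:DB8:1::1/64
R1(config-if)#ipv6 ospf 1 area 0

OSPFv3 still uses a 32-bit RouterID written like an IPv4 address; if the router has no IPv4 address configured anywhere, the RouterID must be set explicitly (as above) before the process can form adjacencies. R1#show ipv6 ospf neighbor verifies them.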

If you want to do more configurations, there are a few optional advanced steps to go through:

1. To change the selection process of the DR (Designated Router) and BDR (Backup Designated
Router), use the following command to change the router‘s OSPF priority on a certain interface:
Router(config-if)#ipv6 ospf priority priority where priority is the priority (0–255). The router with
the highest priority becomes the DR. A priority of 0 means that this router will never be elected
as DR.
2. To restart the whole process of DR and BDR elections, use the command: Router#clear ipv6 ospf
process *
3. To change the cost of a certain link in the OSPF process, use the following command:
Router(config-if)#ipv6 ospf cost suggested-cost

Where suggested-cost is the suggested cost (0–65,535). For troubleshooting, you can use the following
commands:

1. To show the OSPF processes information: Router#show ipv6 ospf


2. To show the OSPF database of the topology: Router#show ipv6 ospf database
3. To show the OSPF operation on the interfaces: Router#show ipv6 ospf interface
4. To show the OSPF neighbors table: Router#show ipv6 ospf neighbor
5. To debug all the OSPF process events: Router#debug ipv6 ospf events

How to Configure HSRP on a Cisco Router

When would you need this: When your network design requires redundancy and high
availability.

Special Requirements: None.

To understand why and how HSRP protocol works, you need to look into an example. For each
local network, there is a default-gateway. This default-gateway is usually the LAN interface of
the router. If your network design requires high availability and redundancy, you can use HSRP
to set up a different interface in a different router to operate as a standby interface such that
whenever the main default-gateway fails, the standby becomes active and the network operation
is not interrupted.

HSRP operates by giving each participating router interface its own (main) IP address and sharing a virtual IP address among the routers taking part in the HSRP operation. Most of the time, this virtual IP is the address used as the default-gateway. Let us jump into the configuration:

1. On the first router, configure the main IP address:


Router1(config-if)#ip address ip-address subnetmask
2. On the first router, configure the virtual IP address:
Router1(config-if)#standby hsrp-group-number ip virtualip-address
3. On the first router, set up the priority of the virtual IP
Router1(config-if)#standby hsrp-group-number priority standby-priority
Where ip-address and subnetmask are the IP main address and subnet mask of the interface.
hsrp-group-number is the HSRP group number. You have to use the same number in all the
routers that you want to participate in the same HSRP process. Virtual-ip-address is the virtual IP
address to be used when the standby router becomes active. Standby-priority is the priority of the
standby IP address. The number can be between 0 and 255. The router with the highest priority
will be the active one.
4. On the second router, configure the main IP address (which is different from the one used in
Router1):
Router2(config-if)#ip address ip-address subnetmask

5. On the second router, configure the virtual IP address:
Router2(config-if)#standby hsrp-group-number ip virtualip-address
6. On the second router, set up the priority of the virtual IP
Router2(config-if)#standby hsrp-group-number priority Standby-priority
Where ip-address and subnetmask are the IP main address and subnetmask of the interface. hsrp-
group-number is the HSRP group number. You have to use the same number in all the routers
that you want to participate in the same HSRP process. virtual-ip-address is the virtual IP address
to be used when the standby router becomes active. Standby-priority is the priority of the standby
IP address. The number can be between 0 and 255. The router with the highest priority will be
the active one.
7. You can troubleshoot using the commands:
Router#show standby
Router#show standby brief
Router#show standby all
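A minimal sketch of two routers backing each other up as the gateway for 192.168.1.0/24 (the group number 1 and all addresses and interface numbers are illustrative):

R1(config)#interface fastethernet0/0
R1(config-if)#ip address 192.168.1.2 255.255.255.0
R1(config-if)#standby 1 ip 192.168.1.1
R1(config-if)#standby 1 priority 110
R1(config-if)#standby 1 preempt

R2(config)#interface fastethernet0/0
R2(config-if)#ip address 192.168.1.3 255.255.255.0
R2(config-if)#standby 1 ip 192.168.1.1
R2(config-if)#standby 1 priority 100

Hosts on the LAN use the virtual IP 192.168.1.1 as their default-gateway. R1, having the higher priority, becomes the active router; the optional standby 1 preempt command (not part of the steps above) lets it take the active role back after recovering from a failure.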

How to Configure VRRP on a Cisco Router

When would you need this: When your network design requires redundancy and high
availability.

Special Requirements: None.

VRRP operates by giving each participating router interface its own (main) IP address and sharing a virtual IP address among the routers taking part in the VRRP operation. Most of the time, this virtual IP is the address used as the default-gateway. Let us jump into the configuration:

1. On the first router, configure the main IP address:


Router1(config-if)#ip address ip-address subnetmask
2. On the first router, configure the virtual IP address:
Router1(config-if)#vrrp group-number ip virtualip-address
3. On the first router, set up the priority of the virtual IP
Router1(config-if)#vrrp group-number priority standbypriority
Where ip-address and subnetmask are the IP main address and subnetmask of the interface.
group-number is the VRRP group number. You have to use the same number in all the routers
that you want to participate in the same VRRP process. Virtual-ip-address is the virtual IP
address to be used when the standby router becomes active. Standby-priority is the priority of the
standby IP address. The number can be between 0 and 255. The router with the highest priority
will be the active one.
4. On the second router, configure the main IP address (which is different from the one used in
Router1 and from the virtual IP):
Router2(config-if)#ip address ip-address subnetmask
5. On the second router, configure the virtual IP address:
Router2(config-if)#vrrp group-number ip virtualip-address
6. On the second router, set up the priority of the virtual IP
Router2(config-if)#vrrp group-number priority standbypriority
Where ip-address and subnetmask are the IP main address and subnetmask of the interface.
group-number is the VRRP group number. You have to use the same number in all the routers that you want to participate in the same VRRP process. Virtual-ip-address is the virtual IP address
to be used when the standby router becomes active. standby-priority is the priority of the standby
IP address. The number can be between 0 and 255. The router with the highest priority will be
the active one.
7. You can troubleshoot using the commands:
Router#show vrrp
Router#show vrrp brief
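The VRRP equivalent of the HSRP sketch given earlier looks like this (the group number, addresses, and interface numbers are again illustrative):

R1(config)#interface fastethernet0/0
R1(config-if)#ip address 192.168.1.2 255.255.255.0
R1(config-if)#vrrp 1 ip 192.168.1.1
R1(config-if)#vrrp 1 priority 110

R2(config)#interface fastethernet0/0
R2(config-if)#ip address 192.168.1.3 255.255.255.0
R2(config-if)#vrrp 1 ip 192.168.1.1
R2(config-if)#vrrp 1 priority 100

Unlike HSRP, VRRP enables preemption by default, so R1 automatically reclaims the master role when it comes back up after a failure.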

3.9. DHCP
How to Configure a Cisco Router as a DHCP Client

When would you need this: When your ISP gives you a dynamic IP address upon each
connection or you need to configure the router to obtain its interface IP address automatically.
Special Requirements: None.

This is done using a single command:

Router(config-if)#ip address dhcp

Some service providers might ask you to use a client-id and/or a hostname of their own choice. This can be done by adding the following parameters to the command above:

Router(config-if)#ip address dhcp client-id interface-name hostname hostname

Where interface-name is the interface name that will be used for the client-id and hostname is the
hostname that will be used for the DHCP binding. This hostname can be different from the one
that was set for the router in the global configuration. You can use both of these parameters, one
of them, or none of them.

How to Configure a Cisco Router as a DHCP Server

When would you need this: When using the router as a DHCP server to provide IP addresses and
related information to DHCP clients.

Special Requirements: DHCP server software is supported for these series: 800, 1000, 1400,
1600, 1700 series (support for the Cisco 1700 series was added in Cisco IOS Release 12.0[2]T),
2500, 2600, 3600, 3800, MC3810, 4000, AS5100, AS5200, AS5300, 7000, 7100, 7200, MGX
8800 with an installed Route Processor Module, 12000, uBR900, uBR7200, Catalyst 5000
family switches with an installed Route Switch Module, Catalyst 6000 family switches with an
installed MultiLayer Switch Feature Card, and Catalyst 8500.
The configuration steps are as follows:

1. Define the DHCP address pool:


Router(config)#ip dhcp pool dhcp-pool-name
Router(dhcp-config)#network network-address subnetmask
Where dhcp-pool-name is the DHCP pool name, network-address is the network address to be
used by the DHCP pool, and subnetmask is the subnet mask for the network. You can replace the
subnet mask by (/prefix) to provide the subnet mask.
2. Configure the parameters to be sent to the client:
Router(dhcp-config)#dns-server dns-server-address
To provide the DNS server IP address:
Router(dhcp-config)#default-router default-gateway address

To provide the IP address of the default-gateway, which is usually the IP address of the router
interface connected to the network.
Router(dhcp-config)#domain-name domain
To provide the name of the domain of the network (if in a domain environment):
Router(dhcp-config)#netbios-name-server netbios-server address
To provide the IP address of the NetBIOS name server:
Router(dhcp-config)#lease days hours minutes
To define the lease time of the addresses given to the client. You can make it
infinite, which is not advised, by using this command instead
Router(dhcp-config)#lease infinite
There is a large group of settings that you can configure to be sent to the clients and I have only
mentioned the most frequently used.
3. Configure the IP addresses to be excluded from the pool. This is usually done to avoid the
conflicts caused by the DHCP with servers and printers. Remember to give all servers and
network printers‘ static IP addresses in the same range of the DHCP pool. Afterward, exclude
these addresses from the pool to avoid conflicts.
Router(config)#ip dhcp excluded-address excluded-ipaddress Use the command in the previous
form to exclude a single address. You can repeat it as many times as you see fit for the IP
addresses you want to exclude. You can also use the same command to exclude a range of IP
addresses all in a single command:
Router(config)#ip dhcp excluded-address start-ip-address end-ip-address
Where start-ip-address is the first address in the range to be excluded from the pool and end-ip-
address is the last excluded address in the range.
4. Enable the DHCP service in the router:
Router(config)#service dhcp
To disable it, use Router(config)#no service dhcp
Usually, the DHCP service is enabled by default on your router
5. Use the following commands to check the DHCP operation on the router:
Router#show ip dhcp binding
This command shows the current bindings of addresses given to clients.
Router#show ip dhcp server statistics

This command shows the DHCP server statistics.
Router#debug ip dhcp server
This debug command is used to troubleshoot DHCP issues.
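A minimal sketch of a DHCP pool serving the 192.168.1.0/24 LAN (the pool name, domain name, and all addresses are illustrative):

Router(config)#ip dhcp excluded-address 192.168.1.1 192.168.1.10
Router(config)#ip dhcp pool LAN-POOL
Router(dhcp-config)#network 192.168.1.0 255.255.255.0
Router(dhcp-config)#default-router 192.168.1.1
Router(dhcp-config)#dns-server 192.168.1.2
Router(dhcp-config)#domain-name example.local
Router(dhcp-config)#lease 7

Here the first ten addresses are excluded for the router interface, servers, and printers, and the lease is set to seven days. Router#show ip dhcp binding then lists the addresses that have been handed out.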

Implementation notes:

1. You can create a DHCP database agent that stores the DHCP binding database. A DHCP
database agent is any host; for example, an FTP, TFTP, or RCP server that stores the DHCP
bindings‘ database. You can configure multiple DHCP database agents, and you can configure
the interval between database updates and transfers for each agent. To configure a database agent
and database agent parameters, use the following command in global configuration mode:

Router(config)#ip dhcp database URL [timeout seconds | write-delay seconds]


An example URL is this ftp://user:password@192.168.0.3/router-dhcp
If you choose not to configure a DHCP database agent, disable the recording of DHCP address
conflicts on the DHCP server. To disable DHCP address conflict logging, use the following
command in global configuration mode:
Router(config)#no ip dhcp conflict logging

2. DHCP service uses UDP ports 67 and 68. So, if you are using a firewall, remember to open these ports.
To clear DHCP server variables, use the following commands as needed:

Router#clear ip dhcp server statistics


Router#clear ip dhcp binding *

If you want to clear a certain binding not all of them, replace the * in the previous command with
the IP address to be cleared.

How to Configure a Cisco Router as a DHCP Server for IPv6

When would you need this: When using the router as a DHCP server to provide IPv6 in stateless
and stateful configuration of DHCPv6.
Special Requirements: DHCPv6 support in IOS.

1. Create the DHCP pool:
Router(config)#ipv6 dhcp pool pool-name
2. Configure the parameters you want to pass to the clients:
Router(config-dhcp)#dns-server server-ipv6-address
Router(config-dhcp)#domain-name domain
3. If you are working on a stateless address auto-configuration scenario, skip the next two steps and
jump to 6.
4. Configure the IPv6 address prefix:
Router(config-dhcp)#address prefix ipv6-address-prefix
Where the ipv6-address-prefix is the 64-bit hexadecimal network address prefix.
5. An optional step is to set up a link address prefix: Router(config-dhcp)#link-address ipv6-link-
prefix
6. Enable DHCPv6 on the interface you want to be part of the DHCP process and assign a specific
pool to the interface:
Router(config-if)#ipv6 dhcp server pool-name
7. Check the address leases (in stateful addressing only):
Router#show ipv6 dhcp lease
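A minimal sketch of a stateful DHCPv6 pool (the prefix, pool name, domain name, and interface are illustrative):

Router(config)#ipv6 unicast-routing
Router(config)#ipv6 dhcp pool V6-POOL
Router(config-dhcpv6)#address prefix 2001:DB8:1::/64
Router(config-dhcpv6)#dns-server 2001:DB8:1::2
Router(config-dhcpv6)#domain-name example.local
Router(config-dhcpv6)#exit
Router(config)#interface gigabitethernet0/0
Router(config-if)#ipv6 address 2001:DB8:1::1/64
Router(config-if)#ipv6 dhcp server V6-POOL
Router(config-if)#ipv6 nd managed-config-flag

The last command sets the 'managed address configuration' flag in the router advertisements sent on that LAN, which tells clients to request their addresses from the DHCPv6 server instead of autoconfiguring them.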

How to Configure DHCP Relay in Cisco Router IPv4

If you have a DHCP server other than the router and you would like the router to pass the DHCP
requests to this DHCP server laying outside the LAN, go to the LAN interface that does not have
the DHCP server and type the following command:
Router(config-if)#ip helper-address dhcp-server-address
where dhcp-server-address is the IP address of the DHCP server located outside this LAN.

IPv6

If you have a DHCPv6 server other than the router and you would like the router to pass the
DHCPv6 requests to this DHCPv6 server laying outside the LAN, go to the LAN interface that
does not have the DHCPv6 server and type the following command:

Router(config-if)#ipv6 dhcp relay destination dhcp-server-ipv6-address

Where dhcp-server-ipv6-address is the IPv6 address of the DHCP server located outside this
LAN.

3.10. NAT and PAT


When would you need this: When you want to connect a local network to the Internet and the
available global IP addresses are less than the local IP addresses. This can also be used as an
additional security feature.

Special Requirements: None.

There are two types of NAT that can be configured on a Cisco router: static and dynamic.

Static NAT Configuration


This type is used when you want to do one-to-one assignment of global (namely public) IP
addresses to local IP addresses.

1. Establish static translation between an inside local address and an inside global address:
Router(config)#ip nat inside source static local-ip-address global-ip-address where local-ip
address is the (inside) local address and global-ip-address is the (inside) global address.
2. Specify the local interface (the interface connected to the internal network). This is done by
going to the interface configuration mode and issuing:
Router(config-if)#ip nat inside
3. Specify the global interface (the interface connected to the external network).
This is done by going to the interface configuration mode and issuing:
Router(config-if)#ip nat outside
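A minimal sketch mapping one internal server to one public address (all addresses and interface numbers are illustrative; 203.0.113.0/24 is a documentation range standing in for the global network):

Router(config)#ip nat inside source static 192.168.1.10 203.0.113.10
Router(config)#interface fastethernet0/0
Router(config-if)#ip nat inside
Router(config-if)#exit
Router(config)#interface serial0/0
Router(config-if)#ip nat outside

Traffic arriving on serial0/0 for 203.0.113.10 is translated to 192.168.1.10, and replies from the server are translated back on the way out.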

Dynamic NAT Configuration

This type is used when you want the router to do the mapping dynamically. This method is
useful when you have too many global and local addresses and you do not want to do the
mapping manually, or when the number of global addresses available is less than the local
addresses.

This would lead us to two different scenarios:

A. The number of global IP addresses is more than one and it is equal or less than the local
addresses.

1. Define a pool of global addresses that would be employed in the translation: Router(config)#ip
nat pool pool-name first-public-address last-public-address netmask public-subnetmask.
Where pool-name is the name of the pool, first-public-address is the starting IP address of the
pool, last-public-address is the end IP address of the pool, and public-subnetmask is the subnet
mask of the network that the pool is part of (i.e., the global network).
2. Define the range of local addresses permitted to participate in the translation using an access-list:
Router(config)#access-list access-list-number permit local-network-address wildcard-mask
Where access-list-number is the number of the access-list, which is usually a standard access list;
thus, the number can be any number from 1 to 99; local-network-address is the network address
of the local network or the starting IP address of the range; and wildcard-mask is the wildcard
mask used to define the range. You can issue more than one access-list sentence in the same
access-list to define the specific IP address range(s). If you are not familiar with wildcard masks,
refer to the note in section.
3. Associate the pool and the local range in a dynamic NAT translation command:
Router(config)#ip nat inside source list access-list number pool nat-pool-name [overload]
Where : access-list-number is the number of the access-list, nat-pool-name is the name of the
global pool, and overload : This parameter must be used when you have global IP addresses less
than local IP addresses (this type of NAT is also known as Port Address Translation, PAT).
4. Specify the local interface. This is done by going to the interface configuration mode and
issuing:
Router(config-if)#ip nat inside
5. Specify the global interface. This is done by going to the interface configuration mode and
issuing:
Router(config-if)#ip nat outside
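A minimal sketch of scenario A, translating the 192.168.1.0/24 LAN into a small public pool (the pool name, access-list number, addresses, and interface numbers are illustrative):

Router(config)#ip nat pool PUBLIC-POOL 203.0.113.10 203.0.113.20 netmask 255.255.255.0
Router(config)#access-list 1 permit 192.168.1.0 0.0.0.255
Router(config)#ip nat inside source list 1 pool PUBLIC-POOL
Router(config)#interface fastethernet0/0
Router(config-if)#ip nat inside
Router(config-if)#exit
Router(config)#interface serial0/0
Router(config-if)#ip nat outside

Adding the overload keyword to the ip nat inside source list command would let the eleven pool addresses serve more simultaneous hosts than there are addresses, by also translating port numbers.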

B. The other scenario is when there is only one global IP address and a group of local IP
addresses.

In this case, the only global IP address is assigned to the interface connected to the global network.

1. Define the range of local addresses permitted to participate in the translation using an access-list:
Router(config)#access-list access-list-number permit local-network-address wildcard mask
Where: access-list-number is the number of the access-list, which is usually a standard accesslist;
thus, the number can be any number from 1 to 99, local-network-address is the network address
of the local network or the starting IP address of the range, and wildcard-mask is the wildcard
mask used to define the range. You can issue more than one access-list sentence in the same
access-list to define the specific IP address range(s). If you are not familiar with wildcard masks,
refer to the note in Section.
2. Associate the pool and the local range in a dynamic NAT translation command:
Router(config)#ip nat inside source list access-listnumber interface interface-type interface-
number overload .
Where: access-list-number is the number of the access-list, interface-type is the type of the
interface that has the global IP address (e.g., serial or Ethernet), and interface-number is the
number of the interfaces. An example of the interface type and number is serial 0 or Ethernet 0/0.
3. Specify the local interface. This is done by going to the interface configuration mode and
issuing: Router(config-if)#ip nat inside
4. Specify the global interface. This is done by going to the interface configuration mode and
issuing:
Router(config-if)#ip nat outside
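A minimal sketch of scenario B, overloading the single address of the outside interface (the access-list number and interface numbers are illustrative):

Router(config)#access-list 1 permit 192.168.1.0 0.0.0.255
Router(config)#ip nat inside source list 1 interface serial0/0 overload
Router(config)#interface fastethernet0/0
Router(config-if)#ip nat inside
Router(config-if)#exit
Router(config)#interface serial0/0
Router(config-if)#ip nat outside

All hosts in 192.168.1.0/24 now share the IP address of serial0/0, with individual sessions distinguished by source port numbers.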

Troubleshooting Commands

1. To show the current translations performed by NAT


Router#show ip nat translation
Note that these translations have a certain lifetime. They do not remain in the list forever. If you
need to test your NAT configuration, ping to an outside host from an inside host and look for the
translations immediately.
2. To show the static translations of NAT:
Router#show ip nat static

3. To watch the instantaneous interactions of NAT:
Router#debug ip nat

Disabling NAT

To disable NAT, you need to do the following steps:

1. Disable NAT on the local and global interfaces:


Router(config-if)#no ip nat inside on the local, and
Router(config-if)#no ip nat outside on the global interface.
2. Clear the contents of the translation table:
Router#clear ip nat translations
3. Remove the NAT assignment command by preceding it with a ‗no‘. For example,
Router(config)#no ip nat inside source list access-listnumber interface interface-type interface-
number overload
4. Remove the access-list, if any, by putting ‗no‘ ahead of the command: Router(config)#no access-
list access-list-number

NAT-PT Configuration for IPv6

When would you need this:

When you have IPv6-only devices that need to communicate with IPv4-only devices.
Special Requirements: None.
NAT-PT, where PT stands for Protocol Translation, is a translation mechanism used to translate IPv6 packets into IPv4 packets and vice versa. NAT-PT can operate in one of three modes: static, dynamic, and Port Address Translation.

Before configuring NAT-PT, you need to enable IPv6 routing on the translation router using this
command:
Router(config)#ipv6 unicast-routing

1. In static configuration, an IPv6 address is statically mapped into an IPv4 address using the
following command:
Router(config)#ipv6 nat v6v4 source ipv6-address ipv4-address
Where ipv6-address is the IPv6 address assigned to the IPv6-only host and ipv4-address is the
IPv4 address assigned to the IPv4-only host. The previous command needs to be configured once
for every address. In a similar fashion, we need to identify the reversed mapping from IPv4 to
IPv6 using the following command:
Router(config)#ipv6 nat v4v6 source ipv4-address ipv6-address
where: ipv6-address is the IPv6 address assigned to the IPv6-only host and ipv4-address is the
IPv4 address assigned to the IPv4-only host. The next step is to enable IPv6 NAT on the IPv4
interface:
Router(config-if)#ipv6 nat
2. In dynamic configuration, you will need to configure translation in both ways: IPv6-to-IPv4 and
IPv4-to-IPv6. For the first option, IPv6-to-IPv4, you will need to identify the IPv6 addresses
using an access-list and map it to an IPv4 address pool to be used in the translation.
First, we identify the pool of IPv4 addresses using the command:
Router(config)#ipv6 nat v6v4 pool pool-name start-address end-address prefix-length prefix-
length
Where pool-name is the name of the NAT pool, start-address and end-address are the first and
last addresses in the pool, and prefix-length is the prefix length of the IPv4 network. Next, we
create a named access-list to identify the range of IPv6 addresses that are allowed to participate
in the translation. This is done using the following commands:
Router(config)#ipv6 access-list acl-name
Router(config-ipv6-acl)#permit ip ipv6-source-prefix/ prefix-length any
Where: acl-name is the name of the access-list, ipv6-source-prefix is the IPv6 prefix address of
the hosts that are allowed to use this NAT translation, and prefix-length is the IPv6 network
prefix length.
Repeat the last command as many times as you need to include all the addresses you want to
participate in the translation.
The last step is to configure the mapping using the following command:
Router(config)#ipv6 nat v6v4 source list acl-name pool pool-name
Where acl-name is the name of the access-list identified in the previous step and pool-name is
the name of the NAT pool we identified earlier. In the second part, we will need to identify the IPv4-to-IPv6 mapping using similar commands to the ones used before but exchanging IPv4 and
IPv6 addresses.
First, we identify the pool of IPv6 addresses using the command:
Router(config)#ipv6 nat v4v6 pool pool-name start-address end-address prefix-length prefix-length
Where pool-name is the name of the NAT pool, start-address and end-address are the first and
last addresses in the pool, and prefix-length is the prefix length of the IPv6 network. Next, we
create a numbered (or named) access-list to identify the range of IPv4 addresses that are allowed
to participate in the translation.
This is done using the following commands:
Router(config)#access-list acl-number permit ip ipv4- network-address wildcard-mask
Where acl-number is the number of the access-list. The number should be within the range 1-99
because it is a standard ACL; ipv4-network-address is the IPv4 network that includes the hosts
that are allowed to use this NAT translation; and wildcard-mask is the wildcard mask that
identifies the range.
Repeat the last command as many times as you need to include all the addresses you want to
participate in the translation using the same access-list number.
The last step is to configure the mapping using the following command:
Router(config)#ipv6 nat v4v6 source list acl-number pool pool-name
Where acl-number is the number of the access-list identified in the previous step and pool-name is
the name of the NAT pool we identified earlier.
3. Port Address Translation is configured in an identical manner to the previous case of dynamic
mapping with the exception of one small difference. In the mapping command, add the word
overload at the end after the pool name.
4. For verification purposes, use the following commands:
Router#show ipv6 nat translations
Router#clear ipv6 nat translation *
Router#debug ipv6 nat detail

3.11. PPP
How to Configure PPP on a Cisco Router

When would you need this: When you are creating a WAN link. This procedure might also be
required when the other end of a WAN link is not a Cisco router. Point-to-Point Protocol can be
used in synchronous, asynchronous, HSSI, and ISDN links.

Special Requirements: None.

1. Get to the interface configuration mode of the router‘s serial interface and issue the following
command,
Router(config-if)#encapsulation ppp
2. If you want to configure authentication (which is almost always the case), go through the
following steps:

o Choose the authentication type: Password Authentication Protocol (PAP) or Challenge


Handshake Authentication Protocol (CHAP)
Router(config-if)#ppp authentication authentication type
Where authentication type is the authentication type, which can be: PAP, CHAP, PAP CHAP, or
CHAP PAP. The last two choices are to use the second authentication type when the first one
fails. CHAP is strongly recommended over PAP for two reasons. First, PAP sends the username
and password in plaintext, while CHAP sends hashed challenges only.
Second, CHAP performs an operation similar to periodic re-authentication in the middle of the communication session, so it provides more security than PAP.
o Set a username and a password that the remote router would use to connect to your local router.
You can define many username/password pairs for many PPP connections to the same router.
Router(config)#username remote-username password remote-password
Where remote-username is username sent from the remote router, and remote-password is its
password. If the remote router was not configured with a username to send, it will send its
hostname instead. Issue this command once for each PPP connection. For example, if you are connecting RouterA to RouterB and RouterC, on RouterA issue this command once for each
remote router.
o Now, you can set the username and password that your local router would send to access the
remote router. For PAP authentication, you can specify the username and password that the local
router will send to the remote router for authentication using the following command,
Router(config-if)#ppp pap sent-username sent-username password sent-password
For CHAP, two commands are used:
Router(config-if)#ppp chap hostname sent-username
Router(config-if)#ppp chap password sent-password
The usernames and passwords are case sensitive, so be careful when writing them. This way, you
will have to write the username and password of the remote router in your local router and write
the username and password of your local router into your remote using the ‗username‘
command. If you do not set the username and password that will be sent from the local router to
the remote router for authentication, the router will use its hostname and secret password instead.
3. You can monitor the quality of the serial link that is using PPP with the following command,
Router(config-if)#ppp quality percentage
Where percentage is the minimum accepted link quality. If the link quality drops below the
percentage, the link will be shutdown and considered bad.
4. If the available bandwidth is small, you might consider compressing the data being transmitted
using the following command,
Router(config-if)#ppp compress compression-type
Where compression type is the compression type which can be predictor or stacker.
5. To troubleshoot PPP, you can use the following commands,
Router#debug ppp negotiation
Router#debug ppp packets
Router#debug ppp errors
Router#debug ppp authentication
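A minimal sketch of a serial link using CHAP between two routers whose hostnames are RouterA and RouterB (the hostnames, password, and interface numbers are illustrative):

RouterA(config)#username RouterB password cisco123
RouterA(config)#interface serial0/0
RouterA(config-if)#encapsulation ppp
RouterA(config-if)#ppp authentication chap

RouterB(config)#username RouterA password cisco123
RouterB(config)#interface serial0/0
RouterB(config-if)#encapsulation ppp
RouterB(config-if)#ppp authentication chap

Because no sent-username is configured, each router authenticates with its own hostname, so the username configured on each side must match the other side's hostname and the password must be identical on both routers. Router#debug ppp authentication shows the CHAP challenge and response exchange while the link comes up.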

3.12. Frame Relay


How to Configure Frame-Relay in a Cisco Router

When would you need this:

When you are setting up a Frame-relay WAN connection rented from a service provider.

Special Requirements: None.

Frame-relay configuration mainly depends on the topology you are using.

Point-to-Point Connection of Two Sites Using Physical Interfaces

1. On the serial interface, change the encapsulation type to Frame-relay:


Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the frame-relay
equipment.
2. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider.
3. Assign an IP address to the interface
Router(config-if)#ip address ip-address1 subnetmask1
where the ip address1 and subnetmask1 are the IP address and subnetmask assigned to the
Frame-relay interface on the first side of the link.
4. Map the Frame-relay DLCI number to a destination IP address:
Router(config-if)#Frame-relay map ip ip-address2 dlci-number encapsulation-type
where
ip-address2 is the IP address of the other side of the link. dlci-number is the virtual circuit
number given to you by the Frame-relay service provider. encapsulation-type is the type of
encapsulation standard used. The value is usually either Cisco or ietf. This information should
also be given to you by the Frame-relay service provider.
5. On the other end, the serial interface encapsulation type is changed to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay

where interface number is the number of the serial interface connected to the Frame-relay
equipment.
6. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider. Usually, it is the
same type used in step 2.
7. Assign an IP address to the interface
Router(config-if)#ip address ip-address2 subnetmask2
where the ip address2 and subnetmask2 are the IP address and subnetmask assigned to the
Frame-relay interface on the second side of the link.
8. Map the Frame-relay DLCI number to a destination IP address:
Router(config-if)#Frame-relay map ip ip-address1 dlci-number encapsulation-type
where
ip address1 is the IP address of the first side of the link. dlci-number is the virtual circuit number
given to you by the Frame-relay service provider. encapsulation-type is the type of encapsulation
standard used. The value is usually either Cisco or ietf. This information should also be given to
you by the Frame-relay service provider.
9. Use the following commands for troubleshooting:
Router#show Frame-relay lmi
Router#show Frame-relay pvc
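A minimal sketch of the two-site case (the addresses, DLCI numbers, and LMI type are illustrative; in practice the DLCIs and LMI type come from the service provider):

RouterA(config)#interface serial0/0
RouterA(config-if)#encapsulation frame-relay
RouterA(config-if)#frame-relay lmi-type ansi
RouterA(config-if)#ip address 10.0.0.1 255.255.255.252
RouterA(config-if)#frame-relay map ip 10.0.0.2 102 broadcast

RouterB(config)#interface serial0/0
RouterB(config-if)#encapsulation frame-relay
RouterB(config-if)#frame-relay lmi-type ansi
RouterB(config-if)#ip address 10.0.0.2 255.255.255.252
RouterB(config-if)#frame-relay map ip 10.0.0.1 201 broadcast

The optional broadcast keyword, not shown in the steps above, allows routing protocol updates (which are broadcast or multicast) to cross the virtual circuit. RouterA#show frame-relay pvc should then list DLCI 102 as ACTIVE.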

Point-to-Multipoint Using Physical Interfaces

In a point-to-multipoint Frame-relay connection, a central node is connected to a group of nodes


using a single physical line. The Frame-relay network will recognize the different destinations
through the use of different DLCI numbers on the same link. The configuration is similar to the
previous subsection except that at the central node multiple mappings are configured on the
Frame-relay interface while a single mapping is configured on each terminal interface.

1. At the central node, on the serial interface, change the encapsulation type to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the Frame-relay
equipment.
2. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider.
3. Assign an IP address to the interface
Router(config-if)#ip address central-ip-address subnetmask1
where the central ip address and subnetmask1 are the IP address and subnetmask assigned to the
Frame-relay interface on the central side of the link.
4. Map the Frame-relay DLCI number to a destination IP address:
Router(config-if)#Frame-relay map ip ip-address2 dlci-number encapsulation-type
where
ip address2 is the IP address of the other side of the link. dlci-number is the virtual circuit
number given to you by the Frame-relay service provider. encapsulation-type is the type of
encapsulation standard used. The value is usually either Cisco or ietf. This information should
also be given to you by the Frame-relay service provider.
This command is repeated once for every terminal node. Each terminal node would have a
different DLCI number.
5. On the terminal end, the serial interface encapsulation type is changed to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the Frame-relay
equipment.
6. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider. Usually, it is the
same type used in step 2.

7. Assign an IP address to the interface
Router(config-if)#ip address ip-address2 subnetmask2
where the ip address2 and subnetmask2 are the IP address and subnetmask assigned to the
Frame-relay interface on the second side of the link.
8. Map the Frame-relay DLCI number to a destination IP address:
Router(config-if)#Frame-relay map ip central-ip-address dlci-number encapsulation-type
where central-ip-address is the IP address of the central side of the link. dlci-number is the
virtual circuit number given to you by the Frame-relay service provider. encapsulation-type is the
type of encapsulation standard used. The value is usually either Cisco or ietf. This information
should also be given to you by the Frame-relay service provider.

Point-to-Multipoint Using Logical Interfaces

In a point-to-multipoint scenario, a single central station is connected through a single physical link to the Frame-relay network. Through that Frame-relay network, the central node is also connected to multiple terminal nodes. These connections are made by creating a single logical point-to-multipoint link carried over the single physical link.

1. At the central node, on the serial interface, change the encapsulation type to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the Frame-relay
equipment.
2. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider.
3. Assure that there is no IP address assigned to the interface
Router(config-if)#no ip address
4. Create a logical subinterface:
Router(config-if)#interface serial interface-number.logical-interface-number point-to-multipoint
5. On the logical interface, assign an IP address:
Router(config-if)#ip address ip-address1 subnetmask1
where the ip-address1 and subnetmask1 are the IP address and subnetmask assigned to the
Frame-relay logical interface on the central side of the link.
6. Map the interface to a specific DLCI number:
Router(config-subif)#Frame-relay interface-dlci dlcinumber
where dlci-number is the virtual circuit number given to you by the Frame-relay service
provider. This DLCI number resembles the virtual circuit leading to a specific remote node.
7. Repeat step 6 for as many remote nodes as you need.
8. On the remote node, the serial interface encapsulation type is changed to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the Frame-relay
equipment.
9. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider. Usually, it is the
same type used in step 2.
10. Assign an IP address to the interface
Router(config-if)#ip address ip-address2 subnetmask2
where the ip-address2 and subnetmask2 are the IP address and subnetmask assigned to the
Frame-relay interface on the remote side of the link.
11. Map the Frame-relay DLCI number to a destination IP address:
Router(config-if)#Frame-relay map ip ip-address1 dlci-number encapsulation-type
where ip-address1 is the IP address of the first side of the link. dlci-number is the virtual circuit
number given to you by the Frame-relay service provider. encapsulation-type is the type of
encapsulation standard used. The value is usually either Cisco or ietf. This information should
also be given to you by the Frame-relay service provider.
12. Repeat steps 8, 9, 10, and 11 on each remote node using different IP addresses and DLCI
numbers.

Multiple Point-to-Point Using Logical Interfaces

In what we call a multiple-point-to-point scenario, a single central station is connected through a single physical link to the Frame-relay network. Through that Frame-relay network, the central
node is also connected to multiple terminal nodes. However, these connections are done by
creating multiple logical point-to-point links carried over the single physical link. This way, the
separation of traffic handled from one node to the other is clearer and the remote nodes cannot
communicate unless the traffic passes through the central node.

1. At the central node, on the serial interface change the encapsulation type to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the Frame-relay
equipment.
2. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider.
3. Assure that there is no IP address assigned to the interface
Router(config-if)#no ip address
4. Create a logical subinterface:
Router(config-if)#interface serial interface-number.logical-interface-number point-to-point
5. On the logical interface, assign an IP address:
Router(config-if)#ip address ip-address1 subnetmask1
where the ip-address1 and subnetmask1 are the IP address and subnetmask assigned to the
Frame-relay logical interface on the central side of the link.
6. Map the interface to a specific DLCI number:
Router(config-subif)#Frame-relay interface-dlci dlcinumber
Where dlci-number is the virtual circuit number given to you by the Frame-relay service
provider. This DLCI number resembles the virtual circuit leading to a specific remote node.
7. Repeat steps 4, 5 and 6 for as many remote nodes as you need.

8. On the remote node, the serial interface encapsulation type is changed to Frame-relay:
Router(config)#interface serial interface-number
Router(config-if)#encapsulation Frame-relay
where interface number is the number of the serial interface connected to the Frame-relay
equipment.
9. Configure the LMI type:
Router(config-if)#Frame-relay lmi-type lmi-type
where lmi-type is the type of LMI standard used. The supported types are Cisco, ansi and q933a.
This information should be given to you by the Frame-relay service provider. Usually, it is the
same type used in step 2.
10. Assign an IP address to the interface
Router(config-if)#ip address ip-address2 subnetmask2
where the ip-address2 and subnetmask2 are the IP address and subnetmask assigned to the
Frame-relay interface on the remote side of the link.
11. Map the Frame-relay DLCI number to a destination IP address:
Router(config-if)#Frame-relay map ip ip-address1 dlci-number encapsulation-type
where ip-address1 is the IP address of the first side of the link. dlci-number is the virtual circuit
number given to you by the Frame-relay service provider. encapsulation-type is the type of
encapsulation standard used. The value is usually either Cisco or ietf. This information should
also be given to you by the Frame-relay service provider.
12. Repeat steps 8, 9, 10, and 11 on each remote node using different IP addresses and DLCI
numbers.
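A minimal sketch of a central node serving two remote sites over point-to-point subinterfaces (the hostname Central, DLCI numbers, addresses, and interface numbers are illustrative):

Central(config)#interface serial0/0
Central(config-if)#encapsulation frame-relay
Central(config-if)#no ip address
Central(config-if)#interface serial0/0.102 point-to-point
Central(config-subif)#ip address 10.0.12.1 255.255.255.252
Central(config-subif)#frame-relay interface-dlci 102
Central(config-fr-dlci)#interface serial0/0.103 point-to-point
Central(config-subif)#ip address 10.0.13.1 255.255.255.252
Central(config-subif)#frame-relay interface-dlci 103

Each subinterface behaves like a separate point-to-point link in its own subnet, so split-horizon no longer blocks routing updates from reaching the other remote site; each remote router is then configured as a single point-to-point peer in the matching subnet.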

Frame-Relay and Routing Issues

Cisco routers employ a technique called split-horizon. This technique is used to eliminate routing
loops by which a routing update cannot be forwarded to the same interface it came from.
Building on that logic, split-horizon can cause issues when using Frame-relay point-to-multipoint
topologies. Now think of a scenario where a routing update is coming from one of the remote
points connected on the other end of a point-to-multipoint link. The routing update, due to split-
horizon, will not be forwarded on the same physical link over to the other points connected to the
point-to-multipoint topology, because it will be considered coming from one interface and cannot be forwarded over to the same interface. This way, the other points will not be able to
exchange routing updates.

Split-horizon can be disabled using the following command on the interface level:
Router(config-if)#no ip split-horizon
On OSPF, you can use the following command:
Router(config-if)#ip ospf network point-to-multipoint

3.13. Router on a Stick

3.13.1 Router on the stick


Configuration of Router on a stick

Switches divide a broadcast domain through VLANs (Virtual LANs). A VLAN is a broadcast domain partitioned out of a single larger broadcast domain. A switch does not forward packets across different VLANs by itself. If we want these virtual LANs to communicate with each other, the concept of Inter-VLAN Routing is used.

Inter VLAN Routing:

Inter-VLAN routing is a process in which we make different virtual LANs communicate with each other irrespective of where the VLANs are present (on the same switch or on different switches). Inter-VLAN Routing can be achieved through a layer-3 device, i.e., a router or a layer-3 switch. When Inter-VLAN Routing is done through a single router interface carrying all the VLANs, the setup is known as Router on a Stick.

Router on a Stick:

The router's physical interface is divided into sub-interfaces, each of which acts as the default gateway for its respective VLAN.

Configuration:

Figure 3. 5 Configuration
Here is a topology with a router, a switch, and some end hosts. Two different VLANs have been
created on the switch. The router's interface is divided into two sub-interfaces (one per VLAN),
which act as the default gateways for their respective VLANs. The router then performs Inter
VLAN Routing so that the VLANs can communicate with each other.

First, we manually assign IP addresses: host PC1 gets 192.168.1.10/24, the server gets
192.168.1.20/24, and the other host PC2 gets 192.168.2.10/24.

Now, we create sub-interfaces fa0/0.1 and fa0/0.2 under the router's fa0/0 interface and assign them
the IP addresses 192.168.1.1/24 and 192.168.2.1/24 respectively.

r1(config)# int fa0/0.1
r1(config-subif)# encapsulation dot1q 2
r1(config-subif)# ip address 192.168.1.1 255.255.255.0
r1(config-subif)# int fa0/0.2
r1(config-subif)# encapsulation dot1q 3
r1(config-subif)# ip address 192.168.2.1 255.255.255.0

NOTE: The dot1q encapsulation type is used for frame tagging between the two different VLANs.
When the switch forwards a frame of one VLAN toward the router, it inserts a VLAN tag into the
Ethernet header.

Now, we create two different VLANs on the switch, namely VLAN 2 and VLAN 3, with the names
HR_dept and sales_dept.

Switch(config)# vlan 2
Switch(config-vlan)# name HR_dept
Switch(config-vlan)# vlan 3
Switch(config-vlan)# name sales_dept
Switch(config-vlan)# int range fa0/1-2
Switch(config-if-range)# switchport mode access
Switch(config-if-range)# switchport access vlan 2
Switch(config-if-range)# int fa0/3
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 3

Here, we have assigned VLAN 2 to switch ports fa0/1 and fa0/2, and VLAN 3 to port fa0/3.

NOTE: The int range fa0/1-2 command is used because more than one host is present in a single
VLAN.
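One detail the example above leaves implicit is the link between the switch and the router: for the router's sub-interfaces to receive tagged frames from both VLANs, the switch port facing the router must be configured as an 802.1Q trunk. A minimal sketch follows, assuming the router is attached to switch port fa0/4 (a hypothetical port number, since the topology does not name it):

Switch(config)# int fa0/4
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk allowed vlan 2,3

On platforms that support both ISL and 802.1Q (for example, some Catalyst 3560 models), the trunk encapsulation must be selected first with switchport trunk encapsulation dot1q; 2960-class switches support only 802.1Q, so that command is not available there.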

Now to check reachability of PC2 from PC1, we will try to PING PC2 from PC1.

Figure 3. 6 PING PC2 from PC1
From the above figures, we see that the packet is delivered to the router by the switch. Because the
broadcast domain has been divided by the different VLANs present on the switch, the packet is first
delivered to the default gateway (as PC2 is present on a different network) and then to the
destination.

UNIT- IV
4. SWITCHES
4.1. Switch basic configuration
Scenario: In this lab, you will examine and configure a standalone LAN switch. Although a
switch performs basic functions in its default out-of-the-box condition, there are a number of
parameters that a network administrator should modify to ensure a secure and optimized LAN.
This lab introduces you to the basics of switch configuration.

Task 1: Cable, Erase, and Reload the Switch

Step 1: Cable a network. Cable a network that is similar to the one in the topology diagram.
Create a console connection to the switch. You can use any current switch in your lab as long as
it has the required interfaces shown in the topology. The output shown in this lab is from a 2960
switch. If you use other switches, the switch outputs and interface descriptions may appear
different.
Note: PC2 is not initially connected to the switch. It is only used in Task 5.
Step 2: Clear the configuration on the switch. Clear the configuration on the switch using the
procedure in Appendix 1.

Task 2: Verify the Default Switch Configuration

Step 1: Enter privileged mode.


You can access all the switch commands in privileged mode. However, because many of the
privileged commands configure operating parameters, privileged access should be password-
protected to prevent unauthorized use. You will set passwords in Task 3. The privileged EXEC
command set includes those commands contained in user EXEC mode, as well as the configure
command, through which access to the remaining command modes is gained. Enter privileged
EXEC mode by entering the enable command.

Switch>enable
Switch#
Notice that the prompt changed in the configuration to reflect privileged EXEC mode.

Step 2: Examine the current switch configuration.
Examine the current running configuration file.
Switch#show running-config
Step 3: Display Cisco IOS information.
Examine the following version information that the switch reports.
Switch#show version
Step 4: Examine the Fast Ethernet interfaces.
Examine the default properties of the Fast Ethernet interface used by PC1.
Switch#show interface fastethernet 0/18
Step 5: Examine VLAN information.
Examine the default VLAN settings of the switch.

Switch#show vlan
Step 6: Examine flash memory.
Issue one of the following commands to examine the contents of the flash directory.

Switch#dir flash: or Switch#show flash


Files have a file extension, such as .bin, at the end of the filename. Directories do not have a file
extension. To examine the files in a directory, issue the following command using the filename
displayed in the output of the previous command:

Switch#dir flash:c2960-lanbase-mz.122-25.SEE3
The output should look similar to this:
Directory of flash:/c2960-lanbase-mz.122-25.SEE3/
6 drwx 4480 Mar 1 1993 00:04:42 +00:00 html
618 -rwx 4671175 Mar 1 1993 00:06:06 +00:00 c2960-lanbase-mz.122-25.SEE3.bin
619 -rwx 457 Mar 1 1993 00:06:06 +00:00 info
32514048 bytes total (24804864 bytes free)

Step 7: Examine the startup configuration file. To view the contents of the startup configuration
file, issue the show startup-config command in privileged EXEC mode.
Switch#show startup-config

startup-config is not present
Let's make one configuration change to the switch and then save it. Type the following
commands:

Switch#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Switch(config)#hostname S1
S1(config)#exit
S1#

To save the contents of the running configuration file to non-volatile RAM (NVRAM), issue
the command copy running-config startup-config.
Switch#copy running-config startup-config
Destination filename [startup-config]? (enter)
Building configuration…
[OK]
Note: This command is easier to enter by using the copy run start abbreviation.
Now display the contents of NVRAM using the show startup-config command.
S1#show startup-config
Using 1170 out of 65536 bytes

!
version 12.2
no service pad
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
!
hostname S1
!
<output omitted>

4.2. CAM Table
We dig in deeper into the operations of a switch in the CCNP SWITCH Official Certification
Guide. The CAM table is one of the fundamental operations of a switch. It is not only important
for the 642-813 SWITCH exam but it is important to know for working on the job. The CAM
table, or content addressable memory table, is present in all Cisco Catalysts for layer 2 switching.
It is used to record a station's MAC address and its corresponding switch port location. In addition,
a timestamp and the VLAN assignment are recorded for the entry.

The CAM table is used in multilayer switching for the purpose of quickly switching frames to
their destination. The switch looks at the incoming frame's source MAC address, enters it
into the CAM table, and keeps it there for 300 seconds (the default value) before it ages out.
If the device connected to that switchport is moved to another port, the switch records the
incoming source MAC address, updates the CAM table, and removes its previous entry for the
same MAC address.

Host A is connected to switch port 1 and Host B is connected to switch port 2.

1. Host A sends traffic to the switch.


2. The switch looks into the frame, records the source MAC address (of Host A), and places an entry into
the CAM table: Host A is on switchport 1, has the MAC address AAAA, a VLAN ID of 1, and a
timestamp.
3. Host B has not communicated with the switch yet.
4. Host A decides to communicate with Host B.
5. When Host A sends a frame to the switch destined to Host B, the switch notices the destination MAC
address (for Host B) in the frame, queries the CAM table for that MAC address but doesn‘t find it.
6. Because the destination MAC is unknown, the switch marks the frame for flooding and sends the unicast
frame to all ports with the same VLAN association.
7. Host B responds to the unicast frame.
8. The switch records the incoming frame from Host B and records Host B‘s MAC, switchport location,
VLAN ID, and applies a timestamp.
9. The next time Host A sends a frame destined for Host B, the switch queries its CAM table, finds Host B
in the table, and sends the frame directly to Host B.

CAM Table Before Host B Communicates on the Network

CAM Table After Host B Communicates on the Network

Figure 4. 1 Host A sending data to Host B
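On a Catalyst IOS switch, the CAM table described above can be inspected and tuned with the following standard commands. The MAC address aaaa.aaaa.aaaa simply echoes the AAAA used in the example and is a placeholder, and the aging-time value restates the 300-second default rather than changing it (older IOS releases hyphenate the keyword as mac-address-table):

Switch#show mac address-table dynamic
Switch#show mac address-table address aaaa.aaaa.aaaa
Switch#clear mac address-table dynamic
Switch(config)#mac address-table aging-time 300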

4.3. Port Security


An attacker's task is comparatively easy once they can get into the network they want to attack.
Ethernet LANs are particularly vulnerable to attack because switch ports are open to use by default.
Various attacks, such as layer 2 DoS attacks and address spoofing, can take place. If the
administrator has control over the network, then the network is obviously much safer. To take total
control over the switch ports, the user can use a feature called port security. If we can somehow
prevent unauthorized users from using these ports, security at layer 2 increases to a great
extent.

Users can secure a port in two steps:

1. Limiting the number of MAC addresses on a single switch port, i.e. if more MAC addresses than the
limit are learned on a single port, then an appropriate action is taken.
2. If unauthorized access is observed, discarding the traffic using one of the violation options and, more
appropriately, generating a log message so that the unauthorized access can be easily observed.

Port security –

Switches learn MAC addresses when frames are forwarded through a switch port. By using port
security, users can limit the number of MAC addresses that can be learned on a port, set static
MAC addresses, and set penalties for that port if it is used by an unauthorized user. Users can
choose the restrict, shutdown, or protect port-security violation mode. Let's discuss these violation
modes:

 protect – This mode drops packets with unknown source MAC addresses until you remove enough
secure MAC addresses to drop below the maximum value.
 restrict – This mode performs the same function as protect, i.e. it drops packets until enough secure MAC
addresses are removed to drop below the maximum value. In addition to this, it generates a log
message, increments the violation counter, and also sends an SNMP trap.
 shutdown – This mode is generally preferred over the other modes, as it shuts down the port
immediately when unauthorized access occurs. It also generates a log message, increments the violation counter, and sends
an SNMP trap. The port remains in the shutdown (err-disabled) state until the administrator issues the "no
shutdown" command.
 sticky – This is not a violation mode. By using the sticky command, the user gets static MAC address
security without typing the absolute MAC addresses. For example, if the user sets a maximum limit of 2,
then the first 2 MAC addresses learned on that port are placed in the running configuration. After the
2nd learned MAC address, if a 3rd device tries to access the port, the appropriate action is taken
according to the violation mode applied.
 Note – Port security works on access ports only, i.e. to enable port security, the user first has to
make the port an access port.

Configuration

Apply port security on the fa0/1 interface of the switch. First, convert the port to an access port and
enable port security.

S1(config)#int fa0/1
S1(config-if)#switchport mode access
S1(config-if)#switchport port-security

Use the sticky command so that the switch learns the MAC addresses dynamically, then provide the limit
and the appropriate action that should be taken.

S1(config-if)#switchport port-security mac-address sticky


S1(config-if)#switchport port-security maximum 2
S1(config-if)#switchport port-security violation shutdown

If the user wants to provide a static entry, then configure it by stating its MAC address.

S1(config-if)#switchport port-security
S1(config-if)#switchport port-security violation shutdown
S1(config-if)#switchport port-security mac-address aa.bb.cc.dd.ee.ff
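Once port security is applied, it can be checked with the following standard show commands; the interface number matches the example above. If a violation puts the port into the err-disabled state, it can be brought back manually by issuing shutdown followed by no shutdown on the interface.

S1#show port-security
S1#show port-security interface fa0/1
S1#show port-security address
S1#show running-config interface fa0/1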

4.4. VLANs
A VLAN is a switched network that is logically segmented by function, project team, or
application, without regard to the physical locations of the users. VLANs have the same
attributes as physical LANs, but you can group end stations even if they are not physically
located on the same LAN segment. Any switch port can belong to a VLAN, and unicast,
broadcast, and multicast packets are forwarded and flooded only to end stations in the VLAN.
Each VLAN is considered a logical network, and packets destined for stations that do not belong
to the VLAN must be forwarded through a router or a switch supporting fallback bridging.
Because a VLAN is considered a separate logical network, it contains its own bridge
Management Information Base (MIB) information and can support its own implementation of
spanning tree.

Figure 4. 2 VLANs as Logically Defined Networks
VLANs are often associated with IP subnetworks. For example, all the end stations in a
particular IP subnet belong to the same VLAN. Interface VLAN membership on the switch is
assigned manually on an interface-by-interface basis. When you assign switch interfaces to
VLANs by using this method, it is known as interface-based, or static, VLAN membership.
Traffic between VLANs must be routed or fallback bridged. The switch can route traffic between
VLANs by using switch virtual interfaces (SVIs). An SVI must be explicitly configured and
assigned an IP address to route traffic between VLANs.

Note
If you plan to configure many VLANs on the switch and to not enable routing, you can use the
sdm prefer vlan global configuration command to set the Switch Database Management (sdm)
feature to the VLAN template, which configures system resources to support the maximum
number of unicast MAC addresses.

VLAN Port Membership Modes

You configure a port to belong to a VLAN by assigning a membership mode that specifies the
kind of traffic the port carries and the number of VLANs to which it can belong. Table 4.1 lists
the membership modes and membership and VTP characteristics.

Table 4. 1 Port Membership Modes and Characteristics
Configuring Normal-Range VLANs

Normal-range VLANs are VLANs with VLAN IDs 1 to 1005. If the switch is in VTP server or
VTP transparent mode, you can add, modify or remove configurations for VLANs 2 to 1001 in
the VLAN database. (VLAN IDs 1 and 1002 to 1005 are automatically created and cannot be
removed.)

You can cause inconsistency in the VLAN database if you attempt to manually delete the
vlan.dat file. If you want to modify the VLAN configuration, use the commands described in
these sections and in the command reference for this release. To change the VTP configuration,
see the VTP section (4.6) later in this unit.

You use the interface configuration mode to define the port membership mode and to add and
remove ports from VLANs. The results of these commands are written to the running-
configuration file, and you can display the file by entering the show running-config privileged
EXEC command. You can set these parameters when you create a new normal-range VLAN or
modify an existing VLAN in the VLAN database:

 VLAN ID
 VLAN name
 VLAN type (Ethernet, Fiber Distributed Data Interface [FDDI], FDDI network entity title [NET], TrBRF,
TrCRF, Token Ring, or Token Ring-Net)
 VLAN state (active or suspended)
 Maximum transmission unit (MTU) for the VLAN
 Security Association Identifier (SAID)
 Bridge identification number for TrBRF VLANs
 Ring number for FDDI and TrCRF VLANs
 Parent VLAN number for TrCRF VLANs
 Spanning Tree Protocol (STP) type for TrCRF VLANs
 VLAN number to use when translating from one VLAN type to another

Token Ring VLANs

Although the switch does not support Token Ring connections, a remote device such as a
Catalyst 5000 series switch with Token Ring connections could be managed from one of the
supported switches. Switches running VTP Version 2 advertise information about these Token
Ring VLANs:

 Token Ring TrBRF VLANs


 Token Ring TrCRF VLANs

For more information on configuring Token Ring VLANs, see the Catalyst 5000 Series Software Configuration Guide.

Normal-Range VLAN Configuration Guidelines

Follow these guidelines when creating and modifying normal-range VLANs in your network:

 The switch supports 1005 VLANs in VTP client, server, and transparent modes.
 Normal-range VLANs are identified with a number between 1 and 1001. VLAN numbers 1002 through
1005 are reserved for Token Ring and FDDI VLANs.
 VLAN configuration for VLANs 1 to 1005 is always saved in the VLAN database. If the VTP mode is
transparent, VTP and VLAN configuration are also saved in the switch running configuration file.

 The switch also supports VLAN IDs 1006 through 4094 in VTP transparent mode (VTP disabled). These
are extended-range VLANs and configuration options are limited. Extended-range VLANs are not saved
in the VLAN database.
 Before you can create a VLAN, the switch must be in VTP server mode or VTP transparent mode. If the
switch is a VTP server, you must define a VTP domain or VTP will not function.
 The switch does not support Token Ring or FDDI media. The switch does not forward FDDI, FDDI-Net,
TrCRF, or TrBRF traffic, but it does propagate the VLAN configuration through VTP.
 The switch supports 128 spanning-tree instances. If a switch has more active VLANs than supported
spanning-tree instances, spanning tree can be enabled on 128 VLANs and is disabled on the remaining
VLANs. If you have already used all available spanning-tree instances on a switch, adding another VLAN
anywhere in the VTP domain creates a VLAN on that switch that is not running spanning-tree. If you
have the default allowed list on the trunk ports of that switch (which is to allow all VLANs), the new
VLAN is carried on all trunk ports. Depending on the topology of the network, this could create a loop in
the new VLAN that would not be broken, particularly if there are several adjacent switches that all have
run out of spanning-tree instances. You can prevent this possibility by setting allowed lists on the trunk
ports of switches that have used up their allocation of spanning-tree instances.

If the number of VLANs on the switch exceeds the number of supported spanning-tree instances,
we recommend that you configure the IEEE 802.1s Multiple STP (MSTP) on your switch to map
multiple VLANs to a single spanning-tree instance.

VLAN Configuration in config-vlan Mode

To access config-vlan mode, enter the vlan global configuration command with a VLAN ID.
Enter a new VLAN ID to create a VLAN, or enter an existing VLAN ID to modify that VLAN.
You can use the default VLAN configuration or enter multiple commands to configure the
VLAN. For more information about commands available in this mode, see the vlan global
configuration command description in the command reference for this release. When you have
finished the configuration, you must exit config-vlan mode for the configuration to take effect.
To display the VLAN configuration, enter the show vlan privileged EXEC command.
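As a brief sketch of the procedure just described, the following creates a normal-range VLAN in config-vlan mode and then verifies it; the VLAN ID 10 and the name engineering are arbitrary example values:

Switch#configure terminal
Switch(config)#vlan 10
Switch(config-vlan)#name engineering
Switch(config-vlan)#end
Switch#show vlan brief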

VLAN Configuration in VLAN Database Configuration Mode

To access VLAN database configuration mode, enter the vlan database privileged EXEC
command. Then enter the vlan command with a new VLAN ID to create a VLAN, or enter an
existing VLAN ID to modify the VLAN. You can use the default VLAN configuration or enter
multiple commands to configure the VLAN. For more information about keywords available in
this mode, see the vlan VLAN database configuration command description in the command
reference for this release. When you have finished the configuration, you must enter apply or exit
for the configuration to take effect. When you enter the exit command, it applies all commands
and updates the VLAN database. VTP messages are sent to other switches in the VTP domain,
and the privileged EXEC mode prompt appears.

Saving VLAN Configuration

The configurations of VLAN IDs 1 to 1005 are always saved in the VLAN database (vlan.dat
file). If the VTP mode is transparent, they are also saved in the switch running configuration file.
You can enter the copy running-config startup-config privileged EXEC command to save the
configuration in the startup configuration file. To display the VLAN configuration, enter the
show vlan privileged EXEC command.

When you save VLAN and VTP information (including extended-range VLAN configuration
information) in the startup configuration file and reboot the switch, the switch configuration is
selected as follows:

 If the VTP mode is transparent in the startup configuration, and the VLAN database and the VTP domain
name from the VLAN database matches that in the startup configuration file, the VLAN database is
ignored (cleared), and the VTP and VLAN configurations in the startup configuration file are used. The
VLAN database revision number remains unchanged in the VLAN database.
 If the VTP mode or domain name in the startup configuration does not match the VLAN database, the
domain name and VTP mode and configuration for the first 1005 VLANs use the VLAN database
information.
 If VTP mode is server, the domain name and VLAN configuration for the first 1005 VLANs use the
VLAN database information

4.5. STP
Configuring STP

This part describes how to configure the Spanning Tree Protocol (STP) on port-based VLANs on
the Catalyst 3560 switch. The switch can use either the per-VLAN spanning-tree plus (PVST+)
protocol based on the IEEE 802.1D standard and Cisco proprietary extensions, or the rapid per-
VLAN spanning-tree plus (rapid-PVST+) protocol based on the IEEE 802.1w standard.

STP Overview

STP is a Layer 2 link management protocol that provides path redundancy while preventing
loops in the network. For a Layer 2 Ethernet network to function properly, only one active path
can exist between any two stations. Multiple active paths among end stations cause loops in the
network. If a loop exists in the network, end stations might receive duplicate messages. Switches
might also learn end-station MAC addresses on multiple Layer 2 interfaces. These conditions
result in an unstable network. Spanning-tree operation is transparent to end stations, which
cannot detect whether they are connected to a single LAN segment or a switched LAN of
multiple segments. The STP uses a spanning-tree algorithm to select one switch of a redundantly
connected network as the root of the spanning tree. The algorithm calculates the best loop-free
path through a switched Layer 2 network by assigning a role to each port based on the role of the
port in the active topology:

 Root—A forwarding port elected for the spanning-tree topology


 Designated—A forwarding port elected for every switched LAN segment
 Alternate—A blocked port providing an alternate path to the root bridge in the spanning tree
 Backup—A blocked port in a loopback configuration

The switch that has all of its ports as the designated role or as the backup role is the root switch.
The switch that has at least one of its ports in the designated role is called the designated switch.

Spanning tree forces redundant data paths into a standby (blocked) state. If a network segment in
the spanning tree fails and a redundant path exists, the spanning-tree algorithm recalculates the
spanning-tree topology and activates the standby path. Switches send and receive spanning-tree
frames, called bridge protocol data units (BPDUs), at regular intervals. The switches do not
forward these frames but use them to construct a loop-free path. BPDUs contain information
about the sending switch and its ports, including switch and MAC addresses, switch priority, port

priority, and path cost. Spanning tree uses this information to elect the root switch and root port
for the switched network and the root port and designated port for each switched segment.

When two ports on a switch are part of a loop, the spanning-tree port priority and path cost
settings control which port is put in the forwarding state and which is put in the blocking state.
The spanning-tree port priority value represents the location of a port in the network topology
and how well it is located to pass traffic. The path cost value represents the media speed.

Spanning-Tree Topology and BPDUs

The stable, active spanning-tree topology of a switched network is controlled by these elements:

 The unique bridge ID (switch priority and MAC address) associated with each VLAN on each switch.
 The spanning-tree path cost to the root switch.
 The port identifier (port priority and MAC address) associated with each Layer 2 interface.

When the switches in a network are powered up, each functions as the root switch. Each switch
sends a configuration BPDU through all of its ports. The BPDUs communicate and compute the
spanning-tree topology. Each configuration BPDU contains this information:

 The unique bridge ID of the switch that the sending switch identifies as the root switch
 The spanning-tree path cost to the root
 The bridge ID of the sending switch
 Message age
 The identifier of the sending interface
 Values for the hello, forward delay, and max-age protocol timers

When a switch receives a configuration BPDU that contains superior information (lower bridge
ID, lower path cost, and so forth), it stores the information for that port. If this BPDU is received
on the root port of the switch, the switch also forwards it with an updated message to all attached
LANs for which it is the designated switch. If a switch receives a configuration BPDU that
contains inferior information to that currently stored for that port, it discards the BPDU. If the
switch is a designated switch for the LAN from which the inferior BPDU was received, it sends

that LAN a BPDU containing the up-to-date information stored for that port. In this way, inferior
information is discarded, and superior information is propagated on the network.

A BPDU exchange results in these actions:


 One switch in the network is elected as the root switch (the logical center of the spanning-tree topology in
a switched network).

For each VLAN, the switch with the highest switch priority (the lowest numerical priority value)
is elected as the root switch. If all switches are configured with the default priority (32768), the
switch with the lowest MAC address in the VLAN becomes the root switch. The switch priority
value occupies the most significant bits of the bridge ID.

A root port is selected for each switch (except the root switch). This port provides the best path
(lowest cost) when the switch forwards packets to the root switch. The shortest distance to the
root switch is calculated for each switch based on the path cost. A designated switch for each
LAN segment is selected. The designated switch incurs the lowest path cost when forwarding
packets from that LAN to the root switch. The port through which the designated switch is
attached to the LAN is called the designated port. All paths that are not needed to reach the root
switch from anywhere in the switched network are placed in the spanning-tree blocking mode.

Bridge ID, Switch Priority, and Extended System ID

The IEEE 802.1D standard requires that each switch has a unique bridge identifier (bridge ID),
which controls the selection of the root switch. Because each VLAN is considered a different
logical bridge with PVST+ and rapid PVST+, the same switch must have a different bridge ID
for each configured VLAN. Each VLAN on the switch has a unique 8-byte bridge ID. The 2
most-significant bytes are used for the switch priority, and the remaining 6 bytes are derived
from the switch MAC address. The switch supports the IEEE 802.1t spanning-tree extensions,
and some of the bits previously used for the switch priority are now used as the VLAN identifier.
The result is that fewer MAC addresses are reserved for the switch, and a larger range of VLAN
IDs can be supported, all while maintaining the uniqueness of the bridge ID. As shown in Table
4-2, the 2 bytes previously used for the switch priority are reallocated into a 4-bit priority value
and a 12-bit extended system ID value equal to the VLAN ID.

Table 4. 2 Switch Priority Value and Extended System ID
Spanning tree uses the extended system ID, the switch priority, and the allocated spanning-tree
MAC address to make the bridge ID unique for each VLAN. Support for the extended system ID
affects how you manually configure the root switch, the secondary root switch, and the switch
priority of a VLAN. For example, when you change the switch priority value, you change the
probability that the switch will be elected as the root switch. Configuring a higher value
decreases the probability; a lower value increases the probability.
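For example, the following global configuration command lowers the bridge priority for a single VLAN so that the switch is more likely to be elected root for it; VLAN 10 and the value 24576 (which must be a multiple of 4096 when the extended system ID is in use) are illustrative:

Switch(config)#spanning-tree vlan 10 priority 24576

Alternatively, the spanning-tree vlan 10 root primary command lets the switch compute a priority low enough to win the root election for that VLAN.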

Spanning-Tree Interface States

Propagation delays can occur when protocol information passes through a switched LAN. As a
result, topology changes can take place at different times and at different places in a switched
network. When an interface transitions directly from nonparticipation in the spanning-tree
topology to the forwarding state, it can create temporary data loops. Interfaces must wait for new
topology information to propagate through the switched LAN before starting to forward frames.
They must allow the frame lifetime to expire for forwarded frames that have used the old
topology. Each Layer 2 interface on a switch using spanning tree exists in one of these states:

 Blocking—The interface does not participate in frame forwarding.


 Listening—The first transitional state after the blocking state when the spanning tree decides that the
interface should participate in frame forwarding.
 Learning—The interface prepares to participate in frame forwarding.
 Forwarding—The interface forwards frames.
 Disabled—The interface is not participating in spanning tree because of a shutdown port, no link on the
port, or no spanning-tree instance running on the port.
 An interface moves through these states:
 From initialization to blocking
 From blocking to listening or to disabled
 From listening to learning or to disabled
 From learning to forwarding or to disabled

 From forwarding to disabled

Figure 4. 3 Spanning-Tree Interface States


When you power up the switch, spanning tree is enabled by default, and every interface in the
switch, VLAN, or network goes through the blocking state and the transitory states of listening
and learning. Spanning tree stabilizes each interface at the forwarding or blocking state.

When the spanning-tree algorithm places a Layer 2 interface in the forwarding state, this process
occurs:

1. The interface is in the listening state while spanning tree waits for protocol information to move the
interface to the blocking state.
2. While spanning tree waits for the forward-delay timer to expire, it moves the interface to the learning state
and resets the forward-delay timer.
3. In the learning state, the interface continues to block frame forwarding as the switch learns end-station
location information for the forwarding database.
4. When the forward-delay timer expires, spanning tree moves the interface to the forwarding state, where
both learning and frame forwarding are enabled.

Blocking State

A Layer 2 interface in the blocking state does not participate in frame forwarding. After
initialization, a BPDU is sent to each switch interface. A switch initially functions as the root
until it exchanges BPDUs with other switches. This exchange establishes which switch in the
network is the root or root switch. If there is only one switch in the network, no exchange occurs,
the forward-delay timer expires, and the interface moves to the listening state. An interface
always enters the blocking state after switch initialization. An interface in the blocking state
performs these functions:

 Discards frames received on the interface


 Discards frames switched from another interface for forwarding
 Does not learn addresses
 Receives BPDUs

Listening State

The listening state is the first state a Layer 2 interface enters after the blocking state. The
interface enters this state when the spanning tree decides that the interface should participate in
frame forwarding. An interface in the listening state performs these functions:

 Discards frames received on the interface


 Discards frames switched from another interface for forwarding
 Does not learn addresses
 Receives BPDUs

Learning State

A Layer 2 interface in the learning state prepares to participate in frame forwarding. The
interface enters the learning state from the listening state. An interface in the learning state
performs these functions:

 Discards frames received on the interface


 Discards frames switched from another interface for forwarding
 Learns addresses
 Receives BPDUs

Forwarding State

A Layer 2 interface in the forwarding state forwards frames. The interface enters the forwarding
state from the learning state. An interface in the forwarding state performs these functions:

 Receives and forwards frames received on the interface


 Forwards frames switched from another interface
 Learns addresses
 Receives BPDUs

Disabled State

A Layer 2 interface in the disabled state does not participate in frame forwarding or in the
spanning tree. An interface in the disabled state is nonoperational. A disabled interface performs
these functions:

 Discards frames received on the interface


 Discards frames switched from another interface for forwarding
 Does not learn addresses
 Does not receive BPDUs
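To tie this section together, the commands below select the rapid-PVST+ mode mentioned at the start of the section and then display the resulting port roles and interface states; VLAN 1 is used purely as an example:

Switch(config)#spanning-tree mode rapid-pvst
Switch(config)#end
Switch#show spanning-tree vlan 1
Switch#show spanning-tree summary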

4.6. VTP
This portion describes how to use the VLAN Trunking Protocol (VTP) and the VLAN database
for managing VLANs with the Catalyst 3560 switch.

Understanding VTP

VTP is a Layer 2 messaging protocol that maintains VLAN configuration consistency by


managing the addition, deletion, and renaming of VLANs on a network-wide basis. VTP
minimizes misconfigurations and configuration inconsistencies that can cause several problems,
such as duplicate VLAN names, incorrect VLAN-type specifications, and security violations.
Before you create VLANs, you must decide whether to use VTP in your network. Using VTP,
you can make configuration changes centrally on one or more switches and have those changes
automatically communicated to all the other switches in the network. Without VTP, you cannot
send information about VLANs to other switches. VTP is designed to work in an environment

where updates are made on a single switch and are sent through VTP to other switches in the
domain. It does not work well in a situation where multiple updates to the VLAN database occur
simultaneously on switches in the same domain, which would result in an inconsistency in the
VLAN database. The switch supports 1005 VLANs, but the number of routed ports, SVIs, and
other configured features affects the usage of the switch hardware. If the switch is notified by
VTP of a new VLAN and the switch is already using the maximum available hardware
resources, it sends a message that there are not enough hardware resources available and shuts
down the VLAN. The output of the show vlan user EXEC command shows the VLAN in a
suspended state. VTP only learns about normal-range VLANs (VLAN IDs 1 to 1005). Extended-
range VLANs (VLAN IDs greater than 1005) are not supported by VTP or stored in the VTP
VLAN database.

The VTP Domain

A VTP domain (also called a VLAN management domain) consists of one switch or several
interconnected switches under the same administrative responsibility sharing the same VTP
domain name. A switch can be in only one VTP domain. You make global VLAN configuration
changes for the domain. By default, the switch is in the VTP no-management-domain state until
it receives an advertisement for a domain over a trunk link (a link that carries the traffic of
multiple VLANs) or until you configure a domain name. Until the management domain name is
specified or learned, you cannot create or modify VLANs on a VTP server, and VLAN
information is not propagated over the network. If the switch receives a VTP advertisement over
a trunk link, it inherits the management domain name and the VTP configuration revision
number. The switch then ignores advertisements with a different domain name or an earlier
configuration revision number.

When you make a change to the VLAN configuration on a VTP server, the change is propagated
to all switches in the VTP domain. VTP advertisements are sent over all IEEE trunk connections,
including Inter-Switch Link (ISL) and IEEE 802.1Q. VTP dynamically maps VLANs with
unique names and internal index associations across multiple LAN types. Mapping eliminates
excessive device administration required from network administrators. If you configure a switch
for VTP transparent mode, you can create and modify VLANs, but the changes are not sent to

other switches in the domain, and they affect only the individual switch. However, configuration
changes made when the switch is in this mode are saved in the switch running configuration and
can be saved to the switch startup configuration file.

Configuring VTP

Default VTP Configuration

Table 4. 3 Default VTP Configuration
Configuration Mode

VTP Configuration in Global Configuration Mode

You can use the vtp global configuration command to set the VTP password, the version, the
VTP file name, the interface providing updated VTP information, the domain name, and the
mode, and to disable or enable pruning. For more information about available keywords, see the
command descriptions in the command reference for this release. The VTP information is saved
in the VTP VLAN database. When VTP mode is transparent, the VTP domain name and mode
are also saved in the switch running configuration file, and you can save it in the switch startup
configuration file by entering the copy running-config startup-config privileged EXEC
command. You must use this command if you want to save VTP mode as transparent, even if the
switch resets. When you save VTP information in the switch startup configuration file and reboot
the switch, the switch configuration is selected as follows:

 If the VTP mode is transparent in the startup configuration and the VLAN database and the
VTP domain name from the VLAN database matches that in the startup configuration file, the
VLAN database is ignored (cleared), and the VTP and VLAN configurations in the startup
configuration file are used. The VLAN database revision number remains unchanged in the
VLAN database.
 If the VTP mode or domain name in the startup configuration does not match the VLAN
database, the domain name and VTP mode and configuration for the first 1005 VLANs use the
VLAN database information.

VTP Configuration in VLAN Database Configuration Mode

You can configure all VTP parameters in VLAN database configuration mode, which you access
by entering the vlan database privileged EXEC command. For more information about available
keywords, see the vtp VLAN database configuration command description in the command
reference for this release. When you enter the exit command in VLAN database configuration
mode, it applies all the commands that you entered and updates the VLAN database. VTP
messages are sent to other switches in the VTP domain, and the privileged EXEC mode prompt
appears. If VTP mode is transparent, the domain name and the mode (transparent) are saved in
the switch running configuration, and you can save this information in the switch startup
configuration file by entering the copy running-config startup-config privileged EXEC
command.

VTP Configuration Guidelines

These sections describe guidelines you should follow when implementing VTP in your network.

Domain Names

When configuring VTP for the first time, you must always assign a domain name. You must
configure all switches in the VTP domain with the same domain name. Switches in VTP
transparent mode do not exchange VTP messages with other switches, and you do not need to
configure a VTP domain name for them.

Passwords

You can configure a password for the VTP domain, but it is not required. If you do configure a
domain password, all domain switches must share the same password and you must configure
the password on each switch in the management domain. Switches without a password or with
the wrong password reject VTP advertisements. If you configure a VTP password for a domain,
a switch that is booted without a VTP configuration does not accept VTP advertisements until
you configure it with the correct password. After the configuration, the switch accepts the next
VTP advertisement that uses the same password and domain name in the advertisement. If you
are adding a new switch to an existing network with VTP capability, the new switch learns the
domain name only after the applicable password has been configured on it.

VTP Version

Follow these guidelines when deciding which VTP version to implement:

 All switches in a VTP domain must run the same VTP version.
 A VTP Version 2-capable switch can operate in the same VTP domain as a switch running VTP Version
1 if Version 2 is disabled on the Version 2-capable switch (Version 2 is disabled by default).
 Do not enable VTP Version 2 on a switch unless all of the switches in the same VTP domain are Version-
2-capable. When you enable Version 2 on a switch, all of the Version-2-capable switches in the domain
enable Version 2. If there is a Version 1-only switch, it does not exchange VTP information with switches
that have Version 2 enabled.
 If there are TrBRF and TrCRF Token Ring networks in your environment, you must enable VTP Version
2 for Token Ring VLAN switching to function properly. To run Token Ring and Token Ring-Net, disable
VTP Version 2.

Configuration Requirements

When you configure VTP, you must configure a trunk port so that the switch can send and
receive VTP advertisements to and from other switches in the domain. If you are configuring
VTP on a cluster member switch to a VLAN, use the rcommand privileged EXEC command to
log in to the member switch. For more information about the command, see the command
reference for this release. If you are configuring extended-range VLANs on the switch, the

switch must be in VTP transparent mode. VTP does not support private VLANs. If you
configure private VLANs, the switch must be in VTP transparent mode. When private VLANs
are configured on the switch, do not change the VTP mode from transparent to client or server
mode.

Configuring a VTP Server

When a switch is in VTP server mode, you can change the VLAN configuration and have it
propagated throughout the network.

Table 4. 4 Configuring a VTP Server


When you configure a domain name, it cannot be removed; you can only reassign a switch to a
different domain. To return the switch to a no-password state, use the no vtp password global
configuration command. This example shows how to use global configuration mode to configure

the switch as a VTP server with the domain name eng_group and the password mypassword:
Switch# config terminal

Switch(config)# vtp mode server


Switch(config)# vtp domain eng_group
Switch(config)# vtp password mypassword
Switch(config)# end

You can also use VLAN database configuration mode to configure VTP parameters. Beginning
in privileged EXEC mode, follow these steps to use VLAN database configuration mode to
configure the switch as a VTP server:

When you configure a domain name, it cannot be removed; you can only reassign a switch to a
different domain. To return the switch to a no-password state, use the no vtp password VLAN
database configuration command. This example shows how to use VLAN database configuration
mode to configure the switch as a VTP server with the domain name eng_group and the
password mypassword:

Switch# vlan database


Switch(vlan)# vtp server
Switch(vlan)# vtp domain eng_group

Switch(vlan)# vtp password mypassword
Switch(vlan)# exit
APPLY completed.
Exiting….
Switch#

Configuring a VTP Client

When a switch is in VTP client mode, you cannot change its VLAN configuration. The client
switch receives VTP updates from a VTP server in the VTP domain and then modifies its
configuration accordingly.

Use the no vtp mode global configuration command to return the switch to VTP server mode. To
return the switch to a no-password state, use the no vtp password privileged EXEC command.
When you configure a domain name, it cannot be removed; you can only reassign a switch to a
different domain.
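A minimal sketch of the client-side configuration, reusing the eng_group domain name and the mypassword password from the server example above, followed by a verification command:

Switch#configure terminal
Switch(config)#vtp mode client
Switch(config)#vtp domain eng_group
Switch(config)#vtp password mypassword
Switch(config)#end
Switch#show vtp status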

Quizzes

Activity 4.4

4.7. Inter VLAN Communication

After VLANs are assigned, broadcast packets are only forwarded in the same VLAN. This
means that hosts in different VLANs cannot communicate at Layer 2. In real-world scenarios,
hosts in different VLANs often need to communicate, so inter-VLAN communication needs to
be implemented to resolve this.

Similar to intra-VLAN communication described in Intra-VLAN Communication, inter-VLAN


communication goes through three phases: packet transmission from the source host, Ethernet
switching in a switch, and adding and removing VLAN tags during the exchange between
devices. According to the Ethernet switching principle, broadcast packets are only forwarded in
the same VLAN and hosts in different VLANs cannot directly communicate at Layer 2. Layer 3
routing or VLAN translation technology is required to implement inter-VLAN communication.

Inter-VLAN Communication Technologies

Huawei provides a variety of technologies to implement inter-VLAN communication. The


following technologies are commonly used:

 VLANIF interface A VLANIF interface is a Layer 3 logical interface that can be used to implement inter-
VLAN Layer 3 connectivity. It is simple to configure a VLANIF interface, so VLANIF interfaces are the
most commonly used for inter-VLAN communication. However, a VLANIF interface needs to be
configured for each VLAN and each VLANIF interface requires an IP address. As a result, this
technology wastes IP addresses.
 Dot1q termination sub-interface A sub-interface is also a Layer 3 logical interface that can be used to
implement inter-VLAN Layer 3 connectivity. A Dot1q termination sub-interface applies to scenarios
where a Layer 3 Ethernet interface connects to multiple VLANs. In such a scenario, data flows from
different VLANs preempt bandwidth of the primary Ethernet interface; therefore, the primary Ethernet
interface may become a bottleneck when the network is busy.
 VLAN aggregation VLAN aggregation associates multiple sub-VLANs with a super-VLAN. The sub-VLANs
share the IP address of the super-VLAN, which acts as the gateway IP address, to implement Layer 3
connectivity with an external network. Proxy ARP can be enabled between sub-VLANs to implement
Layer 3 connectivity between sub-VLANs. VLAN aggregation conserves IP addresses. VLAN
aggregation applies to scenarios where multiple VLANs share a gateway. For details about VLAN
aggregation, see VLAN Aggregation Configuration.

 VLAN Switch VLAN Switch (switch-vlan) requires a pre-configured static forwarding path
along switching nodes on a network. When a switching node receives VLAN-tagged frames matching
VLAN Switch entries, it directly forwards the frames to the corresponding interfaces according to the static
forwarding path, thus implementing Layer 2 communication. Switch-VLAN does not require lookup of
the MAC address table, so the forwarding efficiency and security are enhanced. If a switching node
connects to many user devices, the network administrator needs to configure each user device in advance
to establish a static forwarding path. This increases the manual configuration workload and makes
network management inconvenient. Switch-VLAN applies to small-scale networks.

Inter-VLAN Communication through the Same Switch

Host_1 (source host) and Host_2 (destination host) connect to the same Layer 3 switch, are
located on different network segments, and belong to VLAN 2 and VLAN 3, respectively. After
VLANIF 2 and VLANIF 3 are created on the switch and allocated IP addresses, the default
gateway addresses of the hosts are set to IP addresses of the VLANIF interfaces.

Figure 4. 4 Using VLANIF interfaces to implement inter-VLAN communication through the same switch
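The scenario above uses Huawei VLANIF interfaces. On a Cisco Catalyst multilayer switch, the equivalent construct is the switch virtual interface (SVI) mentioned in Section 4.4; a rough equivalent sketch for the same two VLANs is shown below, assuming 10.1.1.1/24 and 10.2.2.1/24 as the gateway addresses (the second address is an assumption, since the text only gives Host_2's address 10.2.2.2):

Switch(config)#ip routing
Switch(config)#interface vlan 2
Switch(config-if)#ip address 10.1.1.1 255.255.255.0
Switch(config-if)#interface vlan 3
Switch(config-if)#ip address 10.2.2.1 255.255.255.0
Switch(config-if)#end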
When Host_1 sends a packet to Host_2, the packet is transmitted as follows (assuming that no
forwarding entry exists on the switch):

1. Host_1 determines that the destination IP address is on a different network segment from its own IP
address, and therefore sends an ARP Request packet to request the gateway MAC address. The ARP
Request packet carries the destination IP address of 10.1.1.1 (gateway‘s IP address) and all-F destination
MAC address.
2. When the ARP Request packet reaches IF_1 on the Switch, the Switch tags the packet with VLAN 2
(PVID of IF_1). The Switch then adds the mapping between the source MAC address, VLAN ID, and
interface (1-1-1, 2, IF_1) in its MAC address table.

3. The Switch detects that the packet is an ARP Request packet and the destination IP address is the IP
address of VLANIF 2. The Switch then encapsulates VLANIF 2‘s MAC address of 3-3-3 into the ARP
Reply packet before sending it from IF_1. In addition, the Switch adds the binding of the IP address and
MAC address of Host_1 in its ARP table.
4. After receiving the ARP Reply packet from the Switch, Host_1 adds the binding of the IP address and
MAC address of VLANIF 2 on the Switch in its ARP table and sends a packet to the Switch. The packet
carries the destination MAC address of 3-3-3 and destination IP address of 10.2.2.2 (Host_2‘s IP address).
5. After the packet reaches IF_1 on the Switch, the Switch tags the packet with VLAN 2.
6. The Switch updates its MAC address table based on the source MAC address, VLAN ID, and inbound
interface of the packet, and compares the destination MAC address of the packet with the MAC address
of VLANIF 2. If they are the same, the Switch determines that the packet should be forwarded at Layer 3
and searches for a Layer 3 forwarding entry based on the destination IP address. If no entry is found, the
Switch sends the packet to the CPU. The CPU then searches for a routing entry to forward the packet.
7. The CPU looks up the routing table based on the destination IP address of the packet and detects that the
destination IP address matches a directly connected network segment (network segment of VLANIF 3).
The CPU continues to look up its ARP table but finds no matching ARP entry. Therefore, the Switch
broadcasts an ARP Request packet with the destination address of 10.2.2.2 to all interfaces in VLAN 3.
The ARP Request packet will be sent from IF_2.
8. After receiving the ARP Request packet, Host_2 detects that the IP address is its own IP address and
sends an ARP Reply packet with its own MAC address. Additionally, Host_2 adds the mapping between the MAC
address and IP address of VLANIF 3 to its ARP table.
9. After IF_2 on the Switch receives the ARP Reply packet, the Switch tags the packet with VLAN 3
and adds the binding of the MAC address and IP address of Host_2 in its ARP table. Before forwarding
the packet from Host_1 to Host_2, the Switch removes the VLAN 3 tag from the packet. The Switch
also adds the binding of Host_2's IP address, MAC address, VLAN ID, and outbound interface in its
Layer 3 forwarding table.

The packet sent from Host_1 then reaches Host_2. The packet transmission process from Host_2
to Host_1 is similar. Subsequent packets between Host_1 and Host_2 are first sent to the
gateway (Switch), and the Switch forwards the packets at Layer 3 based on its Layer 3
forwarding table.

Inter-VLAN Communication through Multiple Switches

When hosts in different VLANs connect to multiple Layer 3 switches, you need to configure
static routes or a dynamic routing protocol in addition to VLANIF interface addresses. This is
because IP addresses of VLANIF interfaces can only be used to generate direct routes.

In Figure 4.5, Host_1 (source host) and Host_2 (destination host) are located on different
network segments, connect to Layer 3 switches Switch_1 and Switch_2, and belong to VLAN 2
and VLAN 3, respectively. On Switch_1, VLANIF 2 and VLANIF 4 are created and allocated IP
addresses of 10.1.1.1 and 10.1.4.1. On Switch_2, VLANIF 3 and VLANIF 4 are created and
allocated IP addresses of 10.1.2.1 and 10.1.4.2. Static routes are configured on Switch_1 and
Switch_2. On Switch_1, the destination network segment in the static route is 10.1.2.0/24 and
the next hop address is 10.1.4.2. On Switch_2, the destination network segment in the static route
is 10.1.1.0/24 and the next hop address is 10.1.4.1.
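Expressed as Cisco-IOS-style commands (the scenario itself is on Huawei switches, so this is only an equivalent sketch), the two static routes described above would be:

Switch_1(config)#ip route 10.1.2.0 255.255.255.0 10.1.4.2
Switch_2(config)#ip route 10.1.1.0 255.255.255.0 10.1.4.1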

Figure 4. 5 Using VLANIF interfaces to implement inter-VLAN communication through multiple switches
When Host_1 sends a packet to Host_2, the packet is transmitted as follows (assuming that no
forwarding entry exists on Switch_1 and Switch_2):

1. The first six steps are similar to steps 1 to 6 in inter-VLAN communication when hosts connect to the
same switch. After the steps are complete, Switch_1 sends the packet to its CPU and the CPU looks up
the routing table.

2. The CPU of Switch_1 searches for the routing table based on the destination IP address of 10.1.2.2 and
finds a static route. In the static route, the destination network segment is 10.1.2.0/24 and the next hop
address is 10.1.4.2. The CPU continues to look up its ARP table but finds no matching ARP entry.
Therefore, Switch_1 broadcasts an ARP Request packet with the destination address of 10.1.4.2 to all
interfaces in VLAN 4. IF_2 on Switch_1 transparently transmits the ARP Request packet to IF_2 on
Switch_2 without removing the tag from the packet.
3. After the ARP Request packet reaches Switch_2, Switch_2 finds that the destination IP address of the
ARP Request packet is the IP address of VLANIF 4. Switch_2 then sends an ARP Reply packet with the
MAC address of VLANIF 4 to Switch_1.
4. IF_2 on Switch_2 transparently transmits the ARP Reply packet to Switch_1. After Switch_1 receives the
ARP Reply packet, it adds the binding of the MAC address and IP address of VLANIF4 in its ARP table.
5. Before forwarding the packet of Host_1 to Switch_2, Switch_1 changes the destination MAC address of
the packet to the MAC address of VLANIF 4 on Switch_2 and the source MAC address to the MAC
address of VLANIF 4 on itself. In addition, Switch_1 records the forwarding entry (10.1.2.0/24, next hop
IP address, VLAN, and outbound interface) in its Layer 3 forwarding table. Similarly, the packet is
transparently transmitted to IF_2 on Switch_2.
6. After Switch_2 receives packets of Host_1 forwarded by Switch_1, the steps similar to steps 6 to 9 in
inter-VLAN communication when hosts connect to the same switch are performed. In addition, Switch_2
records the forwarding entry (Host_2‘s IP address, MAC address, VLAN, and outbound interface) in its
Layer 3 forwarding table.

4.8. Miscellaneous
This part contains miscellaneous configurations that are specific to certain access points.

Using the LAN ports on 700W APs

The Cisco Aironet 700W series access points have one 10/100/1000BASE-T PoE Uplink/WAN
port and four 10/100/1000BASE-T RJ-45 local Ethernet ports for wired device connectivity. The
fourth port functions as a PoE-Out port when the AP is powered by 802.3at Ethernet switch,
Cisco power injector AIR-PWRJ4=, or Cisco Power Supply. By default, all four local Ethernet
ports are disabled. You can enable them when required. You can also assign the local
Ethernet ports to a VLAN ID using the interface configuration command, vlan vlan-id.

Enable LAN ports on 702W

Step 1: Enter global configuration mode.
ap#conf t
Enter configuration commands, one per line. End with CNTL/Z.
Step 2: Enable the LAN port.
ap(config)#lan-Port port-id 1
ap(config-lan-port)#no shutdown
ap(config-lan-port)#end

Assign a VLAN to the LAN ports

Use the commands given in the example below.

ap#conf t
Enter configuration commands, one per line. End with CNTL/Z.
ap(config)#lan-Port port-id 1
ap(config-lan-port)#vlan 25
ap(config-lan-port)#end

Verifying the LAN Port Configurations

Use the command given in the example below

voip#sh lan config
LAN table entries:


Port Status Vlan valid Vlan Id
—- ——— ———- ——-
LAN1 DISABLED 25 NA
LAN2 ENABLED NO NA
LAN3 DISABLED NO NA
LAN4 ENABLED NO NA
LAN POE out state = ENABLED

700W AP as Workgroup Bridge

Like other Cisco access points, the 702W AP series can also be configured as a Workgroup Bridge
(WGB). A WGB can provide a wireless infrastructure connection for Ethernet-enabled devices.
Devices that do not have a wireless client adapter to connect to the wireless network can
be connected to the WGB through its Ethernet ports. The WGB supports up to 20 Ethernet-
enabled devices on a Wireless LAN (WLAN). The WGB associates to the root AP through the
wireless interface. In this way, wired clients obtain access to the wireless network. A WGB can
associate to:

 An AP
 A root bridge (in AP mode)
 A controller through a lightweight AP

When a Cisco 702W access point acts as a WGB, the wired Ethernet clients behind the WGB can
be either connected to the LAN or WAN ports present on the 702W AP.

4.9 REFERENCES
1. Rufi, Oppenheimer, Woodward and Brady, Network Fundamentals, CCNA Exploration
Labs and Study Guide, CISCO Press, 2008.
2. Dye, McDonald and Rufi, Network Fundamentals, CCNA Exploration Companion
Guide, CISCO Press, 2007.
3. Priscilla Oppenheimer, Top-Down Network Design (2nd Edition), Cisco Press, 2004.
4. Christina J. Hogan. The Practice of System and Network Administration, Addison-
Wesley Professional, 2001.

INFORMATION ASSURANCE AND SECURITY

Prepared by
Mesele Gebre, Senior Lecturer

Reviewed by Aklili Elias (MSc)

UNIT - I

1. Introduction
Information assurance (IA) is the practice of assuring information and managing risks related to the use,
processing, storage, and transmission of information or data and the systems and processes used for those
purposes. Information assurance includes protection of the integrity, availability, authenticity, non-
repudiation and confidentiality of user data. It uses physical, technical and administrative controls to
accomplish these tasks. While focused predominantly on information in digital form, the full range of IA
encompasses not only digital but also analog or physical form. These protections apply to data in transit,
both physical and electronic forms as well as data at rest in various types of physical and electronic
storage facilities. Information assurance as a field has grown from the practice of information security.

1.1 Information assurance process


The information assurance process typically begins with the enumeration and classification of the
information assets to be protected. Next, the IA practitioner will perform a risk assessment for those
assets. Vulnerabilities in the information assets are determined in order to enumerate the threats capable
of exploiting the assets. The assessment then considers both the probability and the impact of a threat
exploiting a vulnerability in an asset, with impact usually measured in terms of cost to the asset's
stakeholders. The sum of the products of the threats' impact and the probability of their occurring is the
total risk to the information asset. With the risk assessment complete, the IA practitioner then develops a
risk management plan. This plan proposes countermeasures that involve mitigating, eliminating,
accepting, or transferring the risks, and considers prevention, detection, and response to threats.
Countermeasures may include technical tools such as firewalls and anti-virus software, policies and
procedures requiring such controls as regular backups and configuration hardening, employee training in
security awareness, or organizing personnel into a dedicated computer emergency response team (CERT)
or computer security incident response team (CSIRT). The cost and benefit of each countermeasure are
carefully considered. Thus, the IA practitioner does not seek to eliminate all risks, even were that possible,
but to manage them in the most cost-effective way. After the risk management plan is implemented, it is
tested and evaluated, often by means of formal audits. The IA process is an iterative one, in that the risk
assessment and risk management plan are meant to be periodically revised and improved based on data
gathered about their completeness and effectiveness. A small numeric illustration of the risk calculation follows.
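
As a brief, purely hypothetical illustration of the calculation described above (the figures are invented for teaching and are not drawn from any real assessment), total risk can be computed in Python as the sum over threats of probability times impact:

threats = [
    # (threat, annual probability of occurrence, impact in cost to stakeholders)
    ("ransomware infection", 0.10, 250000),
    ("laptop theft",         0.30,  20000),
    ("accidental data loss", 0.20,  50000),
]

total_risk = sum(prob * impact for _, prob, impact in threats)
print("Total annualized risk:", total_risk)   # 25000 + 6000 + 10000 = 41000

# A countermeasure is attractive when its yearly cost is less than the
# reduction in risk it buys (the cost/benefit comparison described above).
backup_cost, risk_reduction = 8000, 10000
print("Adopt off-site backups:", backup_cost < risk_reduction)   # True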

Information security, sometimes shortened to InfoSec, is the practice of defending information from
unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or
destruction. It is a general term that can be used regardless of the form the data may take (electronic,
physical, etc.). Two major aspects of information security are:

 IT security: Sometimes referred to as computer security, Information Technology Security is


information security applied to technology (most often some form of computer system). It is

worthwhile to note that a computer does not necessarily mean a home desktop. A computer is any
device with a processor and some memory (even a calculator). IT security specialists are almost
always found in any major enterprise/establishment due to the nature and value of the data within
larger businesses. They are responsible for keeping all of the technology within the company
secure from malicious cyber-attacks that often attempt to breach into critical private information
or gain control of the internal systems.
 Information assurance: The act of ensuring that data is not lost when critical issues arise. These
issues include but are not limited to: natural disasters, computer/server malfunction, physical
theft, or any other instance where data has the potential of being lost. Since most information is
stored on computers in our modern era, information assurance is typically dealt with by IT
security specialists. One of the most common methods of providing information assurance is to
have an off-site backup of the data in case one of the mentioned issues arises. Governments,
military, corporations, financial institutions, hospitals, and private businesses amass a great deal
of confidential information about their employees, customers, products, research and financial
status. Most of this information is now collected, processed and stored on electronic computers
and transmitted across networks to other computers.

Definitions:

1. "Preservation of confidentiality, integrity and availability of information. Note: In addition, other
properties, such as authenticity, accountability, non-repudiation and reliability can also be
involved."
2. "The protection of information and information systems from unauthorized access, use,
disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity,
and availability."
3. "Ensures that only authorized users (confidentiality) have access to accurate and complete
information (integrity) when required (availability)."
4. "Information Security is the process of protecting the intellectual property of an organization."
5. "...information security is a risk management discipline, whose job is to manage the cost of
information risk to the business."
6. "A well-informed sense of assurance that information risks and controls are in balance."
7. "Information security is the protection of information and minimizes the risk of exposing
information to unauthorized parties."

1.2 Computer security


Computer security is a branch of computer technology known as information security as applied to
computers and networks. The objective of computer security includes protection of information and
property from theft, corruption, or natural disaster, while allowing the information and property to remain
accessible and productive to its intended users. The term computer system security means the collective
processes and mechanisms by which sensitive and valuable information and services are protected from
publication, tampering or collapse by unauthorized activities or untrustworthy individuals and unplanned
events respectively. The strategies and methodologies of computer security often differ from most other
computer technologies because of its somewhat elusive objective of preventing unwanted computer
behavior instead of enabling wanted computer behavior.

 Computer data often travels from one computer to another, leaving the safety of its protected
physical surroundings. Once the data is out of hand, people with bad intention could modify or
forge your data, either for amusement or for their own benefit.
 Cryptography can reformat and transform our data, making it safer on its trip between computers.
The technology is based on the essentials of secret codes, augmented by modern mathematics that
protects our data in powerful ways.

What is the difference among Computer, Network and Internet Security?

1. Computer Security – generic name for the collection of tools designed to protect data and to
thwart hackers.
2. Network Security – measures to protect data during their transmission.
3. Internet Security – measures to protect data during their transmission over a collection of
interconnected networks.

1.2.1 Why Security?

Computer security is required because most organizations can be damaged by hostile software or
intruders (hostile here meaning unfriendly or acting against the organization's interests). There may be
several forms of damage, which are obviously interrelated. These include:

 Damage or destruction of computer systems.
 Damage or destruction of internal data.
 Loss of sensitive information to hostile parties.
 Use of sensitive information to steal items of monetary value.
 Use of sensitive information against the organization's customers, which may result in legal action
by customers against the organization and loss of customers.
 Damage to the reputation of an organization.
 Monetary damage due to loss of sensitive information, destruction of data, hostile use of sensitive
data, or damage to the organization's reputation.

1.2.2 Principles of Security (Goals)


Confidentiality, integrity and availability form what is often referred to as the CIA triad (Figure 1.1). The
three concepts embody the fundamental security objectives for both data and for information and
computing services. FIPS PUB 199 provides a useful characterization of these three objectives in terms of
requirements and the definition of a loss of security in each category:

Figure 1.1: goals of security

1.2.3 The OSI Security Architecture
To assess effectively the security needs of an organization and to evaluate and choose various security
products and policies, the manager responsible for security needs some systematic way of defining the
requirements for security and characterizing the approaches to satisfying those requirements. The OSI
security architecture was developed in the context of the OSI protocol architecture. However, for our
purposes in this chapter, an understanding of the OSI protocol architecture is not required.

For our purposes, the OSI security architecture provides a useful, if abstract, overview of many of the
concepts. The OSI security architecture focuses on security attacks, mechanisms, and services. These can
be defined briefly as follows:

Threat: A potential for violation of security, which exists when there is a circumstance, capability,
action, or event that could breach security and cause harm. That is, a threat is a possible danger that might
exploit a vulnerability.

Attack: An assault on system security that derives from an intelligent threat; that is, an intelligent act that
is a deliberate attempt (especially in the sense of a method or technique) to evade security services and
violate the security policy of a system.

1.3. Security Attacks, Services and Mechanisms


 To assess the security needs of an organization effectively, the manager responsible for security
needs some systematic way of defining the requirements for security and characterization of
approaches to satisfy those requirements.
 One approach is to consider three aspects of information security:

1. Security attack –Any action that compromises the security of information owned by an
organization.
2. Security mechanism –A mechanism that is designed to detect, prevent or recover from a security
attack.
3. Security service –A service that enhances the security of the data processing systems and the
information transfers of an organization. The services are intended to counter security attacks and
they make use of one or more security mechanisms to provide the service.

1.3.1 Security Services


The classification of security services are as follows:

1. Confidentiality: Ensures that the information in a computer system and transmitted information
are accessible only for reading by authorized parties. Such access includes printing, displaying and
other forms of disclosure.
2. Authentication: Ensures that the origin of a message or electronic document is correctly
identified, with an assurance that the identity is not false.

3. Integrity: Ensures that only authorized parties are able to modify computer system assets and
transmitted information. Modification includes writing, changing status, deleting, creating and
delaying or replaying of transmitted messages.
4. Non repudiation: Requires that neither the sender nor the receiver of a message be able to deny
the transmission.
5. Access control: Requires that access to information resources be controlled by or for the target
system.
6. Availability: Requires that computer system assets be available to authorized parties when
needed.

Security Services (X.800)

Authentication: The assurance that the communicating entity is the one that it claims to be.

 Peer Entity Authentication: Used in association with a logical connection to provide confidence
in the identity of the entities connected.
 Data Origin Authentication: In a connectionless transfer, provides assurance that the source of
received data is as claimed.

Access Control: The prevention of unauthorized use of a resource (i.e., this service controls who can have
access to a resource, under what conditions access can occur, and what those accessing the resource are
allowed to do).

Data Confidentiality: The protection of data from unauthorized disclosure.

 Connection Confidentiality: The protection of all user data on a connection.


 Connectionless Confidentiality: The protection of all user data in a single data block.
 Selective-Field Confidentiality: The confidentiality of selected fields within the user data on a
connection or in a single data block.
 Traffic Flow Confidentiality: The protection of the information that might be derived from
observation of traffic flows.

Data Integrity:

 Connection Integrity with Recovery: Provides for the integrity of all user data on a connection
and detects any modification, insertion, deletion, or replay of any data within an entire data
sequence, with recovery attempted.
 Connection Integrity without Recovery: As above, but provides only detection without
recovery.
 Selective-Field Connection Integrity: Provides for the integrity of selected fields within the user
data of a data block transferred over a connection and takes the form of determination of whether
the selected fields have been modified, inserted, deleted, or replayed.
 Connectionless Integrity: Provides for the integrity of a single connectionless data block and
may take the form of detection of data modification. Additionally, a limited form of replay
detection may be provided.
 Selective-Field Connectionless Integrity: Provides for the integrity of selected fields within a
single connectionless data block; takes the form of determination of whether the selected fields
have been modified.

Non-Repudiation: Provides protection against denial by one of the entities involved in a communication
of having participated in all or part of the communication

 Nonrepudiation, Origin: Proof that the message was sent by the specified party.
 Nonrepudiation, Destination: Proof that the message was received by the specified party.

Security Attacks

Security attacks can be classified in terms of Passive attacks and Active attacks as per X.800 and RFC
2828.
There are four general categories of attack which are listed below.

 Interruption: An asset of the system is destroyed or becomes unavailable or unusable. This is an
attack on availability. Examples: destruction of a piece of hardware, cutting of a communication
line, or disabling of the file management system.

Figure 1.2: interruption attack

 Interception: An unauthorized party gains access to an asset. This is an attack on confidentiality.


Unauthorized party could be a person, a program or a computer.

Examples: wiretapping to capture data in the network, illicit copying of files

Figure 1.3: interception attack

 Modification: An unauthorized party not only gains access to but tampers with an asset. This is
an attack on integrity.

Examples: changing values in data file, altering a program, modifying the contents of messages being
transmitted in a network.

Figure 1.4: modification attack

 Fabrication: An unauthorized party inserts counterfeit objects into the system. This is an attack
on authenticity.

Examples: insertion of spurious message in a network or addition of records to a file.

Figure 1.5: fabrication attack

A useful categorization of these attacks is in terms of

 Passive attacks: do not affect system resources.


 Active attacks: try to alter system resources or affect their operation.

1. Passive attack

Passive attacks are in the nature of eavesdropping on, or monitoring of, transmissions. The goal of the
opponent is to obtain information that is being transmitted. Passive attacks are of two types:

 Release of message contents: A telephone conversation, an e-mail message and a transferred file
may contain sensitive or confidential information. We would like to prevent the opponent from
learning the contents of these transmissions.

Figure 1.6: Release of message contents

 Traffic analysis: If we had encryption protection in place, an opponent might still be able to
observe the pattern of the message. The opponent could determine the location and identity of
communication hosts and could observe the frequency and length of messages being exchanged.
This information might be useful in guessing the nature of communication that was taking place.

Figure 1.7: Traffic analysis

Passive attacks are very difficult to detect because they do not involve any alteration of data. However, it
is feasible to prevent the success of these attacks.

2. Active attacks

These attacks involve some modification of the data stream or the creation of a false stream. These attacks
can be classified into four categories:

 Masquerade –One entity pretends to be a different entity.

Figure 1.8: Masquerade

 Replay – involves passive capture of a data unit and its subsequent retransmission to produce an
unauthorized effect.

Figure 1.9: Replay

 Modification of messages – Some portion of a legitimate message is altered, or messages are delayed
or reordered, to produce an unauthorized effect.

Figure 1.10: Modification of messages

 Denial of service –Prevents or inhibits the normal use or management of communication


facilities. Another form of service denial is the disruption of an entire network, either by disabling
the network or overloading it with messages so as to degrade performance.

Figure 1.11: Denial of service

It is quite difficult to prevent active attacks absolutely, because to do so would require physical protection
of all communication facilities and paths at all times. Instead, the goal is to detect them and to recover
from any disruption or delays caused by them.

1.3.2 Security Mechanisms:


According to X.800, the security mechanisms are divided into those implemented in a specific protocol
layer and those that are not specific to any particular protocol layer or security service. X.800 also
differentiates reversible & irreversible encipherment mechanisms. A reversible encipherment mechanism
is simply an encryption algorithm that allows data to be encrypted and subsequently decrypted, whereas
irreversible encipherment mechanisms include hash algorithms and message authentication codes used in digital
signature and message authentication applications.

Specific Security Mechanisms:

Incorporated into the appropriate protocol layer in order to provide some of the OSI security services:

 Encipherment: It refers to the process of applying mathematical algorithms for converting data
into a form that is not intelligible. The transformation depends on the algorithm used and the encryption keys.
 Digital Signature: The appended data or a cryptographic transformation applied to any data unit
that allows a recipient to prove the source and integrity of the data unit and to protect against forgery.
 Access Control: A variety of techniques used for enforcing access permissions to the system
resources.
 Data Integrity: A variety of mechanisms used to assure the integrity of a data unit or stream of
data units.
 Authentication Exchange: A mechanism intended to ensure the identity of an entity by means of
information exchange.
 Traffic Padding: The insertion of bits into gaps in a data stream to frustrate traffic analysis
attempts.
 Routing Control: Enables selection of particular physically secure routes for certain data and
allows routing changes once a breach of security is suspected.
 Notarization: The use of a trusted third party to assure certain properties of a data exchange

Pervasive Security Mechanisms:

These are not specific to any particular OSI security service or protocol layer.

 Trusted Functionality: That which is perceived to be correct with respect to some criteria
 Security Level: The marking bound to a resource (which may be a data unit) that names or
designates the security attributes of that resource.
 Event Detection: It is the process of detecting all the events related to network security.
 Security Audit Trail: Data collected and potentially used to facilitate a security audit, which is
an independent review and examination of system records and activities.
 Security Recovery: It deals with requests from mechanisms, such as event handling and
management functions, and takes recovery actions.

1.4. A Model for Network Security

Figure 1.12: model for network security

Data is transmitted over network between two communicating parties, who must cooperate for the
exchange to take place. A logical information channel is established by defining a route through the
internet from source to destination by use of communication protocols by the two parties. Whenever an
opponent presents a threat to the confidentiality or authenticity of information, security aspects come into
play. Two components are present in almost all security-providing techniques:

 A security-related transformation on the information to be sent, making it unreadable by the
opponent, and the addition of a code based on the contents of the message, used to verify the
identity of the sender.
 Some secret information shared by the two principals and, it is hoped, unknown to the opponent.
An example is an encryption key used in conjunction with the transformation to scramble the
message before transmission and unscramble it on reception. (A minimal sketch of these two
components appears after the list of design tasks below.)

A trusted third party may be needed to achieve secure transmission. It is responsible for distributing the
secret information to the two parties, while keeping it away from any opponent. It also may be needed to
settle disputes between the two parties regarding authenticity of a message transmission.

The general model shows that there are four basic tasks in designing a particular security service:

1. Design an algorithm for performing the security-related transformation. The algorithm should be
such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the security algorithm and
the secret information to achieve a particular security service
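
The sketch below illustrates, in Python, the two components mentioned earlier: a security-related transformation of the message and a verification code derived from secret information shared by the two principals. The XOR "cipher" is only a toy stand-in for a real encryption algorithm, and the shared key is a made-up example; this is not presented as the design of any particular system.

import hmac
import hashlib
from itertools import cycle

SHARED_SECRET = b"key-distributed-by-a-trusted-third-party"   # hypothetical shared secret

def transform(message, key):
    # Toy reversible transformation (XOR stream); real designs use vetted ciphers
    return bytes(m ^ k for m, k in zip(message, cycle(key)))

def protect(message):
    # Sender side: transform the message and append a code based on its contents
    ciphertext = transform(message, SHARED_SECRET)
    tag = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    return ciphertext, tag            # travels over the insecure logical channel

def recover(ciphertext, tag):
    # Receiver side: undo the transformation, then verify the sender's code
    message = transform(ciphertext, SHARED_SECRET)     # XOR is its own inverse
    expected = hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed verification: possible tampering")
    return message

ciphertext, tag = protect(b"transfer 100 birr to account 42")
print(recover(ciphertext, tag))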

Various other threats to information systems, like unwanted access, still exist. The existence of hackers
attempting to penetrate systems accessible over a network remains a concern. Another threat is placement
of some logic in computer system affecting various applications and utility programs. This inserted code
presents two kinds of threats:

 Information access threats intercept or modify data on behalf of users who should not have access
to that data
 Service threats exploit service flaws in computers to inhibit use by legitimate users.

Viruses and worms are two examples of software attacks inserted into the system by means of a disk or
also across the network. The security mechanisms needed to cope with unwanted access fall into two
broad categories:

Figure 1.13: network access security

Network Access Security Model

 Placing a gatekeeper function, which includes password-based login procedures that grant access
only to authorized users, and screening logic to detect and reject worms, viruses, etc.
 Internal controls that monitor system activity, analyze the stored information, and detect the
presence of unauthorized users or intruders.

1.5. Enterprise security


What Is Enterprise security?

Enterprise security is about building systems to remain dependable in the face of malice,
error, or mischance. As a discipline, it focuses on the tools, processes, and methods needed to design,
implement, and test complete systems, and to adapt existing systems as their environment evolves.
Enterprise security requires cross-disciplinary expertise, ranging from cryptography and computer
security through hardware tamper-resistance and formal methods to knowledge of economics, applied
psychology, organizations and the law. System engineering skills, from business process analysis through
software engineering to evaluation and testing, are also important; but they are not sufficient, as they deal
only with error and mischance rather than malice. Many security systems have critical assurance
requirements.

Their failure may endanger human life and the environment (as with nuclear safety and control systems),
do serious damage to major economic infrastructure (cash machines and other bank systems), endanger
personal privacy (medical record systems), undermine the viability of whole business sectors (pay-TV),
and facilitate crime (burglar and car alarms). Even the perception that a system is more vulnerable than it
really is (paying with a credit card over the Internet) can significantly hold up economic development.
The conventional view is that while software engineering is about ensuring that certain things happen
('John can read this file'), security is about ensuring that they don't ('The Chinese government can't read

this file'). Reality is much more complex. Security requirements differ greatly from one system to
another. One typically needs some combination of user authentication, transaction integrity and
accountability, fault-tolerance, message secrecy, and covertness. But many systems fail because their
designers protect the wrong things, or protect the right things but in the wrong way.

A Framework

Good enterprise security requires four things to come together. There's policy: what you're supposed to
achieve. There's mechanism: the ciphers, access controls, hardware tamper-resistance and other
machinery that you assemble in order to implement the policy. There's assurance: the amount of reliance
you can place on each particular mechanism. Finally, there's incentive: the motive that the people
guarding and maintaining the system have to do their job properly, and also the motive that the attackers
have to try to defeat your policy. All of these interact (see Figure 1.14).

Figure 1.14: Enterprise Security Analysis Framework

1.6. Cyber Defense


Definition – What does Cyber Defense mean?

Cyber defense is a computer network defense mechanism which includes response to actions and
critical infrastructure protection and information assurance for organizations, government entities
and other possible networks. Cyber defense focuses on preventing, detecting and providing
timely responses to attacks or threats so that no infrastructure or information is tampered with.
With the growth in volume as well as complexity of cyber-attacks, cyber defense is essential for
most entities in order to protect sensitive information as well as to safeguard assets. With the
understanding of the specific environment, cyber defense analyzes the different threats possible
to the given environment. It then helps in devising and driving the strategies necessary to counter
the malicious attacks or threats. A wide range of different activities is involved in cyber defense
for protecting the concerned entity as well as for the rapid response to a threat landscape. These
could include reducing the appeal of the environment to the possible attackers, understanding the
critical locations & sensitive information, enacting preventative controls to ensure attacks would
be expensive, attack detection capability and reaction and response capabilities. Cyber defense
also carries out technical analysis to identify the paths and areas the attackers could target. Cyber
defense provides the much-needed assurance to run the processes and activities, free from

333
worries about threats. It helps in enhancing the security strategy utilizations and resources in the
most effective fashion. Cyber defense also helps in improving the effectiveness of the security
resources and security expenses, especially in critical locations.

Cyber Defense protects your most important business assets against attack.

By aligning the knowledge of the threats you face with an understanding of your environment,
you are able to maximize the effectiveness of your security spend and target your resources at the
critical locations. All of this is driven from your business strategy by identifying where it may be
at risk from a range of threats from a malicious insider right through to Advanced Persistent
Threats (APT). Cyber Defense covers a wide range of activities that are essential in enabling
your business to protect itself against attack and respond to a rapidly evolving threat landscape.
This will include cyber deterrents to reduce your appeal to the attackers, preventative controls
that require their attacks to be more costly, attack detection capability to spot when they are
targeting you and reaction and response capabilities to repel them. Typically a Cyber Defense
engagement will include a range of services that are aimed at long term assurance of your
business, from the understanding of how security impacts your business strategy and priorities,
through to training and guidance that enables your employees to establish the right security
culture. At the same time the engagement will include specialist technical analysis and
investigation to ensure that you can map out and protect the paths the attackers will use to
compromise your most sensitive assets.

These activities will also enable you to obtain evidence of any threats that may already have
breached your defenses and providing the capability to manage or remove them as needed. Using
this blend of services, Cyber Defense provides the assurances you need to run your business free
from worry about the threats that it faces and to ensure that your security strategy utilizes your
resources in the most effective manner.

Enterprise Security within an Enterprise Architecture Context:

Definitions

Many of the terms used in Enterprise security are straightforward, but some are misleading or
even controversial. There are more detailed definitions of technical terms in the relevant
chapters, which you can find using the index. The first thing we need to clarify is what we mean
by system. In practice, this can denote:

1. A product or component, such as a cryptographic protocol, a smartcard or the hardware


of a PC;
2. A collection of the above plus an operating system, communications and other things that
go to make up an organization‗s infrastructure;

3. The above plus one or more applications (media player, browser, word processor,
accounts / payroll package, and so on);
4. Any or all of the above plus IT staff;
5. Any or all of the above plus internal users and management;
6. Any or all of the above plus customers and other external users.

Enterprise Security Architecture: Establishing the Business Context

A business-driven approach to enterprise security architecture means that security is about


enabling the objective of an organization by controlling operational risk. This business-driven
approach becomes a key differentiator to existing security practices that are focused solely on
identifying threats to an enterprise and technical vulnerabilities in IT infrastructure, and
subsequently implementing controls to mitigate the risks introduced. A purely threat-based
approach to risk management fails to enable effective security and business operations. The term
security will carry very different meanings to different organizations. For example, consider
security as it relates to a military organization and security related to an online retailer that
processes credit card information. The business models for these two organizations will be very
different and, as a result, the security programs should be unique and relevant to their underlying
businesses. A military organization may determine that the most critical asset to protect is the life
of its soldiers as they are engaged in military operations. To provide assurance as to the safety of
a soldier, complex security architectures are needed to protect information and information
systems that could impact the soldiers‘ safety. Solutions could range from ensuring that logistic
systems that manage the delivery of supplies, food, and ammunitions remain available and data
integrity is protected to protecting confidentiality of mission plans and military intelligence that,
if compromised, could cause considerable harm to war fighters. Conversely, an online retailer is
likely most concerned with compliance with standards set by the payment card industry. These
standards are tailored to protect the confidentiality of personal information and the integrity of
transactions. An online retailer may have lower thresholds for availability than a military
logistics system. The needs for confidentiality, availability, and integrity of data must be
balanced and appropriate to the business activity. Developing security architecture begins with
an understanding of the business, which is achieved by defining business drivers and attributes.
A business driver is related to the organization's strategies, operational plans, and key elements
considered critical to success. A business attribute is a key property of the strategic objectives
that needs to be enabled or protected by the enterprise security program. An organization's
senior executives, who set the long-term strategy and direction of the business, can typically
provide knowledge regarding business drivers. The drivers are often reflected in an
organization's mission and vision statement. Consider our military organization, which may have
a strategic objective of "operational excellence". This business driver can be distilled into
relevant attributes that require assurance to satisfy the overarching business driver. Conversely,
the online retailer may have a strategic objective of being "customer focused", as expressed in
their vision statement to provide a superior online shopping experience. Business attributes can

generally be identified through an understanding of the business drivers that are set by the top
levels of an organization. Security architects will often conduct structured interviews with senior
management in order to identify business attributes by determining the essence of what is
conveyed by high-level business drivers. In the example of the business driver labeled
"operational excellence", the executives might be referring to the availability, reliability, and
safety of their operations and resources. In this case, the business attributes defined are
"available", "safe", and "reliable". Each attribute is then linked to the business driver they
support. This pairing of a business driver and attribute results in the creation of a proxy asset.
Again, building on our example, a sample proxy asset is "operational excellence" with the
attribute of "available". Each proxy asset is owned by the organization and is assessed as having
value to them. The fact that the proxy asset has value sets the requirement that it should be
protected. The value of these proxy assets is difficult to define given that they are often
intangible and exist at a very high level. Despite being unable to assign a monetary value to a
proxy asset, it is still possible to identify risks that may act against the asset. Our online retailer
may have attributes of "confidential", "reputable", and "error-free". An inventory of proxy
assets can be maintained by the security architect and will be considered as key assets to the
organization. This is later used to conduct a business threat and risk assessment to identify risks
to the business. It is through a business threat and risk assessment that the sometimes-competing
aspects of confidentiality, integrity, and availability can be reconciled. When the overall
objective and needs of a business are understood, through proxy assets, then impact can be
understood as it relates to confidentiality, integrity, and availability. Understanding of the
business helps prioritize which of these elements is most important, and which aspects of the
business are most in need of protection.

Example 1 – A Bank
Banks operate a surprisingly large range of security-critical computer systems.

1. The core of a bank‗s operations is usually a branch bookkeeping system. This keeps
customer account master files plus a number of journals that record the day‗s
transactions. The main threat to this system is the bank‗s own staff; about one percent of
bankers are fired each year, mostly for petty dishonesty (the average theft is only a few
thousand dollars). The main defense comes from bookkeeping procedures that have
evolved over centuries. For example, each debit against one account must be matched by
an equal and opposite credit against another; so money can only be moved within a bank,
never created or destroyed. In addition, large transfers of money might need two or three
people to authorize them. There are also alarm systems that look for unusual volumes or
patterns of transactions, and staff is required to take regular vacations during which they
have no access to the bank‗s premises or systems.
2. One public face of the bank is its automatic teller machines. Authenticating transactions
based on a customer‗s card and personal identification number in such a way as to defend
against both outside and inside attack is harder than it looks! There have been many

epidemics of 'phantom withdrawals' in various countries when local villains (or bank
staff) have found and exploited loopholes in the system. Automatic teller machines are
also interesting as they were the first large scale commercial use of cryptography, and
they helped establish a number of crypto standards.
3. Another public face is the bank‗s website. Many customers now do more of their routine
business, such as bill payments and transfers between savings and checking accounts,
online rather than at a branch. Bank websites have come under heavy attack recently
from phishing from bogus websites into which customers are invited to enter their
passwords. The 'standard' internet security mechanisms designed in the 1990s, such as
SSL/TLS, turned out to be ineffective once capable motivated opponents started attacking
the customers rather than the bank. Phishing is a fascinating Enterprise security problem
mixing elements from authentication, usability, psychology, operations and economics.
4. Behind the scenes are a number of high-value messaging systems. These are used to
move large sums of money (whether between local banks or between banks
internationally); to trade in securities; to issue letters of credit and guarantees; and so on.
An attack on such a system is the dream of the sophisticated white-collar criminal. The
defense is a mixture of bookkeeping procedures, access controls, and cryptography.
5. The bank‗s branches will often appear to be large, solid and prosperous, giving customers
the psychological message that their money is safe. This is theatre rather than reality: the
stone façade gives no real protection. If you walk in with a gun, the tellers will give you
all the cash you can see; and if you break in at night, you can cut into the safe or strong
room in a couple of minutes with an abrasive wheel. The effective controls these days
center on the alarm systems which are in constant communication with a security
company‗s control center. Cryptography is used to prevent a robber or burglar
manipulating the communications and making the alarm appear to say 'all's well' when
it isn't.

Example 2 – A Military Base


Military systems have also been an important technology driver. They have motivated much of
the academic research that governments have funded into computer security in the last 20 years.
As with banking, there is not one single application but many.

1. Some of the most sophisticated installations are the electronic warfare systems whose
goals include trying to jam enemy radars while preventing the enemy from jamming
yours. This area of information warfare is particularly instructive because for decades,
well-funded research labs have been developing sophisticated counter measures and so
on with a depth, subtlety and range of deception strategies that are still not found
elsewhere. As I write, in 2007, a lot of work is being done on adapting jammers to disable
improvised explosive devices that make life hazardous for allied troops in Iraq.
Electronic warfare has given many valuable insights: issues such as spoofing and service-

denial attacks were live there long before bankers and bookmakers started having
problems with bad guys targeting their websites.
2. Military communication systems have some interesting requirements. It is often not
sufficient to just encipher messages: the enemy, on seeing traffic encrypted with
somebody else‗s keys, may simply locate the transmitter and attack it. Low-probability-
of-intercept (LPI) radio links are one answer; they use a number of tricks that are now
being adopted in applications such as copyright marking. Covert communications are also
important in some privacy applications, such as in defeating the Internet censorship
imposed by repressive regimes.
3. Military organizations have some of the biggest systems for logistics and inventory
management, which differ from commercial systems in having a number of special
assurance requirements. For example, one may have a separate stores management
system at each different security level: a general system for things like jet fuel and boot
polish, plus a second secret system for stores and equipment whose location might give
away tactical intentions. (This is very like the businessman who keeps separate sets of
books for his partners and for the tax man, and can cause similar problems for the poor
auditor.) There may also be intelligence systems and command systems with even higher
protection requirements. The general rule is that sensitive information may not flow
down to less restrictive classifications. So you can copy a file from a Secret stores system
to a Top Secret command system, but not vice versa. The same rule applies to
intelligence systems which collect data using wiretaps: information must flow up to the
intelligence analyst from the target of investigation, but the target must not know which
of his communications have been intercepted. Managing multiple systems with
information flow restrictions is a hard problem and has inspired a lot of research. Since
9/11, for example, the drive to link up intelligence systems has led people to invent
search engines that can index material at multiple levels and show users only the answers
they are cleared to know.
4. The particular problems of protecting nuclear weapons have given rise over the last two
generations to a lot of interesting security technology, ranging from electronic
authentication systems that prevent weapons being used without the permission of the
national command authority, through seals and alarm systems, to methods of identifying
people with a high degree of certainty using biometrics such as iris patterns. The civilian
security engineer can learn a lot from all this. For example, many early systems for
inserting copyright marks into digital audio and video, which used ideas from spread-
spectrum radio, were vulnerable to resynchronization attacks that are also a problem for
some spread-spectrum systems. Another example comes from munitions management.
There, a typical system enforces rules such as 'Don't put explosives and detonators in the
same truck'. Such techniques can be recycled in food logistics where hygiene rules forbid
raw and cooked meats being handled together.

Example3 A Hospital
From soldiers and food hygiene we move on to healthcare. Hospitals have a number of
interesting protection requirements mostly to do with patient safety and privacy.

1. Patient record systems should not let all the staff see every patient‗s record, or privacy
violations can be expected. They need to implement rules such as 'nurses can see the
records of any patient who has been cared for in their department at any time during the
previous 90 days'. This can be hard to do with traditional computer security mechanisms
as roles can change (nurses move from one department to another) and there are cross-
system dependencies(if the patient records system ends up relying on the personnel
system for access control decisions, then the personnel system may just have become
critical for safety, for privacy or for both).
2. Patient records are often anonymized for use in research, but this is hard to do well.
Simply encrypting patient names is usually not enough as an enquiry such as 'show me
all records of 59 year old males who were treated for a broken collarbone on September
15th 1966' would usually be enough to find the record of a politician who was known to
have sustained such an injury at college. But if records cannot be anonymized properly,
then much stricter rules have to be followed when handling the data, and this increases
the cost of medical research.
3. Web-based technologies present interesting new assurance problems in healthcare. For
example, as reference books such as directories of drugs move online, doctors need
assurance that life-critical data, such as the figures for dosage per body weight, are
exactly as published by the relevant authority, and have not been mangled in some way.
Another example is that as doctors start to access patients‗ records from home or from
laptops or even PDAs during house calls, suitable electronic authentication and
encryption tools are starting to be required.
4. New technology can introduce risks that are just not understood. Hospital administrators
understand the need for backup procedures to deal with outages of power, telephone
service and so on; but medical practice is rapidly coming to depend on the net in ways
that are often not documented. For example, hospitals in Britain are starting to use online
radiology systems: X-rays no longer travel from the X-ray machine to the operating
theatre in an envelope, but via a server in a distant town. So a network failure can stop
doctors operating just as much as a power failure. All of a sudden, the Internet turns into
a safety-critical system, and denial-of-service attacks might kill people.

Example 4 – The Home
You might not think that the typical family operates any secure systems. But consider the
following.

1. Many families use some of the systems we've already described. You may use a web-
based electronic banking system to pay bills, and in a few years you may have encrypted
online access to your medical records. Your burglar alarm may send an encrypted 'all's
well' signal to the security company every few minutes, rather than waking up the
neighborhood when something happens.
2. Your car probably has an electronic immobilizer that sends an encrypted challenge to a
radio transponder in the key fob; the transponder has to respond correctly before the car
will start (a simplified sketch of this challenge-response exchange follows this list). This
makes theft harder and cuts your insurance premiums. But it also increases
the number of car thefts from homes, where the house is burgled to get the car keys. The
really hard edge is a surge in car-jacking: criminals who want a getaway car may just take
one at gunpoint.
3. Early mobile phones were easy for villains to 'clone': users could suddenly find their
bills inflated by hundreds or even thousands of dollars. The current GSM digital mobile
phones authenticate themselves to the network by a cryptographic challenge-response
protocol similar to the ones used in car door locks and immobilizers.
4. Satellite TV set-top boxes decipher movies so long as you keep paying your subscription.
DVD players use copy control mechanisms based on cryptography and copyright
marking to make it harder to copy disks (or to play them outside a certain geographic
area). Authentication protocols can now also be used to set up secure communications on
home networks (including WiFi, Bluetooth and Home Plug).

5. In many countries, households who can't get credit can get prepayment meters for
electricity and gas, which they top up using a smartcard or other electronic key which
they refill at a local store. Many universities use similar technologies to get students to
pay for photocopier use, washing machines and even soft drinks.
6. Above all, the home provides a haven of physical security and seclusion. Technological
progress will impact this in many ways. Advances in locksmithing mean that most
common house locks can be defeated easily; does this matter? Research suggests that
burglars aren't worried by locks as much as by occupants, so perhaps it doesn't matter
much but then maybe alarms will become more important for keeping intruders at bay
when no-one's at home. Electronic intrusion might over time become a bigger issue, as

more and more devices start to communicate with central services. The security of your
home may come to depend on remote systems over which you have little control.
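
The following is a simplified, illustrative sketch (in Python, with a made-up shared secret) of the challenge-response idea used by the car immobilizer and GSM examples above: the verifier issues a fresh random challenge, and the other party proves knowledge of the shared secret without revealing it, so a recorded response cannot simply be replayed.

import hmac
import hashlib
import os

CAR_SECRET = b"secret-shared-between-car-and-key-fob"   # hypothetical shared secret

def car_issue_challenge():
    return os.urandom(16)                 # a fresh random nonce defeats replay

def fob_respond(challenge):
    return hmac.new(CAR_SECRET, challenge, hashlib.sha256).digest()

def car_verify(challenge, response):
    expected = hmac.new(CAR_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = car_issue_challenge()
response = fob_respond(challenge)
print(car_verify(challenge, response))                # True: engine starts
print(car_verify(car_issue_challenge(), response))    # False: a replayed response is rejected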

UNIT - II

2.1. Introduction
Human beings from the earliest ages have had two inherent needs:

1. To communicate and share information, and
2. To communicate selectively.

These two needs gave rise to the art of coding messages in such a way that only the intended
people could have access to the information. Unauthorized people could not extract any
information, even if the scrambled messages fell into their hands. The art and science of
concealing messages to introduce secrecy in information security is recognized as cryptography.
The word 'cryptography' was coined by combining two Greek words, 'krypto' meaning hidden
and 'graphein' meaning writing.

What is Cryptography?

Cryptography is about constructing and analyzing protocols that prevent third parties or the public from
reading private messages; various aspects in information security such as data confidentiality, data
integrity, authentication, and non-repudiation are central to modern cryptography.
Modern cryptography exists at the intersection of the disciplines of mathematics, computer science,
electrical engineering, communication science, and physics.
Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies,
computer passwords, and military communications.
Cryptography is associated with the process of converting ordinary plain text into unintelligible text and
vice-versa. It is a method of storing and transmitting data in a particular form so that only those for whom
it is intended can read and process it. Cryptography not only protects data from theft or alteration, but can
also be used for user authentication.
Earlier cryptography was effectively synonymous with encryption but nowadays cryptography is mainly
based on mathematical theory and computer science practice.

Modern cryptography concerns with:

1. Confidentiality – Information cannot be understood by anyone for whom it was unintended.


2. Integrity – Information cannot be altered.
3. Non-repudiation – Sender cannot deny his/her intentions in the transmission of the information at
a later stage.
4. Authentication – Sender and receiver can confirm each other's identity and the origin/destination of the information.

 Cryptography is used in many applications like banking transaction cards, computer passwords,
and e-commerce transactions. Three types of cryptographic techniques are used in general:

1. Symmetric-key cryptography
2. Hash functions.
3. Public-key cryptography

 Symmetric-key Cryptography: Both the sender and receiver share a single key. The sender uses
this key to encrypt plaintext and send the cipher text to the receiver. On the other side the receiver
applies the same key to decrypt the message and recover the plain text.
 Public-Key Cryptography: This is the most revolutionary concept in the last 300-400 years. In
public-key cryptography two related keys (a public and a private key) are used. The public key may be
freely distributed, while its paired private key remains secret. The public key is used for
encryption and the private key is used for decryption.
 Hash Functions: No key is used in this algorithm. A fixed-length hash value is computed from the plain
text in a way that makes it infeasible for the contents of the plain text to be recovered from the hash. Hash
functions are also used by many operating systems to protect stored passwords.
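
A brief, self-contained illustration of the hash-function idea, using Python's standard hashlib module (the messages are invented examples): changing a single character of the input produces a completely different fixed-length digest, which is why recomputing and comparing digests reveals tampering.

import hashlib

msg1 = b"Pay Alice 100 birr"
msg2 = b"Pay Alice 900 birr"          # one character changed

print(hashlib.sha256(msg1).hexdigest())
print(hashlib.sha256(msg2).hexdigest())   # completely different digest

# The digest has a fixed length regardless of input size, and it is
# computationally infeasible to recover the original message from it.

Real symmetric-key and public-key encryption should use a vetted cryptographic library rather than hand-rolled code; the standard library alone does not provide those primitives.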

2.2. Ecommerce Security


2.2.1. Online Security Issues Overview

 Computer security – The protection of assets from unauthorized access, use, alteration, or
destruction.
 Physical security – Includes tangible protection devices.
 Logical security – Protection of assets using nonphysical means.
 Threat – Any act or object that poses a danger to computer assets.
E-commerce Security is a part of the Information Security framework and is specifically applied
to the components that affect e-commerce that include Computer Security, Data security and
other wider realms of the Information Security framework. E-commerce security has its own
particular nuances and is one of the highest visible security components that affect the end user
through their daily payment interaction with business. E-commerce security is the protection of
e-commerce assets from unauthorized access, use, alteration, or destruction. The dimensions of
e-commerce security are integrity, non-repudiation, authenticity, confidentiality, privacy, and
availability. E-commerce offers the banking industry great opportunity, but also creates a set of
new risks and vulnerabilities such as security threats. Information security, therefore, is an
essential management and technical requirement for any efficient and effective payment
transaction activity over the internet. Still, its definition is a complex endeavour due to constant
technological and business change, and it requires a coordinated mix of algorithms and technical
solutions.

2.2.2 Ecommerce Security Issues

E-commerce security is the protection of e-commerce assets from unauthorized access, use,
alteration, or destruction. While security features do not guarantee a secure system, they are
necessary to build a secure system. Security features have four categories:

 Authentication: Verifies who you say you are. It enforces that you are the only one
allowed to log on to your Internet banking account (a brief sketch of password-based
authentication follows this list).
 Authorization: Allows only you to manipulate your resources in specific ways. This
prevents you from increasing the balance of your account or deleting a bill.
 Encryption: Deals with information hiding. It ensures you cannot spy on others during
Internet banking transactions.
 Auditing: Keeps a record of operations. Merchants use auditing to prove that you bought
specific merchandise.
 Integrity: prevention against unauthorized data modification
 Nonrepudiation: prevention against any one party from reneging on an agreement after
the fact
 Availability: prevention against data delays or removal.
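
The sketch below, in Python, shows one common way the Authentication feature above is implemented for logins: passwords are never stored in clear text, only a random salt and a slow, salted hash of them. The function names and the example passwords are illustrative only and are not taken from any particular e-commerce platform.

import hashlib
import hmac
import os

def register(password):
    # Store a random salt and a slow, salted hash instead of the password itself
    salt = os.urandom(16)
    stored = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return salt, stored                   # persist both in the user record

def login(password, salt, stored):
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return hmac.compare_digest(attempt, stored)   # constant-time comparison

salt, stored = register("correct horse battery staple")
print(login("correct horse battery staple", salt, stored))   # True
print(login("guess123", salt, stored))                        # False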

2.2.3. E-Commerce Security Tools


 Firewalls – Software and Hardware
 Public Key infrastructure
 Encryption software
 Digital certificates
 Digital Signatures
 Biometrics – retinal scan, fingerprints, voice, etc.
 Passwords
 Locks and bars – network operations center

2.2.4. Purpose of Security in E-Commerce

1. Data Confidentiality – is provided by encryption /decryption.


2. Authentication and Identification – ensuring that someone is who he or she claims to be is
implemented with digital signatures.
3. Access Control – governs what resources a user may access on the system. Uses valid IDs and
passwords.
4. Data Integrity – ensures information has not been tampered with. It is implemented by a message
digest or hashing (a minimal sketch follows this list).
5. Non-repudiation – ensures that a party cannot later deny a sale or purchase
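As a minimal illustration of point 4 above (data integrity through a message digest), the Python sketch below uses the standard-library hmac module to attach a keyed digest to an order. The shared key and the order string are hypothetical; any change to the order changes the tag, so tampering is detected.

import hashlib
import hmac

SHARED_KEY = b"hypothetical-shared-secret"      # agreed between buyer and merchant

def integrity_tag(message):
    # Keyed message digest (HMAC-SHA256) over the order contents.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

order = b"item=book;qty=1;price=9.99"
tag = integrity_tag(order)

# The receiver recomputes the tag; a tampered order no longer matches.
print(hmac.compare_digest(tag, integrity_tag(order)))                          # True
print(hmac.compare_digest(tag, integrity_tag(b"item=book;qty=1;price=0.01")))  # False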

 SECURITY THREATS
o Three types of security threats
 denial of service,
 unauthorized access, and
 theft and fraud
o Denial of Service (DOS)

 Two primary types of DOS attacks: spamming and viruses
o Spamming
 Sending unsolicited commercial emails to individuals
 E-mail bombing caused by a hacker targeting one computer or network, and
sending thousands of email messages to it.

Surfing involves hackers placing software agents onto a third-party system and setting it off to send
requests to an intended target. A DDoS (distributed denial of service) attack involves hackers placing
software agents onto a number of third-party systems and setting them off to simultaneously send requests
to an intended target.

2.3. What is Web Security?

 Web security is also known as "cyber security". It basically means protecting a website
or web application by detecting, preventing and responding to cyber threats.
 Websites and web applications are just as prone to security breaches as physical homes,
stores, and government locations. Unfortunately, cybercrime happens every day, and
great web security measures are needed to protect websites and web applications from
becoming compromised.
 That‗s exactly what web security does – it is a system of protection measures and
protocols that can protect your website or web application from being hacked or entered
by unauthorized personnel. This integral division of Information Security is vital to the
protection of websites, web applications, and web services. Anything that is applied over
the Internet should have some form of web security to protect it.
 The web poses some additional security troubles because:
o so very many different computers are involved in any networked environment;
o the fundamental protocols of the Internet were not designed with security in mind;
and,
o The physical infrastructure of the Internet is not owned or controlled by any one
organization, and no guarantees can be made concerning the integrity and security
of any part of the Internet.
 Unfortunately, a web-based system is often advertised as "secure" merely because the
web server uses SSL encryption to protect portions of the site. As we'll soon see, there is
a great deal more to the story than that.

Details of Web Security

 There are a lot of factors that go into web security and web protection. Any website or
application that is secure is surely backed by different types of checkpoints and
techniques for keeping it safe.

 There are a variety of security standards that must be followed at all times, and these
standards are implemented and highlighted by the OWASP. Most experienced web
developers from top cyber security companies will follow the standards of the OWASP
as well as keep a close eye on the Web Hacking Incident Database to see when, how, and
why different people are hacking different websites and services.
 Essential steps in protecting web apps from attacks include applying up-to-date
encryption, setting proper authentication, continuously patching discovered
vulnerabilities, avoiding data theft by having secure software development practices. The
reality is that clever attackers may be competent enough to find flaws even in a fairly
robust secured environment, and so a holistic security strategy is advised.

Available Technology

There are different types of technologies available for maintaining the best security standards. Some
popular technical solutions for testing, building, and preventing threats include:

 Black box testing tools


 Fuzzing tools
 White box testing tools
 Web application firewalls (WAF)
 Security or vulnerability scanners
 Password cracking tools

Likelihood of Threat

 Your website or web application's security depends on the protection tools that have been
deployed and tested on it. There are a few major threats to security, which are the most common
ways in which a website or web application becomes hacked. Some of the top vulnerabilities for
all web-based services include:
o SQL injection
o Password breach
o Cross-site scripting
o Data breach
o Remote file inclusion
o Code injection
 Preventing these common threats is the key to making sure that your web-based service is
practicing the best methods of security.

The Best Strategies

 There are two big defense strategies that a developer can use to protect their website or web
application. The two main methods are as follows:

1. Resource assignment – By assigning all necessary resources to causes that are dedicated to
alerting the developer about new web security issues and threats, the developer can receive a
constant and updated alert system that will help them detect and eradicate any threats before
security is officially breached.
2. Web scanning – There are several web scanning solutions already in existence that are available
for purchase or download. These solutions, however, are only good for known vulnerability
threats – seeking unknown threats can be much more complicated. This method can protect
against many breaches, however, and is proven to keep websites safe in the long run.

Web Security also protects the visitors from the below-mentioned points –

 Stolen Data: Cyber-criminals frequently hack visitors' data stored on a website, such as email
addresses, payment information, and a few other details.
 Phishing schemes: This is not just related to email, but through phishing, hackers design a layout
that looks exactly like the website to trick the user by compelling them to give their sensitive
details.
 Session hijacking: Certain cyber attackers can take over a user‗s session and compel them to take
undesired actions on a site.
 Malicious redirects. Sometimes the attacks can redirect visitors from the site they visited to a
malicious website.
 SEO Spam. Unusual links, pages, and comments can be displayed on a site by the hackers to
distract your visitors and drive traffic to malicious websites.

 Thus, web security is easy to install and it also helps businesses make their websites
safe and secure. A web application firewall prevents automated attacks that usually target small
or lesser-known websites. These attacks are carried out by malicious bots or malware that
automatically scan for vulnerabilities they can misuse, or cause DDoS attacks that slow down or
crash your website.
 Thus, Web security is extremely important, especially for websites or web applications that deal
with confidential, private, or protected information. Security methods are evolving to match the
different types of vulnerabilities that come into existence.

2.4. Layers Involved in Web Security


 Many "layers" must work in concert to produce a functioning web-based system. Each layer has
its own security vulnerabilities, and its own procedures and techniques for coping with these
vulnerabilities.
 We‘ll examine each such layer in turn, proceeding from the hardware (furthest from the end user)
to the web browser (closest to the end user).
 Keep in mind that many attacks take advantage of weaknesses in multiple layers. Even if one
such weakness does not expose the service to attack, that weakness in concert with others can be
used for nefarious purposes. The complexity of these layers‘ interaction only makes the job of the
security professional that much more difficult.

Hardware

 Physical access to computer hardware gives even a slightly skilled person total control of that
hardware. Without physical security to protect hardware (i.e. doors that lock) nothing else about a
computer system can be called secure.
 Of course, there are many ways in which malicious humans can attack hardware:
o Using operating system installation floppies and CDs to circumvent normal OS access
control to devices and hard disk contents;
o Physical removal or destruction of the hardware;
o Electromagnetic interference, including nuclear EMP munitions and e-bombs;
o Direct eavesdropping technologies such as keyboard loggers and network sniffers; and
o Indirect eavesdropping technologies such as van Eck phreaking (reconstituting the
display of a computer monitor in a remote location by gathering the radiation emitted
by that monitor).
 Hardware is also the layer most susceptible to natural occurrences:
o water and humidity;
o smoke and dust;
o heat and fire;
o lightning and other electrical phenomena;
o radiation, particularly alpha particles which can flip memory bits;
o flora and fauna, especially circuit board-eating molds and insects; and
o Weather and geological effects such as tornados, hurricanes, and earthquakes.
 Securing hardware is usually a matter of installing locking doors and electromagnetic shielding,
deploying redundant hardware in remote locations, installing temperature/moisture/air quality
controls and filters, performing and checking filesystem backups, and so forth.
 Networking hardware is susceptible to all of the above problems, but often must be exposed (i.e.
cables) which makes it a fine target for attack. Some simple things greatly improve the security of
LANs: installing switches instead of hubs to limit Ethernet's chatty broadcasts, thus making it
much harder to jack in and eavesdrop with a "promiscuous" NIC.

Operating System

 As the software charged with controlling access to the hardware, the file system, and the
network, weaknesses in an operating system are the most valued amongst crackers.
 When we speak here of an operating system, we really mean just the kernel, file
system(s), network software/stack, and authentication (who are you?) and authorization
(what can you do?) mechanisms.
 Most OS authentication is handled through user names and passwords. Biometric (e.g.
voice, face, retina, iris, fingerprint) and physical token-based (swipe cards, pin-generating
cards) authentication are sometimes used to augment simple passwords, but the costs and
accuracy of the technology limit their adoption.
 Once authenticated, the OS is responsible for enforcing authorization rules for a user‘s
account. The guiding thought here is the Principle of Least Privilege: disallow every
permission that isn‘t explicitly required.
 Protecting an operating system from attack is a cat-and-mouse game that requires constant
vigilance. Obviously, code patches must be applied (if the benefit of the patch is deemed
to outweigh the risk of changing a functioning system), but system logs must be gathered
and studied on a regular basis to identify suspicious activity.

 A number of tools can be used to strengthen and monitor the security of an OS:
o File system rights (sometimes called access control lists) and partition mount permissions
limit non-superuser accounts to only the files they require;
o disk quotas prevent users from intentionally or accidentally filling a disk, thereby
denying other users‘ access to the partition;
o file integrity-detection software (e.g. Tripwire) reports modifications to system-critical files
and directories;
o firewalls (i.e. packet filters, proxy servers, Network Address Translation, and
Virtual Private Networks) help to block out spurious network traffic, but don‘t
stop attacks on the layers that follow;
o intrusion-detection software (e.g. Snort) identify network-based attacks based on
a library of attack profiles; and
o Anti-virus software removes, disables, or warns about dangerous viruses, worms,
or Trojan horses.
 For server computers, the most important rule is to only install and run those software
packages that are absolutely required. The more programs that are running, the greater
the opportunity for someone to find a hole in the defenses.

Service

 For our purposes, a "service" is any class of software that typically runs unattended on a
server-style computer and performs some task in response to a network-originated
request. Web servers (e.g. Apache, IIS, including server-side scripting platforms), FTP
servers, email servers (e.g. Sendmail, qmail, Exim), Telnet and SSH servers, file and
print servers (e.g. SMB/Samba), and database servers (e.g. Oracle, SQL Server, MySQL,
DB2, PostgreSQL) are all examples of these services.
 The most common attack on these services involves a buffer overflow: sending a
message containing too much data for a limited storage space in the computer‘s memory,
overflowing the bounds of that space and in some cases executing the code that was
delivered at the end of the message.
 The infamous Code Red and Code Red II worms of 2001 sent overly long query strings
to the Microsoft IIS webserver's indexing service, inducing a buffer overflow and
allowing the worm to propagate and damage the infected system. In an Apache
webserver's log, such an attack appears as a single, very long request line: a GET for an
.ida resource padded with hundreds of filler characters and followed by encoded shellcode.
 Some attacks against the service layer are intended to change the service‘s normal OS
user account to that of an account with greater permissions, ideally the superuser account
(a.k.a. privilege escalation). Once that has been accomplished, other attacks using higher
layers of the web service can do more damage.
 Keeping on top of the latest patches for service software helps prevent these sorts of
exploits, but the best defense is to limit the access the service has to the computer it runs
on, and to other computers in its network neighborhood. The former is often

accomplished by running the service using an OS user account with minimal privileges
(e.g. "nobody" for Apache), and by restricting the service to a closed-off region of the file
system. The latter is accomplished by setting up DMZ-like network configurations.

Data

 Data is often an organization's most valuable IT asset, so the nonchalant treatment of its
security is often surprising. What is not surprising is that crackers know this and most of their
efforts are ultimately focused on displaying, corrupting, or stealing an organization's
data.
 Measures for protecting the database server hardware and the RDBMS service have
already been discussed.
 Backups of critical (and what data is not critical?) data must obviously be performed on a
regular schedule, and must also be checked periodically so that it‘s known that all data is
backed up properly and that the backup media is functioning.
 Backup media must also be removed to remote sites to guard against large-scale natural
disaster. Transporting the media must be performed by trusted couriers.
 Finally, backups should be encrypted in some way to prevent any of the many people
that come into contact with the media from reading all of the organization‘s data. In
practice, this encryption is rarely performed.
 Since most web applications use some form of special application account to access the
database, the permissions granted to this account must follow the Principle of Least
Privilege. While not a complete solution, this does reduce the chance that an application-
layer exploit, or a simple programming error, might damage the contents of the database.

Application

 The application layer consists of specialty software that performs the specific tasks
required of the web system. This software may be custom-written in-house or through
outsourcing, or may be purchased as a shrink-wrapped product. Generally, this sort of
software is not used by many different organizations, and so is not examined by as many
people for security defects. On the other hand, the relative obscurity of such software
means that few crackers will be aware of any such defects.
 The main vulnerability of web applications is Cross-Site Scripting (XSS).
 Cross-Site Scripting (a.k.a. XSS, script embedding, or script injection) is more an attack
on the users of a web application, than on the web system itself. It usually involves
injecting some client-side browser scripting code (i.e. JavaScript) into one of the

application‘s forms that, once displayed on the site, results in that code being run (on the
end user‘s browser). This code can do anything that client-side script code can do, but is
often used to redirect the user to another site for some malevolent purpose. Such script
code can also forward the user‘s session key to another site, so that the recipient of this
key can impersonate the legitimate owner of this key. The best method for defeating this
type of attack is to validate all input to a web application and disable (perhaps by
mapping to HTML entity codes) any special HTML characters such as, but not limited to:
"<", ">", and "&".
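A minimal Python sketch of that last step, mapping special HTML characters to entity codes with the standard-library html module (the injected script shown is a made-up example, and Python is used only for illustration):

import html

user_input = '<script>alert("session=" + document.cookie)</script>'

# html.escape maps "<", ">", "&" (and quotes) to HTML entity codes, so the
# browser displays the text instead of executing it as script.
safe_output = html.escape(user_input, quote=True)
print(safe_output)
# &lt;script&gt;alert(&quot;session=&quot; + document.cookie)&lt;/script&gt;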

Network Protocol

 It is at the network protocol layer that most of the web system security is addressed by
product marketing departments. While important, as we‘ve seen this is only one piece of
a very large pie.
 The primary technology that protects the web application protocol in question, HTTP, is
the Secure Sockets Layer (SSL), now renamed Transport Layer Security (TLS). TLS
provides both authentication and encryption services to communicating computers using
digital certificates issued by Certificate Authorities (CAs) also known as Trust
Authorities.
 TLS encrypts all data between client browser and webserver for those pages where it is
deemed necessary (identified by URLs beginning with https://). The level and
method of encryption is negotiated between client and server, but relies on public key
cryptography to scramble and digitally sign the message. Encryption protects the message
from:
o Eavesdropping, or simple monitoring of the unprotected traffic;
o Modification of the message to erroneous or meaningless data;
o Man-in-the-middle attacks, which allow the attacker to interpose himself between
the client and the server, relaying messages between both, while modifying some
to his own ends; and,
o Replay attacks, used by the attacker to retransmit the same message over and over
to the server in order to execute web application functionality over and over,
usually to detriment of the original sender.
 TLS also provides authentication through the same digital certificate. In most cases, this
means that the user can verify that the web application they are visiting is indeed
registered to the company that purports to provide this service.
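As a rough illustration of these two services, the short Python sketch below opens a TLS connection with the standard-library ssl module; the default context both verifies the server's certificate against trusted CAs and negotiates an encrypted channel. The host name is a placeholder.

import socket
import ssl

HOST = "www.example.com"     # placeholder HTTPS server

# create_default_context() enables certificate verification and hostname checking.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Protocol:", tls_sock.version())          # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())
        print("Server certificate subject:", tls_sock.getpeercert().get("subject"))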

Browser

 Unfortunately, given the design of the HTTP protocol (even when secured through
SSL/TLS), there is very little that can be done to protect the web system at the browser
layer. Hence, web applications may never trust any data originating from a client
browser.
 TLS-based client digital certificates can be used to more positively identify clients to
servers, but they are as yet rarely used, partially because of expense, but also because
they are difficult to move from one client computer to another, thereby diminishing one
of the benefits of web systems: client location transparency.

2.5. Public-Key Infrastructure (PKI)

 A public key infrastructure (PKI) is a set of roles, policies, hardware, software and
procedures needed to create, manage, distribute, use, store and revoke digital certificates
and manage public-key encryption.
 Public Key Infrastructure (PKI) is a technology for authenticating users and devices in
the digital world. The basic idea is to have one or more trusted parties digitally sign
documents certifying that a particular cryptographic key belongs to a particular user or
device. The key can then be used as an identity for the user in digital networks.
 The users and devices that have keys are often just called entities. In general, anything
can be associated with a key that it can use as its identity. Besides a user or device, it
could be a program, process, manufacturer, component, or something else. The purpose
of a PKI is to securely associate a key with an entity.
 The trusted party signing the document associating the key with the device is called a
certificate authority (CA). The certificate authority also has a cryptographic key that it
uses for signing these documents. These documents are called certificates.
 In the real world, there are many certificate authorities, and most computers and web
browsers trust a hundred or so certificate authorities by default.
 A public key infrastructure relies on digital signature technology, which uses public key
cryptography. The basic idea is that the secret key of each entity is only known by that
entity and is used for signing. This key is called the private key. There is another key
derived from it, called the public key, which is used for verifying signatures but cannot
be used to sign. This public key is made available to anyone, and is typically included in
the certificate document.
 The purpose of a PKI is to facilitate the secure electronic transfer of information for a
range of network activities such as e-commerce, internet banking and confidential email.
It is required for activities where simple passwords are an inadequate authentication
method and more rigorous proof is required to confirm the identity of the parties involved
in the communication and to validate the information being transferred.
 In cryptography, a PKI is an arrangement that binds public keys with respective identities
of entities (like people and organizations). The binding is established through a process of
registration and issuance of certificates at and by a certificate authority (CA). Depending

352
on the assurance level of the binding, this may be carried out by an automated process or
under human supervision.
 The PKI role that assures valid and correct registration is called a registration authority
(RA). An RA is responsible for accepting requests for digital certificates and
authenticating the entity making the request. In a Microsoft PKI, a registration authority
is usually called a subordinate CA.
 An entity must be uniquely identifiable within each CA domain on the basis of
information about that entity. A third-party validation authority (VA) can provide this
entity information on behalf of the CA.
 The X.509 standard defines the most commonly used format for public key certificates.

In cyberspace there is a need to verify the identities of individuals for a number of purposes.
Some of these events include sending and receiving secure email, sending and receiving signed
email, setting up a secure session (SSL), and accessing a protected resource. The way in which
this goal of authentication is accomplished is by verifying that a public key belongs to an
individual that you know and trust. Public-Key Infrastructure is designed to allow this kind of
authentication.

Diffie Hellman Public-Key Encryption

One way to associate public keys to individuals is by publishing a mapping of names to keys.
This directory would act much like the White Pages does for distributing phone numbers based
on name. The directory must be trusted; therefore it must be authentic but need not be secret.
Entries would be of the form (name, public key).

Digital Certificates

Digital certificates were proposed by Loren Kohnfelder at MIT in a B.S. thesis in 1978. A certificate
is an authenticated identifier pairing a public key to a meaningful name. This allows any user to
verify that binding and establishes trust between the certificate holder and a verifier who trusts the
certificate authority (CA). The CA is assumed to correctly identify the person who has requested the
certificate.
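Signing and verification work the same way for a certificate as for any other message. Below is a minimal sketch using the widely used third-party Python package cryptography (an assumption; the original text names no library), with a toy byte string standing in for the certificate contents.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The CA's key pair (generated once; the private key is kept secret).
ca_private_key = Ed25519PrivateKey.generate()
ca_public_key = ca_private_key.public_key()

# Toy "to-be-signed" certificate: a name paired with a subject's public key.
tbs_certificate = b"subject=alice@example.com;subject_public_key=..."

signature = ca_private_key.sign(tbs_certificate)      # only the CA can produce this

try:
    ca_public_key.verify(signature, tbs_certificate)  # anyone can check it
    print("certificate signature is valid")
except InvalidSignature:
    print("certificate or signature has been altered")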

Advantages

 Sender can include his certificate in an email or post it on the Web


 Receiver only needs to know the Certificate Authority and its PK
 Sender may have more than one key (e.g., one for signing, one for encryption)
 Certificates can have a validity period (not before / not after a certain time)

Difficulty Issues

 Scalability
o Need multiple CAs
o Naming (unique? human-readable?)
 Robustness
o Compromised keys? (Especially the root key!)
 Certificate as Credential (Attribute Certificate instead of ID certificate)
 Trustworthiness of CA and procedures; liability?
 Privacy, Anonymity

Naming

PK infrastructure has a very intimate link with naming. We want a system that is easy to use for
people, similar to that of file names. The naming relationship should be as follows:

 Names are for people to use.


 Keys are for machines to use.
 PKI can provide a binding between the two.

Naming is a large issue. Since the CA has the burden of properly identifying and labeling the parties with
certificates, names must be made clear and accurate.

Naming provides an interface between people and cyberspace. People must then write security policy
based on the name associated with a PK used to sign message. Writers of such policies need to
know/understand the relationship between keys and names.

Desirable naming properties

 Descriptive
 Global uniqueness
 Dynamic

Examples

 Role (purchasing agent at IBM)


 Legal names
 Email
 Phone numbers ("enum")
 Mail address

X.509

X.509 is one of the most popular standards specifying the contents of a digital certificate. One of the main
goals of X.509 is global uniqueness of names.

X.509 Hierarchical Structure

X.509 maintains properties of distinguished names (DNs) and is organized in a hierarchical structure.
This naming scheme for certificates traverses through local CAs until arriving at a specific name. Each
local CA is responsible only for certificates in its specific domain.

A major problem with DNs here is that a single point of failure can disrupt the system. The structure
itself is also awkward.

What’s included in X.509 version 3 certificates?

 Version #
 Certificate Serial #
 Signature Algorithm Identifier
 Issuer Distinguished Name (DN)
 Validity Period
 Subject DN
 Subject PK Information
o algorithm identifier
o associated key parameters
 Issuer Unique #
 Subject Unique #
 Extensions
o key usage
o certificate policies
o subject/issuer alternate names
o path constraints
o criticality bits
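For illustration, the hedged Python sketch below reads a PEM-encoded certificate file (the filename is hypothetical) and prints several of the fields listed above, assuming the third-party cryptography package is available.

from cryptography import x509

with open("server_cert.pem", "rb") as f:              # hypothetical certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:            ", cert.version)
print("Serial number:      ", cert.serial_number)
print("Signature algorithm:", cert.signature_algorithm_oid)
print("Issuer DN:          ", cert.issuer.rfc4514_string())
print("Validity period:    ", cert.not_valid_before, "to", cert.not_valid_after)
print("Subject DN:         ", cert.subject.rfc4514_string())
for extension in cert.extensions:                     # v3 extensions (key usage, etc.)
    print("Extension:          ", extension.oid)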

2.6. Enterprise information security architecture


Enterprise information security architecture (EISA) is a part of enterprise architecture focusing on
information security throughout the enterprise. The name implies a difference that may not exist between
small/medium-sized businesses and larger organizations.

Overview

Enterprise information security architecture (EISA) is the practice of applying a comprehensive and
rigorous method for describing a current and/or future structure and behavior for an organization‘s
security processes, information security systems, personnel and organizational sub-units, so that they
align with the organization‘s core goals and strategic direction. Although often associated strictly with
information security technology, it relates more broadly to the security practice of business optimization
in that it addresses business security architecture, performance management and security process
architecture as well.

Enterprise information security architecture is becoming a common practice within the financial
institutions around the globe. The primary purpose of creating enterprise information security architecture
is to ensure that business strategy and IT security are aligned. As such, enterprise information security
architecture allows traceability from the business strategy down to the underlying technology.

Figure 2.1: Enterprise information security architecture

Positioning

Enterprise information security architecture was first formally positioned by Gartner in its whitepaper
"Incorporating Security into the Enterprise Architecture Process", published on 24 January 2006. Since
this publication, security architecture has moved from being a silo-based architecture to an
enterprise-focused solution that incorporates business, information and technology. The figure above
represents a one-dimensional view of enterprise architecture as a service-oriented architecture. It also
reflects the new addition to the enterprise architecture family called "Security". Business architecture,
information architecture and technology architecture used to be called BIT for short. Now, with security
as part of the architecture family, it has become BITS.

Security architectural change imperatives now include things like

 Business roadmaps
 Legislative and legal requirements
 Technology roadmaps
 Industry trends
 Risk trends
 Visionaries

Goals

 Provide structure, coherence and cohesiveness.


 Must enable business-to-security alignment.
 Defined top-down beginning with business strategy.
 Ensure that all models and implementations can be traced back to the business strategy, specific
business requirements and key principles.

 Provide abstraction so that complicating factors, such as geography and technology religion, can
be removed and reinstated at different levels of detail only when required.
 Establish a common "language" for information security within the organization

Methodology

The practice of enterprise information security architecture involves developing an architecture security
framework to describe a series of "current", "intermediate" and "target" reference architectures and
applying them to align programs of change. These frameworks detail the organizations, roles, entities and
relationships that exist or should exist to perform a set of business processes. This framework will provide
a rigorous taxonomy and ontology that clearly identifies what processes a business performs and detailed
information about how those processes are executed and secured. The end product is a set of artifacts that
describe in varying degrees of detail exactly what and how a business operates and what security controls
are required. These artifacts are often graphical.

Given these descriptions, whose levels of detail will vary according to affordability and other practical
considerations, decision makers are provided the means to make informed decisions about where to invest
resources, where to realign organizational goals and processes, and what policies and procedures will
support core missions or business functions.

A strong enterprise information security architecture process helps to answer basic questions like:

 What is the information security risk posture of the organization?


 Is the current architecture supporting and adding value to the security of the organization?
 How might security architecture be modified so that it adds more value to the organization?
 Based on what we know about what the organization wants to accomplish in the future, will the
current security architecture support or hinder that?

Implementing enterprise information security architecture generally starts with documenting the
organization‘s strategy and other necessary details such as where and how it operates. The process then
cascades down to documenting discrete core competencies, business processes, and how the organization
interacts with itself and with external parties such as customers, suppliers, and government entities.

Having documented the organization‘s strategy and structure, the architecture process then flows down
into the discrete information technology components such as:

 Organization charts, activities, and process flows of how the IT Organization operates
 Organization cycles, periods and timing
 Suppliers of technology hardware, software, and services
 Applications and software inventories and diagrams
 Interfaces between applications – that is: events, messages and data flows
 Intranet, Extranet, Internet, e-Commerce, EDI links with parties within and outside of the
organization
 Data classifications, Databases and supporting data models
 Hardware, platforms, hosting: servers, network components and security devices and where they
are kept
 Local and wide area networks, Internet connectivity diagrams

Wherever possible, all of the above should be related explicitly to the organization‘s strategy, goals, and
operations. The enterprise information security architecture will document the current state of the

technical security components listed above, as well as an ideal-world desired future state (Reference
Architecture) and finally a "Target" future state which is the result of engineering tradeoffs and
compromises vs. the ideal. Essentially the result is a nested and interrelated set of models, usually
managed and maintained with specialized software available on the market.

Such exhaustive mapping of IT dependencies has notable overlaps with both metadata in the general IT
sense, and with the ITIL concept of the configuration management database. Maintaining the accuracy of
such data can be a significant challenge.

Along with the models and diagrams goes a set of best practices aimed at securing adaptability,
scalability, manageability etc. These systems engineering best practices are not unique to enterprise
information security architecture but are essential to its success nonetheless. They involve such things as
componentization, asynchronous communication between major components, and standardization of key
identifiers and so on.

Successful application of enterprise information security architecture requires appropriate positioning in


the organization. The analogy of city-planning is often invoked in this connection, and is instructive.

An intermediate outcome of an architecture process is a comprehensive inventory of business security


strategy, business security processes, organizational charts, technical security inventories, system and
interface diagrams, and network topologies, and the explicit relationships between them. The inventories
and diagrams are merely tools that support decision making. But this is not sufficient. It must be a living
process.

The organization must design and implement a process that ensures continual movement from the current
state to the future state. The future state will generally be a combination of one or more

 Closing gaps that are present between the current organization strategy and the ability of the IT
security dimensions to support it
 Closing gaps that are present between the desired future organization strategy and the ability of
the security dimensions to support it
 Necessary upgrades and replacements that must be made to the IT security architecture based on
supplier viability, age and performance of hardware and software, capacity issues, known or
anticipated regulatory requirements, and other issues not driven explicitly by the organization‘s
functional management.
 On a regular basis, the current state and future state are redefined to account for evolution of the
architecture, changes in organizational strategy, and purely external factors such as changes in
technology and customer/vendor/government requirements, and changes to both internal and
external threat landscapes over time.

2.7. An Overview of Intrusion Detection and Prevention Systems
IDS Concepts

Intrusion detection (ID) is the process of monitoring for and identifying specific malicious traffic. Most
network administrators do ID all the time without realizing it. Security administrators are constantly
checking system and security log files for something suspicious. An antivirus scanner is an ID system
when it checks files and disks for known malware. Administrators use other security audit tools to look
for inappropriate rights, elevated privileges, altered permissions, incorrect group memberships,
unauthorized registry changes, malicious file manipulation, inactive user accounts, and unauthorized
applications.

An IDS can take the form of a software program installed on an operating system, but today‗s commercial
network-sniffing IDS/IPS typically takes the form of a hardware appliance because of performance
requirements. An IDS uses either a packet-level network interface driver to intercept packet traffic or it
―hooks‖ the operating system to insert inspection subroutines. An IDS is a sort of virtual food-taster,
deployed primarily for early detection, but increasingly used to prevent attacks.

When the IDS notices a possible malicious threat, called an event, it logs the transaction and takes
appropriate action. The action may simply be to continue to log, send an alert, redirect the attack, or
prevent the maliciousness. If the threat is high risk, the IDS will alert the appropriate people. Alerts can
be sent by e-mail, Simple Network Management Protocol (SNMP), pager, SMTP to a mobile device, or
console broadcast. An IDS supports the defense-in-depth security principle and can be used to detect a
wide range of rogue events, including but not limited to the following:

 Impersonation attempts
 Password cracking
 Protocol attacks
 Buffer overflows
 Installation of rootkits
 Rogue commands
 Software vulnerability exploits
 Malicious code, like viruses, worms, and Trojans
 Illegal data manipulation
 Unauthorized file access
 Denial of service (DoS) attacks

Threat Types

To really understand IDS, you must understand the security threats and exploits it can detect and prevent.
Threats can be classified as attacks or misuse, and they can exploit network protocols or work as
malicious content at the application layer.

Attacks or Misuse

Attacks are unauthorized activity with malicious intent using specially crafted code or techniques. Attacks
include denial of service, virus or worm infections, buffer overflows, malformed requests, file corruption,
malformed network packets, or unauthorized program execution.

Misuse refers to unauthorized events without specially crafted code. In this case, the offending person
used normally crafted traffic or requests and their implicit level of authorization to do something
malicious. Misuse can also refer to unintended consequences, such as when a hapless new user overwrites
a critical document with a blank page. Another misuse event could be a user mapping a drive to a file
server share not intended by the network administrator.

Regardless of how an alert is detected, the administrator groups all alerts into one of four categories:

 True positives (correct escalation of important events)


 False positives (incorrect escalation of unimportant events)
 True negatives (correct ignorance of unimportant events)
 False negatives (incorrect ignorance of important events)

Network Protocol Attacks

Many of the security threats detected by an IDS exploit network protocols (layers two and three of the OSI
model). Network protocols such as TCP/IP define standard ways of transmitting data to facilitate open
communications. The data is sent in a packet (layer three), which is then encapsulated into a layer two
frame, which is then transmitted as packages of electronic bits (1s and 0s) framed in a particular format
defined by a network protocol—but the protocols do not contemplate the consequences of malicious
packet creation. This is because protocols are designed to perform functions, not to be secure.

Flag Exploits: Abnormally crafted network packets are typically used for DoS attacks on host machines,
to skirt past network perimeter defenses (bypassing access control devices), to impersonate another user‗s
session (attack on integrity), or to crash a host‗s IP stack (DoS). Malicious network traffic works by
playing tricks with the legitimate format settings of the IP protocol. For instance, using a specially crafted
tool, an attacker can set incompatible sequences of TCP flags, causing destination host machines to issue
responses other than the normal responses, resulting in session hijacking or more typically a DoS
condition.

Other examples of maliciously formed TCP traffic include an attacker setting an ACK flag in an
originating session packet without sending an initial SYN packet to initiate traffic, or sending a SYN and
FIN (start and stop) combination at the same time. TCP flags can be set in multiple ways and each
generates a response that can either identify the target system, determine if a stateful packet-inspecting
device is in front of the target, or create a no-response condition. Port scanners often use different types of
scans to determine whether the destination port is open or closed, even if firewall-like blocking
mechanisms are installed to stop normal port scanners.

Fragmentation and Reassembly Attacks: Although not quite the security threat they once were, IP
packets can be used in fragmentation attacks. TCP/IP fragmentation is allowed because all routers have a
maximum transmission unit (MTU), which is the maximum number of bytes that they can send in a single
packet. A large packet can be broken down into multiple smaller packets (known as fragments) and sent
from source to destination. A fragment offset value located in each fragment tells the destination IP host
how to reassemble the separate packets back into the larger packet.

Attacks can use fragment offset values to cause the packets to maliciously reassemble and intentionally
force the reassembly of a malicious packet. If an IDS or firewall allows fragmentation and does not
reassemble the packets before inspection, an exploit may slip by. For example, suppose a firewall does
not allow FTP traffic, and an attacker sends fragmented packets posing as some other allowable traffic. If
the packets act as SMTP e-mail packets headed to destination port 25, they could be passed through, but
after they are past the firewall, they could reassemble to overwrite the original port number and become
FTP packets to destination port 21. The main advantage here for the attacker is stealth, which allows him
or her to bypass the IDS.
Today, most IDSs, operating systems, and firewalls have anti-fragmentation defenses. By default, a
Windows host will drop fragmented packets.

Application Attacks

Content Obfuscation: Most IDSs look for known malicious commands or data in a network packet's
data payload. A byte-by-byte comparison is done between the payload and each potential threat signature
in the IDS's database. If something matches, it is flagged as an event. This is how "signature-based"
IDSs work. Someone has to have the knowledge to write the "signature".

Because byte scanning is relatively easy to do, attackers use encoding schemes to hide their
malicious commands and content. Encoding schemes are non-plaintext character representations
that eventually get converted to plaintext for processing. The flexibility of the coding for
international languages on the Internet allows ASCII characters to be represented by many
different encoding schemes, including hexadecimal (base 16, in which the word "Hello" looks
like "48 65 6C 6C 6F"), decimal notation (where "Hello" is "72 101 108 108 111"), octal
(base 8, in which "Hello" appears as "110 145 154 154 157"), Unicode (where "Hello" =
"0048 0065 006C 006C 006F"), and any combination thereof. Web URLs and commands have
particularly flexible syntax. Complicating the issue, most browsers encountering common syntax
mistakes, like reversed slashes or incorrect case, convert them to their legitimate form.
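The alternative representations above can be reproduced with a few lines of Python, shown here only to make the encodings concrete:

text = "Hello"

print("hex:    ", " ".join(format(ord(c), "02X") for c in text))  # 48 65 6C 6C 6F
print("decimal:", " ".join(str(ord(c)) for c in text))            # 72 101 108 108 111
print("octal:  ", " ".join(format(ord(c), "o") for c in text))    # 110 145 154 154 157
print("unicode:", " ".join(format(ord(c), "04X") for c in text))  # 0048 0065 006C 006C 006F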

Data Normalization: An IDS signature database has to consider all character encoding schemes
and tricks that can end up creating the same malicious pattern. This task is usually accomplished
by normalizing the data before inspection. Normalization reassembles fragments into single
whole packets, converts encoded characters into plain ASCII text, fixes syntax mistakes,
removes extraneous characters, converts tabs to spaces, removes common hacker tricks, and does
its best to convert the data into its final intended form.
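A highly simplified Python sketch of the decoding step: percent-encoded characters are decoded, backslashes are converted, and case is folded before the payload is compared against signatures. The request string is a made-up directory-traversal example.

from urllib.parse import unquote

def normalize(request):
    # Decode %2F-style escapes, fix reversed slashes, and fold case so that
    # differently encoded forms of the same request compare equal.
    decoded = unquote(request)
    return decoded.replace("\\", "/").lower()

raw = "GET /scripts/..%2F..%2Fsystem32/cmd.exe HTTP/1.0"
print(normalize(raw))   # get /scripts/../../system32/cmd.exe http/1.0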

First-Generation IDS

IDS development as we know it today began in the early 1980s, but only started growing in the
PC marketplace in the late 1990s. First-generation IDSs focused almost exclusively on the
benefit of early warning resulting from accurate detection. This continues to be a base
requirement of IDS, and vendors frequently brag about their product‗s accuracy. The practical
reality is that while most IDSs are considered fairly accurate, no IDS has ever been close to
being perfectly accurate. Although a plethora of antivirus scanners enjoy year-after-year 95 to 99
percent accuracy rates, IDSs never get over 90 percent accuracy against a wide spectrum of real-
world attack traffic. Most are in the 80 percent range. Some test results show 100 percent
detection rates, but in every such instance, the IDS was tuned after several previous, less accurate
rounds of testing. When an IDS misses a legitimate threat, it is called a false negative. Most IDSs
are plagued with even higher false positive rates, however.

IDSs have high false positive rates. A false positive is when the IDS says there is a security
threat by "alerting", but the traffic is not malicious or was never intended to be malicious
(benign condition).

A common example is when an IDS flags an e-mail as infected with a particular virus because it
is looking for some key text known to be in the message body of the e-mail virus (for example,
the phrase "cheap pharmaceuticals"). When an e-mail intended to warn readers about the virus
includes the keywords that the reader should be on the lookout for, it can also create a false
positive. The IDS should be flagging the e-mail as infected only if it actually contains a virus,
not just if it has the same message text.

Simply searching for text within the message body to detect malware is an immature detection
choice. Many security web services that send subscribers early warning e-mails complain that
nearly 10 percent of their e-mails are kicked back by overly zealous IDSs. Many of those same
services have taken to purposely misrepresenting the warning text (by slightly changing the text,
such as "che4p_pharmaceut1cals") in a desperate attempt to get past the subscribers' poorly
configured defenses.

Second-Generation IDS

The net effect of most IDSs being fairly accurate and none being highly accurate has resulted in vendors
and administrators using other IDS features for differentiation. Here are some of those other features that
may be more or less useful in different circumstances:

 IDS type and detection model


 End-user interface
 IDS management
 Prevention mechanisms

 Performance
 Logging and alerting
 Reporting and analysis

First-generation IDSs focused on accurate attack detection. Second-generation IDSs do that and work to
simplify the administrator‗s life by offering a bountiful array of back-end options. They offer intuitive
end-user interfaces, intrusion prevention, centralized device management, event correlation, and data
analysis. Second-generation IDSs do more than just detect attacks—they sort them, prevent them, and
attempt to add as much value as they can beyond mere detection.

IDS Types and Detection Models

Depending on what assets you want to protect, an IDS can protect a host or a network. All IDSs follow
one of two intrusion detection models: anomaly detection (also called profile, behavior, heuristic, or
statistical detection) or signature (knowledge-based) detection, although some systems use parts of both
when it is advantageous. Both anomaly and signature detection work by monitoring a wide population of
events and triggering based on predefined behaviors.

Host-Based IDS

A host-based IDS (HIDS) is installed on the host it is intended to monitor. The host can be a server,
workstation, or any networked device (such as a printer, router, or gateway). A HIDS installs as a service
or daemon, or it modifies the underlying operating system‗s kernel or application to gain first inspection
authority. Although a HIDS may include the ability to sniff network traffic intended for the monitored
host, it excels at monitoring and reporting direct interactions at the application layer. Application attacks
can include memory modifications, maliciously crafted application requests, buffer overflows, or file-
modification attempts.

A HIDS can inspect each incoming command, looking for signs of maliciousness, or simply track
unauthorized file changes.

A file-integrity HIDS (sometimes called a snapshot or checksum HIDS) takes a cryptographic hash of
important files in a known clean state and then checks them again later for comparison. If any changes are
noted, the HIDS alerts the administrator that there may be a change in integrity.
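A bare-bones Python sketch of that snapshot-and-compare idea, using the standard-library hashlib; the watched file paths are placeholders:

import hashlib

WATCHED_FILES = ["/etc/passwd", "/etc/hosts"]        # placeholder critical files

def sha256_of(path):
    # Hash the file in chunks so large files do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot():
    # Record hashes while the system is believed to be in a clean state.
    return {path: sha256_of(path) for path in WATCHED_FILES}

def check(baseline):
    # Later comparison: any differing hash means the file has changed.
    for path, known_good in baseline.items():
        if sha256_of(path) != known_good:
            print("ALERT: integrity change detected in", path)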

A behavior-monitoring HIDS performs real-time monitoring and intercepts potentially malicious


behavior. For instance, a Windows HIDS reports on attempts to modify the registry, manipulate files,
access the system, change passwords, escalate privileges, and otherwise directly modify the host. On a
UNIX host, a behavior-monitoring HIDS may monitor attempts to access system binaries, attempts to
download password files, and change permissions and scheduled jobs. A behavior-monitoring HIDS on a
web server may monitor incoming requests and report maliciously crafted HTML responses, cross site
scripting attacks, or SQL injection code.

Network-Based IDS (NIDS)

Network-based IDSs (NIDSs) are the most popular IDSs, and they work by capturing and analyzing
network packets speeding by on the wire. Unlike a HIDS, a NIDS is designed to protect more than one
host. It can protect a group of computer hosts, like a server farm, or monitor an entire network. Captured
traffic is compared against protocol specifications and normal traffic trends or the packet‗s payload data is
examined for malicious content. If a security threat is noted, the event is logged and an alert is generated.

With a HIDS, you install the software on the host you want monitored and the software does all the work.
Because a NIDS works by examining network packet traffic, including traffic not intended for the NIDS
host on the network, it has a few extra deployment considerations. It is common for brand-new NIDS
users to spend hours wondering why their IDS isn‗t generating any alerts. Sometimes it‗s because there is
no threat traffic to alert on, and other times it‗s because the NIDS isn‗t set up to capture packets headed to
other hosts.

Packet-Level Drivers

Network packets are captured using a packet-level software driver bound to a network interface card.
Many Unix and Windows systems do not have native packet-level drivers built in, so IDS
implementations commonly rely on open source packet-level drivers. Most commercial IDSs have their
own packet-level drivers and packet-sniffing software.

Promiscuous Mode

For a NIDS to sniff packets, the packets have to be given to the packet-level driver by the network
interface card. By default, most network cards are not promiscuous, meaning they only read packets off
the wire that are intended for them. This typically includes unicast packets, meant solely for one particular
workstation, broadcast packets, meant for every computer that can listen to them, and multicast traffic,
meant for two or more previously defined hosts. Most networks contain unicast and broadcast traffic.
Multicast traffic isn‗t as common, but it is gaining in popularity for web-streaming applications. By
default, a network card in normal mode drops traffic destined for other computers and packets with
transmission anomalies (resulting from collisions, bad cabling, and so on). If you are going to set up an
IDS, make sure its network interface card has a promiscuous mode and is able to inspect all traffic
passing by on the wire.

Sensors for Network Segments

For the purposes of this chapter, a network segment can be defined as a single logical packet domain. For
a NIDS, this definition means that all network traffic heading to and from all computers on the same
network segment can be physically monitored.

You should have at least one NIDS inspection device per network segment to monitor a network
effectively. This device can be a fully operational IDS interface or, more commonly, a router or switch
interface to which all network traffic is copied, known as a span port, or a traffic repeater device, known
as a sensor or tap. One port plugs into the middle of a connection on the network segment to be
monitored, and the other plugs into a cable leading to the central IDS console.

Anomaly-Detection (AD) Model

Anomaly detection (AD) was proposed in 1985 by noted security laureate Dr. Dorothy E. Denning, and it
works by establishing accepted baselines and noting exceptional differences. Baselines can be established
for a particular computer host or for a particular network segment. Some IDS vendors refer to AD
systems as behavior-based since they look for deviating behaviors. If an IDS looks only at network packet
headers for differences, it is called protocol anomaly detection.

Several IDSs have anomaly-based detection engines. Several massively distributed AD systems monitor
the overall health of the Internet, and a handful of high-risk Internet threats have been minimized over the
last few years because unusual activity was noticed by a large number of correlated AD systems.

The goal of AD is to be able to detect a wide range of malicious intrusions, including those for which no
previous detection signature exists. By learning known good behaviors during a period of "profiling", in
which an AD system identifies and stores all the normal activities that occur on a system or network, it
can alert to everything else that doesn‗t fit the normal profile. Anomaly detection is statistical in nature
and works on the concept of measuring the number of events happening in a given time interval for a
monitored metric. A simple example is someone logging in with the incorrect password too many times,
causing an account to be locked out and generating a message to the security log. Anomaly detection IDS
expands the same concept to cover network traffic patterns, application events, and system utilization.

Here are some other events AD systems can monitor and trigger alerts from:

 Unusual user account activity


 Excessive file and object access
 High CPU utilization
 Inappropriate protocol use
 Unusual workstation login location
 Unusual login frequency
 High number of concurrent logins
 High number of sessions
 Any code manipulation
 Unexpected privileged use or escalation attempts
 Unusual content

Signature-Detection Model

Signature-detection (or misuse) IDSs are the most popular type of IDS, and they work by using databases of
known bad behaviors and patterns. This is nearly the exact opposite of AD systems. When you think of a
signature detection IDS, think of it as an antivirus scanner for network traffic. Signature-detection engines
can query any portion of a network packet or look for a specific series of data bytes. The defined patterns
of code are called signatures, and often they are included as part of a governing rule when used within an
IDS.
Signatures are byte sequences that are unique to a particular malady. A byte signature may contain a

sample of virus code, a malicious combination of keystrokes used in a buffer overflow, or text that
indicates the attacker is looking for the presence of a particular file in a particular directory. For
performance reasons, the signature must be crafted so it is the shortest possible sequence of bytes needed
to detect its related threat reliably. It must be highly accurate in detecting the threat and not cause false
positives. Signatures and rules can be collected together into larger sets called signature databases or rule
sets.
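A minimal Python sketch of the byte-matching idea; the signature byte strings and rule names below are illustrative and are not taken from any real rule set:

# Illustrative signature database: rule name -> byte sequence tied to a threat.
SIGNATURES = {
    "ida-buffer-overflow": b"/default.ida?NNNNNNNN",
    "directory-traversal": b"/../../",
}

def inspect_payload(payload):
    # Return the names of all signatures found anywhere in the packet payload.
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

packet_payload = b"GET /default.ida?" + b"N" * 200 + b"%u9090 HTTP/1.0"
matches = inspect_payload(packet_payload)
if matches:
    print("event logged; matched signatures:", matches)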

Intrusion-Prevention Systems (IPS)

Since the beginning, IDS developers have wanted the IDS to do more than just monitor and report
maliciousness. What good is a device that only tells you you‗ve been maligned when the real value is in
preventing the intrusion? That‗s like a car alarm telling you that your car has been stolen, after the fact.
Like intrusion detection, intrusion prevention has long been practiced by
network administrators as a daily part of their routine. Setting access controls, requiring passwords,
enabling real-time antivirus scanning, updating patches, and installing perimeter firewalls are all
examples of common intrusion-prevention controls. Intrusion-prevention controls, as they apply to IDSs,
involve real-time countermeasures taken against a specific, active threat. For example, the IDS might
notice a ping flood and deny all future traffic originating from the same IP address. Alternatively, a host-
based IDS might stop a malicious program from modifying system files.

Going far beyond mere monitoring and alerting, second-generation IDSs are being called intrusion-
prevention systems (IPSs). They either stop the attack or interact with an external system to put down the
threat.

If the IPS, as shown in Figure 2.2, is a mandatory inspection point with the ability to filter real-time
traffic, it is considered inline. Inline IPSs can drop packets, reset connections, and route suspicious traffic
to quarantined areas for inspection. If the IPS isn't inline and is only inspecting the traffic, it still can
instruct other network perimeter systems to stop an exploit. It may do this by sending scripted commands
to a firewall, instructing it to deny all traffic from the remote attacker's IP address, calling a virus scanner
to clean a malicious file, or simply telling the monitored host to deny the hacker's intended modification.

For an IPS to cooperate with an external device, they must share a common scripting language, API, or
some other communicating mechanism. Another common IPS method is for the IDS device to send reset
(RST) packets to both sides of the connection, forcing both source and destination hosts to drop the
communication. This method isn't seen as being very accurate, because often the successful exploit has
happened by the time a forced reset has occurred, and the sensors themselves can get in the way and drop
the RST packets.

Figure 2.2: IDS placed to drop malicious packets before they can enter the
network

UNIT - III

3.1. Securing Private Networks


 Minimize external access to LAN
 Done by means of firewalls and proxy servers
 Firewalls provide a secure interface between an "inner" trusted network and an "outer" untrusted
network.
 Every packet passing between the inner and outer networks is processed
 Firewalls require hardware and software to implement
 The software used includes proxies and filters that allow or deny network traffic access to either
network

3.2. Overview of Firewall


 A firewall is a router or other communications device which filters access to a protected network.
 A firewall is also a program that screens all incoming traffic and protects the network from
unwelcome intruders.
 It is a means of protecting a local system or network of systems from network-based security
threats,
o while affording access to the outside world via WANs or the Internet

Firewall Objectives

 Keep intruders, malicious code and unwanted traffic or information out


 Keep private and sensitive information in
 Provide a security wall between the private (protected) network and the outside world

Firewall features

 General Firewall Features


o Port Control
o Network Address Translation
o Application Monitoring
o Packet Filtering
o Access control
 Additional features
o Data encryption
o Authentication
o Connection relay (hide internal network)
o reporting/logging

o e-mail virus protection
o spyware protection
 Use one or both methods
o Packet filtering
o Proxy service
 It protects from
o Remote logins
o IP spoofing
o Source addressing
o SMTP session hijacking
o Spam
o Denial of service
o E-mail bombs

3.3. Firewall design principles

 Internet connectivity is no longer optional for most organizations. However, while Internet
access provides benefits to the organization, it enables the outside world to reach and interact
with local network assets. This creates a threat to the organization.
 While it is possible to equip each workstation and server on the premises network with strong
security features, such as intrusion protection, this is not a practical approach. The alternative,
increasingly accepted, is the firewall.
 The firewall is inserted between the premises network and the Internet to establish a controlled link
and to create an outer security wall or perimeter.
o The aim of this perimeter is to protect the premises network from internet based attacks
and to provide a single choke point where security and audit can be imposed.
 The firewall can be a single computer system or a set of two or more systems that cooperate to
perform the firewall function.
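
As a deliberately simplified illustration of the packet-filtering function performed at this choke point (the rule set and addresses below are hypothetical), the following Python sketch evaluates each packet against an ordered rule list and falls back to a default-deny policy:

import ipaddress

# Hypothetical rule set: first match wins, default is deny.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "dst_port": 443},   # internal HTTPS
    {"action": "allow", "src": "0.0.0.0/0",  "dst_port": 25},    # inbound mail
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    src = ipaddress.ip_address(src_ip)
    for rule in RULES:
        if src in ipaddress.ip_network(rule["src"]) and rule["dst_port"] == dst_port:
            return rule["action"]
    return "deny"   # default-deny if no rule matches

print(filter_packet("10.1.2.3", 443))    # allow
print(filter_packet("203.0.113.9", 23))  # deny (telnet from outside)

Real firewalls match on many more header fields (protocol, source port, TCP flags, connection state), but the ordered-rules, default-deny structure is the same.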

3.4. Introduction to Secure Network Design


All information systems create risks to an organization, and whether or not the level of risk
introduced is acceptable is ultimately a business decision. Controls such as firewalls, resource
isolation, hardened system configurations, authentication and access control systems, and
encryption can be used to help mitigate identified risks to acceptable levels.

3.5. Designing Security into a Network


Security is often an overlooked aspect of network design, and attempts at retrofitting security on top of an
existing network can be expensive and difficult to implement properly. Separating assets of differing trust
and security requirements should be an integral goal during the design phase of any new project.
Aggregating assets that have similar security requirements in dedicated zones allows an organization to
use small numbers of network security devices, such as firewalls and intrusion-detection systems, to
secure and monitor multiple application systems.

Other influences on network design include budgets, availability requirements, the network's size and
scope, future growth expectations, capacity requirements, and management's tolerance of risks. For
example, dedicated WAN links to remote offices can be more reliable than virtual private networks
(VPNs), but they cost more, especially when covering large distances. Fully redundant networks can
easily recover from failures, but having duplicate hardware increases costs, and the more routing paths
available, the harder it is to secure and segregate traffic flows.

A significant but often missed or under-considered factor in determining an appropriate security design
strategy is to identify how the network will be used and what is expected from the business it supports.
This design diligence can help avoid expensive and difficult retrofits after the network is implemented.
Let's consider some key network design strategies.

3.6. Designing an Appropriate Network


There are invariably numerous requirements and expectations placed upon a network, such as meeting
and exceeding the organization‗s availability and performance requirements, providing a platform that is
conducive for securing sensitive network assets, and enabling effective and secure links to other
networks. On top of that, the overall network design must provide the ability to grow and support future
network requirements.

Common steps for obtaining such information include meeting with project stakeholders, application and
system owners, developers, management, and users. It is important to understand their expectations and
needs with regard to performance, security, availability, budget, and the overall importance of the new
project. Adequately understanding these elements will ensure that project goals are met, and that
appropriate network performance and security controls are included in the design. One of the most
common problems encountered in a network implementation is unmet expectations resulting from a
difference of assumptions. That's why expectations should be broken down into mutually observable (and
measurable) facts as much as possible, so that the security designers can ensure that any functional
proposals are clearly understood and explicitly agreed upon.

Performance

The legacy Cisco Hierarchical Internetworking model, which most network engineers are intimately
familiar with, is a common design implemented in large-scale networks today, although many new types
of purpose-built designs have been developed that support emerging technologies like Clos fabrics, lossless
Ethernet, layer two bridging with TRILL or IEEE 802.1aq, and other data center–centric technologies.

The three-tier hierarchy still applies to campus networks, but no longer to data centers. This is a "legacy"
model socialized by Cisco, but even Cisco has newer thinking for data centers. Networks are becoming
much more specialized, and the security thinking for different types of networks is significantly different.
The Cisco three-tier model is derived from the Public Switched Telephone Network (PSTN) model,
which is in use for much of the world's telephone infrastructure. The Cisco Hierarchical Internetworking
model, depicted in Figure 3.1, uses three main layers commonly referred to as the core, distribution, and
access layers:

 Core layer: Forms the network backbone and is focused on moving data as fast as possible
between distribution layers. Because performance is the core layer's primary focus, it should not
be used to perform CPU-intensive operations such as filtering, compressing, encrypting, or
translating network addresses for traffic.
 Distribution layer: Sits between the core and the access layer. This layer is used to aggregate
access-layer traffic for transmission into and out of the core.
 Access layer: Composed of the user networking connections. Filtering, compressing, encrypting,
and address-translating operations should be performed at the access and distribution layers.

The Cisco model is highly scalable. As the network grows, additional distribution and access layers can
be added seamlessly. As the need for faster connections and more bandwidth arises, the core and
distribution equipment can be upgraded as required. This model also assists corporations in achieving
higher levels of availability by allowing for the implementation of redundant hardware at the distribution
and core layers. And because the network is highly segmented, a single network failure at the access or
distribution layers does not affect the entire network.

Figure 3.1: The cisco hierarchical internetworking model

3.7. Internal Security Practices


Organizations that deploy firewalls strictly around the perimeter of their network leave themselves
vulnerable to internally initiated attacks, which are statistically the most common threats today. Internal
controls, such as firewalls and early detection systems (IDS, IPS, and SIEM), should be located at
strategic points within the internal network to provide additional security for particularly sensitive
resources such as research networks, repositories containing intellectual property, and human resource
and payroll databases.
Dedicated internal firewalls, as well as the ability to place access control lists on internal network devices,
can slow the spread of a virus. Figure 3.2 depicts a network utilizing internal firewalls.

When designing internal network zones, if there is no reason for two particular networks to communicate,
explicitly configure the network to block traffic between those networks, and log any attempts that hosts
make to communicate between them. With modern VoIP networks, this can be a challenge as VoIP
streams are typically endpoint to endpoint, but consider only allowing the traffic you know to be
legitimate between any two networks.
A common technique used by hackers is to target an area of the network that is less secure, and then work
their way in slowly via "jumping" from one part of the network to another. If all of the internal networks
are wide open, there is little hope of detecting, much less preventing, this type of threat vector.

Figure 3.2: internal firewall can be used to increase internal security

Intranets, Extranets, and DMZs

Organizations need to provide information to internal and external users and to connect their
infrastructure to external networks, so they have developed network topologies and application
architectures that support that connectivity while maintaining adequate levels of security. The most
prevalent terms for describing these architectures are intranet, extranet, and demilitarized zone (DMZ).
Organizations often segregate the applications deployed in their intranets and extranets from other
internal systems through the use of firewalls. An organization can exert higher levels of control through
firewalling to ensure the integrity and security of these systems.

Intranets

The main purpose of an intranet is to provide internal users with access to applications and information.
Intranets are used to house internal applications that are not generally available to external entities, such
as time and expense systems, knowledge bases, and organization bulletin boards. The main purpose of an
intranet is to share organization information and computing resources among employees. To achieve a
higher level of security, intranet systems are aggregated into one or more dedicated subnets and are
firewalled.

From a logical connectivity standpoint, the term intranet does not necessarily mean an internal network.
Intranet applications can be engineered to be universally accessible. Thus, employees can enter their time
and expense systems while at their desks or on the road. When intranet applications are made publicly
accessible, it is a good practice to segregate these systems from internal systems and to secure access with
a firewall. Additionally, because internal information will be transferred as part of the normal application
function, it is commonplace to encrypt such traffic. It is not uncommon to deploy intranet applications in
a DMZ configuration to mitigate risks associated with providing universal access.

Extranets

Extranets are application networks that are controlled by an organization and made available to trusted
external parties, such as suppliers, vendors, partners, and customers. Possible uses for extranets are varied
and can include providing application access to business partners, peers, suppliers, vendors, partners,
customers, and so on. However, because these users are external to the corporation, and the security of
their networks is beyond the control of the corporation, extranets require additional security processes and
procedures beyond those of intranets. As Figure 3.3 shows, access methods to an extranet can vary
greatly: VPNs, direct connections, and even remote users can connect.

Figure 3.3: a possible extranet design

3.8. IPv4 and IPv6 Security


IP Security Overview

The Internet community has developed application-specific security mechanisms in a number of areas,
including electronic mail (S/MIME, PGP), client/server (Kerberos), Web access (SSL), and others.
However, users have some security concerns that cut across protocol layers. For example, an enterprise
can run a secure, private TCP/IP network by disallowing links to untrusted sites, encrypting packets that
leave the premises, and authenticating packets that enter the premises. By implementing security at the IP
level, an organization can ensure secure networking not only for applications that have security
mechanisms but also for the many security-ignorant applications.

In response to these issues, the Internet Architecture Board (IAB) included authentication and encryption
as necessary security features in the next-generation IP, which has been issued as IPv6. Fortunately, these
security capabilities were designed to be usable both with the current IPv4 and the future IPv6. This
means that vendors can begin offering these features now, and many vendors do now have some IPsec
capability in their products.
IP-level security encompasses three functional areas: authentication, confidentiality, and key
management. The authentication mechanism assures that a received packet was, in fact, transmitted by the
party identified as the source in the packet header. In addition, this mechanism
assures that the packet has not been altered in transit. The confidentiality facility enables communicating
nodes to encrypt messages to prevent eavesdropping by third parties. The key management facility is
concerned with the secure exchange of keys. The current version of IPsec, known as IPsecv3,
encompasses authentication and confidentiality. Key management is provided by the Internet Key
Exchange standard, IKEv2.

We begin this section with an overview of IP security (IPsec) and an introduction to the IPsec
architecture. We then look at some of the technical details.

3.9. Applications of IPsec

IPsec provides the capability to secure communications across a LAN, across private and public
WANs, and across the Internet. Examples of its use include the following:

 Secure branch office connectivity over the Internet: A company can build a secure virtual
private network over the Internet or over a public WAN. This enables a business to rely
heavily on the Internet and reduce its need for private networks, saving costs and network
management overhead.
 Secure remote access over the Internet: An end user whose system is equipped with IP
security protocols can make a local call to an Internet service provider and gain secure
access to a company network. This reduces the cost of toll charges for traveling
employees and telecommuters.
 Establishing extranet and intranet connectivity with partners: IPsec can be used to secure
communication with other organizations, ensuring authentication and confidentiality and
providing a key exchange mechanism.
 Enhancing electronic commerce security: Even though some Web and electronic
commerce applications have built-in security protocols, the use of IPsec enhances that
security.

The principal feature of IPsec that enables it to support these varied applications is that it can
encrypt and/or authenticate all traffic at the IP level. Thus, all distributed applications, including
remote logon, client/server, e-mail, file transfer, Web access, and so on, can be secured.

Benefits of IPsec:

 When IPsec is implemented in a firewall or router, it provides strong security that can be
applied to all traffic crossing the perimeter. Traffic within a company or workgroup does
not incur the overhead of security-related processing.
 IPsec in a firewall is resistant to bypass if all traffic from the outside must use IP and the
firewall is the only means of entrance from the Internet into the organization.
 IPsec is below the transport layer (TCP, UDP) and so is transparent to applications. There
is no need to change software on a user or server system when IPsec is implemented in
the firewall or router. Even if IPsec is implemented in end systems, upper-layer software,
including applications, is not affected.
 IPsec can be transparent to end users. There is no need to train users on security
mechanisms, issue keying material on a per-user basis, or revoke keying material when
users leave the organization.
 IPsec can provide security for individual users if needed. This is useful for off-site
workers and for setting up a secure virtual sub network within an organization for
sensitive applications.

Routing Applications

In addition to supporting end users and protecting premises systems and networks, IPsec can
play a vital role in the routing architecture required for internetworking. IPsec can assure that

 A router advertisement (a new router advertises its presence) comes from an authorized
router.
 A neighbor advertisement (a router seeks to establish or maintain a neighbor relationship
with a router in another routing domain) comes from an authorized router.
 A redirect message comes from the router to which the initial packet was sent.
 A routing update is not forged.

Without such security measures, an opponent can disrupt communications or divert some traffic.
Routing protocols such as Open Shortest Path First (OSPF) should be run on top of security
associations between routers that are defined by IPsec.

The Scope of IPsec

IPsec provides two main functions: a combined authentication/encryption function called


Encapsulating Security Payload (ESP) and a key exchange function. For virtual private networks,
both authentication and encryption are generally desired, because it is important both to (1)
assure that unauthorized users do not penetrate the virtual private network and (2) assure that
eavesdroppers on the Internet cannot read messages sent over the virtual private network. There
is also an authentication-only function, implemented using an Authentication Header (AH).
Because message authentication is provided by ESP, the use of AH is deprecated. It is included
in IPsecv3 for backward compatibility but should not be used in new applications.

The key exchange function allows for manual exchange of keys as well as an automated scheme.
The IPsec specification is quite complex and covers numerous documents. The most important of
these are RFCs 2401, 4302, 4303, and 4306. In this section, we provide an overview of some of
the most important elements of IPsec.

Security Associations

A key concept that appears in both the authentication and confidentiality mechanisms for IP is
the security association (SA). An association is a one-way relationship between a sender and a
receiver that affords security services to the traffic carried on it. If a peer relationship is needed,
for two-way secure exchange, then two security associations are required. Security services are
afforded to an SA for the use of ESP.

An SA is uniquely identified by three parameters:

 Security parameter index (SPI): A bit string assigned to this SA and having local
significance only. The SPI is carried in an ESP header to enable the receiving system to
select the SA under which a received packet will be processed.
 IP destination address: This is the address of the destination endpoint of the SA, which
may be an end-user system or a network system such as a firewall or router.
 Protocol identifier: This field in the outer IP header indicates whether the association is
an AH or ESP security association.

Hence, in any IP packet, the security association is uniquely identified by the Destination
Address in the IPv4 or IPv6 header and the SPI in the enclosed extension header (AH or ESP).

An IPsec implementation includes a security association database that defines the parameters
associated with each SA. An SA is characterized by the following parameters:

 Sequence number counter: A 32-bit value used to generate the Sequence Number field in
AH or ESP headers.
 Sequence counter overflow: A flag indicating whether overflow of the sequence number
counter should generate an auditable event and prevent further transmission of packets on
this SA.
 Antireplay window: Used to determine whether an inbound AH or ESP packet is a
replay, by defining a sliding window within which the sequence number must fall.

 AH information: Authentication algorithm, keys, key lifetimes, and related parameters
being used with AH.
 ESP information: Encryption and authentication algorithm, keys, initialization values,
key lifetimes, and related parameters being used with ESP.
 Lifetime of this security association: A time interval or byte count after which an SA
must be replaced with a new SA (and new SPI) or terminated, plus an indication of which
of these actions should occur.
 IPsec protocol mode: Tunnel, transport, or wildcard (required for all implementations).
 Path MTU: Any observed path maximum transmission unit (maximum size of a packet
that can be transmitted without fragmentation) and aging variables (required for all
implementations).

The key management mechanism that is used to distribute keys is coupled to the authentication
and privacy mechanisms only by way of the security parameters index. Hence, authentication
and privacy have been specified independent of any specific key management mechanism.
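
Making no assumption about any particular IPsec implementation, the following Python sketch models the security association database described above: each record carries (a subset of) the listed parameters, and an inbound packet selects its SA by the (destination address, SPI, protocol) triple:

from dataclasses import dataclass, field

@dataclass
class SecurityAssociation:
    spi: int                       # Security Parameters Index
    dst_addr: str                  # IP destination address of the SA endpoint
    protocol: str                  # "ESP" or "AH"
    seq_counter: int = 0           # sequence number counter
    antireplay_window: int = 64    # sliding window for replay detection
    esp_info: dict = field(default_factory=dict)  # algorithms, keys, lifetimes
    lifetime_seconds: int = 3600   # replace or terminate after this interval
    mode: str = "tunnel"           # "tunnel" or "transport"
    path_mtu: int = 1500           # observed path MTU

# The SA database, keyed by the identifying triple.
sad = {}
sa = SecurityAssociation(spi=0x1234, dst_addr="198.51.100.7", protocol="ESP")
sad[(sa.dst_addr, sa.spi, sa.protocol)] = sa

# An inbound ESP packet addressed to 198.51.100.7 carrying SPI 0x1234
# selects this SA for processing.
inbound_sa = sad[("198.51.100.7", 0x1234, "ESP")]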

Encapsulating Security Payload

The Encapsulating Security Payload provides confidentiality services, including confidentiality


of message contents and limited traffic flow confidentiality. As an optional feature, ESP can also
provide an authentication service. Figure 3.4 shows the format of an ESP packet. It contains the
following fields:

 Security Parameters Index (32 bits): Identifies a security association.
 Sequence Number (32 bits): A monotonically increasing counter value.
 Payload Data (variable): This is a transport-level segment (transport mode) or IP packet
(tunnel mode) that is protected by encryption.
 Padding (0–255 bytes): May be required if the encryption algorithm requires the
plaintext to be a multiple of some number of octets.
 Pad Length (8 bits): Indicates the number of pad bytes immediately preceding this field.
 Next Header (8 bits): Identifies the type of data contained in the Payload Data field by
identifying the first header in that payload (e.g., an extension header in IPv6, or an upper-
layer protocol such as TCP).
 Integrity Check Value (variable): A variable-length field (must be an integral number of
32-bit words) that contains the integrity check value computed over the ESP packet
minus the Authentication Data field.

Figure 3.4: IPsec ESP format
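
The two fixed 32-bit fields at the front of an ESP packet (the SPI and the sequence number) can be illustrated with Python's struct module; the values are arbitrary, and this shows only the byte layout, not a complete ESP implementation:

import struct

spi = 0x00001234         # Security Parameters Index (32 bits)
sequence_number = 1      # monotonically increasing counter (32 bits)

# "!" selects network byte order; "II" packs two unsigned 32-bit integers.
esp_prefix = struct.pack("!II", spi, sequence_number)
print(esp_prefix.hex())  # 0000123400000001

The encrypted Payload Data, Padding, Pad Length, and Next Header fields follow this prefix, and the Integrity Check Value is appended at the end.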

3.10.Transport and Tunnel Modes


Transport Mode
Transport mode provides protection primarily for upper-layer protocols. That is, transport mode
protection extends to the payload of an IP packet. Examples include a TCP or UDP segment, both
of which operate directly above IP in a host protocol stack. Typically, transport mode is used for
end-to-end communication between two hosts (e.g., a client and a server, or two workstations).
When a host runs ESP over IPv4, the payload is the data that normally follow the IP header. For
IPv6, the payload is the data that normally follow both the IP header and any IPv6 extension
headers that are present, with the possible exception of the destination options header, which may
be included in the protection. ESP in transport mode encrypts and optionally authenticates the IP
payload but not the IP header.
Tunnel Mode
Tunnel mode provides protection to the entire IP packet. To achieve this, after the ESP fields are
added to the IP packet, the entire packet plus security fields is treated as the payload of a new
outer IP packet with a new outer IP header. The entire original, inner packet travels through a
tunnel from one point of an IP network to another; no routers along the way are able to examine
the inner IP header. Because the original packet is encapsulated, the new, larger packet may have
totally different source and destination addresses, adding to the security. Tunnel mode is used
when one or both ends of a security association are a security gateway, such as a firewall or
router that implements IPsec. With tunnel mode, a number of hosts on networks behind firewalls
may engage in secure communications without implementing IPsec. The unprotected packets
generated by such hosts are tunneled through external networks by tunnel mode SAs set up by the
IPsec software in the firewall or secure router at the boundary of the local network.

Here is an example of how tunnel mode IPsec operates. Host A on a network generates an IP
packet with the destination address of host B on another network. This packet is routed from the
originating host to a firewall or secure router at the boundary of A's network. The firewall filters
all outgoing packets to determine the need for IPsec processing. If this packet from A to B
requires IPsec, the firewall performs IPsec processing and encapsulates the packet with an outer
IP header. The source IP address of this outer IP packet is this firewall, and the destination
address may be a firewall that forms the boundary to B's local network. This packet is now routed
to B's firewall, with intermediate routers examining only the outer IP header. At B's firewall, the
outer IP header is stripped off, and the inner packet is delivered to B.
ESP in tunnel mode encrypts and optionally authenticates the entire inner IP packet, including the
inner IP header.

UNIT - IV

4.1. Conventional Encryption principles


A Symmetric encryption scheme has five ingredients

1. Plain Text: This is the original message or data which is fed into the algorithm as input.
2. Encryption Algorithm: This encryption algorithm performs various substitutions and
transformations on the plain text.
3. Secret Key: The key is another input to the algorithm. The substitutions and transformations
performed by algorithm depend on the key.

Figure 4.1: simplified model of conventional encryption

4. Cipher Text: This is the scrambled (unreadable) message which is output of the encryption
algorithm. This cipher text is dependent on plaintext and secret key. For a given plaintext, two
different keys produce two different cipher texts.
5. Decryption Algorithm: This is the reverse of encryption algorithm. It takes the cipher text and
secret key as inputs and outputs the plain text.

Two main requirements are needed for secure use of conventional encryption:

 A strong encryption algorithm is needed. The algorithm should be such that an attacker who
knows the algorithm and has access to one or more cipher texts would be unable to decipher the
cipher text or figure out the key.
 The secret key must be distributed to the sender and receiver in a secure way and kept secure. If
the key is discovered, then with knowledge of the algorithm, all communication using this key is
readable.

The important point is that the security of conventional encryption depends on the secrecy of the key, not
the secrecy of the algorithm; i.e., it is not necessary to keep the algorithm secret, only the key. The fact
that the algorithm need not be kept secret made widespread use feasible and enabled manufacturers to
develop low-cost chip implementations of data encryption algorithms. With the use of conventional
algorithms, the principal security problem is maintaining the secrecy of the key.

Cryptography

A cipher is a secret method of writing, as by code. Cryptography, in a very broad sense, is the study of
techniques related to aspects of information security. Hence cryptography is concerned with the writing
(ciphering or encoding) and deciphering (decoding) of messages in secret code.

Cryptographic systems are classified along three independent dimensions:

1. The type of operations used for transforming plaintext to cipher text

All encryption algorithms make use of two general principles: substitution, in which each
element in the plain text is mapped into another element, and transposition, in which elements in
the plain text are rearranged. The important thing is that no information is lost.
Systems which involve multiple stages of substitutions and transpositions are known as
product systems.
2. The number of keys used
If single key is used by both sender and receiver, it is called symmetric, single-key, secret-key or
conventional encryption. If sender and receiver each use a different key, then it is called
asymmetric, two-key or public-key encryption.
3. The way in which plaintext is processed
A block cipher processes the input one block of elements at a time, producing an output block for
each input block. Stream cipher processes the input elements continuously, producing output one
element at a time as it goes along.

Cryptanalysis

The process of attempting to discover the plaintext or key is known as cryptanalysis. It is very difficult
when only the cipher text is available to the attacker, as in some cases even the encryption algorithm is not
known. The most common attack under these circumstances is the brute-force approach of trying all the
possible keys. This attack is made impractical when the key size is considerably large. The list below
gives an idea of the types of attacks on encrypted messages.

Cryptology covers both cryptography and cryptanalysis. Cryptology is a constantly evolving science;
ciphers are invented and, given time, are almost certainly breakable. Cryptanalysis is the best way to
understand the subject of cryptology. Cryptographers are constantly searching for the perfect security
system: one that encrypts quickly but is hard or impossible to break. Cryptanalysts are always looking for
ways to break the security provided by a cryptographic system, mostly through mathematical
understanding of the cipher structure.

Cryptography can be defined as the conversion of data into a scrambled code that can be deciphered and
sent across a public or a private network.

 A Cipher text-only attack is an attack with an attempt to decrypt cipher text when only the
cipher text itself is available.
 A Known-plaintext attack is an attack in which an individual has the plaintext samples and its
encrypted version (cipher text) thereby allowing him to use both to reveal further secret
information like the key
 A Chosen- plaintext attack involves the cryptanalyst be able to define his own plaintext, feed it
into the cipher and analyze the resulting cipher text.
 A Chosen-ciphertext attack is one where the attacker has several plaintext–cipher text pairs in
which the cipher text was chosen by the attacker.

An encryption scheme is unconditionally secure if the cipher text generated by the scheme does not
contain enough information to determine uniquely the corresponding plain text, no matter how much
cipher text and time is available to the opponent. An example of this type is the one-time pad.

An encryption scheme is computationally secure if the cipher text generated by the scheme meets the
following criteria:

 Cost of breaking cipher exceeds the value of the encrypted information.


 Time required to break the cipher exceeds the useful lifetime of the information.

The average time required for exhaustive key search is given below:

Table 4.1: The average time required for exhaustive key search
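
Because the entries in such a table follow directly from the key length and an assumed trial rate, they are easy to reproduce; the sketch below assumes a hypothetical rate of one billion trial decryptions per second and reports the average time, which corresponds to searching half of the key space:

def average_search_time_seconds(key_bits: int, decryptions_per_second: float) -> float:
    # On average, half of the 2**key_bits possible keys must be tried.
    return (2 ** (key_bits - 1)) / decryptions_per_second

rate = 1e9  # hypothetical: 10**9 trial decryptions per second
for bits in (56, 128, 168):
    seconds = average_search_time_seconds(bits, rate)
    years = seconds / (3600 * 24 * 365)
    print(f"{bits}-bit key: about {years:.3g} years on average")

Even at this rate, a 56-bit key falls in about a year, while 128-bit and larger keys remain far beyond reach, which is why modern symmetric ciphers use keys of at least 128 bits.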

Types of Cryptography

1. Symmetric Key Cryptography


2. Asymmetric Key Cryptography
3. Hash Functions

Symmetric Key Cryptography
o Also known as Secret Key Cryptography or Conventional Cryptography.
o Symmetric Key Cryptography is an encryption system in which the sender and receiver
of a message share a single, common key that is used to encrypt and decrypt the message.
o The Algorithm use is also known as a secret key algorithm or sometimes called a
symmetric algorithm
o A key is a piece of information (a parameter) that determines the functional output of a
cryptographic algorithm or cipher.

o The key for encrypting and decrypting the file has to be known to all the recipients; otherwise,
the message cannot be decrypted by conventional means.
o Symmetric Key Cryptography – Examples
1. Data Encryption Standard (DES): The Data Encryption Standard was
published in 1977 by the US National Bureau of Standards. DES uses a 56 bit
key and maps a 64 bit input block of plaintext onto a 64 bit output block of
cipher text. 56 bits is a rather small key for today‘s computing power.
2. Triple DES: Triple DES was the answer to many of the shortcomings of DES.
Since it is based on the DES algorithm, it is very easy to modify existing
software to use Triple DES. It also has the advantage of proven reliability and a
longer key length that eliminates many of the shortcut attacks that can be used to
reduce the amount of time it takes to break DES.
3. Advanced Encryption Standard (AES) (RFC3602): Advanced Encryption
Standard (AES) is an encryption standard adopted by the U.S. government. The
standard comprises three block ciphers, AES-128, AES-192 and AES-256. Each
AES cipher has a 128-bit block size, with key sizes of 128, 192 and 256 bits,
respectively. The AES ciphers have been analyzed extensively and are now used
worldwide, as was the case with its predecessor, the Data Encryption Standard
(DES).
Problems with Conventional Cryptography
1. Key Management: Symmetric-key systems are simpler and faster; their
main drawback is that the two parties must somehow exchange the key in
a secure way and keep it secure after that. Key management caused
nightmares for the parties using symmetric key cryptography. They
were worried about how to get the keys safely and securely across to all
users so that decryption of the message would be possible. This gave
third parties the chance to intercept the keys in transit and decode the
top-secret messages. Thus, if the key was compromised, the entire
coding system was compromised and a "secret" would no longer
remain a "secret". This is why public key cryptography came
into existence.
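
As a minimal, hedged sketch of symmetric encryption in practice (it assumes the third-party Python cryptography package is installed, which is not part of this course material), the following encrypts and decrypts a short message with AES-128 in CBC mode; the same key must be shared secretly by both parties, which is exactly the key-distribution problem described above:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding

key = os.urandom(16)   # 128-bit secret key shared by sender and receiver
iv = os.urandom(16)    # initialization vector, sent along with the ciphertext

# Pad the plaintext to a multiple of the 128-bit AES block size.
padder = padding.PKCS7(128).padder()
padded = padder.update(b"pay more money") + padder.finalize()

encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
assert recovered == b"pay more money"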

Asymmetric Key Cryptography

 Asymmetric cryptography, also known as Public-key cryptography, refers to a cryptographic


algorithm which requires two separate keys, one of which is private and one of which is public.
The public key is used to encrypt the message and the private one is used to decrypt the message.
 Public Key Cryptography is a very advanced form of cryptography. Officially, it was invented by
Whitfield Diffie and Martin Hellman in 1975. The basic technique of public key cryptography
was first discovered in 1973 by the British cryptographer Clifford Cocks of the Communications-
Electronics Security Group (CESG), part of the Government Communications Headquarters
(GCHQ), but this was kept secret until 1997.

Asymmetric Key Cryptography – Examples

1. Digital Signature Standard (DSS): Digital Signature Standard (DSS) is the digital signature
algorithm (DSA) developed by the U.S. National Security Agency (NSA) to generate a digital
signature for the authentication of electronic documents. DSS was put forth by the National
Institute of Standards and Technology (NIST) in 1994, and has become the United States
government standard for authentication of electronic documents. DSS is specified in Federal
Information Processing Standard (FIPS) 186.
2. Algorithm – RSA: – RSA (Rivest, Shamir and Adleman who first publicly described it in 1977)
is an algorithm for public-key cryptography. It is the first algorithm known to be suitable for
signing as well as encryption, and one of the first great advances in public key cryptography.
RSA is widely used in electronic commerce protocols, and is believed to be secure given
sufficiently long keys and the use of up-to-date implementations.
RSA Cryptanalysis
o Rivest, Shamir, and Adleman placed a challenge in Martin Gardner's column in
Scientific American, in which readers were invited to crack the following number:
C=114,381,625,757,888,867,669,235,779,976,146,612,010,218,296,721,242,362,562,561
,842,935,706,935,245,733,897,830,597,123,563,958,705,058,989,075,147,599,290,026,8
79,543,541
o This was solved on April 26, 1994, by an international effort over the Internet in which
about 1600 workstations, mainframes, and supercomputers attacked the number for
eight months before recovering the private key (the public encryption exponent was
9007). The first solvers won one hundred dollars.
o Of course, the RSA algorithm is safe, as it would be incredibly difficult to gather up such
international participation to commit malicious acts.
3. ElGamal
o ElGamal is a public key method that is used in both encryption and digital signing.
o The encryption algorithm is similar in nature to the Diffie-Hellman key agreement
protocol.
o It is used in many applications and uses discrete logarithms.
o ElGamal encryption is used in the free GNU Privacy Guard software

Hash Functions

 A cryptographic hash function is a hash function that takes an arbitrary block of data and returns
a fixed-size bit string, the cryptographic hash value, such that any (accidental or intentional)
change to the data will (with very high probability) change the hash value. The data to be encoded
are often called the message, and the hash value is sometimes called the message digest or simply
the digest.

 The ideal cryptographic hash function has four main properties:
o It is easy to compute the hash value for any given message.
o It is infeasible to generate a message that has a given hash.
o It is infeasible to modify a message without changing the hash.
o It is infeasible to find two different messages with the same hash.

Features of Hash Functions

The typical features of hash functions are –

 Fixed Length Output Hash Value


o A hash function converts data of arbitrary length to a fixed length. This process is often
referred to as hashing the data. In general, the hash is much smaller than the input data;
hence hash functions are sometimes called compression functions.
o Since a hash is a smaller representation of a larger data, it is also referred to as a digest.
o Hash function with n bit output is referred to as an n-bit hash function. Popular hash
functions generate values between 160 and 512 bits.
 Efficiency of Operation
o Generally, for any hash function h with input x, computation of h(x) is a fast operation.
Computationally, hash functions are much faster than symmetric encryption.

Properties of Hash Functions
o In order to be an effective cryptographic tool, the hash function is desired to possess the
following properties –
 Pre-Image Resistance
 This property means that it should be computationally hard to reverse a
hash function. In other words, if a hash function h produced a hash value
z, then it should be a difficult process to find any input value x that
hashes to z.
 This property protects against an attacker who only has a hash value and
is trying to find the input.
 Second Pre-Image Resistance

 This property means that given an input and its hash, it should be hard to find
a different input with the same hash.
 In other words, if a hash function h for an input x produces hash value
h(x), then it should be difficult to find any other input value y such that h(y)
= h(x). This property of the hash function protects against an attacker who has
an input value and its hash, and wants to substitute a different value as the
legitimate value in place of the original input value.
 Collision Resistance
 This property means it should be hard to find two different inputs of any
length that result in the same hash. This property is also referred to as
collision free hash function.
 In other words, for a hash function h, it is hard to find any two different
inputs x and y such that h(x) = h(y). Since a hash function is a compressing
function with a fixed hash length, it is impossible for a hash function not to
have collisions. The collision-free property only confirms that these
collisions should be hard to find. This property makes it very difficult for
an attacker to find two input values with the same hash.
 Also, if a hash function is collision-resistant then it is second pre-image
resistant.

Design of Hashing Algorithms

 At the heart of hashing is a mathematical function that operates on two fixed-size blocks of data
to create a hash code. This hash function forms part of the hashing algorithm.
 The size of each data block varies depending on the algorithm. Typically the block sizes are from
128 bits to 512 bits. The following illustration demonstrates the hash function.

 A hashing algorithm involves rounds of the above hash function, like a block cipher. Each round takes
an input of a fixed size, typically a combination of the most recent message block and the output
of the last round.
 This process is repeated for as many rounds as are required to hash the entire message. A schematic
of the hashing algorithm is depicted in the following illustration.

 The hash value of the first message block becomes an input to the second hash operation, the
output of which alters the result of the third operation, and so on. This effect is known as the
avalanche effect of hashing.
 The avalanche effect results in substantially different hash values for two messages that differ by
even a single bit of data. Note the difference between the hash function and the hashing algorithm:
the hash function generates a hash code by operating on two blocks of fixed-length
binary data.
 The hashing algorithm is a process for using the hash function, specifying how the message will be
broken up and how the results from previous message blocks are chained together.
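
The avalanche effect is easy to observe with Python's standard hashlib module: changing a single character of the message produces a completely different digest.

import hashlib

m1 = b"Networking and Information Security"
m2 = b"Networking and Information security"   # one letter changed

print(hashlib.sha256(m1).hexdigest())
print(hashlib.sha256(m2).hexdigest())
# The two 256-bit digests differ in roughly half of their bits,
# even though the inputs differ by a single character.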

Popular Hash Functions

 Let us briefly see some popular hash functions –

A. Message Digest MD

 MD5 was the most popular and widely used hash function for quite some years. The MD family
comprises the hash functions MD2, MD4, MD5 and MD6. MD5 is specified in RFC 1321 and is a
128-bit hash function. MD5 digests have been widely used in the software
world to provide assurance about the integrity of a transferred file. For example, file servers often
provide a pre-computed MD5 checksum for their files, so that a user can compare the checksum of
the downloaded file to it.

 In 2004, collisions were found in MD5. An analytical attack was reported to succeed in only
an hour using a computer cluster. This collision attack compromised MD5, and hence
it is no longer recommended for use.

B. Secure Hash Function SHA

 The SHA family comprises four SHA algorithms: SHA-0, SHA-1, SHA-2, and SHA-3. Though
from the same family, they are structurally different.
 The original version, SHA-0, a 160-bit hash function, was published by the National Institute of
Standards and Technology (NIST) in 1993. It had a few weaknesses and did not become very
popular.
 Later, in 1995, SHA-1 was designed to correct alleged weaknesses of SHA-0. SHA-1 is the most
widely used of the existing SHA hash functions. It is employed in several widely used
applications and protocols, including Secure Socket Layer (SSL) security.
 In 2005, a method was found for uncovering collisions for SHA-1 within a practical time frame,
making the long-term employability of SHA-1 doubtful. The SHA-2 family has four further
variants, SHA-224, SHA-256, SHA-384, and SHA-512, depending on the number of bits in their
hash value. No successful attacks have yet been reported on the SHA-2 hash functions. Though
SHA-2 is a strong hash function and is significantly different, its basic design still follows the design
of SHA-1. Hence, NIST called for new competitive hash function designs.
 In October 2012, the NIST chose the Keccak algorithm as the new SHA-3 standard. Keccak
offers many benefits, such as efficient performance and good resistance to attacks.

C. RIPEMD

 RIPEMD is an acronym for RACE Integrity Primitives Evaluation Message Digest.
 This set of hash functions was designed by the open research community and is generally known as a
family of European hash functions.
 The set includes RIPEMD, RIPEMD-128, and RIPEMD-160. There also exist 256- and 320-bit
versions of this algorithm.
 The original RIPEMD (128-bit) is based upon the design principles used in MD4 and was found to
provide questionable security. RIPEMD-128 came as a quick-fix replacement to overcome the
vulnerabilities of the original RIPEMD.
 RIPEMD-160 is an improved version and the most widely used version in the family. The 256
and 320-bit versions reduce the chance of accidental collision, but do not have higher levels of
security as compared to RIPEMD-128 and RIPEMD-160 respectively.

D. Whirlpool

 This is a 512-bit hash function.


 It is derived from a modified version of the Advanced Encryption Standard (AES).
 One of the designers was Vincent Rijmen, a co-creator of the AES.
 Three versions of Whirlpool have been released; namely WHIRLPOOL-0, WHIRLPOOL-T, and
WHIRLPOOL.

Applications of Hash Functions
There are two direct applications of hash function based on its cryptographic properties.

1. Password Storage
o Hash functions provide protection for password storage. Instead of storing the password in
the clear, most logon processes store the hash values of passwords in a file. The
password file consists of a table of pairs of the form (userid, h(P)). The process
of logon is depicted in the following illustration.
o An intruder can only see the hashes of passwords, even if he accesses the password file. He
can neither log on using a hash nor derive the password from the hash value, since the hash
function possesses the property of pre-image resistance.

2. Data Integrity Check


o The data integrity check is the most common application of hash functions. It is
used to generate checksums on data files. This application provides assurance
to the user about the correctness of the data. The process is depicted in the following
illustration.

 The integrity check helps the user to detect any changes made to the original file. It does not,
however, provide any assurance about originality. An attacker, instead of modifying the file data, can
replace the entire file, compute an altogether new hash, and send it to the receiver. This integrity
check application is therefore useful only if the user is sure about the originality of the file.
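
Both applications can be sketched with Python's standard library; hashlib.pbkdf2_hmac adds a salt and many iterations for password storage, and a plain SHA-256 digest serves as a file checksum (the password, iteration count, and chunk size here are illustrative only):

import hashlib, os

# 1. Password storage: keep (userid, salt, hash) instead of the clear password.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", b"s3cret-passw0rd", salt, 100_000)

def verify(password: bytes) -> bool:
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000) == stored_hash

# 2. Data integrity check: compare a file's digest against a published checksum.
def file_checksum(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()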

Introduction to the TCP/IP Stack

1. Application: supports network applications


o ftp, smtp, http, ssh, telnet, DHCP (Dynamic Host Configuration Protocol)…
2. Transport: data transfer from end system to end system.
o TCP, UDP, SPX…
3. Network: finding the way through the network from machine to machine.
o IP (IPv4, IPv6), ICMP, IPX
4. (Data) link: data transfer between two neighbors in the network
o ppp, ethernet, ATM, ISDN, 802.11 (WLAN).
5. Physical: bits "on the wire"

Classical Encryption Techniques


There are two basic building blocks of all encryption techniques: substitution and transposition.
Substitution Encryption Techniques
These techniques involve substituting or replacing the contents of the plaintext by other letters, numbers
or symbols. Different kinds of ciphers are used in substitution technique.
Caesar Ciphers or Shift Cipher:
The earliest known use of a substitution cipher, and the simplest, was by Julius Caesar. The Caesar cipher
involves replacing each letter of the alphabet with the letter standing 3 places further down the alphabet
(more generally, k places). Let us consider the alphabet
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Choose k and shift all letters by k. For example, if k = 5,
A becomes F, B becomes G, C becomes H, and so on…
Mathematically give each letter a number,
abcdefghijklmnopqrstuvwxyz
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
If shift = 3 then
e.g., plain text: pay more money
Cipher text: SDB PRUH PRQHB
Note that the alphabet is wrapped around, so that the letter following 'z' is 'a'.
For each plaintext letter p, substitute the cipher text letter c such that
C = E(p) = (p+3) mod 26
A shift may be any amount, so that general Caesar algorithm is
C = E (p) = (p+k) mod 26
Where k takes on a value in the range 1 to 25.The decryption algorithm is simply
P = D(C) = (C-k) mod 26
If it is known that a given cipher text is a Caesar cipher, then a brute force cryptanalysis is easily
performed.

With a Caesar cipher, there are only 26 possible keys, of which only 25 are of any use, since mapping A
to A, etc., doesn't really obscure the message!
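
A direct Python translation of the formulas C = (p + k) mod 26 and P = (C - k) mod 26, followed by the brute-force search that tries all 25 useful keys:

def caesar_encrypt(plaintext: str, k: int) -> str:
    result = []
    for ch in plaintext.lower():
        if ch.isalpha():
            p = ord(ch) - ord("a")                  # letter -> number 0..25
            result.append(chr((p + k) % 26 + ord("a")).upper())
    return "".join(result)

def caesar_decrypt(ciphertext: str, k: int) -> str:
    return caesar_encrypt(ciphertext, -k).lower()   # shifting back by k

print(caesar_encrypt("pay more money", 3))   # SDBPRUHPRQHB
print(caesar_decrypt("SDBPRUHPRQHB", 3))     # paymoremoney

# Brute-force cryptanalysis: simply try every possible shift.
for k in range(1, 26):
    print(k, caesar_decrypt("SDBPRUHPRQHB", k))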
Monoalphabetic Ciphers:
Here, plaintext characters are substituted using an arbitrary rearrangement (permutation) of the alphabet
rather than a simple shift. Compared to the Caesar cipher, these monoalphabetic ciphers are more
secure, as the cipher alphabet can be any permutation of the 26 alphabetic characters, leading to
26!, or greater than 4 x 10^26, possible keys. But it is still vulnerable to cryptanalysis: when a cryptanalyst
is aware of the nature of the plaintext, he can find the regularities of the language. To overcome these
attacks, multiple substitutions for a single letter are used. For example, a letter can be substituted by
different numerical cipher symbols such as 17, 54, 69, etc. Even this method is not completely secure, as
each letter in the plain text affects one letter in the cipher text.
Or, using a common key which substitutes every letter of the plain text:
The key ABCDEFGHIJKLMNOPQRSTUVWXYZ
QWERTYUIOPASDFGHJKLZXCVBNM
would encrypt the message "I think therefore I am"
into OZIOFAZITKTYGKTOQD

But any attacker could simply break the cipher by using frequency analysis: observing the number of
times each letter occurs in the cipher text and then consulting an English letter frequency table. So the
substitution cipher is completely undermined by these attacks. Monoalphabetic ciphers are easy to break as
they reflect the frequency statistics of the original alphabet. A countermeasure is to provide multiple
substitutes, known as homophones, for a single letter.
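
A minimal sketch of a monoalphabetic substitution using the QWERTY-style key shown above, together with the crude letter count an attacker's frequency analysis would start from:

import string
from collections import Counter

PLAIN  = string.ascii_lowercase
KEY    = "qwertyuiopasdfghjklzxcvbnm"       # one fixed permutation of the alphabet
ENCODE = str.maketrans(PLAIN, KEY.upper())

def mono_encrypt(text: str) -> str:
    return "".join(ch for ch in text.lower().translate(ENCODE) if ch.isalpha())

ciphertext = mono_encrypt("i think therefore i am")
print(ciphertext)                            # OZIOFAZITKTYGKTOQD

# Frequency analysis: the ciphertext letter counts mirror the plaintext's,
# which is exactly what makes the cipher breakable.
print(Counter(ciphertext).most_common(3))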
Playfair Ciphers:
The Playfair Cipher is a manual symmetric encryption cipher invented in 1854 by Charles Wheatstone;
however its name and popularity came from the endorsement of Lord Playfair.
It is the best known multiple-letter encryption cipher; it treats digrams in the plaintext as single units
and translates these units into cipher text digrams. The Playfair Cipher is a digram substitution cipher
offering a relatively weak method of encryption. It was used for tactical purposes by British forces in the
Second Boer War and in World War I and for the same purpose by the Australians and Germans during
World War II. This was because Playfair is reasonably fast to use and requires no special equipment. A
typical scenario for Playfair use would be to protect important but non-critical secrets during actual
combat. By the time the enemy cryptanalysts could break the message, the information was useless to
them.
It is based around a 5×5 matrix, a copy of which is held by both communicating parties, into which 25 of
the 26 letters of the alphabet (normally either j and i are represented by the same letter or x is ignored) are
placed in a random fashion.
For example, the plain text is "Shi Sherry loves Heath Ledger" and the agreed key is "sherry". The
message is prepared according to the following rules:

 it is broken into pairs of letters,
 without punctuation,
 all Js are replaced with Is.
o SH IS HE RR YL OV ES HE AT HL ED GE R

 Double letters which occur in a pair must be divided by an X or a Z.

Example: "literally" becomes LI TE RA LX LY

 SH IS HE RX RY LO VE SH EA TH LE DG ER

The alphabet square is prepared using a 5x5 matrix with no repeated letters and no Js; the key is written
first, followed by the remaining letters of the alphabet.

For the generation of cipher text, there are three rules to be followed by each pair of letters.

 letters appear on the same row: replace them with the letters to their immediate right
respectively.
 letters appear on the same column: replace them with the letters immediately below
respectively
 not on the same row or column: replace them with the letters on the same row respectively but
at the other pair of corners of the rectangle defined by the original pair.

Based on the above three rules, the cipher text obtained for the given plain text is

 HE GH ER DR YS IQ WH HE SC OY KR AL RY

Another, simpler example can be given as follows:
Here, the key word is "playfair".
The plaintext is "hello there".
hello there becomes -> he lx lo th er ex
Applying the rules again, for each pair,

If they are in the same row, replace each with the letter to its right (mod 5)
he -> KG
If they are in the same column, replace each with the letter below it (mod 5)
lo -> RV
Otherwise, replace each with the letter we'd get if we swapped their column indices
lx -> YV
So the cipher text for the given plain text is KG YV RV QM GI KU
To decrypt the message, just reverse the process. Shift up and left instead of down and right. Drop extra
x's and locate any missing i's that should be j's. The message will be back in its original readable
form.

Playfair is no longer used by military forces because of the advent of digital encryption devices. It is now
regarded as insecure for any purpose because modern hand-held computers can easily break the cipher
within seconds.

Public Key Cryptography:


The development of public-key cryptography is the greatest and perhaps the only true revolution in the
entire history of cryptography. It is asymmetric, involving the use of two separate keys, in contrast to
symmetric encryption, which uses only one key. Public key schemes are neither more nor less secure than
private key (security depends on the key size for both). Public-key cryptography complements rather than
replaces symmetric cryptography.
Both also have issues with key distribution, requiring the use of some suitable protocol.

The concept of public-key cryptography evolved from an attempt to attack two of the most difficult
problems associated with symmetric encryption:

1. Key distribution – how to have secure communications in general without having to


trust a KDC with your key
2. Digital signatures – how to verify a message comes intact from the claimed sender

Public-key/two-key/asymmetric cryptography involves the use of two keys:

 a public-key, which may be known by anybody, and can be used to encrypt messages, and verify
signatures
 a private-key, known only to the recipient, used to decrypt messages, and sign (create)
signatures.
 is asymmetric because those who encrypt messages or verify signatures cannot decrypt messages
or create signatures

Public-Key algorithms rely on one key for encryption and a different but related key for decryption.
These algorithms have the following important characteristics:

 It is computationally infeasible to find decryption key knowing only algorithm & encryption key
 It is computationally easy to en/decrypt messages when the relevant (en/decrypt) key is known.

 Either of the two related keys can be used for encryption, with the other used for decryption (for
some algorithms like RSA).

The following figure illustrates public-key encryption process and shows that a public-key encryption
scheme has six ingredients: plaintext, encryption algorithm, public & private keys, cipher text &
decryption algorithm.

Figure 4.2: public-key encryption process

The essential steps involved in a public-key encryption scheme are given below:

1. Each user generates a pair of keys to be used for encryption and decryption.
2. Each user places one of the two keys in a public register and the other key is kept private.
3. If B wants to send a confidential message to A, B encrypts the message using A's public key.
4. When A receives the message, she decrypts it using her private key. Nobody else can decrypt the
message because that can only be done using A's private key (deducing a private key should be
infeasible).
5. If a user wishes to change his keys, he generates another pair of keys and publishes the public one; no
interaction with other users is needed.

Notations used in Public-key cryptography:

 The public key of user A will be denoted KUA.


 The private key of user A will be denoted KRA.
 Encryption method will be a function E.
 Decryption method will be a function D.
 If B wishes to send a plain message X to A, then he sends the crypto text Y=E(KUA,X)
 The intended receiver A will decrypt the message: D(KRA,Y)=X

The first attack on public-key cryptography is an attack on authenticity. An attacker may impersonate
user B: he sends a message E(KUA,X) and claims in the message to be B; A has no guarantee this is so.
To overcome this, B will encrypt the message using his private key: Y=E(KRB,X). The receiver decrypts
using B's public key KUB. This shows the authenticity of the sender because (supposedly) he is the only
one who knows the private key. The entire encrypted message serves as a digital signature. This scheme
is depicted in the following figure:

Figure 4.3: encrypting a message with the sender's private key (authentication)

But, a drawback still exists. Anybody can decrypt the message using B's public key. So, secrecy or
confidentiality is being compromised.
One can provide both authentication and confidentiality using the public-key scheme twice:

Figure 4.4: public key Cryptosystem: secrecy and authentication

 A encrypts X with his private key: Y=E(KRA,X)


 A encrypts Y with B's public key: Z=E(KUB,Y)
 B decrypts Z (and B is the only one capable of doing so): Y=D(KRB, Z)
 B can now recover the plaintext and confirm that it comes from A by decrypting Y with A's public
key: X=D(KUA, Y).
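The ordering of these operations can be illustrated with a minimal sketch, using tiny "textbook" RSA key pairs of the kind worked out later in this unit. The numbers and the helper names E and D are purely illustrative (real systems use much larger keys and padding), and note that for this nesting to work with raw textbook RSA, A's modulus must not exceed B's.

# Sketch of the secrecy-plus-authentication flow of Figure 4.4 with toy
# "textbook" RSA key pairs (RSA itself is covered later in this unit).
# Keys are (exponent, modulus); KU* is public, KR* is private.

def E(key, m):                 # encrypt (or sign) with one half of a key pair
    exp, n = key
    return pow(m, exp, n)

D = E                          # textbook RSA decryption is the same operation

KUa, KRa = (11, 143), (11, 143)   # A's toy pair (e happens to equal d here)
KUb, KRb = (7, 187), (23, 187)    # B's toy pair; note A's modulus < B's

X = 42                         # plaintext encoded as a small integer
Y = E(KRa, X)                  # A signs X with her private key
Z = E(KUb, Y)                  # A wraps Y with B's public key (secrecy)

Y2 = D(KRb, Z)                 # only B can remove the outer layer
X2 = D(KUa, Y2)                # anyone with A's public key can verify/unwrap
print(X2 == X)                 # True: both confidentiality and authentication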

Applications for public-key cryptosystems:


1.) Encryption/decryption: the sender encrypts the message with the receiver's public key.
2.) Digital signature: the sender "signs" the message (or a representative part of the
message) using his private key.
3.) Key exchange: two sides cooperate to exchange a secret key for later use in a secret-key cryptosystem

The main requirements of Public-key cryptography are:

1. Computationally easy for a party B to generate a pair (public key KUb, private key KRb).
2. Easy for sender A to generate ciphertext:
C = EKUb(M)
3. Easy for the receiver B to decrypt the ciphertext using the private key:
M = DKRb(C) = DKRb[EKUb(M)].
4. Computationally infeasible to determine private key (KRb) knowing public key (KUb).
5. Computationally infeasible to recover message M, knowing KUb and cipher text C
6. Either of the two keys can be used for encryption, with the other used for
decryption:

Easy is defined to mean a problem that can be solved in polynomial time as a function of input length. A
problem is infeasible if the effort to solve it grows faster than polynomial time as a function of input size.
Public-key cryptosystems usually rely on difficult mathematical functions rather than on substitution-permutation
(S-P) networks, as classical cryptosystems do. A one-way function is one that is easy to
calculate in one direction and infeasible to calculate in the other direction (i.e., the inverse is infeasible to
compute). A trap-door function is a difficult function that becomes easy if some extra information is
known. Our aim is to find a trap-door one-way function, which is easy to calculate in one direction and
infeasible to calculate in the other direction unless certain additional information is known.

Security of Public-key schemes:

 As with private-key schemes, a brute-force exhaustive search attack is always theoretically possible,
but the keys used are too large (>512 bits) for this to be practical.
 Security relies on a large enough difference in difficulty between the easy (en/decrypt) and hard
(cryptanalyse) problems. More generally, the hard problem is known; it is just made too hard to solve
in practice.
 Public-key schemes require the use of very large numbers and are hence slow compared to private-key schemes.

RSA algorithm:

RSA is the best known, and by far the most widely used general public key encryption algorithm, and was
first published by Rivest, Shamir & Adleman of MIT in 1978 [RIVE78]. Since that time RSA has reigned
supreme as the most widely accepted and implemented general-purpose approach to public-key
encryption. The RSA scheme is a block cipher in which the plaintext and the ciphertext are integers
between 0 and n-1 for some fixed n and typical size for n is 1024 bits (or 309 decimal digits). It is based
on exponentiation in a finite (Galois) field over integers modulo a prime, using large integers (e.g. 1024
bits). Its security is due to the cost of factoring large numbers.

RSA involves a public-key and a private-key where the public key is known to all and is used to encrypt
data or message. The data or message which has been encrypted using a public key can only be decrypted
by using its corresponding private-key. Each user generates a key pair i.e. public and private key using the
following steps:

 Each user selects two large primes at random – p, q


 Compute their system modulus n=p.q
 Calculate ø(n), where ø(n)=(p-1)(q-1)
 Select at random the encryption key e, where 1<e<ø(n) and gcd(e,ø(n))=1
 Solve the following equation to find the decryption key d: e.d ≡ 1 mod ø(n) and 0≤d≤n
 Publish their public encryption key: KU={e,n}
 Keep secret private decryption key: KR={d,n}

Both the sender and receiver must know the values of n and e, and only the receiver knows the value of d.
Encryption and Decryption are done using the following equations.

To encrypt a message M the sender:

 Obtains public key of recipient KU={e,n}


 Computes: C = M^e mod n, where 0≤M<n

To decrypt the ciphertext C the owner:

 Uses their private key KR={d,n}


 Computes: M = C^d mod n = (M^e)^d mod n = M^(ed) mod n

For this algorithm to be satisfactory, the following requirements are to be met.

1. It is possible to find values of e, d, n such that M^(ed) = M mod n for all M<n.
2. It is relatively easy to calculate M^e mod n and C^d mod n for all values of M < n.
3. It is infeasible to determine d given e and n.

The way RSA works is based on Number theory:

Fermat's little theorem: if p is prime and a is a positive integer not divisible by p, then
a^(p-1) ≡ 1 mod p.
Corollary: For any positive integer a and prime p, a^p ≡ a mod p.

Fermat's theorem, as useful as it will turn out to be, does not provide us with the integers d, e we are looking
for; Euler's theorem (a refinement of Fermat's) does. Euler's function associates to any positive integer n a
number φ(n): the number of positive integers smaller than n and relatively prime to n. For example, φ(37)
= 36, i.e. φ(p) = p-1 for any prime p. For any two primes p, q, φ(pq) = (p-1)(q-1).

Euler's theorem: for any relatively prime integers a, n we have a^φ(n) ≡ 1 mod n.
Corollary: For any integers a, n we have a^(φ(n)+1) ≡ a mod n.
Corollary: Let p, q be two odd primes and n = pq. Then:
φ(n) = (p-1)(q-1)
For any integer m with 0<m<n, m^((p-1)(q-1)+1) ≡ m mod n
For any integers k, m with 0<m<n, m^(k(p-1)(q-1)+1) ≡ m mod n
Euler's theorem provides us the numbers d, e such that M^(ed) = M mod n. We have to choose d, e such that
ed = kφ(n)+1, or equivalently, d ≡ e^(-1) mod φ(n)
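In code, choosing d as the modular inverse of e is a one-liner in Python 3.8+, which accepts a negative exponent in pow for exactly this purpose. The numbers below anticipate the first worked example that follows.

# Finding the RSA decryption exponent d = e^(-1) mod phi(n).
p, q, e = 17, 11, 7
n, phi = p * q, (p - 1) * (q - 1)     # n = 187, phi = 160
d = pow(e, -1, phi)                   # d = 23, since 7*23 = 161 = 1*160 + 1
assert (e * d) % phi == 1
print(n, phi, d)                      # 187 160 23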

An example of RSA can be given as:

 Select primes: p=17 & q=11
 Compute n = pq = 17×11 = 187
 Compute ø(n) = (p-1)(q-1) = 16×10 = 160
 Select e: gcd(e,160)=1; choose e=7
 Determine d: de ≡ 1 mod 160 and d < 160; the value is d=23, since 23×7 = 161 = 1×160 + 1
 Publish public key KU={7,187}
 Keep secret private key KR={23,187}
 Now, given message M = 88 (note 88 < 187)
 encryption: C = 88^7 mod 187 = 11
 decryption: M = 11^23 mod 187 = 88

Another example of RSA is given as:

Let p = 11, q = 13, e = 11, m = 7
n = pq, i.e. n = 11×13 = 143

ø(n) = (p-1)(q-1), i.e. (11-1)(13-1) = 120

e.d ≡ 1 mod ø(n), i.e. 11d mod 120 = 1, i.e. (11×11) mod 120 = 1; so d = 11
public key: {11,143} and private key: {11,143}
C = M^e mod n, so ciphertext = 7^11 mod 143; i.e. C = 106
M = C^d mod n, so plaintext = 106^11 mod 143; i.e. M = 7
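Both worked examples can be checked with a few lines of modular arithmetic. The sketch below is unpadded "textbook" RSA for illustration only (the helper names rsa_keys and crypt are not from any library); pow(base, exp, mod) performs the fast modular exponentiation.

def rsa_keys(p, q, e):
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)               # d = e^(-1) mod phi(n), Python 3.8+
    return (e, n), (d, n)             # public key KU, private key KR

def crypt(key, m):                    # the same operation encrypts or decrypts
    exp, n = key
    return pow(m, exp, n)

# Example 1: p=17, q=11, e=7  ->  KU={7,187}, KR={23,187}
KU, KR = rsa_keys(17, 11, 7)
C = crypt(KU, 88)                     # 88^7 mod 187 = 11
print(C, crypt(KR, C))                # 11 88

# Example 2: p=11, q=13, e=11 ->  KU={11,143}, KR={11,143}
KU, KR = rsa_keys(11, 13, 11)
C = crypt(KU, 7)                      # 7^11 mod 143 = 106
print(C, crypt(KR, C))                # 106 7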

Another Example is:

Figure 4.5: encryption and decryption using RSA

For RSA key generation,

 Users of RSA must:


o determine two primes at random – p, q
o select either e or d and compute the other
 Primes p, q must not be easily derived from the modulus N=p.q
o this means they must be sufficiently large
o typically, candidate primes are guessed and checked with a probabilistic primality test
 Exponents e, d are inverses, so use the extended Euclidean (inverse) algorithm to compute one from the other

Key Management

One of the major roles of public-key encryption has been to address the problem of key distribution. Two
distinct aspects to use of public key encryption are present.

 The distribution of public keys.


 Use of public-key encryption to distribute secret keys.

Distribution of Public Keys

The most general schemes for distribution of public keys are given below
Public Announcement of Public keys
Here any participant can send his or her public key to any other participant or broadcast the key to the
community at large. For example, many PGP users have adopted the practice of appending their public
key to messages that they send to public forums.

Figure 4.6: uncontrolled public key distribution

It has a major weakness. Anyone can forge such a public announcement. That is, some user could pretend
to be user A and send a public key to another participant.

Publicly Available Directory

A greater degree of security can be achieved by maintaining a publicly
available dynamic directory of public keys. Maintenance and distribution of the public directory would
have to be the responsibility of some trusted entity or organization. It includes the following elements:

1. The authority maintains a directory with a {name, public key} entry for each participant.
2. Each participant registers a public key with the directory authority. Registration would have to be
in person or by some form of secure authenticated communication

Figure 4.7: public key publication

3. A participant may replace the existing key with a new one at any time, either because of the
desire to replace a public key that has already been used for a large amount of data, or because the
corresponding private key has been compromised in some way.
4. Periodically, the authority publishes the entire directory or updates to the directory.
5. Participants could also access the directory electronically. For this purpose, secure, authenticated
communication from the authority to the participant is mandatory.

This scheme has still got some vulnerability. If an adversary succeeds in obtaining or computing the
private key of the directory authority, the adversary could authoritatively pass out counterfeit public keys
and subsequently impersonate any participant and eavesdrop on messages sent to any participant. Or else,
the adversary may tamper with the records kept by the authority.

Public Key Authority

Stronger security for public-key distribution can be achieved by providing


tighter control over the distribution of public keys from the directory. This scenario assumes the existence
of a public authority (whoever that may be) that maintains a dynamic directory of public keys of all users.
The public authority has its own (private key, public key) that it is using to communicate to users. Each
participant reliably knows a public key for the authority, with only the authority knowing the
corresponding private key.

For example, consider that Alice and Bob wish to communicate with each other and the following steps
take place and are also shown in the figure below:

Figure 4.8: public key authority

1. Alice sends a time-stamped message to the central authority with a request for Bob's public key
(the time stamp marks the moment of the request).
2. The authority sends back a message encrypted with its private key (for authentication); the message
contains Bob's public key and the original message of Alice, so Alice knows this is not a
reply to an old request.
3. Alice starts the communication to Bob by sending him an encrypted message containing her
identity IDA and a nonce N1 (to identify this transaction uniquely).
4. Bob requests Alice's public key in the same way (step 1).
5. Bob acquires Alice's public key in the same way as Alice did (step 2).
6. Bob replies to Alice by sending an encrypted message with N1 plus a newly generated nonce N2
(to identify this transaction uniquely).
7. Alice replies once more, encrypting Bob's nonce N2, to assure Bob that his correspondent is Alice.

Thus, a total of seven messages are required. However, the initial four messages need be used only
infrequently because both A and B can save the other‘s public key for future use, a technique known as
caching. Periodically, a user should request fresh copies of the public keys of its correspondents to ensure
currency.

Public-Key Certificates

The above technique looks attractive, but still has some drawbacks. For any
communication between any two users, the central authority must be consulted by both users to get the
newest public keys, i.e. the central authority must be online 24 hours a day. If the central authority goes
offline, all secure communications grind to a halt. This clearly leads to an undesirable bottleneck.

A further improvement is to use certificates, which can be used to exchange


keys without contacting a public-key authority, in a way that is as reliable as if the keys were obtained
directly from a public-key authority. A certificate binds an identity to public key, with all contents signed
by a trusted Public-Key or Certificate Authority (CA). A user can present his or her public key to the
authority in a secure manner, and obtain a certificate. The user can then publish the certificate. Anyone
needing this user's public key can obtain the certificate and verify that it is valid by way of the attached
trusted signature. A participant can also convey its key information to another by transmitting its
certificate. Other participants can verify that the certificate was created
by the authority.

This certificate issuing scheme does have the following requirements:

1. Any participant can read a certificate to determine the name and public key of the certificate‘s
owner.
2. Any participant can verify that the certificate originated from the certificate authority and is not
counterfeit.
3. Only the certificate authority can create and update certificates.
4. Any participant can verify the currency of the certificate.

Figure 4.9: certificate authority

Application must be in person or by some form of secure authenticated communication.


For participant A, the authority provides a certificate of the form
CA = E(PRauth, [T||IDA||PUa])
where PRauth is the private key used by the authority and T is a timestamp. A may then pass this
certificate on to any other participant, who reads and verifies the certificate as follows:
D(PUauth, CA) = D(PUauth, E(PRauth, [T||IDA||PUa])) = (T||IDA||PUa)
The recipient uses the authority‘s public key, PUauth to decrypt the
certificate. Because the certificate is readable only using the authority‘s public key, this verifies that the
certificate came from the certificate authority. The elements IDA and PUa provide the recipient with the
name and public key of the certificate‘s holder. The timestamp T validates the currency of the certificate.
The timestamp counters the following scenario. A‘s private key is learned by an adversary. A generates a
new private/public key pair and applies to the certificate authority for a new certificate.

Meanwhile, the adversary replays the old certificate to B. If B then encrypts


messages using the compromised old public key, the adversary can read those messages. In this context,
the compromise of a private key is comparable to the loss of a credit card. The owner cancels the credit
card number but is at risk until all possible communicants are aware that the old credit card is obsolete.
Thus, the timestamp serves as something like an expiration date. If a certificate is sufficiently old, it is
assumed to be expired.
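A toy illustration of the certificate format CA = E(PRauth, [T||IDA||PUa]) and of the timestamp check is given below. It reuses the tiny textbook-RSA signing operation from the examples earlier in this unit over a shortened hash, which real certificate authorities do not do; the X.509 certificates described next use full-strength signature schemes. All names and key values here are purely illustrative.

import hashlib, json, time

PU_auth, PR_auth = (7, 187), (23, 187)          # authority's toy key pair

def digest(body, n):
    h = hashlib.sha256(body.encode()).digest()
    return int.from_bytes(h, "big") % n         # shrink hash to fit the toy modulus

def issue_certificate(identity, user_public_key):
    body = json.dumps({"T": int(time.time()), "ID": identity, "PU": user_public_key})
    d, n = PR_auth
    return body, pow(digest(body, n), d, n)     # sign the digest with PRauth

def verify_certificate(body, signature, max_age=3600):
    e, n = PU_auth
    fields = json.loads(body)
    fresh = time.time() - fields["T"] <= max_age          # timestamp / expiry check
    return fresh and pow(signature, e, n) == digest(body, n), fields

cert_body, cert_sig = issue_certificate("IDA", [11, 143])
ok, fields = verify_certificate(cert_body, cert_sig)
print(ok, fields["ID"], fields["PU"])           # True IDA [11, 143]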

One scheme has become universally accepted for formatting public-key


certificates: the X.509 standard. X.509 certificates are used in most network security applications,
including IP security, secure sockets layer (SSL), secure electronic transactions (SET), and S/MIME.

Public Key Distribution of Secret Keys


Public-key encryption is usually viewed as a vehicle for the distribution of secret keys to be used for
conventional encryption and the main reason for this is the relatively slow data rates associated with
public-key encryption.

Simple Secret Key Distribution:

If A wishes to communicate with B, the following procedure is employed:

1. A generates a public/private key pair {PUa, PRa} and transmits a message to B consisting of PUa
and an identifier of A, IDA.
2. B generates a secret key, Ks, and transmits it to A, encrypted with A‘s public key.
3. A computes D(PRa, E(PUa, Ks)) to recover the secret key. Because only A can decrypt the
message, only A and B will know the identity of Ks.
4. A discards PUa and PRa and B discards PUa.

Figure 4.10: Simple Secret Key Distribution

In this case, if an adversary, E, has control of the intervening communication channel, then E can
compromise the communication in the following fashion without being detected:

1. A generates a public/private key pair {PUa, PRa} and transmits a message intended
for B consisting of PUa and an identifier of A, IDA.
2. E intercepts the message, creates its own public/private key pair {PUe, PRe} and transmits
PUe||IDA to B.
3. B generates a secret key, Ks, and transmits E(PUe, Ks).
4. E intercepts the message, and learns Ks by computing D(PRe, E(PUe, Ks)).
5. E transmits E(PUa, Ks) to A.

The result is that both A and B know Ks and are unaware that Ks has also been revealed to E. A and B
can now exchange messages using Ks. E no longer actively interferes with the communications channel
but simply eavesdrops; knowing Ks, E can decrypt all messages, and both A and B are unaware of the
problem. Thus, this simple protocol is only useful in an environment where the only threat is
eavesdropping.

Secret Key Distribution with Confidentiality and Authentication


It is assumed that A and B have exchanged public keys by one of the schemes described earlier. Then the
following steps occur:

Figure 4.11: Secret Key Distribution with Confidentiality and Authentication

1. A uses B‘s public key to encrypt a message to B containing an identifier of A (IDA) and a nonce
(N1), which is used to identify this transaction uniquely.
2. B sends a message to A encrypted with PUa and containing A's nonce (N1) as well as a new
nonce generated by B (N2). Because only B could have decrypted message (1), the presence of N1
in message (2) assures A that the correspondent is B.
3. A returns N2 encrypted using B‘s public key, to assure B that its correspondent is A.
4. A selects a secret key Ks and sends M = E(PUb, E(PRa, Ks)) to B. Encryption of this message
with B‘s public key ensures that only B can read it; encryption with
A‘s private key ensures that only A could have sent it.
5. B computes D(PUa, D(PRb, M)) to recover the secret key.

The result is that this scheme ensures both confidentiality and authentication in the exchange of a secret
key

4.2. Firewalls
Firewall design principles

Internet connectivity is no longer an option for most organizations. However, while internet access
provides benefits to the organization, it enables the outside world to reach and interact with local network
assets. This creates a threat to the organization. While it is possible
to equip each workstation and server on the premises network with strong security features, such as
intrusion protection, this is not a practical approach. The alternative, increasingly accepted, is
the firewall.

The firewall is inserted between the premises network and the internet to establish a controlled link and to
erect an outer security wall or perimeter. The aim of this perimeter is to protect the premises network
from internet-based attacks and to provide a single choke point
where security and audit can be imposed. The firewall can be a single computer system or a set of two or
more systems that cooperate to perform the firewall function.

Firewall characteristics:

 All traffic from inside to outside, and vice versa, must pass through the firewall. This is achieved
by physically blocking all access to the local network except via the firewall. Various
configurations are possible.
 Only authorized traffic, as defined by the local security policy, will be allowed to pass. Various
types of firewalls are used, which implement various types of security policies.
 The firewall itself is immune to penetration. This implies the use of a trusted system with a
secure operating system.

Four techniques that firewalls use to control access and enforce the site's security policy are as follows:

 Service control – determines the type of internet services that can be accessed, inbound or
outbound. The firewall may filter traffic on the basis of IP address and TCP port number; may
provide proxy software that receives and interprets each service request before passing it on; or
may host the server software itself, such as web or mail service.
 Direction control – determines the direction in which particular service request may be initiated
and allowed to flow through the firewall.
 User control – controls access to a service according to which user is attempting to access it.
 Behavior control – controls how particular services are used.

Capabilities of firewall

 A firewall defines a single choke point that keeps unauthorized users out of the protected
network, prohibits potentially vulnerable services from entering or leaving the network, and
provides protection from various kinds of IP spoofing and routing attacks.
 A firewall provides a location for monitoring security related events. Audits and alarms can be
implemented on the firewall system.
 A firewall is a convenient platform for several internet functions that are not security related.
 A firewall can serve as the platform for IPsec.

Limitations of firewall

 The firewall cannot protect against attacks that bypass the firewall. Internal systems may have
dial-out capability to connect to an ISP. An internal LAN may support a modem pool that
provides dial-in capability for traveling employees and telecommuters.

 The firewall does not protect against internal threats. The firewall does not protect against
internal threats, such as a disgruntled employee or an employee who unwittingly cooperates with
an external attacker.
 The firewall cannot protect against the transfer of virus-infected programs or files. Because of the
variety of operating systems and applications supported inside the perimeter, it would be
impractical and perhaps impossible for the firewall to scan all incoming files, e-mail, and
messages for viruses.

Types of firewalls

There are 3 common types of firewalls.

 Packet filters
 Application-level gateways
 Circuit-level gateways

Packet filtering router

A packet filtering router applies a set of rules to each incoming IP packet and then forwards or discards
the packet. The router is typically configured to filter packets going in both directions.
Filtering rules are based on the information contained in a network packet:

 Source IP address – IP address of the system that originated the IP packet.


 Destination IP address – IP address of the system that the IP packet is trying to reach.
 Source and destination transport level address – transport level port number.
 IP protocol field – defines the transport protocol.
 Interface – for a router with three or more ports, which interface of the router the packet came
from or which interface of the router the packet is destined for.

Figure 4.12: packet filtering router

The packet filter is typically set up as a list of rules based on matches to fields in the IP or TCP header. If
there is a match to one of the rules, that rule is invoked to determine whether to forward or discard the
packet. If there is no match to any rule, then a default action is taken.

Two default policies are possible:

 Default = discard: That which is not expressly permitted is prohibited.

 Default = forward: That which is not expressly prohibited is permitted.

The default discard policy is the more conservative. Initially everything is blocked, and services must be
added on a case-by-case basis. This policy is more visible to users, who are most likely to see the firewall
as a hindrance. The default forward policy increases ease of use for end users
but provides reduced security.
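The first-match-wins logic and the two default policies can be sketched in a few lines. The rule fields mirror the filtering criteria listed above; the rule syntax and the addresses are illustrative and are not taken from any particular firewall product.

# First-match packet filtering with a configurable default policy.
RULES = [
    ({"src": "*", "dst": "192.168.0.5", "proto": "TCP", "dport": 25}, "forward"),  # inbound mail
    ({"src": "*", "dst": "*",           "proto": "TCP", "dport": 23}, "discard"),  # block Telnet
]
DEFAULT = "discard"     # "that which is not expressly permitted is prohibited"

def filter_packet(packet):
    for match, action in RULES:
        if all(v == "*" or packet.get(k) == v for k, v in match.items()):
            return action                       # first matching rule wins
    return DEFAULT                              # no rule matched: apply default policy

print(filter_packet({"src": "203.0.113.9", "dst": "192.168.0.5",
                     "proto": "TCP", "dport": 25}))    # forward
print(filter_packet({"src": "203.0.113.9", "dst": "192.168.0.5",
                     "proto": "TCP", "dport": 80}))    # discard (default policy)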

Advantages of packet filter router

 Simple
 Transparent to users
 Very fast

Weakness of packet filter firewalls

 Because packet filter firewalls do not examine upper-layer data, they cannot prevent attacks that
employ application specific vulnerabilities or functions.
 Because of the limited information available to the firewall, the logging functionality present in
packet filter firewall is limited.
 It does not support advanced user authentication schemes.
 They are generally vulnerable to attacks such as network layer address spoofing.

Some of the attacks that can be made on packet filtering routers and the appropriate counter measures are
the following:

 IP address spoofing – the intruders transmit packets from the outside with a source IP address
field containing an address of an internal host.

Countermeasure: to discard packet with an inside source address if the packet arrives on an external
interface.

 Source routing attacks – the source station specifies the route that a packet should take as it
crosses the internet; i.e., it will bypass the firewall.

Countermeasure: to discard all packets that uses this option.

 Tiny fragment attacks – the intruder creates extremely small fragments and forces the TCP header
information into a separate packet fragment. The attacker hopes that only the first fragment is
examined and the remaining fragments are passed through.

Countermeasure: to discard all packets where the protocol type is TCP and the IP fragment offset is
equal to 1.

Application level gateway

An Application level gateway, also called a proxy server, acts as a relay of application level traffic. The
user contacts the gateway using a TCP/IP application, such as Telnet or FTP, and the gateway asks the
user for the name of the remote host to be accessed. When the user responds
and provides a valid user ID and authentication information, the gateway contacts the application on the
remote host and relays TCP segments containing the application data between the two endpoints.

Application level gateways tend to be more secure than packet filters. It is easy to log and audit all
incoming traffic at the application level. A prime disadvantage is the additional processing overhead on
each connection.

Figure 4.13: Application level gateway

Circuit level gateway


Circuit level gateway can be a stand-alone system or it can be a specified function performed by an
application level gateway for certain applications. A Circuit level gateway does not permit an end-to-end
TCP connection; rather, the gateway sets up two TCP connections, one between itself and a TCP user on
an inner host and one between itself and a TCP user on an outer host. Once the two connections are
established, the gateway typically relays TCP segments from one connection to the other without
examining the contents. The security function consists of determining which connections will be allowed.

A typical use of Circuit level gateways is a situation in which the system administrator trusts the internal
users. The gateway can be configured to support application level or proxy service on inbound
connections and circuit level functions for outbound connection

Figure 4.14: Circuit level gateway

UNIT - V

5.1. Viruses and Other Wildlife


 The word virus has become a generic term describing a number of different types of attacks on
computers using malicious code.
 Many have been infected at least once, either by one of the famous attacks such as Melissa,
ExploreZip, MiniZip, Code Red, NIMDA, BubbleBoy, I LoveYou, NewLove, KillerResume,
Kournikova, NakedWife, or Klez;
 Each of these uses a certain amount of the computer's resources to display or gather data about
the user.

Malicious Logic

 Computer viruses, worms, and Trojan horses are effective tools with which to attack computer
systems.
 They assume an authorized user‘s identity.
 This makes most traditional access controls useless.
 We study malicious logic, focusing on Trojan horses and computer viruses, and discuss defenses.

5.2. A Malware Taxonomy


 Denial of service attack (DoS) – An attack that produces so many requests for system resources on the
computer under attack (such as calls to the operating system, or opening dialogs with other
machines and then hanging onto the line to tie them up) that normal functions on the targeted
computer are overwhelmed and cease.
 Distributed DoS attack (DDoS) – DoS attack launched from many different computers, usually
zombies hijacked for this purpose.
 Rootkit – Malware, usually a small suite of programs, that install a new account or steal an
existing one, and then elevate the security level of that account to the highest degree (root for
Unix, Administrator for Windows) so that attackers can do their will without obstruction.
 Sniffer – An attack, usually a Trojan horse, that monitors computer transactions or keystrokes. A
keystroke logger, for instance, detects sensitive information by monitoring the user's keystrokes.
 Trojan horse – Malware named for its method of getting past computer defenses by pretending to
be something useful.
 Zombie – A corrupted computer that is waiting for instructions and commands from its master,
the attacker.

Financial Effects of Malicious Programs


Spending time recovering from a virus steals opportunity in a few ways:

 The time and effort it takes to root out the virus and repair the damage.
 The diversion of time and effort from what may have been revenue production.
 The out-and-out loss of computer hardware (rare these days) or documents, files, and applications
that either cannot be recovered, or for which the time and expense of recovery can't be justified.
 The classification of malicious code into categories such as "virus" or "worm" is today somewhat
quaint.
 Attackers who want to harm your system will get there any way they can, often by whipping up a
software hybrid that blurs these definitions.
 For this reason, modern attack tools tend to be labeled by their function more than their
genealogy.
 Hence there are viruses, worms, rootkits, Trojan horses, password sniffers, and zombies.
 In this course we shall call all such programs malicious code, or for short, malware
 Malicious logic is a set of instructions that cause a site‘s security policy to be violated.

5.3. Viruses and Public Health

 Most malicious code today is concerned not only with trashing your machine, but also with using
your machine to infect others.
 A classic example is the software used to create a DDoS attack.
 After hiding itself in your computer, modern malware typically seeks information from you to use
to infect others, and it usually finds it in your address book or by prowling your local area
network.
 The malware then stalks its new victims, often by sending an email in your name and infects them
as well

5.3.1 Viruses
 A virus is a code fragment that copies itself into a larger program, modifying that program.
 It is not an independent program but depends upon a host program, which it infects.
 A virus executes only when its host program begins to run.
 The virus then replicates itself, infecting other programs as it reproduces.
 After seeing to its own reproduction, it then does whatever dirty work it carries in its
programming, or payload.
 A virus might start reproducing right away, or it might lie dormant for some time, until it is
triggered by a particular event (e.g., the Friday the 13th virus).
 A virus may infect memory, a floppy disk, a hard drive, a backup tape, or any other type of
storage.
 Viruses also can move about as macros, such as those written in the scripting language used to
automate keystrokes in office programs such as Microsoft Word or Excel.

5.3.2 The history of viruses


 1949 – John von Neumann presented a paper on the "Theory and Organization of Complicated
Automata," in which he postulated that a computer program could reproduce.
 1950 – At Bell Labs, a game they called "Core Wars" was played, in which two programmers would unleash
software "organisms" and watch as they vied for control of the computer.

 1984 – Ken Thompson described the development of what can be considered the first practical
computer virus. Thompson wrote a self-reproducing program in the C programming language.

5.3.3 Types of Viruses


Several types of computer viruses have been identified

 Boot Sector Infectors


 Multipartite Viruses
 Stealth Viruses
 Encrypted Viruses
 Polymorphic Viruses
 Macro Viruses

Boot Sector Infectors

 The boot sector is the part of a disk used to bootstrap the system or mount a disk
 Code in that sector is executed when the system "sees" the disk for the first time
 When the system boots, or the disk is mounted, any virus in that sector is executed. (The actual
boot code is moved to another place, possibly another sector.)
 A boot sector infector is a virus that inserts itself into the boot sector of a disk.

5.3.4 Executable Infectors
 The PC variety of executable infectors is called COM or EXE viruses because they infect
programs with those extensions.
 The virus can prepend itself to the executable or append itself.
 An executable infector is a virus that infects executable programs

Multipartite Viruses

 A multipartite virus is one that can infect either boot sectors or applications
 Such a virus typically has two parts, one for each type.

Stealth Viruses

 Stealth viruses are viruses that conceal the infection of files.


 It avoids detection by modifying parts of the system that could be used to detect it.

Encrypted Viruses

 Computer virus detectors often look for known sequences of code to identify computer viruses.

 To conceal these sequences, some viruses encipher most of the virus code, leaving only a small
decryption routine and a random cryptographic key in the clear.
 An encrypted virus is one that enciphers all of the virus code except for a small decryption
routine.

Polymorphic Viruses
 A polymorphic virus is a virus that changes its form each time it inserts itself into another
program
 Consider an encrypted virus. The body of the virus varies depending on the key chosen, so
detecting known sequences of instructions will not detect the virus.
 However, the decryption algorithm can be detected. Polymorphic viruses were designed to
prevent this.

5.3.5 Macro Viruses


A macro virus is a virus composed of a sequence of instructions that is interpreted, rather than executed
directly.

 A macro virus can infect either executable or data files.


 If it infects executable files, it must arrange to be interpreted at some point.
 Macro viruses are not bound by machine architecture.

5.3.6 Virus Detection


The following techniques are used to detect viruses:

 Scanning
o Once a virus has been detected , it is possible to write scanning program that look for
signature string characteristics of the virus
 Integrity checking with checksums
o Integrity checking reads the entire disk and records integrity

5.4. Computer Worms


 A computer virus infects other programs. A variant of the virus is a program that spreads from
computer to computer, spawning copies of itself on each one.
 A worm is a program that replicates and propagates itself without having to attach itself to a host
 Worms can continue replicating themselves until they completely fill available resources, such as
memory, hard drive space, and network bandwidth.

Viruses and Worms

 Viruses and worms can be used to infect a system and modify a system to allow a hacker to gain
access. Many viruses and worms carry Trojans and backdoors.
 A virus and a worm are similar in that they're both forms of malicious software (malware).
 A virus infects another executable and uses this carrier program to spread itself. The virus code is
injected into the previously benign program and is spread when the program is run.
 A worm is similar to a virus in many ways but does not need a carrier program. A worm can self-
replicate and move from infected host to another host.

 A worm spreads from system to system automatically, but a virus needs another program in order
to spread.

History of Worms

 1975 – In John Brunner's science fiction novel The Shockwave Rider, programs called
"tapeworms" lived inside computers, spread from machine to machine, and were "indefinitely
self-perpetuating so long as the net exists."
 1980 – John Schoch and Jon Hupp, researchers at Xerox Palo Alto Research Center, developed
the first experimental worm programs as a research tool.
 The Xerox PARC worms were, on the whole, useful creatures; they handled mail, ran distributed
diagnostics, and performed other distributed functions.

The Morris Worm

 A creation of Robert Tappan Morris, a 23-year-old doctoral student from Cornell, who on the
second of November 1988, at about 6:00 p.m., released a self-replicating bit of code onto the
Internet designed to spread itself freely, but to do little else.
 There was no dangerous payload.
 Soon, however VAX and Sun machines (the only systems targeted) across the country started to
bog down.
 This same scene was replayed at the sites of over 6,000 machines across the country.
 While no physical damage was caused by the worm, the U.S. General Accounting Office
estimated that the worm cost between $100,000 and $10,000,000 due to lost access.

5.5. Trojan Horses and Backdoors

5.5.1 Trojan Horses
 Trojans and backdoors are types of malware used to infect and compromise computer systems
 A Trojan horse is a program with an overt effect and a covert effect.
 An overt channel is the normal and legitimate way that programs communicate within a
computer system or network.
 A covert channel uses programs or communications paths in ways that were not intended
 Trojans can use covert channels to communicate. Some client Trojans use covert channels to
send instructions to the server component on the compromised system.
 Trojan horses can make copies of themselves (a propagating Trojan horse).
 Trojan horse hides in an independent program that performs a useful or appealing function or
appears to perform that function.
 Along with the apparent function, however, the program performs some other unauthorized
operation
 A typical Trojan horse tricks a user into running a program, often an attractive or helpful one.
When the unsuspecting user runs the program, it does indeed perform the expected function.
 But its real purpose is often to penetrate the defenses of the system by usurping the user‘s
legitimate privileges and thus obtaining information that the penetrator isn‘t authorized to access.
 An example of this would be the modern rootkit, which is a script that controls a small suite of
programs that create an administrative level account on the targeted system, and then create a
backdoor.

 Backdoor is an unmonitored entrance way that evades the security mechanisms, through which
the attacker can later gain convenient access.

5.5.2. Backdoors
 Backdoor is a program or a set of related programs that a hacker installs on a target system to
allow access to the system at a later time.
 A backdoor can be embedded in a malicious Trojan.
 The objective of installing a backdoor on a system is to give hackers access into the system at a
time of their choosing.
 The key is that the hacker knows how to get into the backdoor undetected and is able to use it to
hack the system further and look for important information.

5.5.3 Types of Trojans
 The most common types of Trojans
 Remote Access Trojans (RATs) Used to gain remote access to a system.
 Data-Sending Trojans Used to find data on a system and deliver data to a hacker.
 Destructive Trojans Used to delete or corrupt files on a system.
 Denial-of-Service Trojans Used to launch a denial-of-service attack.
 Proxy Trojans Used to tunnel traffic or launch hacking attacks via other systems.
 FTP Trojans Used to create an FTP server in order to copy files onto a system.
 Security Software Disabler Trojans Used to stop antivirus software

Virus and Worm Hoaxes

 Hoaxes are false alarms reporting a non-existent virus.


 They disrupt the harmony and flow of an organization when people send group e-mails warning of
supposedly dangerous viruses that do not exist.
 As frustrating as viruses and worms are, perhaps more time and money is spent on resolving virus
hoaxes.
 The network becomes overloaded, and much time and energy is wasted as users forward the
warning message to everyone they know, post the message on bulletin boards, and try to update
their antivirus protection software.
 A number of Internet resources enable individuals to research viruses to determine if they are fact
or fiction.
 www.cert.org or www.hoax-slayer.com

5.6. Other Forms of Malicious Logic


 Logic Bombs
 Rabbits and Bacteria
 Spyware
 Spam
 Software problems (The Buffer-Overflow Attack)
 Software Attacks
 Hardware Threats …etc

Logic Bombs

 A logic bomb is a type of malware that executes its malicious purpose when a specific criterion is
met.
 Such as a user logging in or the arrival of midnight, Friday the 13th.
 The most common factor is date/time
 Logic bomb might delete files on a certain date/time
 Disaffected employees may plant Trojan horses in systems use logic bombs such as deleting the
payroll roster when that user‘s name is deleted.

Types of Bombs

1. A bomb that is set to go off on a particular date or after some period of time has elapsed is
called a time bomb (e.g. Friday the 13th).
2. A bomb that is set to go off when a particular event occurs is called a logic bomb.

Rabbits and Bacteria

 A bacterium or a rabbit is a program that absorbs all of some class of resource.


 It multiplies so rapidly that resources become exhausted; this creates a denial of service attack.

while true
do
  mkdir x
  cd x
done

Spyware

 Spyware is simply software that literally spies on what you do on your computer.
 Spyware can be as simple as a cookie used by a website to record a few brief facts about your
visit to that website, or spyware could be of a more insidious type, such as a key logger
 A cookie is a text file that your browser creates and stores on your hard drive; a website you have
visited downloads it to your machine and uses it to recognize you when you return to the site.
 Key loggers are programs that record every keystroke you make on your keyboard. This
spyware then logs your keystrokes to the spy's file.
 The most common use of a key logger is to capture usernames and passwords.
 And can capture every document you type, as well as anything else you might type
 This data can be stored in a small file hidden on your machine for later extraction or sent out in
TCP packets to some predetermined address.
 The logger may wait until after hours to upload this data to some server, or use your own email
software to send the data to an anonymous email address.
 There are also some key loggers that take periodic screenshots from your machine, revealing
anything that is open on your computer.
 Spyware is software that literally spies on your activities on a particular computer.

Spam

Spam is unwanted email.

 Spam is email that is sent out to multiple parties, that is unsolicited.


 Often it is used for marketing purposes, but it can be used for much more malicious goals.
o Can be used to spread a virus or worm.
 Spam is also used to send emails enticing recipients to visit phishing websites in order to steal the
recipient's identity.
 Essentially, spam is, at best, an annoyance and, at worst, a vehicle for spyware, viruses, worms,
and phishing attacks.

Software Attacks

 Password Crack
 Attempting to reverse-calculate a password is often called cracking.
 A cracking attack is a component of many dictionary attacks.
 It is used when a copy of a password (typically its hash) has been obtained and candidate passwords are
compared against it; if they are the same, the password has been cracked.
 It can be a brute force or a dictionary attack.
 Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS)
 In a denial-of-service (DoS) attack, the attacker sends a large number of requests to a target that
the target system becomes overloaded and cannot respond to legitimate requests for service
 A distributed denial of- service (DDoS) is an attack in which a coordinated stream of requests is
launched against a target from many locations at the same time
 Spoofing is a technique used to gain unauthorized access to computers, wherein the intruder
sends messages with a source IP address that has been forged to indicate that the messages are
coming from a trusted host
 Routers and firewall arrangements can offer protection against IP spoofing
 Man-in-the-middle or TCP hijacking attack, an attacker monitors (or sniffs) packets from the
network, modifies them, and inserts them back into the network.
 May uses IP spoofing to enable an attacker to impersonate another entity on the network.
 It allows the attacker to eavesdrop as well as to change, delete, reroute, add, forge, or divert data.
 A sniffer is a program or device that can monitor data traveling over a network.
 Sniffers can be used both for legitimate network management functions and for stealing
information.
 Sniffers add risk to the network, because many systems and users send information on local
networks in clear text.
 A sniffer program shows all the data going by, including passwords, the data inside files such as
word-processing documents—and screens full of sensitive data from applications
 Social engineering is the process of using social skills to convince people to reveal access
credentials or other valuable information to the attacker.
 A perpetrator posing as a person higher in the organizational hierarchy than the victim.
 To prepare for this false representation, the perpetrator may have used social engineering tactics
against others in the organization to collect seemingly unrelated information that, when used
together, makes the false representation more credible

5.7. Counter measures
 There are many programs that can help you keep viruses and other wildlife away from your
system and can wipe out the critters if they gain access (virus protection programs)
 These products, and the system administration procedures that go along with them, have two
overlapping goals:
 They don't let you run a program that's infected, and they keep infected programs from
damaging your system.

Antivirus

 Virus protection software uses two main techniques:


 The first uses signatures, which are snapshots of the code patterns of the virus (a short sketch of
signature scanning follows this list).
 The antivirus program lurks in the background watching files come and go until it detects a
pattern that aligns with one of its stored signatures, and then it sounds the alarm and maybe
isolates or quarantines the code.
 Alternatively, the virus protection program can go looking for trouble. It can periodically scan the
various disks and memories of the computer, detecting and reporting suspicious code segments,
and placing them in quarantine.
 One problem with signature-based virus protection programs is that they require a constant flow
of new signatures in response to evolving attacks.
 Their publishers stay alert for new viruses, determine the signatures, and then make them
available as updated virus definition tables to their users.
 Another problem is called the Zero Day problem. Basically, this occurs when a user trips over a
new virus before the publisher discovers it and can issue an updated signature.
 A third problem is that, just as with biological pathogens, viruses can mutate. Sometimes this
happens accidentally; other times, it happens because a clever programmer uses file compression
software to change the signature of the virus to elude signature detection.
 This means it can change its own form by introducing extra statements or adding random
numbers, to elude signature detection.
 To counter these, virus protection publishers are adding what is called heuristic detection features
to their wares.
 A heuristic is a rule or behavior. If a virus exhibits that behavior, the antivirus software tries to
stop it in the act.
 For instance, code that suddenly accesses a critical operating system area or file, unexplained
changes in file size (particularly in system files), sudden decreases in available hard
disk space, or changes in file time or date stamps.
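Signature scanning boils down to searching files for known byte patterns. The sketch below uses invented demonstration signatures only; real scanners combine very large, constantly updated signature databases with the heuristic rules just described.

import os, sys

SIGNATURES = {                                   # invented patterns, for demonstration
    "demo-signature-1": b"\xde\xad\xbe\xef\x13\x37",
    "demo-signature-2": b"EVIL_PAYLOAD_MARKER",
}

def scan_file(path):
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_tree(root):
    for d, _, files in os.walk(root):
        for name in files:
            path = os.path.join(d, name)
            hits = scan_file(path)
            if hits:
                print("QUARANTINE CANDIDATE:", path, hits)

scan_tree(sys.argv[1] if len(sys.argv) > 1 else ".")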

What Makes a System Secure?

 System access controls – ensure that unauthorized users don‘t get into the system and encourage
(sometimes force) authorized users to be security-conscious.
 Data access controls – monitor who can access what data, and for what purpose. Another word
for this is authorization, that is, what you can do once you are authenticated.
o discretionary access controls
o mandatory access controls
 System and Security Administration – methods perform the offline procedures that make or
break a secure system—by clearly delineating system administrator responsibilities, by training
users appropriately, and by monitoring users to make sure that security policies are observed.

 System Design – take advantage of basic hardware and software security characteristics.

I. System Access: Logging into Your System

Trying to log into a system is a kind of challenge/response scenario.


You tell the system who you are, and the system requests that you prove it by providing information that
matches what the computer has stored about you.
In security terms, this two-step process is called identification and authentication

Identification and Authentication

 Identification is the way you tell the system who you are.
 Authentication is the way you prove to the system that you are who you say you are.

There are three classic ways to do so:

 What you know – The most familiar example is a password. The theory is that if you know the
secret password for an account, you must be the owner of that account.
 What you have – Examples are keys, tokens, badges, and smart cards you must use to unlock
your terminal or your account. The theory is that if you have the key or equivalent, you must be
the owner of it.
 What you are – Examples are physiological or behavioral traits, such as your fingerprint,
handprint, retina pattern, iris pattern, voice, signature, or keystroke pattern. Biometric systems
compare your particular trait against the one stored for you and determine whether you are who
you claim to be.

Multifactor authentication

 Multifactor authentication is a way to cascade the three methods listed previously such that if an
attacker gets past one safeguard, they still have to pass another.
 Passwords are still, far and away, the authentication tool of choice.
 In a multifactor authentication system, username and password would be augmented with one of
the other two systems.

Login Processes

 Encryption – This method scrambles a password so that it cannot be deciphered by someone


who monitors storage or transmissions.
 Challenge and response – With this method, the user is asked to authenticate at the beginning of
the exchange, and frequently at random intervals thereafter.

Models of Login mechanisms

 Password Authentication Protocol – user provides a username and password, and these are
compared with values stored in a table to see if they match.
 Challenge Handshake Authentication Protocol (CHAP) – the device doing the authenticating,
usually a network server, sends the client program an ID value and a random number, and both
the sender and peer share a predefined secret word, phrase or value (a sketch follows this list).

 Mutual authentication – can be thought of as two-way authentication. The client authenticates to
the server, and then the server authenticates to the client or workstation.
 One-time password – is a variation of the username/password combination. With OTP, the user
creates a password, and the system creates a variation of the password each time a password is
required.
 Per-session authentication – requires the client to re-authenticate for each exchange of
information; this is burdensome, but it provides a great deal of security.
 Tokens – A token or token card is usually a small device that supplies the response to a challenge
that is received when trying to log on.
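The CHAP entry above can be illustrated with a short hash-based exchange. This is a schematic sketch of the idea, hashing the identifier, the shared secret, and the random challenge together; it omits CHAP's actual message framing and its periodic re-challenges.

import hashlib, os

SHARED_SECRET = b"correct horse battery staple"      # known to both ends, never sent

def response(identifier, challenge, secret):
    return hashlib.md5(identifier + secret + challenge).hexdigest()

# Server side: issue an identifier and a random challenge
ident, challenge = os.urandom(1), os.urandom(16)

# Client side: prove knowledge of the secret without transmitting it
answer = response(ident, challenge, SHARED_SECRET)

# Server side: recompute the expected value and compare
print(answer == response(ident, challenge, SHARED_SECRET))   # True -> accept login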

Table 5.1: Sample login/password controls
