Network and System Administration


Hawasa Institute of Technology
University Faculty of Informatics
Department of Computer Science

CHAPTER ONE: INTRODUCTION TO SYSTEM & NETWORK ADMINISTRATION
1.1 What is network and system administration?
 What is ‘the system’?
In system administration, the word system is used to refer both to the operating system of a computer and, often, collectively to the set of all computers that cooperate in a network. If we look at computer systems analytically, we would speak more precisely about human–computer systems.

There are three main components in a human–computer system:

1. Humans: who use and run the fixed infrastructure, and cause most problems.
2. Host computers: computer devices that run software. These might be in a fixed location, or mobile
devices.
3. Network hardware: This covers a variety of specialized devices including the following key
components:
o Routers: dedicated computing devices that direct traffic around the Internet. Routers talk at the IP address level, or ‘layer 3’, simplistically speaking.
o Switches: fixed hardware devices that direct traffic around local area networks. Switches talk at the level of Ethernet or ‘layer 2’ protocols, in common parlance.
o Cables: there are many types of cable that interconnect devices: fiber optic cables, twisted pair cables, null-modem cables, etc.
Network and system administration is a branch of engineering that concerns the operational management of
human–computer systems. It is unusual as an engineering discipline in that it addresses both the technology
of computer systems and the users of the technology on an equal basis. It is about putting together a network
of computers (workstations, PCs and supercomputers), getting them running and then keeping them running
in spite of the activities of users who tend to cause the systems to fail.
A system administrator works for users, so that they can use the system to produce work. However, a system
administrator should not just cater for one or two selfish needs, but also work for the benefit of a whole
community. Today, that community is a global community of machines and organizations, which spans
every niche of human society and culture, thanks to the Internet. It is often a difficult balancing act to

determine the best policy, which accounts for the different needs of everyone with a stake in a system. Once
a computer is attached to the Internet, we have to consider the consequences of being directly connected to
all the other computers in the world.
In the future, improvements in technology might render system administration a somewhat easier task – one
of pure resource administration – but, today, system administration is not just an administrative job, it is an
extremely demanding engineer's job. It's about hardware, software, user support, diagnosis, repair and
prevention. System administrators need to know a bit of everything: the skills are technical, administrative
and socio-psychological.
The terms network administration and system administration exist separately and are used both variously and
inconsistently by industry and by academics. System administration is the term used traditionally by
mainframe and UNIX engineers to describe the management of computers whether they are coupled by a
network or not. To this community, network administration means the management of network infrastructure
devices (routers and switches). The world of personal computers (PCs) has no tradition of managing
individual computers and their subsystems, and thus does not speak of system administration, per se.
To this community, network administration is the management of PCs in a network. Network and system
administration are increasingly challenging. The complexity of computer systems is increasing all the time.
Even a single PC today, running Windows NT, and attached to a network, approaches the level of complexity that mainframe computers had 23 years ago. We are now forced to think of systems, not just computers.
A key task of network and system administration is to build hardware configurations; another is to configure
software systems. Both of these tasks are performed for users. Each of these tasks presents its own
challenges, but neither can be viewed in isolation.
Hardware has to conform to the constraints of the physical world; it requires power, a temperate (usually
indoor) climate, and a conformance to basic standards in order to work systematically. The type of hardware
limits the kind of software that can run on it. Software requires hardware, a basic operating system
infrastructure and a conformance to certain standards, but is not necessarily limited by physical concerns as
long as it has hardware to run on.
Modern software, in the context of a global network, needs to inter-operate and survive the possible
hostilities of incompatible or inhospitable competitors. Today the complexity of multiple software systems
sharing a common Internet space reaches almost the level of the biological. In earlier days, it was normal to find proprietary solutions, whose strategy was to lock users into one company's products. Today that strategy
is less dominant, and even untenable, thanks to networking. Today, there is not only a physical environment
but a technological one, with a diversity that is constantly changing. Part of the challenge is to knit
apparently disparate pieces of this community into a harmonious whole. We apply technology in such an
environment for a purpose (running a business or other practice), and that purpose guides our actions and
decisions, but it is usually insufficient to provide all the answers. Software creates abstractions that change
the basic world view of administrators. The software domain .com does not have any fixed geographical
location, but neither do the domains .uk or .no. Machines belonging to these software domains can be located
anywhere in the world. It is not uncommon to find foreign embassies with domain names inside their country
of origin, despite being located around the world. We are thus forced to think globally.
The global view, presented to us by information technology, means that we have to think penetratingly about
the systems that are deployed. The extensive filaments of our inter-networked systems are exposed to attack,
both accidental and malicious in a competitive jungle. Ignore the environment and one exposes oneself to
unnecessary risk.
For humans, the task of system administration is a balancing act. It requires patience, understanding,
knowledge and experience. It is like working in the casualty ward of a hospital. Administrators need to be the
doctor, the psychologist, and – when instruments fail – the mechanic. We need to work with the limited
resources we have, be inventive in a crisis, and know a lot of general facts and figures about the way
computers work. We need to recognize that the answers are not always written down for us to copy, that
machines do not always behave the way we think they should. We need to remain calm and attentive, and
learn a dozen new things a year.
The Scope and duties of System Administration
Computing systems require the very best of organizational skills and the most professional of attitudes. To start down the road of system administration, we need to know many facts and build confidence through experience – but we also need to know our limitations in order to avoid the careless mistakes which are all too easily provoked.
The system administrator, or sysadmin, is the person responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers such as servers. The system administrator seeks to ensure that the uptime, performance, resources, and security of the systems they manage meet the needs of the users, without exceeding a set budget when doing so.
The scope of the system administrator's responsibilities is very broad, in that it can apply to a small or single office environment but can scale all the way up to a complex data center, with hundreds to thousands of servers
and physical and virtual networking and service infrastructures. The care and feeding of systems is part and parcel of the system administrator's job, but that function can cover a multitude of possibilities when looking at specific positions and the responsibilities that go with them. System administrators generally work with one or more of the major server operating systems as part of their day-to-day routine, among them Windows Server and various versions of Linux.
The system administrator has the following duties:
o Morning checks of systems/software
o Monitor system performance and provide security measures, troubleshooting and maintenance as
needed.
o Assist users to diagnose and solve their problems.
o Adding/deleting/modifying user account information, resetting passwords, etc.
o Design and implement systems, network configurations, and network architecture, including
hardware and software technology, site locations, and integration of technologies.
o Maintain the peripherals, such as printers, that are connected to the network.
o Identify areas of operation that need upgraded equipment, such as computers, drop cables (UTP), fiber optic cables, hubs, switches, routers, etc.
o Train users in the use of equipment.
o Develop and write procedures for installation, use, and troubleshooting of communications hardware
and software.
Ethical issues in System Administration
A system administrator wields great power. He or she has the means to read everyone's mail, change
anyone's files, to start and kill anyone's processes. This power can easily be abused and that temptation could
be great. Another danger is the temptation for an administrator to think that the system exists primarily for
him or her and that the users are simply a nuisance to the smooth running of things; if network service is
interrupted, or if a silly mistake is made which leads to damage in the course of an administrator's work, that
is okay: the users should accept these mistakes because they were made trying to improve the system. When
wielding such power there is always the chance that such arrogance will build up. Some simple rules of
thumb are useful.
The ethical integrity of a system administrator is clearly an important issue. Administrators for top secret
government organizations and administrators for small businesses have the same responsibilities towards
their users and their organizations. One has only to look at the governing institutions around the world to see

that power corrupts. Few individuals, however good their intentions, are immune to the temptations of such
power at one time or other. As with governments, it is perhaps a case of: those who wish for the power are least suited to deal with it.
Administrators 'watch over' backups, email, private communications and they have access to everyone's
files. While it is almost never necessary to look at a user's private files, it is possible at any time and users do
not usually consider the fact that their files are available to other individuals in this way. Users need to be
able to trust the system and its administrator.
o What kind of rules can you fairly impose on users?
o What responsibilities do you have to the rest of the network community, i.e. the rest of the world?
o Censoring of information or views
o Restriction of personal freedom
o Taking sides in personal disputes
o Extreme views (some institutions have policies about this)
o Unlawful behavior
Objectivity of the administrator means avoiding taking sides in ethical, moral, religious or political debates,
in the role of system administrator. Personal views should be kept separate from professional views.
However, the extent to which this is possible depends strongly on the individual and organizations have to be
aware of this. Some organizations dictate policy for their employees. This is also an issue to be cautious
with: if a policy is too loose it can lead to laziness and unprofessional behavior; if it is too paranoid or
restrictive it can lead to bad feelings in the organization. Historically, unhappy employees have been
responsible for the largest cases of computer crime.
Because computer systems are human–computer communities, there are ethical considerations involved in
their administration. Even if certain decisions can be made objectively, e.g. for maximizing productivity or
minimizing cost, one must have a policy for the use and management of computers and their users. Some
decisions have to be made to protect the rights of individuals. A system administrator has many
responsibilities and constraints to consider. Ethically, the first responsibility must be to the greater network
community, and then to the users of the system. An administrator's job is to make users' lives bearable and to empower them in the production of real work. The following are ethical considerations in Network and System Administration.
o Strive to ensure the necessary integrity, reliability, and availability of the system.
o Design and maintain each system to support the purpose of the system to the organization.

o Cooperate with the larger computing community to maintain the integrity of network and computing
resources.
o Strive to build and maintain a safe, healthy and productive workplace.
o Make decisions consistent with the safety, privacy, and well-being of the community and the public,
and to disclose promptly factors that might pose unexamined risks or dangers.
o Accept and offer honest criticism of technical work as appropriate and will credit properly the
contribution of others.
o Lead by example, maintaining high ethical standards and degree of professionalism in the
performance of all duties.
The challenges of System Administration
System administration is not just about installing operating systems. It is about planning and designing an
efficient community of computers so that real users will be able to get their jobs done. That means:
o Designing a network which is logical and efficient.
o Deploying large numbers of machines which can be easily upgraded later.
o Deciding what services are needed.
o Planning and implementing adequate security.
o Providing a comfortable environment for users.
o Developing ways of fixing errors and problems which occur.
o Keeping track of and understanding how to use the enormous amount of knowledge which increases
every year.
Some system administrators are responsible for both the hardware of the network and the computers which it
connects, i.e. the cables as well as the computers. Some are only responsible for the computers. Either way,
an understanding of how data flow from machine to machine is essential as well as an understanding of how
each machine affects every other.
A computer which is plugged into the network is not just yours. It is part of a society of machines which
shares resources and communicates with the whole. What your machine does affects other machines. What
other machines do affects your machine.
The ethical issues associated with connection to the network are not trivial. Administrators are responsible
for their organization's conduct to the entire rest of the Internet. This is a great responsibility which must be
borne wisely.

As a System Administrator, know your network


System administration requires its operatives to know a lot of facts about hardware and software. The road to
real knowledge is long and hard, so how does one get started?
A top down approach is useful for understanding the network interrelationships. The best place to start is at
the network level. In most daily situations, one starts with a network already in place--i.e. one doesn't have to
build one from scratch. It is important to know what hardware one has to work with and where everything is
to be found; how it is organized (or not) and so on. Here is a checklist:
o How does the network fit together? (What is its topology?)
o Which function does each host/machine have on the network?
o Where are the key network services?
Having thought about the network as a whole, one can begin to think about individual hosts/machines. First
there is a hardware list.
o What kind of machines are on the network? What are their names and addresses and where are they?
Do they have disks? How big?
o How much memory do they have? If they are PCs, which screen cards do they have?
o What operating systems are running on the network? MS-DOS, Novell, NT or Unix (if so which
Unix? GNU/Linux, Solaris, HPUX?)
o What kind of network cables are used? Is it thin/thick Ethernet? Is it a star net (hubs/twisted pair), or
fiber optic FDDI net?
o Where are hubs/repeaters/the router or other network control boxes located? Who is responsible for
maintaining them?
o What is the hierarchy of responsibility?
Then there is a software list:
o How many different subnets does your network have?
o What are their network addresses?
o Find the router addresses (the default routes) on each segment.
o What is the net mask?
o What is the local time zone?
o What broadcast address convention is used?
o Find the key servers on these networks?
o Where are the NFS network disks located? Which machine are they attached to?
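As a minimal sketch, assuming a modern Linux host with the iproute2 and NFS client tools installed, the following commands help answer several of the questions above (the host name fileserver and the paths are illustrative, not part of any particular network):
ip addr show                 # list interfaces, their IP addresses and netmasks
ip route show                # show the default route (router address) on each segment
cat /etc/resolv.conf         # show the DNS domain and name servers in use
date                         # shows the local time zone configured on this host
df -h                        # show mounted disks, including NFS mounts, and their sizes
showmount -e fileserver      # list the NFS exports of a (hypothetical) file server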

Finding and recording this information is an important learning process. The information will change as time
goes by. Networks are not static; they grow and evolve with time.

CHAPTER TWO: HOST MANAGEMENT


2.1. Booting and Shutting Down of an Operating System
The two most fundamental operations which one can perform on a host are to start it up and to shut it
down. With any kind of mechanical device with moving parts, there has to be a procedure for shutting it
down. One does not shut down any machine in the middle of a crucial operation, whether it be a washing
machine in the middle of a program, an aircraft in mid-flight, or a computer writing to its disk.
With a multitasking operating system, the problem is that it is never possible to predict when the system
will be performing a crucial operation in the background. For this simple reason, every multitasking
operating system provides a procedure for shutting down safely. A safe shutdown avoids damage to disks
by mechanical interruption, but it also synchronizes hardware and memory caches, making sure that no
operation is left incomplete.
2.1.1. Boot and Shutdown of Linux
The process of starting up Linux is multifaceted. So much happens during a system boot that it is easy to
lose touch with what procedures actually take place. Much of the wizardry of system administration is
simply familiarity with a process such as booting. Knowing this process well makes it fairly easy to
configure the system, to fix it when it breaks, and to explain it to your users.
The system performs a similar sequence of tasks after you command it to shut down. There are many active processes that must be shut down and devices and file systems that must be unmounted to avoid causing damage to your system. This process also occurs in stages.
 The Linux Boot Process
When you start up a Linux system, a series of events occurs after you power up and before you receive a
login prompt. This sequence is referred to as the boot process. Although this sequence can vary based on
configuration, the basic steps of the boot process can be summed up as follows:
1. The BIOS starts and checks for hardware devices
The BIOS is described as firmware because it is built into the hardware memory. The BIOS will
automatically run when the power is applied to the computer. The purpose of the BIOS is to find the
hardware devices that will be needed by the boot process, to load and initiate the boot program
stored in the Master Boot Record (MBR), and then to pass off control to that boot program. In the
case of Linux, the BIOS performs its checks and then looks to the MBR, which contains the first

stage boot loader, such as GRUB or LILO. After finding the boot loader, the BIOS initiates it. Note that sometimes, however, the MBR contains another boot loader, which in turn finds the boot loader on the first sector of a Linux partition.
2. The BIOS hands over control to the first stage boot loader, which then reads in the partition table and looks for the second stage boot loader on the partition configured as bootable.
3. The first stage boot loader runs the second stage boot loader, which then finds the kernel image
and runs it.
4. Running of the kernel
The kernel image contains a small, uncompressed program that decompresses the compressed
portion of the kernel and runs it. The kernel scans for system information, including the CPU type
and speed. Its drivers scan for other hardware and configure what they find. The kernel then mounts
the root file system in read-only mode to prevent corruption during the boot process.
5. The kernel starts the init process by running /sbin/init.
6. The init process starts up getty programs for the virtual consoles and serial terminals and initiates
other processes as configured and monitors them until shutdown.
This general boot process can be affected by various factors even within the same distribution. For
instance, the steps above assume the system has only one bootable kernel image. That's probably the
case when you first install, but you might also have a bootable sector installed with another
operating system, like Windows, or a different distribution of Linux. Later, if you install a different
version of the kernel and compile it, you'll have to configure your boot loader to see it.
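To see which boot loader is actually installed in the MBR, one can copy the first 512-byte sector to a file and inspect it. This is only a sketch: it assumes the first disk appears as /dev/sda (older IDE systems may use /dev/hda) and it must be run as root:
dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1   # copy the 512-byte master boot record
file /tmp/mbr.bin                               # 'file' often identifies the boot loader (GRUB, LILO, ...)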
The Master Boot Record (MBR) plays a crucial role in the boot up process. Located on the first disk
drive, in the first sector of the first cylinder of track 0 and head 0 (this whole track is generally reserved
for boot programs), it is a special area on your hard drive that is automatically loaded by your computer's
BIOS. Since the BIOS is loaded on an Electrically Erasable Programmable Read-Only Memory (EEPROM) chip, which is generally not reprogrammed at the user/administrator level, the MBR is the
earliest point at which a configured boot loader can take control of the boot process. There are several
boot loaders to choose from. Alternatives include System Commander, NTLDR, Linux Loader
(LILO), and the Grand Unified Boot loader (GRUB).
System Commander is a boot management utility that allows you to choose from up to 100 operating
systems at boot time.

NTLDR is the boot manager for Windows NT. Most likely, the default boot loader for your distribution
will be either LILO or GRUB, but any of these boot loaders can be installed after the initial operating
system installation. Whichever you choose, this boot loader is the first non-BIOS step in the boot process.
The GRUB
Red Hat began to use GRUB as the default boot loader. The main advantage of GRUB over LILO is that
GRUB is far more flexible. GRUB does not require you to run an install program each time you make
changes. If you make changes to LILO, say updating a kernel image that is included in the LILO
configuration, and forget to run the LILO installer program, you might have to boot a rescue disk to get
back into your system. With GRUB, you do not encounter this problem. Additionally, you can view some
information from your system using the GRUB boot prompt, which is not possible with other boot
loaders.
Another nice feature is the ability to decompress files that were compressed using gzip; the
decompression is transparent to the user. GRUB also cares less about disk geometries than other boot
loaders. In fact, you can relocate the kernel image and GRUB will still find it. Other boot loaders have to
know the block location of the image.
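As an illustration of why no installer needs to be re-run after changes, here is a minimal sketch of a GRUB (legacy) boot entry of the kind kept in /boot/grub/grub.conf or menu.lst; the kernel version and partition names are assumptions for the example only:
# one boot entry; (hd0,0) is the first partition of the first disk in GRUB notation
default=0
timeout=10
title Linux
root (hd0,0)
kernel /boot/vmlinuz-2.4.20 ro root=/dev/hda1
initrd /boot/initrd-2.4.20.img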
The LILO
Red Hat and the other popular Linux distributions long used LILO by default to complete the Linux boot process. In most situations, LILO was copied to the MBR. In other situations, LILO was installed in the first sector of the Linux boot partition. In this scenario, LILO is known as the secondary boot loader and must be initiated by another boot loader.
LILO is very versatile and allows you to boot multiple kernel images as well as the boot sector of any
other bootable partition on the system. This bootable partition might point to a Windows 95 or 98
partition, a Windows NT partition, or any of a number of other operating systems, allowing you to boot
any one of them. You must make LILO aware of the images and any other operating systems that it is
expected to boot. To do that, you'll add information about each kernel image or operating system into the
/etc/lilo.conf file, including a label by which to refer to each image. Then you'll run the LILO program.
LILO loads the selected kernel image and the corresponding initrd image, if there is one, into memory
and relinquishes control to that kernel. LILO is configured using the /etc/lilo.conf file. The basics of
LILO are very simple, but its power lies in the many options that can be passed if needed.
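A minimal sketch of an /etc/lilo.conf is shown below; the kernel version and device names are assumptions, and after any edit the /sbin/lilo command must be re-run to rewrite the boot sector:
# global section: install in the MBR of the first IDE disk, wait 5 seconds at the prompt
boot=/dev/hda
prompt
timeout=50
default=linux
# one Linux image and one other bootable partition (e.g. Windows) to chain-load
image=/boot/vmlinuz-2.4.20
    label=linux
    root=/dev/hda1
    read-only
other=/dev/hda2
    label=windows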
The LOADLIN

LOADLIN (Load Linux) is a DOS executable that can initiate a Linux system boot. This program comes
with most Linux distributions. Red Hat places it in the dosutils directory of the first installation
CD-ROM. Copy the LOADLIN.EXE file to a DOS partition or DOS boot floppy. (You might want to
create a C:\LOADLIN directory.) You'll also need to copy a Linux kernel image file, probably located in
/boot on your Linux system, to the DOS partition or floppy. From this point, you can boot Linux (assuming the root file system is located on the first partition of the second IDE drive) as follows:
C> LOADLIN C:\vmlinuz root=/dev/hdb1 ro
To boot using a RAM disk image, use this form of the command:
C> LOADLIN C:\vmlinuz root=/dev/ram rw initrd=C:\rootdsk.gz
To boot from a root floppy in drive A, use this command:
C> LOADLIN C:\image root=/dev/fd0 rw ramdisk=1440
LOADLIN is sometimes used if your Linux system won't boot because of a LILO configuration problem
and you need to get back into the system to fix the LILO boot information; it's also useful if you are
forced to restore from a backup and don't have a running system from which to start the restore. This can
also be done with a Linux boot floppy as already described before, so it really comes down to personal
preference.
LOADLIN can be particularly handy if you have a piece of hardware that requires initialization in DOS
before it can be used in Linux. For example, some sound cards must be initialized into a special
SoundBlaster compatibility mode before they can be used in Linux, and the programs to do this only run
under DOS. You can create a DOS partition that runs the sound card initialization program from
CONFIG.SYS or AUTOEXEC.BAT and then launches LOADLIN. The result is a Linux boot with the
hardware in a condition that Linux can accept.
Although LOADLIN works from the DOS prompt in Windows and from the DOS compatibility mode of
Windows 95 and 98, that mode has been effectively removed from Windows Me/NT/2000/XP.
Therefore, LOADLIN does not work from a full Windows Me/NT/2000/XP boot without special
handling, although it does work from a Windows Me emergency floppy or Windows 95/98 boot floppy.
Shutdown Linux
Red Hat uses the BSD-style shutdown command. This command's syntax is:
shutdown [options] time [message]
When the system has been signalled to shut down, all logged-in users are notified that the system is going down using either the standard message or the one specified as a parameter to the shutdown
command. The shutdown command then signals the init process to change the run level to 0 (halt) if the -h option was provided or 6 (reboot) if the -r option was used instead.
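For example, the following invocations are typical (the warning message is of course arbitrary):
shutdown -h now                               # halt the system immediately
shutdown -r +5 "Rebooting for maintenance"    # reboot in five minutes, warning logged-in users
shutdown -c                                   # cancel a pending shutdown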
In general, starting a Linux system involves the interaction of a wide range of software, from the
computer's BIOS to the Linux startup scripts. Critical components that you can configure include the
installation of a boot loader (LILO, GRUB, and LOADLIN are common for Linux) and the specification
of what services should be run. Most Linux systems use scripts in the /etc/rc.d directory tree to control
these services, and these configurations can be edited manually or by using tools such as ntsysv or tksysv.
Shutting a system down is also a process that involves running specialized system scripts. At all steps
along the way, information is logged to assorted files, which you can consult when troubleshooting your
system.
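As a brief sketch on a Red Hat style system, the text-mode tools mentioned above can be complemented from the command line with chkconfig (the service name httpd is only an example):
chkconfig --list               # list services and the run levels in which they start
chkconfig httpd on             # arrange for the httpd service to start at boot
ls /etc/rc.d/rc3.d/            # the symbolic links that start and stop services in run level 3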
2.1.2. Booting and shutting down Windows
Booting and shutting down Windows is a trivial matter. To boot the system, it is simply a matter of
switching on the power. To shut it down, one chooses shutdown from the Start Menu. There is no direct
equivalent of single-user mode for Windows, though 'secure mode' or 'Safe Mode' is sometimes
invoked, in which only the essential device drivers are loaded, if some problem is suspected. To switch
off network access on a Windows server so that disk maintenance can be performed, one must normally
perform a reboot and connect new hardware while the host is down. File system checks are performed
automatically if errors are detected. The plug and play (PnP) style automation of Windows removes the
need for manual work on file systems, but it also limits flexibility.
The Windows boot procedure on a PC begins with the BIOS, or PC hardware. This performs a memory check and looks for a bootable disk. A bootable disk is one which contains a master boot record (MBR). Normally the BIOS is configured to check the floppy drive A: first and then the hard disk C: for a boot block. The boot block is located in the first sector of the bootable drive. It identifies which partition is to be used to continue with the boot procedure. On each primary partition of a bootable disk, there is a boot program which 'knows' how to load the operating system it finds there. Windows has a menu-driven boot manager program which makes it possible for several OSs to coexist on different partitions.
Once the disk partition containing Windows has been located, the program NTLDR is called to load the
kernel. The file BOOT.INI configures the defaults for the boot manager. After the initial boot, a program
is run which attempts to automatically detect new hardware and verify old hardware. Finally the kernel is
loaded and Windows starts properly.
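For illustration, a typical BOOT.INI looks something like the following; the ARC path (multi/disk/rdisk/partition) and the description string depend on the machine, so treat this purely as a sketch:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows" /fastdetect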

2.2. Installation and configuration of Software


2.2.1. Installing Operating System
The installation process is one of the most destructive things we can do to a computer. Everything on the
disk will disappear during the installation process. One should therefore have a plan for restoring the
information if it should turn out that reinstallation was in error.
Today, installing a new machine is a simple affair. The operating system comes on some removable
medium (like a CD or DVD) that is inserted into the player and booted. One then answers a few questions
and the installation is done. Operating systems are now large so they are split up into packages. One is
expected to choose whether to install everything that is available or just certain packages. Most operating
systems provide a package installation program which helps this process. In order to answer the questions
about installing a new host, information must be collected and some choices made:
 We must decide a name for each machine.
 We need an unused Internet address for each.
 We must decide how much virtual memory (swap) space to allocate.
 We need to know the local netmask and domain name.
 We need to know the local timezone.
We might need to know whether a Network Information Service (NIS) or Windows domain controller is
used on the local network; if so, how to attach the new host to this service. When we have this information,
we are ready to begin.
 Installing Linux Operating System
Installing Linux is simply a case of inserting a CD-ROM and booting from it, then following the
instructions. However, Linux is not one, but a family of operating systems. There are many distributions,
maintained by different organizations and they are installed in different ways. Usually one balances ease of
installation with flexibility of choice.
What makes Linux installation unique amongst operating system installations is the sheer size of the program base. Since every piece of free software is bundled, there are literally hundreds of packages to choose from. This presents Linux distributors with a dilemma. To make installation as simple as possible, package maintainers make software self-installing with some kind of default configuration. This applies to user programs and to operating system services. Here lies the problem: installing network services which we don't intend to use presents a security risk to a host. A service which is installed is a way into the system. A service which we are not even aware of could be a huge risk. If we install everything, then, we
are faced with uncertainty in knowing what the operating system actually consists of, i.e. what we are
getting ourselves into. As with most operating systems, Linux installations assume that you are setting up a
stand-alone PC which is yours to own and do with as you please. Although Linux is a multiuser system, it
is treated as a single-user system. Little thought is given to the effect of installing services like news servers
and web servers. The scripts which are bundled for adding user accounts also treat the host as a little
microcosm, placing users in /home and software in /usr/local. To make a network workstation out of Linux,
we need to override many of its idiosyncrasies.
 Installing Windows Operating System
The installation of Windows is similar to both of the above. One inserts a CD-ROM and boots. Here it is
preferable to begin with an already partitioned hard-drive (the installation program is somewhat ambiguous
with regard to partitions). On rebooting, we are asked whether we wish to install Windows a new, or repair
an existing installation. This is rather like the Linux rescue disk. Next we choose the file system type for
Windows to be installed on, either DOS or NTFS. There is clearly only one choice: installing on a DOS
partition would be irresponsible with regard to security. Choose NTFS. Windows reboots several times
during the installation procedure, though this has improved somewhat in recent versions. The first time
around, it converts its default DOS partition into NTFS and reboots again. Then the remainder of the
installation proceeds with a graphical user interface. There are several installation models for Windows
workstations, including regular, laptop, minimum and custom. Having chosen one of these, one is asked to
enter a license key for the operating system. The installation procedure asks us whether we wish to use
DHCP to configure the host with an IP address dynamically, or whether a static IP address will be set. After
various other questions, the host reboots and we can log in as Administrator.
Windows service packs are patch releases which contain important upgrades. These are refreshingly trivial
to install on an already-running Windows system.
One simply inserts them into the CD-ROM drive and up pops the Explorer program with instructions and
descriptions of contents. Clicking on the install link starts the upgrade. After a service pack upgrade,
Windows reboots predictably and then we are done. Changes in configuration require one to reinstall
service packs, however.

 Dual boot
There are many advantages to having both Windows and Linux (plus any other operating systems you might like) on the same PC. Since Windows 9x is largely history, and NT changes names
(NT, 2000, XP) faster than a speeding bullet, this is now easily achieved with the installation procedures provided by these two operating systems. It means, however, that we need to be able to choose the operating system from a menu at boot time. The boot-manager GRUB that is now part of Linux distributions performs this task
very well, so one scarcely needs to think about this issue anymore. Note, however, that it is highly
advisable to install Windows before installing Linux, since the latter tends to have more respect for the
former than vice versa! Linux can preserve an existing Windows partition, and even repartition the disk
appropriately.
2.2.2. Third Party Software installation
Most standard operating system installations will not leave us in possession of an immediately usable
system. We also need to install third party software in order to get useful work out of the host. Software
installation is a similar problem to that of operating system installation. However, third party software
originates from a different source than the operating system; it is often bound by license agreements and it
needs to be distributed around the network. Some software has to be compiled from source.
There are therefore two kinds of software installation: the installation of software from binaries and the
installation of software from source. Commercial software is usually installed from a CD by running an
installation program and following the instructions carefully; the only decision we need to make is where
we want to install the software. Free software and open source software usually come in source form and
must therefore be compiled. UNIX programmers have gone to great lengths to make this process as simple
as possible for system administrators.
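The usual ritual for building free software from source is short. The package name below is purely illustrative, and the --prefix choice is a local policy decision:
tar xzf package-1.0.tar.gz          # unpack the source archive
cd package-1.0
./configure --prefix=/usr/local     # probe the system and generate Makefiles
make                                # compile
make install                        # install the results (usually run as root)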
2.3. Installation and configuration of devices and drivers
Devices are the hardware part of the computer system. Drivers are the part of system software which tells the device what to do, when to do it, and how to do it. Devices are connected to the host computer and shared with the network users/nodes. Devices attached directly to the host computer are termed local devices. Installation and configuration of a device involves the following steps:
 Connect the device directly to host
 Install the device drivers.
 Configure the device.
 Share the devices for network users/nodes.
Device drivers are categorized into two main groups. These are:

 Logical Device Driver (LDD): these device drivers are developed by the operating system vendor; printer drivers, NIC drivers, and audio or VGA drivers, for example, are built by the operating system vendor.
 Physical Device Driver (PDD): these device drivers are developed and implemented by the device vendor.
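On a Linux host, a rough sketch of inspecting devices and their drivers looks like this (the module name e1000e is just an example of a NIC driver, not a requirement):
lspci                     # list hardware devices found on the PCI bus
lsmod                     # list kernel driver modules that are currently loaded
modprobe e1000e           # load a driver module and its dependencies
modinfo e1000e            # show version and parameter information for that driver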

2.4. Super user/Administrator Privileges


The super user refers to a privileged account with unrestricted access to all files and commands. The
username of this account is root. Many administrative tasks and their associated commands require super
user status. There are two ways to become the super user in Linux. The first is to log in as root directly. The
second way is to execute the command su while logged in to another user account.
The su command may be used to change one's current account to that of a different user after entering the proper password. It takes the username corresponding to the desired account as its argument; root is the default when no argument is provided. After you enter the su command (without arguments), the system prompts you for the root password. If you type the password correctly, you'll get the normal root account prompt (by default, a number sign: #), indicating that you have successfully become super user and that the rules normally restricting file access and command execution do not apply. Administrator accounts provide:
 Complete access to files.
 Access to directories.
 Deny/allow services for users.
 Access to other facilities.
 Deletion or disabling of other users on the network.
The super user is:

 The user who has all rights or permissions in a single or multi-user environment.
 The user who has administrative privileges.
 The user who can make system-wide changes.
 The user who is able to add, delete and modify network users.
This is a network account with privilege levels far beyond those of most accounts.
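A short illustrative session shows the su mechanism described above: an ordinary user becomes the super user, verifies the new identity, and returns to the original account ($ and # are the conventional user and root prompts):
$ su
Password:
# id
uid=0(root) gid=0(root) groups=0(root)
# exit
$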
2.5. User Management
Without users, there would be few challenges in system administration. Users are both the reason that
computers exist and their greatest threat. The role of the computer, as a tool, has changed extensively

throughout history. From John von Neumann's vision of the computer as a device for predicting the weather, to a calculator for atomic weapons, to a desktop typewriter, to a means of global communication, computers have changed the world and have reinvented themselves in the process. System administrators must accommodate all of these needs, and ensure the stability and security of the system.
User management is about interfacing humans to computers. This brings to light a number of issues:
 Accounting: registering new users and deleting old ones.
 Comfort and convenience.
 Support services.
 Ethical issues.
 Trust management and security.
Some of these (account registration) are technological, while others (support services) are human issues.
Comfort and convenience lies somewhere in between. User management is important because the system
exists to be used by human beings, and they are both friend and enemy.
2.5.1. User registration
One of the first issues on a new host is to issue accounts for users. Surprisingly this is an area where
operating system designers provide virtually no help. The tools provided by operating systems for this task
are, at best, primitive and are rarely suitable for the task without considerable modification. For small
organizations, user registration is a relatively simple matter. Users can be registered at a centralized
location by the system manager, and made available to all of the hosts in the network by some sharing
mechanism, such as a login server, distributed authentication service or by direct copying of the data.
For larger organizations, with many departments, user registration is much more complicated. The need for
centralization is often in conflict with the need for delegation of responsibility. It is convenient for
autonomous departments to be able to register their own users, but it is also important for all users to be
registered under the umbrella of the organization, to ensure unique identities for the users and flexibility of
access to different parts of the organization. What is needed is a solution which allows local system
managers to be able to register new users in a global user database.
PC server systems like NT and Netware have an apparent advantage in this respect. By forcing a particular
administration model onto the hosts in a network, they can provide straightforward delegation of user
registration to anyone with domain credentials. Registration of single users under NT can be performed
remotely from a workstation, using the command:
net user username password /ADD /DOMAIN

While most Unix-like systems do not provide such a ready-made tool, many solutions have
been created by third parties. The price one pays for such convenience is an implicit trust relationship
between the hosts. Assigning new user accounts is a security issue, thus to grant the right of a remote user
to add new accounts requires us to trust the user with access to that facility.
It is rather sad that no acceptable, standardized user registration methods have been widely adopted. This
must be regarded as one of the unsolved problems of system administration. Part of the problem is that the
requirements of each organization are rather different. Many Unix-like systems provide shell scripts or user
interfaces for installing new users, but most of these scripts are useless, because they follow a model of
system layout which is inadequate for a network environment, or for an organization's special needs.
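As a minimal sketch of local registration on a Linux host (the username and comment field are illustrative, and a real organization would normally wrap this in a script that also updates its central user database):
useradd -m -s /bin/bash -c "New User" user1   # create the account and its home directory
passwd user1                                  # set an initial password interactively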
2.5.2. Local and network accounts
Most organizations need a system for centralizing passwords, so that each user will have the same
password on each host on the network. In fixed model computing environments such as NT or Novell
Netware, where a login or domain server is used, this is a simple matter.
Both UNIX and NT support the creation of accounts locally on a single host, or ‗globally‘ within a network
domain. With a local account, a user has permission to use only the local host. With a network account, the
user can use any host which belongs to a network domain. Local accounts are configured on the local host
itself. UNIX registers local users by adding them to the files /etc/passwd and /etc/shadow. In NT the
Security Accounts Manager (SAM) is used to add local accounts to a given workstation.
For network accounts, Unix-like systems have widely adopted Sun Microsystems' Network Information Service (NIS), formerly called Yellow Pages or simply YP, though this is likely to be superseded and replaced by the more widely accepted standard LDAP in the next few years. The NIS-plus service was later
introduced to address a number of weaknesses in NIS, but this has not been widely adopted. NIS is
reasonably effective at sharing passwords, but it has security implications: encrypted passwords are
distributed in the old password format, clearly visible, making a mockery of shadow password files. NIS
users have to be registered locally as users on the master NIS server; there is no provision for remote
registration, or for delegation of responsibility.
NT uses its model of domain servers, rather like a NIS, but including a registration mechanism. A user in
the SAM of a primary domain controller is registered within that domain and has account on any host
which subscribes to that domain.
An NT domain server involves not only shared databases but also shared administrative policies and shared
security models. A host can subscribe to one or more domains and one domain can be associated with one

another by a trust relationship. When one NT domain 'trusts' another, then accounts and groups defined in
the trusted domain can be used in the trusting domain. NIS is indiscriminating in this respect. It is purely an
authentication mechanism, implying no side-effects by the login procedure.
2.5.3. Groups of users
Both UNIX and NT allow users to belong to multiple groups. A group is an association of usernames which
can be referred to collectively by a single name. File and process permissions can be granted to a group of
users. Groups are defined statically by the system administrator.
On Unix-like systems they are defined in the /etc/group file, like this:
users::100:user1,mark,user2,user3
The name of the group, in this case, is users, with group-id 100 and members user1, mark, user2 and user3. The second, empty field provides space for a password, but this facility is seldom used. A number of
default groups are defined by the system, for instance
root::0:root
other::1:
bin::2:root,bin,daemon
The names and numbers of system groups vary with different flavors of UNIX. The root group has super
user privileges. UNIX groups can be created for users or for software which runs under a special user-id. In
addition to the names listed in the group file, a group also accrues users from the default group membership
in field four of /etc/passwd. Thus, if the group file had the groups:
users::100:
msql::36:
ftp::99:
www::500:www
www-data::501:www,toreo,mark,geirs,sigmunds,mysql,ulfu,magnem
privwww::502:
and every user in /etc/passwd had the default group 100, then the users group would still contain every registered user on the
system. By way of contrast, the group ftp contains no members at all, and is to be used only by a process
which the system assigns that group identity, whereas www-data contains a specific named list and no
others as long as all users have the default group 100.
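Most Linux systems also provide commands for this, so that the group file need not be edited by hand; a brief sketch follows (the group and user names are illustrative):
groupadd www-data                # create a new group (a group-id is allocated automatically)
usermod -a -G www-data mark      # append an existing user to that supplementary group
id mark                          # verify the user's group memberships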
NT also allows the creation of groups. Groups are created by command, rather than by file editing, using:
net group groupname /ADD
Users may then be added with the syntax,

net group groupname username1 username2... /ADD


They can also be edited with the GUI on a local host. NT distinguishes global groups (consisting only of
domain registered users) from local groups, which may also contain locally registered users. Some standard
groups are defined by the system, e.g.
 Administrators
 Users
 Guest
The Administrators group has privileged access to the system.
2.5.4. Account policy
Most organizations need a strict policy for assigning accounts and opening the system for users. Users are
the foremost danger to a computing system, so the responsibility of owning an account should not be dealt
out lightly. There are many ways in which accounts can be abused. Users can misuse accounts for
villainous purposes and they can abuse the terms on which the account was issued, wasting resources on
personal endeavors. For example, in education, students have been known to undergo semester registration
simply to have an account, giving them essentially free access to the Internet and a place to host their web
sites.
Policy rules are required for guiding user behavior, and also for making system rules clear. Experience
indicates that simple rules are always preferable, though this is so far unsubstantiated by any specific
studies. A complex and highly specific rule that is understood only by its author, may seem smart, but most
users will immediately write it off as being nonsense. Such a rule is ill advised because it is opaque. The
reason for the rule is not clear to all parties, and thus it is unlikely to be respected. Simple rules make
system behavior easy to understand. Users tolerate rules if they understand them and this tends to increase
their behavioral predictability.
What should an account policy contain?
1. Rules about what users are allowed/not allowed to do.
2. Specifications of what mandatory enforcement users can expect, e.g. tidying of garbage files.
Any account policy should contain a clause about weak passwords. If weak passwords are discovered, it
must be understood by users that their account can be closed immediately. Users need to understand that
this is a necessary security initiative.
2.5.5. Controlling user resources
Every system has a mixture of passive and active users.

Passive users utilize the system often minimally, quietly accepting the choices which have been made for
them. They seldom place great demands on the system.
They do not follow the development of the system with any zeal and they are often not even aware of what
files they have. They seldom make demands other than when things go wrong. Passive users can be a
security risk, because they are not aware of their actions.
Active users, on the other hand, follow every detail of system development. They frequently find every
error in the system and contact system administrators frequently, demanding upgrades of their favorite
programs. Active users can be of great help to a system administrator, because they test out problems and
report them actively. They are an important part of the system administration team, or community, and can
also go a long way to helping the passive users. An important point about active users, however, is that they
are not authorized staff. Active users need to understand that, while their skills are appreciated, they do not
decide system policy: they must obey it. Even in a democracy, rules are determined by process and then
obeyed by everyone.
Skilled administrators need to be social engineers, placating active user wishes and keeping them in the
loop, without bowing to their every whim. Individual skill amongst users does not necessarily carry with it
responsibility to the whole system. System administrators have a responsibility to find a balance which
addresses users' needs but which keeps the system stable and functional. If we upgrade software too often,
users will be annoyed. New versions of software function differently and this can hinder people in their
work. If we do not upgrade often enough, we can also hinder work by restricting possibilities.
2.5.6. Resource consumption
Disks fill up at an alarming rate. Users almost never throw away files unless they have to. If one is lucky
enough to have only very experienced and extremely friendly users on the system, then one can try asking
them nicely to tidy up their files. Most administrators do not have this luxury however. Most users never
think about the trouble they might cause others by keeping lots of junk around. After all, multi-user
systems and network servers are designed to give every user the impression that they have their own
private machine. Of course, some users are problematical by nature.
No matter what we do to fight the fire, users still keep feeding the flames. To keep hosts working it is
necessary to remove files, not just add them. Quotas limit the amount of disk space users can have access
to, but this does not solve the real problem. The real problem is that in the course of using a computer many
files are created as temporary data but are never deleted afterwards. The solution is to delete them.
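A hedged sketch of the kind of tidying that can be automated follows; the paths and age thresholds are local policy choices, not recommendations:
find /tmp -type f -atime +7 -exec rm -f {} \;      # delete temporary files not accessed for a week
find /home -name core -mtime +1 -exec rm -f {} \;  # remove old core dumps from home directories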

2.5.7. Quotas and limits in general


In a shared environment, all users share the same machine resources. If one user is selfish that affects all of
the other users. Given the opportunity, users will consume all of the disk space and all of the memory and
CPU cycles somehow, whether through greed or simply through inexperience. Thus it is in the interests of
the user community to limit the ability of users to spoil things for other users. One way of protecting
operating systems from users and from faulty software is to place quotas on the amount of system
resources which they are allowed.
 Disk quotas: Place fixed limits on the amount of disk space which can be used per user. The advantage of
this is that the user cannot use more storage than this limit; the disadvantage is that many software systems
need to generate/cache large temporary files (e.g. compilers, or web browsers) and a fixed limit means that
these systems will fail to work as a user approaches his/her quota.
 CPU time limits: Some faulty software packages leave processes running which consume valuable CPU
cycles to no purpose. Users of multiuser computer systems occasionally steal CPU time by running huge
programs which make the system unusable for others. Shell resource limits (for example, the C shell's
built-in limit command) can be configured globally to cap CPU time and help prevent such accidents; a
command-level sketch follows this list.
 Policy decisions: Users collect garbage. To limit the amount of it, one can specify a system policy which
includes items of the form: 'Users may not have mp3, wav, mpeg etc. files on the system for more than one
day'.
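As a rough, hypothetical illustration, the shell commands below show how such limits are commonly applied on a Linux host; the user name, quota tools and file paths are assumptions and the details vary between systems.

edquota -u alice          # edit the disk quota for user 'alice' (run as root)
repquota /home            # report current quota usage on the /home filesystem
ulimit -t 3600            # cap CPU time for this shell and its children at one hour
ulimit -c 0               # disallow core dumps
# Persistent per-user limits are often kept in /etc/security/limits.conf, e.g.:
#   alice  hard  cpu    60        (CPU minutes)
#   alice  hard  fsize  1000000   (maximum file size, in KB)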
Quotas have an unpleasant effect on system morale, since they restrict personal freedom. They should
probably only be used as a last resort. There are other ways of controlling the build-up of garbage.
2.5.8. Moving users
When disk partitions become full, it is necessary to move users from old partitions to new ones. Moving
users is a straightforward operation, but it should be done with some caution. A user who is being moved
should not be logged in while the move is taking place, or files could be copied incorrectly. We begin by
looking for an appropriate user, perhaps one who has used a particularly large amount of disk space.
Users need to be informed about the move: we have to remember that they might hard-code the names of
their home directories in scripts and programs, e.g. CGI scripts. Also, the user's account must be closed,
for instance by altering their login shell, before the files are moved.
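A minimal sketch of such a move on a Linux system might look like the following; the user name and partition paths are hypothetical, and the exact steps depend on the local setup.

usermod -s /usr/sbin/nologin alice       # temporarily disable logins (path varies by distribution)
rsync -aH /home1/alice/ /home2/alice/    # copy files, preserving ownership, permissions and hard links
usermod -d /home2/alice alice            # update the home directory in the password database
# After verifying the copy, remove the old directory and restore the login shell:
rm -rf /home1/alice
usermod -s /bin/bash alice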
2.5.9. Deleting old users
Users who leave an organization eventually need to be deleted from the system. For the sake of certainty, it
is often advisable to keep old accounts for a time in case the user actually returns, or wishes to transfer data
to a new location. Whether or not this is acceptable must be a question of policy. Clearly it would be
unacceptable for company secrets to be transferred to a new location. Before deleting a user completely, a
backup of the data can be made for safe-keeping. Then we have to remove the following:
 Account entry from the password database.
 Personal files.
 E-mail and voice mail and mailing lists.
 Removal from groups and lists (e.g. mailing lists).
 Removal of cron and batch tasks.
 Revocation of smartcards and electronic ID codes.
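On a Linux host, much of this checklist can be carried out with the standard account-management commands; the user name and backup path below are placeholders.

tar czf /backup/bob-home.tar.gz /home/bob   # keep a compressed copy of the home directory first
crontab -u bob -r                           # remove the user's cron jobs
userdel -r bob                              # delete the account, home directory and mail spool
# userdel also removes the user from groups in /etc/group; mailing lists, voice mail,
# smartcards and electronic IDs must be revoked through their own systems.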

2.6. Process Management and Monitoring


The file system by itself is static. The activity on an operating system is represented as processes, which
open and manipulate files within the file system. Each program that is run by a computer's CPU is called a
process. A command may include many processes. There are several processes running at any one time on
an operating system. As a system administrator, you will be directly responsible for ensuring that the
necessary system processes are run and for maintaining the schedule by which they are run. You are further
responsible for managing the system so that the higher priority processes get the CPU's attention first. Last,
you must ensure that the memory space is adequate to house all the processes that must be run. To do these
things, you need to know the fundamental details about what a process is and how it works.
Even programs that don't crash outright occasionally misbehave in other ways. For instance, a program
might stop responding, or it may consume an inordinate amount of CPU time. In these cases, it's important
that you know how to exercise superuser control over these programs so that you can rein in their appetites
or terminate them outright. The first step to doing this, though, is knowing how to find out what programs
are running on the computer.
Before proceeding, though, it's important that you understand a bit of terminology. In Linux, a process is
more or less synonymous with a running program. Because Linux is a multiuser, multitasking OS, it's
possible for one program to be running as more than one process at a time: if, for example, two users launch
the same program, the computer will have two processes running at once, and indeed a single user can do
this as well. It's also possible for a single program to create (or fork) subprocesses. When one process forks
another, the original process is known as the parent process, and the forked process is known as the child
process. This parent/child relationship produces a tree-like hierarchy that ultimately leads back to init, the
first process. init forks the login processes, which in turn fork bash processes, which fork additional
processes. (It's actually
slightly more complex than this; init doesn't directly fork login but instead does this by using another
process, such as getty.) This can continue for an arbitrary number of layers, although many programs
aren't able to fork others.
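To see this hierarchy for yourself on a Linux host, two commonly available commands are shown below; their output differs from system to system.

$ pstree -p                 # display the process tree, with PIDs, rooted at the first process
$ ps -eo pid,ppid,comm      # list every process together with its parent's PID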
The Concept of Multitasking
The most precious resource in the system is the CPU. There is usually only one CPU, so its usage must be
optimized. Nowadays most operating systems are multiprocessing operating systems, which means
that one of their goals is to dish out CPU time among the various processes. If there are more processes than
CPUs, the remaining processes must wait for a CPU to become available. Multiprocessing is
conceptually simple; processes execute until some situation forces them to stop. Typically this is because a
system resource that is needed is unavailable. In a multiprocessing system, several processes are kept in
memory at the same time so that when one process is forced to stop, the CPU can be used by another
process instead of sitting idle. The CPU is shared among multiple processes (tasks); hence the name
multitasking.
Types of Processes
There are three basic types of processes: interactive processes, batch processes, and daemon processes. The
kernel manages these processes and assigns each a unique process ID (PID) in ascending order until the
highest defined process number is reached; at this point it starts again at the first unused number. The only
process ever assigned a PID of 1 is the init process, which has a parent PID of 0 to distinguish it from
the other processes. We'll talk more about this important process in a moment.
Processes are owned in much the same way that files are owned; the user ID of the user who starts the
process is assigned as the real user ID of the process. Likewise, the group ID of the user is assigned as the
real group ID of the process. But it is not the real user ID and group ID that determine the process's access
rights and who can start and stop it. Instead, entities called the effective UID and GID are used. If the
executable has its setuid (or setgid) permission bit set, then the process will be run as if the owner (or
group) of that file had executed it, granting the corresponding access rights.
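The classic example is the passwd program, which on most Linux systems is setuid root so that ordinary users can update the password database; the listing in the comment is abbreviated and may look different on your system.

$ ls -l /usr/bin/passwd
# The 's' in place of the owner's execute bit marks the file as setuid:
# -rwsr-xr-x 1 root root ... /usr/bin/passwd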
Interactive Processes
Some processes are run interactively by users in a terminal session. If you log into a Linux system and run
a mail reader, the mail reader process is an interactive process. While you are interacting with the mail
reader, it is running in the foreground, actively attached to the terminal process. A process running in the
foreground has full control of the terminal until that process is terminated or interrupted. When a foreground
process is suspended, terminal control returns to the parent of the suspended process, and the suspended
process may be able to continue its work in the background—if the continuation does not require
interaction with the terminal. Interactive process control, or job control as it is commonly called, allows a
process to be moved between the foreground and the background, restarted in the background if applicable,
or restarted in the foreground.
Let's assume that you are running a backup process in the foreground using the tar command, tar zcf
/dev/st0 /. You have observed the output and determined that the backup is properly running. You might
want to do something else on that terminal, necessitating that you move the tar process to the background.
Type Ctrl+Z. You will see a message on your terminal to indicate that the process has been stopped:
[1]+ Stopped tar zcf /dev/st0 /
The process will be stopped, waiting in the background. But what if you aren't sure that the process is
actually running? Use the jobs utility to list the background processes.
$ jobs
[1]+ Stopped tar zcf /dev/st0 /
As you see, the stopped process is just out there waiting for you. To allow the backup process to continue
in the background, type bg at the command line. Running the jobs program again will yield:
[1]+ Running tar zcf /dev/st0 /
At this point, you may let it run on in the background until it terminates when the backup has completed, or
you may use the fg command to bring the process to the foreground again. If you want to start a process in
the background, simply append an ampersand to the command. The tar command would then look like this:
$ tar zcf /dev/st0 / &
You will then be given a job number and process ID, which look like this:
[2] 4036
You can then bring the process to the foreground using the fg command as we did before or let it run its
course in the background.
Batch Processes
Batch processes are submitted from a queue and have no association with a terminal at all. Batch processes
are great for performing recurring processes at some time when the system usage is low. To accomplish this
on a Linux system, use the batch and at commands. The at command is really pretty simple to use. Let's
assume that you want to run the tar command that we just
used in the interactive process example at 2:00 in the morning tomorrow. You would use the at command as
shown below:
# at 2:00
at> tar zcf /dev/st0 /
After entering this command, you must press Ctrl+D. The system responds with a warning that the
command will be run using /bin/sh, since some shell scripts are written only for a specific shell, and then
gives a recap of what shell will be used to run the process:
Warning: commands will be executed using (in order) a) $SHELL b) login shell c) /bin/sh
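The at command schedules a one-off job; for recurring batch work the cron subsystem is normally used instead. A minimal, hypothetical crontab entry for the same nightly backup (installed with crontab -e) would look like this:

# minute hour day-of-month month day-of-week command
0 2 * * * tar zcf /dev/st0 /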
Daemon Processes
Daemons are server processes. These processes run continuously in the background until the service they
represent is called. For instance, httpd (the Apache Hypertext Transfer Protocol daemon) is a standalone
Web server service. It is typically started at boot time but spends most of its time monitoring the port
assigned to Apache, looking for work. When data comes in on port 80 (Apache's default port), httpd picks
up the traffic and starts processing it as appropriate. When the job is done, it returns to a listening mode.
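On a Linux system running systemd, you might verify that such a daemon is up and listening as shown below; the service name is an assumption and varies between distributions (httpd on Red Hat derivatives, apache2 on Debian derivatives).

$ systemctl status httpd    # check that the web server daemon is running
$ ss -ltn | grep ':80 '     # confirm that a listener is bound to TCP port 80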
Parent Processes and their Children
When a process needs to perform several functions in order to complete its task, it cannot exit until all of
the functions are finished. This means that if a system needs to do several database queries to complete a
task, it might not update the screen until all of the queries have been completed. In order to prevent such an
obvious lag, the parent process can initiate child processes to complete the subtasks in a more expedient
manner by performing as many of them as possible simultaneously. The creation of child processes is
called forking. Each child process is assigned a new PID, but also retains its parent's ID as the PPID. This
helps you track which process called which child process. The child process is a duplicate of the parent
process, inheriting nearly all of its parent's environment with the exception of its PID. The child then uses
the exec command to replace itself in memory with the binary code of a program that can perform the
requested function. When the replacement program finishes its task, it exits.
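You can observe this parent/child relationship directly from the shell: every command you type is forked as a child of your shell, which then execs the requested program. The commands below use standard bash variables; the PIDs printed will differ on every system.

$ echo $$                                             # PID of the current (parent) shell
$ bash -c 'echo "my PID is $$, my parent is $PPID"'   # the child reports its own PID and its parent's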
Managing Processes
Part of your responsibility as system administrator is to manage the system's processes to prevent a
runaway process (one that has started consuming system resources far beyond what it should) from causing
your system to crash. You need to understand how to obtain information about the processes that are
running. We have already used the Linux ps command in a couple of examples to illustrate earlier points.
Now we'll look at ps more formally. The ps (process status) command is the most often used method of
obtaining data about current processes. The ps command called without arguments will display all of the
processes running from that specific login including their PIDs, tty information, CPU time, and the
command that started the process.
You can see that the output from ps -e gives you the basic information about all the running processes. The
output includes the PID, the tty, the time, and the command itself. This is quite sufficient when you are
trying to obtain the PID of a process to kill it or restart it.
In Windows, Task Manager (started with the taskmgr command) shows which processes are running; it
plays roughly the same role as the ps command in Linux.
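For example, to locate and stop a runaway process on Linux (the process name and PID below are placeholders):

$ ps -e | grep myjob     # find the PID of the offending process
$ kill 4036              # ask it to terminate (sends SIGTERM)
$ kill -9 4036           # force termination if it ignores SIGTERM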
2.6.1. Killing old processes
Processes sometimes do not get terminated when they should. There are several reasons for this.
Sometimes users forget to log out, sometimes poorly written terminal software does not properly kill its
processes when a user logs out. Sometimes background programs simply crash or go into loops from which
they never return. One way to clean up processes in a work environment is to look for user processes which
have run for more than a day. (Note that the assumption here is that everyone is supposed to log out each
day and then log in again the next day – that is not always the case.)
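A hedged sketch of such a cleanup on a Linux host is shown below; the one-day threshold, the exclusion of root-owned processes and the decision to kill at all are matters of local policy rather than a universal rule.

$ ps -eo pid,etimes,user,comm     # list each process with its elapsed running time in seconds
# Review processes older than one day (86400 s) owned by ordinary users and,
# if they serve no purpose, terminate them:
$ ps -eo pid,etimes,user,comm | awk 'NR>1 && $2 > 86400 && $3 != "root" {print $1}' | xargs -r kill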

2.7. Maintaining Log Files


There are two more items to consider with respect to managing the many system log files: limiting the
amount of disk space they consume while simultaneously retaining sufficient data for projected future
requirements, and monitoring the contents of these log files in order to identify and act upon important
entries.
Unchecked, log files grow without bounds and can quickly consume quite a lot of disk space. A common
solution to this situation is to keep only a fraction of the historical data on disk. One approach involves
periodically renaming the current log file and keeping only a few recent versions on the system. This is
done by periodically deleting the oldest one, renaming the current one, and then recreating it.
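On most Linux systems this rotation is automated by logrotate, but the manual equivalent for a single (hypothetical) log file looks roughly like this:

rm -f /var/log/myapp.log.2                 # discard the oldest generation
mv /var/log/myapp.log.1 /var/log/myapp.log.2
mv /var/log/myapp.log /var/log/myapp.log.1
touch /var/log/myapp.log                   # recreate an empty current log
# If the daemon keeps the old file open, signal it to reopen its log, e.g.:
# kill -HUP $(cat /var/run/myapp.pid)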

2.8. File System Repair, Backup and Restoration


One of the most important system administration tasks is to reliably create and verify backups. Failure to
do so might go unnoticed for several weeks or even months; unglamorous tasks like backups tend to slip
through the cracks all too often. The first time a system on the backup list fails and there is no backup from
which to restore it, however, you can count the seconds before someone gets a very serious reprimand,
perhaps to the point of losing the job entirely.
This might seem excessive, but if the data is valuable enough to make the backup list, you can bet that
someone will miss it if it's gone. If you work for a software company or any company that stores the
working version of its "product" on the computers under your administrative control, backups are
especially critical. Hundreds or thousands of employee hours might be lost if the system failed without a
recent backup.
A system administrator is expected to prevent such loss and will probably not hold that title for long if
unable to do so. Think of it as health insurance for your computers. You wouldn't go without health
insurance, and neither should your computers.
Backup Strategies
Defining a backup strategy means deciding how much information you need to back up, and how often. At
one extreme are full backups which, as you might guess, back up everything. If you do a full backup every
night, you'll certainly be able to restore anything to the state it was in the previous night. But this is very
time consuming and requires significantly higher media consumption than other methods, since you will be
backing up everything every night. An alternative is the incremental backup, including only those files that
have changed (or are likely to have changed) since the last backup. Most administrators try to develop
backup strategies that combine these two methods, reducing the time expended backing up the system
without sacrificing the ability to restore most of what was lost.
In addition to formal backups of a computer or network, you may want to archive specific data. For
instance, you might want to store data from scientific experiments, the files associated with a project you've
just completed, or the home directory of a user. Such archives are typically done on an as-needed basis,
and may work best with different hardware than you use to back up an entire computer or network.
Combining Full and Incremental Backups
Including incremental backups in your strategy saves a great deal of time and effort. Much of the data on
your system (whether a network or a single computer) is static. If data hasn't changed since the last reliable
backup, any time spent backing it up is a waste. There are two ways to determine which files to include on
an incremental backup. The first is to use commands that look for files newer than the date of the last full
backup. The second method is to determine which data is most likely to be changed and to include this
data, whether actually changed or not, on the incremental backup.
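A hedged sketch of the first approach on Linux, using a timestamp file to mark the last full backup, is shown below; the paths, device and naming scheme are hypothetical.

# Full backup: archive everything under /home and record when it was taken
tar czf /backup/full-$(date +%F).tar.gz /home && touch /backup/last-full
# Incremental backup: archive only files changed since the last full backup
find /home -type f -newer /backup/last-full -print0 | tar czf /backup/incr-$(date +%F).tar.gz --null -T -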
Most backup strategies combine full backups with incremental backups (often referred to as daily backups),
to cover the more dynamic data. Typically each night, when the system's workload is at its lowest, a backup
of one of these forms is performed.
Including Differential Backups


The term differential backup is sometimes used to refer to a backup consisting of all files that have changed
since the immediately previous backup at any level; that previous backup might itself have been a full
backup, an incremental backup, or another differential backup. This differs from an incremental backup,
which here means everything that has changed since the last full backup. Be aware that vendors and texts
do not use these two terms consistently, so check which convention a given backup tool follows.

2.9. Kernel Customization


The kernel is the heart of the operating system. It is the core program, always running while the operating
system is up, providing and overseeing the system environment. The kernel is responsible for all aspects of
system functioning, including:
 Process creation, termination and scheduling
 Virtual memory management (including paging)
 Device I/O (via interfaces with device drivers: modules that perform the actual low-level
communication with physical devices such as disk controllers, serial ports, and network
adapters).
 Interprocess communication (both local and network).
 Enforcing access control and other security mechanisms
The operating system kernel is that most important part of the system which drives the hardware of the
machine and shares it between multiple processes. If the kernel does not work well, the system as a whole
will not work well. The main reason for making changes to the kernel is to fix bugs and to upgrade system
software, such as support for new hardware; performance gains can also be achieved however, if one is
patient.
Kernel configuration varies widely between operating systems. Some systems require kernel modification
for every miniscule change, while others live quite happily with the same kernel unless major changes are
made to the hardware of the host.
Many operating system kernels are monolithic, statically compiled programs which are specially built for
each host, but static programs are inflexible and the current trend is to replace them with software
configurable systems which can be manipulated without the need to recompile the kernel.
Windows has also taken a modular view to kernel design. Configuration of the Windows kernel also does
not require a recompilation, only the choice of a number of parameters, accessed through the system editor
in the Performance Monitor, followed by a reboot.
Linux switched from a static, monolithic kernel to a modular design quite quickly. The Linux kernel strikes
a balance between static compilation and modular loading. This balances the convenience of modules with
the increased speed of having statically compiled code forever in memory. Typically, heavily used kernel
modules are compiled in statically, while infrequently used modules are accessed on demand.
The standard procedure for installing a new kernel breaks a basic principle: don't mess with the operating
system distribution, as this will just be overwritten by later upgrades. It also potentially breaks the principle
of reproducibility: the choices and parameters which we choose for one host do not necessarily apply for
others. It seems as though kernel configuration is doomed to lead us down the slippery path of making
irreproducible, manual changes to every host.
We should always bear in mind that what we do for one host must usually be repeated for many others. If it
were necessary to recompile and configure a new kernel on every host individually, it would simply never
happen. It would be a project for eternity.
The situation with a kernel is not as bad as it seems, however. Although, in the case of GNU/Linux, we
collect kernel upgrades from the net as though it were third party software, it is rightfully a part of the
operating system. The kernel is maintained by the same source as the kernel in the distribution, i.e. we are
not in danger of losing anything more serious than a configuration file if we upgrade later. However,
reproducibility across hosts is a more serious concern. We do not want to repeat the job of kernel
compilation on every single host. Ideally, we would like to compile once and then distribute to similar
hosts. Kernels can be compiled, cloned and distributed to different hosts provided they have a common
hardware base (this comes back to the principle of uniformity). Life is made easier if we can standardize
kernels; in order to do this we must first have standardized hardware.
The modular design of newer kernels means that we also need to distribute the upgraded modules in
/lib/modules to the receiving hosts. This is a logistic problem which requires some experimentation in order to find a viable
solution for a local site. These days it is not usually necessary to build custom kernels. The default kernels
supplied with most OSs are good enough for most purposes.
In many instances, the standard kernel program provided with the operating system works perfectly well
for the system's needs. There are a few circumstances, however, where it is necessary to create a custom
kernel (or perform equivalent customization activities) to meet the special needs of a particular system or
environment. Some of the most common are:
 To add capabilities to the kernel (e.g., support for disk quotas or a new filesystem type)
 To add support for new devices.
 To remove unwanted capabilities/features from the kernel to reduce its size and resource
consumption (mostly memory) and thereby presumably improve system performance.
 To change the values of hardwired kernel parameters that cannot be modified dynamically.
In general, building a custom kernel consists of these steps (a command-level sketch follows the list):
 Installing the kernel source code package (if necessary)
 Applying any patches, adding new device driver code, and/or making any other source code
changes you may require.
 Saving the current kernel and its associated configuration files.
 Modifying the current system configuration as needed.
 Building a new kernel executable image.
 Building any associated kernel modules (if applicable).
 Installing and testing the new kernel.
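On a typical Linux system these steps correspond roughly to the commands below; the source location, configuration choices and boot-loader handling vary between distributions and kernel versions, so treat this as a sketch rather than a recipe.

cd /usr/src/linux                      # kernel source tree (location is distribution-dependent)
cp /boot/config-$(uname -r) .config    # start from the running kernel's configuration, if available
make oldconfig                         # answer questions only for newly added options
make menuconfig                        # add or remove features, drivers and modules
make                                   # build the kernel image and modules
make modules_install                   # install modules under /lib/modules/<version>
make install                           # install the kernel image and update the boot loader
# Keep the old kernel and its boot entry until the new one has been tested.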

2.10. System Performance Tuning


System performance related complaints can take on a variety of forms, ranging from sluggish interactive
response time, to a job that takes too long to complete or is unable to run at all because of insufficient
resources. In general, system performance depends on how efficiently a system‘s resources are applied to
the current demand for them by various jobs in the system. The most important system resources from a
performance perspective are CPU, memory, and disk and network I/O, although sometimes other device
I/O can also be relevant.
How well a system performs at any given moment is the result of both the total demand for the various
system resources and how well the competition among processes for them is being managed. Accordingly,
performance problems can arise from a number of causes, including both a lack of needed resources and
ineffective control over them. Addressing a performance problem involves identifying what these resources
are and figuring out how to manage them more effectively.
When the lack of a critical resource is the source of a performance problem, there are a limited number of
approaches to improving the situation. Put simply, when you don‘t have enough of something, there are
only a few options: get more, use less, eliminate inefficiency and waste to make the most of what you have,
or ration what you have. In the case of a system resource, this can mean obtaining more of it (if that is
possible), reducing job or system requirements so that less of it is needed, having its various consumers share the
amount that is available by dividing it between them, having them take turns using it, or otherwise
changing the way it is allocated or controlled. For example, if your system is short of CPU resources, your
options for improving things may include some or all of the following:
 Adding more CPU capacity by upgrading the processor.
 Adding additional processors to allow different parts of the work load to proceed in parallel.
 Taking advantage of currently unused CPU capacity by scheduling some jobs to run during times
when the CPU is lightly loaded or even idle.
 Reducing demands for CPU cycles by eliminating some of the jobs that are contending for them (or
moving them to another computer).
 Using process priorities to allocate CPU time explicitly among processes that want it, favoring
some over the others.
 Employing a batch system to ensure that only a reasonable number of jobs run at the same time,
making others wait.
 Changing the behavior of the operating system‘s job scheduler to affect how the CPU is divided
among multiple jobs.
Naturally, not all potential solutions will necessarily be possible on any given computer system or within
any given operating system. It is often necessary to distinguish between raw system resources like CPU
and memory and the control mechanisms by which they are accessed and allocated. For example, in the
case of the system's CPU, you don't have the ability to allocate or control this resource as such (unless
you count taking the system down). Rather, you must use features like nice numbers and scheduler
parameters to control usage.
Resources and their control mechanisms:
 CPU: nice numbers, process priorities, batch queues, scheduler parameters.
 Memory: process resource limits, memory-management-related parameters, paging (swap) space.
 Disk I/O: file system organization across physical disks and controllers, file placement on disk, I/O-related parameters.
 Network I/O: network memory buffers, network-related parameters, network infrastructure.
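As an illustration of the CPU row, the nice-number mechanism is exercised on Linux with the nice and renice commands; the job name and PID here are placeholders.

$ nice -n 19 ./long_simulation &     # start a job at the lowest scheduling priority
$ renice -n 10 -p 4036               # lower the priority of an already running process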

Excessive network traffic is also a cause of impaired performance. We should try to eliminate unnecessary
network traffic whenever possible. Before any complex analysis of network resources is undertaken, we
can make sure that we have covered the basics:
 Make sure that there is a DNS server on each large subnet to avoid sending unnecessary
queries through a router. (On small subnets this would be overkill.)
 Make sure that the nameservers themselves use the loopback address 127.0.0.1 as the
primary nameserver on Unix-like hosts, so that we do not cause collisions by having the
nameserver talk to itself on the public network.
 Try to avoid distributed file accesses on a different subnet. This loads the router. If possible,
file-servers and clients should be on the same subnet.
 If we are running X-windows, make sure that each workstation has its DISPLAY variable
set to :0.0 rather than hostname:0.0, to avoid sending data out onto the network, only to
come back to the same host.
Some operating systems have nice graphical tools for viewing network statistics, while others have only
netstat, with its varying options. Collision statistics can be seen with netstat -i for Unix-like OSs or netstat
/S on Windows. DNS efficiency is an important consideration, since all hosts are more or less completely
reliant on this service.
Measuring performance reliably, in a scientifically stringent fashion is a difficult problem, but adequate
measurements can be made, for the purpose of improving efficiency, using the process tables and virtual
memory statistics. If we see frantic activity in the virtual memory system, it means that we are suffering
from a lack of resources, or that some process has run amok. Once a problem is identified, we need a
strategy for solving it. Performance tuning can involve everything from changing hardware to tweaking
software.
 Optimizing choice of hardware
 Optimizing chosen hardware
 Optimizing kernel behavior
 Optimizing software configurations
 (Optimizing service availability).
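The process tables and virtual memory statistics mentioned above can be sampled on Linux with standard tools, for example:

$ vmstat 5        # virtual memory, paging and CPU statistics, sampled every 5 seconds
$ top             # live view of the process table, sorted by CPU usage
$ iostat -x 5     # per-device disk I/O statistics (from the sysstat package, if installed)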
Hardware has physical limitations. For instance, the heads of a hard-disk can only be in one place at a
time. If we want to share a hard-disk between two processes, the heads have to be moved around between
two regions of the disk, back and forth. Moving the read heads over the disk platter is the slowest
operation in disk access and perhaps the computer as a whole, and unfortunately something we can do
nothing about. It is a fundamental limitation. Moreover, to get the data from disk into RAM, it is necessary
to interrupt processes and involve the kernel.
Time spent executing kernel code is time not spent on executing user code, and so it is a performance
burden. Resource sharing is about balancing overheads. We must look for the sources of overheads and try
to minimize them, or mitigate their effects by cunning.
The Tuning Process
The following process offers the most effective approach to addressing system performance issues.
1. Define the problem in as much detail as you can
The more specific you can be about what is wrong (or less than optimal) with the way things are currently,
the more likely it is you can find ways to improve them. Ideally, you'd like to move from an initial
problem description like this one:
'System response time is slow.'
to one like this:
'Interactive users running X experience significant delays opening new windows and switching between
windows.'
A good description of the current performance issues will also implicitly state your performance
goals. For example, in this case, the performance goal is clearly to improve interactive response time for
users running under X. It is important to understand such goals clearly, even if it is not always possible to
reach them (in which case, they are really wishes more than goals).
2. Determine what’s causing the problem
To do so, you‘ll need to answer questions like these:
 What is running on the system (or, when the performance of a single job or process is the issue,
what else is running)? You may also need to consider the sources of the other processes (for
example, local users, remote users, the cron subsystem, and so on).
 When or under what conditions does the problem occur? For example, does it only occur at
certain, predictable times of the day or when remote NFS mounts of local disks have reached a
certain level? Are all users affected or only some or even one of them?
 Has anything about the system changed that could have introduced or exacerbated the
problem?
 What is the critical resource that is adversely affecting performance? Answering this question
will involve finding the performance bottleneck for the job(s) in which you are interested (or
for this type of system workload).
For example, if we examined the system with the X windows performance problems, we might find that
the response-time problems occurred only when more than one simulation job and/or large compilation
job is running. By watching what happens when a user tries to switch windows under those conditions, we
could also figure out that the critical resource is system memory and that the system is paging.
3. Formulate explicit performance improvement goals
This step involves transforming the implicit goals (wishes) that were part of the problem description into
concrete, measurable goals. Again, being as precise and detailed as possible will make your job easier. In
many cases, tuning goals will need to be developed in conjunction with the users affected by the
performance problems, and possibly with other users and management personnel as well. System
performance is almost always a matter of compromises and tradeoffs, because it inevitably involves
deciding how to apply and apportion the finite available resources. Tuning is easiest and most successful
where there is a clear agreement about the relative priority and importance of the various competing
activities on the system.
To continue with our example, setting achievable tuning goals will be difficult unless it is decided whose
performance is more important. In other words, it is probably necessary to choose between snappy
interactive response time for X users and fast completion times for simulation and compilation jobs
(remember that the status quo has already been demonstrated not to work). Decided one way, the tuning
goal could become something like this:
'Improve interactive response time for X users as much as possible without making simulation jobs take
any longer to complete. Compilations can be delayed somewhat in order to keep the system from paging.'
4. Design and implement modifications to the system and applications to achieve those goals
Figuring out what to do is, of course, the trickiest part of tuning a system. It is important to tune the
system as a whole. Focusing only on part of the system workload will give you a distorted picture of the
problem, because system performance is ultimately the result of the interactions among everything on the
system.
5. Monitor the system to determine how well the changes worked
The purpose here is to evaluate the system status after the change is made and determine whether or not
the change has improved things as expected or desired. The most successful tuning method introduces
small changes to the system, one at a time, allowing you to thoroughly test each one and judge its
effectiveness and to back it out again if it makes things worse instead of better.
6. Return to the first step and begin again
System performance tuning is inevitably an iterative process, because even a successful change will often
reveal new interactions to understand and new problems to address. Similarly, once the bottleneck caused
by one system resource is relieved, a new one centered around a different resource may very well arise. In
fact, the initial performance problem can often be just a secondary symptom of the real, more serious
underlying problem (e.g., a CPU shortage can be a symptom of serious memory shortfalls).

CHAPTER THREE:-NETWORK MANAGEMENT


3.1. Introduction to Network Management

Network management, in general, is a service that employs a variety of protocols, tools, applications, and
devices to assist human network managers in monitoring and controlling the appropriate network resources,
both hardware and software, in order to meet service needs and network objectives.
A network management system (NMS) refers to a collection of applications that enable network components to be
monitored and controlled. In general, network management systems have the same basic architecture, as shown in
Figure 3.1. The architecture consists of two key elements: a managing device, called a management station, or a
manager and the managed devices, called management agents or simply an agent. A management station serves as
the interface between the human network manager and the network management system. It is also the platform for
management applications to perform management functions through interactions with the management agents. The
management agent responds to the requests from the management station and also provides the management
station with unsolicited information.
Given the diversity of managed elements, such as routers, bridges, switches, hubs and so on, and the wide variety
of operating systems and programming interfaces, a management protocol is critical for the management station to
communicate with the management agents effectively.
SNMP and CMIP are two well-known network management protocols. A network management system is
generally described using the Open System Interconnection (OSI) network management model. As an OSI network
management protocol, CMIP was proposed as a replacement for the simple but less sophisticated SNMP; however,
it has not been widely adopted.
A network management system consists of incremental hardware and software additions implemented among
existing network components. The software used in accomplishing the network management tasks resides in the
host computers and communications processors (e.g., front-end processors, terminal cluster controllers, bridges,
routers).A network management system is designed to view the entire network as a unified architecture, with
addresses and labels assigned to each point and the specific attributes of each element and link known to the
system. The active elements of the network provide regular feedback of status information to the network control
center.
Figure 3.1: Typical network management architecture. A network management system (with a display and
a network management application) communicates via a network management protocol with the agents
running on the managed devices.


OSI Network Management Model
The OSI network management comprises four major models:
o Organization Model defines the manager, agent, and managed object. It describes the
components of a network management system, the components' functions and infrastructure.
o Information Model is concerned with the information structure and storage. It specifies the
information base used to describe the managed objects and their relationships. The Structure
of Management Information (SMI) defines the syntax and semantics of management
information stored in the Management Information Base (MIB). The MIB is used by both the
agent process and the manager process for management information exchange and storage.
o Communication Model deals with the way that information is exchanged between the agent
and the manager and between the managers. There are three key elements in the
communication model: transport protocol, application protocol and the actual message to be
communicated.
o Functional Model comprises five functional areas of network management.


Figure 3.2: The OSI and TCP/IP reference models. The layers correspond roughly as follows:
 OSI Application maps to the TCP/IP Application layer.
 OSI Presentation and Session have no counterpart in the TCP/IP model.
 OSI Transport maps to TCP/UDP.
 OSI Network maps to the Internetwork layer.
 OSI Data Link and Physical map to the Network Interface & Hardware layer.

Network Management Layers


Two protocol architectures have served as the basis for the development of interoperable
communications standards: the International Organization for Standardization (ISO) OSI reference
model and the TCP/IP reference model, which are compared in Figure 3.2. The OSI reference model
was developed based on the promise that different layers of the protocol provide different services
and functions. It provides a conceptual framework for communications among different network
elements. The OSI model has seven layers. Network communication occurs at different layers, from
the application layer to the physical layer; however, each layer can only communicate with its
adjacent layers.
The OSI and TCP/IP reference models have much in common. Both are based on the concept of a
stack of independent protocols. Also, the functionality of the corresponding layers is roughly similar.
However, the difference does exist between the two reference models. The concepts that are central
to the OSI model include service, interface, and protocol. The OSI reference model makes the
distinction among these three concepts explicit. The TCP/IP model, however, does not clearly
distinguish among these three concepts. As a consequence, the protocols in the OSI model are better
hidden than in the TCP/IP model and can be replaced relatively easily as the technology changes.
The OSI model was devised before the corresponding protocols were invented.
Therefore it is not biased toward one particular set of protocols, which makes it quite general. With
TCP/IP, the reverse is true: the protocols came first, and the model was really just a description of
the existing protocols. Consequently, this model does not fit any other protocol stack.
ISO Network Management Functions
The fundamental goal of network management is to ensure that the network resources are available
to the designated users. To ensure rapid and consistent progress on network management functions,
ISO has grouped the management functions into five areas:
i. Configuration management
ii. Fault management
iii. Accounting management
iv. Security management, and
v. Performance management
The ISO classification has gained broad acceptance for both standardized and proprietary network
management systems. A description of each management function is provided in the following
subsections.
Configuration Management
Modern data communication networks are composed of individual components and logical subsystems
(e.g., the device driver in an operating system) that can be configured to perform many different
applications. The same device, for example, can be configured to act either as a router or as an end
system node or both. Once it is decided how a device is to be used, the configuration manager can
choose the appropriate software and set of attributes and values (e.g., a transport layer retransmission
timer) for that device.
Configuration management is concerned with initializing a network, provisioning its resources and
services, monitoring and controlling the network, and gracefully shutting down part or all of it. More
specifically, its responsibilities include setting, maintaining, adding, and updating the relationships among
components and the status of the components themselves during network operation.
Configuration management consists of both device configuration and network configuration. Device
configuration can be performed either locally or remotely. Automated network configuration, such as
Dynamic Host Configuration Protocol (DHCP) and Domain Name Services (DNS), plays a key role in
network management.

User Requirements
Startup and shut down operations on a network are the specific responsibilities of configuration
management. It is often desirable for these operations on certain components to be performed
unattended (e.g., starting up or shutting down a network interface unit). The network manager
needs the capability to identify initially the components that comprise the network and to define
the desired connectivity of these components. Those who regularly configure a network with
the same or a similar set of resource attributes need ways to define and modify default
attributes and to load these predefined sets of attributes into the specified network components.
The network manager needs the capability to change the connectivity of network components
when users' needs change. Reconfiguration of a network is often desired in response to
performance evaluation or in support of network upgrade, fault recovery, or security checks.
Users often need to, or want to, be informed of the status of network resources and
components. Therefore, when changes in configuration occur, users should be notified of these
changes. Configuration reports can be generated either on some routine periodic basis or in
response to a request for such a report. Before reconfiguration, users often want to inquire
about the upcoming status of resources and their attributes.
Network managers usually want only authorized users (operators) to manage and control
network operation (e.g., software distribution and updating).

Fault Management
Fault management involves detection, isolation, and correction of abnormal operations that may cause
the failure of the OSI network. The major goal of fault management is to ensure that the network is
always available and when a fault occurs, it can be fixed as rapidly as possible.
Faults should be distinct from errors. An error is generally a single event, whereas a fault is an
abnormal condition that requires management attention to fix. For example, the physical
communication line cut is a fault, while a single bit error on a communication line is an error.
To maintain proper operation of a complex network, care must be taken that the system as a whole, and
each essential component individually, is in proper working order. When a fault occurs, it is
important, as rapidly as possible, to
o Determine exactly where the fault is.
o Isolate the rest of the network from the failure so that it can continue to function without
interference.
o Reconfigure or modify the network in such a way as to minimize the impact of operation without
the failed component or components.
o Repair or replace the failed components to restore the network to its initial state.
Central to the definition of fault management is the fundamental concept of a fault. Faults are to be
distinguished from errors. A fault is an abnormal condition that requires management attention (or
action) to repair. A fault is usually indicated by failure to operate correctly or by excessive errors. For
example, if a communications line is physically cut, no signals can get through. Or a crimp in the cable
may cause wild distortions so that there is a persistently high bit error rate. Certain errors (e.g., a single
bit error on a communication line) may occur occasionally and are not normally considered to be faults.
It is usually possible to compensate for errors using the error control mechanisms of the various
protocols.

User Requirements
Users expect fast and reliable problem resolution. Most end users will tolerate occasional outages. When
these infrequent outages do occur, however, the user generally expects to receive immediate notification
and expects that the problem will be corrected almost immediately. To provide this level of fault
resolution requires very rapid and reliable fault detection and diagnostic management functions. The
impact and duration of faults can also be minimized by the use of redundant components and alternate
communication routes, to give the network a degree of fault tolerance. The fault management capability
itself should be redundant to increase network reliability.
Users expect to be kept informed of the network status, including both scheduled and unscheduled
disruptive maintenance. Users expect reassurance of correct network operation through mechanisms that
use confidence tests or analyze dumps, logs, alerts, or statistics. After correcting a fault and restoring a
system to its full operational state, the fault management service must ensure that the problem is truly
resolved and that no new problems are introduced. This requirement is called problem tracking and
control.
As with other areas of network management, fault management should have minimal effect on network
performance.

Security Management
Security management protects the networks and systems from unauthorized access and security attacks.
The mechanisms for security management include authentication, encryption and authorization.
Security management is also concerned with generation, distribution, and storage of encryption keys as
well as other security-related information. Security management may include security systems such as
firewalls and intrusion detection systems that provide real-time event monitoring and event logs.
Security management is concerned with generating, distributing, and storing encryption keys.
Passwords and other authorization or access control information must be maintained and distributed.
Security management is also concerned with monitoring and controlling access to computer networks
and access to all or part of the network management information obtained from the network nodes.
Logs are an important security tool, and therefore security management is very much involved with the
collection, storage, and examination of audit records and security logs, as well as with the enabling and
disabling of these logging facilities.

User Requirements

Security management provides facilities for protection of network resources and user information.
Network security facilities should be available for authorized users only. Users want to know that
the proper security policies are in force and effective and that the management of security facilities
is itself secure.

Accounting Management
Accounting management enables charges for the use of managed objects to be measured and the cost of
such use to be determined. Its tasks may include measuring the resources consumed, collecting accounting
data, setting billing parameters for the services used by customers, maintaining the databases used for
billing purposes, and preparing resource usage and billing reports.
In many enterprise networks, individual divisions or cost centers, or even individual project accounts,
are charged for the use of network services. These are internal accounting procedures rather than actual
cash transfers, but they are important to the participating users nevertheless. Furthermore, even if no
such internal charging is employed, the network manager needs to be able to track the use of network
resources by user or user class for a number of reasons, including the following:
o A user or group of users may be abusing their access privileges and burdening the network at the
expense of other users.
o Users may be making inefficient use of the network, and the network manager can assist in
changing procedures to improve performance.
o The network manager is in a better position to plan for network growth if user activity is known in
sufficient detail

User Requirements
The network manager needs to be able to specify the kinds of accounting information to be
recorded at various nodes, the desired interval between successive sending of the recorded
information to higher-level management nodes, and the algorithms to be used in calculating the
charging. Accounting reports should be generated under network manager control.
To limit access to accounting information, the accounting facility must provide the capability to
verify users' authorization to access and manipulate that information.

Performance Management
Performance management is concerned with evaluating and reporting the behavior and the effectiveness
of the managed network objects. A network monitoring system can measure and display the status of
the network, such as gathering the statistical information on traffic volume, network availability,
response times, and throughput.
Modern data communications networks are composed of many and varied components, which must
intercommunicate and share data and resources. In some cases, it is critical to the effectiveness of an
application that the communication over the network be within certain performance limits. Performance
management of a computer network comprises two broad functional categories—monitoring and
controlling. Monitoring is the function that tracks activities on the network. The controlling function
enables performance management to make adjustments to improve network performance. Some of the
performance issues of concern to the network manager are as follows:
o What is the level of capacity utilization?
o Is there excessive traffic?
o Has throughput been reduced to unacceptable levels?


o Are there bottlenecks?
o Is response time increasing?
To deal with these concerns, the network manager must focus on some initial set of resources to be
monitored to assess performance levels. This includes associating appropriate metrics and values with
relevant network resources as indicators of different levels of performance. For example, what count of
retransmissions on a transport connection is considered to be a performance problem requiring
attention? Performance management, therefore, must monitor many resources to provide information in
determining network operating level. By collecting this information, analyzing it, and then using the
resultant analysis as feedback to the prescribed set of values, the network manager can become more
and more adept at recognizing situations indicative of present or impending performance degradation.
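As a minimal sketch of such monitoring on a Unix-like host (the host name is a placeholder), response time and interface error counters can be sampled and trended over time:

    ping -c 20 server.example.com   # the summary line reports minimum/average/maximum round-trip time
    netstat -i                      # per-interface packet and error counters

Collected at regular intervals, even these simple figures can reveal rising response times or error rates before users begin to complain.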

User Requirements
Before using a network for a particular application, a user may want to know such things as
the average and worst-case response times and the reliability of network services. Thus,
performance must be known in sufficient detail to respond to specific user queries. End users
expect network services to be managed in such a way as to afford their applications
consistently good response time. Network managers need performance statistics to help them
plan, manage, and maintain large networks. Performance statistics can be used to recognize
potential bottlenecks before they cause problems to end users. Appropriate corrective action
can then be taken. This action can take the form of changing routing tables to balance or
redistribute traffic load during times of peak use or when a bottleneck is identified by a rapidly
growing load in one area. Over the long term, capacity planning based on such performance
information can indicate the proper decisions to make, for example, with regard to expansion
of lines in that area.

Network Management Protocols


Simple Network Management Protocol (SNMP) is the most widely used data network management
protocol. Most of the network components used in enterprise network systems have built-in network
agents that can respond to an SNMP network management system. This enables new components to be
automatically monitored. Remote network monitoring (RMON) is, on the other hand, the most
important addition to the basic set of SNMP standards. It defines a remote network monitoring MIB
that supplements MIB-2 and provides the network manager with vital information about the
internetwork.
o Simple Network Management Protocol Version 1 (SNMPv1)
SNMP was developed for use as a network management tool for networks and internetworks operating
TCP/IP. It has since been expanded for use in all types of networking environments. The term simple
network management protocol (SNMP) is actually used to refer to a collection of specifications for
network management that include the protocol itself, the definition of a database, and associated
concepts. The model of network management that is used for SNMP includes the following key
elements:
o Management station, or manager
o Agent
o Management information base
o Network management protocols
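As an illustration of the manager/agent interaction (assuming the net-snmp command-line tools on the management station and an agent configured with the community string public; the address is a placeholder), a manager can read individual objects or walk part of the MIB:

    snmpget -v2c -c public 192.0.2.1 1.3.6.1.2.1.1.3.0    # sysUpTime: how long the agent has been running
    snmpwalk -v2c -c public 192.0.2.1 1.3.6.1.2.1.2.2     # the interfaces table (traffic and error counters)

The numeric object identifiers are shown so the example works even without MIB files installed.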
3.2. Introduction to Network Administration Approach
3.3. TCP/IP Basics

The term "TCP/IP" is shorthand for a large collection of protocols and services that are used for
internetworking computer systems. In any given implementation, TCP/IP encompasses operating
system components, user and administrative commands and utilities, configuration files, and device
drivers, as well as the kernel and library support upon which they all depend. Many of the basic TCP/IP
networking concepts are not operating system specific.
Figure 3.3 depicts an example TCP/IP network including several kinds of network connections.
Assuming that these computers are in reasonably close physical proximity to one another, this network
would be classed as a local area network (LAN).
A common situation occurs when a higher-level protocol passes more data than will fit into a single lower-level packet. The data in a UDP packet, for example, can easily be larger than the largest packet the underlying network can carry, so IP must divide the data into multiple datagrams (fragments) for transmission.
These are some of the most important lower-level protocols in the TCP/IP family:
ARP (Address Resolution Protocol)
The Address Resolution Protocol specifies how to determine the corresponding MAC address for an IP
address. It operates at the Network Access layer. While this protocol is required by TCP/IP networking,
it is not actually part of the TCP/IP suite.

IP (Internet Protocol)
The Internet Protocol manages low-level data transmission, routing, and fragmentation/reassembly. It
operates at the Internet layer.
TCP (Transmission Control Protocol)
The Transmission Control Protocol provides reliable network communication sessions between
applications, including flow control and error detection and correction. It operates at the Transport
layer.
UDP (User Datagram Protocol)
The User Datagram Protocol provides "connectionless" communication between applications. In
contrast to TCP, data transmitted using UDP is not delivery-verified; if expected data fails to arrive, the
application simply requests it again. UDP operates at the Transport layer.
Network operations are performed by a variety of network services, consisting of the software and other
facilities needed to perform a specific type of network task. For example, the ftp service performs file
transfer operations using the FTP protocol; the software program that does the actual work is the FTP
daemon (whose actual name varies).
A service is defined by the combination of a transport protocol (TCP or UDP) and a port: a logical network connection endpoint identified by a number. The TCP and UDP port numbering schemes are part of the definition of these protocols.
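On a Unix-like system, the well-known service-to-port mappings can be inspected in /etc/services, and the ports currently in use can be listed with netstat; for example (a sketch, output omitted):

    grep -E '^(ftp|smtp|domain|http)[[:space:]]' /etc/services   # standard port numbers for common services
    netstat -tuln                                                # TCP and UDP ports this host is listening on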
Configuring Mail Transfer Agent (postfix)
The concept of Mail Transfer Agents
Within Internet message handling services (MHS), a message transfer agent or mail transfer
agent (MTA) or mail relay is software that transfers electronic mail messages from one computer to
another using a client–server application architecture. An MTA implements both the client (sending)
and server (receiving) portions of the Simple Mail Transfer Protocol.
The terms mail server, mail exchanger, and MX host may also refer to a computer performing the
MTA function. The Domain Name System (DNS) associates a mail server to a domain with an MX
record containing the domain name of the host(s) providing MTA services.
A mail server is a computer that serves as an electronic post office for email. Mail exchanged across
networks is passed between mail servers that run specially designed software. This software is built
around agreed-upon, standardized protocols for handling not only mail messages, but also any data
files (such as images, multimedia or documents) that might be attached to them.
A message transfer agent receives mail from either another MTA, a mail submission agent (MSA),
or a mail user agent (MUA). The transmission details are specified by the Simple Mail Transfer
Protocol (SMTP). When a recipient mailbox of a message is not hosted locally, the message is
relayed, that is, forwarded to another MTA. Every time an MTA receives an email message, it adds a
Received trace header field to the top of the header of the message, thereby building a sequential
record of MTAs handling the message. The process of choosing a target MTA for the next hop is
also described in SMTP, but can usually be overridden by configuring the MTA software with
specific routes.
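The relaying just described is easiest to see in the SMTP dialogue itself. The following client-side commands are an illustrative sketch (the addresses use the reserved example.com/example.org domains; in a real session the server answers each command with a numeric reply code before the next one is sent):

    EHLO client.example.org
    MAIL FROM:<alice@example.org>
    RCPT TO:<bob@example.com>
    DATA
    Subject: test message

    Hello Bob.
    .
    QUIT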

An MTA works in the background, while the user usually interacts directly with a mail user agent. One may distinguish initial submission as first passing through an MSA: port 587 is used for communication between an MUA and an MSA, while port 25 is used for communication between MTAs, or from an MSA to an MTA; this distinction was first made in RFC 2476.
For recipients hosted locally, the final delivery of email to a recipient mailbox is the task of a
message delivery agent (MDA). For this purpose the MTA transfers the message to the message
handling service component of the message delivery agent. Upon final delivery, the Return-Path field
is added to the envelope to record the return path. The function of an MTA is usually complemented
with some means for email clients to access stored messages. This function typically employs a
different protocol. The most widely implemented open protocols for the MUA are the Post Office
Protocol (POP3) and the Internet Message Access Protocol (IMAP), but many proprietary systems
exist for retrieving messages (e.g. Exchange, Lotus Domino/Notes). Many systems also offer a web
interface for reading and sending email that is independent of any particular MUA.
At its most basic, an MUA using POP3 downloads messages from the server mailbox onto the local
computer for display in the MUA. Messages are generally removed from the server at the same time
but most systems also allow a copy to be left behind as a backup. In contrast, an MUA using IMAP
displays messages directly from the server, although a download option for archive purposes is
usually also available. One advantage this gives IMAP is that the same messages are visible from
any computer accessing the email account, since messages aren't routinely downloaded and deleted
from the server. If set up properly, sent mail can be saved to the server also, in contrast with POP
mail, where sent messages exist only in the local MUA and are not visible by other MUAs accessing
the same account.
The IMAP protocol has features that allow uploading of mail messages and there are
implementations that can be configured to also send messages like an MTA, which combine sending
a copy and storing a copy in the Sent folder in one upload operation.
The reason for using SMTP as a standalone transfer protocol is twofold:
 To cope with discontinuous connections. Historically, inter-network connections were not
continuously available as they are today and many readers didn't need an access protocol, as they
could access their mailbox directly (as a file) through a terminal connection. SMTP, if configured to
use backup MXes, can transparently cope with temporary local network outages. A message can be
transmitted along a variable path by choosing the next hop from a preconfigured list of MXes with no
intervention from the originating user.
 Submission policies. Modern systems are designed for users to submit messages to their local servers
for policy, not technical, reasons. It was not always that way. For example, the original Eudora email
client featured direct delivery of mail to the recipients' servers, out of necessity. Today, funneling
email through MSA systems run by providers that in principle have some means of holding their users
accountable for the generation of the email is a defense against spam and other forms of email abuse.
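Putting the pieces above together, a minimal sketch of a Postfix configuration (a fragment of /etc/postfix/main.cf) might look like the following; the host and network values are placeholders, and a production setup needs considerably more, such as TLS and relay restrictions:

    myhostname      = mail.example.com
    mydomain        = example.com
    myorigin        = $mydomain
    inet_interfaces = all
    mydestination   = $myhostname, localhost.$mydomain, localhost, $mydomain
    mynetworks      = 127.0.0.0/8, 192.168.1.0/24    # clients allowed to relay through this MTA
    relayhost       =                                # empty: deliver directly using DNS MX lookups

After editing the file, the configuration is typically re-read with the postfix reload command.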
Configuring a Proxy Caches (Squid)
Squid is a caching and forwarding HTTP web proxy. It has a wide variety of uses, including
speeding up a web server by caching repeated requests, caching web, DNS and other computer
network lookups for a group of people sharing network resources, and aiding security by filtering
traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other
protocols including Internet Gopher, SSL, TLS and HTTPS. Squid does not support the SOCKS
protocol.
Squid was originally designed to run as a daemon on Unix-like systems. A Windows port was
maintained up to version 2.7. New versions available on Windows use the Cygwin environment.
Squid is free software released under the GNU General Public License.
After a Squid proxy server is installed, web browsers can be configured to use it as a proxy HTTP
server, allowing Squid to retain copies of the documents returned, which, on repeated requests for
the same documents, can reduce access time as well as bandwidth consumption. This is often useful
for Internet service providers to increase speed to their customers, and LANs that share an Internet
connection.
A client program (e.g. browser) either has to specify explicitly the proxy server it wants to use
(typical for ISP customers), or it could be using a proxy without any extra configuration:
"transparent caching", in which case all outgoing HTTP requests are intercepted by Squid and all
responses are cached. The latter is typically a corporate set-up (all clients are on the same LAN) and
often introduces the privacy concerns mentioned above.
Squid has some features that can help anonymize connections, such as disabling or changing specific
header fields in a client's HTTP requests.
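As an illustration, a minimal forward-proxy fragment of squid.conf might look like this (the values are examples only, and exact defaults vary between Squid versions):

    http_port 3128                               # the port browsers are pointed at
    acl localnet src 192.168.1.0/24              # the LAN allowed to use the proxy
    http_access allow localnet
    http_access deny all                         # refuse everyone else
    cache_dir ufs /var/spool/squid 1024 16 256   # roughly 1 GB of on-disk cache
    forwarded_for off                            # one of the header-related privacy options mentioned above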
Another setup is "reverse proxy" or "webserver acceleration" (using http_port 80 accel vhost). In this
mode, the cache serves an unlimited number of clients for a limited number of—or just one—web
servers.
As an example, if slow.example.com is a "real" web server, and www.example.com is the Squid
cache server that "accelerates" it, the first time any page is requested from www.example.com, the
cache server would get the actual page from slow.example.com, but later requests would get the
stored copy directly from the accelerator (for a configurable period, after which the stored copy
would be discarded). The end result, without any action by the clients, is less traffic to the source
server, meaning less CPU and memory usage, and less need for bandwidth. This does, however,
mean that the source server cannot accurately report on its traffic numbers without additional
configuration, as all requests would seem to have come from the reverse proxy.
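A reverse-proxy ("accelerator") setup corresponding to this example might be sketched as follows, again only as a fragment using the hostnames from the text:

    http_port 80 accel vhost
    cache_peer slow.example.com parent 80 0 no-query originserver name=origin
    acl our_site dstdomain www.example.com
    http_access allow our_site
    cache_peer_access origin allow our_site
    http_access deny all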
It is possible for a single Squid server to serve both as a normal and a reverse proxy simultaneously. For
example, a business might host its own website on a web server, with a Squid server acting as a reverse
proxy between clients (customers accessing the website from outside the business) and the web server.
The same Squid server could act as a classical web cache, caching HTTP requests from clients within
the business (i.e., employees accessing the internet from their workstations), so accelerating web access
and reducing bandwidth demands.
TCP/IP Troubleshooting
TCP/IP troubleshooting utilities are essential to diagnose most TCP/IP problems and begin working on solutions.
The top 7 tools for troubleshooting the TCP/IP problems are: Ping, Tracert, ARP, Netstat, Nbtstat, NSLookup,
and IPconfig. These tools will help you to check the status of your network and allow you to troubleshoot and
test connectivity to remote hosts.
1. Ping
The PING utility tests connectivity between two hosts. PING uses a special protocol called the Internet Control Message Protocol (ICMP) to determine whether the remote machine (website, server, etc.) can receive the test packet and reply. It is also a good way to verify that TCP/IP is installed and your network card is working.
Example
Ping the loopback address (127.0.0.1) to verify that TCP/IP is installed and configured correctly on the local computer.
Type: PING 127.0.0.1
A successful result confirms that both TCP/IP and the network card are working. To test connectivity to a website, all you have to do is type: ping espn.com
The results should tell you if the connection was successful or if you had any lost packets. Packet
loss describes a condition in which data packets appear to be transmitted correctly at one end of a
connection, but never arrive at the other. Why? Well, there are a few possibilities.
The network connection might be poor and packets get damaged in transit or the packet was dropped
at a router because of internet congestion. Some Internet Web servers may be configured to
disregard ping requests for security purposes.
Note the IP address of espn.com is 199.181.132.250. You can also ping this address and get the
same result. However, Ping is not just used to test websites. It can also test connectivity to various
servers: DNS, DHCP, your Print server, etc. As you get more into networking you'll realize just how
handy the Ping utility can be.
2. Tracert
Tracert is very similar to Ping, except that Tracert identifies the path taken, hop by hop, rather than the time it takes for each packet to return (as Ping does). If there is
trouble connecting to a remote host, use Tracert to see where that connection fails. Any information sent from a source computer must travel through many intermediate computers, servers, and routers before it reaches a destination.
It may not be your computer but something that is down along the way. It can also tell you if
communication is slow because a link has gone down between you and the destination.
If you know there are normally 4 routers but Tracert returns 8 responses, you know your packets are
taking an indirect route due to a link being down.
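For example (tracert is the Windows spelling; Unix-like systems provide the equivalent traceroute command):

    tracert www.example.com      # Windows
    traceroute www.example.com   # Linux/Unix

Each line of output corresponds to one hop, so a hop that fails to answer or answers very slowly points to where the problem lies.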
3. ARP
The ARP utility helps diagnose problems associated with the Address Resolution Protocol (ARP). TCP/IP hosts use ARP to determine the physical (MAC) address that corresponds to a specific IP address. Type arp with the -a option to display IP addresses that have recently been resolved to MAC addresses.

4. Netstat
Netstat (Network Statistics) displays network connections (both incoming and outgoing), routing tables, and a number of network interface statistics. Netstat -s provides statistics about incoming and outgoing traffic.
5. Nbtstat
Nbtstat (NetBIOS over TCP/IP) enables you to check information about NetBIOS names. It lets you view the NetBIOS name cache (nbtstat -c), which shows NetBIOS names and the IP addresses they have been resolved to; the names resolved by broadcast or WINS (nbtstat -r); and the names registered by the local system (nbtstat -n).
6. NSLookup
NSLookup provides a command-line utility for diagnosing DNS problems. In its most basic usage, NSLookup returns the IP address that matches a given host name.
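For example (the names and addresses are documentation placeholders):

    nslookup www.example.com        # forward lookup: host name to IP address
    nslookup 192.0.2.10             # reverse lookup: IP address to host name
    nslookup -type=mx example.com   # query a specific record type, here the mail exchangers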

7. IPconfig
IPconfig is primarily a configuration display tool rather than a troubleshooting utility, but it is invaluable for showing the current TCP/IP settings. The IPconfig command-line utility shows detailed information about the network you are connected to, and it also helps with reconfiguration of your IP address through release and renew. If you want to know what your IP address is, IPconfig is what you type at the command prompt.
IPconfig gives a quick view of your IP address, your subnet mask, and your default gateway. IPconfig /all gives more detailed information, including the DNS servers, whether DHCP is enabled, and the MAC address, along with other helpful details. Other useful IPconfig options include IPconfig /release and IPconfig /renew.
Unless a static IP address is required, it is usually better to set the option for obtaining the IP address automatically. This is achieved through a DHCP server. Dynamic Host Configuration Protocol (DHCP) is a network protocol that enables a server to automatically assign an IP address to a computer from a defined range of numbers (i.e., a scope) configured for a given network. Consider what happens when you release the IP address: the Internet connection is lost and the address becomes 0.0.0.0. Typing IPconfig /renew then re-establishes the TCP/IP configuration on all network adapters and restores Internet access.
Note: IPconfig /release and /renew will not work if you manually assigned your IP addresses.
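A typical diagnostic sequence on a Windows host therefore looks like this:

    ipconfig            # quick view: IP address, subnet mask, default gateway
    ipconfig /all       # detailed view: DNS servers, DHCP status, MAC address
    ipconfig /release   # give up the DHCP-assigned address (it becomes 0.0.0.0)
    ipconfig /renew     # request a fresh lease from the DHCP server

On Linux, roughly comparable information is shown by commands such as ip addr and ip route.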

CHAPTER FOUR:-HOST AND NETWORK SECURITY


4.1. Security Planning & System Audits
 Introduction to Security
Securing the modern business network and IT infrastructure demands an end-to-end approach and a
firm grasp of vulnerabilities and associated protective measures. While such knowledge cannot thwart
all attempts at network incursion or system attack, it can empower network engineers to eliminate
certain general problems, greatly reduce potential damages, and quickly detect breaches. With the ever-
increasing number and complexity of attacks, vigilant approaches to security in both large and small
enterprises are a must.
o Network Security Design
Following a structured set of steps when developing and implementing network security will help you
address the varied concerns that play a part in security design. Many security strategies have been
developed in a haphazard way and have failed to actually secure assets and to meet a customer's
primary goals for security. Breaking down the process of security design into the following steps will
help you effectively plan and execute a security strategy:
1. Identify network assets.
2. Analyse security risks.
3. Analyse security requirements and trade-offs.
4. Develop a security plan.
5. Define a security policy.
6. Develop procedures for applying security policies.
7. Develop a technical implementation strategy.
8. Achieve buy-in from users, managers, and technical staff.
9. Train users, managers, and technical staff.
10. Implement the technical strategy and security procedures.
11. Test the security and update it if any problems are found.
12. Maintain security.
1. Identifying Network Assets
Network assets can include network hosts (including the hosts' operating systems, applications, and
data), internetworking devices (such as routers and switches), and network data that traverses the
network. Less obvious, but still important, assets include intellectual property, trade secrets, and a
company's reputation.
2. Analysing Security Risks
Risks can range from hostile intruders to untrained users who download Internet applications that have
viruses. Hostile intruders can steal data, change data, and cause service to be denied to legitimate users.
Denial-of-service (DoS) attacks have become increasingly common in the past few years.
3. Analysing Security Requirements and Trade-offs
Although many customers have more specific goals, in general, security requirements boil down to the
need to protect the following assets:
 The confidentiality of data, so that only authorized users can view sensitive information
 The integrity of data, so that only authorized users can change sensitive information
 System and data availability, so that users have uninterrupted access to important computing
resources
One old truism in security is that the cost of protecting yourself against a threat should be less than the cost of recovering if the threat were to strike you. Cost in this context should be understood to include losses expressed in real currency, reputation, trustworthiness, and other less obvious measures.
As is the case with most technical design requirements, achieving security goals means making trade-
offs. Trade-offs must be made between security goals and goals for affordability, usability,
performance, and availability. Also, security adds to the amount of management work because user
login IDs, passwords, and audit logs must be maintained.
Security also affects network performance. Security features such as packet filters and data encryption
consume CPU power and memory on hosts, routers, and servers. Encryption can use upward of 15% of
available CPU power on a router or server. Encryption can be implemented on dedicated appliances
instead of on shared routers or servers, but there is still an effect on network performance because of the
delay that packets experience while they are being encrypted or decrypted.
Another trade-off is that security can reduce network redundancy. If all traffic must go through an
encryption device, for example, the device becomes a single point of failure. This makes it hard to meet
availability goals.
Security can also make it harder to offer load balancing. Some security mechanisms require traffic to
always take the same path so that security mechanisms can be applied uniformly. For example, a
mechanism that randomizes TCP sequence numbers (so that hackers can't guess the numbers) won't
work if some TCP segments for a session take a path that bypasses the randomizing function due to
load balancing.
4. Developing a Security Plan
One of the first steps in security design is developing a security plan. A security plan is a high-level
document that proposes what an organization is going to do to meet security requirements. The plan
specifies the time, people, and other resources that will be required to develop a security policy and
achieve technical implementation of the policy. As the network designer, you can help your customer
develop a plan that is practical and pertinent. The plan should be based on the customer's goals and the
analysis of network assets and risks.
A security plan should reference the network topology and include a list of network services that will be
provided (for example, FTP, web, email, and so on). This list should specify who provides the services,
who has access to the services, how access is provided, and who administers the services.
As the network designer, you can help the customer evaluate which services are definitely needed,
based on the customer's business and technical goals. Sometimes new services are added unnecessarily,
simply because they are the latest trend. Adding services might require new packet filters on routers and
firewalls to protect the services, or additional user-authentication processes to limit access to the
services, adding complexity to the security strategy. Overly complex security strategies should be
avoided because they can be self-defeating. Complicated security strategies are hard to implement
correctly without introducing unexpected security holes.
One of the most important aspects of the security plan is a specification of the people who must be
involved in implementing network security:
 Will specialized security administrators be hired?
 How will end users and their managers get involved?
 How will end users, managers, and technical staff be trained on security policies and procedures?
For a security plan to be useful, it needs to have the support of all levels of employees within the
organization. It is especially important that corporate management fully support the security plan.
Technical staff at headquarters and remote sites should buy into the plan, as should end users.
5. Developing a Security Policy
According to RFC 2196, "Site Security Handbook:"
A security policy is a formal statement of the rules by which people who are given access to an
organization's technology and information assets must abide.

A security policy informs users, managers, and technical staff of their obligations for protecting
technology and information assets. The policy should specify the mechanisms by which these
obligations can be met. As was the case with the security plan, the security policy should have buy-in
from employees, managers, executives, and technical personnel.
Developing a security policy is the job of senior management, with help from security and network
administrators. The administrators get input from managers, users, network designers and engineers,
and possibly legal counsel. As a network designer, you should work closely with the security
administrators to understand how policies might affect the network design.
After a security policy has been developed, with the engagement of users, staff, and management, it
should be explained to all by top management. Many enterprises require personnel to sign a statement
indicating that they have read, understood, and agreed to abide by a policy.
A security policy is a living document. Because organizations constantly change, security policies
should be regularly updated to reflect new business directions and technological shifts. Risks change
over time also and affect the security policy.
Components of a Security Policy
In general, a policy should include at least the following items:
 An access policy that defines access rights and privileges. The access policy should provide
guidelines for connecting external networks, connecting devices to a network, and adding new
software to systems. An access policy might also address how data is categorized (for example,
confidential, internal, and top secret).
 An accountability policy that defines the responsibilities of users, operations staff, and management.
The accountability policy should specify an audit capability and provide incident-handling
guidelines that specify what to do and whom to contact if a possible intrusion is detected.
 An authentication policy that establishes trust through an effective password policy and sets up
guidelines for remote-location authentication.
 A privacy policy that defines reasonable expectations of privacy regarding the monitoring of
electronic mail, logging of keystrokes, and access to users' files.
 Computer-technology purchasing guidelines that specify the requirements for acquiring,
configuring, and auditing computer systems and networks for compliance with the policy.
Well-thought-out security policies can significantly increase the security of a network. While policies can be both complex and cumbersome or basic and straightforward, it is often the simple aspects that prove most useful. Consider the combination of a centrally managed anti-virus update system and a host scanner to detect new or out-of-date systems. While this would entail setup, central administration, and software deployment capabilities, these are all generally available with today's operating systems. In general, policies and, ideally, automatic enforcement tools help reduce the obvious holes in system security so that one can concentrate on the more complex issues. The following would typically be part of an enterprise network security policy (a minimal host-firewall sketch follows this list):
o Firewalls at all public-private network transit points
o Version controlled and centrally deployed firewall rule sets
o External resources placed in dual firewall, DMZ protected networks
o All network hosts lock down unneeded network ports, turn off unneeded services
o All network hosts include centrally managed anti-virus software
o All network hosts utilize central security updates
o Secure central authentication such as Radius, Windows/Kerberos/Active Directory
o Centrally managed user management with a password policy (e.g. passwords must change every three months and must be "secure" passwords)
o Proactive network scanning for new hosts and out-of-date systems
o Network monitoring for suspicious behavior
o Incident response mechanisms (policies, manual, automated, etc.)
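As a concrete illustration of the host lock-down and firewall items above, a minimal default-deny packet filter for a single Linux host might be sketched with iptables as follows; the interface, network range, and set of allowed services are assumptions that must be adapted to local policy:

    iptables -P INPUT DROP                     # default policy: drop unsolicited inbound traffic
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT
    iptables -A INPUT -i lo -j ACCEPT          # allow loopback traffic
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -s 192.168.1.0/24 -j ACCEPT   # SSH only from the management LAN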
6. Developing Security Procedures
Security procedures implement security policies. Procedures define configuration, login, audit, and
maintenance processes. Security procedures should be written for end users, network administrators,
and security administrators. Security procedures should specify how to handle incidents (that is, what to
do and who to contact if an intrusion is detected). Security procedures can be communicated to users
and administrators in instructor-led and self-paced training classes.
7. Maintaining Security
Security must be maintained by scheduling periodic independent audits, reading audit logs, responding
to incidents, reading current literature and agency alerts, performing security testing, training security
administrators, and updating the security plan and policy. Network security should be a perpetual
process. Risks change over time, and so should security. Cisco security experts use the term security
wheel to illustrate that implementing, monitoring, testing, and improving security is a never-ending
process. Many overworked security engineers might relate to the wheel concept. Continually updating
security mechanisms to keep up with the latest attacks can sometimes make an administrator feel a bit
like a hamster on a training wheel.
 System Auditing
After you have established the protection mechanisms on your system, you will need to monitor them. You
want to be sure that your protection mechanisms actually work. You will also want to observe any
indications of misbehaviour or other problems. This process of monitoring the behaviour of the system is
known as auditing.
Various operating systems maintain a number of log files that keep track of what has been happening to the
computer. Log files are an important building block of a secure system: they form a recorded history, or
audit trail, of your computer's past, making it easier to track down intermittent problems or attacks. By using log files, you may be able to piece together enough information to discover the cause of a bug, the source of a break-in, and the scope of the damage involved. In cases where you cannot stop damage from occurring, at least you will have some record of it. Those logs could be exactly what you need to rebuild your system, conduct an investigation, give testimony, recover insurance money, or get accurate field service performed.
Log files also have a fundamental vulnerability: because they are often recorded on the system itself, they
are subject to alteration or deletion.
Events to Audit
Careful consideration should be given to which events to audit, because auditing has a performance cost: if all events on a system are audited, performance will degrade substantially. Choose the events to audit based on what you actually need to track. Operating systems can audit a variety of events (a brief log-inspection example follows the list):
o Logon and logoff information
o System shutdown and restart information
o File and folder access
o Password changes
o Object access
o Policy changes
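As a small example of inspecting such records on a Unix-like system (log locations vary by distribution, so treat this as a sketch):

    last                                       # recent logon, logoff, and reboot history from the wtmp log
    lastb                                      # failed login attempts from the btmp log (usually requires root)
    grep 'Failed password' /var/log/auth.log   # failed SSH logins on Debian/Ubuntu-style systems

On Windows, the corresponding records appear in the Security log of the Event Viewer.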

Most audit logs are able to keep a history or backlog of events. Log files can be set up in various ways.
Some of these ways include:
o Setting the log file to a certain size and overwriting the oldest events as needed when the file fills up (first in, first out).
o Retaining events for a certain number of days before they are overwritten.
o Setting the log file to a specified size; once the file fills up, it must be cleared manually.
Technologies to Keep the System Running in the Event of a Failure
Computers are not failure proof; you can only make them more failure resistant. Faulty hardware, attackers, natural disasters, power failures, and errors from users can corrupt, damage, or delete data from a system. In the event that any of these threats does occur, a disaster recovery plan needs to be in place.
To prevent these disasters from becoming a financial burden on the organization, you should develop plans
for the recovery and restoration of data.
Other components and procedures could be included also; this is just a guideline on how to start going about
setting up a disaster recovery plan. One important step to take is to always try to test what plans you have
implemented. Most administrators know that it takes money, equipment, and time to test recovery
procedures. If plans and procedures are structured and tested correctly, recovery will become easier.
Spares
It's always a good idea to have spares readily available in case of emergency. This includes both hardware
and software spares. The following is a basic inventory that lists the hardware and software components that
should be stored as emergency spares:
o Motherboards, CPUs, memory modules, video cards and screens, and power supplies
o Hard drives, floppy drives, tape drives, CD ROM readers, etc.
o Network interface cards and modems
o Network cables, hubs, switches, bridges, routers, and other networking hardware
o Original copies of currently installed software and service packs
o Original copies of currently installed operating systems and service packs
o Any additional hardware cards like serial cards and printer port cards
o Any peripheral components like printers, scanners, and multimedia devices
Once you decide which hardware and software components to have spares of, general maintenance and
record keeping will help you discover impending errors. Many organizations keep a configuration
management database or record book for each critical system. Configuration databases help to track when
patches and changes are made to a system, or hardware or software changes. Included in the database
should be general system information such as:
o Hardware configuration

o Software configuration including operating system versions, service packs applied, software packages
installed, and disk configurations such as partition information
o Network configuration such as network cards, protocols, and any physical and logical addresses
Errors and failures should also be logged in the database. This creates a history in which recurring patterns and events often become apparent.
Maintenance schedules should be set up to check general systems. Audit logs and general system and
application logs should be checked on a regular basis. If possible, run defragmentation utilities on disks and
partitions where general data is stored.
Develop an Incident Response Team
Develop an incident response team to help control and recover systems in the event of a disaster. The
incident response team should document:
o Notification plan of who to contact for which kinds of problems or emergencies, and how to notify
them
o Contact information for administrators that need to be notified
o Contact information for vendor and consultant support
o Management personnel that need to be notified
o Any other critical users
Fault Tolerance
To minimize the loss of data and allow for the continuity of operations, you can use technologies such as
Redundant Array of Independent Disks (RAID) and Microsoft Cluster Technology. In this section we are going to
concentrate on RAID technologies. RAID is a fault tolerant disk configuration in which part of the physical
storage capacity contains redundant information about data stored on the disks. Redundant information that
is stored on the disks helps to keep the system running in the event of a single disk failure.
RAID technology is either implemented through software or hardware systems. Hardware implementations
of RAID are more expensive than software, but faster. Some hardware implementations of RAID support
hot swapping of disks, which enables administrators to swap failed hard disks while the computer is
running.
There are various RAID techniques in use. The two most common are disk mirroring and disk striping with parity.
Disk Mirroring

In disk mirroring only two disks are used. When data is written to one disk, it is duplicated on the other disk, which can cause a slight loss in write performance. A variation of mirroring is disk duplexing, where each disk has its own controller. This helps to speed up write operations and provides redundancy in case a controller fails. Read operations on duplexed and mirrored sets are fast, since data can be read from either disk.
Advantages of using mirror sets are:
o Read operations are fast.
o Recovery from failure is rapid.
o In software implementations of mirror sets, the system and boot partitions can be mirrored.
Disadvantages of mirror sets are:
o There is a slight loss in performance during write operations.
o Only 50% of the total storage space can be used to store data. For example, with two 1 GB hard drives, one drive holds the data and the other holds the duplicate copy, so only 1 GB is usable.
o If you use software mirror sets, you will be required to create a fault tolerant boot disk.
Disk Striping with Parity
Stripes of equal size on each disk in the volume make up a stripe set. A stripe set with parity adds parity information to the stripe set configuration. Data is written in stripes across two or more hard drives, and parity information is written as well; the data and the parity that protects it are always placed on different disks.
This way, if one of the hard drives fails, the two remaining drives can recalculate the lost information using
the parity information from other disks. When the faulty hard drive is replaced, information can be
regenerated back onto a newly installed working hard drive by using the parity information. The minimum
number of hard drives involved in disk striping with parity is 3, and the maximum number is 32 hard drives.
A stripe set with parity works well when large databases are implemented on a system and read operations
are performed more often than write operations. This is because a stripe set with parity has excellent read
operations. Stripe sets with parity should be avoided in situations where applications require high-speed data collection from a process, or database applications where records are continually being updated, because performance degrades as the percentage of write operations increases.
Advantages of using stripe set with parity are:
o Read operations are faster than using a single disk drive. The more drives you put into the system the
faster the read operations.

o A stripe set with parity uses the equivalent of only one disk's capacity for parity information, so the more disks you add, the greater the proportion of space available for data.
o There is not a lot of administrative effort in replacing a faulty disk.
Disadvantages are:
o In software implementations of stripe sets with parity, neither the boot nor the system partition can be on the stripe set.
o Write operations are slower because of the parity information that needs to be generated.
o When a hard disk fails in the stripe set, the performance of the system degrades because the missing information has to be recalculated when requests for it occur.
o Stripe sets with parity consume more memory than mirror sets because of the parity information that
needs to be generated.
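As a rough worked example of the capacity trade-off (assuming equal-sized disks): a mirror set of two 1 TB disks yields 1 TB of usable space (50%), while a stripe set with parity across five 1 TB disks yields (5 - 1) x 1 TB = 4 TB of usable space (80%), at the cost of the slower writes described above.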
Cluster Server Technology
Certain organizations would like to keep computer systems operational continuously, 24 hours a day, 7 days
a week, 365 days a year. One way to do this is by implementing cluster server technology. A cluster is an
interconnected group of servers that act as a single unit in sharing some resource or responsibility. Cluster
server technology allows users to view a group of clustered computers as one entity. Both cables and cluster
server software connect the computers into a cluster. Microsoft Windows 2000 Advanced Server has cluster software readily available that will allow you to manage clustering. Microsoft Cluster Service and
network load balancing offer availability and scalability to organizations that build applications using a
multi-tier model. Cluster server technology allows features such as:
o Fault tolerance. In the event of a computer or node failure in a cluster, the other computers keep
running. Fault-tolerant systems employ redundant hardware and operating systems that work together at
every level in exact synchronization across two server units.
o High availability. This focuses on maximizing uptime by implementing automated response to failure
and failover systems. To enhance availability, you add on more servers and backup systems to the
cluster in order to take over responsibility in the event of a failure. The servers need to keep monitoring
each other's activities, and must maintain consistency every few milliseconds. This is usually
implemented by a high-speed interconnect directly between the servers.
o Resource sharing. Resource sharing involves making server components, such as disk storage and
printers, available across all the nodes in the cluster. This is especially important for database servers,
which need to share large volumes between machines while maintaining consistency of data.

o Load sharing. Load sharing involves balancing application processing across the various nodes in the
cluster. This can be implemented by distributing new logins to different servers, based on their load at
the moment. It could also involve directly moving a running application from one server to another.
o High throughput. High throughput focuses on the ability to process network requests or packets
quickly. This becomes most important in applications like Web or FTP servers, whose primary job is to
push out data. This kind of clustering focuses on improving the network interfaces and the routing of
network requests to servers. It can be built into the cluster nodes themselves, or may be a property of an
external balancing device.
Standby Servers
It is possible to set up a standby server in case the production server fails. The standby server should mirror
the production server. You can use the standby server to replace the production server in the event of a
failure or as a read-only server.
Create the standby server by loading the same operating system and applications as on the production
server. Make backups of the data on the production server and restore these backups on the standby server.
This also helps to verify backups that are performed. The standby server will have a different IP address and
name if it is connected to the network. You will have to change the IP address and name of the standby
server if the production server fails and the standby server needs to become the production server.
To maintain the standby server, regular backups and restorations need to be performed. For example, suppose you make a full backup on Mondays and incremental backups on each of the remaining days of the week. You would restore the full backup on the standby server and then each subsequent incremental backup on the day it is made.
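As a sketch of that full-plus-incremental schedule using GNU tar on a Unix-like host (the paths are placeholders):

    rm -f /backup/data.snar                                                    # Monday: start with a fresh snapshot file
    tar -czf /backup/full-monday.tar.gz -g /backup/data.snar /srv/data         # Monday: full backup
    tar -czf /backup/incr-$(date +%a).tar.gz -g /backup/data.snar /srv/data    # other days: only changed files

Restoring onto the standby server then means extracting the full archive first, followed by each incremental archive in order.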
4.2. Security standards and Levels (ISO 15408 standard)
With the rise of security breaches and the running of technology at its highest gear on the information
superhighway, protection of confidential and vital information never has been more crucial.
The security assurance that users require can come from various methods: relying on the word of the manufacturer or service provider, testing the system themselves, or relying on an impartial assessment by an independent body (evaluation). Therefore, the evaluation criteria can be a yardstick for users to assess
systems or products, a guarantee for manufacturers of secure systems or products and a basis for specifying
security requirements.
The Common Criteria (CC) was developed to facilitate consistent evaluations of security products and
systems. It is an international effort to define an IT Security evaluation methodology, which would receive
mutual recognition between customers and vendors throughout the global economy. The theory behind the CC is that it will advance the state of security by encouraging various parties to write Protection Profiles outlining their needs and desires, which in turn will push vendors to meet the resulting Protection Profiles. The
theory proposes that, as users profile desired capabilities that are not currently available, the vendors will
attempt to gain market share by taking up the challenge.
The CC permits comparability between the results of independent security evaluations. The CC does so by
providing a common set of requirements for the security functionality of IT products and for assurance
measures applied to these IT products during a security evaluation. These IT products may be implemented
in hardware, firmware or software.
Key terminology and concepts
 Protection Profile (PP): an implementation-independent statement of security requirements for a category of TOEs (targets of evaluation) that meet specific customer needs in a specified security environment.
 Security Target (ST): the basis against which an evaluation is performed. It contains the TOE security threats, objectives, requirements, and a summary specification of security functions and assurance measures.
 Package: an intermediate combination of security requirement components. A package permits the expression of a set of either functional or assurance requirements that meet some particular need, expressed as a set of security objectives.
 Target of Evaluation (TOE): an IT product or system to be evaluated, the security characteristics of which are described in specific terms by a corresponding ST, or in more general terms by a PP. In CC philosophy, it is important that a product or system be evaluated against the specific set of criteria expressed in the ST.
This evaluation consists of rigorous analysis and testing performed by an accredited, independent
laboratory. The scope of a TOE evaluation is set by the EAL and other requirements specified in the ST.
CC Building Blocks
 Security Functional Requirements
Security functional requirements are grouped into classes. Classes are the most general grouping of security
requirements, and all members of a class share a common focus. There are 11 functionality classes within
CC. These are as follows:
Audit, Identification and Authentication, Resource Utilizations, Cryptographic support, Security
management, TOE Access, Communications, Privacy, Trusted Path/Channels, User
Data Protection, Protection of the TOE Security Functions.

Each of these classes contains a number of families. The requirements within each family share security
objectives, but differ in emphasis or rigor.
 Security Assurance Requirements
Security assurance requirements are grouped into classes. Classes are the most general grouping of security
requirements, and all members of a class share a common focus. There are 8 assurance classes within CC.
These are as follows:
Configuration management, Guidance documents, Vulnerability assessment, Delivery and operation, Life
cycle support, Assurance maintenance, Development, and Test.
The CC has provided 7 predefined assurance packages known as Evaluation Assurance Levels (EALs).
These are:
1. EAL1: Functionally Tested. This level is applicable where the threat to security is not serious but some confidence in correct operation is required. In the evaluation, there is no assistance from the TOE developer. The requirements are: Configuration Management, Delivery and Operation, Development, Guidance Documents, and Tests.
2. EAL2: Structurally Tested. This assurance level is applicable where a low to moderate level of independently assured security is required. It requires some cooperation from the developer, but no more than good commercial vendor practice. Added to the previous requirements are developer testing, vulnerability analysis, and more extensive independent testing.
3. EAL3: Methodically Tested and Checked. It is applicable where a moderate level of independently assured security is required, and cooperation from the developer is required. It places additional requirements on testing, development environment controls, and configuration management. The additional requirement class is Life Cycle Support.
4. EAL4: Methodically Designed, Tested, and Reviewed. This is applicable where a moderate to high level of independently assured security is required. It ensures that some security engineering is added to commercial development practices. This is currently the highest level likely to be achievable when retrofitting an existing product. There are additional requirements on design, implementation, vulnerability analysis, development, and configuration management.
5. EAL5: Semiformally Designed and Tested. It is applicable where a high level of independently assured security is required. It requires rigorous commercial development practices and moderate use of specialist engineering techniques, with additional requirements on specification, design, and their correspondence.

6. EAL6: Semiformally Verified Design and Tested. This evaluation level is applicable where assets are valuable and risks are high, and it requires a rigorous development environment. The additional requirements are on analysis, design, development, configuration management, and vulnerability/covert channel analysis.
7. EAL7: Formally Verified Design and Tested. This is applicable where assets are highly valuable and risks are extremely high. Practical use is limited to security functionality that is tightly focused and amenable to formal analysis. The assurance is gained through the application of formal methods.
4.3. Password Security
Passwords are the first line of defense against cyber criminals. Hackers have different ways of attempting
to gain access to accounts, but the ability to avoid attack is highly dependent on the strength of the
password. Each password should possess the following elements.
1. Strong password/pass phrases
In the past, users were often told to create and remember passwords that looked something like this: 1Wn$kf*r@!! The days of solely focusing on password complexity are over. Experts agree that using a pass phrase, or a series of random dictionary words, is a smarter way to approach password strength. The following tips will help in creating a strong, memorable password or phrase (a small command-line example follows the list):
 Longer is stronger. The best passwords are at least 10 characters in length, including some capitalization.
 Use a phrase. Pass phrases are easy to remember, but difficult to guess. If the service allows it, use spaces and special characters for added strength. This also makes the phrase easier to type.
 Misspell a word or two. Make a note of what was misspelled until typing the pass phrase becomes a habit (usually within a few days). A good password is one that is easy to remember, but difficult to guess.
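For instance, on a Unix-like system a random password or the raw material for a pass phrase can be generated at the command line (the dictionary file path varies by distribution):

    openssl rand -base64 15          # a roughly 20-character random password
    shuf -n 4 /usr/share/dict/words  # four random dictionary words to build a pass phrase from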
2. Protect password
 Never share the password with others.
 Change the password periodically; if the password has been compromised, change it immediately.
 Don't enable the "remember password" function on websites.
 Use a unique password for each important account. Choosing the same password for every online account is like using the same key to lock the home, car, and office: if a criminal gains access to one, all of them are compromised. It may be less convenient, but using multiple passwords keeps you safer.

3. Use password manager


Having to memorize numerous long, complex passwords can be difficult. Password managers are great tools that can generate and store passwords.
4. Enable two-factor authentication
Any time a service like Gmail, Facebook, or the bank offers two-step or two-factor authentication, use it. Passwords are no longer enough protection, especially for sensitive information. Two-factor authentication puts an extra layer of security on accounts by requiring something you know (e.g. a password) and something you have (e.g. a cell phone receiving a text message).
4.4. Access Control and Monitoring: Wrappers
4.4.1. Access Control List
Many types of equipment or hosts can be configured with access lists. These lists define the hostnames
or IP addresses that are permitted to access the device in question. It is typical, for instance, to
restrict access to network equipment to addresses inside an organization's network. This then
protects against any type of access that might breach an external firewall. These types of access lists
serve as an important last line of defence and can be quite powerful on some devices, with different rules for
different access protocols.
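A device-side access list can be thought of as a simple membership test on the source address. The following is a minimal Python sketch of that idea, assuming two illustrative internal subnets; real equipment applies comparable logic in its own configuration syntax.

import ipaddress

# Illustrative access list: management access allowed only from these subnets.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.10.0/24"),
]

def is_permitted(source_ip: str) -> bool:
    """Return True if the source address falls inside any allowed network."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_permitted("192.168.10.25"))   # True  -- inside the organization
print(is_permitted("203.0.113.9"))     # False -- external address is rejected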
o Securing Access to Devices and Systems
Since data networks cannot always be assumed to be protected from the possibility of intrusion or
data "sniffing", protocols have been created to increase the security of attached network devices. In
general, there are two separate issues to be concerned about: authentication and non-disclosure
(encryption). There are a variety of schemes and protocols to address these two requirements in
secure systems and communication. The basics of authentication are discussed first, and then
encryption.
 User authentication for network devices
Authentication is necessary when one wants to control access to network elements, in particular
network infrastructure devices. Authentication has two sub-concerns: general access authentication
and functional authorization. General access is the means to control whether or not a particular user
has ANY access right to the element in question; usually we consider this in the form of a
"user account". Authorization is concerned with individual user "rights": what, for example, can a
user do once authenticated? Can they configure the device, or only view data? Table 1 is a summary of
the major authentication protocols, their features, and their relevant applications.
Restricting access to devices is one of the most important aspects of securing a network. Since
infrastructure devices support both the network and the computing equipment attached to it,
compromising them can potentially bring down an entire network and its resources. Paradoxically,
many IT departments go to great pains to protect servers, institute firewalls and secure access
mechanisms, but leave some basic devices with only rudimentary security. At a minimum, all devices
should have username/password authentication with non-trivial passwords (at least 10 characters, mixed
letters, numbers and symbols). Users should be restricted in both number and type of authorization.
Care should be taken when using remote access methods that are not secure, i.e. where usernames and
passwords are passed in the clear over the network. Passwords should also be changed with some
reasonable frequency, perhaps every three months, and whenever employees leave if group passwords
are used.
 Centralized authentication methods
Appropriate authentication methods are important at a minimum; however, centralized
authentication is even better when either (a) large numbers of users need access to devices or
(b) large numbers of devices are in the network. Traditionally, centralized authentication
was used to solve problems found in situation (a); the most common case was remote network access. In
remote access systems such as dial-up RAS, the administration of users on the RAS network units
themselves was just not possible. Potentially any user of the network could attempt to use any of the
existing RAS access points. Placing all user information in all RAS units and then keeping that
information up-to-date would exceed the abilities of RAS units in any large enterprise of users and
be an administrative nightmare. Centralized authentication systems such as RADIUS and Kerberos
solve this problem by using centralized user account information that the RAS units, or other types
of equipment, can access securely. These centralized schemes allow information to be stored in one
place instead of many places. Instead of having to manage users on many devices, one location of
user management can be used. If user information needs to be changed, such as a new password, one
simple task can accomplish this. If a user leaves, the deletion of the user account prevents access for
all equipment using centralized authentication. A typical problem with non-centralized
authentication in larger networks is remembering to delete accounts in all places. Centralized
authentication systems such as RADIUS can usually be seamlessly integrated with other user
account management schemes such as Microsoft's Active Directory or LDAP directories. While
these two directory systems are not themselves authentication systems, they are used as centralized
account storage mechanisms. Most RADIUS servers can communicate with RAS or other network
devices in the normal RADIUS protocol and then securely access account information stored in the
directories. This is exactly what Microsoft's IAS server does to bridge RADIUS and Active
Directory. This approach means that not only is centralized authentication being provided for the
users of RAS and devices, but also the account information is unified with the Microsoft domain
accounts. Figure 1 shows a Windows Domain controller operating as both an Active Directory
server and a RADIUS server for network elements to authenticate into an Active Directory domain.

Figure 1:- Windows Domain Controller
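The benefit of a central account store can be illustrated with a small conceptual sketch: many devices delegate credential checks to a single directory-like service, so changing or deleting an account in one place takes effect everywhere. The class and method names below are invented for illustration and do not model the RADIUS protocol itself.

import hashlib, hmac, secrets

class CentralDirectory:
    """Single place where user credentials are stored and verified."""
    def __init__(self):
        self._users = {}                       # username -> (salt, hash)

    def set_password(self, user: str, password: str) -> None:
        salt = secrets.token_bytes(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[user] = (salt, digest)

    def delete_user(self, user: str) -> None:
        self._users.pop(user, None)            # one deletion affects every device

    def verify(self, user: str, password: str) -> bool:
        if user not in self._users:
            return False
        salt, digest = self._users[user]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

class NetworkDevice:
    """A RAS unit, switch or UPS that delegates authentication to the directory."""
    def __init__(self, name: str, directory: CentralDirectory):
        self.name, self.directory = name, directory

    def login(self, user: str, password: str) -> bool:
        return self.directory.verify(user, password)

directory = CentralDirectory()
directory.set_password("alice", "S3cure pass phrase!")
devices = [NetworkDevice(n, directory) for n in ("ras1", "switch7", "ups3")]
print([d.login("alice", "S3cure pass phrase!") for d in devices])  # [True, True, True]
directory.delete_user("alice")          # removing the account locks out all devices
print([d.login("alice", "S3cure pass phrase!") for d in devices])  # [False, False, False]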


 Securing network data with encryption and authentication
In some cases it is important to be concerned about disclosing information that is exchanged
between network elements, computers or systems. Certainly it is not desirable that someone could
access a bank account that is not theirs or capture personal information that may be transmitted over
a network. When one wishes to avoid data disclosure over a network, encryption methods must be
employed that make the transmitted data unreadable to someone who might somehow capture the
data as it traverses a network.
There are many methods to "encrypt" data, and some of the major methods are described here. With
respect to network devices such as UPS systems, the concern is not traditionally about the value of
protecting data such as UPS voltages and power strip currents; however, there is a concern with
controlling access to these elements.
The non-disclosure of authentication credentials such as usernames and passwords is critical in any
system where access is done over non-secure networks, the Internet for example. Even within
organizations' private networks, protection of these credentials is a best practice. While it is less
common, many organizations are starting to implement policies that ALL management traffic be
secure (encrypted), not just authentication credentials. In either case, some form of cryptographic
methods must be employed.
Encryption of data is usually accomplished by combining plaintext data (the input) with a
secret key using a particular encryption algorithm (e.g. 3DES, AES). The result (output) is
ciphertext. Unless someone (or a computer) has the secret key, they cannot convert the ciphertext
back to plaintext. This basic methodology is at the core of all of the secure protocols. Another basic
building block of cryptographic systems is the "hash". Hash methods take some plaintext input, and
perhaps a key input, and then compute a large number called a hash. This number is of fixed length
(in bits) regardless of the size of the input. Unlike encryption methods, which are reversible in that one
can go back to plaintext with the key, hashes are one-way: it is not mathematically feasible to go from a
hash back to the plaintext. Hashes are used as special IDs in various protocol systems because they can
provide a check mechanism on data, similar to a CRC (cyclic redundancy check) on a disk file, to detect
data alteration. Hashes are thus used as a data authentication method (different from user
authentication): anyone trying to secretly alter data in transit across a network will change the hash
value and thus be detected. Table 2 provides a basic comparison of cryptographic algorithms and their uses.
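The relationship between plaintext, key and ciphertext, and the one-way nature of a hash, can be seen in the short sketch below. It uses SHA-256 and HMAC from Python's standard library and, assuming the third-party cryptography package is installed, its Fernet recipe (AES-based) for reversible encryption; the plaintext and shared secret are invented examples.

import hashlib, hmac
from cryptography.fernet import Fernet   # third-party 'cryptography' package

plaintext = b"shutdown UPS outlet 3 at 22:00"

# One-way hash: fixed-length output that cannot be reversed to the plaintext.
print("SHA-256:", hashlib.sha256(plaintext).hexdigest())

# Keyed hash (HMAC): used to detect alteration of data in transit.
print("HMAC:", hmac.new(b"shared-secret", plaintext, hashlib.sha256).hexdigest())

# Reversible encryption: anyone holding the secret key can recover the plaintext.
key = Fernet.generate_key()
cipher = Fernet(key)
ciphertext = cipher.encrypt(plaintext)
print("Ciphertext:", ciphertext[:40], "...")
print("Decrypted:", cipher.decrypt(ciphertext))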
Secure Access Protocols
There are a variety of protocols such as SSH and SSL that employ various cryptographic
mechanisms to provide security through authentication and encryption methods. The level of
security provided is dependent upon many things such as the cryptographic methods used, the access
to the transmitted data, algorithm key lengths, server and client implementations and most
importantly, the human factor. The most ingenious crypto scheme is thwarted if a user's access
credential, such as a password or certificate, is obtained by a third party. The classic case mentioned
earlier is the password on a Post-it note on a person's monitor.
 The SSH protocol
The Secure Shell (SSH) client-server protocol was developed in the mid-1990s in order to provide a
secure mechanism for accessing computer consoles or shells remotely over unprotected or "non-secure"
networks.
The protocol provides "secure" methods by addressing user and server authentication and full
encryption of all traffic exchanged between the client and server. The protocol has two versions, V1
and V2, which differ slightly in the cryptographic mechanisms provided. Additionally, V2 is
superior in its ability to protect against certain types of "attacks". (An attempt by a non-participating
third party to intercept, forge or otherwise alter exchanged data is considered an attack.)
While SSH has been used as a secure access protocol to computer consoles for years, it has
traditionally been less employed in secondary infrastructure equipment such as UPS and HVAC
equipment. However, since networks and the infrastructure that supports them are becoming
more and more critical to the business practices of enterprises, using such a secure access method for
all equipment is becoming more common.
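As a sketch of scripted, encrypted console access, the example below uses the third-party paramiko SSH library. The host name, account, key file and the "show status" command are placeholders for whatever a real device would accept.

import paramiko  # third-party SSH library: 'pip install paramiko'

# Placeholder values -- substitute the real device address and credentials.
HOST, USER, KEYFILE = "ups3.example.net", "admin", "/home/admin/.ssh/id_ed25519"

client = paramiko.SSHClient()
client.load_system_host_keys()                        # verify the server's host key
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown hosts
client.connect(HOST, username=USER, key_filename=KEYFILE)

# Everything below travels encrypted inside the SSH session.
stdin, stdout, stderr = client.exec_command("show status")
print(stdout.read().decode())
client.close()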
 The SSL/TLS protocol
While SSH has been the common secure protocol for console access and command-line
management, the Secure Socket Layer (SSL) and, later, the Transport Layer Security (TLS) protocol
have become the standard method of securing web traffic and other protocols such as SMTP (mail).
TLS is the most recent version of SSL, and the term SSL is still commonly used interchangeably with
TLS. SSL and SSH differ mostly with respect to the client and server authentication mechanisms
built into the protocols. TLS was also accepted as an IETF (Internet Engineering Task Force)
standard, while SSH never became a full IETF standard even though it is very widely deployed as a
draft standard. SSL is the secure protocol that protects http web traffic, also referred to as https for
"http secure". Both Netscape and Internet Explorer support both SSL and TLS. When these
protocols are used, a formal authentication of the server is made to the client in the form of a server
certificate. Certificates are described subsequently. The client can also be authenticated with
certificates, though usernames and passwords are most typically used. Because the SSL "sessions"
are entirely encrypted, the authentication information and any data on web pages are secure. SSL is always
used on web sites that wish to be secure for banking and other commercial purposes, since clients
usually access these sites over the public Internet. Since web-based management of network devices
(embedded web servers) has become the most common method of basic configuration and user
access, protecting this management method is very important.
Enterprises that wish to have all network management done securely, but still take advantage of
graphical interfaces such as the web, should use SSL-based systems. As mentioned before, SSL can also
protect other, non-http communication. Should non-http based device clients be used, these systems
should also employ SSL for their access protocols to ensure security. Using SSL in all of these cases
also has the advantage of using standard protocols with common authentication and encryption
schemes.
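The server-authentication step of SSL/TLS can be seen directly with Python's standard ssl module. The sketch below opens a TLS connection, verifies the server certificate against the system's trusted CAs and only then exchanges (encrypted) HTTP data; the host name is an example.

import socket
import ssl

HOST = "www.example.com"   # example host; any HTTPS-enabled server would do

context = ssl.create_default_context()   # verifies certificates against system CAs
with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version())          # e.g. 'TLSv1.3'
        print("Server certificate subject:", tls_sock.getpeercert()["subject"])
        # From here, HTTP requests and responses are encrypted end to end.
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))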
4.5. Firewalls
Many organizations have connected or want to connect their private LANs to the Internet so that their
users can have convenient access to Internet services. Since the Internet as a whole is not trustworthy, their
private systems are vulnerable to misuse and attack. A firewall is a safeguard that one can use to control
access between a trusted network and a less trusted one. A firewall is not a single component; it is a
strategy for protecting an organization's Internet-reachable resources. A firewall serves as the gatekeeper
between the untrustworthy Internet and the more trustworthy internal networks.
The main function of a firewall is to centralize access control. If outsiders or remote users can access the
internal networks without going through the firewall, its effectiveness is diluted. For example, if a
traveling manager has a modem connected to his or her office computer that he or she can dial into while
traveling, and that computer is also on the protected internal network, an attacker who can dial into that
computer has circumvented the firewall. If a user has a dial-up Internet account with a commercial ISP,
and sometimes connects to the Internet from his or her office computer via modem, he or she is opening an
unsecured connection to the Internet that circumvents the firewall. Firewalls provide several types of
protection:
 They can block unwanted traffic.
 They can direct incoming traffic to more trustworthy internal systems.
 They hide vulnerable systems that cannot easily be secured from the Internet.
 They can log traffic to and from the private network.
 They can hide information such as system names, network topology, network device types, and
internal user IDs from the Internet.
 They can provide more robust authentication than standard applications might be able to do.
As with any safeguard, there are trade-offs between convenience and security. Transparency is the
visibility of the firewall to both inside users and outsiders going through the firewall. A firewall is
transparent to users if they do not notice or stop at the firewall in order to access a network.
Firewalls are typically configured to be transparent to internal network users (while going outside
the firewall); on the other hand, firewalls are configured to be non-transparent for outside traffic
coming in through the firewall. This generally provides the highest level of security without placing an
undue burden on internal users. Types of firewalls include packet filtering gateways, application
gateways, and hybrid or complex gateways.
4.5.1. Packet Filtering Gateways
Packet filtering firewalls use routers with packet filtering rules to grant or deny access based on
source address, destination address, and port. They offer minimal security but at a very low cost,
and can be an appropriate choice for a low-risk environment. They are fast, flexible, and transparent.
Filtering rules are often not easily maintained on a router, but there are tools available to simplify the
tasks of creating and maintaining the rules.
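A minimal sketch of how such a rule table might be evaluated is shown below: each rule matches on source network, destination address and destination port, the first match wins, and anything unmatched is denied. The addresses and the default-deny policy are illustrative assumptions.

import ipaddress

# Illustrative rule table: first matching rule decides; default is deny.
RULES = [
    # (source network, destination address, destination port, action)
    ("0.0.0.0/0",      "192.0.2.80", 80, "permit"),   # anyone may reach the web server
    ("203.0.113.0/24", "192.0.2.25", 25, "permit"),   # one partner network may send mail
]

def filter_packet(src: str, dst: str, dport: int) -> str:
    for net, rule_dst, rule_port, action in RULES:
        if (ipaddress.ip_address(src) in ipaddress.ip_network(net)
                and dst == rule_dst and dport == rule_port):
            return action
    return "deny"                                      # default-deny policy

print(filter_packet("198.51.100.7", "192.0.2.80", 80))   # permit
print(filter_packet("198.51.100.7", "192.0.2.25", 25))   # deny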
Filtering gateways do have inherent risks, including:
 The source and destination addresses and ports contained in the IP packet header are the only
information available to the router when deciding whether or not to permit traffic into an
internal network.
 They do not protect against IP or DNS address spoofing.
 An attacker will have direct access to any host on the internal network once access has been
granted by the firewall.
 Strong user authentication isn't supported by some packet filtering gateways.
 They provide little or no useful logging.
4.5.2. Application Gateways
An application gateway uses server programs (called proxies) that run on the firewall. These proxies
take external requests, examine them, and forward legitimate requests to the internal host that
provides the appropriate service. Application gateways can support functions such as user
authentication and logging.
Because an application gateway is considered the most secure type of firewall, this configuration
provides a number of advantages to the medium- to high-risk site:
 The firewall can be configured as the only host address that is visible to the outside network,
requiring all connections to and from the internal network to go through the firewall.
 The use of proxies for different services prevents direct access to services on the internal
network, protecting the enterprise against insecure or badly configured internal hosts.
 Strong user authentication can be enforced with application gateways.
 Proxies can provide detailed logging at the application level.
4.5.3. Hybrid or Complex Gateways
Hybrid gateways combine two or more of the above firewall types and implement them in series
rather than in parallel. If they are connected in series, then the overall security is enhanced; on the
other hand, if they are connected in parallel, then the network security perimeter will be only as
secure as the least secure of all methods used. In medium to high-risk environments, a hybrid
gateway may be the ideal firewall implementation.
Besides the basic physical security of a site, the next most important aspect is controlling digital
access into and out of the organization's network. In most cases this means controlling the points of
connectivity to the outside world, typically the Internet. Almost every medium and large-scale
company has a presence on the Internet and an organizational network connected to it. In fact,
there is a large increase in the number of smaller companies and homes getting full-time Internet
connectivity. Partitioning the boundary between the outside Internet and the internal intranet is a
critical piece of security. Sometimes the inside is referred to as the "trusted" side and the external
Internet as the "un-trusted" side; as a generality this is acceptable but, as will be described, it is
not specific enough.
A firewall is a mechanism by which a controlled barrier is used to control network traffic into AND
out of an organizational intranet. Firewalls are basically application-specific routers. They run on
dedicated embedded systems such as an internet appliance, or they can be software programs running
on a general server platform. In most cases these systems will have two network interfaces, one for
the external network such as the Internet and one for the internal intranet side. The firewall process
can tightly control what is allowed to traverse from one side to the other. Firewalls can range from
being fairly simple to very complex.
As with most aspects of security, deciding what type of firewall to use will depend upon factors such
as traffic levels, the services needing protection and the complexity of the rules required. The greater the
number of services that must be able to traverse the firewall, the more complex the requirement
becomes. The difficulty for firewalls is distinguishing between legitimate and illegitimate traffic.
What do firewalls protect against, and what protection do they not provide? Firewalls are like a lot of
things: if configured correctly, they can be a reasonable form of protection from external threats,
including some denial-of-service (DoS) attacks; if configured incorrectly, they can be major
security holes in an organization.
The most basic protection a firewall provides is the ability to block network traffic to certain
destinations. This includes both IP addresses and particular network service ports. A site that wishes
to provide external access to a web server can restrict all traffic to port 80 (the standard http port).
Usually this restriction will only be applied for traffic originating from the un-trusted side. Traffic
from the trusted side is not restricted. All other traffic such as mail traffic, FTP, SNMP, etc. would
not be allowed across the firewall and into the intranet. An example of a simple firewall is shown in
Figure 2.

Figure 2:- Simple Firewall to network


An even simpler case is the firewall often used by people with home or small-business cable or DSL
routers. Typically these firewalls are set up to restrict ALL external access and only allow services
originating from the inside. A careful reader might realize that in neither of these cases is the firewall
actually blocking all traffic from the outside. If that were the case, how could one surf the web and
retrieve web pages? What the firewall is doing is restricting connection requests from the outside. In
the first case, all connection requests from the inside are passed to the outside, as are all
subsequent data transfers on those connections. From the exterior, only a connection request to the web
server is allowed to complete and pass data; all others are blocked. The second case is more stringent,
as connections can only be made from the interior to the exterior.
More complex firewall rules can utilize what are called "stateful inspection" techniques. This
approach adds to basic port blocking by looking at traffic behaviours and sequences to
detect spoof attacks and denial-of-service attacks. The more complex the rules, the greater the
computing power the firewall requires.
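The idea behind stateful inspection can be sketched as a connection table: outbound requests from the trusted side create state, and inbound packets are accepted only if they belong to a connection already recorded in that table. The sketch below is a simplified illustration, not a production firewall.

# Simplified stateful-inspection sketch: track connections opened from inside.
established = set()          # {(inside_host, inside_port, outside_host, outside_port)}

def outbound(inside, iport, outside, oport):
    """An inside host opens a connection; remember it as established state."""
    established.add((inside, iport, outside, oport))
    return "permit"

def inbound(outside, oport, inside, iport):
    """Only allow inbound packets that match an existing outbound connection."""
    if (inside, iport, outside, oport) in established:
        return "permit"      # reply traffic for a connection we initiated
    return "deny"            # unsolicited connection attempt from the outside

outbound("10.0.0.5", 51000, "198.51.100.10", 443)
print(inbound("198.51.100.10", 443, "10.0.0.5", 51000))  # permit (reply traffic)
print(inbound("198.51.100.99", 443, "10.0.0.5", 51000))  # deny (never requested)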
One problem most organizations face is how to enable legitimate access to "public" services such as
web, ftp and e-mail while maintaining tight security of the intranet. The typical approach is to form
what is known as a DMZ (demilitarized zone), a euphemism from the cold war applied to the
network. In this architecture there are two firewalls: one between the external network and the DMZ,
and another between the DMZ and the internal network. All public servers are placed in the DMZ.
With this setup, it is possible to have firewall rules that allow public access to the public servers
while the interior firewall restricts all incoming connections. By having the DMZ, the public servers
are still provided more protection than if they were simply placed outside a single-firewall site. Figure 3
illustrates the use of a DMZ.

Figure 3:- Dual Firewalls with DMZ
Using internal firewalls at various intranet boundaries can also help limit damage from internal
threats and from things like worms that have managed to traverse the border firewalls. These can even be
run in standby, so that normal traffic patterns are not blocked but tight rules can be turned on in a
problem situation.
Workstation firewalls
There is an important network security factor that most people are only now becoming aware of:
EVERY node or workstation on a network is a potential security hole. In the past, attention was paid
mainly to firewalls and servers; however, with the advent of the web and the proliferation of new
classes of nodes such as internet appliances, there are several more dimensions to protecting
networks. Varieties of worm programs hijack computers and use them both to spread themselves
further and, sometimes, to harm systems. Many of these worms would be stopped or greatly hindered
if organizations had their internal systems more "locked down". Workstation firewall products can block
all port accesses into and out of individual hosts that are not part of the normal needs of the host.
Additionally, firewall rules on the INTERNAL side that block suspicious connections out of the
organization can help prevent worms spreading back out of the organization. Between the two, both
internal and external replication can be reduced. For the most part, all systems should be able to block
all ports that are not required for use.
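A practical first step toward blocking unneeded ports is knowing what a workstation is actually listening on. The sketch below uses the third-party psutil package to list listening TCP ports and their owning processes (administrator rights may be needed to see every process); which of those ports count as "required" is, of course, site-specific.

import psutil   # third-party package: 'pip install psutil'

# List every TCP port this workstation is listening on, with the owning process.
for conn in psutil.net_connections(kind="tcp"):
    if conn.status == psutil.CONN_LISTEN:
        proc = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"port {conn.laddr.port:5d}  process {proc}")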