
Chapter 2: Operating System

2.1 Introduction
An operating system (OS) is a program that manages the computer hardware. It also provides a basis for
application programs and acts as an intermediary between the computer user and the computer
hardware. An amazing aspect of operating systems is how varied they are in accomplishing these tasks.
Mainframe operating systems are designed primarily to optimize utilization of hardware. Personal
computer (PC) operating systems support complex games, business applications, and everything in
between. Operating systems for handheld computers are designed to provide an environment in which
a user can easily interface with the computer to execute programs. Thus, some operating systems are
designed to be convenient, others to be efficient, and others some combination of the two.

Because an operating system is large and complex, it must be created piece by piece. Each of these
pieces should be a well-delineated portion of the system, with carefully defined inputs, outputs, and
functions. In this chapter, a general overview of the major components of an operating system will be
discussed.

Objective of an operating system

A computer system can be divided roughly into four components: the hardware, the operating system, the application programs, and the users.

The hardware (the central processing unit (CPU), the memory, and the input/output (I/O) devices) provides the basic computing resources for the system. The application programs, such as word processors, spreadsheets, compilers, and Web browsers, define the ways in which these resources are used to solve users' computing problems. The operating system controls the hardware and coordinates its use among the various application programs for the various users.

We can also view a computer system as consisting of hardware, software and data. The operating
system provides the means for proper use of these resources in the operation of the computer system.
An operating system is similar to a government. Like a government, it performs no useful function by
itself. It simply provides an environment within which other programs can do useful work.

To understand the operating system's role more fully, we next explore operating systems from two
viewpoints: that of the user and that of the system.

User View

The user's view of the computer varies according to the interface being used. Most computer users sit in
front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a system is designed for one user to monopolize its resources. The goal is to maximize the work (or play) that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance and none paid to resource allocation, i.e., how various hardware and software resources are shared. Performance is, of course, important to the user; but such systems are optimized for the single-user experience rather than the requirements of multiple users.

In other cases, a user sits at a terminal connected to a mainframe or a minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. The operating system in such cases is designed to maximize resource utilization, to assure that all available CPU time, memory, and I/O are used efficiently and that no individual user takes more than her fair share.

In still other cases, users sit at workstations connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers (file, compute, and print servers). Therefore, their operating system is designed to compromise between individual usability and resource utilization.

Recently, many varieties of handheld computers have come into fashion. Most of these devices are
standalone units for individual users. Some are connected to networks, either directly by wire or (more
often) through wireless modems and networking. Because of power, speed, and interface limitations,
they perform relatively few remote operations. Their operating systems are designed mostly for
individual usability, but performance per unit of battery life is important as well.

Some computers have little or no user view. For example, embedded computers in home devices and
automobiles may have numeric keypads and may turn indicator lights on or off to show status, but they
and their operating systems are designed primarily to run without user intervention.

System View

From the computer's point of view, the operating system is the program most intimately involved with
the hardware. In this context, we can view an operating system as a resource allocator. A computer
system has many resources that may be required to solve a problem: CPU time, memory space, file-storage space, I/O devices, and so on. The operating system acts as the manager of these resources.
Facing numerous and possibly conflicting requests for resources, the operating system must decide how
to allocate them to specific programs and users so that it can operate the computer system efficiently
and fairly. As we have seen, resource allocation is especially important where many users access the
same mainframe or minicomputer.

A slightly different view of an operating system emphasizes the need to control the various I/O devices
and user programs. An operating system is a control program. A control program manages the execution
of user programs to prevent errors and improper use of the computer. It is especially concerned with
the operation and control of I/O devices.

Defining Operating Systems

We have looked at the operating system's role from the views of the user and of the system. How,
though, can we define what an operating system is? In general, we have no completely adequate
definition of an operating system. Operating systems exist because they offer a reasonable way to solve
the problem of creating a usable computing system. The fundamental goal of computer systems is to
execute user programs and to make solving user problems easier. Toward this goal, computer hardware
is constructed. Since bare hardware alone is not particularly easy to use, application programs are
developed. These programs require certain common operations, such as those controlling the I/O
devices. The common functions of controlling and allocating resources are then brought together into
one piece of software: the operating system.

In addition, we have no universally accepted definition of what is part of the operating system. A simple
viewpoint is that it includes everything a vendor ships when you order "the operating system." The
features included, however, vary greatly across systems. Some systems take up less than 1 megabyte of
space and lack even a full-screen editor, whereas others require gigabytes of space and are entirely
based on graphical windowing systems. A more common definition, and the one that we usually follow,
is that the operating system is the one program running at all times on the computer, usually called the
kernel. (Along with the kernel, there are two other types of programs: system programs which are
associated with the operating system but are not part of the kernel, and application programs which
include all programs not associated with the operation of the system.)

Computer system organization

A modern general-purpose computer system consists of one or more CPUs and a number of device
controllers connected through a common bus that provides access to shared memory (Figure 1.2). Each
device controller is in charge of a specific type of device (for example, disk drives, audio devices, and
video displays). The CPU and the device controllers can execute concurrently, competing for memory
cycles. To ensure orderly access to the shared memory, a memory controller is provided whose function
is to synchronize access to the memory.

For a computer to start running (for instance, when it is powered up or rebooted), it needs to have an initial program to run. This initial program, or bootstrap program, tends to be simple. Typically, it is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM), known by the general term firmware, within the computer hardware. It initializes all aspects of the system, from CPU
registers to device controllers to memory contents. The bootstrap program must know how to load the
operating system and how to start executing that system. To accomplish this goal, the bootstrap
program must locate and load into memory the operating system kernel. The operating system then
starts executing the first process, such as "init," and waits for some event to occur.

The occurrence of an event is usually signaled by an interrupt from either the hardware or the software.
Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may trigger an interrupt by executing a special operation called a system call (also called a monitor call). When the CPU is interrupted, it stops what it is doing and immediately transfers execution to a fixed location. The fixed location usually contains the starting address where the service routine for the interrupt is located. The interrupt service routine executes; on completion, the CPU resumes the interrupted computation.

Interrupts are an important part of computer architecture. Each computer design has its own interrupt
mechanism, but several functions are common. The interrupt must transfer control to the appropriate
interrupt service routine.
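
To make the idea of a fixed location concrete, the illustrative C sketch below models an interrupt vector table as an array of function pointers; the vector numbers and the names timer_isr, keyboard_isr, and dispatch_interrupt are invented for the example and do not correspond to any particular hardware.

/* Illustrative sketch (not tied to any real hardware): an interrupt
 * vector table modeled as an array of function pointers. */
#include <stdio.h>

#define NUM_VECTORS 256

typedef void (*isr_t)(void);            /* type of an interrupt service routine */

static isr_t vector_table[NUM_VECTORS]; /* fixed locations holding ISR addresses */

static void timer_isr(void)    { printf("timer tick handled\n"); }
static void keyboard_isr(void) { printf("key press handled\n"); }

/* Called when the CPU is interrupted with a given vector number:
 * look up the fixed location and transfer control to the routine. */
static void dispatch_interrupt(int vector)
{
    if (vector >= 0 && vector < NUM_VECTORS && vector_table[vector] != NULL)
        vector_table[vector]();         /* run the interrupt service routine */
    /* on return, the interrupted computation would be resumed */
}

int main(void)
{
    vector_table[32] = timer_isr;       /* install handlers at fixed vectors */
    vector_table[33] = keyboard_isr;

    dispatch_interrupt(32);             /* simulate a timer interrupt    */
    dispatch_interrupt(33);             /* simulate a keyboard interrupt */
    return 0;
}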

2.2 Types of operating system

Single- and multi-tasking

A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing: the available processor time is divided between multiple processes, each of which is interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. UNIX-like operating systems such as Solaris and Linux, as well as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking. 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking.

Single- and multi-user

Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to
run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities
that identify processes and resources, such as disk space, belonging to multiple users, and the system
permits multiple users to interact with the system at the same time. Time-sharing operating systems
schedule tasks for efficient use of the system and may also include accounting software for cost
allocation of processor time, mass storage, printing, and other resources to multiple users.

Distributed

A distributed operating system manages a group of distinct computers and makes them appear to be a
single computer. The development of networked computers that could be linked and communicate with
each other gave rise to distributed computing. Distributed computations are carried out on more than
one machine. When computers in a group work in cooperation, they form a distributed system.

Templated

In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine
image as a guest operating system, then saving it as a tool for multiple running virtual machines. The
technique is used both in virtualization and cloud computing management, and is common in large
server warehouses.

Embedded

Embedded operating systems are designed to be used in embedded computer systems. They are
designed to operate on small machines like PDAs with less autonomy. They are able to operate with a
limited number of resources. They are very compact and extremely efficient by design. Windows CE and
Minix 3 are some examples of embedded operating systems.

Real-time

A real-time operating system is an operating system that guarantees to process events or data within a
certain short amount of time. A real-time operating system may be single- or multi-tasking, but when
multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is
achieved. An event-driven system switches between tasks based on their priorities or external events
while time-sharing operating systems switch tasks based on clock interrupts.

Library

A library operating system is one in which the services that a typical operating system provides, such as
networking, are provided in the form of libraries. These libraries are composed with the application and
configuration code to construct unikernels, which are specialized, single-address-space machine
images that can be deployed to cloud or embedded environments.

2.3 Functions of operating system

Process management

The microprocessor (or central processing unit (CPU), or just processor) is the central component of the
computer, and is in one way or another involved in everything the computer does. A computer program
consists of a series of machine code instructions which the processor executes one at a time. This means
that, even in a multi-tasking environment, a computer system can, at any given moment, only execute
as many program instructions as there are processors. In a single-processor system, therefore, only one
program can be running at any one time. The fact that a modern desktop computer can be downloading
files from the Internet, playing music files, and running various applications all at (apparently) the same
time, is due to the fact that the processor can execute many millions of program instructions per
second, allowing the operating system to allocate some processor time to each program in a transparent
manner. In recent years, the emphasis in processor manufacture has been on producing multi-core processors that enable the computer to execute multiple processes or process threads at the same time
in order to increase speed and performance.

What is a process?

Essentially, a process is what a program becomes when it is loaded into memory from a secondary
storage medium like a hard disk drive or an optical drive. Each process has its own address space, which
typically contains both program instructions and data. Despite the fact that an individual processor or
processor core can only execute one program instruction at a time, a large number of processes can be
executed over a relatively short period of time by briefly assigning each process to the processor in
turn. While a process is executing, it has complete control of the processor, but at some point the
operating system needs to regain control, such as when it must assign the processor to the next
process. Execution of a particular process will be suspended if that process requests an I/O operation, if
an interrupt occurs, or if the process times out.

When a user starts an application program, the operating system's high-level scheduler (HLS) loads all or
part of the program code from secondary storage into memory. It then creates a data structure in
memory called a process control block (PCB) that will be used to hold information about the process,
such as its current status and where in memory it is located. The operating system also maintains a
separate process table in memory that lists all the user processes currently loaded. When a new process
is created, it is given a unique process identification number (PID) and a new record is created for it in
the process table which includes the address of the process control block in memory. As well as
allocating memory space, loading the process, and creating the necessary data structures, the operating
system must also allocate resources such as access to I/O devices and disk space if the process requires
them. Information about the resources allocated to a process is also held within the process control
block. The operating system's low-level scheduler (LLS) is responsible for allocating CPU time to each
process in turn.
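
The following C sketch suggests what the data structures just described might look like. The field names, the fixed-size process table, and the limits used are illustrative assumptions rather than the layout of any real operating system.

/* Minimal sketch of a process control block (PCB) and process table. */
#include <stddef.h>

typedef enum { READY, RUNNING, BLOCKED } proc_state_t;

typedef struct pcb {
    int           pid;            /* unique process identification number */
    proc_state_t  state;          /* current status of the process        */
    void         *memory_base;    /* where in memory the process resides  */
    size_t        memory_size;
    int           open_files[16]; /* resources (e.g. I/O) allocated to it */
    unsigned long cpu_time_used;  /* accounting information               */
} pcb_t;

/* The process table lists all user processes currently loaded; each
 * entry records the PID and the address of the corresponding PCB. */
struct process_table_entry {
    int    pid;
    pcb_t *pcb;                   /* address of the process control block */
} process_table[64];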

Process states

The simple process state diagram below shows three possible states for a process. They are shown
as ready (the process is ready to execute when a processor becomes available), running (the process is
currently being executed by a processor) and blocked (the process is waiting for a specific event to occur
before it can proceed). The lines connecting the states represent possible transitions from one state to
another. At any instant, a process will exist in one of these three states. On a single-processor computer,
only one process can be in the running state at any one time. The remaining processes will either
be ready or blocked, and for each of these states there will be a queue of processes waiting for some
event.

A simple three-state process state diagram

Note that certain rules apply here. Processes entering the system must initially go into the ready state. A
process can only enter the running state from the ready state. A process can normally only leave the
system from the running state, although a process in the ready or blocked state may be aborted by the
system (in the event of an error, for example), or by the user. Although the three-state model shown
above is sufficient to describe the behavior of processes generally, the model must be extended to allow
for other possibilities, such as the suspension and resumption of a process. For example, the process
may be swapped out of working memory by the operating system's memory manager in order to free up
memory for another process. When a process is suspended, it essentially becomes dormant until
resumed by the system (or by a user). Because a process can be suspended while it is either ready or
blocked, it may also exist in one of two further states - ready suspended and blocked suspended (a
running process may also be suspended, in which case it becomes ready suspended).

A five-state process state diagram
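
The transition rules for the basic model described above can be summarized in a small C sketch; the enum values NEW and EXIT simply stand for a process entering and leaving the system, and the function name is hypothetical.

/* Sketch of the legal transitions in the basic process state model. */
#include <stdbool.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT } state_t;

/* Returns true if a process may move from 'from' to 'to'. */
static bool transition_allowed(state_t from, state_t to)
{
    switch (from) {
    case NEW:     return to == READY;                    /* must enter via ready   */
    case READY:   return to == RUNNING || to == EXIT;    /* dispatched, or aborted */
    case RUNNING: return to == READY || to == BLOCKED || to == EXIT;
    case BLOCKED: return to == READY || to == EXIT;      /* event occurs, or abort */
    default:      return false;
    }
}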

The queue of ready processes is maintained in priority order, so the next process to execute will be the
one at the head of the ready queue. The queue of blocked processes is typically unordered, since there is
no sure way to tell which of these processes will become unblocked first (although if several processes
are blocked awaiting the same event, they may be prioritized within that context). To prevent one
process from monopolizing the processor, a system timer is started each time a new process starts
executing. The process will be allowed to run for a set period of time, after which the timer generates an
interrupt that causes the operating system to regain control of the processor. The operating system
sends the previously running process to the end of the ready queue, changing its status
from running to ready, and assigns the first process in the ready queue to the processor, changing its
status from ready to running.
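
As a rough illustration of this time-slicing behavior, the toy C sketch below keeps a first-in, first-out ready queue and, on each simulated timer interrupt, moves the running process to the back of the queue and dispatches the one at the head. The names ready_queue and on_timer_interrupt and the fixed queue size are assumptions made for the example; as noted above, real ready queues are usually kept in priority order.

/* Toy round-robin dispatch driven by a simulated timer interrupt. */
#include <stdio.h>

#define MAX_PROCS 8

static int ready_queue[MAX_PROCS];      /* PIDs, head first                 */
static int queue_len = 0;
static int running_pid = -1;

static void enqueue(int pid) { ready_queue[queue_len++] = pid; }

static int dequeue(void)
{
    int pid = ready_queue[0];
    for (int i = 1; i < queue_len; i++)  /* shift remaining entries forward */
        ready_queue[i - 1] = ready_queue[i];
    queue_len--;
    return pid;
}

/* Timer interrupt: preempt the running process and dispatch the next one. */
static void on_timer_interrupt(void)
{
    if (running_pid != -1)
        enqueue(running_pid);            /* running -> ready, end of queue  */
    if (queue_len > 0)
        running_pid = dequeue();         /* ready -> running, head of queue */
    printf("now running pid %d\n", running_pid);
}

int main(void)
{
    enqueue(1); enqueue(2); enqueue(3);
    for (int slice = 0; slice < 5; slice++)
        on_timer_interrupt();            /* simulate five time slices       */
    return 0;
}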

Process scheduling

Process scheduling is a major element in process management, since the efficiency with which processes
are assigned to the processor will affect the overall performance of the system. It is essentially a matter
of managing queues, with the aim of minimizing delay while making the most effective use of the
processor's time. The operating system carries out four types of process scheduling:

• Long-term (high-level) scheduling
• Medium-term scheduling
• Short-term (low-level) scheduling
• I/O scheduling

The long-term scheduler determines which programs are admitted to the system for processing, and as
such controls the degree of multiprogramming. Before accepting a new program, the long-term
scheduler must first decide whether the processor is able to cope effectively with another process. The
more active processes there are, the smaller the percentage of the processor's time that can be
allocated to each process. The long-term scheduler may limit the total number of active processes on
the system in order to ensure that each process receives adequate processor time. New processes may
subsequently be created, as existing processes are terminated or suspended. If several programs are
waiting for the long-term scheduler, the decision as to which job to admit first might be made on a first-come-first-served basis, or by using other criteria such as priority, expected execution time, or I/O
requirements.

Medium-term scheduling is part of the swapping function. The term "swapping" refers to transferring a
process out of main memory and into virtual memory (secondary storage) or vice-versa. This may occur
when the operating system needs to make space for a new process, or in order to restore a process to
main memory that has previously been swapped out. Any process that is inactive or blocked may be
swapped into virtual memory and placed in a suspend queue until it is needed again, or until space
becomes available. The swapped-out process is replaced in memory either by a new process or by one
of the previously suspended processes.

The task of the short-term scheduler (sometimes referred to as the dispatcher) is to determine which
process to execute next. This will occur each time the currently running process is halted. A process may
cease execution because it requests an I/O operation, or because it times out, or because a hardware
interrupt has occurred. The objectives of short-term scheduling are to ensure efficient utilization of the
processor and to provide an acceptable response time to users. Note that these objectives are not
always completely compatible with one another. On most systems, a good user response time is more
important than efficient processor utilization, and may necessitate switching between processes
frequently, which will increase system overhead and reduce overall processor throughput.

Memory management

The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes, deciding which processes get memory, when they receive it, and how much they are allowed. When memory is allocated, it determines which memory locations will be assigned. It also tracks when memory is freed or unallocated and updates the status.
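
As a simple illustration of this bookkeeping, the C sketch below records the allocated-or-free status of fixed-size memory units in a bitmap. The unit granularity and the helper names are arbitrary choices for the example, not a description of any particular memory manager.

/* One bit per memory unit records whether it is allocated or free. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_UNITS 1024                       /* track 1024 allocation units */

static uint8_t status_bitmap[NUM_UNITS / 8]; /* bit = 0: free, 1: allocated */

static bool is_allocated(int unit)
{
    return (status_bitmap[unit / 8] >> (unit % 8)) & 1;
}

static void mark_allocated(int unit)
{
    status_bitmap[unit / 8] |= (uint8_t)(1u << (unit % 8));
}

static void mark_free(int unit)
{
    status_bitmap[unit / 8] &= (uint8_t)~(1u << (unit % 8));
}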

Memory management techniques

• Single contiguous management
• Partitioned management
• Paged memory management (paging)
• Segmented memory management (segmentation)

Single contiguous allocation

Single allocation is the simplest memory management technique. All the memory, usually with the
exception of a small portion reserved for the operating system, is available to the single application. MS-
DOS is an example of a system which allocates memory in this way. An embedded system running a
single application might also use this technique.

A system using single contiguous allocation may still multitask by swapping the contents of memory to
switch among users. Early versions of the Music operating system used this technique.

Partitioned allocation

Partitioned allocation divides primary memory into multiple memory partitions, usually contiguous
areas of memory. Each partition might contain all the information for a specific job or task. Memory
management consists of allocating a partition to a job when it starts and unallocating it when the job
ends.

Partitions may be either static, that is, defined at Initial Program Load (IPL) or boot time or by the computer operator, or dynamic, that is, automatically created for a specific job. Multiprogramming with a Fixed Number of Tasks (MFT) is an example of static partitioning, and Multiprogramming with a Variable Number of Tasks (MVT) is an example of dynamic partitioning. MVT and its successors use the term region to distinguish dynamic partitions from static ones in other systems.

Partitions may be relocatable using hardware typed memory. Relocatable partitions are able to
be compacted to provide larger chunks of contiguous physical memory. Compaction moves "in-use"
areas of memory to eliminate "holes" or unused areas of memory caused by process termination in
order to create larger contiguous free areas.

Some systems allow partitions to be swapped out to secondary storage to free additional memory. Early
versions of IBM's Time Sharing Option (TSO) swapped users in and out of a single time-sharing partition.
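
A minimal C sketch of static (MFT-style) partitioned allocation is given below: each partition either is free or holds one job, a job is placed in the first free partition large enough for it, and the partition is unallocated when the job ends. The partition sizes and function names are invented for the example.

/* Toy static partitioned allocation with a first-fit search. */
#include <stdio.h>

struct partition {
    int size_kb;      /* size of the partition                    */
    int job_id;       /* 0 means free, otherwise the job using it */
};

static struct partition partitions[] = {
    { 64, 0 }, { 128, 0 }, { 256, 0 }, { 512, 0 }
};
#define NUM_PARTITIONS (int)(sizeof partitions / sizeof partitions[0])

/* Allocate a partition to a job when it starts; return its index or -1. */
static int allocate_partition(int job_id, int job_size_kb)
{
    for (int i = 0; i < NUM_PARTITIONS; i++) {
        if (partitions[i].job_id == 0 && partitions[i].size_kb >= job_size_kb) {
            partitions[i].job_id = job_id;
            return i;
        }
    }
    return -1;        /* no suitable partition: the job must wait */
}

/* Unallocate the partition when the job ends. */
static void free_partition(int index) { partitions[index].job_id = 0; }

int main(void)
{
    int p = allocate_partition(7, 100);   /* job 7 needs 100 KB */
    printf("job 7 placed in partition %d\n", p);
    if (p >= 0)
        free_partition(p);
    return 0;
}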

Paged memory management

Paged allocation divides the computer's primary memory into fixed-size units called page frames, and
the program's virtual address space into pages of the same size. The hardware memory management
unit maps pages to frames. The physical memory can be allocated on a page basis while the address
space appears contiguous.

Usually, with paged memory management, each job runs in its own address space. However, there are
some single address space operating systems that run all processes within a single address space.
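
The mapping of pages to frames can be sketched in a few lines of C. The example below assumes 4 KB pages and a single-level page table with made-up frame numbers; real memory management units use multi-level structures and translation caches.

/* Virtual-to-physical translation with a single-level page table. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define NUM_PAGES   16u

/* page_table[page] holds the frame number that page is mapped to. */
static uint32_t page_table[NUM_PAGES] = { 5, 9, 2, 7 /* remaining pages unmapped */ };

static uint32_t translate(uint32_t virtual_addr)
{
    uint32_t page   = virtual_addr / PAGE_SIZE;   /* which page            */
    uint32_t offset = virtual_addr % PAGE_SIZE;   /* position in the page  */
    uint32_t frame  = page_table[page];           /* look up the frame     */
    return frame * PAGE_SIZE + offset;            /* physical address      */
}

int main(void)
{
    /* virtual address 0x1234 lies in page 1, offset 0x234 */
    printf("physical = 0x%x\n", translate(0x1234));  /* frame 9 -> 0x9234 */
    return 0;
}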

Segmented memory management

Segmented memory is the only memory management technique that does not provide the user's program with a linear and contiguous address space. Segments are areas of memory that usually correspond to a logical grouping of information such as a code procedure or a data array. Segments require hardware support in the form of a segment table, which usually contains the physical address of the segment in memory, its size, and other data such as access protection bits and status (swapped in, swapped out, etc.).

Segmentation allows better access protection than other schemes because memory references are
relative to a specific segment and the hardware will not permit the application to reference memory not
defined for that segment.

It is possible to implement segmentation with or without paging. Without paging support the segment is
the physical unit swapped in and out of memory if required. With paging support the pages are usually
the unit of swapping and segmentation only adds an additional level of security.

Addresses in a segmented system usually consist of the segment id and an offset relative to the segment
base address, defined to be offset zero.
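
The C sketch below illustrates this kind of address translation: an address is a (segment id, offset) pair, the segment table supplies the base and limit, and a reference beyond the segment's limit is rejected. The table contents and field names are assumptions for the example.

/* Segment-table lookup with a simple limit (protection) check. */
#include <stdint.h>
#include <stdio.h>

struct segment_entry {
    uint32_t base;        /* physical address of the segment in memory */
    uint32_t limit;       /* size of the segment in bytes              */
    int      present;     /* status: swapped in (1) or out (0)         */
};

static struct segment_entry segment_table[] = {
    { 0x10000, 0x2000, 1 },   /* segment 0: code */
    { 0x40000, 0x1000, 1 },   /* segment 1: data */
};

/* Returns the physical address, or -1 for a protection violation. */
static int64_t translate(uint32_t segment, uint32_t offset)
{
    if (segment >= sizeof segment_table / sizeof segment_table[0])
        return -1;
    if (!segment_table[segment].present || offset >= segment_table[segment].limit)
        return -1;                                   /* reference not permitted */
    return (int64_t)segment_table[segment].base + offset;
}

int main(void)
{
    printf("0x%llx\n", (unsigned long long)translate(1, 0x10));  /* 0x40010           */
    printf("%lld\n", (long long)translate(1, 0x4000));           /* -1: out of bounds */
    return 0;
}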

File management

Another part of the operating system is the file manager. While the memory manager is responsible for
the maintenance of primary memory, the file manager is responsible for the maintenance of secondary
storage (e.g., hard disks).

Each file is a named collection of data stored in a device. The file manager implements this abstraction
and provides directories for organizing files. It also provides a spectrum of commands to read and write
the contents of a file, to set the file read/write position, to set and use the protection mechanism, to
change the ownership, to list files in a directory, and to remove a file. The file manager provides a
protection mechanism to allow machine users to administer how processes executing on behalf of
different users can access the information in files. File protection is a fundamental property of files
because it allows different people to store their information on a shared computer, with the confidence
that the information can be kept confidential.

In addition to these functions, the file manager also provides a logical way for users to organize files in secondary storage. For the convenience of the machine's users, most file managers allow files to be grouped into a bundle called a directory or folder. This approach allows a user to organize his or her files according to their purpose by placing related files in the same directory. Moreover, by allowing directories to contain other directories, called subdirectories, a hierarchical
organization can be constructed. For example, a user may create a directory called Records that contains
subdirectories called Financial Records, Medical Records, and Household Records. Within each of these
subdirectories could be files that fall within that particular category. A sequence of directories within
directories is called a directory path.

While users may need to store complex data structures in secondary storage, most storage devices
including hard disks "are capable of storing only linearly addressed blocks of bytes." Thus, the file
manager needs to provide some way of mapping user data to storage blocks in secondary storage and
vice versa. These blocks can be managed in at least three different ways: "as a contiguous set of blocks
on the secondary storage device, as a list of blocks interconnected with links, or as a collection of blocks
interconnected by a file index". These three methods are commonly referred to as Contiguous
Allocation, Linked Allocation, and Indexed Allocation.
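
As one small illustration, the C sketch below models linked allocation: each block records the number of the next block in the file, so the file manager can follow the chain from the file's first block. The block numbers used are made up for the example.

/* Toy model of linked allocation: blocks chained by "next" pointers. */
#include <stdio.h>

#define NUM_BLOCKS   16
#define END_OF_FILE  -1

/* next_block[b] gives the block that follows block b in its file. */
static int next_block[NUM_BLOCKS] = {
    [3] = 7, [7] = 12, [12] = END_OF_FILE   /* one file: blocks 3 -> 7 -> 12 */
};

/* A directory entry only needs the file's first block under this scheme. */
static void list_blocks(int first_block)
{
    for (int b = first_block; b != END_OF_FILE; b = next_block[b])
        printf("block %d\n", b);
}

int main(void)
{
    list_blocks(3);      /* prints 3, 7, 12 */
    return 0;
}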

Device management

All modern operating systems have a subsystem called the device manager. The device manager is
responsible for detecting and managing devices, performing power management, and exposing devices
to user space. Since the device manager is a crucial part of any operating system, it's important to make
sure it's well designed.

Device drivers

Device drivers allow user applications to communicate with a system's devices. They provide a high-level
abstraction of the hardware to user applications while handling the low-level device-specific I/O and
interrupts. Device drivers can be implemented as loadable kernel modules (for a Monolithic Kernel) or
user-mode servers (for Microkernels).

The main role of the device manager is detecting devices on the system. Usually, devices are organized
in a tree structure, with devices enumerating their children. The root bus driver sits at the root of the
device tree. It detects the buses present on the system as well as devices directly connected to the
motherboard. Each bus is then recursively enumerated, with its children continuing to enumerate their
children until the bottom of the device tree is reached.

Each device that is detected should contain a list of resources for the device to use. Examples of
resources are I/O, memory, IRQs (Interrupt request), DMA channels, and configuration space. Devices
are assigned resources by their parent devices. Devices should use only the resources they are given; this allows the same device driver to work on different machines where the resource assignments may be different but the programming interface is otherwise the same.

Drivers are loaded for each device that's found. When a device is detected, the device manager finds the
device's driver. If not loaded already, the device manager loads the driver. It then calls the driver to
initialize that device.

How the device manager matches a device to a device driver is an important choice. The way devices are
identified is very bus specific. On PCI, a device is identified through a combination of its vendor and
device IDs. USB has the same scheme as PCI, using a vendor and product ID. ACPI (Advanced Configuration and Power Interface) uses PNP IDs to identify devices in the ACPI namespace. With this information, it is possible to build a database mapping device IDs to drivers. This information is best
stored in a separate file.
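
A sketch of such a matching database is shown below in C: a table maps (vendor ID, device ID) pairs to driver names. The table entries and the function name are illustrative assumptions, and a real system would load this information from files rather than compile it in.

/* Match a detected device to a driver by vendor and device ID. */
#include <stdint.h>
#include <stdio.h>

struct driver_match {
    uint16_t vendor_id;
    uint16_t device_id;
    const char *driver_name;   /* driver to load for this device */
};

static const struct driver_match match_table[] = {
    { 0x8086, 0x100E, "example_net"   },   /* made-up entry for a network card */
    { 0x1234, 0x1111, "example_video" },   /* made-up entry for a video card   */
};

static const char *find_driver(uint16_t vendor, uint16_t device)
{
    for (size_t i = 0; i < sizeof match_table / sizeof match_table[0]; i++)
        if (match_table[i].vendor_id == vendor && match_table[i].device_id == device)
            return match_table[i].driver_name;
    return NULL;                           /* no driver known for this device */
}

int main(void)
{
    const char *drv = find_driver(0x8086, 0x100E);
    printf("%s\n", drv ? drv : "no driver found");
    return 0;
}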

Inter-Process Communication (IPC)

The device manager needs to implement some form of IPC between itself and device drivers. IPC will be
used by the device manager to send I/O requests to device drivers, and by drivers to respond to these
requests. It is usually implemented with messages that contain data about the request, such as the I/O
function code, buffer pointer, device offset, and buffer length. To respond to these I/O requests, every
device driver needs dispatch functions used to handle each I/O function code. Each device needs a
queue of these IPC messages for it to handle. On Windows NT, this IPC is done with I/O Request Packets.
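
The C sketch below suggests what such a request message and a driver's dispatch functions might look like. The function codes, field layout, and names are assumptions for illustration and are not modeled on any particular system's packet format.

/* Sketch of an I/O request message and a per-driver dispatch table. */
#include <stddef.h>
#include <stdio.h>

enum io_function { IO_READ, IO_WRITE, IO_FUNCTION_COUNT };

struct io_request {
    enum io_function code;     /* I/O function code       */
    void            *buffer;   /* buffer pointer          */
    long             offset;   /* device offset           */
    size_t           length;   /* buffer length           */
    int              status;   /* filled in by the driver */
};

/* Each device driver supplies one dispatch function per function code. */
typedef void (*dispatch_fn)(struct io_request *);

static void my_driver_read(struct io_request *req)  { req->status = 0; }
static void my_driver_write(struct io_request *req) { req->status = 0; }

static dispatch_fn my_driver_dispatch[IO_FUNCTION_COUNT] = {
    [IO_READ]  = my_driver_read,
    [IO_WRITE] = my_driver_write,
};

/* The device manager "sends" a request by handing it to the driver. */
static void send_request(struct io_request *req)
{
    my_driver_dispatch[req->code](req);
}

int main(void)
{
    char buf[512];
    struct io_request req = { IO_READ, buf, 0, sizeof buf, -1 };
    send_request(&req);
    printf("request completed with status %d\n", req.status);
    return 0;
}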

2.4 Examples of operating system

Windows OS

Windows is a computer operating system (OS) developed by Microsoft Corporation to run personal computers (PCs).
Featuring the first graphical user interface (GUI) for IBM-compatible PCs, the Windows OS soon
dominated the PC market. Approximately 90 percent of PCs run some version of Windows.

The first version of Windows, released in 1985, was simply a GUI offered as an extension of Microsoft’s
existing disk operating system, or MS-DOS. Based in part on licensed concepts that Apple Inc. had used
for its Macintosh System Software, Windows for the first time allowed DOS users to visually navigate a
virtual desktop, opening graphical “windows” displaying the contents of electronic folders and files with
the click of a mouse button, rather than typing commands and directory paths at a text prompt.

Subsequent versions introduced greater functionality, including native Windows File Manager, Program
Manager, and Print Manager programs, and a more dynamic interface. Microsoft also developed
specialized Windows packages, including the networkable Windows for Workgroups and the high-
powered Windows NT, aimed at businesses. The 1995 consumer release Windows 95 fully integrated
Windows and DOS and offered built-in Internet support, including the World Wide Web browser
Internet Explorer.

With the 2001 release of Windows XP, Microsoft united its various Windows packages under a single
banner, offering multiple editions for consumers, businesses, multimedia developers, and others.
Windows XP abandoned the long-used Windows 95 kernel (core software code) for a more powerful
code base and offered a more practical interface and improved application and memory management.
The highly successful XP standard was succeeded in late 2006 by Windows Vista, which experienced a troubled rollout and met with considerable marketplace resistance, quickly acquiring a reputation for
being a large, slow, and resource-consuming system. Responding to Vista’s disappointing adoption rate,
Microsoft developed Windows 7, an OS whose interface was similar to that of Vista but which was met with enthusiasm for its noticeable speed improvement and its modest system requirements.

Linux

Linux is, in simplest terms, an operating system. It is the software on a computer that enables
applications and the computer operator to access the devices on the computer to perform desired
functions. The operating system (OS) relays instructions from an application to, for instance, the
computer's processor. The processor performs the instructed task and then sends the results back to the
application via the operating system. Explained in these terms, Linux is very similar to other operating
systems, such as Windows and OS X.

As an open operating system, Linux is developed collaboratively, meaning no one company is solely
responsible for its development or ongoing support. Companies participating in the Linux economy
share research and development costs with their partners and competitors. This spreading of
development burden amongst individuals and companies has resulted in a large and efficient ecosystem
and unheralded software innovation.

Linux looks and feels much like any other UNIX system; indeed, UNIX compatibility has been a major
design goal of the Linux project. However, Linux is much younger than most UNIX systems. Its
development began in 1991, when a Finnish student, Linus Torvalds, wrote and christened Linux, a small
but self-contained kernel for the 80386 processor, the first true 32-bit processor in Intel's range of PC-
compatible CPUs. Early in its development, the Linux source code was made available free on the
Internet. As a result, Linux's history has been one of collaboration by many users from all around the
world, corresponding almost exclusively over the Internet. From an initial kernel that partially
implemented a small subset of the UNIX system services, the Linux system has grown to include much
UNIX functionality.

In its early days, Linux development revolved largely around the central operating-system kernel, the
core, privileged executive that manages all system resources and that interacts directly with the
computer hardware.

We need much more than this kernel to produce a full operating system, of course. It is useful to make
the distinction between the Linux kernel and a Linux system. The Linux kernel is an entirely original piece
of software developed from scratch by the Linux community. The Linux system as we know it today
includes a multitude of components, some written from scratch, others borrowed from other
development projects, and still others created in collaboration with other teams.

The basic Linux system is a standard environment for applications and user programming, but it does
not enforce any standard means of managing the available functionality as a whole. As Linux has matured, a need has arisen for another layer of functionality on top of the Linux system. This need has
been met by various Linux distributions. A Linux distribution includes all the standard components of the
Linux system, plus a set of administrative tools to simplify the initial installation and subsequent
upgrading of Linux and to manage installation and removal of other packages on the system. A modern
distribution also typically includes tools for management of file systems, creation and management of
user accounts, administration of networks, Web browsers, word processors, and so on.

UNIX

UNIX is a family of multitasking, multiuser computer operating systems that derive from the original
AT&T UNIX, developed in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie,
and others.

Originally, UNIX was meant to be a programmer's workbench for developing software to be run on multiple platforms, rather than a platform for running application software. The system grew larger as the operating system started spreading in academic circles and as users added their own tools to the system and shared them with colleagues.

UNIX was designed to be portable, multi-tasking and multi-user in a time-sharing configuration. UNIX
systems are characterized by various concepts: the use of plain text for storing data; a hierarchical file
system; treating devices and certain types of inter-process communication (IPC) as files; and the use of a
large number of software tools, small programs that can be strung together through a command-line
interpreter using pipes, as opposed to using a single monolithic program that includes all of the same
functionality. These concepts are collectively known as the "UNIX philosophy".
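
To show what stringing small programs together with a pipe means at the system-call level, the short C program below does roughly what a shell does for the command line "ls | wc -l": it creates a pipe, connects the output of ls to the input of wc, and waits for both. Error handling is kept minimal for brevity.

/* Connect two programs with a pipe, as a shell would for "ls | wc -l". */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {              /* first child: runs "ls"            */
        dup2(fd[1], STDOUT_FILENO); /* send standard output into pipe    */
        close(fd[0]); close(fd[1]);
        execlp("ls", "ls", (char *)NULL);
        _exit(127);                 /* only reached if exec fails        */
    }

    if (fork() == 0) {              /* second child: runs "wc -l"        */
        dup2(fd[0], STDIN_FILENO);  /* read standard input from pipe     */
        close(fd[0]); close(fd[1]);
        execlp("wc", "wc", "-l", (char *)NULL);
        _exit(127);
    }

    close(fd[0]); close(fd[1]);     /* parent closes both ends           */
    wait(NULL); wait(NULL);         /* wait for both children            */
    return 0;
}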

By the early 1980s users began seeing UNIX as a potential universal operating system, suitable for
computers of all sizes. UNIX operating systems are widely used in servers, workstations, and mobile
devices. The UNIX environment and the client–server program model were essential elements in the
development of the Internet and the reshaping of computing as centered in networks rather than in
individual computers.

Both UNIX and the C programming language were developed by AT&T and distributed to government
and academic institutions, which led to both being ported to a wider variety of machine families than
any other operating system.

Under UNIX, the operating system consists of many utilities along with the master control program,
the kernel. The kernel provides services to start and stop programs, handles the file system and other
common "low-level" tasks that most programs share, and schedules access to avoid conflicts when
programs try to access the same resource or device simultaneously. To mediate such access, the kernel
has special rights, reflected in the division between user space and kernel space.

The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and
return to a system in which most tasks were completed by smaller utilities. In an era when a standard
computer consisted of a hard disk for storage and a data terminal for input and output (I/O), the UNIX
file model worked quite well, as most I/O was linear. However, modern systems include networking and
other new devices. As graphical user interfaces developed, the file model proved inadequate to the task
of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking
I/O and the set of inter-process communication mechanisms were augmented with UNIX domain
sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions
such as network protocols could be moved out of the kernel, while conventional (monolithic) UNIX
implementations have network protocol stacks as part of the kernel.
