Operating System Assignment
Arrivals occur at rate λ according to a Poisson process and move the process from state i to i + 1.
Service times have an exponential distribution with rate parameter μ in the M/M/1 queue,
where 1/μ is the mean service time.
A single server serves customers one at a time from the front of the queue, according to a first-come, first-served discipline. When service is complete, the customer leaves the queue and the number of customers in the system decreases by one.
The buffer is of infinite size, so there is no limit on the number of customers it can contain.
The model can be described as a continuous-time Markov chain whose transition rate matrix is shown below.
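With arrivals at rate λ and service completions at rate μ on the state space {0, 1, 2, ...}, the chain has the standard birth-death generator:

Q = \begin{pmatrix}
-\lambda & \lambda & & & \\
\mu & -(\mu+\lambda) & \lambda & & \\
 & \mu & -(\mu+\lambda) & \lambda & \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}

Each row sums to zero: state i > 0 leaves at total rate λ + μ, going up with an arrival or down with a service completion.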
•Device Management
Processes usually require several resources to execute; if these resources are available, they are granted and control is returned to the user process. These resources are also thought of as devices. Some are physical, such as a video card, and others are abstract, such as a file.
•Information Management
Some system calls exist purely for transferring information between the user program and the operating system, for example requesting the current time or date. The OS also keeps information about all its processes and provides system calls to report this information.
•Communication
There are two models of interprocess communication: the message-passing model and the shared-memory model (a minimal sketch of message passing follows this list).
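As a sketch of the message-passing model, the POSIX pipe() and fork() calls let a parent and child process exchange bytes through the kernel. The message text and buffer size here are arbitrary illustrative choices:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                       /* fd[0] = read end, fd[1] = write end */
    char buf[32];

    if (pipe(fd) == -1) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {               /* child acts as the sender */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    /* parent acts as the receiver */
    close(fd[1]);
    read(fd[0], buf, sizeof buf);
    printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}

In the shared-memory model, by contrast, the two processes would map a common region and exchange data without a kernel copy on every transfer.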
The main difference between Multiprogrammed Batch Systems and Time-Sharing Systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.
Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing the processor executes each user program in a short burst, or quantum, of computation. That is, if n users are present, each user gets a time quantum in turn. When the user submits a command, the response time is a few seconds at most.
The operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of computer time. Computer systems that were designed primarily as batch systems have been modified to time-sharing systems.
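A minimal sketch of the time-quantum idea, assuming a hypothetical task table and a fixed quantum of 2 time units; the scheduler cycles through the users, giving each a short burst in turn:

#include <stdio.h>

#define NTASKS  3
#define QUANTUM 2   /* hypothetical time units per slice */

/* remaining work per task, in the same time units */
static int remaining[NTASKS] = {5, 3, 7};

int main(void) {
    int left = NTASKS;
    while (left > 0) {
        for (int i = 0; i < NTASKS; i++) {
            if (remaining[i] <= 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            remaining[i] -= slice;
            printf("task %d runs %d unit(s), %d left\n", i, slice, remaining[i]);
            if (remaining[i] == 0)
                left--;
        }
    }
    return 0;
}

With n active users and quantum q, each user is served again after at most about n × q time units, which is why the perceived response stays short when q is small.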
Time-sharing also brings its own difficulties:
•Problem of reliability.
•Question of security and integrity of user programs and data.
•Problem of data communication.
Distributed Operating System:
Distributed systems use multiple central processors to serve multiple real-time applications and
multiple users. Data processing jobs are distributed among the processors accordingly.
The processors communicate with one another through various communication lines (such as high-speed buses or telephone lines). These are referred to as loosely coupled systems or distributed systems. Processors in a distributed system may vary in size and function, and are referred to as sites, nodes, computers, and so on.
The advantages of distributed systems include:
•With resource sharing, a user at one site may be able to use the resources available at another.
•Faster exchange of data between sites, for example via electronic mail.
•If one site fails, the remaining sites can potentially continue operating.
•Better service to the customers.
•Reduction of the load on the host computer.
•Reduction of delays in data processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data; they are often used as a control device in a dedicated application. A real-time operating system must have well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, and air traffic control systems.
Layered Approach:
The modularization of the system can be done in a number of ways. As shown in the figure, in the layered approach the OS is broken up into a number of layers or levels, each built on top of the lower one. The foundation layer is the hardware and the highest layer (layer N) is the user interface. A typical OS layer (layer M) comprises data structures and a set of routines which can be invoked by higher-level layers. Layer M in turn can invoke operations on lower-level layers.
Conceptually, a computer system is made up of layers. The hardware is the lowest level in all such systems. The kernel, running at the next level, uses the hardware instructions to create a set of system calls for use by the outer layers. The system programs above the kernel are then able to use either the system calls or the hardware instructions, and in some ways these programs do not distinguish between the two. System programs treat the hardware and the system calls as though they were both at the same level. In some systems, application programs can call system programs. The application programs view everything under them in the hierarchy as though the latter were part of the machine itself. This layered approach is taken to its logical conclusion in the concept of the virtual machine (VM). The VM operating system for IBM systems is the best illustration of the VM concept.
The virtual machine approach, on the other hand, does not provide any additional functionality; rather, it provides an interface that is identical to the underlying bare hardware. Each process is given a virtual copy of the underlying computer. The physical computer shares its resources to create the virtual machines, as illustrated in the figure.
[Figure: The layered structure]
The chief advantage of the layered approach is modularity. The layers are selected so that each uses the functions and services of only lower layers. This approach simplifies debugging and system verification.
The major difficulty with the layered approach is the careful definition of the layers, because a layer can only use the layers below it. It also tends to be less efficient than other approaches: each layer adds overhead to a system call (which is trapped when a program executes an I/O operation, for example), so a system call takes longer than it does on a non-layered system. Dijkstra's THE operating system and IBM's OS/2 are examples of layered operating systems.
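A minimal sketch of the layering discipline, using assumed layer names (hardware, driver, file system); each routine calls only the layer directly below it, which also shows where the extra per-layer overhead on each call comes from:

#include <stdio.h>

/* Layer 0: hardware (simulated) */
static void hw_write_block(int block, const char *data) {
    printf("hw: writing block %d: %s\n", block, data);
}

/* Layer 1: device driver, built only on layer 0 */
static void driver_write(int block, const char *data) {
    hw_write_block(block, data);   /* a real driver would add retries, etc. */
}

/* Layer 2: file system, built only on layer 1 */
static void fs_write(const char *name, const char *data) {
    int block = 42;                /* hypothetical block lookup for 'name' */
    driver_write(block, data);
}

int main(void) {
    /* each request traverses every layer below it, adding overhead */
    fs_write("notes.txt", "hello");
    return 0;
}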
Microkernels:
This technique structures the operating system by removing all non-essential components from the kernel and implementing them as system-level and user-level programs. The result is a smaller kernel. Microkernels typically provide minimal process and memory management, in addition to a communication facility. The major function of the microkernel is to provide a communication service between the client program and the various services that are also running in user space.
The benefits of the microkernel approach include the ease of extending the OS. All new services are added in user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be few because the microkernel is small. The resulting OS is easier to port from one hardware design to another. It also provides more security and reliability, since most services run as user processes rather than kernel processes. Mach, OS/2, Mac OS X Server, QNX, and Windows NT are examples of microkernel-based operating systems. As shown in the figure, many types of services can be run on top of the Windows NT microkernel, thereby allowing applications developed for different platforms to run under Windows NT.
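A minimal sketch of the microkernel's communication role, using a hypothetical message format and service ID (not any real kernel's API); the kernel's only job here is to route the client's request to a server running in user space:

#include <stdio.h>
#include <string.h>

/* Hypothetical message format: all OS services are requested via IPC. */
struct message {
    int  service;        /* which user-space server should handle this */
    char payload[64];
};

#define SVC_FILE 1

static void fs_server_handle(const struct message *m);

/* The kernel only routes messages; it implements no file logic itself. */
static void kernel_send(const struct message *m) {
    if (m->service == SVC_FILE)
        fs_server_handle(m);     /* a real kernel would context-switch here */
}

/* File server running in user space */
static void fs_server_handle(const struct message *m) {
    printf("fs server: got request '%s'\n", m->payload);
}

int main(void) {
    struct message m = { .service = SVC_FILE };
    strcpy(m.payload, "open notes.txt");
    kernel_send(&m);             /* client asks the kernel to deliver it */
    return 0;
}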
Resident monitor:
In computing, a resident monitor is a type of system software program that was used in many
early computers from the 1950s to 1970s. It can be considered a precursor to the operating
system. The name is derived from a program which is always present in the computer's memory, thus being "resident".[2] Because memory was very limited on these systems, the resident
monitor was often little more than a stub which would gain control at the end of a job and load a
non-resident portion to perform required job cleanup and setup tasks.
On a general-use computer using punched card input, the resident monitor governed the
machine before and after each job control card was executed, loaded and interpreted each
control card, and acted as a job sequencer for batch processing operations. The functions that
the resident monitor could perform were: clearing memory from the last used program (with the
exception of itself), loading programs, searching for program data and maintaining standard IO
routines in memory.[2]
Bare Machine:
Bare machine (or bare metal), in computer parlance, means a computer executing
instructions directly on logic hardware without an intervening operating system. Modern
operating systems evolved through various stages, from elementary to the present day
complex, highly sensitive systems incorporating many services. After the development of
programmable computers (which did not require physical changes to run different
programs) but prior to the development of operating systems, sequential instructions
were executed on the computer hardware directly using machine language without any
system software layer. This approach is termed the "bare machine" precursor to modern
operating systems. Today it is mostly applicable to embedded
systems and firmware, generally with time-critical latency requirements, while
conventional programs are run by a runtime system overlaid on an operating system.
The Producer-Consumer problem is a dilemma whose solution, for reasons discussed later, plays a central role in any non-trivial Operating System that allows concurrent process activity.
The best way to characterise the problem is by example. Imagine a scenario in which there exist two distinct processes, both operating on a single shared data area. One process, the Producer, inserts information into the data area, whilst the other process, the Consumer, removes information from that same area. In order for the Producer to insert information into the data area, there must be enough space. The Producer's sole function is to insert data into the data area; it is not allowed to remove any data from the area. Similarly, for the Consumer to be able to remove information from the data area, there must be information there in the first place. Once again, the sole function of the Consumer is to remove data from the data area. If no data is present, the Consumer is not allowed to insert some data of its own to later be removed by itself.
In short, the Producer relies on the Consumer to make space in the data-area so that it may
insert more information whilst at the same time, the Consumer relies on the Producer to insert
information into the data area so that it may remove that information. It therefore follows that a
mechanism is required to allow the Producer and Consumer to communicate so that they know
when it is safe to attempt to write or read information from the data-area.
Therefore, the solution of the Producer-Consumer problem lies with devising a suitable
communication protocol through which the two processes may exchange information.
The definition of such a protocol is the main factor that makes the Producer-Consumer problem interesting in terms of concurrent systems. Not all processes in a concurrent system operate alone; co-operating processes need a way to communicate. For example, resource management is a fundamental concern in any operating system and can be facilitated in this instance by a suitable Producer-Consumer protocol.
Without such a protocol, processes would be stand-alone and many of the benefits of abstraction and component virtualisation would be lost.
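One common realisation of such a protocol is a bounded buffer protected by a mutex and two condition variables. This is a sketch using POSIX threads; the buffer size and item count are arbitrary choices:

#include <stdio.h>
#include <pthread.h>

#define BUF_SIZE 4
#define N_ITEMS  10

static int buffer[BUF_SIZE];
static int count = 0, in = 0, out = 0;

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == BUF_SIZE)           /* wait for the Consumer to make space */
            pthread_cond_wait(&not_full, &lock);
        buffer[in] = i;
        in = (in + 1) % BUF_SIZE;
        count++;
        pthread_cond_signal(&not_empty);    /* tell the Consumer there is data */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                  /* wait for the Producer to insert data */
            pthread_cond_wait(&not_empty, &lock);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        count--;
        pthread_cond_signal(&not_full);     /* tell the Producer there is space */
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

The two condition variables are exactly the communication mechanism described above: not_full lets the Consumer tell the Producer that space exists, and not_empty lets the Producer tell the Consumer that data exists.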