
EXERCISE I:

1.1:
-Personal Computers
PCs are general-purpose computers that are independent and can be used
by a single user. Sharing of resources and communication are not possible on
PCs. They have all the necessary resources local to the machine and are
efficient in processing all requests locally.
-Network Computers
Network computers are computers that are connected to each other through a
network. It is possible to share resources and communicate with other computers
in the network. They have very few local resources and only a minimal operating
system; they rely on a server for all their other required resources.
-On network computers, people can copy files from one machine to another, so they
are not as secure as personal computers. They are preferred at places where
sharing of resources and data is required.

1.2:
a. A dormitory floor -A LAN - because it is a small area and two or more computers
connected together within the dormitory floor through hardware and software
should be sufficient. With this configuration, computers can share files, resources,
and if preferred, an Internet connection.
b. A university campus - A LAN, or a WAN if it is a very large campus
c. A state - A WAN - since it is a large geographic area and will require the use of
the Internet for proper network configuration.
d. A nation - A WAN

1.3:
Caches are useful when two or more components need to exchange data, and the
components perform transfers at differing speeds. Caches solve the transfer
problem by providing a buffer of intermediate speed between the components. If
the fast device finds the data it needs in the cache, it need not wait for the slower
device. The data in the cache must be kept consistent with the data in the
components. If the value of a datum changes in a component, and the datum is also
in the cache, the cache must also be updated. This is especially a problem on
multiprocessor systems where more than one process may be accessing a datum.
A component may be eliminated by an equal-sized cache, but only if:
-The cache and the component have equivalent state-saving capacity (that is, if the
component retains its data when electricity is removed, the cache must retain the
data as well), and

-The cache is affordable, because faster storage tends to be more expensive.
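
The idea can be made concrete with a small sketch. The following C fragment is
invented for illustration (slow_read and slow_write stand in for any slower
component): a one-slot write-through cache where hits avoid the slow device, and
where writes update both copies so the data stays consistent.

    /* Minimal sketch of a one-slot write-through cache between a fast
     * consumer and a slow backing store (names are illustrative). */
    static int backing[1024];          /* the slow component's data    */
    static int cached_value;           /* the single cache slot        */
    static int cached_addr = -1;       /* which address the slot holds */

    static int  slow_read(int addr)          { return backing[addr]; }
    static void slow_write(int addr, int v)  { backing[addr] = v;    }

    int cache_read(int addr) {
        if (addr == cached_addr)        /* hit: no wait on the slow side */
            return cached_value;
        cached_value = slow_read(addr); /* miss: fetch and remember      */
        cached_addr  = addr;
        return cached_value;
    }

    void cache_write(int addr, int v) {
        slow_write(addr, v);            /* write-through keeps the       */
        if (addr == cached_addr)        /* component and the cache       */
            cached_value = v;           /* consistent                    */
    }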

1.4:
- When there are few other users, the task is large, and the hardware is fast,
timesharing makes sense. The full power of the system can be brought to bear on
the user’s problem. The problem can be solved faster than on a personal computer.
Another case occurs when lots of other users need resources at the same time. A
personal computer is best when the job is small enough to be executed reasonably
on it and when performance is sufficient to execute the program to the user’s
satisfaction.

1.5:
The four steps are:

a. Reserve machine time.

b. Manually load program into memory.

c. Load starting address and begin execution

d. Monitor and control execution of program from console.

1.6:
The distinction between kernel mode and user mode provides a rudimentary form
of protection in the following manner. Certain instructions can be executed only
when the CPU is in kernel mode. Similarly, hardware devices can be accessed only
when the program is executing in kernel mode. Control over when interrupts can
be enabled or disabled is also possible only when the CPU is in kernel mode.
Consequently, the CPU has very limited capability when executing in user mode,
thereby enforcing protection of critical resources.

1.7:
a. One such problem in this environment is one user copying, altering, stealing, or
writing over another user's data. Another problem is one user consuming another
user's resources, such as memory space or printers, at the expense of the first user.
b. We cannot ensure the same degree of security in a time-shared machine,
because time-shared systems are inherently less secure and their shared buffers
can be overloaded. Additionally, a protection method designed by a human being
can and eventually will be broken, probably by another human being.

1.8:
The processor could keep track of what locations are associated with each process
and limit access to locations that are outside of a program’s extent. Information
regarding the extent of a program’s memory could be maintained by using base
and limit registers and by performing a check for every memory access.
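
A minimal sketch of that check in C, assuming base and limit registers as
described above (the struct and names are illustrative, not from any particular
hardware):

    #include <stdint.h>

    struct region { uint32_t base, limit; };   /* per-process registers */

    /* Returns nonzero if the access is legal for this process. */
    int access_ok(const struct region *r, uint32_t addr) {
        return addr >= r->base && addr < r->base + r->limit;
    }
    /* An illegal access would instead trap to the operating system. */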

1.9:
Handheld computers are much smaller than traditional desktop PCs. This results
in smaller memory, smaller screens, and slower processing capabilities than a
standard desktop PC. Because of these limitations, most handhelds currently can
perform only basic tasks such as calendars, email, and simple word processing.
However, due to their small size, they are quite portable and, when they are
equipped with wireless access, can provide remote access to electronic mail and
the World Wide Web.

1.10:
The client-server model firmly distinguishes the roles of the client and server.
Under this model, the client requests services that are provided by the server. The
peer-to-peer model doesn’t have such strict roles. In fact, all nodes in the system
are considered peers and thus may act as either clients or servers - or both. A node
may request a service from another peer, or the node may in fact provide such a
service to other peers in the system. For example, let’s consider a system of nodes
that share cooking recipes. Under the client-server model, all recipes are stored
with the server. If a client wishes to access a recipe, it must request the recipe from
the specified server. Using the peer-to-peer model, a peer node could ask other
peer nodes for the specified recipe. The node (or perhaps nodes) with the
requested recipe could provide it to the requesting node. Notice how each peer
may act as both a client (i.e., it may request recipes) and a server (it may provide
recipes).
1.11:
An operating system for a machine of this type would need to remain in control at
all times. This could be accomplished by two methods:

a. Software interpretation of all user programs. The software interpreter would
provide, in software, what the hardware does not provide.

b. Requiring that all programs be written in high-level languages so that all
object code is compiler-produced. The compiler would generate the protection
checks that the hardware is missing.

1.12:
Batch systems do not have to be concerned with interacting with a user as much as
a personal computer. As a result, an operating system for a PC must be concerned
with response time for an interactive user, usually at the expense of efficiency.
Personal computer operating systems are not concerned with maximal use.
Mainframe computers have higher I/O capacity and need more complex
algorithms to keep the system components busy for maximum output.

1.13:
The following operations need to be privileged: Set value of timer, clear memory,
turn off interrupts, modify entries in device-status table, access I/O device. The rest
can be performed in user mode.

1.14:
 In single-processor systems, the memory needs to be updated when a processor
issues updates to cached values. These updates can be performed immediately or
in a lazy manner. In a multiprocessor system, different processors might be caching
the same memory location in their local caches. When updates are made, the other
cached locations need to be invalidated or updated. In distributed systems,
consistency of cached memory values is not an issue. However, consistency
problems might arise when a client caches file data.
1.15:

Advantages: An open-source operating system is free to use, distribute, and modify.
It has lower costs; in most cases it costs only a fraction of its proprietary
counterparts. An open-source operating system is more secure, as the code is
accessible to everyone. Anyone can fix bugs as they are found, and users do not
have to wait for the next release. The fact that the code is continuously analyzed by
a large community produces secure and stable code. Open source is not dependent
on the company or author that originally created it; even if the company fails, the
code continues to exist and be developed by its users. Also, it uses open standards
accessible to everyone; thus, it does not have the problem of incompatible formats
that exists in proprietary software.

Disadvantages: The main disadvantage of an open-source operating system is that
it is not straightforward to use. Open-source operating systems like Linux cannot be
learned in a day; they require effort and possibly training before you are able to
master them. You may need to hire a trained person to make things easier, but this
incurs additional costs. There is a shortage of applications that run on open-source
platforms; therefore, switching to an open-source platform involves a compatibility
analysis of all the other software in use that runs on proprietary platforms. In
addition, there are many ongoing parallel developments of open-source operating
systems and software, which creates confusion about which functionalities are
present in which versions.

1.16:
Clustered systems are typically constructed by combining multiple computers into a
single system to perform a computational task distributed across the cluster.
Multiprocessor systems, on the other hand, could be a single physical entity
comprising multiple CPUs. A clustered system is less tightly coupled than a
multiprocessor system. Clustered systems communicate using messages, while
processors in a multiprocessor system could communicate using shared memory.
In order for two machines to provide a highly available service, the state on the two
machines should be replicated and should be consistently updated. When one of
the machines fails, the other can then take over the functionality of the failed
machine.
1.17:
The main difficulty is keeping the operating system within the fixed time constraints
of a real-time system. If the system does not complete a task in a certain time
frame, it may cause a breakdown of the entire system it is running. Therefore, when
writing an operating system for a real-time system, the writer must be sure that the
scheduling schemes do not allow response time to exceed the time constraint.

1.18:
The CPU can initiate a DMA operation by writing values into special registers that
can be independently accessed by the device. The device initiates the
corresponding operation once it receives a command from the CPU. When the
device is finished with its operation, it interrupts the CPU to indicate the completion
of the operation. Both the device and the CPU can be accessing memory
simultaneously. The memory controller provides access to the memory bus in a fair
manner to these two entities. A CPU might therefore be unable to issue memory
operations at peak speeds, since it has to compete with the device in order to obtain
access to the memory bus.
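
As a rough sketch of the CPU side, assuming a device with memory-mapped
control registers (the addresses and register layout below are entirely
hypothetical and vary by device), starting a DMA transfer might look like this:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA registers, for illustration only. */
    #define DMA_SRC  ((volatile uint32_t *)0xFFFF0000u)  /* source address */
    #define DMA_DST  ((volatile uint32_t *)0xFFFF0004u)  /* destination    */
    #define DMA_LEN  ((volatile uint32_t *)0xFFFF0008u)  /* byte count     */
    #define DMA_CTRL ((volatile uint32_t *)0xFFFF000Cu)  /* 1 = start      */

    void dma_start(uint32_t src, uint32_t dst, uint32_t len) {
        *DMA_SRC  = src;   /* CPU writes the special registers ...       */
        *DMA_DST  = dst;
        *DMA_LEN  = len;
        *DMA_CTRL = 1;     /* ... then the device runs the transfer and  */
    }                      /* interrupts the CPU when it completes.      */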

1.19:
For real-time systems, the operating system needs to support virtual memory and
time sharing in a fair manner. For handheld systems, the operating system needs to
provide virtual memory, but does not need to provide time sharing. Batch
programming is not necessary in either setting.

1.20:
 Although most systems only distinguish between user and kernel modes, some
CPUs have supported multiple modes. Multiple modes could be used to provide a
finer-grained security policy. For example, rather than distinguishing between just
user and kernel mode, you could distinguish between different types of user mode.
Perhaps users belonging to the same group could execute each other’s code. The
machine would go into a specified mode when one of these users was running
code. When the machine was in this mode, a member of the group could run code
belonging to anyone else in the group. Another possibility would be to provide
different distinctions within kernel code. For example, a specific mode could allow
USB device drivers to run. This would mean that USB devices could be serviced
without having to switch to kernel mode, thereby essentially allowing USB device
drivers to run in a quasi-user/kernel mode.

1.21:

a. Batch. Jobs with similar needs are batched together and run through the
computer as a group by an operator or automatic job sequencer. Performance is
increased by attempting to keep CPU and I/O devices busy at all times through
buffering, off-line operation, spooling, and multiprogramming. Batch is good for
executing large jobs that need little interaction; it can be submitted and picked up
later.

b. Interactive. This system is composed of many short transactions where the
results of the next transaction may be unpredictable. Response time needs to be
short (seconds) since the user submits and waits for the result.

c. Time sharing. This system uses CPU scheduling and multiprogramming to
provide economical interactive use of a system. The CPU switches rapidly from one
user to another. Instead of having a job defined by spooled card images, each
program reads its next control card from the terminal, and output is normally
printed immediately to the screen.

d. Real time. Often used in a dedicated application, this system reads information
from sensors and must respond within a fixed amount of time to ensure correct
performance.

e. Network. Provides operating system features across a network, such as file
sharing.

f. Parallel. Used in systems where there are multiple CPUs, each running the same
copy of the operating system. Communication takes place across the system bus.

g. Distributed. This system distributes computation among several physical
processors. The processors do not share memory or a clock. Instead, each
processor has its own local memory. They communicate with each other through
various communication lines, such as a high-speed bus or local area network.

h. Clustered. A clustered system combines multiple computers into a single system
to perform a computational task distributed across the cluster.

i. Handheld. A small computer system that performs simple tasks such as
calendars, email, and web browsing. Handheld systems differ from traditional
desktop systems in having smaller memory and display screens and slower
processors.

1.22:
Symmetric multiprocessing treats all processors as equals, and I/O can be
processed on any CPU. Asymmetric multiprocessing has one master CPU, and the
remaining CPUs are slaves. The master distributes tasks among the slaves, and I/O
is usually done by the master only. Multiprocessors can save money by not
duplicating power supplies, housings, and peripherals. They can execute programs
more quickly and can have increased reliability. They are also more complex in both
hardware and software than uniprocessor systems.

1.23:

 a. Mainframes: memory and CPU resources, storage, network bandwidth.

b. Workstations: memory and CPU resources.

c. Handheld computers: power consumption, memory resources.

1.24:
An interrupt is a hardware-generated change-of-flow within the system. An
interrupt handler is summoned to deal with the cause of the interrupt; control is
then returned to the interrupted context and instruction. A trap is a software-
generated interrupt. An interrupt can be used to signal the completion of an I/O to
obviate the need for device polling. A trap can be used to call operating system
routines or to catch arithmetic errors.
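
A small C program can make the trap concrete. Here an integer division by zero
traps into the kernel, which delivers SIGFPE to the process; this is a sketch
assuming a POSIX system where integer divide-by-zero raises that signal:

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_fpe(int sig) {
        (void)sig;
        write(2, "trapped: arithmetic error\n", 26);
        _exit(1);                  /* resuming after SIGFPE is not safe */
    }

    int main(void) {
        signal(SIGFPE, on_fpe);
        volatile int zero = 0;
        printf("%d\n", 1 / zero); /* traps into the kernel, then SIGFPE */
        return 0;
    }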

1.25:

SMP means that all processors are peers; no master-slave relationship exists
between processors. In a typical SMP architecture, each processor has its own
registers, as well as a private or local cache; however, all processors share physical
memory. The benefit of this model is that many processes can run simultaneously
(N processes can run if there are N CPUs) without causing a significant deterioration
of performance. However, we must carefully control I/O to ensure that the data
reach the appropriate processor. Also, since the CPUs are separate, one may be
sitting idle while another is overloaded, resulting in inefficiencies.

1.26:
Consider the following two alternatives: asymmetric clustering and parallel
clustering. With asymmetric clustering, one host runs the database application with
the other host simply monitoring it. If the server fails, the monitoring host becomes
the active server. This is appropriate for providing redundancy. However, it does
not utilize the potential processing power of both hosts. With parallel clustering, the
database application can run in parallel on both hosts. The difficulty in implementing
parallel clusters is providing some form of distributed locking mechanism for files
on the shared disk.
EXERCISE II:
2.1:
-Creating and Deleting Files
-Creating and Deleting Directories
-File Manipulation Instructions
-Mapping to Permanent Storage
-Backing Up Files

2.2:
Keeping track of which parts of memory are currently being used and by whom

Deciding which processes are to be loaded into memory when memory space
becomes available

Allocating and deallocating memory space as needed


2.3:
Java made the choice of being cross-platform by using a VM. In essence, this
means that Java code is often compiled to bytecode, then bytecode is loaded in
a virtual machine that interprets it on the target architecture. Even a simple
(non-optimising) JIT compiler can be a win here by removing the interpretation
overhead of the emulation layer.
Java is an object-oriented programming language (OOPL). In OOPLs,
method calls are not known at compile time—that is, method calls are
dynamically dispatched and depend entirely on runtime values. This is a tricky thing
for compilers because they can’t optimise what they don’t know. By moving the
compiler to the runtime, the compiler has access to this information, and can
do things like resolving methods and inlining them instead of going through all
of the overhead of a virtual method call. For a language that relies on this for
almost everything, this kind of optimisation is really important!
Java has open-world assumptions. That is, when compiled, Java programs don’t
really assume that the entire program is fixed at that time. In fact, loading, creating,
and replacing classes while the program is running is a very common thing to do—
hot patching and deploys depend on this. Without a JIT, Java would need to set
strict boundaries on what code the compiler knows about, and how these
components are linked. The compiler wouldn’t be able to do anything about
interactions between these components because they could be anything at
runtime. Static languages solve this in DLLs, and every call to a DLL will need to pay
a huge price, since it can’t be optimised [2]. With a JIT compiler, the JVM can just
recompile the affected parts.
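
To illustrate why dynamic dispatch blocks ahead-of-time optimisation, here is a
rough C analogue using a function pointer (the shape example is invented): the
call target is a runtime value, so a static compiler cannot inline it, while a JIT
that observes only one target reaching the call site can.

    /* A "virtual method" in C: the target of the call through 'area'
     * is not known until runtime, so it cannot be inlined statically. */
    typedef struct shape {
        double (*area)(const struct shape *self);  /* dispatched call */
        double r;
    } shape;

    static double circle_area(const shape *s) { return 3.14159 * s->r * s->r; }

    double total_area(const shape **shapes, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; i++)
            sum += shapes[i]->area(shapes[i]);     /* resolved at runtime */
        return sum;
    }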

2.4:
One class of services provided by an operating system is to enforce protection
between different processes running concurrently in the system. Processes are
allowed to access only those memory locations that are associated with their
address spaces. Also, processes are not allowed to corrupt files associated with
other users. A process is also not allowed to access devices directly without
operating system intervention. The second class of services provided by an
operating system is to provide new functionality that is not supported directly by
the underlying hardware. Virtual memory and file systems are two such examples
of new services provided by an operating system.

2.5:
Mechanism and policy must be separate to ensure that systems are easy to modify.
No two system installations are the same, so each installation may want to tune the
operating system to suit its needs. With mechanism and policy separate, the policy
may be changed at will while the mechanism stays unchanged. This arrangement
provides a more flexible system.

2.6:
Yes, it is possible to develop a new command interpreter using the system-call
interface on those operating systems where the interpreter is not tightly integrated
into the system.

2.7:
The purpose of a command interpreter is to provide a simple environment in which
users may perform common computing tasks. The command interpreter should be
thought of as a software layer that sits between the user and the other systems
programs. The command interpreter does its job by reading commands from the
user (or from a file, in the case of batch systems) and then performing the
commands. It performs the commands by directly making system calls itself, or by
executing other programs to perform the requested task.
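
A minimal sketch of such an interpreter, assuming a POSIX system (it handles
only single-word commands with no arguments): read a command, fork a child,
exec the requested program, and wait for it to finish.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        char line[256];
        for (;;) {
            printf("> ");
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))
                break;                            /* EOF ends the shell */
            line[strcspn(line, "\n")] = '\0';     /* strip the newline  */
            if (line[0] == '\0')
                continue;
            if (fork() == 0) {                    /* child runs command */
                execlp(line, line, (char *)NULL);
                perror("exec");                   /* reached on failure */
                _exit(127);
            }
            wait(NULL);                           /* parent waits       */
        }
        return 0;
    }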

2.8:
The system is easy to debug, and security problems are easy to solve. Virtual
machines also provide a good platform for operating system research, since many
different operating systems may run on one physical system.

2.9:
The layered approach breaks up the operating system into different layers, which
allows implementers to change the inner workings of the system and increases
modularity. In the layered approach, the bottom layer is the hardware, while the
highest layer is the user interface. This approach requires some separation between
components so that they can be divided into layers; as a result, the major difficulty
with the layered approach is appropriately defining the various layers. Consider, for
example, virtual memory and storage.

Virtual memory was developed at a time when physical memory was expensive. It is
a memory-management capability of an operating system that uses hardware and
software to allow the computer to compensate for physical memory shortages by
temporarily transferring data from RAM to disk storage. The virtual memory and
storage systems are related to each other: during program execution we might need
to map a file into the virtual address space, while virtual memory normally uses the
storage system to provide backing store for data not currently residing in memory.
At the same time, data headed for the file system may be temporarily stored in
physical memory before being sent to the disk. As a result, we need to carefully
plan the storage, virtual memory, and file-system layers so that each can interact
with the others.
2.10:
With layers, changes made to one layer do not have to affect another layer.
When one is working to debug or modify the system, they will only change the layer
they are currently working on. A certain section of the code can be changed without
the need to understand or know the details of the other layer. The information is
only stored where it will be used and accessible in only a few ways. Therefore, the
bugs will only affect a certain area.

The disadvantages are: There is data overhead because of the appending of
multiple headers to the data. Another possible disadvantage is that there must be
at least one protocol standard per layer. With so many layers, it takes a long time
to develop and promulgate the standards.

2.11:
A host operating system is the operating system that is in direct communication
with the hardware. It runs in kernel mode and has direct access to all of the
devices on the physical machine. The guest operating system runs on top of a
virtualization layer, and all of the physical devices are virtualized. A host operating
system should be as modular and thin as possible, so that the virtualization of the
hardware is as close to the physical hardware as possible, and so that
dependencies in the host operating system don't restrict operation of the guest
operating system.

2.12:

- Pass parameters in registers (a sketch of this method follows below).
- Pass the starting addresses of blocks of parameters in registers.
- Parameters can be placed, or pushed, onto the stack by the program and popped
off the stack by the operating system.
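
For the first method, a sketch assuming Linux: the C library's syscall() wrapper
places the call number and the parameters in the registers the kernel expects
(for example, rdi, rsi, and rdx on x86-64).

    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello via registers\n";
        /* Equivalent to write(1, msg, sizeof msg - 1): the call number
         * and the three parameters travel to the kernel in registers. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }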

2.13:
Benefits typically include the following:
(a) adding a new service does not require modifying the kernel,
(b) it is more secure as more operations are done in user mode than in kernel mode,
and
(c) a simpler kernel design and functionality typically results in a more reliable
operating system. User programs and system services interact in a microkernel
architecture by using interprocess communication mechanisms such as messaging.
These messages are conveyed by the operating system. The primary disadvantages
of the microkernel architecture are the overheads associated with interprocess
communication and the frequent use of the operating system's messaging functions
in order for user programs and system services to interact.

2.14:
In Unix systems, a fork system call followed by an exec system call must be
performed to start a new process. The fork call clones the currently executing
process, while the exec call overlays a new process based on a different executable
over the calling process.
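
A minimal POSIX sketch of the two steps (/bin/ls is just an example program):

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();             /* clone the current process    */
        if (pid == 0) {
            execl("/bin/ls", "ls", (char *)NULL);  /* overlay the clone */
            perror("execl");            /* reached only if exec fails   */
            return 1;
        }
        waitpid(pid, NULL, 0);          /* parent waits for the child   */
        return 0;
    }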

2.15:
The two models of interprocess communication are the message-passing model and
the shared-memory model. Message passing is useful for exchanging smaller
amounts of data, because no conflicts need be avoided. It is also easier to
implement than shared memory for intercomputer communication. Shared
memory allows maximum speed and convenience of communication, since it can
be done at memory transfer speeds when it takes place within a computer.
However, this method compromises protection and synchronization between
the processes sharing memory.
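
The message-passing model can be sketched with a POSIX pipe: the kernel copies
the message between the two processes, so no memory is shared and no user-level
synchronization is needed.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        char buf[64];
        pipe(fds);                          /* fds[0] = read, fds[1] = write */
        if (fork() == 0) {                  /* child: the sender             */
            const char *msg = "small message";
            write(fds[1], msg, strlen(msg) + 1);
            _exit(0);
        }
        read(fds[0], buf, sizeof buf);      /* parent: the receiver          */
        printf("received: %s\n", buf);
        wait(NULL);
        return 0;
    }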

2.16:
Synthesis is impressive due to the performance it achieves through on-the-fly
compilation. Unfortunately, it is difficult to debug problems within the kernel due to
the fluidity of the code. Also, such compilation is system specific, making Synthesis
difficult to port (a new compiler must be written for each architecture).

2.17:
The modular kernel approach requires subsystems to interact with each other
through carefully constructed interfaces that are typically narrow (in terms of the
functionality that is exposed to external modules). The layered kernel approach is
similar in that respect. However, the layered kernel imposes a strict ordering of
subsystems, such that subsystems at the lower layers are not allowed to invoke
operations corresponding to the upper-layer subsystems. There are no such
restrictions in the modular-kernel approach, wherein modules are free to invoke
each other without any constraints.
2.18:
Consider a system that would like to run both Windows XP and three different
distributions of Linux (e.g., RedHat, Debian, and Mandrake). Each operating system
will be stored on disk. During system boot-up, a special program (which we will call
the boot manager) will determine which operating system to boot into. This means
that rather than initially booting into an operating system, the boot manager will
first run during system startup. It is this boot manager that is responsible for
determining which system to boot into. Typically, boot managers must be stored at
certain locations on the hard disk that are recognized during system startup, such
as the master boot record.

2.19:
Each device can be accessed as though it was a file in the file system.
Since most of the kernel deals with devices through this file interface,
it is relatively easy to add a new device driver by implementing the
hardware-specific code to support this abstract file interface. Therefore,
this benefits the development of both user program code, which can
be written to access devices and files in the same manner, and device-
driver code, which can be written to support a well-defined API. The
disadvantage with using the same interface is that it might be difficult
to capture the functionality of certain devices within the context of the
file access API, thereby resulting in either a loss of functionality or a
loss of performance. Some of this could be overcome by the use of the
ioctl operation that provides a general-purpose interface for processes
to invoke operations on devices.
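
The uniform interface is easy to demonstrate, assuming a Unix system: the device
file /dev/urandom is opened and read with exactly the same calls as a regular file.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        unsigned char byte;
        int fd = open("/dev/urandom", O_RDONLY);  /* a device, opened as a file */
        if (fd < 0 || read(fd, &byte, 1) != 1)    /* read() works unchanged     */
            return 1;
        printf("random byte: %u\n", byte);
        close(fd);
        return 0;
    }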

2.20:
One could issue periodic timer interrupts and monitor what instructions
or what sections of code are currently executing when the interrupts
are delivered. A statistical profile of which pieces of code were active
should be consistent with the time spent by the program in different
sections of its code. Once such a statistical profile has been obtained, the
programmer could optimize those sections of code that are consuming
more of the CPU resources.
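
A sketch of this technique using POSIX profiling timers (the two "phases" below
are stand-ins for sections of code): each SIGPROF delivery records where the
program currently is, and the sample counts approximate relative CPU time.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>

    static volatile sig_atomic_t current_phase;   /* set by the program  */
    static volatile long samples[2];              /* hits seen per phase */

    static void on_tick(int sig) { (void)sig; samples[current_phase]++; }

    int main(void) {
        struct itimerval it = { {0, 10000}, {0, 10000} };  /* 10 ms ticks */
        signal(SIGPROF, on_tick);
        setitimer(ITIMER_PROF, &it, NULL);

        volatile double x = 0;
        current_phase = 0;
        for (long i = 0; i < 50000000; i++)  x += i;       /* phase 0     */
        current_phase = 1;
        for (long i = 0; i < 100000000; i++) x += i;       /* phase 1     */

        printf("phase 0: %ld samples, phase 1: %ld samples\n",
               samples[0], samples[1]);
        return 0;
    }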

2.21:
For certain devices, such as handheld PDAs and cellular telephones, a disk with a
file system may not be available for the device. In this situation, the operating
system must be stored in firmware.
