
Name: Mohamed Abdelrahman Anwar
ID: 20011634

Operating Systems
Sheet 2
OS Principles

2.1 OPERATING SYSTEM OBJECTIVES AND FUNCTIONS


1.
• Program Development: The OS provides a variety of
facilities and services, such as editors and debuggers, to
assist the programmer in creating programs. Typically,
these services are in the form of utility programs that,
while not strictly part of the core of the OS, are supplied
with the OS, and are referred to as application program
development tools.
• Access to I/O devices: Each I/O device requires its own
peculiar set of instructions or control signals for operation.
The OS provides a uniform interface that hides these
details so programmers can access such devices using
simple reads and writes.
• System access: For shared or public systems, the OS
controls access to the system as a whole and to specific
system resources. The access function must provide
protection of resources and data from unauthorized users
and must resolve conflicts for resource contention.

2. The kernel, or nucleus, contains the most frequently used functions of the OS. It is stored on secondary storage (such as a hard drive or an SSD) and is loaded into main memory, where it stays resident while the system runs.

3. I/O, main and secondary memory, and processor execution time.

2.2 THE EVOLUTION OF OPERATING SYSTEMS


4. A user program executes in a user mode, in which certain
areas of memory are protected from the user’s use, and in
which certain instructions may not be executed. The monitor
executes in a system mode, or what has come to be called
kernel mode, in which privileged instructions may be executed,
and in which protected areas of memory may be accessed.
These two modes are needed to prevent user programs from executing privileged instructions or accessing protected memory (for example, the region holding the monitor's code and data), which would otherwise corrupt or crash the OS.
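As an extra illustration (not part of the textbook answer), the small C program below shows the two modes in action: the program itself runs in user mode and cannot touch the terminal hardware directly, so it asks the kernel to do the privileged I/O through the write() system call, which switches to kernel mode and back.

#include <string.h>
#include <unistd.h>   /* POSIX write() system call wrapper */

int main(void) {
    const char *msg = "hello from user mode\n";

    /* This code runs in user mode. write() traps into the kernel,
       which runs in kernel (system) mode, performs the privileged
       I/O on our behalf, and then returns control to user mode. */
    write(STDOUT_FILENO, msg, strlen(msg));
    return 0;
}
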
5. Time slicing is a technique in which a system clock generates
interrupts at a constant rate. At each clock interrupt, the OS
regains control and can assign the processor to another user.
Thus, at regular time intervals, the current user is preempted
and another user is loaded in.
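The following is a rough user-space sketch I am adding to illustrate the idea (it is not how a kernel is written): a periodic timer signal plays the role of the clock interrupt, and the signal handler is the point where a real OS would preempt the current user and dispatch another one.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

/* Plays the role of the clock-interrupt handler: in a real OS this is
   where the scheduler would preempt the running user and pick another. */
static void on_tick(int sig) {
    (void)sig;
    ticks++;
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGALRM, &sa, NULL);

    /* Generate an "interrupt" every 100 ms, i.e., the time slice. */
    struct itimerval slice;
    slice.it_value.tv_sec = 0;
    slice.it_value.tv_usec = 100000;
    slice.it_interval = slice.it_value;
    setitimer(ITIMER_REAL, &slice, NULL);

    while (ticks < 10)   /* "run the current user" until preempted 10 times */
        pause();         /* wait for the next clock interrupt */

    printf("received %d clock interrupts\n", (int)ticks);
    return 0;
}
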

6.
a. Serial System: In a serial system, tasks are executed one after
the other. Each task must be completed before the next task
can begin. The system is not designed for multitasking, and
there is no overlap between tasks.
Essential Properties of a Serial System:
• Only one task is executed at a time.
• The system must wait for the current task to be completed
before the next task can begin.
• The system may be slow and inefficient, as tasks cannot
overlap.
b. Batch System: In a batch system, multiple jobs are executed
together in batches. A batch of jobs is submitted to the system,
and the system executes them one by one without any user
interaction until the batch is complete.
Essential Properties of a Batch System:
• Multiple jobs are executed together in batches.
• The system executes the jobs without any user interaction.
• Once a batch is submitted, the user must wait for the
entire batch to complete before receiving any results.
c. Multiprogramming System: In a multiprogramming system,
multiple programs are loaded into memory simultaneously, and
the CPU switches between them. This allows for overlapping of
CPU and I/O operations, making the system more efficient.
Essential Properties of a Multiprogramming System:
• Multiple programs are loaded into memory
simultaneously.
• The CPU switches between programs, allowing for
overlapping of CPU and I/O operations.
• The system is more efficient than a serial system, as tasks
can overlap.
d. Time-sharing System: In a time-sharing system, multiple
users can access the system simultaneously. Each user is given
a time slice, during which they can execute their programs. The
CPU switches between users, allowing each user to interact
with the system in real-time.
Essential Properties of a Time-sharing System:
• Multiple users can access the system simultaneously.
• Each user is given a time slice to execute their programs.
• The CPU switches between users, allowing each user to
interact with the system in real-time.

e. Real-time System: In a real-time system, tasks must be
completed within a specific time frame. Failure to complete a
task within the allotted time can result in system failure or
other serious consequences.
Essential Properties of a Real-time System:
• Tasks must be completed within a specific time frame.
• Failure to complete a task within the allotted time can
result in system failure or other serious consequences.
• The system must be able to respond quickly to external
events or inputs.
7.
Job 1: I/O for 10ms then CPU for 3ms then I/O for 10ms
Job 2: I/O for 12ms then CPU for 5ms then I/O for 12ms
Job 3: I/O for 5ms then CPU for 4ms then I/O for 5ms

In uniprogramming:
Jobs execute sequentially: job 1 takes 23 ms (10 + 3 + 10), then
job 2 runs and finishes at 52 ms, and job 3 finishes at 66 ms, so
the total time is 66 ms.

Total CPU time is 12 ms, so CPU utilization = (3+4+5) / (23+29+14) * 100% = 18.18%
In multiprogramming:
All three jobs are loaded into memory simultaneously and
executed concurrently. Assuming the I/O devices can operate in
parallel, execution proceeds as follows: job 3 finishes its first
I/O at 5 ms and holds the CPU from 5 to 9 ms; job 1 finishes its
I/O at 10 ms and holds the CPU from 10 to 13 ms; job 2, whose I/O
completes at 12 ms, holds the CPU from 13 to 18 ms. The final I/O
phases end at 14 ms, 23 ms, and 30 ms respectively, so the total
time is 30 ms.

Total CPU time is 12 ms, so CPU utilization = (3+4+5) / 30 * 100% = 40%
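To double-check the arithmetic, here is a short C sketch (my own addition, with the job timings from the question hard-coded) that computes the utilization for both cases.

#include <stdio.h>

int main(void) {
    /* CPU bursts of the three jobs, in ms (from the question). */
    int cpu[3] = {3, 4, 5};
    int cpu_total = cpu[0] + cpu[1] + cpu[2];   /* 12 ms */

    int uni_elapsed   = 23 + 29 + 14;   /* jobs run back to back: 66 ms */
    int multi_elapsed = 30;             /* overlapped timeline ends at 30 ms */

    printf("uniprogramming utilization   = %.2f%%\n",
           100.0 * cpu_total / uni_elapsed);     /* about 18.18 */
    printf("multiprogramming utilization = %.2f%%\n",
           100.0 * cpu_total / multi_elapsed);   /* 40.00 */
    return 0;
}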

2.3 MAJOR ACHIEVEMENTS


8.
• Improper synchronization: It is often the case that a
routine must be suspended awaiting an event elsewhere
in the system.
• Failed mutual exclusion: It is often the case that more
than one user or program will attempt to make use of a
shared resource at the same time (a short example follows
this list).
• Nondeterminate program operation: The results of a
particular program normally should depend only on the
input to that program, and not on the activities of other
programs in a shared system. But when programs share
memory, and their execution is interleaved by the
processor, they may interfere with each other by
overwriting common memory areas in unpredictable ways.
Thus, the order in which various programs are scheduled
may affect the outcome of any program.
• Deadlocks: It is possible for two or more programs to be
hung up waiting for each other.
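As an illustration of the failed-mutual-exclusion problem (my own sketch, not from the textbook), the two threads below increment a shared counter without any lock. Because the read-modify-write updates interleave, the printed total is usually less than the expected 2000000; guarding the counter with a pthread mutex would remove the race.

#include <pthread.h>
#include <stdio.h>

static long counter = 0;   /* shared resource used without mutual exclusion */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;   /* read-modify-write is not atomic, so updates can be lost */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Expected 2000000, but interleaved updates typically lose counts. */
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}

Compiling with gcc -pthread and running it a few times typically prints a different, smaller total each run, which also demonstrates nondeterminate program operation.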

9.
• An executable program.
• The associated data needed by the program (variables,
workspace, buffers, etc.).
• The execution context of the program.
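As a rough sketch (the structure and field names are hypothetical, not an actual kernel's), these three elements could be bundled into a per-process record like this in C:

#include <stddef.h>
#include <stdint.h>

/* Execution context saved by the OS while the process is not running. */
struct context {
    uint64_t program_counter;
    uint64_t stack_pointer;
    uint64_t registers[16];
    int      priority;
    int      state;   /* e.g., ready, running, blocked */
};

/* Simplified per-process record: the program, its data, and its context. */
struct process {
    int            pid;
    void          *code;        /* the executable program */
    void          *data;        /* variables, workspace, buffers, ... */
    size_t         data_size;
    struct context ctx;         /* the execution context */
};
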
10.
• Process isolation: The OS must prevent independent
processes from interfering with each other’s memory,
both data and instructions.
• Automatic allocation and management: Programs should
be dynamically allocated across the memory hierarchy as
required. Allocation should be transparent to the
programmer. Thus, the programmer is relieved of
concerns relating to memory limitations, and the OS can
achieve efficiency by assigning memory to jobs only as
needed.
• Support of modular programming: Programmers should
be able to define program modules, and to dynamically
create, destroy, and alter the size of modules.
• Protection and access control: Sharing of memory, at any
level of the memory hierarchy, creates the potential for
one program to address the memory space of another.
This is desirable when sharing is needed by applications. At
other times, it threatens the integrity of programs and
even of the OS itself. The OS must allow portions of
memory to be accessible in various ways by various users.
• Long-term storage: Many application programs require
means for storing information for extended periods of
time, after the computer has been powered down.
11.
• Fairness
• Differential responsiveness
• Efficiency
12.
Job 1: CPU for 8s then I/O for 8s
Job 2: CPU for 4s then Disk for 14s
Job 3: CPU for 6s
Job 4: CPU for 4s then Printer for 16s

a. Uniprogramming:
• Turnaround time = job 1 -> 16, job 2 -> 34, job 3 -> 40, job
4 -> 60.
• Throughput = 4 / 60 ≈ 0.067 jobs/sec
• Processor utilization = (8+4+6+4) / 60 * 100% = 36.67%
b. Multiprogramming:
• Turnaround time = job 1 -> 30, job 2 -> 26, job 3 -> 20, job
4 -> 32.
• Throughput = 4 / 32 = 0.125 jobs/sec
• Processor utilization = (8+4+6+4) / 32 * 100% = 68.75%
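Assuming part a is the uniprogramming case and part b the multiprogramming case, the throughput and utilization figures above follow from the same bookkeeping as in question 7; the short C sketch below (my addition, with the elapsed times 60 s and 32 s taken from the answers) reproduces them.

#include <stdio.h>

int main(void) {
    int cpu_busy = 8 + 4 + 6 + 4;   /* total CPU seconds demanded by the 4 jobs */
    int jobs = 4;

    int elapsed[2] = {60, 32};      /* part a and part b elapsed times, in seconds */
    const char *label[2] = {"a (uniprogramming)", "b (multiprogramming)"};

    for (int i = 0; i < 2; i++) {
        printf("%s: throughput = %.3f jobs/sec, utilization = %.2f%%\n",
               label[i],
               (double)jobs / elapsed[i],
               100.0 * cpu_busy / elapsed[i]);
    }
    return 0;
}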

2.4 DEVELOPMENTS LEADING TO MODERN OPERATING SYSTEMS

13.
A monolithic kernel includes scheduling, the file system,
networking, device drivers, memory management, and more.
Typically, a monolithic kernel is implemented as a single
process, with all elements sharing the same address space.
A microkernel architecture assigns only a few essential functions
to the kernel, including address space management,
interprocess communication (IPC), and basic scheduling. Other
OS services are provided by processes, sometimes called
servers, that run in user mode and are treated like any other
application by the microkernel.

14. Multiprogramming refers to the technique of loading
multiple programs into memory and allowing the CPU to switch
between them for execution. The motivation behind the
development of multiprogramming was to improve the overall
system efficiency by maximizing the use of the CPU and other
resources. Multiprogramming enabled overlapping of I/O
operations with CPU processing, leading to improved system
throughput.

Multiprocessing refers to the use of multiple CPUs or cores in a
single computer system. The motivation behind the
development of multiprocessing was to achieve higher
performance and faster processing of large and complex
computations. Multiprocessing enables parallel processing of
multiple tasks, leading to faster execution of programs and
improved overall system performance.
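As an illustrative sketch (not part of the answer), the C program below splits a computation across four threads; on a multiprocessor or multicore machine the OS can schedule these threads on different CPUs at the same time, which is exactly the performance gain described above.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000L

static long long partial[NTHREADS];   /* one result slot per thread: no sharing, no lock needed */

/* Each thread sums its own slice of 0..N-1; on a multiprocessor the OS
   can run the threads on different CPUs/cores in parallel. */
static void *worker(void *arg) {
    long id = (long)(size_t)arg;
    long lo = id * (N / NTHREADS), hi = (id + 1) * (N / NTHREADS);
    long long sum = 0;
    for (long i = lo; i < hi; i++)
        sum += i;
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (long id = 0; id < NTHREADS; id++)
        pthread_create(&t[id], NULL, worker, (void *)(size_t)id);

    long long total = 0;
    for (long id = 0; id < NTHREADS; id++) {
        pthread_join(t[id], NULL);
        total += partial[id];
    }
    printf("sum = %lld\n", total);   /* N*(N-1)/2 = 499999500000 */
    return 0;
}
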
2.5 FAULT TOLERANCE

15.
• Virtual machines: Virtual machines, as will be discussed in
Chapter 14, provide a greater degree of application
isolation and hence fault isolation. Virtual machines can
also be used to provide redundancy, with one virtual
machine serving as a backup for another.

• Process isolation: As was mentioned earlier in this
chapter, processes are generally isolated from one another
in terms of main memory, file access, and flow of
execution. The structure provided by the OS for managing
processes provides a certain level of protection for other
processes from a process that produces a fault.
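A small C sketch I am adding to show this isolation: after fork(), the child has its own copy of the address space, so a stray write and a crash in the child do not affect the parent's data.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int value = 42;

    pid_t pid = fork();        /* create an isolated child process */
    if (pid == 0) {
        value = -1;            /* modifies only the child's copy of memory */
        abort();               /* the child "produces a fault" (SIGABRT) */
    }

    int status;
    waitpid(pid, &status, 0);  /* the parent observes the fault but is unaffected */
    printf("child terminated by signal %d; parent's value is still %d\n",
           WIFSIGNALED(status) ? WTERMSIG(status) : 0, value);
    return 0;
}
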
2.6 OS DESIGN CONSIDERATIONS FOR MULTIPROCESSOR AND MULTICORE
16.
• Parallelism within Applications: Most applications can be
subdivided into multiple tasks that can be executed in
parallel, with these tasks then being implemented as
multiple processes, perhaps each with multiple threads.
The difficulty is that the developer must decide how to
split up the application work into independently
executable tasks. That is, the developer must decide what
pieces can or should be executed asynchronously or in
parallel. It is primarily the compiler and the programming
language features that support the parallel programming
design process. But the OS can support this design process,
at minimum, by efficiently allocating resources among
parallel tasks as defined by the developer.
• Virtual Machine Approach: An alternative approach is to
recognize that with the ever-increasing number of cores
on a chip, the attempt to multiprogram individual cores to
support multiple applications may be a misplaced use of
resources. If instead, we allow one or more cores to be
dedicated to a particular process, then leave the processor
alone to devote its efforts to that process, we avoid much
of the overhead of task switching and scheduling
decisions. The multicore OS could then act as a hypervisor
that makes a high-level decision to allocate cores to
applications but does little in the way of resource
allocation beyond that.
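As a hedged, Linux-specific sketch of the "dedicate a core to a process" idea (my addition, and the choice of CPU 0 is just an example), sched_setaffinity() pins the calling process to one core so the scheduler will not move it.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);   /* request CPU 0 only (illustrative choice) */

    /* Pin this process to the chosen core; the kernel will no longer
       migrate it, approximating "dedicating a core to a process". */
    if (sched_setaffinity(getpid(), sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pinned to CPU %d\n", sched_getcpu());
    return 0;
}
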
