
OPERATING SYSTEM (OS)

1.) OPERATING SYSTEM:

• An operating system is a program that manages a computer’s hardware.


• It also provides a basis for application programs and acts as an intermediary between the computer user and
the computer hardware.

2.) Process Management:

• A program does nothing unless its instructions are executed by a CPU.


• A program in execution, as mentioned, is a process.
• A time-shared user program such as a compiler is a process.
• A word-processing program being run by an individual user on a PC is a process. A system task, such as sending
output to a printer, can also be a process.

3.) SYSTEM TYPES:

1. Batch Operating System –
In this type of operating system, the user does not interact with the computer directly; jobs with similar needs are
batched together and run as a group.
2. Time-Sharing Operating System –
Each task is given a time slice in which to execute, so that all the tasks run smoothly.
3. Distributed Operating System –
Distributed operating systems run on a collection of independent, networked computers that appear to the user as a
single system. They are a comparatively recent development and are now widely used.
4. Network Operating System –
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other
networking functions.
5. Real-Time Operating System –
These types of OSs serve real-time systems. The time interval required to process and respond to inputs is very small.
This time interval is called response time.
6. Client/Server Network OS –
Client/server network operating systems run on networks that contain two types of nodes: servers and clients.

4.) COMPONENTS OF AN OPERATING SYSTEM:
The components of an operating system work together to make the various parts of a computer system function as a
whole. An operating system has the following components:
1. Process Management-The process management component is a procedure for managing many processes
running simultaneously on the operating system.
2. File Management - A file is a collection of related information defined by its creator.
3. Network Management - Network management is the process of administering and managing computer networks.
4. Main Memory Management - Main memory is a large array of words or bytes, each with its own address.
5. Secondary Storage Management - Because main memory is too small and too volatile to hold all programs and
data permanently, the system provides secondary storage to back it up.
6. I/O Device Management - The operating system hides the peculiarities of specific hardware devices from the
user.
7. Security Management - The various processes in an operating system must be protected from one another's
activities.
8. Command Interpreter System - One of the most important components of an operating system is its command
interpreter, which reads and executes user commands.

5.) OPERATING SYSTEM STRUCTURE:
The operating system may be implemented with the help of several structures. The structure of the operating system
is mostly determined by how its many common components are integrated and merged into the kernel. Various
structures are used in the design of operating systems, including the following:
Simple Structure - Many operating systems have a rather simple structure. They started as small systems and then
grew well beyond their original scope. A common example is MS-DOS.
Micro-Kernel Structure - This structure builds the OS by removing all non-essential components from the kernel and
implementing them as system- and user-level programs. The resulting smaller kernel is called a micro-kernel.
Layered Structure-One way to achieve modularity in the operating system is the layered approach. In this, the bottom
layer is the hardware and the topmost layer is the user interface.

In the layered approach, each upper layer is built on top of the layer below it, and each layer hides certain data
structures and operations from the layers above it.
One problem with the layered structure is that each layer must be carefully defined, because a layer can use only the
services of the layers below it.

6.) DEADLOCKS:

A deadlock is a situation in which each process in a set waits for a resource that is assigned to another process. None
of the processes can proceed, since the resource each one needs is held by another process that is itself waiting for a
resource to be released.
Assume there are three processes P1, P2, and P3, and three resources R1, R2, and R3. R1 is assigned to P1, R2 to P2,
and R3 to P3.
After some time, P1 requests R2, which is held by P2; P1 halts its execution since it cannot complete without R2. P2
then requests R3, which is held by P3, so P2 also stops. Finally, P3 requests R1, which is held by P1, so P3 stops as
well. The three processes now wait in a cycle, and none of them can ever proceed.

7.) PCB (PROCESS CONTROL BLOCK):

As a process executes, it changes state. A process may be in one of the following states:


● new: The process is being created
● running: Instructions are being executed
● waiting: The process is waiting for some event to occur
● ready: The process is waiting to be assigned to a processor
● terminated: The process has finished execution
The PCB stores the information associated with each process (it is also called a task control block):
● Process state – running, waiting, etc
● Program counter – location of instruction to next execute
● CPU registers – contents of all process-centric registers
● CPU scheduling information- priorities, scheduling queue pointers
● Memory-management information – memory allocated to the process
● Accounting information – CPU used, clock time elapsed since start, time limits
● I/O status information – I/O devices allocated to process, list of open files
PROCESS SCHEDULING:
● Maximize CPU use, quickly switch processes onto CPU for time sharing
● Process scheduler selects among available processes for next execution on CPU
● Maintains scheduling queues of processes
● Job queue – set of all processes in the system
● Ready queue – set of all processes residing in main memory, ready and waiting to execute
● Device queues – set of processes waiting for an I/O device
● Processes migrate among the various queues
SCHEDULERS:
● Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU
● Sometimes the only scheduler in a system
● Short-term scheduler is invoked frequently (milliseconds) ⇒ (must be fast)
● Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
● Long-term scheduler is invoked infrequently (seconds, minutes) ⇒ (may be slow)
● The long-term scheduler controls the degree of multiprogramming
● Processes can be described as either:
● I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
● CPU-bound process – spends more time doing computations; few very long CPU bursts
● Long-term scheduler strives for good process mix
● Medium-term scheduler can be added if the degree of multiprogramming needs to decrease
● Remove process from memory, store on disk, bring back in from disk to continue execution: swapping
CONTEXT SWITCH:
● When CPU switches to another process, the system must save the state of the old process and load the saved state for
the new process via a context switch
● Context of a process represented in the PCB
● Context-switch time is overhead; the system does no useful work while switching
INTERPROCESS COMMUNICATION

Processes executing concurrently in the operating system may be either independent processes or cooperating
processes.

• A process is independent if it cannot affect or be affected by the other processes executing in the system.
• A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly,
any process that shares data with other processes is a cooperating process

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data
and information. There are two fundamental models of interprocess communication: shared memory and message
passing.

Shared-Memory Systems: Interprocess communication using shared memory requires the communicating processes
to establish a region of shared memory. Typically, one process creates the region and other processes attach it to
their own address space.

Message-passing systems:

Message passing provides a mechanism to allow processes to communicate and to synchronize their actions without
sharing the same address space. It is particularly useful in a distributed environment. A message-passing facility provides
at least two operations:

send(message)
receive(message)

Messages sent by a process can be either fixed or variable in size.

If processes P and Q want to communicate, they must send messages to and receive messages from each other: a
communication link must exist between them. This link can be implemented in a variety of ways. We are concerned here
not with the link’s physical implementation but rather with its logical implementation. Here are several methods for
logically implementing a link and the send()/receive() operations:

• Direct or indirect communication

• Synchronous or asynchronous communication

• Automatic or explicit buffering

Naming

Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect
communication. Under direct communication, each process that wants to communicate must explicitly name the
recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:

• send(P, message)—Send a message to process P.

• receive(Q, message)—Receive a message from process Q.

A communication link in this scheme has the following properties:

• A link is established automatically between every pair of processes that want to communicate. The processes need to
know only each other’s identity to communicate.

• A link is associated with exactly two processes.

• Between each pair of processes, there exists exactly one link


INDIRECT: With indirect communication, messages are sent to and received from mailboxes, or ports.

OPERATIONS: create a new mailbox -> send and receive messages through the mailbox -> destroy the mailbox

The send() and receive() primitives are defined as follows:

• send(A, message)—Send a message to mailbox A.

• receive(A, message)—Receive a message from mailbox A.

Synchronization

Communication between processes takes place through calls to the send() and receive() primitives. There are
different design options for implementing each primitive. Message passing may be either blocking or nonblocking,
also known as synchronous and asynchronous.

• Blocking send. The sending process is blocked until the message is received by the receiving process or by the mailbox.

• Nonblocking send. The sending process sends the message and resumes operation.

• Blocking receive. The receiver blocks until a message is available.

• Nonblocking receive. The receiver retrieves either a valid message or a null.

Buffering:

Whether communication is direct or indirect, messages exchanged by communicating processes reside in a
temporary queue. Such queues can be implemented in three ways:

• Zero capacity. The queue has a maximum length of zero, so no messages can wait in it; the sender must block until the recipient receives the message.
• Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it.
• Unbounded capacity. The queue’s length is potentially infinite; thus, any number of messages can wait in it. The
sender never blocks.
