
Operating Systems UNIT 1 & 2 - Short Notes by ( www.techbloop.com )

Visit for more information: Tech Bloop ( www.techbloop.com )

Table of Contents
UNIT - 1
Introduction to Operating Systems:
Processes:
Threads:
Processor Scheduling:
UNIT - 2
Process Synchronization:
Memory Organization & Management:
Virtual Memory:

UNIT - 1

Introduction to Operating Systems:


An operating system (OS) is system software that acts as an intermediary between
computer hardware and user applications. It provides an environment for program
execution, manages hardware resources, and ensures the system's efficient and
secure operation.

An operating system (OS) is a fundamental piece of system software that performs
several crucial functions:

1. Mediator Between Hardware and User Applications: The OS serves as a bridge between the computer's physical hardware and the user applications. It translates user commands into instructions that the hardware can comprehend, enabling smooth interaction between the user and the computer.

2. Creates Environment for Program Execution: The OS is responsible for creating an environment that facilitates the seamless execution of programs. It provides the necessary services and resources that programs need to run efficiently.

3. Manages Hardware Resources: The OS oversees the allocation and management of hardware resources like processor time, memory space, and input/output devices. It ensures these resources are efficiently utilized, preventing wastage and promoting optimal system performance.

4. Guarantees System Efficiency: By managing resources effectively, the OS helps ensure the overall efficiency of the system, contributing to quicker response times, faster processing, and an improved user experience.

5. Ensures System Security: The OS plays a vital role in maintaining the system's
security. It protects data and resources from unauthorized access and potential
threats, ensuring the system remains safe and reliable.

Example: Windows, macOS, Linux, and Unix are popular desktop operating
systems. Android and iOS are operating systems for mobile devices.

Simple Batch Systems:

In a simple batch system, multiple jobs are submitted for processing as a batch.
The OS executes them one by one without user intervention.

Explanation: A user submits jobs to the system. The OS collects and processes
these jobs in batches, one after the other, without user interaction.

Multiprogrammed Batch Systems:

In multiprogrammed batch systems, the OS loads multiple jobs into memory simultaneously. This allows for better CPU utilization and overlapping of I/O and CPU operations.

Explanation: Instead of waiting for one job to finish, the OS loads multiple jobs
into memory. While one job is waiting for I/O, the CPU can work on another job,
maximizing resource utilization.

Time-Sharing Systems:

Time-sharing systems enable multiple users to interact with the computer simultaneously. They allocate CPU time in small slices, providing the illusion of concurrent execution for each user.

Explanation: In time-sharing, each user gets a small time slice to use the CPU.
The OS rapidly switches between users, giving the illusion of concurrent
execution.

Personal Computer Systems:

Personal computer operating systems like Windows, macOS, and Linux are
designed for single-user, desktop environments, and prioritize user-friendly
interfaces.

Explanation: These operating systems are tailored for individual users and
focus on ease of use and user experience.

Parallel Systems:

Parallel operating systems manage multiple processors or cores, enabling the execution of tasks in parallel, thus improving performance.

Explanation: Parallel systems distribute tasks across multiple CPUs to accelerate computation, common in high-performance computing and servers.

Distributed Systems:

Distributed operating systems manage a network of computers as a single system. They facilitate resource sharing, communication, and collaboration among networked machines.

Explanation: Distributed systems allow multiple machines to work together as a single entity, often seen in cloud computing and server clusters.

Real-Time Systems:

Real-time operating systems are designed for time-critical applications where response times are guaranteed. They are used in embedded systems, aerospace, and industrial control.

Explanation: Real-time systems must respond to events within a specified time frame. For example, an anti-lock braking system in a car requires a real-time OS to ensure timely braking.

OS as a Resource Manager:

One of the primary roles of an OS is to manage system resources efficiently,
including CPU, memory, devices, and user access.

Explanation: The OS ensures fair and efficient resource allocation, preventing conflicts and optimizing system performance.

Processes:
Introduction:

A process is an independent program in execution. It includes not only the program code but also its current activity: the values of the program counter, registers, and variables.

Processes are essential in multitasking operating systems, enabling the concurrent execution of multiple programs.

Process States:

A process in an operating system can be in one of several states, representing its current condition and execution status:

1. New: In this state, the process is being created, but it has not yet been
admitted to the system for execution.

2. Ready: A process in the ready state is prepared to run but is waiting for the
CPU to be allocated.

3. Running: When a process is in the running state, it is actively executing instructions on the CPU.

4. Blocked (or Waiting): A process in this state is unable to proceed because it is waiting for an event or resource, such as user input or a file to become available.

5. Terminated (or Exit): A process that has completed its execution is said to
be in the terminated state. After this state, the process is removed from the
process table.

Process Management:

Process management is a critical aspect of an operating system, involving various activities:

1. Process Creation: Creating new processes, often involving the duplication
of the current process (forking) or running a different program (exec).

2. Process Scheduling: The OS scheduler determines the order in which processes or threads are executed on the CPU, ensuring fair utilization of system resources.

3. Process Synchronization: Processes often need to cooperate and coordinate their activities. Process synchronization mechanisms, like semaphores and mutexes, are used to ensure orderly access to shared resources and avoid race conditions.

4. Interprocess Communication (IPC): Processes may need to communicate with each other. IPC mechanisms provide a means for processes to exchange data and coordinate their actions, such as message passing or shared memory.

5. Process Termination: When a process has completed its task or needs to be terminated for some other reason, the operating system ensures a graceful termination, freeing up resources and memory.

6. Process Control Block (PCB): The PCB is a data structure associated with
each process that contains important information about the process, such as
process state, program counter, registers, and scheduling information. The
PCB is used by the operating system to manage and control processes.

7. Context Switching: When the CPU switches from executing one process to
another, a context switch occurs. It involves saving the state of the currently
running process and loading the state of the next process. Context switches
are essential for multitasking.

8. Process Priority: Assigning priorities to processes allows the operating system to determine which process should be executed next based on scheduling algorithms.

9. Process Suspension and Resumption: Processes can be temporarily suspended and later resumed without being terminated. This is often used to manage system resources efficiently.

10. Process Lifecycle: Processes have a lifecycle, starting from creation and
ending with termination. Understanding this lifecycle helps in effective
process management.
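The fork/exec style of process creation described in item 1 can be sketched in Python. This is a POSIX-only sketch (os.fork is unavailable on Windows), and the exit status 7 is an arbitrary value chosen for illustration:

```python
import os

# POSIX-only sketch: fork() duplicates the calling process.
pid = os.fork()
if pid == 0:
    # Child: a real program might now call an exec() variant, e.g.
    # os.execvp("ls", ["ls"]), to replace itself with a different program.
    os._exit(7)            # child terminates with an arbitrary status
else:
    # Parent: wait for the child, then recover its exit status
    # (graceful termination, as in item 5).
    _, status = os.waitpid(pid, 0)
    child_code = os.WEXITSTATUS(status)
```

After the fork, both processes run the same code; the return value of fork() (0 in the child, the child's PID in the parent) is what tells them apart.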

Interrupts:

Interrupts are signals generated by hardware or software events that request the
CPU's attention. They can be used for device I/O, errors, and process
synchronization.

Interprocess Communication (IPC):

IPC mechanisms allow processes to communicate and share data. Examples include message passing, shared memory, and sockets.

Threads:
Introduction:

Threads are lightweight, smaller units of a process. They share the same
memory space and resources within a process but can execute independently.
Threads are used to achieve multitasking within a single process.

Thread States:

Threads can be in one of several states, representing their current condition and
execution status:

1. New: In this state, a thread is created but has not yet started executing.

2. Runnable (or Ready): A thread in the runnable state is ready to run but is
waiting for the CPU to be allocated.

3. Running: When a thread is in the running state, it is actively executing instructions on the CPU.

4. Blocked (or Waiting): A thread in this state is unable to proceed because it is waiting for an event or resource, such as user input or data from another thread.

5. Terminated (or Exit): A thread that has completed its execution is in the
terminated state. After this state, the thread is no longer active.

Thread Operations:

Threads within a process can perform various operations, such as:

1. Thread Creation: Creating new threads within a process. Threads often share the same code and data segments but have their own stack for local variables and function calls.

2. Thread Synchronization: Threads may need to synchronize their actions to
avoid data races and ensure safe access to shared resources.
Synchronization mechanisms like mutexes and semaphores are used.

3. Thread Termination: Threads can be terminated when they have completed their tasks or are no longer needed. Proper thread termination includes cleaning up resources.

4. Thread Joining: A thread can wait for another thread to finish its execution
by joining it. This is useful when one thread depends on the results of
another.

5. Thread Communication: Threads can communicate with each other through various means, including shared memory, message passing, and signaling.
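The creation, synchronization, and joining operations above can be sketched with Python's threading module (a minimal illustration; the worker function and names are invented for the example):

```python
import threading

results = []
lock = threading.Lock()          # synchronizes access to the shared list

def worker(n):
    # Threads share the process's memory, so `results` is visible to all.
    with lock:                   # thread synchronization
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()                    # thread creation and start
for t in threads:
    t.join()                     # thread joining: wait for each to finish
```

After the joins, all four workers have finished and `results` holds one square per thread, in whatever order the scheduler ran them.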

Threading Models:

Threading models define how threads are created and scheduled within a
process. Some common threading models include:

1. Many-to-One (M:1): In this model, many user-level threads are mapped to a single kernel-level thread. It is simple but doesn't provide true parallelism.

2. One-to-One (1:1): Each user-level thread is mapped to a separate kernel-level thread. It offers true parallelism but can be more resource-intensive.

3. Many-to-Many (M:N): This model combines the advantages of M:1 and 1:1
models. Multiple user-level threads are mapped to a smaller number of
kernel-level threads. It provides some level of parallelism while being
resource-efficient.

4. Hybrid Threading: In this model, a combination of user-level and kernel-level threads is used. It allows for both fine-grained control (user-level threads) and efficient system-level operations (kernel-level threads).

Example: Early Java virtual machines used a many-to-many model with user-level "green" threads; modern JVMs typically map each Java thread to a native kernel-level thread (a one-to-one model). The choice trades flexibility against efficiency.

Processor Scheduling:
Scheduling Levels:

Processor scheduling occurs at different levels within an operating system. The
primary levels are:

1. Long-term Scheduling: Also known as job scheduling, it selects processes from the job pool to bring them into memory for execution. It determines which jobs are admitted to the system.

2. Medium-term Scheduling: This level decides which processes should be swapped in and out of memory. It helps manage the degree of multiprogramming and memory utilization.

3. Short-term Scheduling: Often referred to as CPU scheduling, it determines which process or thread should run next on the CPU. It occurs frequently and influences system responsiveness.

Preemptive vs. Non-preemptive Scheduling:

Processor scheduling can be either preemptive or non-preemptive:

1. Preemptive Scheduling: In preemptive scheduling, a higher-priority process can interrupt and suspend the execution of a lower-priority one. Preemptive scheduling allows for better response times and is common in multitasking systems.

2. Non-preemptive Scheduling: Non-preemptive scheduling runs a process until it completes or enters a waiting state. It does not allow other processes to interrupt it. This can be simpler but may result in slower response times for high-priority tasks.

Priorities:

Assigning priorities to processes allows the operating system to determine which process should be executed next based on their importance or urgency. Priority-based scheduling ensures that more critical tasks are addressed first.

Scheduling Objectives:

Scheduling objectives are the goals a scheduler aims to achieve. Common objectives include:

1. CPU Utilization: Maximizing CPU usage to keep it busy as much as possible.

2. Throughput: Maximizing the number of processes completed in a given time.

3. Turnaround Time: Minimizing the time taken to execute a process from
submission to completion.

4. Waiting Time: Reducing the time processes spend waiting in the ready
queue.

5. Response Time: Minimizing the time taken for a process to respond to a user's input or request.

Scheduling Criteria:

Various criteria are used to evaluate and compare scheduling algorithms. Common criteria include:

1. CPU Utilization: The percentage of time the CPU is actively executing processes.

2. Throughput: The number of processes completed per unit of time.

3. Turnaround Time: The total time taken to execute a process, including waiting and execution time.

4. Waiting Time: The total time a process spends in the ready queue.

5. Response Time: The time taken for a process to start responding after a
user request.

Scheduling Algorithms:

Scheduling algorithms determine how processes are selected for execution. Some common scheduling algorithms include:

1. First-Come-First-Served (FCFS): Processes are executed in the order they arrive in the ready queue. Simple but may lead to poor response times for shorter jobs.

2. Shortest Job First (SJF): Selects the process with the shortest execution
time next. Reduces waiting time but can be challenging to predict.

3. Round Robin: Allocates a fixed time slice to each process in a cyclic manner. Provides fairness and responsiveness.

4. Priority Scheduling: Processes with higher priorities are executed first. Can
lead to starvation if lower-priority processes are constantly preempted.

5. Multilevel Queue Scheduling: Organizes processes into different priority queues, each with its own scheduling algorithm. Offers a balance between priorities and fairness.
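To make the trade-offs concrete, here is a small sketch comparing FCFS and SJF waiting times, using the classic textbook burst times 24, 3, 3 and assuming all processes arrive at time 0:

```python
def waiting_times(bursts):
    # Each process waits for the total burst time of those scheduled before it.
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

bursts = [24, 3, 3]                      # arrival order, all at time 0
fcfs = waiting_times(bursts)             # FCFS: run in arrival order
sjf = waiting_times(sorted(bursts))      # SJF: shortest burst first
avg_fcfs = sum(fcfs) / len(fcfs)         # average waiting time under FCFS
avg_sjf = sum(sjf) / len(sjf)            # average waiting time under SJF
```

Running the long job first makes the two short jobs wait 24 and 27 units (average 17); SJF drops the average to 3, illustrating why FCFS penalizes short jobs.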

Demand Scheduling:

Demand scheduling allows processes to request resources only when they need
them, optimizing resource utilization and reducing contention for resources.

Real-Time Scheduling:

Real-time scheduling guarantees that processes meet deadlines by using algorithms like Rate Monotonic Scheduling (RMS) and Earliest Deadline First (EDF).

UNIT - 2

Process Synchronization:
Mutual Exclusion:

Mutual exclusion is a fundamental concept in process synchronization, ensuring that only one process or thread can access a shared resource at a time to avoid data corruption and race conditions.
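A minimal sketch in Python, using threading.Lock as the mutual-exclusion primitive. Without the lock, the read-modify-write on `counter` could interleave between threads and lose updates:

```python
import threading

counter = 0
mutex = threading.Lock()             # the mutual-exclusion primitive

def increment(times):
    global counter
    for _ in range(times):
        with mutex:                  # at most one thread runs this section
            counter += 1             # read-modify-write, now race-free

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is exactly 4 * 10_000: no update was lost
```

The `with mutex:` block is the critical section; the lock guarantees that the increments from the four threads never interleave mid-update.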

Software Solution to Mutual Exclusion:

Software-based solutions for mutual exclusion include algorithms like Peterson's algorithm, Dekker's algorithm, and the bakery algorithm. These algorithms use shared variables and control structures to achieve mutual exclusion.

Hardware Solution to Mutual Exclusion:

Hardware-based solutions for mutual exclusion include atomic instructions provided by modern processors, like test-and-set and compare-and-swap, which ensure that only one thread can modify a shared variable at a time.

Semaphores:

Semaphores are synchronization mechanisms used to control access to shared resources. They are implemented as a data structure that includes a count and operations like wait (decrement) and signal (increment).
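A sketch using Python's threading.Semaphore, where acquire plays the role of wait and release the role of signal. The count is initialized to 2, so at most two threads hold the resource concurrently; the peak-tracking variables are only instrumentation invented for the example:

```python
import threading

sem = threading.Semaphore(2)     # counting semaphore: at most 2 holders
active, peak = 0, 0
guard = threading.Lock()         # protects the instrumentation counters

def use_resource():
    global active, peak
    sem.acquire()                # wait: decrement the count, block at zero
    with guard:
        active += 1
        peak = max(peak, active)
    # ... use the shared resource here ...
    with guard:
        active -= 1
    sem.release()                # signal: increment the count, wake a waiter

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even with six threads competing, the semaphore caps concurrency at its initial count of two.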

Critical Section Problems:

Critical section problems involve defining a set of rules to ensure that processes
or threads can access shared resources in a mutually exclusive and orderly
manner. The critical section is the part of the code where these rules are applied.
Key problems include:

1. Mutual Exclusion: Ensuring that only one process accesses the critical
section at a time.

2. Progress: Guaranteeing that a process that requests access to the critical section eventually gets it.

3. Bounded Waiting: Putting a bound on the number of times other processes are allowed to enter the critical section before a process is granted access.

Case Study on Dining Philosophers Problem:

The Dining Philosophers Problem is a classic synchronization problem that involves five philosophers sitting around a dining table, where each philosopher alternates between thinking and eating, using forks placed between them. The challenge is to avoid deadlocks and contention for forks.

Solution: Several solutions exist, such as the Chandy/Misra solution using message passing, or using semaphores and a mutex to manage fork acquisition and release.
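One lock-based solution can be sketched with resource ordering: each fork is a lock, and every philosopher picks up the lower-numbered fork first, which breaks the circular wait that causes deadlock. (This is not the Chandy/Misra solution; the meal count is an arbitrary small number so the run terminates.)

```python
import threading

MEALS = 3
forks = [threading.Lock() for _ in range(5)]   # one fork between each pair
meals = [0] * 5

def philosopher(i):
    left, right = i, (i + 1) % 5
    # Resource ordering: always acquire the lower-numbered fork first,
    # so no cycle of waiting philosophers can form.
    first, second = min(left, right), max(left, right)
    for _ in range(MEALS):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                  # eating
        # thinking happens here, outside both locks

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because one philosopher (the one between forks 4 and 0) reaches for its forks in the opposite order from the rest, the circular wait condition for deadlock can never hold, and every philosopher eventually eats.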

Case Study on Barber Shop Problem:

The Barber Shop Problem is another synchronization problem where customers arrive at a barber shop and need haircuts, but there is only one barber. The challenge is to ensure that customers get haircuts without deadlocks or overcrowding in the shop.

Solution: A solution may involve using semaphores to track the number of available seats in the shop's waiting room. Customers leave if the waiting room is full and are otherwise served in the order they arrive.
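A sketch of the semaphore-based solution (often called the sleeping barber). The customer count here is deliberately no larger than the number of seats so the run is deterministic; with more customers, late arrivals would leave when the waiting room is full:

```python
import threading

SEATS = 5
waiting = 0                          # customers in the waiting room
mutex = threading.Lock()             # protects `waiting`
customers = threading.Semaphore(0)   # barber sleeps until signaled
chair = threading.Semaphore(0)       # customer waits for the barber
served = 0

def barber(n):
    global waiting, served
    for _ in range(n):
        customers.acquire()          # sleep until a customer arrives
        with mutex:
            waiting -= 1
        served += 1                  # cut hair (only the barber writes this)
        chair.release()              # dismiss the customer

def customer():
    global waiting
    with mutex:
        if waiting == SEATS:
            return                   # waiting room full: leave
        waiting += 1
    customers.release()              # wake the barber
    chair.acquire()                  # wait for the haircut

N = 5                                # N <= SEATS, so nobody is turned away
b = threading.Thread(target=barber, args=(N,))
cs = [threading.Thread(target=customer) for _ in range(N)]
b.start()
for c in cs:
    c.start()
for c in cs:
    c.join()
b.join()
```

The two semaphores form a rendezvous: the barber blocks when the shop is empty, and each customer blocks until the barber has finished their haircut.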

These case studies illustrate the practical challenges in process synchronization and
the need for effective solutions to ensure that shared resources are used efficiently
and without conflicts.

Memory Organization & Management:


Memory Organization:

Memory organization refers to the structure of the computer's memory, which is
divided into different sections for various purposes. This includes the stack,
heap, code segment, data segment, and more.

Memory Hierarchy:

The memory hierarchy is a layered structure of different types of memory in a computer system. It ranges from registers and caches (fast but small) to RAM and secondary storage (slower but larger). Caches are closer to the CPU and store frequently used data.

Memory Management Strategies:

Memory management strategies involve managing memory efficiently, including allocating and deallocating memory for processes. Common strategies include first-fit, best-fit, worst-fit, and buddy memory allocation.
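The placement strategies can be sketched as functions choosing a hole from a list of free-block sizes (buddy allocation is omitted; the block sizes and request are invented for the example):

```python
def first_fit(blocks, request):
    # First block large enough wins.
    for i, size in enumerate(blocks):
        if size >= request:
            return i
    return None

def best_fit(blocks, request):
    # Smallest block that still fits: least leftover space.
    fits = [i for i, size in enumerate(blocks) if size >= request]
    return min(fits, key=lambda i: blocks[i]) if fits else None

def worst_fit(blocks, request):
    # Largest block: the leftover hole stays big enough to be useful.
    fits = [i for i, size in enumerate(blocks) if size >= request]
    return max(fits, key=lambda i: blocks[i]) if fits else None

free_blocks = [100, 500, 200, 300, 600]   # hole sizes in KB
# For a 212 KB request: first fit picks 500, best fit 300, worst fit 600.
```

The same request lands in a different hole under each policy, which is exactly the trade-off between search speed (first-fit) and fragmentation behavior (best-fit vs. worst-fit).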

Contiguous versus Non-Contiguous Memory Allocation:

Contiguous memory allocation assigns a single, contiguous block of memory to a process. Non-contiguous allocation divides memory into separate, non-contiguous blocks, allowing for more flexible allocation but potentially leading to fragmentation.

Partition Management Techniques:

Partition management techniques involve dividing memory into partitions to accommodate multiple processes. Techniques include fixed-size partitioning and dynamic partitioning (where partitions are created and resized as needed).

Logical versus Physical Address Space:

Logical address space refers to the addresses used by a program. Physical address space corresponds to the actual memory addresses in the hardware. Memory mapping is required to translate logical addresses to physical addresses.

Swapping:

Swapping is a memory management technique where parts of a process are temporarily moved to secondary storage (like a hard disk) when they are not in use, to free up memory for other processes.

Paging:

Paging is a memory management technique where physical memory is divided into fixed-size blocks (frames) and processes are divided into fixed-size blocks (pages). The operating system maps pages to frames using page tables.
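The page-table lookup can be sketched as simple arithmetic: split the logical address into a page number and an offset, look up the frame, and recombine. The page size and table contents below are invented for the example:

```python
PAGE_SIZE = 4096                     # 4 KB pages, a common choice
page_table = {0: 5, 1: 9, 2: 1}      # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]         # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

physical = translate(1 * PAGE_SIZE + 100)   # page 1, offset 100 -> frame 9
```

The offset passes through unchanged; only the page number is remapped, which is why page sizes are powers of two (the split is a cheap shift and mask in hardware).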

Segmentation:

Segmentation divides memory into segments, each of which can be allocated to a specific part of a process, like the code, data, or stack segment.

Segmentation with Paging:

Segmentation with paging combines the benefits of both techniques, allowing for
flexibility in memory allocation through segmentation and efficient use of memory
through paging.

Virtual Memory:
Demand Paging:

Demand paging is a virtual memory technique where only the parts of a process
that are needed are loaded into physical memory. This reduces the initial
memory requirement for a process.

Page Replacement:

When physical memory is full, page replacement algorithms are used to decide
which page should be evicted to make space for a new page from a process.

Page-replacement Algorithms:

Common page-replacement algorithms include FIFO (First-In-First-Out), LRU (Least Recently Used), and OPT (Optimal). These algorithms determine which page to replace based on various criteria.
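FIFO and LRU can be simulated in a few lines to count page faults on a reference string (a shortened version of a common textbook string; with 3 frames, LRU faults slightly less often than FIFO here):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    resident, order, faults = set(), deque(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(order.popleft())  # evict the oldest page
            resident.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    resident, faults = OrderedDict(), 0
    for p in refs:
        if p in resident:
            resident.move_to_end(p)            # refresh: most recently used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)   # evict least recently used
            resident[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
```

On this string with 3 frames, FIFO takes 10 faults and LRU 9: a hit refreshes a page's position under LRU but not under FIFO, which is the entire difference between the two policies.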

Performance of Demand Paging:

Demand paging can improve memory utilization but may lead to page faults,
which can degrade performance. Effective page-replacement algorithms help
minimize the impact of page faults.
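The cost of page faults is usually quantified with the effective access time: EAT = (1 - p) x memory access time + p x page-fault service time, where p is the page-fault rate. A quick calculation with illustrative numbers shows how sensitive performance is to p:

```python
def effective_access_time(mem_ns, fault_ns, p):
    # Weighted average of a normal access and a fault-serviced access.
    return (1 - p) * mem_ns + p * fault_ns

# 100 ns memory access, 8 ms (8,000,000 ns) fault service time:
# even one fault per 1,000 accesses inflates the average access ~80x.
eat = effective_access_time(100, 8_000_000, 0.001)
```

With p = 0.001 the effective access time is about 8,100 ns against a 100 ns raw access, which is why keeping the fault rate tiny matters so much.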

Thrashing:

Thrashing occurs when the system spends most of its time swapping pages in and out of memory due to high page-fault rates. It severely degrades system performance.

Demand Segmentation:

Demand segmentation loads segments into memory only when they are needed, applying the idea of demand paging to variable-sized segments. It allows for efficient use of memory and flexibility in memory allocation.

Overlay Concepts:

Overlaying is a technique used to manage limited memory resources by loading different parts of a program into memory as needed. It involves dividing a program into overlays, each of which can be loaded when required.

Tech Bloop: Navigating The Future ( www.techbloop.com ) 🚀
