Operating Systems UNIT 1 & 2 - Short Notes by www.techbloop.com
Table of Contents
UNIT - 1
Introduction to Operating Systems:
Processes:
Threads:
Processor Scheduling:
UNIT - 2
Process Synchronization:
Memory Organization & Management:
Virtual Memory:
UNIT - 1
5. Ensures System Security: The OS plays a vital role in maintaining the system's
security. It protects data and resources from unauthorized access and potential
threats, ensuring the system remains safe and reliable.
Example: Windows, macOS, Linux, and Unix are popular desktop operating
systems. Android and iOS are operating systems for mobile devices.
In a simple batch system, multiple jobs are submitted for processing as a batch.
The OS executes them one by one without user intervention.
Explanation: A user submits jobs to the system. The OS collects and processes
these jobs in batches, one after the other, without user interaction.
Explanation: Instead of waiting for one job to finish, the OS loads multiple jobs
into memory. While one job is waiting for I/O, the CPU can work on another job,
maximizing resource utilization.
Explanation: In time-sharing, each user gets a small time slice to use the CPU.
The OS rapidly switches between users, giving the illusion of concurrent
execution.
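The rapid switching described above can be sketched as a round-robin loop: each job runs for one fixed time slice, then goes to the back of the queue if it is unfinished. This is an illustrative simulation only; the job names and slice size are invented.

```python
from collections import deque

def time_share(jobs, quantum):
    """Simulate time-sharing: each job runs for one quantum per turn.
    jobs: dict of name -> remaining time units. Returns the run order."""
    queue = deque(jobs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                 # this job gets the CPU for one slice
        remaining -= quantum
        if remaining > 0:                  # unfinished jobs rejoin the queue
            queue.append((name, remaining))
    return order

# Two users' jobs interleave, giving each the illusion of a dedicated CPU.
print(time_share({"userA": 3, "userB": 2}, quantum=1))
# → ['userA', 'userB', 'userA', 'userB', 'userA']
```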
Personal computer operating systems like Windows, macOS, and Linux are
designed for single-user, desktop environments, and prioritize user-friendly
interfaces.
Explanation: These operating systems are tailored for individual users and
focus on ease of use and user experience.
Parallel Systems:
Distributed Systems:
Real-Time Systems:
OS as a Resource Manager:
Processes:
Introduction:
Process States:
1. New: In this state, the process is being created, but it has not yet been
admitted to the system for execution.
2. Ready: A process in the ready state is prepared to run but is waiting for the
CPU to be allocated.
5. Terminated (or Exit): A process that has completed its execution is said to
be in the terminated state. After this state, the process is removed from the
process table.
Process Management:
6. Process Control Block (PCB): The PCB is a data structure associated with
each process that contains important information about the process, such as
process state, program counter, registers, and scheduling information. The
PCB is used by the operating system to manage and control processes.
7. Context Switching: When the CPU switches from executing one process to
another, a context switch occurs. It involves saving the state of the currently
running process and loading the state of the next process. Context switches
are essential for multitasking.
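The PCB and context switching work together: on a switch, the outgoing process's CPU state is saved into its PCB and the incoming process's saved state is loaded. A minimal sketch, with invented field names (a real kernel's PCB holds far more, and the "CPU" here is just a dict):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Minimal Process Control Block: a subset of the fields listed above."""
    pid: int
    state: str = "ready"
    registers: dict = field(default_factory=dict)  # includes program counter
    priority: int = 0

def context_switch(current: PCB, nxt: PCB, cpu: dict) -> dict:
    """Save the running process's CPU state into its PCB, then
    load the next process's saved state onto the CPU."""
    current.registers = dict(cpu)      # save outgoing state
    current.state = "ready"
    nxt.state = "running"
    return dict(nxt.registers)         # restore incoming state

p1 = PCB(pid=1, state="running", registers={"pc": 100})
p2 = PCB(pid=2, registers={"pc": 200})
cpu = {"pc": 150}                      # p1 has since advanced to pc=150
cpu = context_switch(p1, p2, cpu)
print(p1.registers, p2.state, cpu)     # p1 saved at pc=150; CPU now runs p2
```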
10. Process Lifecycle: Processes have a lifecycle, starting from creation and
ending with termination. Understanding this lifecycle helps in effective
process management.
Interrupts:
Threads:
Introduction:
Threads are lightweight, smaller units of a process. They share the same
memory space and resources within a process but can execute independently.
Threads are used to achieve multitasking within a single process.
Thread States:
Threads can be in one of several states, representing their current condition and
execution status:
1. New: In this state, a thread is created but has not yet started executing.
2. Runnable (or Ready): A thread in the runnable state is ready to run but is
waiting for the CPU to be allocated.
5. Terminated (or Exit): A thread that has completed its execution is in the
terminated state. After this state, the thread is no longer active.
Thread Operation:
4. Thread Joining: A thread can wait for another thread to finish its execution
by joining it. This is useful when one thread depends on the results of
another.
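Thread joining as described above can be shown with Python's `threading` module: the main thread blocks on `join()` until the worker finishes, so the worker's result is guaranteed to be ready when it is read.

```python
import threading

results = {}

def worker():
    # produce a result that another thread depends on
    results["value"] = 42

t = threading.Thread(target=worker)
t.start()
t.join()                      # block until the worker thread completes
print(results["value"])       # safe to read: join() guarantees completion
# → 42
```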
Threading Models:
Threading models define how threads are created and scheduled within a
process. Some common threading models include:
3. Many-to-Many (M:N): This model combines the advantages of M:1 and 1:1
models. Multiple user-level threads are mapped to a smaller number of
kernel-level threads. It provides some level of parallelism while being
resource-efficient.
Processor Scheduling:
Scheduling Levels:
Priorities:
Scheduling Objective:
4. Waiting Time: Reducing the time processes spend waiting in the ready
queue.
Scheduling Criteria:
4. Waiting Time: The total time a process spends in the ready queue.
5. Response Time: The time taken for a process to start responding after a
user request.
Scheduling Algorithms:
2. Shortest Job First (SJF): Selects the process with the shortest execution
time next. It minimizes average waiting time, but execution times are hard to
predict in practice.
4. Priority Scheduling: Processes with higher priorities are executed first. It
can lead to starvation if low-priority processes are indefinitely postponed by a
steady stream of higher-priority processes.
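The waiting-time benefit of SJF can be seen in a small simulation. This sketch assumes a simplified non-preemptive variant where all jobs arrive at time 0; the job names and burst times are invented.

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF for jobs that all arrive at time 0.
    bursts: dict name -> burst time. Returns dict name -> waiting time."""
    waiting, elapsed = {}, 0
    # run jobs in order of increasing burst time
    for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
        waiting[name] = elapsed     # time spent waiting while shorter jobs ran
        elapsed += burst
    return waiting

w = sjf_waiting_times({"A": 6, "B": 8, "C": 3})
print(w)                            # C runs first, then A, then B
# → {'C': 0, 'A': 3, 'B': 9}  (average waiting time 4)
```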
Demand scheduling allows processes to request resources only when they need
them, optimizing resource utilization and reducing contention for resources.
Real-Time Scheduling:
UNIT - 2
Process Synchronization:
Mutual Exclusion:
Semaphores:
Critical section problems involve defining a set of rules to ensure that processes
or threads can access shared resources in a mutually exclusive and orderly
manner. The critical section is the part of the code where these rules are applied.
Key problems include:
1. Mutual Exclusion: Ensuring that only one process accesses the critical
section at a time.
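Mutual exclusion can be enforced with a binary semaphore: `acquire()` before entering the critical section and `release()` on leaving guarantee that only one thread updates the shared counter at a time. A minimal sketch using Python's `threading.Semaphore`:

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore guarding the critical section

def increment(n):
    global counter
    for _ in range(n):
        mutex.acquire()          # wait: enter the critical section alone
        counter += 1             # shared-resource access
        mutex.release()          # signal: leave the critical section

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # with the semaphore, always 40000
# → 40000
```

Without the `acquire`/`release` pair, the four threads could interleave their read-modify-write steps and lose updates.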
These case studies illustrate the practical challenges in process synchronization and
the need for effective solutions to ensure that shared resources are used efficiently
and without conflicts.
Memory Hierarchy:
Swapping:
Paging:
Segmentation:
Segmentation with paging combines the benefits of both techniques, allowing for
flexibility in memory allocation through segmentation and efficient use of memory
through paging.
Virtual Memory:
Demand Paging:
Demand paging is a virtual memory technique where only the parts of a process
that are needed are loaded into physical memory. This reduces the initial
memory requirement for a process.
Page Replacement:
When physical memory is full, page replacement algorithms are used to decide
which page should be evicted to make space for a new page from a process.
Page-replacement Algorithms:
Demand paging can improve memory utilization but may lead to page faults,
which can degrade performance. Effective page-replacement algorithms help
minimize the impact of page faults.
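A simple eviction policy from the family above is FIFO: when memory is full, evict the page that has been resident longest. A sketch that counts page faults for a given reference string (frame count and reference string are invented for illustration):

```python
from collections import deque

def fifo_page_faults(refs, frames):
    """Count page faults under FIFO page replacement.
    refs: sequence of page numbers; frames: number of physical frames."""
    memory, order, faults = set(), deque(), 0
    for page in refs:
        if page in memory:
            continue                         # hit: page is already resident
        faults += 1                          # miss: page must be loaded
        if len(memory) == frames:
            memory.discard(order.popleft())  # evict the oldest resident page
        memory.add(page)
        order.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
# → 9
```

Notably, the same reference string with 4 frames causes 10 faults under FIFO, so adding memory can make FIFO worse (Belady's anomaly).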
Thrashing:
Thrashing occurs when the system spends most of its time swapping pages in
and out of memory due to high page-fault rates. It severely degrades system
performance.
Demand Segmentation: