Course Module CpE Operating System 1


UNIVERSITY OF SAINT LOUIS

Tuguegarao City

SCHOOL OF ENGINEERING, ARCHITECTURE, AND INFORMATION TECHNOLOGY EDUCATION

COMPUTER ENGINEERING DEPARTMENT

COURSE MODULE ON
COME 1123 – OPERATING SYSTEM

Prepared by:

KRISCELLE MAE LEANO
JEROME CALACSAN
LLOYD EMMANUEL CATULIN
JOHN PATRICK VALDERAMA

Reviewed by:

ENGR. JAY M. VENTURA, D.ENG.


Department Head

Recommended by:

ENGR. VICTOR C. VILLALUZ, D.E.M.


Academic Dean

Approved by:

EMMANUEL JAMES P. PATTAGUAN, Ph.D.


VP for Academic

CHAPTER 01: OPERATING SYSTEMS

1.1 What Operating Systems Do


Computer system components:
 Hardware (CPU, memory, I/O devices)
 Operating System
 Application programs
 Users
Operating System:
 Controls hardware and coordinates its use.
 Provides an environment for other programs to run (like a government).
1.1.1 User View
 User views computers differently based on the interface:
 Desktop/Laptop: Designed for ease of use, maximizing user's work/play.
 Mobile Devices (phones/tablets): Touchscreens and voice recognition for user interaction.
 Embedded Systems (home devices/cars): Minimal or no user interface, designed for autonomous
operation.
1.1.2 System View
o From the computer's perspective, the operating system is a resource manager:
 Allocates resources like CPU time, memory, storage, and I/O devices.
 Decides how to distribute resources efficiently and fairly.
o The operating system also acts as a control program:
 Manages user programs to prevent errors and improper use.
 Controls the operation of I/O devices.
1.1.3 Defining Operating Systems
o Operating system definition is complex due to the many types of computers and their uses.
o Historically, operating systems emerged from the need to manage complex general-purpose computers.
o There is no single definition of an operating system, but it generally refers to software that simplifies using
computer hardware by:
 Controlling and allocating resources (CPU, memory, storage, I/O devices).
 Providing common functions for application programs (like I/O device control).
1.2 Computer-System Organization
o Modern computer system components:
 CPUs
 Device controllers (for specific devices like disk drives)
 Shared memory (accessed through a common bus)
o Device controllers:
 Manage data transfer between peripheral devices and local buffer storage.
 Have special-purpose registers.
o Operating system:
 Has a device driver for each device controller (provides uniform interface).
o Memory access:
 CPU and device controllers can access memory in parallel.
 Memory controller ensures orderly access.
1.2.1 Interrupts
I/O operation communication:
 Device driver instructs device controller (via registers).
 Device controller performs the action (e.g., read from keyboard).

 Upon completion, device controller uses an interrupt to signal the CPU.
1.2.1.1 Overview
o Hardware can interrupt the CPU using a signal on the system bus.
o This is used for various purposes and is crucial for OS-hardware interaction.
o Interrupt process:
1. CPU halts current work and jumps to a fixed location (starting address of interrupt service routine).
2. Interrupt service routine executes.
3. CPU resumes interrupted task after the routine finishes.
1.2.1.2 Implementation
Basic Process:
1. The device controller raises an interrupt by asserting a signal on the interrupt-request line.
2. The CPU catches the interrupt.
3. The CPU dispatches the interrupt to the interrupt handler.
4. The handler clears the interrupt by servicing the device.
Figure 1.4 summarizes the interrupt-driven I/O cycle.
Interrupt handler:
o Saves the state of the interrupted computation.
o Determines the cause of the interrupt.
o Performs the necessary processing.

Modern operating systems require advanced interrupt handling features beyond the basic mechanism. These
features include:
1. Deferring interrupts: The ability to temporarily disable interrupts during critical tasks to ensure their
completion without interruption.
2. Efficient dispatching: Efficient methods to locate the correct interrupt handler for a device, potentially
using techniques like interrupt chaining.
3. Multilevel interrupts: A system that prioritizes interrupts, allowing high-priority interrupts to preempt
lower-priority ones for a more responsive system.
1.2.2 Storage Structure
 Basic Unit:
o Bit: The fundamental unit (0 or 1).
o Byte: A collection of 8 bits (common storage unit).
o Word: A computer-specific unit consisting of multiple bytes (e.g., 64-bit word).
 Storage Measurements:
o Kilobyte (KB): 1,024 bytes (often rounded off to 1,000 bytes).
o Megabyte (MB): 1,024 KB (often rounded off to 1 million bytes).
o Gigabyte (GB): 1,024 MB (often rounded off to 1 billion bytes).
o Terabyte (TB): 1,024 GB.
o Petabyte (PB): 1,024 TB.
 Main Memory (RAM):
o Volatile (loses data on power off).
o Fast access.
o Limited storage capacity.
o Interacts with CPU via load/store instructions.

 Secondary Storage (NVS):
o Non-volatile (retains data on power off).
o Slower than RAM.
o Larger storage capacity for programs and data.
 Storage Hierarchy:
o Registers (fastest, smallest)
o RAM (faster, medium size)
o Secondary Storage (slower, larger)
o Tertiary Storage (e.g., CD-ROM, slowest, largest)
 Storage Terminology:
o Memory: Volatile storage (includes RAM by default).
o NVS (Non-volatile Storage): Persistent storage.
 Secondary Storage: Mostly hard disk drives (HDDs).
 Mechanical storage: Includes HDDs, optical disks, magnetic tapes.
 NVM (Non-volatile Memory): Electrical storage, faster than HDDs.
 Flash memory: Common type of NVM, used in mobile devices and increasingly
for laptops/desktops.
 Designing a Storage System:
o Balance cost, speed, and capacity.
o Use faster, expensive memory strategically.
o Utilize caches to bridge performance gaps between storage components.
Instruction Fetch Cycle:
1. Instruction fetched from memory and stored in the instruction register.
2. Instruction decoded and operands (if any) retrieved from memory.
3. Instruction executed on operands.
4. Result may be stored back in memory.
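
As a rough illustration of this cycle, the following C program simulates fetch, decode, and execute on a made-up two-instruction machine; the opcodes and memory layout are invented for this sketch.

#include <stdio.h>

enum { OP_ADD = 0, OP_HALT = 1 };   /* invented opcodes */

int main(void) {
    int memory[] = { OP_ADD, 5, OP_ADD, 7, OP_HALT };  /* program and operands */
    int pc = 0;     /* program counter */
    int acc = 0;    /* accumulator holding results */

    for (;;) {
        int ir = memory[pc++];       /* 1. fetch into the instruction register */
        switch (ir) {                /* 2. decode */
        case OP_ADD:
            acc += memory[pc++];     /* operand retrieved from memory; 3. execute */
            break;
        case OP_HALT:
            printf("result: %d\n", acc);  /* 4. result could be stored back */
            return 0;
        }
    }
}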
Limitations of Main Memory:
1. Insufficient space to store everything permanently.
2. Volatile nature necessitates secondary storage for persistence.

1.2.3 I/O Structure


1. Direct Memory Access (DMA):
o Improves efficiency for bulk data transfer (e.g., NVS I/O).
o Reduces CPU involvement by offloading data transfer to the device controller.
o Device controller:
 Sets up buffers, pointers, and counters.
 Transfers entire data blocks directly between device and main memory.
 Generates a single interrupt upon completion (unlike frequent interrupts for slow devices).
o This frees the CPU for other tasks while data transfer occurs.

2. Bus vs. Switch Architecture:
o Traditional systems use a bus where devices share access, potentially creating bottlenecks.
o High-end systems may employ switch architecture, enabling concurrent communication.
o DMA is even more effective in switch-based systems due to increased parallelism.

1.3 Computer-System Architecture


In Section 1.2, we introduced the general structure of a typical computer system. A computer system can be organized in a number of different ways, which we can categorize roughly according to the number of general-purpose processors used.
1.3.1 Single-Processor Systems
 Single CPU with Single Core: These systems have one main CPU with a core for instruction execution
and local data storage.
 General-Purpose Instruction Set: The CPU can execute a wide range of instructions for various tasks.
 Processes: The CPU handles instructions from multiple programs (processes) concurrently (likely
through rapid multitasking).
 Special-Purpose Processors: These processors handle specific tasks and:
o Have limited instruction sets.
o Don't run processes themselves.
o May be managed by the operating system (e.g., disk controller)
 Receive instructions and manage tasks from the main CPU.
o May be low-level hardware components not directly controlled by the OS (e.g., keyboard
controller).
1.3.2 Multiprocessor Systems
The dominance of multiprocessor systems (multiple CPUs) has replaced single-processor systems in modern
computing.
Multicore Systems:
 An evolution of multiprocessors.
 Multiple processing cores reside on a single chip.
 Advantages:
o Faster communication between cores compared to separate chips.
o Lower power consumption (important for mobile devices and laptops).

Multicore Processors:
 Multiple processing cores reside on a single chip.
 Seen by the operating system as individual CPUs (N cores for N CPUs).
 This puts pressure on developers to optimize software for efficient use of multiple cores (discussed in
Chapter 4).

 Most modern operating systems support multicore SMP (symmetric multiprocessing).
Non-Uniform Memory Access (NUMA) Systems:
 Each CPU (or group) has local memory with a fast local bus.
 All CPUs share a physical address space via a system interconnect.
 Advantages:
o Faster local memory access for each CPU (reduced contention).
o Scales better with more processors (adding CPUs doesn't overload the system bus).
 Drawback:
o Slower access to remote memory across the interconnect (a CPU cannot access another CPU's local memory as quickly as its own).
 Operating systems can mitigate this penalty through scheduling and memory management (discussed in Chapters 5 and 10).
 NUMA systems are popular for servers and high-performance computing due to their scalability.
1.3.3 Clustered Systems
 Composition: Multiple independent systems (often multicore), called nodes, connected together.
 Loose Coupling: Nodes are linked via a network (LAN or faster interconnect), in contrast to the tightly coupled components within a single machine.
 Cluster Definition: Not strictly defined, but generally involves shared storage and connection via a network.
Benefits of Database Clusters:
 Dozens of hosts can share the same database, significantly increasing:
o Performance.
o Reliability.

1.4 Operating-System Operations


Booting Up and Operating System Basics
 OS Environment: Provides the platform for program execution.
 OS Organization: Varies internally but shares common functionalities.
Booting Process:
1. Bootstrap Program (Firmware):
o A simple program stored in hardware.
o Initializes the system (CPU, devices, memory).
o Locates and loads the operating system kernel into memory.
2. Kernel Execution:
o Starts providing services to the system and users.
o May involve system programs (daemons) loaded at boot time (e.g., systemd on Linux).
3. Waiting for Events:
o Once booted, the OS waits for events to occur.
o Events are often signaled by interrupts.
Types of Interrupts:

Hardware Interrupts: Described in Section 1.2.1 (e.g., device signals).
Software Traps/Exceptions:
o Caused by software errors (division by zero, invalid memory access).
o Or triggered by user program requests for OS services (system calls).
1.4.1 Multiprogramming and Multitasking
Multiprogramming:
 Concept: OS keeps several programs (processes) in memory at once (Figure 1.12).
 CPU Utilization: When a process waits (e.g., for I/O), the OS switches to another process, keeping the CPU busy.
Analogy: A lawyer works on multiple cases to avoid idle time.
Multitasking (Extension of Multiprogramming):
 CPU Switching: OS rapidly switches between processes, giving the illusion of simultaneous execution for a faster user experience.

Benefits of Multitasking:
 Virtual Memory: Allows running programs larger than physical memory (covered in Chapter 10).
o Separates logical memory (user view) from physical memory.
o Frees programmers from memory size limitations.
1.4.2 Dual-Mode and Multimode Operation
 User vs. Kernel Mode:
o Hardware distinguishes between user programs and the operating system itself using a mode bit.
o Kernel mode (privileged) is for core system tasks.
o User mode (less privileged) is for user applications.
 Transitioning Between Modes:
o System boots and runs in kernel mode.
o User applications run in user mode.
o Traps, interrupts, or system calls trigger a switch to kernel mode.
 Protection Mechanisms:
o Certain instructions (e.g., I/O control) are privileged and can only run in kernel mode.
o Attempting a privileged instruction in user mode triggers a trap to the operating system.
 Error Handling:
o Hardware traps mode violations and program errors (illegal instructions, memory access issues) to the operating system.
o The operating system terminates the program abnormally and may provide an error message or memory dump.
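
A concrete illustration, assuming an x86 machine running Linux and GCC: the cli instruction (disable interrupts) is privileged, so executing it in user mode raises a general-protection fault, and the operating system terminates the process.

/* Attempting a privileged instruction in user mode triggers a trap. */
int main(void) {
    __asm__ volatile ("cli");   /* privileged on x86; in user mode the hardware faults */
    return 0;                   /* never reached: the OS kills the process (SIGSEGV) */
}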
1.4.3 Timer
Timers help operating systems maintain control of the CPU:
 Preventing Infinite Loops and Stalled Programs:
o User programs might get stuck in infinite loops or fail to relinquish control to the OS.
 Timer as a Safety Mechanism:
o A timer can be set to interrupt the CPU after a specific time interval.
o This ensures the OS regains control periodically.
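
As a user-level analogy, the POSIX alarm() call asks the kernel to deliver SIGALRM after a given interval; the handler below regains control from a deliberately infinite loop. This sketches the idea only; a kernel programs the hardware timer directly.

#include <signal.h>
#include <unistd.h>

static void on_timer(int sig) {
    (void)sig;
    const char msg[] = "timer fired: regaining control\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe output */
    _exit(1);                                   /* "terminate" the stuck program */
}

int main(void) {
    signal(SIGALRM, on_timer);  /* install the timer-interrupt handler */
    alarm(2);                   /* request an interrupt after 2 seconds */
    for (;;)                    /* simulate a program stuck in an infinite loop */
        ;
}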

1.5 Resource Management

An operating system is a resource manager. The system’s CPU, memory space, file-storage space, and I/O devices are among the resources that the operating system must manage.

1.5.1 Process Management


Process Definition:
o An active instance of a program in execution.
o Examples: compiler, word processor, social media app.
o Differs from a passive program stored on disk.
 Process Resources:
o CPU time, memory, files, and I/O devices.
o Allocated by the OS during process execution.
o May include initialization data like input parameters (e.g., URL for a web browser).
o Reclaimed by the OS upon process termination.
 Single-Threaded vs. Multithreaded Processes:
o Single-threaded:
 One program counter tracking the next instruction to execute.
 Sequential execution (one instruction at a time).
o Multithreaded (covered in Chapter 4):
 Multiple program counters allow concurrent execution within the same process.
1.5.2 Memory Management
o Main Memory:
 Central to computer operations.
 Large array of bytes (hundreds of thousands to billions).

o Memory Management:
 Keeps multiple programs in memory for efficient CPU utilization and faster user experience.
 Different memory management schemes exist with varying approaches.
o OS Responsibilities in Memory Management:
 Tracking memory usage (which parts are used and by which processes).
 Allocating and deallocating memory space as needed.
1.5.3 File-System Management
 Files: A Logical View of Storage:
o The OS provides a consistent way (files) to view information regardless of physical storage
details.
o Files act as a logical storage unit.
 Physical Storage Devices:
o The OS maps files onto physical storage media (e.g., hard drives).
o File management is a crucial part of the OS.
 File Characteristics:
o Files can hold various data types (programs, numeric data, text, etc.).
o They can be structured (fixed format) or unstructured (free-form).
1.5.4 Mass-Storage Management (Secondary Storage)
o Importance of Secondary Storage:
 Backs up main memory for persistent data storage.
 Stores programs (compilers, web browsers, etc.) until loaded into memory.
1.5.5 Cache Management
Caching

 Core Idea: Frequently accessed data is copied to a faster, smaller storage system (cache) for quicker
retrieval.
 Process:
1. Information resides in a storage system (e.g., main memory).
2. When used, a copy is placed in the cache (temporary storage).
 Cache Examples:
o Internal registers (controlled by programmers): High-speed cache for main memory.
o Instruction cache and data caches (hardware-managed): Improve CPU performance. (Not
covered in detail here).
 Cache Management:
o Caches have limited size, so choosing the right size and replacement policy is crucial.
o Effective management can significantly improve performance (refer to Figure 1.14 in the text)
1.5.6 I/O System Management
o I/O Subsystem:
 Hides device specifics from the user and the OS itself.
 Components:
1. Memory management (buffering, caching, spooling).
2. General device-driver interface.
1.7 Virtualization

o History of Virtualization:
 Started on IBM mainframes to allow concurrent use by multiple users.
 Later used to run multiple Windows applications efficiently on x86 machines.
Benefits of Virtualization:
 For Individuals:
o Run multiple operating systems on a single machine (e.g., Windows on macOS).
o Test and develop software for different operating systems on one device.
 For Businesses:
o Run multiple servers on a single physical machine, saving resources and costs.
o Improved server utilization and resource management in data centers.

1.8 Distributed Systems


o Distributed Systems:
 Multiple separate computers working together as a single system.
Networks:
 Communication paths between computer systems.
 Functionality depends on protocols, distances, and media.
1.9 Kernel Data Structures
We turn next to a topic central to operating-system implementation: the way data are structured in the system.
In this section, we briefly describe several fundamental data structures used extensively in operating systems.
1.9.1 Lists, Stacks, and Queues
o Arrays:
 Efficient for storing fixed-size items with direct access by index.
 Not suitable for items with variable sizes or frequent insertions/deletions.
o Linked Lists:
 Accessing specific items might require traversing the entire list (linear time).

o Stacks:
 LIFO (Last-In-First-Out) principle: recently added items are removed first.
 Operations: push (add item), pop (remove item).

o Queues:
 FIFO (First-In-First-Out) principle: items are removed in the order they were added.
 Examples: waiting lines, print jobs.
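
A minimal C sketch of the LIFO discipline (an array-backed stack with push and pop); a FIFO queue would differ only in removing items from the front.

#include <stdio.h>

#define MAX 8

static int stack[MAX];
static int top = 0;             /* index of the next free slot */

static void push(int v) { if (top < MAX) stack[top++] = v; }
static int  pop(void)   { return top > 0 ? stack[--top] : -1; }

int main(void) {
    push(1); push(2); push(3);
    while (top > 0)
        printf("%d ", pop());   /* prints 3 2 1: last in, first out */
    putchar('\n');
    return 0;
}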
1.9.2 Trees
o Trees:
 Represent data with parent-child relationships, forming a hierarchy.
 General trees: a parent can have any number of children.
o Binary Trees:
 A special type of tree where a parent has at most two children (left and right).
o Binary Search Trees:
 A binary tree with an ordering property: left child <= parent <= right child.
 Used for efficient searching (worst-case O(n) for unbalanced trees).
o Balanced Binary Search Trees:
 Constructed using algorithms to ensure good search performance (worst-case O(lg n)).
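
A short C sketch of binary-search-tree insertion and lookup using the ordering property; deallocation and balancing are omitted for brevity.

#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *left, *right;
};

static struct node *insert(struct node *root, int v) {
    if (root == NULL) {                      /* empty spot found: attach here */
        struct node *n = malloc(sizeof *n);
        n->value = v;
        n->left = n->right = NULL;
        return n;
    }
    if (v <= root->value) root->left = insert(root->left, v);
    else                  root->right = insert(root->right, v);
    return root;
}

static int contains(const struct node *root, int v) {
    while (root != NULL) {
        if (v == root->value) return 1;
        root = (v < root->value) ? root->left : root->right;  /* discard half the tree */
    }
    return 0;
}

int main(void) {
    struct node *root = NULL;
    int keys[] = { 17, 5, 35, 2, 11, 29 };
    for (int i = 0; i < 6; i++)
        root = insert(root, keys[i]);
    printf("11 in tree? %d\n", contains(root, 11));  /* 1 */
    printf("12 in tree? %d\n", contains(root, 12));  /* 0 */
    return 0;
}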

1.9.3 Hash Functions and Maps


o Hash Function:
 Takes data as input and performs a calculation to return a numeric value (hash).
 This hash is used as an index to retrieve data from a table (often an array).
o Hash Tables (Hash Maps):
 Use hash functions to map keys to values.
 Keys are hashed to find the corresponding value in the table.
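
A minimal C sketch of the idea: a string key is hashed to an index, and the value lives at that slot. The hash function shown is one common illustrative choice, and collision handling is omitted.

#include <stdio.h>

#define TABLE_SIZE 101

static int table[TABLE_SIZE];

/* Accumulate the key's characters, then reduce modulo the table size. */
static unsigned hash(const char *key) {
    unsigned h = 0;
    while (*key)
        h = h * 31 + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

int main(void) {
    table[hash("pid")] = 42;              /* map key "pid" to value 42 */
    printf("%d\n", table[hash("pid")]);   /* same key, same slot: prints 42 */
    return 0;
}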
1.9.4 Bitmaps
o Bitmaps:
 A string of 0s and 1s (binary digits) representing the status of n items.
o Benefits:
 Very space-efficient: 1 bit per item instead of a whole byte (8x smaller).
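
A minimal C sketch of a bitmap with set, clear, and test operations, such as might track free (0) versus allocated (1) disk blocks.

#include <stdio.h>

#define N 64

static unsigned char bitmap[N / 8];   /* status of 64 items in just 8 bytes */

static void set_bit(int i)   { bitmap[i / 8] |=  1u << (i % 8); }
static void clear_bit(int i) { bitmap[i / 8] &= ~(1u << (i % 8)); }
static int  test_bit(int i)  { return (bitmap[i / 8] >> (i % 8)) & 1; }

int main(void) {
    set_bit(13);                                     /* mark item 13 as in use */
    printf("%d %d\n", test_bit(13), test_bit(14));   /* 1 0 */
    clear_bit(13);                                   /* free item 13 again */
    printf("%d\n", test_bit(13));                    /* 0 */
    return 0;
}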
1.10 Computing Environments
We turn now to a discussion of how operating systems are used in a variety of computing environments.
1.10.1 Traditional Computing
Shifting Office Landscape:
 Traditional setup: PCs connected to a network with file/print servers.
 Modern trends:
1. Web portals for remote server access.
2. Network computers (thin clients) for security or easier maintenance.
Home Network Evolution:
 Past: Single computer with slow modem connection.
 Present:

1. Faster and more affordable network connections.
2. Home computers hosting web pages and running networks (printers, clients, servers).
Resource Sharing and Time Management:
 Historical context:
1. Limited computing resources.
2. Batch systems processing jobs in bulk.
 Modern take on time-sharing:
1. Same scheduling technique used on various devices.
2. Processes managed to give each a slice of CPU time (user processes, system processes).
1.10.2 Mobile Computing
Mobile Devices: Beyond Basic Communication
 Early limitations: smaller screens, memory, and functionality compared to desktops/laptops.
Connectivity and Limitations:
 Access online services via Wi-Fi or cellular data networks.
 Generally have lower memory capacity and processing power compared to PCs.
1.10.3 Client–Server Computing
o Client-Server Model:
 Specialized distributed system with dedicated servers and client systems.
o Server Types:
 Compute Servers:
o Provide an interface for clients to request actions (e.g., read data).
o Execute the request and send results back to the client.
 File Servers:
o Offer a file-system interface for managing files (create, update, read, delete).
o Example: A web server delivering files (web pages, multimedia content) to web browsers.
1.10.4 Peer-to-Peer (P2P) Computing
 P2P vs. Client-Server:
1. No distinction between clients and servers.
2. All nodes act as peers, providing or requesting services.
 Joining a P2P Network:
1. A node joins the network of peers.
2. It can then offer services and request them from others.
1.10.5 Cloud Computing
o Cloud Computing Overview:
 Delivers computing services (computing power, storage, applications) over the internet.
 Relies on virtualization to provide scalable resources.
o Types of Cloud Computing:
 Public Cloud: Open to the general public over the internet (pay-per-use model).
 Private Cloud: Designed for internal use within a single company.
 Hybrid Cloud: Combines public and private cloud components.
1.10.6 Real-Time Embedded Systems
 Prevalence:
o Most common type of computer.
o Found in everyday objects (car engines, robots, appliances).
 Operating Systems:

o Usually limited or custom-made.
o Little to no user interface, focused on hardware control.
 Variations:
o General-purpose computers with specialized OS (e.g., Linux).
o Hardware devices with dedicated embedded OS.
o Application-specific integrated circuits (ASICs) without OS.
 Growing Presence:
o Increasing power and network connectivity.
o Used in smart homes (controlling lights, thermostats, appliances).
 Real-Time Operating Systems (RTOS):
o Common in embedded systems.
o Strict time requirements for processing sensor data and controlling hardware.
1.11 History
o Early Days (1950s):
 Software source code was commonly available.
 Sharing and collaboration were encouraged among enthusiasts and user groups.
o Shift Towards Closed Systems (1970s-1980s):
 Companies sought to control software usage and protect intellectual property.
 By 1980, closed-source software became the standard practice.
Comparison of Operating Systems

Linux
 License: Mostly free and open-source (GPL)
 Development Model: Open-source with community contributions
 History: Developed in 1991 by Linus Torvalds, inspired by UNIX
 Focus: General-purpose OS, popular on desktops, servers, and embedded systems
 Distributions: Hundreds of distributions with varying features (e.g., Red Hat, Ubuntu)
 Learning Resources: Extensive online resources and communities

BSD UNIX
 License: Open-source
 Development Model: Open-source with various independent branches
 History: Descended from AT&T UNIX in 1978, with significant contributions from UCB
 Focus: Known for stability and security, popular for servers and workstations
 Distributions: FreeBSD, NetBSD, OpenBSD, DragonflyBSD
 Learning Resources: Active communities for each distribution

Solaris
 License: Originally commercial; OpenSolaris (partially open)
 Development Model: Originally closed-source; OpenSolaris was open-source, future unclear
 History: Based on BSD UNIX, transitioned from SunOS (BSD-based) to System V UNIX in 1991
 Focus: Originally a commercial server OS; OpenSolaris future uncertain
 Distributions: Originally closed-source Solaris; OpenSolaris derivatives (Illumos)
 Learning Resources: Originally closed-source, some resources for OpenSolaris and derivatives

IDENTIFICATION

-What is the primary function of an operating system?


To control hardware and coordinate its use.

-Name a component of a modern computer system as described in the text.
CPUs, Device Controllers, and Shared Memory.

-What is the role of an interrupt in a computer system?


To signal the CPU to suspend its current execution of instructions and handle a specific event or request.

TRUE OR FALSE

-True or False: The operating system acts as a resource manager that allocates resources
like CPU time, memory, storage, and I/O devices.
TRUE

-True or False: Non-volatile storage (NVS) loses its data when the power is turned off.
FALSE

-True or False: Interrupts can be temporarily disabled to allow critical tasks to complete
without interruption.
TRUE

MULTIPLE CHOICE

-What is the primary purpose of memory management in operating systems?

 A) To ensure that each program uses the least amount of memory possible.
 B) To manage the computer’s audio-visual systems more efficiently.
 C) To keep multiple programs in memory to enhance CPU utilization and improve user
experience.
 D) To reduce the cost of installing new hardware.

-Which of the following best describes a file in the context of file-system management?
 A) A collection of data stored in volatile memory.
 B) A physical section of the hard drive.
 C) A logical storage unit that provides a consistent way to view information, regardless of the
physical storage details.
 D) A program that manages the inputs and outputs of the operating system.

-What distinguishes a real-time operating system (RTOS) from general-purpose operating systems?
 A) RTOS is used primarily for data analysis.
 B) RTOS manages hardware resources only and has no user interface.
 C) RTOS has strict time requirements for processing and controlling hardware based on sensor
data.
 D) RTOS cannot run on embedded systems.

CHAPTER 02: OPERATING-SYSTEM STRUCTURES

An operating system serves as a mediator between computer users and hardware, facilitating the execution of
programs efficiently. It manages hardware resources, ensuring proper system operation and preventing program
interference. Operating systems vary in structure and design, with goals guiding their development. They offer
services, interfaces, and internal components that cater to users, programmers, and designers. Understanding
these aspects involves examining system services, interfaces, debugging methods, design methodologies, and
the process of creating and initializing operating systems.

Operating systems offer a platform for program execution and provide services to both programs and users.
Although the exact services vary between operating systems, common classes can be identified. Figure 2.1
illustrates the interrelation of these services, which also simplify the programming process for developers.

2.1 Operating-System Services

Operating systems provide various services to users and ensure the system's efficient operation.

Services Helpful to Users:

1. User Interface: Operating systems offer graphical user interfaces (GUIs), command-line interfaces (CLIs),
or touch-screen interfaces for user interaction.
2. Program Execution: They load programs into memory, execute them, and handle termination, whether
normal or abnormal.
3. I/O Operations: Operating systems manage input/output operations for programs, including access to files
and I/O devices.
4. File-System Manipulation: They handle file and directory operations, such as reading, writing, creation,
deletion, searching, and permission management.
5. Communications: Operating systems facilitate communication between processes, either locally or across a
network, using shared memory or message passing.
6. Error Detection: They constantly detect and address errors in hardware, I/O devices, and user programs,
ensuring correct computing and taking appropriate actions when errors occur.

Functions for System Efficiency:

1. Resource Allocation: Operating systems manage resource allocation among multiple processes, including
CPU cycles, memory, file storage, and I/O devices, optimizing system performance.
2. Logging: They maintain records of resource usage for accounting or statistical purposes, aiding system
administrators in improving computing services.
3. Protection and Security: Operating systems enforce access control to system resources, preventing
interference between processes and ensuring security against unauthorized access. They authenticate users,
defend against external threats, and maintain system integrity through comprehensive security measures.

2.2 User and Operating-System Interface

Users interact with operating systems through various interfaces, including command-line interfaces (CLI),
graphical user interfaces (GUI), and touch-screen interfaces. Here's a summary of each approach:

 Command Interpreters (CLI):


- Most operating systems utilize command interpreters like shells, which allow users to directly input
commands.

- Shells such as Bourne-Again shell (bash) provide functionality to execute commands for tasks like file
manipulation.

- Commands can be implemented within the interpreter itself or through system programs, offering flexibility
and ease of adding new commands.

 Graphical User Interfaces (GUI):


- GUI interfaces employ a mouse-based window-and-menu system, offering a user-friendly approach to interact
with the operating system.

- Users navigate through icons and menus to execute programs, select files or directories, or access system
functions.

- GUIs originated in the 1970s and became widespread with systems like the Xerox Alto and Apple Macintosh.

- UNIX systems, traditionally CLI-dominated, now offer various GUI interfaces such as KDE and GNOME
desktops.

 Touch-Screen Interface:
- Mobile systems like smartphones and tablets utilize touch-screen interfaces, allowing users to interact through
gestures on the screen.

- These interfaces simulate keyboards on the touch screen for text input and navigation.

 Choice of Interface:
- The choice between CLI and GUI interfaces often depends on personal preference.

- System administrators and power users typically prefer CLI for efficiency and programmability.

- Windows and macOS users predominantly use GUI interfaces, although CLI options exist.

- Mobile system users primarily interact through touch-screen interfaces.

 User Interface Design:


- User interfaces vary between systems and users, focusing on providing intuitive interaction rather than
reflecting system structure.

- The book concentrates on providing adequate service to user programs without distinguishing between user
and system programs from the operating system's perspective.

2.3 System Calls

System calls serve as a crucial interface for accessing the services provided by an operating system. These calls
are typically exposed to programmers through functions written in languages like C and C++. However, certain tasks, particularly those requiring low-level access to hardware, may necessitate the use of assembly language instructions.

Consider a simple task, such as copying data from one file to another. This seemingly straightforward operation
involves multiple system calls. For instance, the program needs to obtain the names of the input and output
files. These names can be provided as command-line arguments or interactively through user input prompts,
requiring sequences of system calls to display messages and gather input.

Once the file names are obtained, the program must open the input file and create the output file, each operation
requiring additional system calls. Error handling becomes critical at this stage, as the program must handle
scenarios such as non-existent files or permission issues gracefully.

With both files set up, the program enters a loop to read from the input file and write to the output file, with
each read and write operation being a system call. These operations also necessitate error checking to handle
conditions like reaching the end of the file or encountering hardware failures.

Finally, after copying the entire file, the program may close both files, provide feedback to the user, and
terminate normally, all of which involve further system calls.
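
The same scenario can be sketched directly with POSIX system calls (open, read, write, close); error handling here is deliberately abbreviated.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc != 3) {                                  /* file names from the command line */
        fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);                 /* open the input file */
    if (in < 0) { perror("open input"); return 1; }
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create the output file */
    if (out < 0) { perror("create output"); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)       /* loop: read, then write */
        if (write(out, buf, n) != n) { perror("write"); return 1; }

    close(in);                                        /* close both files */
    close(out);
    return 0;                                         /* terminate normally */
}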

Programmers often interact with the operating system through higher-level constructs known as Application
Programming Interfaces (APIs). These APIs define a set of functions, parameters, and return values, shielding
developers from the complexities of direct system calls and ensuring program portability across different
systems.

Behind the scenes, functions in APIs typically map to actual system calls. For example, a function like
`CreateProcess()` in the Windows API might invoke a system call like `NTCreateProcess()` in the Windows
kernel.

The Run-Time Environment (RTE) plays a crucial role in managing system-call interactions. It includes the
necessary software components for executing applications in a specific programming language, providing a
system-call interface to abstract away OS details. Parameter passing to system calls varies depending on the OS,
with methods like passing parameters in registers, storing them in memory blocks, or pushing them onto a stack.

Overall, system calls abstract the complexities of interacting with the operating system, allowing programmers
to focus on building applications using familiar APIs while the RTE handles the underlying system-call
mechanics.

Application Programming Interface

Application Programming Interfaces (APIs) serve as a crucial abstraction layer between programmers and the
intricacies of system calls in operating systems. These APIs provide a set of functions with well-defined
parameters and return values, shielding developers from the complexities of direct system call invocation.

Programmers commonly design their applications around APIs, such as the Windows API for Windows
systems, the POSIX API for UNIX-based systems (including Linux and macOS), and the Java API for Java-
based applications. Access to these APIs is facilitated through libraries provided by the operating system, like
libc for UNIX and Linux programs written in C. Behind the scenes, functions within APIs typically translate to
actual system calls, such as Windows' `CreateProcess()` function invoking the `NTCreateProcess()` system call
in the Windows kernel. The preference for using APIs over direct system calls is rooted in several factors,
including program portability and ease of use. APIs ensure that applications can compile and run across systems
supporting the same API, abstracting away architectural differences.

Additionally, working with APIs is often simpler and more intuitive than dealing with low-level system calls,
which can be intricate and system-specific. Despite this, APIs often closely align with the underlying system calls, with many POSIX and Windows APIs resembling the native system calls of their respective operating systems.

A crucial component in managing system calls is the run-time environment (RTE), which includes the
necessary software components for executing applications in a given programming language. The RTE provides
a system-call interface that mediates between API function calls and actual system calls in the operating system
kernel.

System calls may require passing parameters in various ways, such as through registers, memory blocks, or the
stack, depending on the operating system and the specific call. These methods ensure that the necessary
information is provided to the system call for its execution, with different OSs employing different strategies
based on their design and requirements.

In system call handling, parameters are often managed through a combination of methods. In Linux, a blend of register and block approaches is employed. When there are five or fewer parameters, registers are utilized. However, for cases exceeding this limit, the block method is implemented, where parameters are stored in a memory block, and the address of this block is passed as a parameter in a register. Additionally, parameters can also be placed onto the stack by the program and later retrieved by the operating system. This flexibility is advantageous as it avoids constraints on the number or length of parameters being passed.

Figure 2.7: Diagram illustrating the combination of register and block methods for passing parameters in Linux system calls.
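
On Linux, the generic syscall() wrapper makes this convention visible: a system call is invoked by number, with the wrapper placing arguments in the proper registers. A minimal sketch:

#define _GNU_SOURCE       /* for syscall() on glibc */
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    long pid = syscall(SYS_getpid);      /* direct system call, by number */
    printf("pid via syscall(): %ld\n", pid);
    printf("pid via the API:   %ld\n", (long)getpid());  /* libc wrapper, same result */
    return 0;
}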

Types of System Calls

1. Process Control:
 Create Process: Initiates a new process.
 Terminate Process: Halts the execution of a process.
 Load, Execute: Loads and executes another program within a process.
 Get Process Attributes, Set Process Attributes: Retrieves or modifies attributes of a process, such as its
priority or execution time.
 Wait Event, Signal Event: Allows processes to synchronize by waiting for or signaling events.
 Allocate and Free Memory: Manages memory allocation and deallocation for processes.
2. File Management:
 Create File, Delete File: Creates or removes files.
 Open, Close: Opens or closes files for reading or writing.
 Read, Write, Reposition: Reads from, writes to, or repositions within files.
 Get File Attributes, Set File Attributes: Retrieves or sets attributes of files, such as permissions or
timestamps.
3. Device Management:
 Request Device, Release Device: Requests or releases access to devices.
 Read, Write, Reposition: Performs input/output operations on devices, like reading from or writing to
disks.
 Get Device Attributes, Set Device Attributes: Retrieves or sets attributes of devices, such as status or
configuration.
4. Information Maintenance:
 Get Time or Date, Set Time or Date: Retrieves or updates system time or date information.
 Get System Data, Set System Data: Retrieves or updates various system-wide data.
 Get Process, File, or Device Attributes: Retrieves attributes of processes, files, or devices.
 Set Process, File, or Device Attributes: Modifies attributes of processes, files, or devices.
5. Communications:
 Create, Delete Communication Connection: Establishes or terminates communication connections
between processes.
 Send, Receive Messages: Transmits messages between processes.
 Transfer Status Information: Exchanges status information between processes.
 Attach or Detach Remote Devices: Associates or disassociates remote devices with a system.
6. Protection:
 Get File Permissions, Set File Permissions: Retrieves or modifies permissions associated with files.
 Allow User, Deny User: Grants or denies user access to resources.

These system calls enable processes to interact with the operating system and manage various aspects of the
system, such as processes, files, devices, and communication channels while ensuring the security and
protection of resources.

QUESTIONS:

Multiple Choice Questions:

1. Which of the following is not a category of system calls?

A) Process control

B) File organization

C) Device management

D) Information maintenance

2. In which model of inter-process communication are messages exchanged directly between processes?

A) Message-passing model

B) Shared-memory model

C) Hybrid model

D) Synchronous model

3. What is the primary purpose of system calls related to protection?

A) Obtaining system data

B) Controlling access to system resources

C) Transferring status information

D) Establishing communication connections

True or False Questions:

1. System calls for file management include operations such as creating, deleting, and closing files.

2. Device management system calls are only relevant for physical devices, not virtual ones.

3. Information maintenance system calls can be used to modify the behavior of processes.

Identification Questions:

1. What system call category deals with operations such as creating, deleting, opening, reading, writing, and
closing files?

2. Which type of system calls involves obtaining time and date information, retrieving system data, and
accessing process, file, or device attributes?

3. What are the primary functions of system calls in the process control category?

Multiple Choice Questions:

1. B) File organization

2. A) Message-passing model

3. B) Controlling access to system resources

True or False Questions:

1. True

2. False

3. False

Identification Questions:

1. File management.

2. Information Maintenance

3. Managing processes

Chapter 3: Processes

Process Concept
Even if a computer can execute only one program at a time, such as on an embedded device that does not support multitasking, the operating system may need to support its own internal programmed activities, such as memory management. In many respects, all these activities are similar, so we call all of them processes.
Process

- A process is a program in execution.


 Text section -the executable code
 Data section - global variables
 Heap section- memory that is dynamically allocated during program run time.
 Stack section- temporary data storage when invoking functions
A program is a passive entity, such as a file containing a list of instructions stored on disk (also called an executable file). A process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

Process State
 New. The process is being created.
 Running. Instructions are being executed.
 Waiting. The process is waiting for some event to occur (such as an I/O completion or reception
of a signal).
 Ready. The process is waiting to be assigned to a processor.
 Terminated. The process has finished execution.
Process Control Block
- Also called a task control block. It contains information associated with a specific process:
 Process state. The state may be new, ready, running, waiting, halted, and so on.
 Program counter. The counter indicates the address of the next instruction to be executed for
this process.
 CPU registers. The registers vary in number and type, depending on the computer architecture.
 CPU-scheduling information. This information includes a process priority, pointers to
scheduling queues.
 Memory-management information. This information may include such items as the value of
the base and limit registers and the page tables, or the segment tables, depending on the memory
system used by the operating system.
 Accounting information. This information includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
 I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
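
A simplified C sketch of what a PCB might contain; the field names and sizes are illustrative, not taken from any particular kernel.

#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier */
    enum proc_state state;            /* new, ready, running, waiting, ... */
    uint64_t        program_counter;  /* address of the next instruction */
    uint64_t        registers[16];    /* saved CPU registers */
    int             priority;         /* CPU-scheduling information */
    uint64_t        base, limit;      /* memory-management information */
    uint64_t        cpu_time_used;    /* accounting information */
    int             open_files[16];   /* I/O status information */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = NEW };  /* a process begins in the NEW state */
    (void)p;
    return 0;
}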
Threads
 Single thread of control allows the process to perform only one task at a time.
Process Scheduling
Process Scheduler selects among available processes for next execution on CPU.
 Degree of multiprogramming- the number of processes currently in memory.
 I/O-bound process- one that spends more of its time doing I/O than it spends doing
computations.
 CPU-bound process- generates I/O requests infrequently, using more of its time doing
computations.
Scheduling Queues
 Ready queue- they are ready and waiting to execute.
 Wait queue- waiting for a certain event to occur.
 Queueing diagram- a common representation of process scheduling.

 Dispatched- a process waits in the ready queue until it is selected for execution (dispatched).
CPU Scheduling
 CPU scheduler- selects among the processes that are in the ready queue and allocates a CPU core to one of them.
 Swapping- a process can be “swapped out” from memory to disk.
Context Switch
-Switching the CPU core to another process requires performing a state save of the current process and a
state restore of a different process.

Operations on Processes
Process Creation
 A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
 Process Identifier (PID)- identifying processes.
 Traditional UNIX systems identify the process init as the root of all child processes. init (also known as
System V init).
 The parent continues to execute concurrently with its children.
 The parent waits until some or all its children have terminated.
 The child process is a duplicate of the parent process (it has the same program and data as the parent).
 The child process has a new program loaded into it.
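
A minimal POSIX sketch of these ideas: fork() duplicates the parent, the child could load a new program with exec, and the parent waits for the child to terminate.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */
    if (pid == 0) {
        /* child: a duplicate of the parent; an exec call such as execlp()
           could load a new program into it here */
        printf("child  pid=%d\n", (int)getpid());
    } else if (pid > 0) {
        wait(NULL);                  /* parent waits until the child terminates */
        printf("parent pid=%d, child was %d\n", (int)getpid(), (int)pid);
    } else {
        perror("fork");              /* creation failed */
        return 1;
    }
    return 0;
}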
Process Termination
 Cascading Termination- a process terminates (normally or abnormally), then all its children must also be
terminated.
 Zombie Process- A process that has terminated, but whose parent has not yet called wait().
 Orphans- if a parent did not invoke wait() and instead terminated, thereby leaving its child processes.
Android Process Hierarchy

 Foreground process- the current process visible on the screen, representing the application the user is
currently interacting with.
 Visible process- process that is not directly visible on the foreground but that is performing an activity
that the foreground process is referring to.
 Service process- a process that is like a background process but is performing an activity that is
apparent to the user.
 Background process- process that may be performing an activity but is not apparent to the user.
 Empty process- process that holds no active components associated with any application.

Interprocess Communication

 Independent Process – a process that doesn’t share data with any other processes executing in the
system.
 Cooperating Process - a process which can affect or be affected by the other processes executing in the
system. Any process that shares data with other processes.

Reasons for providing an environment that allows process cooperation:


1. Information sharing – In many cases, multiple applications or processes may need access to the
same data or resources. Enabling concurrent access allows for efficient sharing of information,
enhancing overall system functionality and user experience.
2. Computation speedup – Dividing a task into smaller subtasks that can be executed concurrently
can significantly improve performance, especially on systems with multiple processing cores.
This parallel processing capability allows for faster execution of tasks and overall system
responsiveness.
3. Modularity- Breaking down system functions into separate processes or threads promotes
modularity, which simplifies development, maintenance, and scalability. Modular design
enhances flexibility, allowing for easier updates, replacements, and expansions of system
components without affecting the entire system.

By providing an environment that supports process cooperation, developers can create more efficient, scalable,
and modular systems that better meet the needs of users and applications.

IPC Model

Interprocess communication (IPC) facilitates the exchange of data between cooperating processes. There are
two primary models for IPC: shared memory and message passing.

a. Shared Memory Model: In this model, cooperating processes share a common region of memory.
Processes can read from and write to this shared memory region to exchange data. Shared memory
provides fast communication since processes can directly access the shared data. However, it requires
careful synchronization to avoid race conditions and ensure data integrity.

b. Message Passing Model: In this model, communication occurs through messages exchanged between
processes. Processes send messages containing data to other processes, which receive and process these
messages. Message passing provides a more structured approach to communication, making it easier to
manage and control data flow between processes. However, it may incur overhead due to message
passing operations and buffer management.

Both models have their advantages and trade-offs, and the choice between them depends on factors such as the nature of the application, performance requirements, and programming preferences. Figure 3.11 illustrates the differences between these two IPC models.

IPC in Shared-Memory Systems

Producer - is a process or thread responsible for generating data, items, or resources that are consumed by
another process or thread called the consumer.
Consumer - is a process or thread responsible for consuming or utilizing the items produced by a producer.

Two types of Buffers


Unbounded Buffer - also known as an unbounded queue or an unbounded channel; a buffer used for interprocess communication (IPC) in concurrent programming that can accommodate an unlimited number of items.
Bounded Buffer – a buffer which has a fixed size and can only hold a limited number of items.
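
A minimal C sketch of a bounded (circular) buffer with produce and consume operations; synchronization between a concurrent producer and consumer is omitted here, since it is the subject of later chapters.

#include <stdio.h>

#define BUFFER_SIZE 8    /* fixed size: the buffer is bounded */

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;   /* next free slot / next item to consume */

static int produce(int item) {
    if ((in + 1) % BUFFER_SIZE == out) return 0;   /* full: producer must wait */
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    return 1;
}

static int consume(int *item) {
    if (in == out) return 0;                       /* empty: consumer must wait */
    *item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return 1;
}

int main(void) {
    produce(7);
    produce(9);
    int v;
    while (consume(&v))
        printf("consumed %d\n", v);   /* 7 then 9: items leave in FIFO order */
    return 0;
}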

IPC in Message-Passing Systems
1. Shared Memory vs. Message Passing:
 Shared memory requires processes to share a region of memory explicitly managed by the application
programmer.
 Message passing, facilitated by the operating system, allows processes to communicate without sharing
memory.

2. Message Passing:
 Message passing is useful in distributed environments where processes may reside on different
computers connected by a network.
 It involves operations like send(message) and receive(message).

3. Fixed vs. Variable-sized Messages:


 Fixed-sized messages simplify system-level implementation but make programming more complex.
 Variable-sized messages complicate system-level implementation but simplify programming.

4. Naming:
 Processes communicate using direct or indirect communication.
 Direct communication involves explicit naming of sender and receiver.
 Indirect communication involves communication via mailboxes or ports.

5. Synchronization:
 Message passing operations can be blocking or non-blocking (synchronous or asynchronous).
 Blocking operations wait until the message is sent or received, while non-blocking operations continue
immediately.

6. Buffering:
 Messages exchanged by processes reside in temporary queues.
 Queues can be zero-capacity (no buffering), bounded capacity (finite length), or unbounded capacity
(potentially infinite).

Each of these points addresses different aspects of message passing systems, including communication
mechanisms, synchronization options, and buffering strategies, providing a comprehensive understanding of
message passing in operating systems.
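
A minimal sketch of message passing between related processes using a POSIX pipe: the child sends, the parent blocks in receive, and no memory region is shared.

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                        /* child: the sender */
        close(fd[0]);                         /* will not read */
        const char msg[] = "hello from child";
        write(fd[1], msg, sizeof msg);        /* send(message) */
        close(fd[1]);
        return 0;
    }
    close(fd[1]);                             /* parent: the receiver; will not write */
    char buf[64];
    if (read(fd[0], buf, sizeof buf) > 0)     /* receive(message): blocks until data arrives */
        printf("received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}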

QUESTIONS

MC

1. In what state is a process that is in secondary memory but available for execution as soon as it is loaded into main memory?
a. Blocked
b. Ready/Suspended
c. Ready
d. Blocked/Suspended
2. A ____ is a unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources.
a. Identifier
b. Process
c. State
d. Kernel
3. The portion of the operating system that selects the next process to run.
a. Trace
b. Process Control Block
c. Dispatcher
d. PSW

TRUE or FALSE

1. The Process Control Block is the key tool that enables the OS to support multiple processes and to
provide for multiprocessing. TRUE
2. A process switch may occur any time that the OS has gained control from the currently running process.
TRUE
3. If a system does not employ virtual memory each process to be executed must be fully loaded into main
memory. TRUE

IDENTIFICATION

1. A process is in the _____ state when it is in secondary memory and awaiting an event.
Ans: Blocked/Suspended
2. A significant point about the ______ is that it contains sufficient information so that it is possible to
interrupt a running process and later resume execution as if the interruption had not occurred.
Ans: Process Control Block
3. It is a layer of software between the application and the computer hardware that supports applications
and utilities.
Ans: Operating System (OS)

CHAPTER 4: THREADS & CONCURRENCY

4.1 Overview
A thread, consisting of a thread ID, program counter, register set, and stack, is a fundamental element of CPU
utilization. It shares code and resources with other threads within the same process, enabling parallel execution
of tasks. Unlike traditional single-threaded processes, which have a single thread of control, multithreaded
processes can perform multiple tasks simultaneously. This distinction is illustrated in Figure 4.1, highlighting
the efficiency gains of multithreading in modern computing environments.
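
A minimal POSIX threads sketch of this distinction (compile with -pthread): a second thread of control runs inside the same process and shares its global data.

#include <pthread.h>
#include <stdio.h>

static int shared = 0;    /* data section: visible to every thread in the process */

static void *worker(void *arg) {
    shared += *(int *)arg;   /* threads share the process's code and data */
    return NULL;
}

int main(void) {
    pthread_t tid;
    int delta = 5;
    pthread_create(&tid, NULL, worker, &delta);  /* create a second thread of control */
    pthread_join(tid, NULL);                     /* wait for it to finish */
    printf("shared = %d\n", shared);             /* prints 5 */
    return 0;
}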

4.1.1 Motivation
Multithreading is ubiquitous in modern computing, enabling software applications to perform multiple tasks
simultaneously. Examples include photo thumbnail generation, where separate threads process individual
images, and web browsers that concurrently display content while fetching data from the network. Leveraging
multicore systems, applications can execute CPU-intensive tasks in parallel, enhancing performance.

In scenarios like web server management, where multiple clients access the server concurrently, multithreading
offers efficiency over traditional single-threaded processes. Instead of creating separate processes for each client
request, a multithreaded server creates threads to handle requests, reducing resource overhead and improving
responsiveness.

Moreover, multithreading extends to operating system kernels, where multiple threads manage diverse tasks
such as device management and memory handling. Additionally, many applications benefit from multiple
threads, including sorting algorithms and data processing tasks.

Overall, multithreading is essential for maximizing computing resources, improving responsiveness, and
optimizing performance across various computing domains.

4.1.2 Benefits
The benefits of multithreaded programming can be broken down into four major categories:
1. Responsiveness. Multithreading improves application responsiveness by allowing it to stay active
during time-consuming operations. This is especially important for user interfaces, where uninterrupted
responsiveness is vital. Unlike single-threaded applications, which can become unresponsive during
such tasks, multithreading enables concurrent processing of tasks and user interactions. Thus, in a
multithreaded application, performing a time-consuming operation, like clicking a button, doesn't hinder
user interaction, as the application can continue responding to other inputs simultaneously.

2. Resource sharing. Processes share resources through explicit techniques like shared memory and
message passing, arranged by the programmer. Threads, on the other hand, inherently share the memory
and resources of their process. This sharing enables multiple threads to operate within the same address
space, facilitating efficient code and data sharing within applications.

3. Economy. Thread creation is more economical than process creation due to shared resource allocation
within processes. Threads share memory and resources, making them more efficient to create and switch
between compared to processes. While measuring overhead differences can be challenging, thread
creation generally consumes less time and memory, with faster context switching between threads than
between processes.

4. Scalability. Multithreading offers significant benefits in multiprocessor architectures, where threads can
run concurrently on different cores. Unlike single-threaded processes limited to a single processor,
multithreading maximizes utilization of available processing cores, enhancing performance. Further
exploration of this topic is discussed in the following section.

4.2 Multicore Programming


Single-CPU systems evolved into multi-CPU systems to meet increasing computing demands. A subsequent
trend introduced multicore systems, where multiple cores on a single chip act as separate CPUs to the operating

system. Multithreaded programming optimizes these multicore systems, enhancing concurrency. In a single-
core system, concurrency involves interleaved thread execution over time, as the core can handle only one
thread at a time (Figure 4.3). However, in multicore systems, concurrency allows threads to run in parallel, with
each core executing a separate thread simultaneously. This distinction underscores the efficiency of multicore
architectures in handling multiple tasks concurrently.

Concurrency involves multiple tasks making progress simultaneously, while parallelism entails actual
simultaneous task execution. Thus, concurrency can exist without parallelism. In single-processor systems
before multiprocessor and multicore architectures, CPU schedulers facilitated concurrency by rapidly switching
between processes, giving the illusion of parallelism. Despite running concurrently, processes were not
executing in parallel.
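
Since the review questions at the end of this chapter refer to it, it is worth recalling Amdahl's Law here: if a fraction S of an application is serial and the rest is perfectly parallelizable over N cores, the maximum speedup is 1 / (S + (1 - S)/N). For example, with S = 0.25 and N = 4 the speedup is at most 1 / (0.25 + 0.75/4) ≈ 2.3, and no matter how many cores are added the speedup can never exceed 1/S = 4.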

4.2.1 Programming Challenges


The rise of multicore systems intensifies the demand on system designers and application programmers to
optimize core utilization. Operating system designers must craft scheduling algorithms to leverage multiple
cores for parallel execution. Meanwhile, application programmers face the task of adapting existing programs
and creating new ones that utilize multithreading effectively. Programming for multicore systems poses
challenges in five key areas. Firstly, identifying tasks suitable for concurrent execution is crucial, with
preference given to tasks independent of each other. Secondly, achieving balance in task workload ensures
efficient core utilization, avoiding overburdening certain cores. Thirdly, data splitting among cores parallels
task division, optimizing resource usage. Data dependency, the fourth consideration, necessitates
synchronization to manage dependencies between tasks accessing shared data. Lastly, testing and debugging
pose significant challenges due to the myriad of possible execution paths in parallel programs. Consequently,
many developers advocate for a paradigm shift in software design to accommodate multicore systems, while
educators emphasize the importance of parallel programming in software development curricula.

4.2.2 Types of Parallelism


In parallel computing, two main types of parallelism exist: data parallelism and task parallelism. Data
parallelism entails distributing subsets of data across multiple cores, with each core executing the same
operation on its data. Task parallelism involves distributing distinct tasks (threads) across cores, where each
thread performs a unique operation. While data parallelism focuses on data distribution, task parallelism focuses
on task distribution. However, applications can employ a hybrid approach, combining both strategies for
optimal performance. This distinction between data and task parallelism underscores the flexibility and
versatility of parallel computing paradigms.
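
A minimal Pthreads sketch of data parallelism follows, summing an array in two slices; the array contents and thread count are illustrative. Task parallelism would instead hand each thread a different function (say, one thread summing while another searches).

#include <pthread.h>
#include <stdio.h>

#define N 8

static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct range { int lo, hi; long sum; };

/* Each thread runs the same operation (summing) on its own slice of
 * the array -- data parallelism. */
static void *partial_sum(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    struct range a = {0, N / 2, 0}, b = {N / 2, N, 0};

    pthread_create(&t1, NULL, partial_sum, &a);
    pthread_create(&t2, NULL, partial_sum, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("total = %ld\n", a.sum + b.sum);   /* 36 */
    return 0;
}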

4.3 Multithreading Models
Threads can be supported either at the user level or by the kernel. User threads are managed without kernel
support, while kernel threads are directly managed by the operating system. Most modern operating systems,
like Windows, Linux, and macOS, support kernel threads. Establishing a relationship between user and kernel
threads is essential. Three common models for this relationship are the many-to-one, one-to-one, and many-to-
many models. These models define how user threads are mapped to kernel threads, influencing thread
management and performance.

4.3.1 Many-to-One Model


In the many-to-one threading model, multiple user-level threads are mapped to a single kernel thread. Thread
management occurs in user space, which is efficient, but blocking system calls can cause the entire process to
block. Additionally, because only one thread can access the kernel at a time, parallel execution on multicore
systems is limited. Green threads, utilized in Solaris systems and early Java versions, employed the many-to-
one model. However, this model has become less common due to its inability to leverage multiple processing
cores, which are now standard in most computer systems.

4.3.2 One-to-One Model

In the one-to-one threading model, each user thread corresponds to a kernel thread. This model enhances
concurrency as it allows other threads to execute when one blocks on a system call, and facilitates parallel
execution on multiprocessors. However, a drawback is the overhead of creating kernel threads for each user
thread, which can impact system performance with a large number of threads. Linux and Windows operating
systems adopt the one-to-one model for threading implementation.

4.3.3 Many-to-Many Model


In the many-to-many threading model, multiple user-level threads are mapped to a smaller or equal number of
kernel threads. The number of kernel threads can vary based on the application or the machine's processing
capabilities. For instance, an application may be allocated more kernel threads on a system with eight
processing cores compared to one with four cores. This model allows for efficient utilization of both user and
kernel resources, adapting thread management to the specific needs of the application and the underlying
hardware.

The many-to-one threading model allows unlimited creation of user threads but lacks parallelism due to single-
threaded kernel scheduling. The one-to-one model enhances concurrency but requires cautious thread creation
due to system limitations. Conversely, the many-to-many model addresses these issues, allowing unrestricted
user thread creation while enabling parallel execution on multiprocessors and efficient handling of blocking
system calls. A variation, the two-level model, combines multiplexing and user-to-kernel thread binding. While
the many-to-many model offers flexibility, its complexity makes implementation challenging. Despite this, with
modern systems having more processing cores, the importance of limiting kernel threads has diminished.
Consequently, most operating systems now favor the one-to-one model, although contemporary concurrency
libraries utilize the many-to-many model for task mapping.

Questions
I. Multiple choice
1. It entails distributing subsets of data across multiple cores, with each core executing the same operation on its
data?
a) Thread parallelism
b) Data parallelism
c) Task parallelism
d) Subset

2. It combines multiplexing and user-to-kernel thread binding?


a) One-level model
b) Two-level model
c) Three-level model
d) Four-level model

3. This involves examining applications to find areas that can be divided into separate, concurrent tasks.
a) Identifying Tasks
b) Balance
c) Data Splitting
d) Data Dependency

II. True or False


1. Amdahl’s Law is a formula that identifies potential performance gains from adding and multiplying
additional computing cores to an application that has both serial (nonparallel) and parallel components. False

2. The benefits of multithreaded programming are: identifying tasks, achieving balance, splitting data, and data dependency. False

3. Many applications can also take advantage of multiple threads, including basic sorting, trees, and graph
algorithms. True

III. Identification
1. It provides a mechanism for more efficient use of these multiple computing cores and improved concurrency?
Multithreaded Programming

2. What is the law that identifies potential performance gains from adding additional computing cores to an
application that has both serial (nonparallel) and parallel components? Amdahl’s Law

3. What type of parallelism involves distributing not data but tasks (threads) across multiple computing core?
Task parallelism

CHAPTER 5: CPU Scheduling

5.1 Basic Concepts

CPU–I/O Burst Cycle: Process execution alternates between CPU bursts and I/O waits. Processes start with
CPU bursts, followed by I/O bursts, and the cycle repeats until termination.

CPU Scheduler: When the CPU is idle, the CPU scheduler selects a process from the ready queue for
execution. The ready queue can be implemented in various ways, such as FIFO, priority queue, etc.

Preemptive and Non-preemptive Scheduling: Scheduling decisions occur when processes switch states, such
as from running to waiting or ready. Non-preemptive scheduling allows a process to keep the CPU until it
voluntarily releases it, while preemptive scheduling forcibly reallocates the CPU, even if the process doesn't
release it voluntarily.

Dispatcher: The dispatcher is responsible for switching control of the CPU to the selected process. It involves
context switching, switching to user mode, and jumping to the appropriate location in the user program.
Dispatch latency refers to the time taken for these operations.

Context Switches: Context switches occur when the CPU switches between processes. They can be system-
wide or specific to individual processes. Context switches can be voluntary (when a process gives up control
due to resource unavailability) or nonvoluntary (when the CPU is taken away from a process).

5.2 Scheduling Criteria


1. CPU Utilization: The goal is to keep the CPU busy to maximize efficiency. CPU utilization typically
ranges from 40% to 90%. It's measured by observing the percentage of time the CPU is active.
2. Throughput: This measures the number of processes completed per unit of time. It reflects the
system's overall productivity and efficiency.
3. Turnaround Time: It's the total time taken for a process from submission to completion. It includes
waiting time in the ready queue, CPU execution time, and I/O time.
4. Waiting Time: It's the total time a process spends waiting in the ready queue. It doesn't include CPU
execution or I/O time.

5. Response Time: For interactive systems, it's crucial how quickly a process starts producing output
after submission. It's not the time taken to complete the response but the time to start responding.
The objective is to maximize CPU utilization and throughput while minimizing turnaround time, waiting
time, and response time. Usually, the focus is on optimizing the average value of these measures, but in
some cases, optimizing minimum or maximum values might be preferred. For interactive systems,
minimizing variance in response time is often more critical than minimizing average response time.

To evaluate CPU-scheduling algorithms, typically, many processes with sequences of CPU bursts and
I/O bursts are considered. However, for simplicity, examples often focus on a single CPU burst per
process, with the average waiting time being a common metric for comparison. More complex
evaluation mechanisms are discussed in detail in subsequent sections.
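
As a worked example with illustrative numbers: suppose a process is submitted at time 0, first gets the CPU at time 4, runs a single 6-millisecond CPU burst, and completes at time 10. Its response time is 4 ms (time until it first starts executing), its waiting time is 4 ms (total time in the ready queue), and its turnaround time is 10 - 0 = 10 ms (waiting time plus burst time).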

5.3 Scheduling Algorithm


1. First-come, First-Served (FCFS) Scheduling
 Simplest algorithm where the process that arrives first is served first.
 Implemented using a FIFO queue.
 Not optimal in terms of average waiting time, prone to convoy effect, and non-preemptive.
2. Shortest-Job-First (SJF) Scheduling
 Allocates CPU to the process with the smallest next CPU burst.
 Optimal for minimizing average waiting time.
 Difficult to implement due to the unpredictability of CPU burst lengths.
 Approximations like exponential averaging are used to estimate burst lengths.
3. Round-Robin (RR) Scheduling
 Preemptive version of FCFS where each process gets a time slice (time quantum) to execute.
 Uses a circular queue to manage processes.
 The average waiting time can be long, depending on the time quantum.
 Context-switch time should be small compared to the time quantum to avoid overhead.
4. Priority Scheduling
 Associates a priority with each process, and the CPU is allocated to the highest priority process.
 Can be preemptive or non-preemptive.
 Risk of indefinite blocking (starvation) of low-priority processes.
 Solutions include aging (gradually increasing priority) and combining with round-robin.
5. Multilevel Queue Scheduling
 Processes are divided into separate queues based on priority or type.
 Each queue has its scheduling algorithm.
 Priority queues can be managed statically or dynamically.
 Prevents starvation by allowing processes to move between queues.
6. Multilevel Feedback Queue Scheduling
 Allows processes to move between queues based on their CPU burst characteristics.
 Processes with short CPU bursts are given higher priority.
 Processes are promoted or demoted between queues based on their waiting time.
 Complex algorithm with configurable parameters to match specific system requirements. Each
algorithm has its advantages and disadvantages, and the choice of algorithm depends on factors such
as system workload, responsiveness requirements, and fairness considerations.
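
The following short C program works through the classic FCFS example (bursts of 24, 3, and 3 ms, all assumed to arrive at time 0 in that order, a simplification for illustration): each process waits for the bursts of everything ahead of it, giving an average waiting time of 17 ms. Reordering the queue to 3, 3, 24 would drop the average to 3 ms, which is the convoy effect in miniature.

#include <stdio.h>

/* FCFS waiting times, assuming all processes arrive at time 0 in the
 * order given. Process i waits for the bursts of all processes ahead
 * of it in the queue. */
int main(void) {
    int burst[] = {24, 3, 3};          /* CPU bursts in milliseconds */
    int n = 3, wait = 0, total = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, wait);
        total += wait;
        wait += burst[i];              /* the next process also waits this */
    }
    printf("average waiting time = %.2f ms\n", (double)total / n);  /* 17.00 */
    return 0;
}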

QUESTIONS:
Multiple Choice
1. Which scheduling algorithm is non-preemptive?
a) First-come, First-Served (FCFS)
b) Shortest-Job-First (SJF)
c) Round-Robin (RR)
Answer: a) First-come, First-Served (FCFS)
2. What is the primary goal of CPU scheduling?
a) Maximizing CPU burst length
b) Minimizing context switches
c) Maximizing CPU utilization
Answer: c) Maximizing CPU utilization
3. Which scheduling algorithm allows processes to move between queues based on their CPU burst
characteristics?
a) Priority Scheduling
b) Multilevel Queue Scheduling
c) Multilevel Feedback Queue Scheduling
Answer: c) Multilevel Feedback Queue Scheduling

True or False
4. Shortest-Job-First (SJF) Scheduling is optimal for minimizing average waiting time.
Answer: True

5. Multilevel Queue Scheduling prevents starvation by allowing processes to move between queues.
Answer: True
6. Priority Scheduling is always preemptive.
Answer: False
Identification
7. What is the dispatcher's responsibility?
Answer: Switching control of the CPU to the process selected by the CPU scheduler
8. What is the main issue with FCFS?
Answer: Long average waiting time (the convoy effect)
9. What is the primary scheduling goal?
Answer: Maximizing CPU utilization

CHAPTER 06: PROCESS SYNCHRONIZATION

TOPIC: 6.1 Synchronization Tools

BACKGROUND:

Synchronization tools play a crucial role in today's interconnected digital landscape, where individuals and
organizations rely on accessing and sharing information across multiple devices and platforms seamlessly.
These tools serve as the backbone for ensuring data consistency, collaboration, and accessibility, offering users
the ability to synchronize files, documents, and other data in real time or at scheduled intervals. At their core,
synchronization tools leverage various technologies such as cloud storage, peer-to-peer networking, and version
control systems to enable smooth and efficient synchronization processes. Cloud storage-based synchronization
services, like Dropbox, Google Drive, OneDrive, and iCloud, store data in centralized servers accessible from
anywhere with an internet connection. Users can upload, modify, or delete files on one device, and these
changes are automatically propagated to all synchronized devices linked to the same account.

One of the key benefits of synchronization tools is their ability to facilitate collaboration among multiple users
or teams. By granting access permissions and sharing capabilities, collaborators can work on the same
documents simultaneously, track changes, and maintain version history. This fosters productivity, streamlines
workflows, and reduces the risk of version conflicts or data loss.

Moreover, synchronization tools offer robust backup functionalities, serving as a safety net against data loss due
to device failure, accidental deletion, or unforeseen events. By continuously syncing data to the cloud or other
devices, users can ensure that their valuable information remains intact and accessible even in the event of
hardware failures or disasters. Additionally, synchronization tools are instrumental in enabling cross-platform
compatibility, allowing users to seamlessly transition between different devices and operating systems without
sacrificing data integrity or accessibility. Whether accessing files from a desktop computer, laptop, smartphone,
or tablet, users can expect a consistent and synchronized experience across all devices.

However, while synchronization tools offer numerous benefits, they also pose certain considerations, such as
privacy and security concerns. Users must be mindful of the data they choose to synchronize, understand the
terms of service and privacy policies of the chosen synchronization service, and implement appropriate security
measures to safeguard sensitive information.

In conclusion, synchronization tools have become indispensable in modern digital workflows, empowering
individuals and organizations to collaborate effectively, access their data anytime, anywhere, and ensure data
consistency and integrity across multiple devices and platforms. As technology continues to evolve,
synchronization tools will likely play an increasingly vital role in facilitating seamless connectivity and
productivity in the digital age.

6.2 Critical Section

CRITICAL SECTION

The Critical-Section Problem is a fundamental challenge in computer science, particularly in the context of
synchronization tools and concurrent programming. At its core, the problem arises when multiple concurrent
processes or threads access shared resources, leading to potential conflicts and inconsistencies. Synchronization
tools must address this challenge to ensure the integrity and correctness of data and operations.

In the realm of synchronization tools, the Critical-Section Problem manifests when multiple users or processes
attempt to access or modify shared data or resources simultaneously. Without proper synchronization
mechanisms in place, these concurrent operations can result in race conditions, data corruption, or unpredictable
behavior.

To mitigate the Critical-Section Problem, synchronization tools employ various techniques and synchronization
primitives, such as locks, semaphores, and mutexes. These mechanisms help coordinate access to critical
sections of code or shared resources, ensuring that only one process or thread can execute within the critical
section at a time.

For example, consider a cloud-based synchronization tool that allows multiple users to edit a shared document
simultaneously. Without proper synchronization, two users might attempt to save conflicting changes to the
document simultaneously, leading to data corruption or loss. By employing synchronization techniques such as
locks or version control systems, the synchronization tool can enforce a sequential execution of edits, ensuring
that changes are applied in a coordinated and consistent manner.

However, implementing synchronization mechanisms introduces its own set of challenges, such as deadlock,
livelock, and contention. Deadlock occurs when two or more processes are unable to proceed because each is
waiting for the other to release a resource. Livelock occurs when processes continuously change their states in
response to each other's actions, but none make progress. Contention arises when multiple processes compete
for access to the same resource, potentially leading to performance bottlenecks.

Addressing these challenges requires careful design, thorough testing, and optimization of synchronization
mechanisms within synchronization tools. Additionally, developers must consider factors such as scalability,
efficiency, and compatibility across different platforms and environments.

6.3 Peterson’s Solution

Peterson's Solution is a classic algorithm used to address the Critical-Section Problem in concurrent
programming. Proposed by Gary L. Peterson in 1981, this solution provides a simple yet effective way to
synchronize concurrent processes or threads accessing shared resources.

At its core, Peterson's Solution relies on two shared variables: a flag array and a turn variable. Each process or thread that wishes to enter the critical section sets its flag to indicate its desire to access the critical section. Additionally, it sets the turn variable to the other process, politely yielding the turn to it.

The solution works as follows:

1. Each process sets its flag to indicate its intent to enter the critical section and sets the turn variable to the other process, yielding the turn.

2. Before entering the critical section, a process checks the flags of other processes and the turn variable to
determine if it can proceed.

3. A process waits as long as the other process's flag is set and it is the other process's turn; once either condition changes, it proceeds into the critical section.

4. Once a process exits the critical section, it resets its flag, allowing other processes to proceed.

Peterson's Solution ensures that only one process can enter the critical section at a time while avoiding issues
such as deadlock and livelock. However, it has limitations, particularly in scenarios involving more than two
processes or in distributed systems.

Despite its simplicity, Peterson's Solution serves as a foundational concept in concurrent programming and
synchronization, laying the groundwork for more sophisticated synchronization techniques and algorithms.
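
A minimal C sketch of Peterson's Solution for two threads follows. One practical caveat: with plain variables the algorithm breaks on modern processors that reorder memory operations, so this sketch uses C11 sequentially consistent atomics to preserve the required ordering.

#include <stdatomic.h>
#include <stdbool.h>

/* Peterson's solution for two threads (i = 0 or 1). Shared state: */
static atomic_bool flag[2];   /* flag[i]: thread i wants to enter      */
static atomic_int  turn;      /* which thread has priority to enter    */

void enter_critical(int i) {
    int other = 1 - i;
    atomic_store(&flag[i], true);    /* announce intent                */
    atomic_store(&turn, other);      /* politely yield the turn        */
    /* Wait while the other thread wants in AND it holds the turn. */
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;  /* busy-wait */
}

void exit_critical(int i) {
    atomic_store(&flag[i], false);   /* allow the other thread in      */
}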

6.4 Hardware Support for Synchronization

Hardware support for synchronization refers to the features and mechanisms provided by modern computer
hardware to facilitate efficient and reliable synchronization of concurrent processes or threads. These hardware-
level capabilities are crucial for ensuring the correct and optimal execution of concurrent programs, especially
in multi-core and multi-processor systems.

Some common hardware features that support synchronization include:

1. ATOMIC INSTRUCTIONS: Modern processors often provide atomic instructions, such as compare-and-
swap (CAS) or test-and-set, which allow for indivisible operations on memory locations. These atomic
operations are essential for implementing synchronization primitives like locks, semaphores, and barriers
efficiently.

2. MEMORY BARRIERS: Hardware memory barriers, also known as memory fences, ensure the ordering
and visibility of memory operations across multiple processors or cores. They prevent reordering of memory
accesses and enforce consistency, which is crucial for maintaining the correctness of concurrent programs.

3. CACHE COHERENCE PROTOCOLS: Multi-core processors typically employ cache coherence protocols
to ensure that multiple processor cores have consistent views of shared memory. These protocols manage data
coherence and synchronization between processor caches, ensuring that updates made by one core are visible to
other cores in a timely and coherent manner.

4. TRANSACTIONAL MEMORY: Some modern processors feature support for transactional memory,
which allows programmers to define regions of code as transactions. Transactions provide a higher-level
abstraction for synchronization, enabling atomic and isolated execution of groups of instructions. Hardware-
based transactional memory implementations aim to reduce contention and overhead associated with traditional
locking mechanisms.

5. MEMORY ORDERING GUARANTEES: Hardware architectures define memory ordering guarantees that
specify the ordering constraints for memory accesses performed by different processor cores or threads. These
guarantees ensure that memory operations are observed in a consistent and predictable order, which is essential
for synchronization and data consistency.

6. SPECIALIZED SYNCHRONIZATION INSTRUCTIONS: Some processors offer specialized instructions designed specifically for synchronization tasks, such as fetch-and-add or load-linked/store-conditional instructions. These instructions provide efficient mechanisms for implementing synchronization primitives and are often used in lock-free and wait-free algorithms.

Overall, hardware support for synchronization plays a crucial role in enabling efficient and scalable concurrent
programming on modern computer systems. By leveraging these hardware-level features, developers can
implement synchronization mechanisms that are both efficient and reliable, ensuring the correct and optimal
execution of concurrent programs.
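
As an illustration of how an atomic instruction underpins a lock, here is a minimal spinlock built on compare-and-swap using C11 atomics; production implementations add refinements such as backoff, which are omitted in this sketch.

#include <stdatomic.h>

/* A minimal spinlock built on an atomic compare-and-swap (CAS).
 * 0 = unlocked, 1 = locked. */
static atomic_int lock = 0;

void spin_lock(void) {
    int expected = 0;
    /* Atomically: if lock == 0, set it to 1; otherwise retry.
     * compare_exchange_weak writes the observed value into `expected`
     * on failure, so it must be reset before each attempt. */
    while (!atomic_compare_exchange_weak(&lock, &expected, 1))
        expected = 0;                 /* spin until acquired */
}

void spin_unlock(void) {
    atomic_store(&lock, 0);           /* release the lock */
}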

6.5 Mutex Locks

A mutex (short for mutual exclusion) lock is a synchronization mechanism used to ensure that only one
thread can access a shared resource at a time, preventing race conditions and data corruption. Here's a summary:

1. Exclusive Access: A mutex lock allows only one thread to enter a critical section of code at a time. Other
threads attempting to enter the critical section will be blocked until the mutex is released.

2. Locking and Unlocking: Threads acquire a mutex lock before accessing the shared resource and release it
afterward. This ensures that only one thread can execute the critical section of code at any given time.

3. Blocking: If a thread attempts to acquire a mutex lock that is already held by another thread, it will be
blocked until the lock is released. This prevents multiple threads from accessing the critical section
simultaneously.

4. Deadlocks: Improper use of mutex locks can lead to deadlocks, where two or more threads are waiting for
each other to release resources they need. Careful design and programming practices are necessary to avoid
deadlocks.

5. Performance Considerations: Mutex locks incur some overhead due to context switching and
synchronization, so they should be used judiciously. In some cases, other synchronization mechanisms such as
semaphores or condition variables may be more appropriate.

Overall, mutex locks are a fundamental tool for ensuring thread safety and preventing data corruption in multi-
threaded programs.
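
A minimal Pthreads sketch of points 1-3: two or more threads running increment() would race on the shared counter without the lock; with it, each update happens inside the critical section. The counter and iteration count are illustrative.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                  /* shared resource */

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);        /* acquire; blocks if held */
        counter++;                        /* critical section */
        pthread_mutex_unlock(&lock);      /* release for other threads */
    }
    return NULL;
}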

Disadvantages: One drawback of mutex locks is busy waiting, where a process continuously checks the lock's
availability, wasting CPU cycles. This can be inefficient, especially in a system with many processes
contending for the lock.

Spinlocks: Mutex locks that use busy waiting are often referred to as spinlocks. While spinlocks can be
efficient in certain scenarios, they can lead to performance issues if the critical section is held for an extended
period.

Alternatives: To avoid busy waiting, systems may implement other synchronization mechanisms, such as
semaphores or condition variables, which allow processes to sleep and be awakened when the lock becomes
available.

6.6 Semaphores

Semaphores are another synchronization tool used in multi-threaded programming, offering more flexibility
than mutex locks. Here's a summary:

1. Counting Mechanism: Unlike mutex locks, which provide exclusive access to a shared resource,
semaphores can control access to a resource by multiple threads simultaneously. They maintain a count to track
the number of resources available.

2. Two Types: There are two types of semaphores:

- Binary Semaphores: Also known as mutex semaphores, these have a count of either 0 or 1, effectively
behaving like mutex locks, allowing only one thread to access a resource at a time.

- Counting Semaphores: These have a count greater than 1, allowing multiple threads to access a resource
concurrently, up to a specified limit.

3. Operations: Semaphores support two main operations:

- Wait (P) Operation: Decreases the semaphore count by 1. If the count is already zero, the calling thread
may be blocked until the count becomes greater than zero.

- Signal (V) Operation: Increases the semaphore count by 1. If there are threads blocked on the semaphore,
one of them is unblocked.

4. Flexibility: Semaphores can be used to solve a variety of synchronization problems, including producer-
consumer, readers-writers, and dining philosophers problems.

5. Performance: While semaphores offer more flexibility than mutex locks, they also incur slightly higher
overhead due to maintaining a count and potentially managing multiple threads accessing the same resource
simultaneously.

In summary, semaphores are a powerful synchronization mechanism that can handle more complex scenarios
than mutex locks, making them suitable for a wide range of multi-threaded programming tasks.
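
A minimal POSIX sketch of a counting semaphore initialized to 3, so up to three of the five worker threads may hold a "resource" at once while the rest block in sem_wait(); the thread count and initial value are illustrative. A binary semaphore is the same code with the count initialized to 1.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

/* Counting semaphore guarding 3 identical resources. */
static sem_t slots;

static void *worker(void *arg) {
    sem_wait(&slots);                 /* P operation: acquire a slot */
    printf("thread %ld using a resource\n", (long)arg);
    sem_post(&slots);                 /* V operation: release it     */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&slots, 0, 3);           /* 0 = shared among threads; count 3 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&slots);
    return 0;
}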

6.7 Monitors

Monitors are abstract data types (ADTs) that include a set of programmer-defined operations with mutual
exclusion. They encapsulate variables and functions that operate on those variables, ensuring that only one
process at a time can be active within the monitor.

 Monitors:
- Provide effective process synchronization with mutual exclusion.
- Address timing errors inherent in semaphore and mutex lock usage.
- Introduce higher-level synchronization construct called monitors.

 Monitor Usage:
- Monitors encapsulate data with a set of functions for independent operation.
- Ensure mutual exclusion within monitor, allowing only one active process at a time.
- Monitor's synchronization constraint not explicitly coded by the programmer.
- Incorporate condition variables for more complex synchronization schemes.

 Implementing Monitors with Semaphores:


- Use binary semaphore mutex for mutual exclusion.
- Implement signal-and-wait scheme with an additional binary semaphore next.
- Introduce condition variables using binary semaphores and integer variables.
- Illustrate resuming processes within a monitor using first-come, first-served ordering or conditional-
wait construct.

 Challenges with Monitors:

- Cannot guarantee correct access sequence.
- Risk of uncooperative processes bypassing the monitor's mutual exclusion mechanism.
- Inspection of program usage necessary to ensure correctness, especially in dynamic systems.
 Resuming Processes within a Monitor
- Process-resumption order within a monitor can be determined by using a first-come, first-served
ordering or a conditional-wait construct.
- The conditional-wait construct has the form x.wait(c) where c is an integer expression evaluated when
the wait() operation is executed.
- Processes suspended on a condition variable can be resumed based on their priority number, stored when
the wait() operation is executed.
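
C has no built-in monitor construct, so the following sketch approximates one with a mutex (the monitor's implicit mutual exclusion) plus a condition variable (playing the role of x.wait() and x.signal()); the single boolean resource is illustrative.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static bool resource_free = true;

void monitor_acquire(void) {
    pthread_mutex_lock(&m);               /* enter the monitor */
    while (!resource_free)                /* x.wait(): release m and sleep */
        pthread_cond_wait(&ready, &m);
    resource_free = false;
    pthread_mutex_unlock(&m);             /* leave the monitor */
}

void monitor_release(void) {
    pthread_mutex_lock(&m);
    resource_free = true;
    pthread_cond_signal(&ready);          /* x.signal(): wake one waiter */
    pthread_mutex_unlock(&m);
}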

6.8 Liveness

 Liveness
- Processes must make progress during their execution life cycle.

- Indefinite waiting violates progress and bounded-waiting criteria.

- Liveness failures result in poor performance and responsiveness.

 Deadlock
- Occurs when processes wait indefinitely for an event caused by another waiting process.

- Illustrated by processes P0 and P1 deadlocked over semaphores S and Q.

- Deadlocked processes wait for events caused by each other.

 Priority Inversion
- Higher-priority processes wait for lower-priority processes to finish with resources.

- Complicated by preemption, where a lower-priority process is preempted by a higher-priority one.

- Priority inheritance protocol temporarily elevates the priority of processes accessing resources needed by
higher-priority processes.

6.9 Evaluation

 Various synchronization tools are available to address the critical-section problem.


 Correct implementation and usage of these tools ensure mutual exclusion and address liveness issues.
 With the rise of concurrent programs on modern multicore systems, there's a growing focus on
synchronization tool performance.
 Strategies are needed to determine when to use specific synchronization tools effectively.
 CAS-based lock-free algorithms, while gaining popularity for their low overhead, can be challenging to
develop and test.
 CAS-based approaches are optimistic, while mutual-exclusion locking is pessimistic.
 Guidelines suggest that CAS protection is faster than traditional synchronization under moderate contention.

 Choice of synchronization mechanism affects system performance; for instance, atomic integers are lighter
weight than locks.
 Higher-level tools like monitors and condition variables offer simplicity but may have overhead and
scalability issues.
 Ongoing research aims to develop scalable and efficient synchronization tools.
 Examples include designing compilers, developing languages, and improving existing libraries and APIs.
 Uncontended: CAS protection is somewhat faster than traditional synchronization.
 Moderate contention: CAS protection is faster— possibly much faster— than traditional synchronization.
 High contention: Under very highly contended loads, traditional synchronization will ultimately be faster
than CAS-based synchronization.

QUESTIONS

Multiple Choice

1. Who proposed Peterson's Solution for the Critical-Section Problem?

a. Gary Booch

b. Gary Peterson

c. Gary Peters

d. Peter Garson

2. It temporarily elevates the priority of processes accessing resources needed by higher priority processes.

a. Moderate Contention

b. Uncontended

c. High Contention
d. Priority Inheritance Protocol

3. All are hardware feature that supports synchronization in modern computer systems EXCEPT which one?

a. MEMORY BARRIERS

b. ATOMIC INSTRUCTIONS

c. TRANSACTIONAL MEMORY

d. MEMORY BASED SYSTEM

TRUE OR FALSE

1. Deadlock failures result in poor performance and responsiveness.

2. The Wait (P) operation of a semaphore decreases its count by 1.

3. Both binary and counting semaphores allow multiple threads to access a resource concurrently.

IDENTIFICATION

1. CAS protection is somewhat faster than traditional synchronization.

2. It is a classic algorithm commonly employed to tackle the Critical-Section Problem in concurrent programming.

3. It provides a higher-level abstraction for synchronization, enabling atomic and isolated execution of groups of instructions.

ANSWER KEY

MULTIPLE CHOICE

1. B

2. D

3. D

TRUE OR FALSE

1. False

2. True

3. False

IDENTIFICATION

1. Uncontended

2. Peterson's Solution

3. Transactions (transactional memory)

CHAPTER 07

Synchronization Examples

7.1 Classic Problems of Synchronization

 Bounded-Buffer Problem: This problem involves coordinating a producer process that generates data and
a consumer process that consumes it, both sharing a fixed-size buffer (bounded buffer). The goal is to
ensure that the producer doesn't write data into the buffer when it's full and the consumer doesn't read from
it when it's empty. The solution typically involves using semaphores or mutex locks to control access to the buffer (see the sketch after this list).

 Readers-Writers Problem: In a scenario where multiple processes access a shared database, some may
only read while others may both read and write. The challenge is to ensure that writers have exclusive
access to the database while they are writing, to avoid data inconsistency. Various versions of this problem
exist, including prioritizing readers or writers, and solutions often involve semaphores or mutex locks to
manage access.

 Dining-Philosophers Problem: This classic synchronization problem involves five philosophers seated
around a circular table, each alternating between thinking and eating. Each philosopher needs two
chopsticks to eat, but there are only five chopsticks available (one between each pair of philosophers). The
challenges in this problem include preventing deadlock, where each philosopher holds one chopstick and
waits indefinitely for the other, and avoiding starvation, where a philosopher may never get a chance to eat.
Solutions include using semaphores to represent chopsticks and implementing strategies such as
asymmetrical chopstick acquisition or utilizing monitors to control access to chopsticks.

- Semaphore Solution: One approach is to represent each chopstick with a semaphore, where
philosophers try to acquire chopsticks through wait() operations and release them through signal()
operations. However, this approach can lead to a deadlock if all philosophers simultaneously pick up
one chopstick each.
- Monitor Solution: Another solution involves using monitors to manage access to chopsticks.
Philosophers can only pick up both chopsticks if both are available, and they must release both
chopsticks when done eating. This approach ensures deadlock-free execution but may still lead to
starvation, where a philosopher never gets a chance to eat. Further solutions may be required to address
this issue
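
Here is the semaphore-based bounded-buffer sketch promised above: empty counts free slots (initially SIZE), full counts filled slots (initially 0), and a mutex protects the buffer indices; the buffer size is illustrative. The producer blocks in sem_wait(&empty) when the buffer is full, and the consumer blocks in sem_wait(&full) when it is empty.

#include <semaphore.h>
#include <pthread.h>

#define SIZE 8

static int buffer[SIZE];
static int in = 0, out = 0;
static sem_t empty, full;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void) {
    sem_init(&empty, 0, SIZE);           /* all slots start empty    */
    sem_init(&full, 0, 0);               /* no items yet             */
}

void produce(int item) {
    sem_wait(&empty);                    /* block if buffer is full  */
    pthread_mutex_lock(&mutex);
    buffer[in] = item;
    in = (in + 1) % SIZE;
    pthread_mutex_unlock(&mutex);
    sem_post(&full);                     /* one more item available  */
}

int consume(void) {
    int item;
    sem_wait(&full);                     /* block if buffer is empty */
    pthread_mutex_lock(&mutex);
    item = buffer[out];
    out = (out + 1) % SIZE;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty);                    /* one more free slot       */
    return item;
}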

7.2 Synchronization within the Kernel

The synchronization mechanisms within the kernel of Windows and Linux operating systems are crucial
for ensuring proper coordination and resource management among concurrent threads or processes. Here's a
summary of synchronization in both Windows and Linux kernels:

Synchronization in Windows:

 Single-Processor System: When accessing global resources, Windows temporarily masks interrupts for all
interrupt handlers that may also access the resource.
 Multiprocessor System: Windows uses spinlocks to protect access to global resources. The kernel ensures
that a thread will not be preempted while holding a spinlock for efficiency reasons.
 Thread Synchronization: Windows provides dispatcher objects for thread synchronization, including
mutex locks, semaphores, events, and timers. Threads synchronize by acquiring ownership of these objects,
and shared data is protected accordingly.
 Dispatcher Object States: Dispatcher objects can be in a signaled or non-signaled state. Signaled objects
are available, while non-signaled objects cause threads to block until they become signaled.
 Critical-Section Objects: These are user-mode mutexes that can often be acquired and released without
kernel intervention. On multiprocessor systems, spinlocks are initially used, and if spinning takes too long, a
kernel mutex is allocated.

Synchronization in Linux:

 Kernel Preemption: Linux kernels can be preemptive, allowing tasks to be preempted even when running
in kernel mode. Preemption is controlled using system calls like preempt_disable() and preempt_enable().
 Synchronization Mechanisms: Linux provides various synchronization mechanisms within the kernel,
including atomic integers, mutex locks, spinlocks, and semaphores.
 Atomic Integers: Represented by atomic_t data type, atomic integers ensure that all math operations are
performed without interruption. They are useful for updating shared variables efficiently.
 Mutex Locks: Tasks must acquire mutex locks before entering critical sections and release them afterward.
If a lock is unavailable, the task is put into a sleep state until the lock becomes available.
 Spinlocks: Used on SMP (Symmetric Multiprocessing) machines for short-duration locking. On single-
processor systems, spinlocks are replaced by enabling and disabling kernel preemption.
 Preempt Count: Each task in Linux has a preempt count to indicate the number of locks held. If a task is
holding a lock, kernel preemption is disabled to ensure safety.

Both Windows and Linux kernels offer comprehensive synchronization mechanisms tailored to their
respective architectures and requirements, ensuring efficient and safe operation in multithreaded and
multiprocessor environments.

QUESTIONS:

Multiple Choice

1. In the classic Dining-Philosophers Problem, which of the following is NOT a challenge faced in
synchronization?

A) Ensuring the philosophers always eat in a specific order.

B) Preventing deadlock.

C) Avoiding starvation.

D) Managing access to chopsticks.

2. Which synchronization mechanism is primarily used in Linux for short-duration locking on SMP machines?

A) Mutex locks

B) Semaphores

C) Spinlocks

D) Atomic integers

3. What type of objects does Windows provide for thread synchronization?

A) Dispatched threads

B) Mutex locks

C) Semaphore events

D) Kernel mutexes

True or False

1. In Windows, critical-section objects are exclusively kernel-mode mutexes.

2. Linux kernels do not support preemptive multitasking.

3. Atomic integers in Linux ensure that all math operations are performed without interruption.

Identification
1. What classic synchronization problem involves coordinating a producer process and a consumer process
sharing a fixed-size buffer?

2. Which kernel synchronization mechanism in Windows ensures that a thread will not be preempted while
holding a lock?

3. What synchronization mechanism within the Linux kernel is used to protect critical sections by putting tasks
into a sleep state if the lock is unavailable?

ANSWER KEY:

Multiple Choice

1. A
2. C
3. C
True or False

1. False
2. False
3. True
Identification

1. Bounded-Buffer Problem
2. Spinlocks
3. Mutex Locks

CHAPTER 8: DEADLOCKS

In a multiprogramming environment, multiple threads may compete for limited resources. When a thread
requests resources that are unavailable, it enters a waiting state. Sometimes, a waiting thread remains stuck
because the requested resources are held by other waiting threads, resulting in a deadlock. Deadlock occurs
when every process in a set is waiting for an event caused only by another process in the same set.

A real-world example of deadlock comes from a Kansas law: “When two trains approach each other at a
crossing, both must come to a full stop and neither can start moving until the other has passed.”

To address deadlocks, application developers and operating-system programmers can employ prevention
techniques. While some applications can identify potential deadlocks, operating systems typically lack built-in
deadlock-prevention features. It remains the responsibility of programmers to design deadlock-free programs.
As demand for increased concurrency and parallelism grows on multicore systems, dealing with deadlock issues
becomes more challenging.

System Model

1. System Resources and Types:


 A system has finite resources distributed among competing threads.
 Resources can be partitioned into types (or classes), each with identical instances.
 Examples of resource types include CPU cycles, files, and I/O devices (e.g., network interfaces,
DVD drives).
 If a system has four CPUs, the CPU resource type has four instances; similarly, the network resource type may have two instances.
2. Resource Allocation and Requests:
 When a thread requests an instance of a resource type, any instance of that type should satisfy the
request.
 If instances do not behave identically, the resource type classes are not properly defined.
3. Synchronization Tools and Deadlock:
 Mutex locks and semaphores are common synchronization tools.
 They can lead to deadlocks, especially on contemporary computer systems.
 Locks are associated with specific data structures (e.g., protecting access to queues or linked
lists).
4. Kernel Resources and Deadlocks:
 Threads may use resources from other processes via inter-process communication.
 Such resource use can also result in deadlocks, but it’s not the kernel’s concern.
5. Resource Usage by Threads:
 Threads must request and release resources.
 A thread can request as many resources as needed for its task.
 The total requested resources cannot exceed the system’s available resources.

Under the normal mode of operation, a thread may utilize a resource in only the following sequence:

1. Request:
o When a process or thread requires access to a resource (such as CPU time, memory, or I/O devices), it makes a request for that resource.
o The request indicates that the process needs to utilize the resource to perform its designated task.
o For example, a process requesting access to a file or a network interface is making a resource request.
2. Use:
o After obtaining permission (i.e., when the requested resource becomes available), the
process uses the resource.
o During the usage phase, the process performs operations or computations using the resource.
o For instance, a process using CPU cycles to execute instructions or reading data from a file is in
the usage state.
3. Release:
o Once the process completes its work with the resource, it releases the resource.
o Releasing a resource means making it available for other processes to use.
o For example, when a process finishes reading from a file, it releases the file resource.

Deadlock Characterization
 Deadlocks can arise due to improper resource management, leading to situations where processes cannot
proceed. Remember the necessary conditions for deadlock:

1. Mutual Exclusion: Resources are non-shareable (only one process can use them at a time).
2. Hold and Wait: A process holds at least one resource while waiting for others.
3. No Preemption: Resources cannot be forcibly taken from a process unless voluntarily released.
4. Circular Wait: A set of processes waits for each other in a circular manner.

Banker’s Algorithm
 Ensure safe state transitions by checking if resource allocation leads to a safe state.
 Requires prior knowledge of resource needs.
 Processes request resources incrementally, and the system checks if granting the request will lead to a
safe state.
 A well-known approach for deadlock avoidance.

Resource-allocation Graph
 A graphical representation that helps detect whether a system is in a deadlock state. It provides a visual depiction of the resource allocation and resource requests among processes.
Components of a Resource Allocation Graph:
 Vertices: Represent processes (or threads) and resources.
 Edges: Represent resource requests or resource allocations.
 An edge from a process to a resource indicates that the process has requested that resource.
 An edge from a resource to a process indicates that the resource has been allocated to that process.

Deadlock Prevention

Mutual Exclusion
Deadlock conditions such as mutual exclusion, hold and wait, no preemption, and circular wait are discussed.
Prevention strategies involve ensuring that at least one of these conditions cannot hold. Mutual exclusion
necessitates having at least one nonsharable resource to avert deadlock. However, certain resources like mutex
locks cannot practically be denied mutual exclusion to prevent deadlock.

Hold and Wait


Strategies to prevent hold and wait include ensuring that a thread requesting a resource doesn't hold others.
Protocols such as requesting and allocating all resources before execution or requesting resources only when
none are held are discussed. However, these approaches may lead to low resource utilization and potential
starvation.

No Preemption
Avoiding deadlock involves preventing preemption of already allocated resources. Protocols entail preempting
resources if a thread must wait for a new resource or preempting from waiting threads if necessary. This strategy
is applicable to resources with easily saved/restored states, like CPU registers and database transactions.

Circular Wait
Deadlock prevention options for circular wait are generally impractical. However, the circular-wait condition
can be invalidated by imposing a total ordering of resource types. Threads can then request resources in
increasing order of enumeration, ensuring deadlock cannot occur.
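
A minimal sketch of the total-ordering idea: if every thread that needs both locks acquires them in the same globally agreed order, a cycle of waiting threads cannot form. The two locks and the transfer() function are illustrative.

#include <pthread.h>

/* Global lock order: lock_a always before lock_b. */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Every thread acquires locks in the same order, so circular wait
 * cannot occur. */
void transfer(void) {
    pthread_mutex_lock(&lock_a);    /* always first  */
    pthread_mutex_lock(&lock_b);    /* always second */
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
}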

Deadlock Avoidance
Deadlock avoidance necessitates additional information on resource requests to make decisions. Algorithms
dynamically analyze resource-allocation states to prevent circular waits. A basic model involves each thread
declaring maximum resource needs, enabling construction of algorithms to prevent deadlock.

 Safe State
The system is in a safe state if resources can be allocated to each thread without deadlock. A safe
sequence of threads ensures resource requests can be satisfied without deadlock. Unsafe states may lead
to deadlock, but not all unsafe states result in deadlock. Algorithms aim to keep the system in a safe
state, granting requests only if they don't lead to deadlock.

(Figure: safe, unsafe, and deadlock state spaces; the deadlock region lies inside the unsafe region.)

 Resource-Allocation-Graph Algorithm
A variant of the resource-allocation graph for deadlock avoidance in systems with one instance of each
resource type is introduced. Claim edges indicating potential future resource requests are utilized.
Resources must be claimed in advance to prevent deadlock.

- Algorithm Description:
When a thread requests a resource, it can be granted only if it doesn't create a cycle in the graph. Cycle
detection ensures the system remains in a safe state. If no cycle is detected, resource allocation leaves
the system safe; otherwise, the thread must wait.

 Banker’s Algorithm
The Banker’s algorithm is applicable to systems with multiple instances of each resource type. Threads
must declare maximum resource needs, and the system determines if resource allocation will leave it in
a safe state.
- Data Structures:
Matrices such as Available, Max, Allocation, and Need define the resource-allocation system state.
These matrices vary in size and value over time.
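
A compact C sketch of the safety check at the heart of the banker's algorithm, using illustrative sizes of 5 threads and 3 resource types: it repeatedly looks for a thread whose remaining need fits within the work vector, pretends that thread finishes and reclaims its allocation, and reports the state safe only if every thread can finish this way.

#include <stdbool.h>
#include <string.h>

#define N 5   /* threads   */
#define M 3   /* resources */

bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = {false};
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                 /* thread i can finish; reclaim */
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress) return false;       /* no thread can proceed: unsafe */
    }
    return true;
}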
Deadlock Detection
This chapter discusses the consequences of not employing deadlock-prevention or deadlock-avoidance
algorithms. It presents the requirements for deadlock detection and recovery in systems.

1. Single Instance of Each Resource Type:


A deadlock-detection algorithm is defined using a wait-for graph derived from the resource-allocation
graph. The maintenance of the wait-for graph and how deadlock is detected are described. The overhead
and potential losses involved in deadlock detection are highlighted, along with the use of the BCC
toolkit for deadlock detection in Pthreads mutex locks.
2. Several Instances of a Resource Type:
A deadlock-detection algorithm applicable to systems with multiple instances of each resource type is
introduced. Data structures and the detection algorithm, similar to the banker’s algorithm, are described.
The rationale behind reclaiming resources and the optimistic assumption made in the algorithm are
explained. An illustrative example demonstrates the operation of the algorithm.
3. Detection-Algorithm Usage:
This section discusses when to invoke the deadlock-detection algorithm based on the frequency of
deadlocks and the number of affected threads. Options for detecting deadlocks at defined intervals or
upon resource request are explored. The implications of invoking the detection algorithm at arbitrary
points in time are considered.

Recovery from Deadlock


Options for recovering from deadlocks, including manual intervention and automatic recovery, are explored.
Two options for breaking deadlocks—aborting processes/threads and resource preemption—are presented.

1. Process and Thread Termination:


Methods for eliminating deadlocks by aborting processes or threads are discussed. The costs and
challenges associated with process termination, including data consistency and resource availability, are
considered. Total abortion and partial abortion methods, along with their respective overheads, are
described.
2. Resource Preemption:
This section explains how deadlocks can be resolved by preempting resources from processes. Issues
such as victim selection, rollback, and prevention of starvation are addressed. Challenges in ensuring
fair resource allocation during preemption are discussed.

Multiple Choice:
1. What is a necessary condition for deadlock to occur?
o A) Mutual exclusion
o B) Hold and wait
o C) No preemption
o D) Circular wait
o Answer: D) Circular wait

2. Which resource management approach aims to prevent deadlock by eliminating one of the necessary conditions?
o A) Deadlock avoidance
o B) Deadlock detection
o C) Deadlock prevention
o D) Deadlock ignorance
o Answer: C) Deadlock prevention
3. In a Resource Allocation Graph (RAG), what does an edge from a process to a resource represent?
o A) The process has requested the resource
o B) The resource has been allocated to the process
o C) The process owns the resource
o D) The resource is unavailable
o Answer: A) The process has requested the resource

True or False:
1. True or False: Deadlocks can occur when processes compete for resources, leading to situations where
none of them can proceed.
o True
2. True or False: The Banker’s Algorithm is used for deadlock prevention by ensuring safe state transitions.
o True
3. True or False: Deadlock detection algorithms periodically check the system state to identify deadlocks.
o True

Identification:
1. Identify the necessary conditions for deadlock.
o Answer: The necessary conditions for deadlock are mutual exclusion, hold and wait, no
preemption, and circular wait.
2. Identify one method for handling deadlocks other than deadlock prevention or avoidance.
o Answer: Deadlock detection and recovery.
3. Identify the graphical representation used to detect deadlock by analyzing resource requests and allocations.
o Answer: Resource Allocation Graph.

CHAPTER 9: MAIN MEMORY

This chapter discusses main memory management, which improves CPU utilization and computer response speed by sharing memory among processes. It explores various memory management approaches, from primitive bare-machine schemes to paging strategies, along with their advantages and disadvantages. The choice depends on hardware design, and many systems integrate hardware and operating-system memory management.

CHAPTER OBJECTIVES

 Explain the difference between a logical and a physical address and the role of the memory
management unit (MMU) in translating addresses.
 Apply first-, best-, and worst-fit strategies for allocating memory contiguously.
 Explain the distinction between internal and external fragmentation.
 Translate logical to physical addresses in a paging system that includes a translation look-aside
buffer (TLB).
 Describe hierarchical paging, hashed paging, and inverted page tables.
 Describe address translation for IA-32, x86-64, and ARMv8 architectures.

9.1 Background
This module delves into the fundamentals of memory management in computer systems, highlighting the role of
memory as an array of bytes with each byte having its own address. It explains how CPUs fetch instructions
from memory, including fetching and storing operands, and outlines the typical instruction-execution cycle. It
emphasizes that the memory unit only perceives a stream of memory addresses and is unaware of their origin or
purpose within the program. The module concludes with a discussion on dynamic linking and shared libraries.

9.1.1 Basic Hardware


The CPU interacts with main memory and registers, which provide rapid access within one clock cycle.
Memory bus access can cause processor stalling, so systems use hardware-managed cache memory to accelerate
memory access without requiring intervention from the operating system. Hardware also enforces protection
mechanisms to shield the operating system from user processes and prevent performance penalties associated
with operating system intervention in memory access.

Figure 9.1 A base and a limit register define a logical address space.

The CPU uses specialized registers, a base register and a limit register, to separate and execute processes. The
base register indicates the starting point of a process's memory area, while the limit register defines the
maximum memory accessible to the process. The CPU scrutinizes every memory access in user mode against
these registers, alerting the operating system if a process attempts to access memory beyond its allotted range.
This prevents processes from maliciously altering each other's or the operating system's memory. Only the
kernel mode operating system can configure these registers, ensuring user programs cannot modify memory
boundaries. This control is crucial for the effective management of memory, requiring unrestricted access to all
memory. This allows the operating system to manage user programs, handle errors, and perform essential tasks
like process switching in multi-process systems.

Figure 9.2 Hardware address protection with base and limit registers.
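
In software terms, the hardware check of Figure 9.2 amounts to the following comparison on every user-mode access (a sketch only; the real check is performed by circuitry, and a failure raises a trap to the operating system rather than returning a value):

#include <stdbool.h>

/* Legal user-mode access: base <= addr < base + limit. */
bool access_ok(unsigned addr, unsigned base, unsigned limit) {
    return addr >= base && addr < base + limit;
}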

9.1.2 Address Binding

Address binding maps the addresses a program uses onto physical memory. Addresses in a source program are symbolic (such as variable names); a compiler typically binds these symbolic addresses to relocatable addresses, and the linker or loader in turn binds relocatable addresses to absolute addresses. The binding of instructions and data to memory addresses can be done at any of three stages. At compile time, if the process's memory location is known in advance, absolute code can be generated, but the program must be recompiled if that location changes. At load time, the compiler generates relocatable code and final binding is delayed until the program is loaded into memory. At execution time, binding is delayed until run time, so the process can be moved in memory during execution; this scheme requires hardware support, such as the relocation register described in the next section.

9.1.3 Logical Versus Physical Address Space

The CPU generates a logical address, while the memory unit sees the physical address, which is loaded into the
memory-address register.

Figure 9.5 Dynamic relocation using a relocation register.

The concept of a logical address space connected to a separate physical address space is crucial for effective
memory management.

9.1.4 Dynamic Loading

Dynamic loading is a memory-management technique that loads a routine into memory only when it is called, rather than loading the entire program at once. Routines are kept on disk in a relocatable load format; when one routine calls another, the caller first checks whether the called routine is already in memory. If not, the relocatable linking loader brings it in and updates the program's address tables. Dynamic loading improves memory-space utilization, especially for programs with extensive codebases in which large portions, such as error-handling routines, are rarely executed.

9.1.5 Dynamic Linking and Shared Libraries

Dynamic linking is a technique in which the linking of system libraries to user programs is deferred until run time, reducing memory and disk usage. It is commonly used in Windows and Linux systems. Dynamically linked libraries (DLLs), also known as shared libraries, can be updated with bug fixes or new versions, with version information used to ensure compatibility between programs and libraries. Unlike dynamic loading, dynamic linking generally requires assistance from the operating system, which must manage memory protection and allow multiple processes to share the same library pages in memory.

9.2 Contiguous Memory Allocation

Contiguous memory allocation is a memory-management technique in which main memory is divided into two partitions: one for the operating system and one for user processes. The operating system's location in memory depends on factors such as the location of the interrupt vector. The goal is to allocate memory efficiently to multiple user processes, each occupying a single contiguous section of memory. This method simplifies memory management but can lead to fragmentation. Memory protection must also be addressed, so that each process has its own memory space and any access outside its allocated range is caught as an error.

9.2.1 Memory Protection

Memory protection in this scheme ensures that a process can access only its own memory, using a relocation register together with a limit register. The memory-management unit (MMU) maps each logical address by first checking that it is smaller than the limit register and then adding the value of the relocation register. When the CPU scheduler selects a process, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch, protecting both the operating system and other user programs from modification.
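
A sketch of this translation follows (illustrative only; the function and register names are hypothetical). The check against the limit register happens before relocation:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative MMU mapping with relocation and limit registers:
 * legal logical addresses run from 0 to limit - 1, and the physical
 * address is formed by adding the relocation register's value. */
uint32_t mmu_translate(uint32_t logical, uint32_t relocation, uint32_t limit) {
    if (logical >= limit) {               /* out of range: trap to the OS */
        fprintf(stderr, "trap: addressing error\n");
        exit(EXIT_FAILURE);
    }
    return logical + relocation;          /* relocated physical address */
}
```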

9.2.2 Memory Allocation

Memory allocation assigns processes to memory partitions; initially, all available memory is one large hole. Processes are allocated space according to their requirements, and when they terminate, their memory is returned to the set of free holes. Strategies such as first-fit, best-fit, and worst-fit are used to allocate memory while minimizing fragmentation: first-fit allocates the first hole that is big enough, best-fit allocates the smallest hole that is big enough, and worst-fit allocates the largest hole. Simulations show that first-fit and best-fit generally outperform worst-fit in both time and storage utilization; neither is definitively superior in storage utilization, but first-fit is often faster.
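
As an illustration (a sketch only; the free-list layout and names are invented for the example), first-fit can be expressed as a single scan of the free list:

```c
#include <stddef.h>

/* Hypothetical descriptor for one free hole in memory. */
struct hole {
    size_t start;        /* starting address of the hole */
    size_t size;         /* size of the hole in bytes    */
    struct hole *next;   /* next hole in the free list   */
};

/* First-fit: return the first hole large enough for the request.
 * Best-fit would instead scan the entire list and remember the
 * smallest adequate hole; worst-fit would remember the largest. */
struct hole *first_fit(struct hole *free_list, size_t request) {
    for (struct hole *h = free_list; h != NULL; h = h->next)
        if (h->size >= request)
            return h;    /* caller splits the hole and updates the list */
    return NULL;         /* allocation fails: no hole is big enough */
}
```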

9.2.3 Fragmentation

Memory fragmentation occurs as memory is broken into small, scattered pieces over time. External fragmentation arises when enough total free memory exists to satisfy a request but the free memory is not contiguous; internal fragmentation arises when a process is allocated slightly more memory than it requested, leaving unused space inside its partition. External fragmentation can be addressed by compaction, which moves processes to consolidate free memory into a single large block, or by permitting a process's logical address space to be noncontiguous, as paging does; either way, efficient memory management is essential.

9.3 Paging

Paging is a memory-management scheme that permits a process's physical address space to be noncontiguous, which avoids external fragmentation. It is widely used in operating systems for its efficiency and is implemented through cooperation between the operating system and the hardware.

9.3.1 Basic Method

Paging divides physical memory into fixed-sized blocks called frames and logical memory into blocks of the same size called pages. This separation allows a process to have a logical address space larger than physical memory. Every CPU-generated address consists of a page number and an offset, which keeps address translation simple. Paging gives the programmer the illusion of a single contiguous memory space, even though the pages are scattered throughout physical memory. The operating system controls the address-translation hardware that converts logical addresses to physical addresses, and it manages physical memory through a frame table; this bookkeeping can increase context-switch time.

The page number is used as an index into a per-process page table, which stores the base address of each page's frame in physical memory; the page offset locates the byte within that frame. Combining the frame base address with the offset yields the physical memory address: translation takes the page number from the logical address, looks up the corresponding frame number in the page table, and substitutes the frame number for the page number. Because the page size is a power of 2, the split is trivial: the high-order bits of a logical address specify the page number and the low-order bits specify the page offset.
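
A sketch of this split-and-substitute translation follows, assuming 4 KB pages, a 32-bit logical address, and a simple flat page table indexed by page number (all of these are assumptions for the example, not the module's):

```c
#include <stdint.h>

#define OFFSET_BITS 12                          /* assume 4 KB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* Translate a logical address: the high-order bits index the page
 * table, the low-order bits pass through unchanged as the offset. */
uint32_t translate(uint32_t logical, const uint32_t *page_table) {
    uint32_t page   = logical >> OFFSET_BITS;   /* page number  */
    uint32_t offset = logical &  OFFSET_MASK;   /* page offset  */
    uint32_t frame  = page_table[page];         /* frame number */
    return (frame << OFFSET_BITS) | offset;     /* physical address */
}
```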

9.3.2 Hardware Support

Each process in a computer system has its own page table, which maps its logical addresses to physical addresses. The page table is stored in memory, and a register called the page-table base register (PTBR) points to the current process's page table. When a process is selected for execution, the CPU scheduler updates the PTBR so that logical addresses are translated against the correct table. Some systems keep small page tables in high-speed hardware registers for faster access; modern systems with large page tables keep the table in main memory and store only the PTBR in a register.

9.3.2.1 Translation Look-Aside Buffer

Keeping the page table in main memory means that every data access requires two memory accesses, one for the page-table entry and one for the data itself, which is an unacceptable slowdown. To speed up translation, a translation look-aside buffer (TLB) is used: a small, fast hardware cache that stores a subset of page-table entries and is checked whenever a logical address is generated. If the TLB is full, a replacement policy selects an entry for eviction. Some TLBs allow certain entries to be "wired down" for critical kernel code, and some store an address-space identifier (ASID) so that entries from different processes can coexist.

The hit ratio is the percentage of times a page number is found in the TLB. Suppose a memory access takes 10 nanoseconds: on a TLB hit, a mapped-memory access takes 10 ns, while on a miss the page table must be consulted first, for 20 ns in total. An 80-percent hit ratio therefore gives an effective access time of 0.80 × 10 + 0.20 × 20 = 12 ns, a 20-percent slowdown, while a 99-percent hit ratio gives 0.99 × 10 + 0.01 × 20 = 10.1 ns, only a 1-percent slowdown. TLBs are crucial to memory performance in modern CPUs, which may have multiple TLB levels; a miss in one level forces the CPU to check higher levels or access memory directly, lengthening the access. Understanding TLBs is important for operating-system designers, since changes in TLB design between CPU generations can affect both the performance and the design of operating systems.
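
The effective-access-time arithmetic above can be checked with a few lines of C (assuming, as above, a 10 ns memory access, one access on a hit and two on a miss):

```c
#include <stdio.h>

int main(void) {
    const double mem_ns = 10.0;               /* one memory access */
    const double ratios[] = { 0.80, 0.99 };   /* TLB hit ratios */
    for (int i = 0; i < 2; i++) {
        /* hit: one access; miss: page-table access plus data access */
        double eat = ratios[i] * mem_ns + (1.0 - ratios[i]) * 2.0 * mem_ns;
        printf("hit ratio %.2f -> effective access time %.1f ns\n",
               ratios[i], eat);
    }
    return 0;   /* prints 12.0 ns and 10.1 ns */
}
```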

9.3.3 Protection

In a paged memory environment, memory protection is achieved through protection bits kept in the page table: each entry indicates, for example, whether its page is read-write or read-only. When the page table is consulted to find the frame number, these bits are checked at the same time, so an attempt to write to a read-only page causes a hardware trap to the operating system, signaling a memory-protection violation. The mechanism can be extended to more nuanced permission levels, such as read-only, read-write, or execute-only, with any illegal access trapping to the operating system. Each page-table entry also includes a valid-invalid bit, managed by the operating system for each page: valid means the page is in the process's logical address space, while invalid means it is not, so accesses to invalid pages are trapped.

Figure 9.13 Valid (v) or invalid (i) bit in a page table.

9.3.4 Shared Pages

Paging enables the sharing of common code among multiple processes, which is especially valuable in environments with many processes. Sharing is possible for reentrant code, code that does not modify itself during execution, so multiple processes can execute the same physical pages simultaneously without interfering with one another. This reduces memory usage significantly: only one copy of libc, for example, need be kept in memory and mapped into each process's address space. The same approach applies to other commonly used programs, such as compilers, window systems, and database systems, yielding further memory savings.

9.4 Structure of the Page Table

This section delves into common techniques for structuring page tables, such as hierarchical paging, hashed
page tables, and inverted page tables.

9.4.1 Hierarchical Paging

Modern computer systems often have a large logical address space, which makes the page table itself excessively large. For example, with a 32-bit logical address space and 4 KB pages, the page table alone may occupy up to 4 MB of physical memory per process. To address this, a two-level paging algorithm pages the page table itself: the 20-bit page number is divided into a 10-bit outer index (p1) and a 10-bit inner index (p2), followed by the 12-bit page offset. Allocating the page table in smaller pieces avoids the need for one large contiguous block of main memory.

In this address-translation method, also known as a forward-mapped page table, translation proceeds by indexing into the outer page table and then using the displacement within the page of the inner page table. For 64-bit logical address spaces, however, a two-level scheme is no longer appropriate: even if each inner page table is one page long, containing 2^10 4-byte entries, the outer page table remains enormous, so additional levels of paging are required.
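
A sketch of the forward-mapped lookup under the same assumptions as above (32-bit addresses, 4 KB pages, a 10/10/12 split; the table layout and names are hypothetical):

```c
#include <stdint.h>

#define OFF_BITS   12   /* 12-bit page offset (4 KB pages assumed) */
#define INNER_BITS 10   /* 10-bit inner index p2 */

/* Split a 32-bit logical address into outer index, inner index, offset. */
static uint32_t p1(uint32_t a)  { return a >> (OFF_BITS + INNER_BITS); }
static uint32_t p2(uint32_t a)  { return (a >> OFF_BITS) & 0x3FFu; }
static uint32_t off(uint32_t a) { return a & 0xFFFu; }

/* Forward-mapped walk: the outer table yields an inner page table,
 * whose entry holds the frame number for the page. */
uint32_t lookup(uint32_t addr, uint32_t **outer_table) {
    uint32_t *inner = outer_table[p1(addr)];
    uint32_t frame  = inner[p2(addr)];
    return (frame << OFF_BITS) | off(addr);
}
```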

9.4.2 Hashed Page Tables

A hashed page table is a common method for handling address spaces larger than 32 bits; the virtual page number is hashed to select a table slot. Each slot anchors a linked list of elements that hash to the same location, where each element holds a virtual page number (field 1), the mapped page frame (field 2), and a pointer to the next element. The lookup algorithm compares the virtual page number with field 1 of each element in the list; on a match, the corresponding page frame in field 2 is combined with the page offset to form the desired physical address.
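
A minimal sketch of this lookup follows, with a hypothetical entry layout and table size:

```c
#include <stdint.h>
#include <stddef.h>

#define TABLE_SLOTS 4096u   /* assumed table size; real sizes vary */

/* Hypothetical hashed page-table element: field 1 is the virtual page
 * number, field 2 the mapped frame, plus a chain pointer for elements
 * that hashed to the same slot. */
struct hpt_entry {
    uint64_t vpn;
    uint64_t frame;
    struct hpt_entry *next;
};

/* Walk the chain at the hashed slot, comparing virtual page numbers;
 * on a match, the frame combines with the offset to form the physical
 * address. Returns -1 when the page is not mapped. */
int64_t hpt_lookup(struct hpt_entry **table, uint64_t vpn) {
    for (struct hpt_entry *e = table[vpn % TABLE_SLOTS]; e != NULL; e = e->next)
        if (e->vpn == vpn)
            return (int64_t)e->frame;
    return -1;
}
```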

9.4.3 Inverted Page Tables

A conventional page table has one entry for each virtual page a process can use, which the operating system translates into physical memory addresses; with large address spaces, such tables can consume large amounts of physical memory just for bookkeeping. An inverted page table addresses this by keeping one entry for each real page (frame) of memory, recording the virtual address of the page stored in that frame along with information about the process that owns it. As a result, only one page table exists in the system, with one entry for each page of physical memory. Because the table is shared among all processes, inverted page tables often require an address-space identifier in each entry.

Inverted page tables map physical memory to the address spaces that use it, ensuring that a process's logical page is matched to the physical page frame that holds it. IBM was the first major company to use inverted page tables, starting with the IBM System/38 and continuing through the RS/6000 and current IBM Power CPUs. For the IBM RT, each virtual address consists of a triple <process-id, page-number, offset>, and each inverted page-table entry is a pair <process-id, page-number>, where the process-id assumes the role of the address-space identifier. When a memory reference occurs, the <process-id, page-number> part of the virtual address is presented to the memory subsystem, and the inverted page table is searched for a match; if the match is found at entry i, the physical address is <i, offset>, and if no match is found, an illegal address access has been attempted. This scheme decreases the amount of memory needed to store page tables but increases the time needed to search the table when a page reference occurs.
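
The search can be sketched as a linear scan in which the matching index is itself the frame number (a simplification; real implementations typically add a hash table to bound the search):

```c
#include <stdint.h>

/* Hypothetical inverted page-table entry: one per physical frame. */
struct ipt_entry {
    uint32_t pid;   /* process-id, acting as address-space identifier */
    uint32_t vpn;   /* virtual page number stored in this frame */
};

/* Return the frame number i whose entry matches <pid, vpn>,
 * or -1 to signal an illegal address access. */
int64_t ipt_search(const struct ipt_entry *table, uint32_t nframes,
                   uint32_t pid, uint32_t vpn) {
    for (uint32_t i = 0; i < nframes; i++)
        if (table[i].pid == pid && table[i].vpn == vpn)
            return (int64_t)i;
    return -1;
}
```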

9.4.4 Oracle SPARC Solaris

Solaris, a 64-bit operating system, uses multiple levels of page tables to map its virtual memory without consuming all of physical memory. Two hash tables are used, one for the kernel and one for all user processes, each mapping virtual to physical memory addresses. Each hash-table entry represents a contiguous area of mapped virtual memory, which is more efficient than a separate hash-table entry for each page. The CPU implements a translation look-aside buffer (TLB) that holds translation table entries (TTEs) for fast hardware lookups, and a cache of TTEs is kept in a translation storage buffer (TSB).

9.5 Swapping

Process instructions and data must be in memory for execution, but a process can be temporarily swapped to a
backing store and then back into memory, allowing the total physical address space of all processes to exceed
the system's real physical memory.

Multiple Choice Questions

1. Which register is used to indicate the starting point of a process's memory area?

A) Page Table Base Register (PTBR)

B) Base Register

C) Limit Register

D) Translation Look-Aside Buffer (TLB)

2. What is the purpose of dynamic loading?

A) To load the entire program into memory at once

B) To load routines into memory only when needed

C) To relocate the program in memory when it exceeds its limit

D) To allocate memory to multiple user processes

3. Which memory management technique divides physical memory into fixed frames and logical memory into
equally sized pages?

A) Contiguous Memory Allocation

B) Swapping

C) Paging

D) Hashed Page Tables

True or False Questions

4. Dynamic linking reduces memory usage by linking system libraries to user programs at compile time.
Answer: False

5. Inverted page tables have one entry for each virtual address.
Answer: False

6. Memory protection is achieved through protection bits in the page table.


Answer: True

Identification Questions

7. What registers are used by the CPU to separate and execute processes?
Answer: Base Register and Limit Register

8. What memory management technique loads routines into memory only when needed?
Answer: Dynamic Loading

9. What is the purpose of the Translation Look-Aside Buffer (TLB) in memory management?
Answer: TLB caches address translations for faster memory access.

CHAPTER 16: SECURITY AND PROTECTION

Both protection and security are vital to computer systems. We distinguish between these two concepts in the
following way: Security is a measure of confidence that the integrity of a system and its data will be preserved.
Protection is the set of mechanisms that control the access of processes and users to the resources defined by a
computer system. We focus on security in this chapter and address protection in Chapter 17.
Security involves guarding computer resources against unauthorized access, malicious destruction or alteration,
and accidental introduction of inconsistency. Computer resources include the information stored in the system
(both data and code), as well as the CPU, memory, secondary storage, tertiary storage, and networking that
compose the computer facility. In this chapter, we start by examining ways in which resources may be
accidentally or purposely misused. We then explore a key security enabler— cryptography.

16.1 THE SECURITY PROBLEM


Ensuring computer system security is vital across various applications, particularly in systems housing sensitive
data like financial records or corporate operations information. Attackers may target such systems for theft,
fraud, or disruption. Security breaches, whether intentional or accidental, can severely impact an organization's
functionality and reputation.

Types of Security Violations:


- Breach of Confidentiality: Unauthorized access to data, facilitating theft or exploitation.
- Breach of Integrity: Unauthorized modification of data, leading to liability issues or tampering with
critical applications.
- Breach of Availability: Unauthorized destruction of data, disrupting services or causing damage.
- Theft of Service: Unauthorized use of resources, such as installing a daemon for file sharing.
- Denial of Service (DoS): Preventing legitimate system use, whether intentional or accidental.

Common Attack Methods:


- Masquerading: Pretending to be someone else to breach authentication.
- Replay Attack: Fraudulent repetition of a valid data transmission, often combined with message
modification.
- Man-in-the-Middle Attack: Intercepting and altering communication between sender and receiver.
- Privilege Escalation: Attaining higher privileges than intended, potentially through masquerading or
message modification.

Levels of Security Measures:


1. Physical: Secure physical access to computer systems and terminals.
2. Network: Protect against unauthorized access through networked connections.
3. Operating System: Address vulnerabilities within the operating system and its services.

While absolute protection from malicious abuse is impossible, deterrents and detection measures can minimize
security breaches. Countermeasures include physical security measures, network protection, and addressing
vulnerabilities within the operating system. Ultimately, a layered approach to security is essential for mitigating
risks and protecting valuable assets.
16.2 PROGRAM THREATS
Malware, or malicious software, includes programs designed to exploit, disable, or damage computer systems.
One common type is the Trojan horse, which pretends to be legitimate but carries out harmful actions once installed. Spyware secretly monitors user activities and sends data to remote servers. Ransomware encrypts data
and demands payment for decryption.

Malware thrives when systems violate the principle of least privilege, granting excessive user or process
privileges. This allows malware to spread, evade detection, and exploit vulnerabilities. Design flaws in
operating systems and software contribute to these breaches, highlighting the need for strict access control and
robust security measures.

Malware authors may exploit trap doors or back doors intentionally left in software, providing unauthorized
access. Rigorous security testing and code review processes are crucial to detect and mitigate such threats.

Malware poses a pervasive and evolving threat to computer security, requiring proactive measures to prevent
exploitation and minimize damage.

16.2.1 Code Injection


Code injection poses a serious threat to software security, allowing attackers to add or modify executable code.
Vulnerabilities in programs, often stemming from insecure programming practices in languages like C or C++,
can enable code-injection attacks by allowing direct memory access through pointers.

One common form of code injection is a buffer overflow, where a program writes beyond the bounds of a
buffer, potentially corrupting adjacent memory. The consequences of a buffer overflow vary depending on
factors such as the extent of the overflow and the program's memory layout. In some cases, overflows may go
unnoticed, while in others, they can lead to program crashes or enable attackers to execute arbitrary code.

Developers can mitigate buffer overflow risks by using safer functions like `strncpy()` instead of vulnerable
ones like `strcpy()`, and by implementing bounds checking. However, such precautions are often overlooked,
leaving programs vulnerable to exploitation.
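
The contrast can be made concrete with a small sketch (illustrative only; the buffer size and function names are invented for the example):

```c
#include <stdio.h>
#include <string.h>

#define BUF_SIZE 16

/* Vulnerable: strcpy() copies until the terminating NUL byte, so any
 * input longer than BUF_SIZE - 1 overruns buf and corrupts adjacent
 * memory, potentially including a saved return address. */
void unsafe_copy(const char *input) {
    char buf[BUF_SIZE];
    strcpy(buf, input);              /* no bounds check */
    printf("%s\n", buf);
}

/* Safer: bound the copy and terminate explicitly, since strncpy()
 * does not NUL-terminate when the source fills the buffer. */
void safe_copy(const char *input) {
    char buf[BUF_SIZE];
    strncpy(buf, input, BUF_SIZE - 1);
    buf[BUF_SIZE - 1] = '\0';
    printf("%s\n", buf);
}
```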

To execute code injection, attackers typically craft shellcode, small code segments that perform specific actions,
such as spawning a shell or establishing network connections. By overwriting return addresses or function
pointers with addresses pointing to shellcode, attackers can redirect program execution to their malicious code.

Shellcode can be obfuscated to evade detection, and techniques like NOP sleds (sequences of no-operation
instructions) can facilitate code execution even in the presence of alignment constraints.

While launching code-injection attacks may require programming skills, tools like shellcode compilers and
exploit kits make it accessible even to less experienced attackers, known as "script kiddies." Moreover, code-
injection attacks can bypass traditional security measures like firewalls and may go undetected within
communication protocols.

Buffer overflows are just one avenue for code injection; heap overflows and mismanagement of memory
buffers can also lead to exploitable vulnerabilities. Vigilance in secure coding practices and thorough testing are
essential to mitigate the risks posed by code injection.

16.2.2 Viruses and Worms

Viruses and worms represent significant threats to computer systems, capable of causing widespread damage
and disruption. Viruses are fragments of code embedded within legitimate programs, designed to self-replicate
and infect other programs. They can modify or destroy files, leading to system crashes and malfunctions.

Viruses are usually specific to particular architectures, operating systems, and applications, with PCs being particularly vulnerable due to their widespread use. UNIX and other multiuser systems are generally less susceptible to viruses because their executable programs are better protected from modification by the operating system.

Common vectors for virus transmission include spam emails, phishing attacks, and downloading infected files
from the internet. Viruses often exploit macros in programs like Microsoft Office to execute malicious actions,
such as formatting the hard drive.

When a virus reaches a target machine, a virus dropper inserts it into the system. Viruses can belong to various
categories, including file viruses, boot viruses, macro viruses, rootkits, and polymorphic viruses. Each category
has distinct characteristics and methods of propagation.

The proliferation of viruses has led to the development of sophisticated variants, including encrypted, stealth,
multipartite, and armored viruses. These variants aim to evade detection by antivirus software and complicate
disinfection efforts.

The existence of a computing monoculture, particularly in Microsoft products, raises concerns about the
widespread impact of virus attacks. Vulnerability information is traded on the dark web, increasing the value of
attacks that can target multiple systems within a monoculture.

Addressing the threat posed by viruses and worms requires robust security measures, including antivirus
software, regular system updates, and user education to mitigate risks associated with malicious code.

16.3 SYSTEM AND NETWORK THREATS


When connected to a network, systems face heightened security risks due to the potential for worldwide attacks.
Hackers exploit vulnerabilities in open operating systems, leaving behind tracks that are difficult to trace. They
often launch attacks from compromised systems, known as zombies, to conceal their identity.

The widespread use of broadband and WiFi has made tracking attackers more challenging. Even simple desktop
machines can become valuable targets due to their bandwidth or network access. Wireless networks enable
attackers to remain anonymous or target unprotected networks through "WarDriving."

Attacking Network Traffic


Hackers have various options for network attacks. They can intercept network traffic (sniffing), masquerade as
legitimate parties (spoofing), or become a man-in-the-middle, intercepting and possibly modifying transactions
between peers.

Denial of Service (DoS)


Denial-of-service attacks disrupt legitimate system use by overwhelming resources or disrupting the network.
They are challenging to prevent and can be launched from multiple sites simultaneously (DDoS). Some
attackers demand payment to halt these attacks.

Port scanning is a reconnaissance technique used by hackers to identify system vulnerabilities. Automated
tools like nmap explore networks, identify running services, and even determine the host operating system,
aiding attackers in exploiting known vulnerabilities.

Network intrusion detection systems continually evolve to detect and mitigate port-scanning techniques,
reflecting the ongoing battle between attackers and defenders.

16.4 CRYPTOGRAPHY AS A SECURITY TOOL


Cryptography serves as a fundamental security tool in safeguarding digital communications and data from
unauthorized access, manipulation, or interception. It relies on cryptographic algorithms and keys to encrypt
and decrypt messages, ensuring confidentiality, authenticity, and integrity. Through encryption, cryptography
secures information transmission by encoding messages in such a way that only authorized parties possessing
the corresponding decryption key can decipher them.

Encryption algorithms fall into two primary categories: symmetric and asymmetric. Symmetric encryption employs a single key for both encryption and decryption, exemplified by DES and AES. Asymmetric encryption employs distinct keys for the two operations, with RSA standing as a prominent example.

Authentication, serving to validate the identity of message senders, is elucidated as a complementary facet of
cryptography. Authentication algorithms utilize keys to generate authenticators for messages, thereby ensuring
that only legitimate senders can be duly verified.

Key distribution is underscored as a critical facet of cryptography, particularly salient in symmetric encryption
where shared access to the same key is requisite. Asymmetric encryption offers a paradigm shift by leveraging
public-private key pairs, albeit necessitating measures to authenticate public keys.

Cryptography is implemented within network protocols at various layers of the protocol stack. TLS (Transport Layer Security) is the quintessential cryptographic protocol for secure communication over the Internet; its key-exchange process and session-key generation mechanism play a pivotal role in ensuring confidentiality, authenticity, and integrity in online transactions.

16.5 USER AUTHENTICATION


User authentication is a critical aspect of computer security, ensuring that the system can verify the identity of
users before granting access to resources or sensitive information. Passwords are the most common method of
user authentication, where users provide a secret passphrase to prove their identity. However, passwords are
vulnerable to various attacks, including guessing, exposure, and illegal transfer. To enhance security, systems
implement measures such as password aging, password history, and one-time passwords.

One-time passwords provide an additional layer of security by generating unique passwords for each
authentication session, reducing the risk of password exposure. Biometric authentication, such as fingerprint
readers, offers a more robust method of user authentication by utilizing unique physical characteristics for identity verification. Multifactor authentication combines multiple authentication factors, such as passwords,
biometrics, and hardware tokens, to further strengthen security.

Secure hashing techniques, such as those used in UNIX systems, protect passwords by storing hashed values instead of plain-text passwords, making it computationally infeasible for attackers to recover passwords from the stored values. Salt values are added to passwords before hashing to defeat precomputed dictionary attacks and to ensure that identical passwords yield different hash values.
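
A sketch of salted verification in the UNIX style follows; it uses the POSIX crypt() function, which derives the salt from the prefix of the stored hash (on some systems crypt() is declared in <crypt.h> instead, and linking with -lcrypt may be required):

```c
#define _XOPEN_SOURCE 700
#include <unistd.h>    /* crypt() on many systems; others use <crypt.h> */
#include <string.h>

/* Only the salt and the hash are stored, never the password itself.
 * Hashing the attempt with the stored salt makes the results
 * comparable; different salts give identical passwords different
 * hashes. */
int check_password(const char *attempt, const char *stored_hash) {
    const char *hashed = crypt(attempt, stored_hash);
    return hashed != NULL && strcmp(hashed, stored_hash) == 0;
}
```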

16.6 IMPLEMENTING SECURITY DEFENSES


By applying appropriate layers of defense, we can keep systems safe from all but the most persistent attackers.
In summary, these layers may include the following:

 Educate users about safe computing — don’t attach devices of unknown origin to the computer, don’t
share passwords, use strong passwords, avoid falling for social engineering appeals, realize that an e-mail
is not necessarily a private communication, and so on.
 Educate users about how to prevent phishing attacks — don’t click on e-mail attachments or links from
unknown (or even known) senders; authenticate (for example, via a phone call) that a request is
legitimate.
 Use secure communication when possible.
 Physically protect computer hardware.
 Configure the operating system to minimize the attack surface; disable all unused services.
 Configure system daemons, privileged applications, and services to be as secure as possible.
 Use modern hardware and software, as they are likely to have up-to-date security features.
 Keep systems and applications up to date and patched.
 Only run applications from trusted sources (such as those that are code signed).

 Enable logging and auditing; review the logs periodically, or automate alerts.
 Install and use antivirus software on systems susceptible to viruses, and keep the software up to date.
 Use strong passwords and passphrases, and don’t record them where they could be found.
 Use intrusion detection, firewalling, and other network-based protection systems as appropriate.
 For important facilities, use periodic vulnerability assessments and other testing methods to test security
and response to incidents.
 Encrypt mass-storage devices, and consider encrypting important individual files as well.
 Have a security policy for important systems and facilities, and keep it up to date.

Multiple Choice Questions:


1. What is the primary distinction between security and protection in computer systems?
a) Security focuses on controlling access to resources, while protection ensures data integrity.
b) Security addresses vulnerabilities within the operating system, while protection guards against
unauthorized access.
c) Security is concerned with preserving system integrity and data, while protection controls access to
resources.

Answer: C) Security is concerned with preserving system integrity and data, while protection controls
access to resources.

2. Which of the following is NOT a type of security violation?


a) Unauthorized access to data
b) Unauthorized destruction of data
c) Unauthorized modification of system files
Answer: C) Unauthorized modification of system files

3. How do viruses primarily propagate?


a) Through network sniffing
b) Through Trojan horse programs
c) By self-replicating and infecting other programs
Answer: C) By self-replicating and infecting other programs

True or False Questions:


1. Malware authors may exploit intentionally left vulnerabilities in software to gain unauthorized access.
Answer: True

2. One-time passwords provide the same level of security as traditional passwords.


Answer: False

3. Denial-of-service attacks aim to steal sensitive data from a system.


Answer: False

Identification Questions:
1. Name one common type of code injection attack.
Answer: Buffer overflow attack

2. Name one common type of malware discussed in the module.


Answer: Trojan horse

3. What is the term for programs designed to exploit, disable, or damage computer systems?
Answer: Malware

CHAPTER 17: PROTECTION

17.1 Goals of Protection


Computer system protection, originally devised for multiprogramming operating systems to provide secure resource sharing, has become steadily more relevant as systems increasingly connect to vulnerable platforms such as the Internet. Protection techniques detect interface errors early, prevent contamination of a healthy subsystem by a failing one, and enforce resource-management policies. Separating policy (what will be done) from mechanism (how it will be done) allows the system to adapt to new policies without changing the underlying mechanisms.

17.2 Principles of Protection


The principle of least privilege is a crucial aspect of system protection, especially in operating-system design. It dictates that programs, users, and systems be granted only the minimum privileges necessary for their tasks, reducing the damage from both errors and malicious attacks; excessive privileges magnify the impact of an attack. A related idea is compartmentalization, which protects individual system components behind specific permissions and access restrictions. Such restrictions also help detect and analyze security breaches, and layering them, known as defense in depth, is essential against sophisticated attacks.

17.3 Protection Rings


The kernel, the trusted and privileged component of a modern operating system, requires hardware support to execute with higher privileges than user processes. This is achieved through protection rings, where each ring grants a subset of the available functionality. The system initializes at the highest privilege level, and special instructions, such as a system-call instruction, facilitate transitions between user and kernel modes. Processor traps and interrupts can also elevate execution to a more privileged ring, but entry remains restricted to predefined code paths.

Different processor architectures implement privilege separation. In Intel architectures, user mode code operates
in ring 3, while kernel mode code resides in ring 0, with access controlled by special register bits. With
virtualization, Intel introduced an additional ring (-1) for hypervisors, granting them more capabilities than
guest operating system kernels.

ARM processors initially had user and kernel modes (USR and SVC), but ARMv7 introduced TrustZone,
adding an additional ring for a trusted execution environment. TrustZone offers exclusive access to hardware-backed cryptographic features, enhancing security for sensitive information handling. Android extensively
utilizes TrustZone from Version 5.0 onwards.

Employing a trusted execution environment prevents attackers from accessing cryptographic keys if the kernel
is compromised, mitigating brute-force attacks. ARMv8 architecture further extends this model with four
exception levels (EL0 through EL3), with EL3 reserved for the most privileged secure monitor (TrustZone).
This setup allows running separate operating systems concurrently.

The secure monitor, operating at a higher execution level, is ideal for integrity checks on kernels, as seen in
Samsung's Realtime Kernel Protection for Android and Apple's WatchTower (KPP) for iOS.

17.4 Domain of Protection


Building on the concept of protection rings, a computer system can be organized into domains by treating it as a collection of processes and objects, including both hardware objects (e.g., CPU, memory, printers) and software objects (e.g., files, programs). Each object is uniquely named and accessed through well-defined operations, akin to abstract data types. Processes should access only authorized objects and, at any given time, only those necessary to complete their tasks. This principle, known as the need-to-know principle, limits
potential damage from faulty processes or attackers. For instance, when a process invokes a procedure or a
compiler, it should only access relevant variables or files, respectively. The comparison between the need-to-
know principle and the least privilege principle is drawn, with the former representing the policy and the latter
the mechanism for implementing it. For example, while need-to-know dictates the level of access a user should
have to a file, least privilege ensures that the operating system provides mechanisms to enforce these access
levels, such as read-only permissions.

17.4.1 Domain Structure
A process can operate within a protection domain, which specifies the resources it can access. Each domain
defines a set of objects and the types of operations that can be invoked on each object. Access rights are the
ability to execute an operation on an object, and domains may share access rights, as in Figure 17.4, which shows three domains, D1, D2, and D3. The association between a process and a domain can be static or dynamic.

If the association between process and domain is static, the set of available resources is fixed for the process's lifetime, yet a process may need write access in one phase of execution and only read access in another. A static domain must therefore carry the union of all rights ever needed, granting more than the minimum and violating the need-to-know principle, unless a mechanism is available to change the content of the domain so that it always reflects the minimum necessary access rights.

For dynamic protection domains, a mechanism is available to allow domain switching, enabling the process to
switch from one domain to another. The content of a domain can also be changed, or if the content cannot be
changed, a new domain can be created with the changed content and switched to when needed.

A domain can be realized in a variety of ways:


• Each user may be a domain. In this case, the set of objects that can be accessed depends on the identity
of the user. Domain switching occurs when the user is changed— generally when one user logs out and
another user logs in.
• Each process may be a domain. In this case, the set of objects that can be accessed depends on the
identity of the process. Domain switching occurs when one process sends a message to another process
and then waits for a response.
• Each procedure may be a domain. In this case, the set of objects that can be accessed corresponds to
the local variables defined within the procedure. Domain switching occurs when a procedure call is
made.

We discuss domain switching in greater detail in Section 17.5.


Consider the standard dual-mode model of operating system execution, distinguishing between kernel mode and
user mode. In kernel mode, processes can execute privileged instructions and control the system entirely, while
in user mode, processes are restricted to nonprivileged instructions and operate within predefined memory
spaces. This model serves to protect the operating system from user processes.
However, in a multiprogram operating system, two protection domains are insufficient, as users also need
protection from each other. Thus, a more complex protection scheme is necessary. The passage suggests
examining UNIX and Android operating systems to understand how they implement such schemes.

17.4.2 Example: UNIX


A challenge in UNIX is that certain privileged operations are restricted to the root user, which can hinder regular users from performing everyday tasks like changing passwords or setting scheduled jobs. The solution to this problem is the setuid bit: when enabled on an executable file, it allows whoever executes the file to temporarily assume the identity of the file's owner, typically root. This mechanism enables processes to perform privileged operations without requiring users to have root access.
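
As a small illustration (a sketch, not from the module), a setuid-root program sees different real and effective user IDs and should drop its extra privileges as soon as the privileged work is done:

```c
#include <unistd.h>
#include <stdio.h>

int main(void) {
    /* Under a setuid executable, the real UID is the invoking user's,
     * while the effective UID is the file owner's (often root, UID 0). */
    printf("real UID: %d, effective UID: %d\n",
           (int)getuid(), (int)geteuid());

    /* ... privileged work would happen here ... */

    /* Permanently drop back to the invoking user's identity. */
    if (setuid(getuid()) != 0) {
        perror("setuid");
        return 1;
    }
    return 0;
}
```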

However, setuid executables pose security risks, since they grant potentially powerful privileges to ordinary users. They must be written carefully to ensure they affect only the necessary files and are resistant to tampering or subversion. Despite such efforts, many setuid programs have been subverted in the past, leading to security breaches and privilege escalation for attackers. Measures to limit the damage from bugs in setuid programs are discussed further in Section 17.8.

17.4.3 Example: Android Application IDs


In Android, each application is assigned a distinct user ID (UID) and group ID (GID) by the installer daemon during installation. Additionally, each application receives a private data directory that is exclusively owned by this UID/GID combination. This setup mirrors the protection provided by UNIX systems for separate users, ensuring isolation, security, and privacy for each application.

To further enhance security, Android modifies the kernel to restrict certain operations, such as networking
sockets, to members of specific GIDs, like AID_INET (GID 3003). Moreover, Android defines certain UIDs as
"isolated," preventing them from initiating Remote Procedure Call (RPC) requests to any services beyond a
minimal set. These mechanisms collectively bolster the security and isolation of applications on Android
devices.

Multiple Choice Questions:


1. How can a domain be realized in a computer system?
a) Each user is a domain
b) Each process is a domain
c) Each procedure is a domain
d) All of the above
Answer: d) All of the above

2. When does domain switching occur if each process is a domain?


a) When one user logs out and another user logs in
b) When one process sends a message to another process and waits for a response
c) When a procedure call is made
d) None of the above
Answer: b) When one process sends a message to another process and waits for a response

3. What determines the set of objects that can be accessed if each procedure is a domain?
a) Identity of the user
b) Identity of the process
c) Local variables defined within the procedure
d) All of the above
Answer: c) Local variables defined within the procedure

True or False Questions:
1. Computer system protection was initially developed solely for multiprogramming operating systems.
Answer: False

2. Protection techniques aim to detect interface errors early and prevent system contamination.
Answer: True

3. The separation of policy and mechanism in protection systems allows for flexibility in adapting to new
policies without impacting underlying mechanisms.
Answer: True

Identification Questions:
1. What mechanism in UNIX allows a user to temporarily assume the identity of the file owner, typically
root, to perform privileged operations without requiring root access?
Answer: The setuid bit
2. What does Android assign to each application during installation to ensure isolation, security, and
privacy?
Answer: User ID (UID) and Group ID (GID)

3. In Android, what GID restricts certain operations like networking sockets to its members?
Answer: AID_INET (GID 3003)
