
Operating System (CH1)

Definition: An operating system (OS) is a crucial software program that acts as a bridge
between a computer's hardware and its users, managing resources and facilitating user
interaction.
Goals:
1. Executing User Programs: Ensure efficient execution of user programs with access to
required resources.
2. User Convenience: Provide a user-friendly interface for easier interaction and task
management.
3. Hardware Utilization: Manage hardware resources efficiently for optimal performance.
Components of a Computer System
1. Hardware: Provides computing resources including CPU, memory, and I/O devices.
2. Operating System: Controls hardware usage, resource management, and provides a
platform for running applications.
3. Application Programs: Define resource utilization for specific computing tasks.
4. Users: Interact with the computer system directly or indirectly.
Operating System Functions
1. Resource Allocation: Efficiently manage CPU time, memory, and I/O devices among
processes and users.
2. Control Program: Oversee program execution to prevent errors and enforce security
policies.
Operating System Definition:
 An operating system is like the boss of your computer.
 The main part of the operating system, called the kernel, is like the brain that's always
working in the background.
 Everything else on your computer is either stuff that helps the operating system run
smoothly (system programs) or things you use directly, like apps (application programs).
Computer Startup:
 When you turn on your computer, there's a special program called a bootstrap program
that wakes everything up.
 This program is stored in a special place on your computer's chips (ROM or EPROM,
firmware) and gets everything ready to go.
 It then loads the kernel, the main part of the operating system, and starts its execution.
Computer System Organization:
 Inside your computer, there are little workers called CPUs and device controllers. They
talk to each other through a common bus, like a busy road.
 These workers share a big memory space where they keep important stuff.
 Sometimes they all need to work at the same time, and they have to take turns using the
memory.
Computer-System Operation:

 Your computer's workers, the CPUs, and the I/O devices (like printers or keyboards) can
all do their jobs at the same time.
 Each device has a special boss called a controller that manages it.
 The controllers have their own little storage areas called buffers.
 The CPUs move information between the big memory and these little storage areas.
 When a device finishes a task, it tells the CPU it's done by sending a special signal
called an interrupt. It's like raising a flag to get the CPU's attention.
Common Functions of Interrupts:
 Interrupts are like alarms that make the computer stop what it's doing and pay attention
to something important.
 The addresses of the routines that handle each kind of interrupt are kept in a table called the interrupt vector.
 The computer saves what it's doing, figures out what the interruption is about, and then
does the right thing based on that.
 Traps and Exceptions: These are like interrupts, but they're caused by errors or specific
requests from the user, like asking for help or encountering a problem.
 Interrupt-Driven OS: The operating system relies on interrupts to quickly respond to
events, making everything run smoother and faster.
Interrupt Handling:
1. State Preservation: The operating system saves important CPU information like
registers and the program counter to remember where the computer was before
handling an interrupt.
2. Interrupt Identification: It figures out what kind of interrupt happened:
 Polling: Checking each device in turn to see if it needs attention.

 Vectored Interrupt System: Using a table to quickly find the right action for each
interrupt.

3. Handling Interruptions: Different parts of the operating system deal with each type of
interrupt separately, ensuring the right action is taken for each event.
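To make the vectored approach concrete, here is a minimal sketch in C (the handler names and the idea of modeling the vector as an array of function pointers are illustrative; a real interrupt vector lives in hardware-defined memory and is filled in by the kernel):

    #include <stdio.h>

    #define NUM_VECTORS 4

    /* One service routine per interrupt type (hypothetical examples). */
    static void timer_handler(void)    { printf("timer tick\n"); }
    static void keyboard_handler(void) { printf("key pressed\n"); }
    static void disk_handler(void)     { printf("disk I/O done\n"); }
    static void default_handler(void)  { printf("unknown interrupt\n"); }

    /* The interrupt vector: interrupt number -> service routine. */
    static void (*interrupt_vector[NUM_VECTORS])(void) = {
        timer_handler, keyboard_handler, disk_handler, default_handler
    };

    static void dispatch(int irq) {
        /* A real CPU would first save registers and the program counter. */
        if (irq < 0 || irq >= NUM_VECTORS) irq = NUM_VECTORS - 1;
        interrupt_vector[irq]();   /* jump straight to the right handler */
    }

    int main(void) {
        dispatch(1);               /* simulate a keyboard interrupt */
        return 0;
    }

Polling, by contrast, would loop over every device asking "was it you?" instead of indexing directly into the table.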
I/O Structure:
 When the computer starts doing something like copying files, it doesn't stop you from
using it.
 Instead, it waits for the task to finish in the background while you can still do other
things.
 With synchronous I/O, only one I/O request is outstanding at a time, and the program waits for it to finish.
 With asynchronous I/O, control can return to the user program without waiting for I/O completion, requested via a system call.
 A device-status table holds information about each I/O device, such as type, address,
and state.
 The operating system accesses this table to check device status and update it with any
interrupts that occur.
Storage Definitions and Notation:
 Computers use bits, which are like tiny switches that can be on or off, to store
information.
 A bunch of bits together makes a byte, which is the smallest chunk of storage most
computers use.
Storage Structure:
Main Memory:
 Directly accessible storage for the CPU.
 Allows quick random access.
 Fast storage that's directly connected to the brain of the computer (the CPU), but it forgets everything when the power goes off (volatile).
Secondary Storage:
 Slower storage beyond main memory that remembers things even when the power goes off (nonvolatile).
 Includes hard disks, which are rigid platters with magnetic coating.
 Divided into tracks and sectors, managed by a disk controller.
 The disk controller determines the logical interaction between the device and the computer.
Solid-State Disks (SSDs):
 Faster and nonvolatile storage alternative to hard disks.
 Utilizes various technologies.
 Increasing popularity due to speed and reliability.
Storage Hierarchy:
 Storage is arranged like a pyramid, with the fastest and most expensive storage at the
top.
 Caching: the computer copies things from slower storage to faster storage to make access quicker, so main memory can be viewed as a cache for secondary storage.
 Devices like printers and disk drives have special helpers called device drivers, which provide a uniform interface between the controller and the kernel.
Caching:
 Caching is like having a quick-access memory where the computer stores frequently
used information temporarily.
 It copies important stuff from slower storage to faster storage so that the computer can
access it faster.
 When the computer needs something, it first checks this quick-access memory (cache)
to see if it's already there.
 If it is, great! It uses it directly from the cache, which is super fast.
 If not, the computer copies the needed data to the cache and uses it from there.
 The cache is smaller than the main storage, so managing what goes into the cache and
what gets replaced is important.
Direct Memory Access (DMA) Structure:
 DMA is like a fast lane for data between high-speed devices and the main memory.
 Devices like super-fast internet or graphics cards can send data directly to memory
without bothering the main processor (CPU).
 Instead of interrupting the CPU for every bit of data, DMA only interrupts once per block
of data, which is much more efficient.
Computer System Architecture:
 Most computers have one main processor that does all the heavy tasks (a single general-purpose processor).
 Some systems also have special-purpose processors for specific tasks.
 Multiprocessor systems(parallel systems, tightly-coupled systems), where multiple
processors work together, are becoming more common.
 They're like having multiple brains working on the same task, which can make
things faster and more reliable.
 Advantages include: increased throughput, economy of scale, and increased reliability.

Two types of multiprocessor systems:


1. Asymmetric Multiprocessing: Each processor has its own specific job.
2. Symmetric Multiprocessing: Each processor can handle any task.
Clustered Systems:

 Clustered systems are like groups of computers that work together on tasks.
 Share storage through a storage-area network (SAN).
 Aim to provide reliable service that continues even if one system fails.
Two types:
Asymmetric clustering: One machine stands by in case of failure.
Symmetric clustering: Multiple nodes work together and monitor each other.
 Used for high-performance computing (HPC), requiring applications to utilize parallel
processing.
 Clusters can have features like a distributed lock manager (DLM) to avoid conflicts between tasks.
Operating System Structure:
 Multiprogramming (batch systems) helps computers be more efficient by keeping the CPU busy all the time.
 Jobs are organized so that there's always something for the CPU to do.
 When one job has to wait, the operating system switches to another job.
 A subset of the total jobs in the system is kept in memory.
 One job is selected and run via job scheduling.
 Timesharing, or multitasking, is like a supercharged version of multiprogramming where
the CPU switches between tasks so fast that users can interact with each one.
 Each user gets at least one program running at a time (a process).
 If there are multiple programs ready to run, the operating system decides which one gets CPU time (CPU scheduling).

 If there isn't enough memory, swapping moves programs in and out to make
room.
 Virtual memory lets programs run even if they don't fit entirely into memory.

Operating-System Operations:
 Interrupts are like urgent signals that tell the operating system something important
happened.
 They can be triggered by hardware, like when a device needs attention, or by software,
like when there's an error.
Dual-Mode Operation:
 Helps the operating system (OS) protect itself and the computer.
 There are two modes: user mode and kernel mode.
 Hardware tells the difference between them with a mode bit.
 Kernel mode does important tasks for the system and is more powerful.
 Some instructions can only be used in kernel mode (privileged instructions).
 When we need extra power, like making a system call, we switch to kernel mode briefly.
 Newer CPUs can do even more modes, like one for running virtual machines.
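As a small user-space illustration of the mode switch (a sketch; the trap itself happens inside the C library and the kernel): calling write() traps into the kernel, the hardware flips the mode bit to kernel mode for the privileged I/O, and execution resumes in user mode.

    #include <unistd.h>

    int main(void) {
        /* write() is a system call: the C library executes a trap
         * instruction, the mode bit switches to kernel mode, the kernel
         * performs the privileged I/O, then control returns in user mode. */
        write(STDOUT_FILENO, "hello from user mode\n", 21);
        return 0;
    }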
Process Management:
 What is a Process?
 A process is a program that's currently running on a computer: the program is a passive entity, while the process is an active entity.
 It's like a task that the computer is actively doing.
 Processes need resources like the CPU (the brain), memory, access to files, and initialization data.
 When a process is done, the computer reclaims the resources it used.
How Processes Work:
 A single-threaded process follows one set of instructions at a time, like reading a book
from start to finish.
 It uses one "program counter" to keep track of where it is in the instructions.
Multiple Processes:
 Systems usually have many processes running at once, doing different things.
 They can be user tasks or things the operating system is doing.
 The computer switches between processes quickly, like flipping through channels on TV.
 Some processes belong to users and some to the operating system, all running concurrently on one or more CPUs.
 Concurrency is achieved by multiplexing the CPUs among the processes/threads.
Process Management Activities:
 Creating and deleting processes (both user and system).
 Pausing and resuming processes.
 Ensuring processes work together smoothly (synchronization).
 Allowing processes to communicate with each other.
 Handling situations where processes are stuck (deadlock handling).

Memory Management:
To run a program, the computer needs to keep some or all of its instructions and data in
memory.
Goal: Ensure optimal CPU usage and responsive system performance.
 Making sure programs have the memory they need to run.
 Keeping track of which parts of memory are being used.
 Deciding which processes or data should be in memory.
 Assigning and freeing up memory space as needed.

Storage Management:

The OS presents a consistent view of storage, hiding physical differences like device
types.
Files are the logical units used to organize data, abstracting away details of underlying
storage devices.
Each storage medium (e.g., disk drive, tape drive) is managed by the OS, handling various
properties like access speed and capacity.
File-System Management Activities:
Files are typically organized into directories for better structure and management.

 Access Control: Systems enforce access control to regulate who can access which
files.
 Core OS Activities:
 Creating and Deleting: The OS allows users to create, delete, and manipulate files and directories.
 File Manipulation: Primitives are provided to perform operations like copying, moving, and renaming files.
 Storage Mapping: Files are mapped onto secondary storage, managing their placement and retrieval.
 Backup: The OS handles backup operations to ensure data integrity, copying files to stable, non-volatile storage media for safekeeping.

Mass-Storage Management
 Purpose: Disks are used to store data that doesn't fit in main memory or needs to be
kept for a long time.
 Importance: Proper management is crucial as it impacts the speed of computer
operations.
 OS Activities:
 Free-space Management: Keeping track of available space on disks.
 Storage Allocation: Deciding how to allocate space for new data efficiently.
 Disk Scheduling: Optimizing disk operations for speed.
Tertiary Storage: Includes slower options like optical storage and magnetic tape.
 Management: Even slower storage options require management by the OS or
applications.
 Variations: Storage media can vary between Write-Once, Read-Many-Times (WORM)
and Read-Write (RW).
Migration of Data "A" from Disk to Register
 Multitasking Environments:

 Ensure using the most recent value, regardless of its storage location.
 Multiprocessor Environments:
 Hardware should maintain cache coherency, ensuring all CPUs have the latest
value in their cache.
Cache coherency refers to the consistency of data stored in multiple caches in a
multiprocessor system. It ensures that all processors have a consistent view of
memory by managing the updating and invalidating of cached data to reflect
changes made by other processors.
 Distributed Environments:
 Complex due to multiple copies of data existing, requiring synchronization
mechanisms to ensure consistency.
I/O Subsystem:
 The operating system's job is to handle the differences between hardware devices so
users don't have to worry about them.
 The I/O (input/output) subsystem takes care of managing how data is transferred
between the computer's memory and its devices.
 It handles things like buffering (storing data temporarily), caching (storing frequently
used data in faster storage), and spooling (overlapping input and output to keep things
running smoothly).
 It also provides a way for the operating system to talk to specific hardware devices
through device drivers.
Protection and Security:
 Protection is about controlling who can access what resources on the computer.
 Security is about keeping the system safe from internal and external threats like viruses
and hackers.
 User IDs and group IDs help determine who can access which files and processes.
 Privilege escalation allows users to temporarily gain more access rights.
Kernel Data Structures:
 The kernel, the core part of the operating system, uses various data structures to
organize and manage its internal operations.
 These structures include linked lists, binary search trees, hash maps, bitmaps, and
more.
 They help the operating system efficiently store and access information.
Computing Environments:
 Traditional computing environments include standalone machines, but most systems
today are interconnected through networks like the Internet.
 Network computers (thin clients): Devices acting like simple web browsers, accessing resources over a network, like Web terminals.
 Mobile computing involves devices like smartphones and tablets, which have extra features like GPS and use wireless networks as their communications path (TCP/IP is most common).
 Distributed computing networks separate systems and allow them to communicate, often
through a network operating system.
 Client-server computing involves servers providing services to client machines, such as
database or file storage.
 Peer-to-peer computing treats all connected nodes as equals, allowing each to act as
both client and server.
 Virtualization lets operating systems run multiple environments within each other, useful
for testing, development, and running multiple OSes on the same hardware.
Cloud Computing:
 Cloud computing delivers computing, storage, and even applications over a network.
 It's like renting resources from a provider instead of owning and managing them yourself.
 Amazon EC2 is an example, offering thousands of servers and petabytes of storage
over the internet, where you pay based on your usage.
 There are different types of cloud computing:
 Public cloud: Available to anyone via the internet.

 Private cloud: Run by a company for its own use.

 Hybrid cloud: Combines both public and private cloud components.

 Software as a Service (SaaS): Applications available via the internet, like a word
processor.

 Platform as a Service (PaaS): Software stack ready for application use via the internet,
like a database server.

 Infrastructure as a Service (IaaS): Servers or storage available over the internet, like
storage for backups.

Real-Time Embedded Systems:


 Real-time embedded systems are specialized computers that perform specific tasks.
 They often use special-purpose or limited-purpose operating systems, including real-time operating systems.
 These systems have well-defined time constraints, meaning tasks must be completed within specific time limits for correct operation.
Open-Source Operating Systems:

 Open-source OS systems provide their source code freely, unlike closed-source systems.

 Started by the Free Software Foundation (FSF), open-source systems often use licenses like the
GNU Public License (GPL).

 Examples include GNU/Linux and BSD UNIX, which forms the core of Mac OS X.

 Open-source operating systems can be run using virtualization software like VMware Player or
Virtualbox, allowing users to explore different systems.
Operating System (CH2)

Operating System Services


Operating systems provide essential functions for running programs and
supporting users:
 User Interface: This is how you interact with the computer. It can be
Command-Line (typing commands), Graphics User Interface (using icons
and windows), or Batch (automating tasks).
 Program Execution: The OS loads programs into memory and runs them. It also handles program termination (reporting errors), whether the program ends normally or crashes.
 I/O Operations: Programs need to read and write files or use devices like
printers. The OS manages these input and output tasks.
 File-system Manipulation: Handling files and directories is crucial. The
OS lets programs create, delete, search, and manage permissions for files.
 Communications: Programs may need to share data, either on the same
computer or over a network. The OS handles this through shared memory
or message passing.
 Error Detection: The OS constantly checks for errors in hardware,
devices, or programs. It takes appropriate actions to ensure smooth
operation.
 Resource Allocation: When many users or tasks run together, resources
like CPU, memory, and storage need to be shared. The OS manages this
allocation.
 Accounting: It keeps track of which users use how many resources,
aiding in resource management and billing.
 Protection and Security: The OS ensures that users can only access
what they're allowed to and defends against unauthorized access, both
internally and externally.
User Operating System Interface
Operating systems offer different interfaces for users to interact with. Here are
two main types:
1. CLI (Command-Line Interface):
 Allows direct command entry.
 Commands can be typed in and executed.
 May have built-in commands or just call programs.
 Can be implemented in the kernel or as a separate program (shell).
2. GUI (Graphical User Interface):
 Provides a user-friendly desktop with icons, windows, and menus.
 Users interact with icons using a mouse or keyboard.
 Actions are performed by clicking, dragging, or selecting options.
 Examples include Microsoft Windows, Apple Mac OS X, and various
Linux desktop environments.
System Calls:
 These are interfaces for programs to access OS services.
 They're typically accessed through a high-level Application Programming Interface (API) such as Win32 (Windows), POSIX (UNIX/Linux), or the Java API (JVM).
 Examples include file operations like copying files or managing processes.
System Call Implementation:
 Each system call has a unique number associated with it.
 The OS maintains a table of system calls indexed by these numbers.
 When a program invokes a system call, the OS kernel executes it and
returns the result.
 Programmers don't need to know the internal details of how system calls
are implemented; they just use the APIs provided.
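On Linux, for instance, the number-based mechanism can be seen directly (a sketch; ordinary programs would just call the API function): syscall() asks the kernel to run the entry of its system-call table selected by the given number.

    #define _GNU_SOURCE
    #include <sys/syscall.h>   /* SYS_write: the unique number for write() */
    #include <unistd.h>

    int main(void) {
        /* Equivalent to write(1, ...): the kernel indexes its system-call
         * table with SYS_write and executes the matching service routine. */
        syscall(SYS_write, 1, "via system-call number\n", 23);
        return 0;
    }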

System Call Parameter Passing (when more details are needed than the call number alone):


 Parameters needed for system calls are passed in various ways.
(They can be stored in registers, memory blocks, or pushed onto the
stack.)
 Different OSs may have different methods for passing parameters.
Types of System Calls
1. Process Control:
 Creating and terminating processes.
 Loading and executing programs.
 Getting and setting process attributes.
 Waiting for events or time.
 Managing memory allocation.
 Debugging tools like dumping memory and single-step execution.
 Locks for managing shared data access between processes.
2. File Management:
 Creating, deleting, opening, and closing files.
 Reading, writing, and repositioning within files.
 Getting and setting file attributes.
3. Device Management:
 Requesting and releasing devices.
 Reading, writing, and repositioning data on devices.
 Getting and setting device attributes.
 Attaching or detaching devices logically.
4. Information Maintenance:
 Getting and setting time or date.
 Getting and setting system data.
 Getting and setting attributes of processes, files, or devices.
5. Communications:
 Creating or deleting communication connections.
 Sending and receiving messages between processes.
 In the message-passing model, messages are sent to a host name or process name (from client to server).
 In the shared-memory model, processes create and gain access to shared memory regions.
 Transferring status information.
 Attaching and detaching remote devices.
6. Protection:
 Controlling access to resources.
 Setting permissions for users.
Examples of Operating Systems:
 MS-DOS:
 Single-tasking system.
 Programs are loaded into memory and executed one at a time.
 Simple method for running programs without creating separate
processes.
 FreeBSD:
 A multitasking system.
 Users log in and choose a shell.
System Programs
 System programs provide a convenient environment for program
development and execution. They can be categorized into:
 File Manipulation:
 Tasks like creating, deleting, copying, renaming, printing, dumping,
listing, and manipulating files and directories.
 Status Information:
 It provides information like date, time, available memory, disk space,
and number of users. Some systems offer detailed performance and
debugging info.
 Registry (Optional): Some systems use a registry to store
configuration information for easy retrieval and management.
 File Modification:
 Text editors for creating and modifying files.
 Special commands for searching file contents or performing text
transformations.
Programming Language Support: OS offers tools like compilers, assemblers,
debuggers, and interpreters for different programming languages to aid in
software development.
Program Loading and Execution: It provides utilities for loading and executing
programs, including loaders, linkage editors, and debugging systems to ensure
smooth program execution.
 Communications:
 Mechanisms for creating virtual connections among processes,
users, and computer systems.
 Enable tasks like messaging, browsing web pages, sending emails,
remote login, and file transfer.

 Background Services:
 Services launched at boot time (they start when your computer starts up and stop when it shuts down).
 Provide functionalities such as disk checking, process scheduling,
error logging, and printing.
 Run in user context, not kernel context (meaning they have limited access to certain system resources compared to other programs).
 known as services, subsystems, or daemons.
 Application Programs:
 Don't pertain to the system (not directly related to the core functions of the operating system).
 Not typically considered part of OS
 Launched by users through the command line, mouse clicks, or other interactions (a user-friendly way to access the functionality offered by the OS).
Operating System Design and Implementation
Designing and implementing an operating system (OS) is a complex task, with no
single right answer. Different systems have different structures, influenced by
hardware and user needs.
 Internal Structure Variation:
The internal structure of different operating systems can vary widely based on
factors like hardware, system type, and design goals.
 User goals, System goals
When designing an OS, you start by setting goals. Users want something
easy, safe, and fast, while the system should be reliable, flexible, and efficient.
One big rule: keep policies (decisions) separate from mechanisms (how
things are done). This makes it easier to change policies without messing up
how things work.

 Implementation
 OS implementation can vary, from early systems written in assembly
language to modern ones in languages like C and C++.
 Typically, assembly language is used for the lowest levels, while the main
body of the OS is written in C. System programs may use C, C++, or even
scripting languages like Perl or Python.
 Using higher-level languages makes it easier to adapt the OS to different
hardware, but it can be slower. Sometimes, emulation is used to make the
OS run on hardware it wasn't originally designed for.
 Emulation can enable running an OS on non-native hardware (meaning the OS can work on types of hardware it wasn't originally intended for).

Operating System Structure:


Operating systems come in different structures:
Simple Structure: MS-DOS, for example, provides the most functionality in the least space; it is not divided into modules and lacks clear separation of components.
Non-Simple Structure: UNIX, for example, has two separable parts: system programs and the kernel, which handles core functions (the file system, CPU scheduling, memory management, and other operating-system functions).

Layered Approach: The OS is divided into layers, from the hardware at layer 0 (bottom) to the user interface at layer N (top), each layer building on those below it.
Microkernel System: Here, the kernel's core functions are minimized, with most operations handled in user space, promoting flexibility and security. Communication takes place between user modules using message passing.
Modular Approach: Modern systems like Linux and Solaris use loadable kernel modules, allowing dynamic extension and adaptation. Each core component is separate and communicates with others over known interfaces. Modules can be loaded dynamically within the kernel as needed, providing flexibility.
Hybrid Systems: Combining different approaches, like Windows' mix of
monolithic and microkernel elements, achieves various performance and
security goals.
Mac OS X Structure: It blends a microkernel (Mach) with BSD Unix parts and
dynamically loadable modules for flexibility and performance.
Android Architecture: Built on a modified Linux kernel, Android uses a runtime
environment with core libraries and a virtual machine for app execution.
Operating-System Debugging and Performance Tuning
1. Finding and Fixing Errors: Identifying and correcting bugs or issues
within the operating system.
2. Generating Log Files: The OS generates log files containing error
information to assist in debugging.
3. Core Dump Files: When an app fails, it creates a memory snapshot called
a core dump, capturing the memory state of the process at the time of
failure.
4. Crash Dump Files: If the whole OS crashes, it creates a crash dump, showing what went wrong (containing kernel memory information).
5. Performance Tuning: Beyond addressing crashes, debugging includes
optimizing system performance through techniques like trace listings and
profiling.
6. Profiling: samples executing instructions to find inefficiencies, guiding optimizations that improve speed.
Operating System Generation and System Boot
During system boot(starting up a computer when it's powered on), the computer's
power initializes, and execution begins at a fixed memory location, typically within
firmware ROM. The initial boot code is stored in this ROM, initiating the boot
process. To start the operating system, the hardware needs access to it. This is
facilitated by a small piece of code called the bootstrap loader, which is stored in
ROM or EEPROM. The bootstrap loader locates the kernel, which is the core of
the operating system, loads it into memory, and starts its execution. Sometimes,
this process involves a two-step approach, where a boot block is initially loaded
by the ROM code, and then the bootstrap loader is loaded from the disk.
Commonly used boot loaders like GRUB offer the flexibility to select the kernel
from multiple disks, versions, and options. Once the kernel is loaded, the system
becomes operational.
Operating System (CH3)
Process Concept:
 An operating system executes various kinds of programs: jobs on batch systems, user programs or tasks on time-shared systems.

 A process is a program that's currently running.

 A process consists of several parts: the program code (text section), current activity
(program counter, processor registers), stack (temporary data like function parameters),
data section (global variables), and heap (dynamically allocated memory).

 Programs are passive entities stored on disk (executable files), while processes are active entities loaded into memory.

 Different users running the same program can create multiple processes.

Process State:
 A process changes its state as it runs. States include new (being created), running
(executing instructions), waiting (waiting for an event), ready (waiting to be assigned to a
processor), and terminated (finished execution).

Process Control Block (PCB):


 The PCB contains information about each process.

 It includes the process state, program counter (next instruction location), CPU registers,
CPU scheduling information (like priorities), memory-management information (allocated
memory), accounting information (CPU usage), and I/O status information (devices and
open files).
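A toy C struct makes the PCB concrete (field names and sizes are illustrative only; a real kernel's version, such as Linux's task_struct, is far larger):

    #include <stdint.h>

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    /* Illustrative PCB: everything the OS must save to pause and
     * later resume a process. */
    struct pcb {
        int             pid;             /* unique process identifier     */
        enum proc_state state;           /* new/ready/running/...         */
        uint64_t        program_counter; /* next instruction location     */
        uint64_t        registers[16];   /* saved CPU registers           */
        int             priority;        /* CPU-scheduling information    */
        void           *page_table;      /* memory-management information */
        uint64_t        cpu_time_used;   /* accounting information        */
        int             open_files[16];  /* I/O status information        */
    };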

CPU Switch From Process to Process:


 The operating system switches between processes using the CPU.

 It transfers control from one process to another as needed to keep the system running
smoothly.

Threads
 Processes typically have a single thread of execution.

 Threads allow for multiple program counters per process, enabling multiple locations to
execute concurrently.

 Thread details, including multiple program counters, are stored in the Process Control
Block (PCB).

Process Scheduling
 The goal of process scheduling is to maximize CPU utilization by quickly switching
processes onto the CPU for time-sharing.

 The process scheduler selects the next process to execute on the CPU and manages
scheduling queues.

 Scheduling queues include the job queue (all processes in the system), ready queue
(processes in main memory ready to execute), and device queues (processes waiting for
I/O).

 Processes move between these queues based on their execution status and I/O needs.

Schedulers
 Short-term scheduler (CPU scheduler) selects the next process to execute and
allocates CPU time. It operates frequently and must be fast.

 Long-term scheduler (job scheduler) selects processes to bring into the ready queue.
It operates infrequently and controls the degree of multiprogramming.

 Processes can be I/O-bound (spending more time on I/O) or CPU-bound (spending more time on computations), and the long-term scheduler aims for a good mix.

 Medium-term scheduler can be added to decrease multiprogramming by removing processes from memory and swapping them to disk.

Multitasking in Mobile Systems:

 iOS: Earlier versions of iOS allowed only one process to run at a time, while others were
suspended. Due to screen space limitations, iOS typically supports:

 Single Foreground Process: Controlled via the user interface and visible to the
user.

 Multiple Background Processes: These are in memory and running but not
displayed on the screen. They have certain limitations, like limited execution time
and specific tasks such as audio playback.

 Android: Supports both foreground and background processes with fewer restrictions.
Background processes typically use services to perform tasks and can continue running
even if the app is suspended. These services have no user interface and use minimal
memory.

Context Switch:
 Definition: When the CPU switches from executing one process to another, it performs
a context switch. This involves saving the state of the current process and loading the
saved state of the new process.

 Importance: Context switches incur overhead, and the system doesn't perform useful
work during this time.

 Factors Affecting Context Switch Time: The more complex the OS and the PCB, the longer the context switch. Some hardware supports multiple sets of registers per CPU, allowing multiple contexts to be loaded at once.

Operations on Processes:

 Process Creation: Parent processes create child processes, forming a tree structure of
processes. Processes are identified and managed using a unique process identifier
(pid).

 Resource Sharing Options: Parent and child processes can share all, some, or no
resources. Similarly, execution options include concurrent execution or waiting until the
child process terminates.

 Address Space: Child processes typically duplicate the address space of their parent or
load a new program into their memory space.
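A minimal UNIX sketch of creation plus the execution options above (standard fork(), execlp(), and wait() calls; /bin/ls is just an example program for the child to load):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* create a child process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {
            /* Child: replace the duplicated address space with a new program. */
            execlp("/bin/ls", "ls", (char *)NULL);
            perror("exec");                  /* reached only if exec fails */
            exit(1);
        } else {
            wait(NULL);                      /* parent waits for the child */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }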

Process Termination:

 Process Termination Steps: When a process finishes executing, it requests the operating system to delete it using the exit() system call, and the resources allocated to the process are deallocated by the operating system. (In short: when a process finishes its job, it asks the operating system to delete it.)

 Parent-Child Relationship: Parents may terminate child processes using the abort() system call, such as when a child exceeds allocated resources or when its task is no longer needed. Some operating systems cascade termination, terminating all descendants if a parent process terminates. (Parents can also stop their children if they're causing trouble or are no longer needed; some systems also stop all descendants when the parent stops.)

If no parent is waiting for a terminated process (has not yet called wait()), the process becomes a zombie.

If a parent terminates without waiting, its children become orphans.


Multiprocess Architecture - Chrome Browser:

 Google Chrome: Chrome employs a multiprocess architecture for enhanced stability and security:

 Browser Process: Manages the user interface and I/O operations.

 Renderer Process: Renders web pages and handles HTML and JavaScript.
Each website opened gets its own renderer process, running within a sandbox to
restrict I/O and minimize security risks.

 Plug-in Process: Handles different types of plug-ins, with each plug-in running in
its own process for better isolation and stability.

Interprocess Communication (IPC) :

allows processes to cooperate, share data, and synchronize their actions within a system. This
collaboration enables various benefits such as information sharing, speeding up computation,
modularizing tasks, and providing convenience in system operations.

 Processes within a system can either be independent, where they do not affect or are
affected by other processes, or cooperating, where they can influence each other's
execution.

 Cooperating processes share data and resources and can work together to accomplish
tasks efficiently.

Reasons for Cooperation (advantages):

 Information Sharing: Processes may need to exchange data or information to perform certain tasks or share resources effectively.

 Computation Speedup: Cooperation between processes can lead to parallel execution of tasks, thereby speeding up overall computation.

 Modularity: Breaking down complex tasks into smaller modules allows different processes to handle specific functionalities independently.

 Convenience: Cooperation simplifies the development and management of complex systems by allowing processes to work together seamlessly.
IPC Models:

 Two primary models of IPC are shared memory and message passing.

 Shared Memory: Involves a common area of memory accessible by multiple processes for
communication. Users handle synchronization.

 Message Passing: Processes communicate indirectly by sending and receiving messages. The
operating system facilitates this communication.

Producer-Consumer Problem:

There are two variations of the problem:


1. Unbounded-buffer: In this scenario, there is no practical limit on the size of the buffer that holds
the data items. The producer can keep producing items indefinitely, and the consumer can
consume them as needed.

2. Bounded-buffer: Here, a fixed-size buffer is assumed, meaning there's a limit on how many
items the buffer can hold. If the buffer is full, the producer must wait until there is space available
to produce more items, and if the buffer is empty, the consumer must wait until there are items
available to consume.
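The bounded-buffer case is usually sketched with a circular array, as below in C. This shows only the buffer logic in a single thread (real producers and consumers would run concurrently and need the synchronization discussed later); note that this classic scheme holds at most BUFFER_SIZE - 1 items.

    #include <stdio.h>

    #define BUFFER_SIZE 8

    static int buffer[BUFFER_SIZE];
    static int in = 0, out = 0;   /* next free slot / next full slot */

    /* Returns -1 if the buffer is full: the producer would have to wait. */
    static int produce(int item) {
        if ((in + 1) % BUFFER_SIZE == out) return -1;     /* full  */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        return 0;
    }

    /* Returns -1 if the buffer is empty: the consumer would have to wait. */
    static int consume(int *item) {
        if (in == out) return -1;                         /* empty */
        *item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return 0;
    }

    int main(void) {
        int x;
        produce(42);
        if (consume(&x) == 0) printf("consumed %d\n", x);
        return 0;
    }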
Shared Memory Communication:

 Involves an area of memory shared among processes.

 Users must synchronize their actions to access shared memory safely.

Message Passing Communication:

 Allows processes to communicate and synchronize by exchanging messages.

 Operations include sending and receiving messages, with fixed or variable message sizes.

Implementation Issues:

 Processes need to establish communication links and exchange messages.

 Considerations include link establishment, association with multiple processes, link capacity,
message size, and directionality.

 Implementation can be physical (e.g., shared memory, hardware bus, network) or logical
(direct/indirect, synchronous/asynchronous, automatic/explicit buffering).

Direct Communication:

 Processes directly address each other when sending or receiving messages.

 Examples: send(P, message) sends a message to process P, and receive(Q, message) receives a message from process Q.

 Communication links are set up automatically and are typically bidirectional.


Indirect Communication:

 In indirect communication, messages are exchanged through mailboxes (also known as ports).

 Each mailbox has a unique ID, and processes can communicate only if they share a common mailbox.

 Processes perform operations such as creating, sending, receiving messages through, and destroying
mailboxes.

 Mailbox sharing allows multiple processes to communicate through the same mailbox, posing challenges
like ambiguity in message delivery.

 Solutions include limiting the association of a link to at most two processes or allowing only one process at a
time to receive messages from a shared mailbox.

Synchronization:

 Message passing can be blocking (synchronous) or non-blocking (asynchronous).


 In blocking mode, the sender waits until the message is received, and the receiver waits until a
message is available.

 In non-blocking mode, the sender continues after sending, and the receiver receives either a valid
message or a null message.

 Different combinations of blocking and non-blocking operations are possible, with rendezvous
occurring when both send and receive operations are blocking.
Buffering:

 Buffering in interprocess communication (IPC) refers to the mechanism of managing a queue of messages attached to a communication link.

 There are three common ways to implement buffering:

1. Zero capacity: No messages are queued on the link. In this case, the sender must wait
for the receiver, which is known as a rendezvous.

2. Bounded capacity: The buffer has a finite length, allowing only a fixed number of
messages to be stored. If the buffer is full, the sender must wait until there is space
available.

3. Unbounded capacity: The buffer can hold an infinite number of messages. In this
scenario, the sender never needs to wait, as there is always space available in the buffer.

Examples of IPC Systems:


1. POSIX Shared Memory:

 In POSIX systems, shared memory segments are created using functions like shm_open() and
ftruncate().

 Processes can write to and read from the shared memory segment using standard memory operations (a sketch follows this list).

2. Mach Communication:

 In Mach, communication is message-based, even for system calls.

 Tasks (similar to processes) have mailboxes for communication, with each task having two
mailboxes: Kernel and Notify.

 Messages are transferred using system calls like msg_send(), msg_receive(), and msg_rpc().

3. Windows LPC (Local Procedure Call) Facility:

 Windows uses an advanced LPC facility for interprocess communication.

 Communication occurs between processes on the same system using ports similar to mailboxes.

 The communication process involves opening a connection port, sending a connection request,
and establishing communication channels using private ports.
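Here is a hedged sketch of the POSIX shared-memory calls named in item 1 (the segment name /demo_shm and the 4096-byte size are arbitrary; error handling is trimmed, and older Linux systems need -lrt when linking):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";           /* arbitrary segment name */
        int fd = shm_open(name, O_CREAT | O_RDWR, 0666);
        ftruncate(fd, 4096);                      /* set the segment size   */

        /* Map the segment: ordinary memory operations now act on it. */
        char *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        strcpy(ptr, "hello via shared memory");   /* "write" to the segment */
        printf("%s\n", ptr);                      /* "read" it back         */

        munmap(ptr, 4096);
        close(fd);
        shm_unlink(name);                         /* remove the segment     */
        return 0;
    }

A second process calling shm_open() and mmap() with the same name would see the same bytes.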

Communications in Client-Server Systems


Sockets:
 Sockets are communication endpoints identified by an IP address and a port number.

 Communication occurs between pairs of sockets.

 They support various types, including TCP, UDP, and MulticastSocket in Java.

(imagine sockets as phone numbers and ports as extensions in a large office building. They let different
programs on a computer talk to each other, like a chat between friends. Each socket has its own number
(IP address) and extension (port number). They can chat using different styles, like a direct call (TCP) or
shouting messages (UDP).)
Remote Procedure Calls (RPC):

 RPC simplifies procedure calls between processes on networked systems.

 It involves client-side stubs and server-side stubs to marshal and unmarshal parameters.

 RPC handles data representation using XDR format and introduces more failure scenarios
compared to local calls.

(Think of RPC like asking a friend to do something for you while you're on the phone. You tell them what
to do, and they do it for you. It's a way for programs on different computers to talk to each other, almost
like making a request to a faraway friend.)

Pipes:

 Pipes facilitate communication between two processes.

 Considerations include directionality, duplexity, and the need for a parent-child relationship.

 Ordinary pipes (anonymous pipes) are unidirectional and typically used between parent and child processes.

 Named pipes allow bidirectional communication, require no parent-child relationship, and can be shared by multiple processes.

(Pipes are like secret tunnels between two rooms in the same house. They let programs on the same
computer share information. Ordinary pipes are one-way, like a water pipe flowing in one direction, while
named pipes can flow both ways, like a two-way street.)
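A minimal sketch of an ordinary (anonymous) pipe between a parent and its child, using the standard pipe() and fork() calls:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                       /* fd[0] = read end, fd[1] = write end */
        char buf[32];

        pipe(fd);
        if (fork() == 0) {               /* child: the reader          */
            close(fd[1]);                /* close the unused write end */
            read(fd[0], buf, sizeof(buf));
            printf("child got: %s\n", buf);
            close(fd[0]);
        } else {                         /* parent: the writer         */
            close(fd[0]);                /* close the unused read end  */
            write(fd[1], "hi child", 9); /* 9 bytes includes the '\0'  */
            close(fd[1]);
            wait(NULL);
        }
        return 0;
    }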

Remote Method Invocation (RMI): RMI is a fancy term for making objects on one computer talk to objects on another computer.
Operating System (CH4)
Motivation:
 Many modern programs are like juggling multiple tasks at once.
 Threads are like mini-workers inside a program, each doing a different job (updating the display, fetching data, spell checking, answering a network request).
 Process creation is heavy-weight, while thread creation is light-weight (quicker and easier).
 Threads can simplify code and increase efficiency.
 Even the computer's brain (the kernel) is multithreaded (it uses threads to stay organized).
Multithreaded Server Architecture:
 It's like a big building where everyone works together (client, server, thread).
Benefits:
 Responsiveness: Threads make programs quick to respond, even if one part is busy.
 Resource Sharing: They share resources easily, like sharing tools in a workshop.
 Economy: Threads are cheaper and faster to create than whole processes.
 Scalability: They help programs run smoothly on multiprocessor architectures (computers with many cores).
Multicore Programming:
Multicore or multiprocessor systems are becoming more common, but they bring new
challenges for programmers:
 Dividing activities: Splitting up tasks among multiple cores.
 Balance: You want to keep all cores busy without overwhelming them.
 Data splitting: Dividing up the data so each core can work on its part independently.
 Data dependency: If one part of a task relies on the result of another, you must ensure they happen in the right order.
 Testing and debugging: With multiple cores, finding and fixing errors becomes more complex.
Parallelism means doing multiple tasks at the same time.
Concurrency means different tasks making progress together, even if they share the
same processor or core.
Types of parallelism:
 Data parallelism: Each core works on a different portion of the same task (subsets of the same data are distributed across multiple cores).
 Task parallelism: Different cores handle different tasks (threads are distributed across cores, each thread performing a unique operation).
As the number of tasks increases, computer hardware becomes more supportive of
handling them. Modern CPUs have multiple cores and often support multiple hardware
threads per core, allowing them to handle more tasks efficiently.
Amdahl’s Law:

 It's like figuring out how much faster you can do tasks with more cores.
 If most of the task can only run on one core, adding more cores might not make it much faster.
 The parts of the task that can't be split between cores slow down the process.
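The usual statement of the law, where S is the serial fraction of the work and N is the number of cores:

    \text{speedup} \le \frac{1}{S + \frac{1 - S}{N}}

For example, if S = 0.25 and N = 4, the bound is 1 / (0.25 + 0.75/4) = 1 / 0.4375 ≈ 2.29, so four cores give at most about a 2.3x speedup, and even infinitely many cores could never beat 1/S = 4x.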

User Threads and Kernel Threads:


 User threads are like workers managed by the program itself.
 Kernel threads are like workers managed by the computer's main system (the kernel).
 Different operating systems have their own ways of managing threads.

Multithreading Models:
1. Many-to-One Model:
 In this model, multiple user-level threads are mapped to a single kernel thread.
 If one thread blocks, it causes all threads to block, potentially limiting parallelism.
 On multicore systems, only one thread may be in the kernel at a time, reducing
efficiency.
 Examples include Solaris Green Threads and GNU Portable Threads.
2. One-to-One Model:
 Each user-level thread is mapped to a separate kernel thread.
 Creating a user-level thread creates a corresponding kernel thread, offering more
concurrency.
 However, the number of threads per process may be restricted due to overhead.
 Examples of systems using this model include Windows, Linux, and Solaris 9
and later.
3. Many-to-Many Model:
 This model allows many user-level threads to be mapped to many kernel
threads.
 The operating system can create a sufficient number of kernel threads as
needed.
 Examples include Solaris prior to version 9 and Windows with the ThreadFiber
package.
4. Two-level Model:
 Similar to the many-to-many model, but it allows a user thread to be bound to a
specific kernel thread.
 Examples of systems using this model include IRIX, HP-UX, Tru64 UNIX, and
Solaris 8 and earlier.
Thread Libraries
Thread libraries provide programmers with an interface (API) for creating and managing threads
within their programs. There are two main approaches to implementing thread libraries:
Library entirely in user space
Kernel-level library supported by the OS
Pthreads: the POSIX standard API for thread creation and synchronization.
May be provided as either a user-level or kernel-level library.
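A minimal Pthreads sketch (standard pthread_create() and pthread_join(); compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    /* The function the new thread runs; arg carries its input. */
    static void *runner(void *arg) {
        int n = *(int *)arg;
        printf("thread sees n = %d\n", n);
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        int n = 5;
        pthread_create(&tid, NULL, runner, &n);  /* spawn the thread */
        pthread_join(tid, NULL);                 /* wait for it      */
        return 0;
    }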
Implicit Threading:
 As the number of threads increases, managing them explicitly becomes more
challenging for programmers.
 Implicit threading involves compilers and run-time libraries managing thread creation and
management rather than programmers.
 Three methods of implicit threading include:
1. Thread Pools: Threads are created in a pool and await work, improving performance by reusing existing threads and separating task creation from execution.
2. OpenMP: Compiler directives and an API for C, C++, and FORTRAN to support parallel programming in shared-memory environments.
3. Grand Central Dispatch: An Apple technology for Mac OS X and iOS operating systems that manages the details of threading and identifies parallel sections. Blocks in a serial queue are removed in FIFO order; each process gets its own serial queue, called the main queue.
Thread Pools:
 Thread pools involve creating a pool of threads where they wait for tasks.
 Advantages include faster service for tasks with existing threads, dynamic adjustment of
the thread pool size, and separation of task creation from task execution strategies.
 Windows API supports thread pools.
OpenMP:
 OpenMP provides compiler directives and an API for parallel programming in shared-
memory environments like C, C++, and FORTRAN.
 It identifies parallel regions and allows parallel execution of loops and sections of code
using compiler directives.
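A small OpenMP sketch (compile with -fopenmp; the directive asks the compiler to split the loop iterations across a team of threads):

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        long sum = 0;
        /* Parallel region: iterations are divided among the threads;
         * reduction(+:sum) combines the per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000; i++)
            sum += i;
        printf("sum = %ld (threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }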
Grand Central Dispatch:
 Developed by Apple for Mac OS X and iOS operating systems.
 Extends C, C++, and Objective-C with APIs and a run-time library.
 Manages threading details and identifies parallel sections.
 Blocks of code are identified within and placed in a dispatch queue, where they are
executed by available threads.
Threading Issues:
 Involves understanding various complexities and challenges related to threading,
including semantics of system calls like fork() and exec(), signal handling, thread
cancellation, thread-local storage, and scheduler activations.
Signal Handling:
In UNIX systems, signals are used to notify a process about specific events. When a signal is
generated, it's delivered to a process, which then needs to handle it.
There are two types of signal handlers:
1. Default handlers, which are provided by the kernel and executed automatically when a signal is received.
2. User-defined handlers, which can be implemented by the programmer to customize how signals are handled.
For single-threaded programs, signals are delivered to the entire process. However, in multi-
threaded programs, there are several options for handling signals:
 Deliver the signal to the specific thread to which it applies.
 Deliver the signal to every thread in the process.
 Deliver the signal to certain threads in the process.
 Assign a specific thread to receive all signals for the process.
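A minimal user-defined handler in C for a single-threaded program (signal() is the simplest portable call; production code usually prefers sigaction()):

    #include <signal.h>
    #include <unistd.h>

    /* User-defined handler: replaces the default action for SIGINT
     * (Ctrl-C). write() is used because it is async-signal-safe. */
    static void on_sigint(int signum) {
        (void)signum;
        write(STDOUT_FILENO, "caught SIGINT\n", 14);
    }

    int main(void) {
        signal(SIGINT, on_sigint);   /* install the user-defined handler */
        pause();                     /* sleep until any signal arrives   */
        return 0;
    }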
Thread Cancellation:
Thread cancellation involves terminating a thread before it completes its task.
There are two general approaches to thread cancellation:
asynchronous and deferred. Asynchronous cancellation terminates the target thread immediately, while deferred cancellation allows the target thread to periodically check whether it should be canceled.
 Invoking a thread cancellation request initiates cancellation, but the actual termination of
the thread depends on its current state.
 If a thread has cancellation disabled, the cancellation request remains pending until the
thread enables it.
 The default type of cancellation is deferred, meaning that cancellation only occurs when
the thread reaches a cancellation point, such as when it calls the pthread_testcancel()
function. At this point, a cleanup handler is invoked to perform necessary cleanup tasks.
 On Linux systems, thread cancellation is managed through signals.
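A sketch of deferred cancellation with Pthreads (the target thread terminates only when it reaches a cancellation point such as pthread_testcancel(); compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        for (;;) {
            /* ... do one unit of work ... */
            pthread_testcancel();    /* deferred cancellation point */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        sleep(1);
        pthread_cancel(tid);         /* request cancellation; the thread
                                        exits at its next cancellation point */
        pthread_join(tid, NULL);
        printf("worker cancelled\n");
        return 0;
    }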
Thread-Local Storage:
 Thread-local storage (TLS) allows each thread to have its own copy of data, which is
particularly useful when you don't have control over the thread creation process, such as
when using a thread pool.
 TLS differs from local variables in that TLS data remains accessible across function
invocations, similar to static data. However, TLS is unique to each thread.
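A tiny sketch of TLS using GCC/Clang's __thread storage class (POSIX alternatively provides pthread_key_create(); compile with -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static __thread int counter = 0;    /* one private copy per thread */

    static void *work(void *arg) {
        (void)arg;
        counter++;                      /* touches only this thread's copy */
        printf("my counter = %d\n", counter);   /* prints 1 in each thread */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, work, NULL);
        pthread_create(&b, NULL, work, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("main counter = %d\n", counter); /* main's copy: still 0 */
        return 0;
    }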
Scheduler Activations:
 In threading models like M:M and Two-level, getting the right number of kernel threads
for an app is crucial.
 Scheduler activations solve this by letting the kernel communicate with the thread library through upcalls, which are handled by an upcall handler.
 This helps manage kernel threads effectively by using lightweight processes (LWPs) as a go-between for user and kernel threads.
 LWPs act like virtual processors where processes assign user threads to run, each
linked to a kernel thread.
 The app decides how many LWPs to create based on its needs and available resources.

Operating System Examples:


 Windows Threads: Utilizes Windows API for Win 98, Win NT, Win 2000, Win XP, and
Win 7.
 Implements one-to-one mapping at the kernel level.
 Each thread includes a unique thread id, register set, separate user/kernel
stacks, and private storage area.
 Primary data structures: ETHREAD, KTHREAD, TEB.
 Linux Threads: Referred to as tasks.
 Created via clone() system call.
 Allows sharing of address space between parent and child tasks.
 Data structure: task_struct points to process data structures.

(O Allah, forgive me, have mercy on me, guide me, grant me well-being, and provide for me.)
