
Unix Processes

The Little Critters That Do The Work

1
Process Management
 A Unix process is the execution of an image (or
the current state) of a virtual machine
 This virtual machine remains in memory until it is displaced by a
higher-priority process or terminated by an exit system call or a kill signal
 An image is characterized by:
 Memory in use
 General register values
 Status of files opened
 Default (current) directory

2
Process Creation & Initialization
 In Unix, process 0 is assigned to the scheduler
 Process 0 is created as part of the system boot
process
 Every other process is created as the result of a
fork or vfork system call
 The fork and vfork system calls split a process
into two processes
 The process that calls fork/vfork is the parent
process
 The newly created process is known as the child
process
3
Process 0 Creation and Initialization
 User memory is cleared and marked free
 System real-time clock interrupts are turned on
 Process 0 is hand crafted
 All system tables are initialized
 Root's superblock is read in and current date is set
to last modified date
 System buffers are marked as free
 Process 0 is then scheduled to run
 Process 1 is created by a fork call to execute the
bootstrap loader
 Process 1 (init) manages all user login activity and
creates the initial user process for each user
4
The fork System Call
 The fork system call creates a child process in the
image of the parent process
 The image includes:
 Shared text
 Data
 User stack
 User structure
 Kernel stack
 A combination of fork and exec system calls is
used to create a new process and start another
program under the new process

5
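
To make the fork/exec combination described above concrete, here is a minimal
user-level sketch (the program run by the child, ls via execvp, and its
arguments are arbitrary illustration choices): the parent forks, the child
replaces its image with another program, and the parent waits for it.

    /* Minimal sketch of the fork + exec + wait pattern described above. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();                      /* split into parent and child */

        if (pid < 0) {                           /* fork failed: no resources */
            perror("fork");
            exit(1);
        }

        if (pid == 0) {                          /* child: fork returned 0 */
            char *argv[] = { "ls", "-l", NULL }; /* arbitrary example program */
            execvp(argv[0], argv);               /* replace the child's image */
            perror("execvp");                    /* reached only if exec failed */
            _exit(127);
        }

        /* parent: fork returned the child's PID */
        int status;
        waitpid(pid, &status, 0);                /* reap the child when it exits */
        printf("child %d finished\n", (int)pid);
        return 0;
    }
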
fork/vfork Algorithm
int fork (void)
{
    check for available kernel resources
    get free process table slot, unique PID
    mark child state "being created"
    copy data from parent process table slot to child's slot
    increment counts on current directory inode and changed root
    increment open file counts in file table
    make copy of parent context (u area, text, data, stack) in memory
    push dummy system-level context onto child's system-level context;
        it contains data allowing the child to recognize itself and
        start running from here when scheduled by the kernel

6
fork/vfork Algorithm (cont'd)
    if (executing process is parent process)
    {
        change child state to "ready to run"
        return(child PID)    /* from system to user */
    }
    else    /* executing process is child process */
    {
        initialize u area timing fields
        return(0)    /* to user */
    }
}

 vfork operates similarly, but for efficiency it does not copy the parent's
context; instead it assumes that the child will almost immediately call exec,
which would create another copy and free the original

7
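
As a rough sketch of the usage pattern vfork assumes, the child below does
nothing except call exec (or _exit if exec fails); the parent is suspended
until that happens. The program being exec'd (/bin/echo) is an arbitrary
choice, and on many modern systems plain fork with copy-on-write is preferred.

    /* Sketch of the intended vfork usage: the child only calls exec or _exit. */
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = vfork();   /* parent is suspended until the child execs or exits */

        if (pid == 0) {        /* child: must do nothing but exec or _exit */
            execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
            _exit(127);        /* reached only if exec failed */
        }

        /* parent resumes here once the child has exec'd (or exited) */
        waitpid(pid, NULL, 0);
        return 0;
    }
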
Virtual Machine
 Unix implements a collection of virtual machines for
managing process execution
 These virtual machines are generally similar to the
base hardware except some hardware dependent
and potentially hazardous instructions are not
available
 The virtual machines share a number of hardware
resources
 Memory
 Disk drives
 I/O ports
 The CPU
8
 A multi-user OS is designed to implement a virtual
machine for each user
 The real machine's resources are shared according
to the algorithms implemented for resource sharing
and scheduling
 These algorithms usually combine priority,
readiness-to-run, and round robin selection for
allocating resources to a virtual machine
 Virtual machines have one other advantage: they
provide a level of abstraction to programs that can
make them relatively insensitive to changes in the
physical machine's hardware
9
 The main features of a virtual machine are:
 Ability to interface with the OS to perform a number of
operations through the system call interface
 Processes running in a virtual machine are isolated and
protected from each other
 Each virtual machine has its own memory space
 Virtual machines provide access to all system devices
(via the operating system)
 Virtual machines are slower than the base
machine due to resource sharing and the
hardware abstraction but the advantages more
than make up for the slowdown
10
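
A small, hedged illustration of the system call interface that each virtual
machine exposes: the process below asks the kernel for its own and its
parent's process IDs and prints them (the printf ultimately goes through the
kernel's write path on descriptor 1).

    /* Sketch: a process interacting with the OS only through system calls. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t me     = getpid();   /* system call: who am I?       */
        pid_t parent = getppid();  /* system call: who created me? */

        printf("process %d was created by process %d\n", (int)me, (int)parent);
        return 0;
    }
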
Memory Management
 Memory management is typically a complex issue
in multi-user systems because the needs and
desires of the OS and the users generally far
surpass the physically available memory
 The common solution is using part of the disk as
an extension to physical memory, called virtual
memory
 Since the CPU can only execute from physical memory, code and data must be
copied from virtual memory into physical memory before they can be executed
or operated on

11
Managing Virtual Memory
 Three common methods are used to manage virtual
memory
 Swapping
 Demand Paging
 Demand Paging and Swapping
 Swapping systems swap entire process memory
images to/from virtual memory
 A process sits on disk until it is ready to execute at
which time it is swapped into main memory and
scheduled to run
 The more physical memory a system has, the more processes can stay resident
in memory
12
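
As a rough way to see the sizes involved, the sketch below queries the page
size and physical memory through sysconf. Note that _SC_PHYS_PAGES and
_SC_AVPHYS_PAGES are common extensions (for example on Linux/glibc) rather
than guaranteed POSIX names, so treat this as illustrative only.

    /* Sketch: asking the system about its page size and physical memory. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);      /* bytes per page          */
        long phys      = sysconf(_SC_PHYS_PAGES);    /* total physical pages    */
        long avail     = sysconf(_SC_AVPHYS_PAGES);  /* currently free pages    */

        printf("page size: %ld bytes\n", page_size);
        if (phys > 0 && avail > 0)
            printf("physical memory: %lld MB, free: %lld MB\n",
                   (long long)phys  * page_size / (1024 * 1024),
                   (long long)avail * page_size / (1024 * 1024));
        return 0;
    }
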
 Demand Paging systems do not swap complete
memory images
 Instead, they copy the minimum image needed to
run the process and then rely on page faults to
bring in additional portions of virtual memory as
the process needs it
 Every running process is partially in memory at all
times
 A page fault occurs whenever a process attempts
to access a portion of its memory space that is not
currently residing in physical main memory

13
 Demand Paging and Swapping systems not only
use demand paging to move pages back and forth
between main and virtual memory but can also
swap entire processes out if necessary
 These systems combine the advantages of the first two schemes, as well as
their disadvantages
 However, this combination is one of the more common methods of dealing with
thrashing

14
Demand Paging Architecture
 Demand paging requires loading pages of
memory from disk to physical memory on demand
 Instead of swapping the entire process image,
only the parts that are currently required are
loaded
 An advantage is that many processes can be in
memory at once
 A disadvantage is that when a page fault occurs,
the process blocks until the page can be retrieved
from disk

15
Page Faults and Caching
 The two most important concepts in demand
paging are page faults and cache memory
 Page faults are used to determine when a process
attempts to access data or code that is not
currently loaded into physical memory
 A page fault is generated when this occurs and
sets in motion the following:
 Adjusting in-memory page tables to accept a new page
 Setup for a page to be read from disk
 Sleep until page is loaded
 Restart process execution when the page has been
loaded and the process re-scheduled
16
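
One hedged way to watch demand paging from user level is mmap plus mincore (a
Linux/BSD-specific call, not POSIX): pages of a fresh anonymous mapping are
reported non-resident until the process touches them and takes a page fault.

    /* Sketch: observing demand paging with mmap + mincore (Linux/BSD only). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void)
    {
        long   psz = sysconf(_SC_PAGESIZE);
        size_t len = 8 * (size_t)psz;            /* an 8-page anonymous mapping */

        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        unsigned char vec[8];

        mincore(p, len, vec);                    /* which pages are resident?   */
        printf("before touching: first page resident = %d\n", vec[0] & 1);

        p[0] = 'x';                              /* touch page 0: a page fault
                                                    brings it into memory       */

        mincore(p, len, vec);
        printf("after touching:  first page resident = %d\n", vec[0] & 1);

        munmap(p, len);
        return 0;
    }
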
Caching
 Cache memory is used in many implementations
to decrease the cost of a page fault by
anticipating future page faults and pre-loading the
required code or data into fast cache memory
before the process needs it
 A free page pool is used to store these pre-
fetched pages until they are either used or
discarded

17
Page Table
 The page table is a principal data structure used
to manage memory pages
 It consists of the following components:
 Page frame number (physical memory page address)
 Age
 Copy on modify flag
 Modified flag
 Reference count
 Validity flag
 Protection (read, write, execute)

18
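
The structure below is purely illustrative (it is not the actual kernel
definition); it shows one way the listed page table fields could be packed
into a C structure using bit-fields.

    #include <stdio.h>

    /* Illustrative sketch only -- not a real kernel page table entry. */
    struct pte_sketch {
        unsigned frame    : 20;  /* page frame number (physical page address) */
        unsigned age      : 4;   /* age, used for page replacement decisions  */
        unsigned cow      : 1;   /* copy-on-modify (copy-on-write) flag       */
        unsigned modified : 1;   /* page written since it was loaded          */
        unsigned refs     : 2;   /* reference count                           */
        unsigned valid    : 1;   /* page currently resides in physical memory */
        unsigned prot     : 3;   /* protection: read / write / execute bits   */
    };

    int main(void)
    {
        printf("one sketch page-table entry occupies %zu bytes\n",
               sizeof(struct pte_sketch));
        return 0;
    }
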
Physical and Virtual Memory Organization

 Unix has been designed to optimize the use of available memory
 If a system has enough physical memory, most of
the OS and in-memory data structures will reside
in physical main memory
 If there is not that much physical memory, the OS
resides partially on disk and is paged in and out
as needed

19
Unix System Memory
 Unix system memory consists of these components
  Low memory components
    Interrupt vectors
    Device handlers
    Physical memory mapping routines
  Kernel code
    Memory management
    Process management
    Disk management
    I/O management
    Interprocess communication

20
 System data
  Process table (proc structures)
  User structure
  Kernel stacks
  Buffer pool
  In-memory inodes
 In-memory processes

21
Unix System Memory Map

[Memory map, from high to low addresses: Process n down to Process 2, Process 1;
Unix data structures (proc structures, buffers); kernel code; low core
(interrupt vectors, device handlers)]
22
CPU Sharing
 The CPU and other system resources are shared among processes by the
scheduler
 The Unix scheduler allocates the CPU to
processes
 It schedules them to run in turn (round-robin) until:
 They voluntarily release the CPU while waiting for a
resource or
 The kernel preempts them when their runtime exceeds
the scheduler's time quantum
 The scheduler then chooses the next highest
priority eligible process to run
 The original process will run again when it next becomes the highest-priority
eligible process
23
Scheduling Algorithm
int schedule (void)
{
    while (no process picked to execute)
    {
        for (every process on run queue)
        {
            pick highest priority process already in memory
        }
        if (no process eligible to execute)
        {
            sleep
        }
    }
    remove chosen process from run queue
    switch context to chosen process, resume its execution
}
24
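
From user level, a process can cooperate with this scheduler in two simple
ways: lower its own priority with nice and voluntarily release the CPU with
sched_yield. The sketch below is illustrative; the exact effect of both calls
depends on the system's scheduling policy.

    /* Sketch: a process cooperating with the scheduler from user level. */
    #include <stdio.h>
    #include <errno.h>
    #include <unistd.h>
    #include <sched.h>

    int main(void)
    {
        /* Ask to be scheduled less aggressively (higher nice = lower priority). */
        errno = 0;
        if (nice(10) == -1 && errno != 0)
            perror("nice");

        for (int i = 0; i < 5; i++) {
            printf("doing a little work (%d)\n", i);
            sched_yield();   /* give the CPU back; the scheduler picks the next process */
        }
        return 0;
    }
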
User Structure
 Database used by kernel to locate all required info
about a user process while process is in memory
 Large data structure
 Defined in /usr/include/sys/user.h
 Some of the major components are:
 Pointer to proc structure (process table entry)
 System call return values
 Buffer address and byte counts for I/O operations
 Pointers to inodes for current, root, and parent directories
 File descriptors of files opened by the process
 Effective UID and GID, as well as real IDs for file access permissions
 Addresses and sizes of allocated system buffers
 Kernel stack for this user
 User login data
 File creation mask for this process
 Arguments and error codes for system calls
25
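
A few of the attributes kept in the user structure are directly visible to the
process through ordinary system calls. The sketch below (illustrative only)
reads the real and effective user IDs, the current directory, and the file
creation mask.

    /* Sketch: querying per-process attributes tracked in the user structure. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void)
    {
        char cwd[4096];

        printf("real uid %d, effective uid %d\n", (int)getuid(), (int)geteuid());

        if (getcwd(cwd, sizeof cwd) != NULL)
            printf("current directory: %s\n", cwd);

        /* umask() can only be read by setting it, so set and restore it. */
        mode_t old = umask(0);
        umask(old);
        printf("file creation mask: %03o\n", (unsigned)old);
        return 0;
    }
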
Process Table
 Process table entries are defined in
/usr/include/sys/proc.h
 The proc structure, of which there is one per
process, is used by the kernel to determine:
 Priorities
 Scheduling states
 Required resources for a process to run

26
Proc Structure Components
 Process state (sleeping, running, ready-to-run)
 Process flags (in-core, being swapped out, cannot
be swapped out)
 Process priority and priority adjustments (nice)
 Scheduling parameters
 Pending signals
 Name of highest level process in group hierarchy
and the parent ID
 Address and size of swappable image
 Pointer to user structure
 Pointer to linked list of running processes

27
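
As with the page table earlier, the structure below is only an illustrative
sketch, not the real sys/proc.h; it simply gives the listed components
plausible C field names.

    /* Illustrative sketch only -- not the actual proc structure. */
    struct user;                       /* the user structure (u area) */

    struct proc_sketch {
        int            p_stat;         /* sleeping, running, ready-to-run          */
        int            p_flag;         /* in core, being swapped, not swappable    */
        int            p_pri;          /* scheduling priority                      */
        int            p_nice;         /* user-supplied priority adjustment        */
        long           p_sig;          /* pending signals (bit mask)               */
        int            p_pid;          /* process ID                               */
        int            p_ppid;         /* parent process ID                        */
        unsigned long  p_addr;         /* address of the swappable image           */
        unsigned long  p_size;         /* size of the swappable image              */
        struct user   *p_uarea;        /* pointer to this process's user structure */
        struct proc_sketch *p_link;    /* linked list of running processes         */
    };
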
Process Events (Interrupts and Signals)

 There are two process events that can affect a process: interrupts and
signals
 There are two types of interrupts handled by the
kernel
 Device interrupts
 Hardware traps
 Interrupts are asynchronous events that are
generally caused by a hardware condition
 May be an indication that something needs attention or
that something is now available or ready
 A printer is ready for more data, or a disk drive has the requested block
available

28
 Hardware traps are usually a result of some kind
of CPU error condition
 Invalid bus access
 Divide by zero
 If the process currently running when an interrupt
occurs is a user process, the interrupt causes a
switch to kernel mode where the associated
interrupt service routine (ISR) is executed
 These ISRs are automatically vectored through
fixed virtual addresses called vectors in kernel
space

29
Interrupt Context Switching
 Device interrupts often indicate the completion of
an I/O request
 Interrupt service then often results in a process
switch, which causes a scheduling activity to take
place
 Depending upon the source of the data, there are potentially two different
buffers involved
 The File Buffer
 The Device I/O Buffer

30
File Buffer Headers
 File buffer headers contain the following information:
 Status flags (current state of buffer)
 Device queue pointer
 Forward and backward pointers for free list
 Forward and backward pointers for hash list
 Device ID to which buffer is currently attached
 Data size
 Superblocks
 Inode list
 Block number on disk
 Low order memory address
 High order memory address
 Error information
 Request start time

31
Driver I/O Buffers
 Driver I/O buffer headers contain the following
information:
 Status flags
 Queue of work requests for this controller
 Device ID
 Controller busy flag
 Error status
 Device register data
 Temporary storage

32
Signals
 Signals are a software mechanism, similar to a short message delivered to a
process
 They can be trapped and handled or ignored
 Signals operate through two different system calls
 The kill system call
 The signal system call
 The kill System Call
 The kill system call sends a signal to a process
 kill is generally used to terminate a process
 It requires the PID of the process to be terminated and
the signal number to send as arguments

33
The Signal System Call
 The signal system call is much more diverse
 When a signal occurs, the kernel checks to see if
the user had executed a signal system call and
was therefore expecting a signal
 If the call was to ignore the signal, the kernel returns
 Otherwise, it checks to see if it was a trap or kill signal
 If not, it processes the signal
 If it was a trap or kill signal, the kernel checks to see if
core should be dumped and then calls the exit routine
to terminate the user process

34
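
A minimal sketch of the two calls working together: signal registers a handler
for SIGUSR1, and kill delivers that signal, here from the process to itself,
so the handler runs instead of the default action.

    /* Sketch: signal() registers a handler, kill() delivers the signal. */
    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_signal = 0;

    static void handler(int signo)
    {
        (void)signo;
        got_signal = 1;    /* keep the handler minimal and async-signal-safe */
    }

    int main(void)
    {
        if (signal(SIGUSR1, handler) == SIG_ERR) {  /* expect and trap SIGUSR1 */
            perror("signal");
            return 1;
        }

        kill(getpid(), SIGUSR1);                    /* send the signal to ourselves */

        if (got_signal)
            printf("caught SIGUSR1 instead of taking the default action\n");
        return 0;
    }
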
Common Unix Signals
 SIGHUP Hang-up
 SIGINT Interrupt
 SIGQUIT Quit
 SIGILL Illegal Instruction
 SIGTRAP Trace Trap
 SIGKILL Kill
 SIGSYS Bad argument to system call
 SIGPIPE Write on pipe with no one to read it
 SIGTERM Software termination signal from kill
 SIGSTOP Stop signal
 See /usr/include/sys/signal.h

35
Signal Acceptance
 There are a few possible actions to take when a
signal occurs:
 Ignore it
 Process it
 Terminate
 The superuser can send signals to any process
 Normal users can only send signals to their own
processes

36
Process Termination
 A process is terminated by executing an exit
system call or as a result of a kill signal
 When a process executes an exit system call, it is
first placed in a zombie state
 In this state it no longer runs, but it leaves timing information and its
exit status for its parent process
 A zombie process is removed when its parent process executes a wait system
call

37
Process Cleanup
 The termination of a process requires a number of
cleanup actions
 These actions include:
 Releasing all memory used by the process
 Reducing reference counts for all files used by the
process
 Closing any files that have reference counts of zero
 Releasing shared text areas, if any
 Releasing the associated process table entry, the proc
structure
 This happens when the parent issues the wait system call, which returns the
terminated child's PID

38
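
Pulling the last two slides together, the hedged sketch below forks a child
that exits immediately; until the parent calls wait, the child is a zombie
holding only its exit status, and wait reaps it and returns the child's PID
and status.

    /* Sketch of termination and cleanup: the child exits, briefly becoming a
     * zombie, and the parent's wait() reaps it and retrieves its exit status. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {
            perror("fork");
            return 1;
        }

        if (pid == 0)                   /* child */
            exit(42);                   /* exit status left behind for the parent */

        int status;
        pid_t reaped = wait(&status);   /* removes the zombie, returns child's PID */

        if (reaped == pid && WIFEXITED(status))
            printf("child %d exited with status %d\n",
                   (int)reaped, WEXITSTATUS(status));
        return 0;
    }
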
