OS Module-1 Notes
An Operating System is a program that manages the computer hardware. It also provides
a basis for application programs and acts as an intermediary between the computer user
and the computer hardware.
Some operating systems are designed to be convenient, others to be efficient, and still others to be some combination of the two.
User View:
The user view of the computer depends on the interface used.
Some users may use PCs. Such a system is designed for one user to utilize its resources alone, with the goal of ease of use; the attention is mainly on performance and not on resource utilization.
Some users may use a terminal connected to a mainframe or minicomputers.
Other users may access the same computer through other terminals. These users may
share resources and exchange information. In this case the OS is designed to maximize
resource utilization- so that all available CPU time, memory & I/O are used efficiently.
Other users may sit at workstations connected to networks of other workstations and servers. In this case the OS is designed to compromise between individual usability and resource utilization.
System View
We can view the system as a resource allocator: a computer system has many resources that may be used to solve a problem. The OS acts as the manager of these resources. The OS must decide how to allocate these resources to programs and users so that it can operate the computer system efficiently and fairly.
A different view of an OS is that it needs to control various I/O devices and user programs; that is, an OS is a control program that manages the execution of user programs to prevent errors and improper use of the computer.
Resources include CPU time, memory space, file-storage space, I/O devices, and so on.
Each device controller is in charge of a specific type of device (for example, disk drives,
audio devices, and video displays). The CPU and the device controllers can execute
concurrently, competing for memory cycles.
To ensure orderly access to the shared memory, a memory controller is provided whose
function is to synchronize access to the memory.
Bootstrap program - for a computer to start running when it is powered up or rebooted, it needs an initial program to run.
The bootstrap program is stored in ROM or electrically erasable programmable read-only memory (EEPROM), known by the general term firmware, within the computer hardware.
The occurrence of an event is usually signaled by an interrupt from either the hardware or the software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by way of the system bus. Software may trigger an interrupt by executing a special operation called a system call (monitor call).
When the CPU is interrupted, it stops what it is doing and immediately transfers
execution to a fixed location. The fixed location usually contains the starting address
where the service routine for the interrupt is located.
The interrupt service routine executes; on completion, the CPU resumes the interrupted
computation. A time line of this operation is shown in Figure 1.3.
Figure 1.3. Interrupt time line for a single process doing output.
Interrupt transfers control to the interrupt service routine generally, through the
interrupt vector, which contains the addresses of all the service routines.
Interrupt architecture must save the address of the interrupted instruction.
Incoming interrupts are disabled while another interrupt is being processed to prevent a
lost interrupt.
A trap is a software-generated interrupt caused either by an error or a user request.
An operating system is interrupt driven. The operating system preserves the state of the CPU by storing the registers and the program counter.
To determine which type of interrupt has occurred:
Polling - the CPU queries each device in turn to find the one that raised the interrupt.
Vectored interrupt system - the interrupt number indexes into a table of service-routine addresses.
Separate segments of code determine what action should be taken for each type of interrupt.
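As a sketch of the vectored approach, the interrupt number can index directly into a table of service routines, so no polling of devices is needed. The handler names and table below are illustrative, not taken from any real OS:

```python
# Illustrative interrupt vector: the interrupt number indexes directly
# into a table of interrupt service routines (ISRs).

def timer_isr():
    return "timer serviced"

def disk_isr():
    return "disk serviced"

# The interrupt vector: interrupt number -> service routine
interrupt_vector = {0: timer_isr, 1: disk_isr}

def dispatch(interrupt_number):
    # The hardware would first save the address of the interrupted
    # instruction, then jump via the vector to the service routine.
    isr = interrupt_vector[interrupt_number]
    return isr()
```

After the routine returns, the CPU would resume the interrupted computation, as the notes above describe.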
I/O Structure
Synchronous I/O: after I/O starts, control returns to the user program only upon I/O completion.
A wait instruction idles the CPU until the next interrupt, or a wait loop spins (contending for memory access).
At most one I/O request is outstanding at a time; there is no simultaneous I/O processing.
Asynchronous I/O: after I/O starts, control returns to the user program without waiting for I/O completion.
A system call requests that the operating system allow the user to wait for I/O completion.
Device-status table contains entry for each I/O device indicating its type, address,
and state
Operating system indexes into I/O device table to determine device status and to
modify table entry to include interrupt.
Direct memory access (DMA): the device controller transfers blocks of data from buffer storage directly to main memory without CPU intervention.
Only one interrupt is generated per block, rather than one interrupt per byte.
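A device-status table of the kind described above can be sketched as a small table keyed by device name. The device names, addresses, and request strings here are invented for illustration:

```python
# Illustrative device-status table: one entry per device recording its
# type, address, and state, plus a queue of pending requests.
from collections import deque

device_table = {
    "disk0": {"type": "disk", "address": 0x1F0, "state": "idle",
              "pending": deque()},
}

def start_io(device, request):
    entry = device_table[device]
    entry["pending"].append(request)      # queue the request
    entry["state"] = "busy"

def on_interrupt(device):
    # The OS indexes into the table to find the finished request and
    # to decide whether the device still has queued work.
    entry = device_table[device]
    entry["pending"].popleft()
    entry["state"] = "busy" if entry["pending"] else "idle"
```

With two requests queued, the first completion interrupt leaves the device busy; the second returns it to idle.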
Figure 1.5 shows all the components of a computer system.
Storage Structure
The CPU can load instructions only from memory, so any programs to run must be stored
there.
General-purpose computers run most of their programs from rewriteable memory, called
main memory (also called Random Access Memory or RAM).
Main memory commonly is implemented in a semiconductor technology called Dynamic
Random Access Memory or DRAM.
The read-only memory (ROM) cannot be changed, only static programs are stored there.
EEPROM cannot be changed frequently and so contains mostly static programs. For
example, smart phones have EEPROM to store their factory-installed programs.
All forms of memory provide an array of words. Each word has its own address.
Interaction is achieved through a sequence of load or store instructions to specific
memory addresses.
The load instruction moves a word from main memory to an internal register within the
CPU, whereas the store instruction moves the content of a register to main memory. The
CPU automatically loads instructions from main memory for execution.
A typical instruction-execution cycle, as executed on a system with Von Neumann
architecture, first fetches an instruction from memory and stores that instruction in the
Instruction Register (IR).
The instruction is then decoded and may cause operands to be fetched from memory and
stored in some internal register. After the instruction on the operands has been executed,
the result may be stored back in memory.
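The fetch-decode-execute cycle described above can be sketched as a toy loop over a list standing in for memory. The three-register model (program counter, instruction register, accumulator) follows the text, but the instruction set is invented for illustration:

```python
# Toy von Neumann fetch-decode-execute cycle.

memory = [("LOAD", 5), ("ADD", 3), ("STORE", None), ("HALT", None)]

def run():
    pc = 0            # program counter: address of next instruction
    acc = 0           # internal register holding operands/results
    result = None
    while True:
        ir = memory[pc]          # fetch the instruction into the IR
        pc += 1
        op, operand = ir         # decode
        if op == "LOAD":         # execute
            acc = operand
        elif op == "ADD":
            acc += operand
        elif op == "STORE":
            result = acc         # store the result back "to memory"
        elif op == "HALT":
            return result
```

Here the program loads 5, adds 3, and stores the result, so run() returns 8.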
Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually is not possible for the following two reasons:
Main memory is usually too small to store all needed programs and data permanently.
Main memory is a volatile storage device that loses its contents when power is turned
off or otherwise lost.
Computer systems provide secondary storage as an extension of main memory which can
hold large quantities of data permanently.
The most commonly used secondary storage device is magnetic disk, which provides
storage for both programs and data.
The wide variety of storage systems in a computer system can be organized in a hierarchy
(Figure 1.4) according to speed and cost.
Example: Disk controller microprocessor receives a sequence of requests from the main
CPU and implements its own queue and scheduling algorithm.
PCs contain a microprocessor in the keyboard to convert the key strokes into codes to be
sent to the CPU.
The use of special purpose microprocessors does not turn a single processor system into
a multiprocessor.
The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation.
Systems that go beyond graceful degradation are called fault tolerant, because they can suffer the failure of any single component and still continue operation.
The benefit here is that many processes can run simultaneously - N processes can run if there are N CPUs - without causing a deterioration of performance.
Care must be taken in controlling I/O to ensure that data reach the appropriate processor; otherwise one processor may sit idle while another is overloaded.
Multiprocessing can cause a system to change its memory access model from uniform
memory access (UMA) to non-uniform memory access (NUMA).
UMA is defined as the situation in which access to any RAM from any CPU takes the same amount of time; under NUMA, some parts of memory take longer to access from some CPUs than from others.
Blade Servers are a recent development in which multiple processor boards, I/0 boards,
and networking boards are placed in the same chassis.
The difference between these and traditional multiprocessor systems is that each blade-
processor board boots independently and runs its own operating system.
Some blade-server boards are multiprocessor as well, which blurs the lines between types
of computers.
A layer of cluster software runs on the cluster nodes. Each node can monitor one or more
of the others (over the LAN). If the monitored machine fails, the monitoring machine can
take ownership of its storage and restart the applications that were running on the failed
machine.
In Asymmetric clustering, one machine is in hot standby mode, while the other is
running the applications.
The hot-standby host machine does nothing but monitor the active server. If that server
fails, the hot-standby host becomes the active server.
In Symmetric mode, two or more hosts are running applications and are monitoring
each other. This mode is obviously more efficient, as it uses all of the available hardware.
It does require that more than one application be available to run.
backing store for main memory. This can be achieved by using a technique called virtual memory, which allows for the execution of a job that is not completely in memory.
A time-sharing system should also provide a file system; the file system resides on a collection of disks, so disk management is needed. Such a system also supports concurrent execution, job synchronization, and communication.
The operating system and the users share the hardware and software resources of the
computer system.
With sharing, an error in one program might adversely affect the execution of other programs.
Without protection against these sorts of errors, either the computer must execute only one process at a time, or all output must be suspect.
A bit, called the mode bit, is added to the hardware of the computer to indicate the current
mode: kernel (0) or user (1).
User mode when executing harmless code in user applications
Kernel mode when executing potentially dangerous code in the system kernel.
When a user application requests a service from the operating system (via a system call), it must transition from user to kernel mode to fulfill the request, as shown in Figure 1.9.
At system boot time, the hardware starts in kernel mode. The operating system is then
loaded and starts user applications in user mode.
Whenever a trap or interrupt occurs, the hardware switches from user mode to kernel
mode (that is, changes the state of the mode bit to 0).
Whenever the operating system gains control of the computer, it is in kernel mode. The
system always switches to user mode (by setting the mode bit to 1) before passing control
to a user program.
The dual mode of operation provides us with the means for protecting the operating
system from errant users-and errant users from one another.
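Dual-mode operation can be sketched as a toy model in which a mode bit guards privileged instructions. The instruction names below are invented for illustration:

```python
# Toy model of dual-mode operation: mode bit 0 = kernel, 1 = user.

KERNEL, USER = 0, 1
mode = USER

PRIVILEGED = {"set_timer", "io_control"}   # illustrative set

def execute(instruction):
    if instruction in PRIVILEGED and mode == USER:
        # The hardware treats this as illegal and traps to the OS.
        return "trap: illegal instruction"
    return "executed " + instruction

def system_call(name):
    global mode
    mode = KERNEL              # the trap switches the mode bit to 0
    result = execute(name)     # the OS performs the service
    mode = USER                # mode bit back to 1 before returning
    return result
```

A user program attempting the privileged instruction directly traps, while the same request made through a system call succeeds in kernel mode.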
1.5.2 Timer
To prevent a user program from getting stuck in an infinite loop, or from never calling system services and never returning control to the OS, we use a timer.
A timer can be set to interrupt the computer after a specified period.
The period may be fixed (e.g., 1/60 second) or variable (e.g., from 1 ms to 1 second).
Before turning control over to the user, the OS ensures that the timer is set to interrupt.
If the timer interrupts, control transfers automatically to the OS.
The timer is thus used to prevent a user program from running too long.
Setting the timer is a privileged instruction (requiring kernel mode).
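The timer mechanism can be simulated with a counter that the OS loads before dispatching a user program; each tick decrements it, and reaching zero raises an interrupt that returns control to the OS. This is purely illustrative:

```python
# Simulated OS timer enforcing a time slice on a "user program".

class TimerInterrupt(Exception):
    pass

def run_user_program(instructions, time_slice):
    counter = time_slice          # loaded by the OS (privileged)
    executed = 0
    for _ in instructions:
        if counter == 0:
            raise TimerInterrupt  # control transfers back to the OS
        counter -= 1
        executed += 1
    return executed
```

A short program finishes within its slice; a long (or looping) one is cut off by the timer interrupt.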
Process needs resources such as CPU, memory, I/O, files, Initialization data to accomplish
its task.
Process termination requires reclaim of any reusable resources.
Single-threaded process has one program counter specifying location of next instruction
to execute. Process executes instructions sequentially, one at a time, until completion.
Multi-threaded process has one program counter per thread.
Typically system has many processes, some user, and some operating system running
concurrently on one or more CPUs. Concurrency by multiplexing the CPUs among the
processes / threads.
A process is the unit of work in a system. Such a system consists of a collection of
processes, some of which are operating-system processes (those that execute system code)
and the rest of which are user processes (those that execute user code). All these processes
can potentially execute concurrently – by multiplexing the CPU among them on a single
CPU.
The operating system is responsible for the following activities in connection with process
management:
Creating and deleting both user and system processes
Suspending and resuming processes
Providing mechanisms for process synchronization
Providing mechanisms for process communication
Providing mechanisms for deadlock handling
The overall speed of a computer system often depends on the speed of the disk subsystem.
Magnetic tape drives and CD and DVD drives are typical tertiary storage devices.
The media (tapes and optical platters) vary between WORM (write-once, read-many-times) and RW (read-write) formats.
1.6.3. Caching:
There are many cases in which a smaller higher-speed storage space serves as a cache, or
temporary storage, for some of the most frequently needed portions of larger slower
storage areas.
The hierarchy of memory storage ranges from CPU registers to hard drives and external
storage. (See table below.)
The OS is responsible for determining what information to store in what level of cache, and
when to transfer data from one level to another.
The proper choice of cache management can have a profound impact on system
performance.
Data read in from disk follows a migration path from the hard drive to main memory, then
to the CPU cache, and finally to the registers before it can be used, while data being written
follows the reverse path. Each step (other than the registers) will typically fetch more data
than is immediately needed, and cache the excess in order to satisfy future requests faster.
For writing, small amounts of data are frequently buffered until there is enough to fill an
entire "block" on the next output device in the chain.
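The caching principle just described - a small, fast store in front of a larger, slower one, holding recently used items and evicting the least recently used entry when full - can be sketched as follows (the backing store and keys are illustrative):

```python
# Sketch of a cache using a least-recently-used (LRU) eviction policy.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.cache = OrderedDict()     # the small, fast level
        self.backing = backing_store   # the large, slow level
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)        # most recently used
        else:
            self.misses += 1                   # fetch from slow level
            self.cache[key] = self.backing[key]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False) # evict the LRU entry
        return self.cache[key]
```

Reading a recently used key again is then a hit served from the fast level, while cold keys cause a fetch from the slower level and possibly an eviction.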
The issues get more complicated when multiple processes (or worse multiple computers)
access common data, as it is important to ensure that every access reaches the most up-to-
date copy of the cached data (amongst several copies in different cache levels. )
Important principle, performed at many levels in a computer (in hardware, operating
system, software).
Information in use copied from slower to faster storage temporarily.
In a multitasking environment, where the CPU is switched back and forth among various processes, care must be taken that, if several processes wish to access the same file, each of them obtains the most recently updated value.
When only one process executes at a time, no such difficulties arise.
Multitasking environments must be careful to use most recent value, no matter where it is
stored in the storage hierarchy.
Multiprocessor environment must provide cache coherency in hardware such that all
CPUs have the most recent value in their cache.
Each I/O device has a device handler that resides in a separate process associated with that device.
The I/O management consists of:
A memory-management component that includes buffering, caching, and spooling.
A general device-driver interface.
Drivers for specific hardware devices.
A network operating system is an OS that provides features such as file sharing across the network, and includes communication facilities.
Embedded into devices such as automobiles, climate control systems, process control, and
even toasters and refrigerators.
May involve specialized chips, or generic CPUs applied to a particular task. (Consider the current price of 80286 or even 8086 or 8088 chips, which are still plenty powerful for simple electronic devices such as kids' toys.)
Process-control devices require a real-time (interrupt-driven) OS. Response time can be critical for many such devices.
Embedded systems almost always run real-time operating systems. A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application.
Sensors bring data to the computer. The computer must analyze the data and possibly
adjust controls to modify the sensor inputs.
A real-time system has well-defined, fixed time constraints. Processing must be done within
the defined constraints, or the system will fail.
A real-time system functions correctly only if it returns the correct result within its time
constraints.
Handheld Systems include personal digital assistants (PDAs), such as Palm and Pocket-
PCs, and cellular telephones, many of which use special-purpose embedded operating
systems.
Handheld devices have small amounts of memory, slow processors, and small display
screens.
The amount of physical memory in a handheld depends on the device, but typically it is
somewhere between 1 MB and 1 GB.
Handheld devices use smaller, slower processors that consume less power.
Processors for most handheld devices run at a fraction of the speed of a processor in a PC.
Faster processors require more power.
To include a faster processor in a handheld device would require a larger battery, which
would take up more space and would have to be replaced.
Some handheld devices use wireless technology, such as Bluetooth or 802.11, allowing
remote access to e-mail and Web browsing. Cellular telephones with connectivity to the
Internet fall into this category.
For PDAs that do not provide wireless access, downloading data typically requires the user first to download the data to a PC or workstation and then to the PDA.
Some PDAs allow data to be directly copied from one device to another using an infrared
link.
The limitations in the functionality of PDAs are balanced by their convenience and
portability.
(Technically clients and servers are processes, not HW, and may co-exist on the same
physical computer. )
A process may act as both client and server of either the same or different resources.
Served resources may include disk space, CPU cycles, time of day, IP name information,
graphical displays (X Servers), or other resources.
The client-server system has the general structure shown in figure 1.12
Server systems are broadly categorized as compute servers and file servers:
The Compute-server system provides an interface to which a client can send a request to perform an action (for example, read data); in response, the server executes the action and sends back results to the client. A server running a database that responds to client requests for data is an example of such a system.
The File-server system provides a file-system interface where clients can create,
update, read, and delete files. An example of such a system is a Web server that delivers
files to clients running Web browsers.
A peer acting as a client must first discover what node provides a desired service by
broadcasting a request for the service to all other nodes in the network. The node (or
nodes) providing that service responds to the peer making the request. To support
this approach, a discovery protocol must be provided that allows peers to discover
services provided by other peers in the network.
Many commands are given to the OS through control statements. When the user logs on, a program that reads and interprets control statements is executed automatically. This program is sometimes called the control-card interpreter or command-line interpreter, and is also known as the shell.
The command statements themselves deal with process creation and management, I/O handling, secondary-storage management, main-memory management, file-system access, protection, and networking.
The command interpreter (CI) itself may contain the code to execute the command. For example, a command to delete a file may cause the CI to jump to a section of its code that sets up the parameters and makes the appropriate system call.
An alternative approach used in UNIX implements commands through system programs.
The UNIX command to delete a file: rm file.txt.
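The two designs can be contrasted with a toy interpreter: a built-in command is executed by the shell's own code, while any other command is run as a separate system program, UNIX style. The set of built-ins here is invented for illustration:

```python
# Toy command-line interpreter: built-ins vs. system programs.
import shlex
import subprocess

BUILTINS = {"echo"}

def interpret(line):
    argv = shlex.split(line)
    if not argv:
        return ""
    if argv[0] in BUILTINS:
        # The interpreter itself contains the code for this command.
        return " ".join(argv[1:])
    # Otherwise the command names a system program to load and run.
    completed = subprocess.run(argv, capture_output=True, text=True)
    return completed.stdout.strip()
```

With this design, adding a new command means adding a new program rather than modifying the interpreter, which is the advantage of the UNIX approach described above.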
Figure 1.14 shows the Bourne shell command interpreter being used on Solaris 10.
In UNIX, various GUI interfaces are available, including the Common Desktop Environment (CDE), the K Desktop Environment (KDE), and the GNOME desktop.
Usually UNIX users prefer a command-line interface such as a shell, whereas Windows users prefer the Windows GUI.
System calls occur in different ways depending on the computer. Sometimes more information is needed to identify the desired system call. The exact type and amount of information needed may vary according to the particular OS and call.
Mostly accessed by programs via a high-level Application Program Interface (API) rather
than direct system call use
The API specifies a set of functions that are available to an application programmer,
including the parameters that are passed to each function and the return values the
programmer can expect.
Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems
(including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java
virtual machine (JVM).
System call sequence to copy the contents of one file to another file shown in figure 1.15
The use of APIs instead of direct system calls provides for greater program portability
between different systems. The API then makes the appropriate system calls through
the system call interface, using a table lookup to access specific numbered system
calls, as shown in Figure 1.16.
Figure 1.16. The handling of a user application invoking the open() system call.
Parameters are generally passed to system calls via registers or, less commonly, by values pushed onto the stack. Large blocks of data are generally accessed indirectly, through a memory address passed in a register or on the stack, as shown in Figure 1.17.
Process control
end, abort
load, execute
create process, terminate process
get process attributes, set process attributes
wait for time
wait event, signal event
allocate and free memory
File management
create file, delete file
open, close
read, write, reposition
get file attributes, set file attributes
Device management
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
Information maintenance
get time or date, set time or date
get system data, set system data
get process, file, or device attributes
set process, file, or device attributes
Communications
create, delete communication connection
send, receive messages
transfer status information
attach or detach remote devices
Protection
Set permission, get permission
Allow user, deny user
1. PROCESS CONTROL:
A running program needs to be able to halt its execution either normally (end) or
abnormally (abort).
If a system call is made to terminate the currently running program abnormally, or if the
program runs into a problem and causes an error trap, a dump of memory is sometimes
taken and an error message generated.
The dump is written to disk and may be examined by a debugger - a system program designed to aid the programmer in finding and correcting bugs - to determine the cause of the problem.
The operating system must transfer control to the command interpreter to read the next
command.
In an interactive system, the command interpreter simply continues with the next
command; it is assumed that the user will issue an appropriate command to respond to
any error.
In a GUI system, a pop-up window might alert the user to the error and ask for guidance.
In a batch system, the command interpreter usually terminates the entire job and
continues with the next job.
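The distinction between normal (end) and abnormal (abort) termination can be observed directly: a child process exits with status 0 on success or a nonzero status on error, which the parent can inspect. This sketch uses Python's subprocess module to play the role of the parent:

```python
# Normal vs. abnormal termination observed via the child's exit status.
import subprocess
import sys

def run_child(code):
    completed = subprocess.run([sys.executable, "-c", code])
    return completed.returncode

normal = run_child("import sys; sys.exit(0)")    # clean end
abnormal = run_child("import sys; sys.exit(1)")  # abort with an error
```

A command interpreter or batch system would use exactly this status to decide whether to continue with the next command or terminate the job.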
2. FILE MANAGEMENT:
System calls can be used to create and delete files. These calls may require the name of the file, and perhaps some of its attributes, in order to create or delete it.
Other operations may involve reading the file, writing to it, and repositioning within it after it is opened.
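The create/open/write/read/close/delete sequence can be demonstrated with the low-level wrappers in Python's os module, which map closely onto the UNIX file system calls:

```python
# File-management system calls via Python's os module.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # create and open
os.write(fd, b"hello")                         # write
os.close(fd)                                   # close

fd = os.open(path, os.O_RDONLY)                # reopen for reading
data = os.read(fd, 100)                        # read
os.close(fd)

os.unlink(path)                                # delete the file
```

Each os call here corresponds to one of the file-management system calls listed earlier (create, open, read, write, close, delete).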
3. DEVICE MANAGEMENT:
The system calls are also used for accessing devices.
Many of the system calls used for files are also used for devices.
In a multi-user environment, a request must first be made to use a device. After use, the device must be released with a release system call, so that it is free to be used by another user. These functions are similar to the open and close system calls for files.
Read, write, and reposition system calls may be used with devices.
MS-DOS and UNIX merge the I/O devices and the files into a combined file-device structure, in which I/O devices are identified by file names.
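Because UNIX identifies I/O devices by file names, the same open and read system calls work on a device node. This sketch assumes a UNIX-like system where the /dev/urandom device node exists:

```python
# Reading from a device using the same calls used for ordinary files.
import os

fd = os.open("/dev/urandom", os.O_RDONLY)   # "open the device"
random_bytes = os.read(fd, 16)              # same read call as files
os.close(fd)
```

No device-specific call is needed; the file-device structure lets the device be treated as a file.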
4. INFORMATION MAINTENANCE:
Many system calls are used to transfer information between user program & OS.
Example: most systems have system calls to return the current time and date, the number of current users, the version number of the OS, the amount of free memory or disk space, and so on.
In addition, the OS keeps information about all its processes, and there are system calls to access this information.
5. COMMUNICATION:
There are two models of communication:
Message-Passing Model:
In this model, information is exchanged using the inter-process communication facility provided by the OS.
Before communication the connection should be opened.
The name of the other communicating party must be known; it can be on the same computer or on another computer connected by a network.
Each computer in a network has a host name, such as an IP name; similarly, each process has a process name, which can be translated into an equivalent identifier by the OS.
The get hostid and get processid system calls do this translation. These identifiers are then passed to the open connection and close connection system calls.
The recipient process must give its permission for communication to take place with an
accept connection call.
Most processes receive connections through special-purpose system programs dedicated to that purpose, called daemons. The daemon on the server side is called a server daemon, and the daemon on the client side is called a client daemon.
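The message-passing model can be sketched with two processes exchanging messages over a pipe provided by the OS; neither process touches the other's memory. This sketch assumes a UNIX-like system (fork-based multiprocessing):

```python
# Message passing between two processes over an OS-provided pipe.
from multiprocessing import Pipe, Process

def child(conn):
    msg = conn.recv()          # receive a message
    conn.send(msg.upper())     # send a reply
    conn.close()

def demo():
    parent_conn, child_conn = Pipe()   # open the connection
    p = Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send("hello")          # send a message
    reply = parent_conn.recv()         # receive the reply
    p.join()
    return reply
```

All data crosses through the OS-managed channel, which is why message passing needs no conflict avoidance but pays a per-message cost.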
Shared-Memory Model:
In this model, processes use map memory system calls to gain access to memory owned by another process.
Normally, the OS tries to prevent one process from accessing another process's memory.
In shared memory this restriction is eliminated, and the processes exchange information by reading and writing data in shared areas. These areas are located by the processes and are not under OS control.
The processes must ensure that they are not writing to the same memory area simultaneously.
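The shared-memory model can be sketched with two workers reading and writing a common buffer at memory speed. Threads stand in for processes here to keep the example portable; the lock shows the workers coordinating access themselves, as the text requires:

```python
# Shared-memory style communication through a common buffer.
from threading import Lock, Thread

shared_area = [0] * 4    # region both workers can read and write
lock = Lock()
received = []

def producer():
    with lock:                    # workers coordinate themselves;
        shared_area[0] = 42       # the OS is not involved per write

def consumer():
    with lock:
        received.append(shared_area[0])

t1 = Thread(target=producer)
t1.start()
t1.join()
t2 = Thread(target=consumer)
t2.start()
t2.join()
```

Unlike message passing, no system call mediates each exchange, which is where the speed advantage comes from.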
Both of these models are commonly used in operating systems, and some systems even implement both.
Message passing is useful when a small number of data need to be exchanged, since no conflicts need be avoided, and it is easier to implement than shared memory. Shared memory allows maximum speed and convenience of communication, since it takes place at memory speed when within a computer.
6. PROTECTION:
Provides a mechanism for controlling access to the resources provided by a computer
system.
Formerly, protection was a concern only on multiprogrammed computer systems with several users.
System calls providing protection include set permission and get permission, which
manipulate the permission settings of resources such as files and disks.
The allow user and deny user system calls specify whether particular users can - or cannot - be allowed access to certain resources.
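The set permission / get permission pair can be illustrated with the POSIX chmod and stat calls exposed through Python's os module, using UNIX-style permission bits on a scratch file:

```python
# Setting and getting permissions on a file via chmod and stat.
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

os.chmod(path, 0o600)                          # set: owner rw only
before = stat.S_IMODE(os.stat(path).st_mode)   # get permission

os.chmod(path, 0o644)                          # allow others to read
after = stat.S_IMODE(os.stat(path).st_mode)

os.unlink(path)
```

The second chmod plays the role of an "allow user" operation, widening access to the resource for other users.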
In addition to systems programs, operating systems are supplied with programs that are
useful in solving common problems or performing common operations. Such application
programs include web browsers, word processors and text formatters, spreadsheets,
database systems, compilers, plotting & statistical-analysis packages and games.
1.14.3. Implementation:
Operating systems were once written in assembly language; now most are written in higher-level languages such as C or C++.
The advantage of using a high-level language is that the code can be written faster, and is more compact and easier to understand and debug.
Improvements in compiler technology will improve the generated code for the OS by
simple recompilation.
OS is easier to port – to move to some other hardware – if it is written in high level
language.
The disadvantage of implementing OS in high level language is reduced speed and
increased storage requirements.
Performance improvements in an OS result from better data structures and algorithms.
After OS is written, bottleneck routines can be identified and can be replaced with assembly
language code.
The original UNIX OS used a simple layered approach, but almost all the OS was in one big
layer, not really breaking the OS down into layered subsystems:
The operating system is divided into a number of layers (levels), each built on top of lower
layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user
interface (figure 1.22).
The main advantage of the layered approach is modularity: each layer uses the services and functions provided only by lower layers. This approach simplifies debugging and verification.
With modularity, layers are selected such that each uses functions (operations) and
services of only lower-level layers.
The problem is deciding what order in which to place the layers, as no layer can call upon
the services of any higher layer, and so many chicken-and-egg situations may arise.
Layered approaches can also be less efficient, as a request for service from a higher layer
has to filter through all lower layers before it reaches the HW, possibly with significant
processing at each step.
1.15.3. Microkernels:
The basic idea behind micro kernels is to remove all non-essential services from the kernel,
and implement them as system applications instead, thereby making the kernel as small
and efficient as possible.
A microkernel is a small OS that provides the foundation for modular extensions.
The main function of the micro kernels is to provide communication facilities between the
current program and various services that are running in user space.
Most Microkernels provide basic process and memory management, and message passing
between other services, and not much more.
Mach was the first and most widely known microkernel, and now forms a major component of Mac OS X.
Benefits: easier to extend the OS, easier to port the OS to new hardware, and more reliable and secure, since less code runs in kernel mode.
Detriments:
Performance overhead of user space to kernel space communication
1.15.4. Modules:
The Modern OS development is object-oriented, with a relatively small core kernel and a set
of modules which can be linked in dynamically. See for example the Solaris structure, as
shown in Figure 1.24 below.
Modules are similar to layers in that each subsystem has clearly defined tasks and
interfaces, but any module is free to contact any other module, eliminating the problems of
going through multiple intermediary layers, as well as the chicken-and-egg problems.
The kernel is relatively small in this architecture, similar to Microkernels, but the kernel
does not have to implement message passing since modules are free to contact each other
directly.
One obvious difficulty involves the sharing of hard drives, which are generally partitioned into separate smaller virtual disks, one for each guest operating system.
Figure 1.25. System Models (a) Non Virtual Machines (b) Virtual Machines
1.16.2. Simulation:
An alternative to creating an entire virtual machine is to simply run an emulator, which
allows a program written for one OS to run on a different OS.
For example, a UNIX machine may run a DOS emulator in order to run DOS programs, or
vice-versa.
Emulators tend to run considerably slower than the native OS, and are also generally less
than perfect.
1.16.2. Para-Virtualization:
Para-virtualization is another variation on the theme, in which an environment is provided
for the guest program that is similar to its native OS, without trying to completely mimic it.
Guest programs must also be modified to run on the Para-virtual OS.
Solaris 10 uses a zone system, in which the low-level hardware is not virtualized, but the
OS and its devices (device drivers) are.
Within a zone, processes have the view of an isolated system, in which only the
processes and resources within that zone are seen to exist.
Figure 1.26 shows a Solaris system with the normal "global" operating space as well as
two additional zones running on a small virtualization layer.
Figure 1.26. Solaris 10 with two zones
1.16.3. Implementation:
Implementation may be challenging, partially due to the consequences of user versus
kernel mode.
Each of the simultaneously running kernels needs to operate in kernel mode at some
point, but the virtual machine actually runs in user mode.
So kernel mode has to be simulated for each guest OS, and kernel system
calls passed through the virtual machine into true kernel mode for eventual HW
access.
The virtual machines may run slower, due to the increased levels of code between
applications and the HW, or they may run faster, due to the benefits of caching. (And
virtual devices may also be faster than real devices; for example, RAM disks are faster
than physical disks.)
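The kernel-mode simulation described above is often called trap-and-emulate. The toy below is a hypothetical illustration (the instruction names are invented): the guest "kernel" runs in user mode, so any privileged instruction it issues traps to the virtual machine monitor (VMM), which performs the operation on the guest's behalf in true kernel mode.

```python
# Hypothetical toy of trap-and-emulate: privileged instructions issued by
# a guest running in user mode trap to the VMM, which emulates them.

PRIVILEGED = {"disable_interrupts", "set_page_table"}

class VMM:
    def __init__(self):
        self.log = []

    def trap(self, instruction):
        # The VMM runs in true kernel mode; it emulates the privileged
        # instruction for the guest, then resumes the guest in user mode.
        self.log.append(f"emulated {instruction}")

def run_guest(vmm, instructions):
    for instr in instructions:
        if instr in PRIVILEGED:
            vmm.trap(instr)          # hardware would raise a trap here
        else:
            pass                     # unprivileged instructions run natively

vmm = VMM()
run_guest(vmm, ["add", "disable_interrupts", "load", "set_page_table"])
print(vmm.log)   # -> ['emulated disable_interrupts', 'emulated set_page_table']
```

The extra trip through the VMM on every privileged instruction is the main source of the slowdown mentioned above; unprivileged instructions run at native speed.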
1.16.4. Examples:
1.16.4.1. VMware:
VMware runs as an application on a host operating system such as Windows or Linux and
allows this host to concurrently run several operating systems as independent virtual
machines.
It virtualizes the Intel 80x86 hardware platform, allowing the simultaneous operation of
multiple Windows and Linux OSes, as shown by example in Figure 1.27.
The programmer could test the application on a host operating system and on three guest
operating systems with each system running as a separate virtual machine.
In the figure, Linux is running as the host operating system; FreeBSD, Windows NT, and
Windows XP are running as guest operating systems.
Figure 1.29. The architecture of the CLR for the .NET framework
1.17. Operating System Generation:
An OS may be designed and built for a specific HW configuration at a specific site, but more
commonly OSes are designed with a number of variable parameters and components, which
are then configured for a particular operating environment.
Systems sometimes need to be re-configured after the initial installation, to add additional
resources, capabilities, or to tune performance, logging, or security.
1.18. System Boot:
When the power is turned on or the computer is rebooted, the CPU begins executing at a
predefined memory address. This address points to the "bootstrap" program located in ROM
chips (or EPROM chips) on the motherboard.
The ROM bootstrap program first runs hardware checks, determining what physical
resources are present and doing power-on self tests (POST) of all HW for which this is
applicable. Some devices, such as controller cards may have their own on-board
diagnostics, which are called by the ROM bootstrap program.
The user generally has the option of pressing a special key during the POST process, which
launches the ROM BIOS configuration utility. This utility allows the user to specify and
configure certain hardware parameters, such as where to look for an OS and whether
or not to restrict access to the utility with a password.
Some hardware may also provide access to additional configuration setup programs,
such as for a RAID disk controller or some special graphics or networking cards.
During boot-up, depending on configuration, the BIOS may look for a floppy drive, CD-ROM
drive, or primary or secondary hard drive, in the order specified by the HW configuration
utility.
Assuming it goes to a hard drive, it will find the first sector on the hard drive and load up
the fdisk table, which contains information about how the physical hard drive is divided up
into logical partitions, where each partition starts and ends, and which partition is the
"active" partition used for booting the system.
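The fdisk table described above is stored in the first sector, the master boot record (MBR). A classic MBR is 512 bytes: boot code, then four 16-byte partition entries starting at offset 446, then the signature bytes 0x55 0xAA. The sketch below parses a synthetic MBR built in memory (the partition values are made up for the example); the boot flag 0x80 marks the "active" partition.

```python
import struct

# Sketch: parse the classic MBR partition table. Layout per 16-byte entry:
# byte 0 boot flag (0x80 = active), byte 4 partition type, bytes 8-11 the
# starting sector (little-endian LBA), bytes 12-15 the size in sectors.

def parse_mbr(sector):
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa", "not a valid MBR"
    partitions = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot_flag, ptype, lba_start, num_sectors = struct.unpack("<B3xB3xII", entry)
        partitions.append({
            "active": boot_flag == 0x80,   # the partition used for booting
            "type": ptype,                 # e.g. 0x83 = Linux
            "start": lba_start,            # where the partition starts
            "size": num_sectors,           # where it ends, as a length
        })
    return partitions

# Build a synthetic MBR with one active Linux (type 0x83) partition.
mbr = bytearray(512)
mbr[446:462] = struct.pack("<B3sB3sII", 0x80, b"\0\0\0", 0x83, b"\0\0\0", 2048, 409600)
mbr[510:512] = b"\x55\xaa"

parts = parse_mbr(bytes(mbr))
active = [p for p in parts if p["active"]][0]
print(active["start"], active["size"])   # -> 2048 409600
```

This is exactly the information the boot program needs: which partition is active and where it begins, so it can load that partition's boot loader next.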
For a single-boot system, the boot program loaded off of the hard disk will then proceed to
locate the kernel on the hard drive, load the kernel into memory, and then transfer control
over to the kernel.
For dual-boot or multiple-boot systems, the boot program will give the user an
opportunity to specify a particular OS to load, with a default choice if the user does not pick
a particular OS within a given time frame.
The boot program then finds the boot loader for the chosen OS and runs that
program.
When the system enters full multi-user multi-tasking mode, it examines configuration files
to determine which system services are to be started, and launches each of them in turn. It
then spawns login programs (gettys) on each of the login devices which have been
configured to enable user logins.
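The final step above can be sketched as a toy "init" program. Everything here is hypothetical (the service names and device names are invented): it examines a configuration, starts each listed service in turn, then spawns a login handler (getty) on each configured login device.

```python
# Hypothetical toy "init": read a configuration describing which services
# to start and which devices accept logins, then launch each in turn.

CONFIG = {
    "services": ["syslog", "cron", "sshd"],      # system services to start
    "login_devices": ["tty1", "tty2"],           # devices that get a getty
}

started = []

def start_service(name):
    # A real init would fork/exec the service here; we just record it.
    started.append(f"service:{name}")

def spawn_getty(device):
    # A real getty waits on the device and runs login for the user.
    started.append(f"getty:{device}")

for svc in CONFIG["services"]:
    start_service(svc)
for tty in CONFIG["login_devices"]:
    spawn_getty(tty)

print(started)
```

The ordering matters: services come up first so that by the time a user can log in, the system is already in full multi-user, multi-tasking mode.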
GRUB is an example of an open source bootstrap program for Linux systems.
A disk that has a boot block is called boot disk or system disk.