Operating System IM


Republic of the Philippines

POLYTECHNIC UNIVERSITY OF THE PHILIPPINES


OFFICE OF THE VICE PRESIDENT FOR BRANCHES AND
EXTENSIONS
MARAGONDON BRANCH

INSTRUCTIONAL MATERIALS

IN

COMP 20103
OPERATING SYSTEM

Compiled by:

Virgilio R. Cuajunco, Jr.
Faculty
Date: _________________

Checked by:

Assoc. Prof. Ayreenlee E. Resus
Chairman, Committee on Writing Instructional Materials
Date: _________________

Approved by:

Dr. Agnes Y. Gonzaga
Head, Academic Programs
Date: _________________

Assoc. Prof. Denise A. Abril
Director
Date: _________________


INTRODUCTION

Welcome to the Polytechnic University of the Philippines. These instructional materials will help you
become an effective learner and successfully meet the requirements of the course. You will
discover that you can learn in a very challenging way at your own pace. You can learn while
enjoying every activity in this course.

THE POLYTECHNIC UNIVERSITY OF THE PHILIPPINES

VISION

PUP: The National Polytechnic University

MISSION

Ensuring inclusive and equitable quality education and promoting lifelong learning
opportunities through a re-engineered polytechnic university by committing to:

 provide democratized access to educational opportunities for the holistic development of individuals with global perspective

 offer industry-oriented curricula that produce highly-skilled professionals with managerial and technical capabilities and a strong sense of public service for nation building

 embed a culture of research and innovation

 continuously develop faculty and employees with the highest level of professionalism

 engage public and private institutions and other stakeholders for the attainment of social development goals

 establish a strong presence and impact in the international academic community

PHILOSOPHY

As a state university, the Polytechnic University of the Philippines believes that:

 Education is an instrument for the development of the citizenry and for the enhancement
of nation building; and

 That meaningful growth and transformation of the country are best achieved in an
atmosphere of brotherhood, peace, freedom, justice and nationalist-oriented education
imbued with the spirit of humanist internationalism.

TEN PILLARS

Pillar 1: Dynamic, Transformational, and Responsible Leadership


Pillar 2: Responsive and Innovative Curricula and Instruction
Pillar 3: Enabling and Productive Learning Environment
Pillar 4: Holistic Student Development and Engagement
Pillar 5: Empowered Faculty Members and Employees
Pillar 6: Vigorous Research Production and Utilization
Pillar 7: Global Academic Standards and Excellence
Pillar 8: Synergistic, Productive, Strategic Networks and Partnerships
Pillar 9: Active and Sustained Stakeholders’ Engagement
Pillar 10: Sustainable Social Development Programs and Projects

SHARED VALUES AND PRINCIPLES

 Integrity and Accountability


 Nationalism
 Spirituality
 Passion for Learning and Innovation
 Inclusivity
 Respect for Human Rights and The Environment
 Excellence
 Democracy

POLYTECHNIC UNIVERSITY OF THE PHILIPPINES
MARAGONDON BRANCH

GOALS

 Quality and excellent graduates


 Empowered faculty members
 Relevant curricula
 Efficient administration
 Development – oriented researches
 State-of-the-art physical facilities and laboratories
 Profitable income – generating programs
 Innovative instruction
 ICT – driven library
 Strong local and international linkages

PROGRAM OBJECTIVES

The College of Technology aims to achieve the following goals:

1. To offer career-focused programs of study with a well-chosen general education component to enable students to cope with global standards for a meaningful and fruitful life.

2. To inculcate in the students good moral values and work ethics.

3. To ensure teaching-learning efficiency and effectiveness by means of highly qualified and committed faculty members.

4. To encourage faculty members to produce appropriate, affordable, user-friendly instructional materials that conform to national and global standards.

5. To promote a growing output-oriented consciousness in research, production, extension, and community service.

6. To establish national and international linkages for student on-the-job training requirements and funding and/or faculty-improvement assistance.

COMP 20103
OPERATING SYSTEM

COURSE DESCRIPTION

COURSE TITLE : OPERATING SYSTEM
COURSE CODE : COMP 20283
COURSE CREDIT : 3 (2 lecture / 1 laboratory) units
PRE-REQUISITE : COMP 20023

This course provides an introduction to the concepts, theories and components that serve as the
basis for the design of classical and modern operating systems. Topics include disk management,
process and memory management, file management, deadlocks, and system security and
protection. The course discusses and simulates the basic functions of an operating system.

COURSE OBJECTIVES

By the end of this course the student will be able to:


1. Recognize the importance of operating systems.
2. Recognize how applications interact with the operating system, with the latter working as an intermediary program between the machine and the application.
3. Know how the operating system transports application requests to the hardware.
4. Understand how operating systems manage resources such as processors, memory and I/O.
5. Realize the efficiency or deficiency of the different techniques used by some operating systems.

Institutional Learning Outcomes:
1. Creative and Critical Thinking
2. Adeptness in the Responsible Use of Technology
3. Community Engagement
4. High Level of Leadership and Organizational Skills
5. Strong Service Orientation
6. Effective Communication
7. Sense of Nationalism and Global Responsiveness
8. Sense of Personal and Professional Ethics
9. Passion to Life-Long Learning

Program Outcomes:
 Apply knowledge of computing, science, and mathematics appropriate to the discipline.
 Analyze complex problems, and identify and define the computing requirements appropriate to their solution.
 Identify and analyze user needs and take them into account in the selection, creation, evaluation and administration of computer-based systems.
 Design, implement, and evaluate computer-based systems, processes, components, or programs to meet desired needs and requirements under various constraints.
 Integrate IT-based solutions into the user environment effectively.
 Assist in the creation of an effective IT project plan.
 Apply knowledge through the use of current techniques, skills, tools and practices necessary for the IT profession.
 Understand best practices and standards and their applications.
 Function effectively as a member or leader of a development team, recognizing the different roles within a team to accomplish a common goal.
 Communicate effectively with the computing community and with society at large about complex computing activities through logical writing, presentations, and clear instructions.
 Analyze the local and global impact of computing and information technology on individuals, organizations, and society.
 Understand professional, ethical, legal, security and social issues and responsibilities in the utilization of information technology.
 Recognize the need for and engage in planning self-learning and improving performance as a foundation for continuing professional development.

Course Outcomes:
 Explain the more general systems principles that are used in the design of all computer systems.
 Describe the basic concepts of operating systems, including development and achievements, functionalities and objectives, structure and components.
 Explain concepts covered in concurrency control using deadlock and starvation.
 Explain how memory, I/O devices, files, processes and threads are managed, and evaluate the performance of various scheduling algorithms.
 Analyze, solve, simulate and assess how operating systems use different algorithms to perform a function.
 Compare and contrast the algorithms used for processor scheduling, disk scheduling and the different ways of allocating memory to tasks.
 Design and develop a program using a concrete function of the operating system such as process scheduling, disk scheduling or memory management.
 Analyze the tradeoffs inherent in OS design.
 Demystify the interactions between the programs written and the hardware.
 Develop the role of operating systems in a wider context, e.g. extending OS services via system calls.
 Familiarize students with the issues and new trends involved in the design and implementation of modern operating systems.
 Analyze the relationship between the operating system and the hardware environment in which it runs.

COURSE REQUIREMENTS

The course requirements are as follows:

1. Students are encouraged to attend the class sessions (online students) and complete all
the requirements (online and offline students).
2. The course is expected to have a minimum of four (4) quizzes and two (2) major
examinations (Midterm and Final Examination).
3. Other requirements such as written outputs, exercises, assignments and the like will be
given throughout the sessions. These shall be submitted on the due dates set by the
teacher.

Note: Some activities will be rated using the Rubrics.

GRADING SYSTEM

The grading system will determine if the student passed or failed the course. There will be two
grading periods: Midterm and Final Period. Each period has components of: 70% Class Standing
+ 30% Major Examination. Final Grade will be the average of the two periodical grades.

Midterm Period: Class Standing 70% (Quizzes, Activities, Assignments) + Midterm Examination 30%
Final Period: Class Standing 70% (Quizzes, Activities, Project, Assignments) + Final Examination 30%

FINAL GRADE = (Midterm + Finals) / 2
RUBRICS:

Rating scale: Outstanding (5.0-4.5), Very Good (4.0-3.0), Average (2.5-1.5), Poor (1.0)

Completeness
 Outstanding: Complete in all aspects and includes all requirements.
 Very Good: Complete in some aspects and includes most of the requirements.
 Average: Incomplete in many aspects and includes few requirements.
 Poor: Incomplete and does not include requirements.

Analysis and Use of the Entrepreneurial Concepts and Business Tools
 Outstanding: Presents an insightful and in-depth analysis of all data; uses many entrepreneurial concepts and business tools learned in the subject.
 Very Good: Presents an insightful analysis of most of the data; uses some entrepreneurial concepts and business tools learned in the subject.
 Average: Presents shallow analysis of data; uses limited entrepreneurial concepts and business tools learned in the subject.
 Poor: Presents incomplete analysis of data; fails to use entrepreneurial concepts and business tools learned in the subject.

Setting of Recommendations for Future Action Plans
 Outstanding: Presents complete, realistic, and applicable recommendations from the data gathered, and shows how to use them in future action plans.
 Very Good: Presents specific, realistic, and applicable recommendations from the data gathered, and shows how to use them in future action plans.
 Average: Presents some applicable recommendations from the data gathered, and shows how to use them in future action plans.
 Poor: Presents limited, unrealistic recommendations from the data gathered, and fails to show how to use them in future action plans.

Over-all Cohesiveness (Writing and Presentation)
 Outstanding: The paper has sophisticated clarity, conciseness, and correctness, and includes all needed relevant data and analysis.
 Very Good: The paper has clarity, conciseness, and correctness, and includes some needed relevant data and analysis.
 Average: The paper lacks clarity, conciseness, and correctness, and includes limited relevant data and analysis.
 Poor: The paper is not clear and contains serious errors; it fails to include relevant data and analysis.

COURSE GUIDE

First semester Class (18 weeks)


Week 1
Topic: Orientation on the University's vision, mission, goals and objectives
Learning Outcomes: Understand the mission, vision, goals and objectives of the University. Understand all the policies and in-house classroom management of the professor. Acquire the relation of the subject to their course and ultimately
Methodology: Orientation and discussion; review of the syllabus; explanation of learning activities and assessment; explanation of course requirements and methods of evaluating student performance
Resources: PUP Student Handbook; Course Syllabus
Assessment: None

Week 2
Topic: 1. Introduction to Operating Systems (Definition, Functions and Goals of the Operating System; History of the Operating System; Types of Operating Systems; Components of Operating System)
Learning Outcomes:
 Summarize the objectives and functions of modern operating systems.
 Determine the functions of a contemporary operating system with respect to convenience, efficiency and the ability to evolve.
 Compare and contrast networked, client-server, distributed operating systems and single-user operating systems.
 Deduce the tradeoffs inherent in operating system design.
Methodology: Lecture; Listening Team; Question Period; Library Research
Resources: LCD Projector; Laptop Computer; Video Tutorials
Assessment: Recitation; Homework; Written Exam

Weeks 3-4
Topic: 2. Computer System Structure (The Computer System; Computer Boot-Up; Traps and Interrupts; I/O Structure; Storage Structure; Hardware Protection)
Learning Outcomes:
 Distinguish potential threats to operating systems and the security features designed to guard against them.
 Explain the concept of a logical layer.
 Summarize the benefits of building abstract layers in hierarchical fashion.
 Describe how computing resources are used by application software and managed by system software.
 Discuss the advantages and disadvantages of using interrupt processing.
 Explain the use of a device list and driver I/O queues.
 Contrast kernel and user mode in an operating system.
 Describe the value of middleware.
Methodology: Lecture; Group Discussion; Textbook Assignment
Resources: LCD Projector; Laptop Computer; Video Tutorials
Assessment: Recitation; Homework; Written Exam

Weeks 5-6
Topic: 3. Process Management (Process Concept; Process States; Process Creation and Termination; Process Threads; Process Schedulers; Process Scheduling Concepts; Processor Scheduling Algorithms)
Learning Outcomes:
 Discover the different states that a task may pass through and the data structures needed to support the management of many tasks.
 Visualize reasons for using interrupts, dispatching, and context switching.
 Classify the types of processor scheduling, such as short-term, medium-term, long-term, and I/O.
 Discuss the need for preemption and deadline scheduling.
 Compare and contrast the common algorithms used for both preemptive and non-preemptive scheduling of tasks in operating systems, such as priority, performance comparison, and fair-share schemes.
 Detect the difference between processes and threads.
 Compare and contrast static and dynamic approaches to process scheduling.
 Illustrate ways that the logic embodied in scheduling algorithms is applicable to other domains, such as disk I/O, network scheduling, project scheduling, and problems beyond computing.
 Measure the performance of the processor using different scheduling algorithms.
Methodology: Lecture; Simulation; Problem Solving; Group Assignment; Individual Assignment
Resources: LCD Projector; Laptop Computer; Video Tutorials; Calculator; CPU Scheduling Apps
Assessment: Seatwork; Homework; Written Exam; Project; Application

Weeks 7-8
Topic: 4. Storage Management (Disk Scheduling Concepts; Disk Scheduling Algorithms)
Learning Outcomes:
 Explain buffering and describe strategies for implementing it.
 Identify the requirements for failure recovery.
 Identify the relationship between the physical hardware and the virtual devices maintained by the operating system.
 Differentiate the mechanisms used in interfacing a range of devices (including handheld devices, networks, multimedia) to a computer and explain the implications of these for the design of an operating system.
 Describe the advantages and disadvantages of direct memory access and discuss the circumstances in which its use is warranted.
 Implement a simple device driver for a range of possible devices.
 Assess the disk scheduling algorithms used to calculate the total head movement and seek time.
Methodology: Library Research; Lecture; Group Discussion; Student's Self-Assessment; Problem Solving
Resources: LCD Projector; Laptop Computer; Video Tutorials; Calculator; Disk Scheduling Apps
Assessment: Recitation; Seatwork; Homework; Written Exam

Week 9
Midterm Examination

Weeks 10-11
Topic: 5. Memory Management (Memory Management Concepts; Swapping, Overlays and Compaction; Multiprogramming with Fixed and Variable Partitions; Buddy System)
Learning Outcomes:
 Explain memory hierarchy and cost-performance trade-offs.
 Summarize the principles of virtual memory as applied to caching and paging.
 Describe the reason for and use of cache memory.
 Discuss the concept of thrashing, both in terms of the reasons it occurs and the techniques used to recognize and manage the problem.
 Implement the different memory management schemes.
 Validate the trade-offs in terms of memory size and processor speed.
 Formulate how the different memory management schemes behave.
 Defend the different ways of allocating memory to tasks, citing the relative merits of each.
Methodology: Library Research; Individual Assignment; Simulation; Problem Solving; Lecture; Demonstration
Resources: LCD Projector; Laptop Computer; Video Tutorials; Calculator
Assessment: Recitation; Seatwork; Research Work; Homework; Written Exam

Weeks 12-13
Topic: 6. Virtual Memory Management (Virtual Memory Concepts; Paging, Segmentation, and Segmentation with Paging; Page Replacement Algorithms; Page Allocation)
Learning Outcomes:
 Calculate the physical and logical addresses used.
 Evaluate the different task memory management schemes that can be used.
 Experiment on the different algorithms used for page replacement.
 Simulate the behavior of page allocation strategies.
Methodology: Lecture; Simulation; Problem Solving; Individual Assignment; Open Textbook Study
Resources: LCD Projector; Laptop Computer; Video Tutorials; Calculator
Assessment: Seatwork; Homework; Written Exam

Weeks 14-15
Topic: 7. Deadlocks (Deadlock Concept; Conditions for Deadlock; Approaches in Handling Deadlock; Deadlock Prevention and Avoidance; Deadlock Detection and Recovery)
Learning Outcomes:
 Describe how deadlocks can arise in a system, and explain the solutions available.
 Compare and contrast the different ways of handling deadlock.
 Simulate and justify if deadlock will or will not occur.
Methodology: Group Report; Reading Assignment; Class Discussion; Problem Solving; Simulation
Resources: LCD Projector; Laptop Computer; Video Tutorials
Assessment: Recitation; Seatwork; Homework; Written Exam

Weeks 16-17
Topic: 8. File Management (File Concepts and Naming; Logical and Physical View of Files; File Organization and Access; File Attributes and Operations; Directories)
Learning Outcomes:
 Describe the choices to be made in designing file systems.
 Summarize how hardware developments have led to changes in the priorities for the design and the management of file systems.
 Compare and contrast different approaches to file organization, recognizing the strengths and weaknesses of each.
Methodology: Reading Assignment; Library Research; Open Textbook Study
Resources: LCD Projector; Laptop Computer; Video Tutorials
Assessment: Research Work

Week 18
Final Examination

REFERENCES

 OPERATING SYSTEM CONCEPTS, 7th edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS, 4th edition, by ANDREW S. TANENBAUM and HERBERT BOS

TABLE OF CONTENTS

Topic

Introduction
Orientation

Lesson 1 Operating System
Unit 1 Fundamentals
Unit 2 Computer Architecture

Lesson 2 Functions
Unit 1 Processes
Unit 2 Process Synchronization
Unit 3 Deadlocks

Lesson 3 Management
Unit 1 Main Memory Management
Unit 2 File Management
Unit 3 I/O Management

Lesson 1 Operating System

Unit 1 Fundamentals

Overview:

The operating system is considered the most important of all system software. This unit
introduces operating systems and their role in the overall performance of computer systems. An
operating system is software that manages a computer’s hardware. It also provides a basis for
application programs and acts as an intermediary between the computer user and the computer
hardware. An amazing aspect of operating systems is how they vary in accomplishing these
tasks in a wide variety of computing environments. Operating systems are everywhere, from cars
and home appliances that include “Internet of Things” devices, to smart phones, personal
computers, enterprise computers, and cloud computing environments.

Learning Objectives:

After completion, you should be able to:

1. Describe the general organization of a computer system and the role of interrupts.
2. Describe the components in a modern multiprocessor computer system.
3. Illustrate the transition from user mode to kernel mode.
4. Discuss how operating systems are used in various computing environments.
5. Provide examples of free and open-source operating systems.

Course Materials:

What Operating Systems Do

We begin our discussion by looking at the operating system’s role in the overall computer
system. A computer system can be divided roughly into four components: the hardware, the
operating system, the application programs, and a user (Figure 1.1).

The hardware—the central processing unit (CPU), the memory, and the input/output (I/O)
devices—provides the basic computing resources for the system. The application
programs—such as word processors, spreadsheets, compilers, and web browsers—define the
ways in which these resources are used to solve users’ computing problems. The operating
system controls the hardware and coordinates its use among the various application programs
for the various users.

We can also view a computer system as consisting of hardware, software, and data. The
operating system provides the means for proper use of these resources in the operation of the
computer system. An operating system is similar to a government. Like a government, it performs

no useful function by itself. It simply provides an environment within which other programs can
do useful work.

To understand more fully the operating system’s role, we next explore operating systems from
two viewpoints: that of the user and that of the system.

User View

The user’s view of the computer varies according to the interface being used. Many computer
users sit with a laptop or in front of a PC consisting of a monitor, keyboard, and mouse. Such a
system is designed for one user to monopolize its resources. The goal is to maximize the work
(or play) that the user is performing. In this case, the operating system is designed mostly for
ease of use, with some attention paid to performance and security and none paid to resource
utilization—how various hardware and software resources are shared.

Figure 1.1 Abstract view of the components of a computer system.

System View

From the computer’s point of view, the operating system is the program most intimately involved
with the hardware. In this context, we can view an operating system as a resource allocator. A
computer system has many resources that may be required to solve a problem: CPU time,
memory space, storage space, I/O devices, and so on. The operating system acts as the
manager of these resources. Facing numerous and possibly conflicting requests for resources,
the operating system must decide how to allocate them to specific programs and users so that it
can operate the computer system efficiently and fairly.

A slightly different view of an operating system emphasizes the need to control the various I/O
devices and user programs. An operating system is a control program. A control program
manages the execution of user programs to prevent errors and improper use of the computer. It
is especially concerned with the operation and control of I/O devices.

Defining Operating Systems

By now, you can probably see that the term operating system covers many roles and functions.
That is the case, at least in part, because of the myriad designs and uses of computers.
Computers are present within toasters, cars, ships, spacecraft, homes, and businesses. They are
the basis for game machines, cable TV tuners, and industrial control systems.

To explain this diversity, we can turn to the history of computers. Although computers have a
relatively short history, they have evolved rapidly. Computing started as an experiment to
determine what could be done and quickly moved to fixed-purpose systems for military uses,
such as code breaking and trajectory plotting, and governmental uses, such as census
calculation. Those early computers evolved into general-purpose, multifunction mainframes, and
that’s when operating systems were born. In the 1960s, Moore’s Law predicted that the number
of transistors on an integrated circuit would double every 18 months, and that prediction has held
true. Computers gained in functionality and shrank in size, leading to a vast number of uses and
a vast number and variety of operating systems.

• An operating system is software that manages the computer hardware, as well as providing an environment for application programs to run.

• Interrupts are a key way in which hardware interacts with the operating system. A hardware device triggers an interrupt by sending a signal to the CPU to alert the CPU that some event requires attention. The interrupt is managed by the interrupt handler.

• For a computer to do its job of executing programs, the programs must be in main memory, which is the only large storage area that the processor can access directly.

• The main memory is usually a volatile storage device that loses its contents when power is turned off or lost.

• Nonvolatile storage is an extension of main memory and is capable of holding large quantities of data permanently.

• The most common nonvolatile storage device is a hard disk, which can provide storage of both programs and data.

• The wide variety of storage systems in a computer system can be organized in a hierarchy according to speed and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.

• Modern computer architectures are multiprocessor systems in which each CPU contains several computing cores.

• To best utilize the CPU, modern operating systems employ multiprogramming, which allows several jobs to be in memory at the same time, thus ensuring that the CPU always has a job to execute.

• Multitasking is an extension of multiprogramming wherein CPU scheduling algorithms rapidly switch between processes, providing users with a fast response time.

• To prevent user programs from interfering with the proper operation of the system, the system hardware has two modes: user mode and kernel mode.

• Various instructions are privileged and can be executed only in kernel mode. Examples include the instruction to switch to kernel mode, I/O control, timer management, and interrupt management.

• A process is the fundamental unit of work in an operating system. Process management includes creating and deleting processes and providing mechanisms for processes to communicate and synchronize with each other.

• An operating system manages memory by keeping track of what parts of memory are being used and by whom. It is also responsible for dynamically allocating and freeing memory space.

• Storage space is managed by the operating system; this includes providing file systems for representing files and directories and managing space on mass-storage devices.

• Operating systems provide mechanisms for protecting and securing the operating system and users. Protection measures control the access of processes or users to the resources made available by the computer system.

• Virtualization involves abstracting a computer's hardware into several different execution environments.

• Data structures that are used in an operating system include lists, stacks, queues, trees, and maps.

• Computing takes place in a variety of environments, including traditional computing, mobile computing, client-server systems, peer-to-peer systems, cloud computing, and real-time embedded systems.

• Free and open-source operating systems are available in source-code format. Free software is licensed to allow no-cost use, redistribution, and modification. GNU/Linux, FreeBSD, and Solaris are examples of popular open-source systems.

Inserted is a PowerPoint presentation: OPERATING SYSTEM FUNDAMENTAL.pptx
Activities/Assessments:

References:

 OPERATING SYSTEM CONCEPTS, 7th edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS, 4th edition, by ANDREW S. TANENBAUM and HERBERT BOS

Unit 2 Computer Architecture

Overview:

An operating system provides the environment within which programs are executed. Internally,
operating systems vary greatly in their makeup, since they are organized along many different
lines. The design of a new operating system is a major task. It is important that the goals of the
system be well defined before the design begins. These goals form the basis for choices among
various algorithms and strategies.

We can view an operating system from several vantage points. One view focuses on the services
that the system provides; another, on the interface that it makes available to users and
programmers; a third, on its components and their interconnections. In this chapter, we explore
all three aspects of operating systems, showing the viewpoints of users, programmers, and
operating system designers. We consider what services an operating system provides, how they
are provided, how they are debugged, and what the various methodologies are for designing
such systems. Finally, we describe how operating systems are created and how a computer
starts its operating system.

Learning Objectives:

After completion, you should be able to:

1. Identify services provided by an operating system.


2. Illustrate how system calls are used to provide operating system services.
3. Compare and contrast monolithic, layered, microkernel, modular, and hybrid
strategies for designing operating systems.
4. Illustrate the process for booting an operating system.
5. Apply tools for monitoring operating system performance.
6. Design and implement kernel modules for interacting with a Linux kernel.

Course Materials:

Operating systems provide a number of services. At the lowest level, system calls allow a
running program to make requests from the operating system directly. At a higher level, the
command interpreter or shell provides a mechanism for a user to issue a request without writing
a program. Commands may come from files during batch-mode execution or directly from a
terminal when in an interactive or time-shared mode. System programs are provided to satisfy
many common user requests.

The types of requests vary according to level. The system-call level must provide the basic
functions, such as process control and file and device manipulation. Higher-level requests,
satisfied by the command interpreter or system programs, are translated into a sequence of
system calls. System services can be classified into several categories: program control, status
requests, and I/O requests. Program errors can be considered implicit requests for service.
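
To make the system-call level concrete, the short C program below asks the operating system directly for a service through the POSIX write() system call, bypassing any system program or command interpreter. It is a minimal sketch for UNIX-like systems; write() and file descriptor 1 (standard output) are standard POSIX facilities.

```c
#include <unistd.h>     /* write() system-call wrapper */
#include <string.h>     /* strlen() */

int main(void)
{
    const char *msg = "Hello from a direct system call\n";

    /* Ask the kernel to copy the buffer to file descriptor 1 (stdout). */
    ssize_t written = write(1, msg, strlen(msg));

    /* A negative return value means the kernel rejected the request. */
    return (written < 0) ? 1 : 0;
}
```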

Once the system services are defined, the structure of the operating system can be developed.
Various tables are needed to record the information that defines the state of the computer system
and the status of the system's jobs.

The design of a new operating system is a major task. It is important that the goals of the system
be well defined before the design begins. The type of system desired is the foundation for
choices among various algorithms and strategies that will be needed.

Since an operating system is large, modularity is important. Designing a system as a sequence
of layers or using a microkernel is considered a good technique. The virtual-machine concept
takes the layered approach and treats both the kernel of the operating system and the hardware
as though they were hardware. Even other operating systems may be loaded on top of this virtual
machine.

Throughout the entire operating-system design cycle, we must be careful to separate policy
decisions from implementation details (mechanisms). This separation allows maximum flexibility
if policy decisions are to be changed later.

Operating systems are now almost always written in a systems implementation language or in a
higher-level language. This feature improves their implementation, maintenance, and portability.
To create an operating system for a particular machine configuration, we must perform system
generation.

For a computer system to begin running, the CPU must initialize and start executing the
bootstrap program in firmware. The bootstrap can execute the operating system directly if the
operating system is also in the firmware, or it can complete a sequence in which it loads
progressively smarter programs.

• An operating system provides an environment for the execution of programs by providing services to users and programs.

• The three primary approaches for interacting with an operating system are (1) command interpreters, (2) graphical user interfaces, and (3) touchscreen interfaces.

• System calls provide an interface to the services made available by an operating system. Programmers use a system call's application programming interface (API) for accessing system-call services.

• System calls can be divided into six major categories: (1) process control, (2) file management, (3) device management, (4) information maintenance, (5) communications, and (6) protection.

• The standard C library provides the system-call interface for UNIX and Linux systems.

• Operating systems also include a collection of system programs that provide utilities to users.

• A linker combines several relocatable object modules into a single binary executable file. A loader loads the executable file into memory, where it becomes eligible to run on an available CPU.

• There are several reasons why applications are operating-system specific. These include different binary formats for program executables, different instruction sets for different CPUs, and system calls that vary from one operating system to another.

• An operating system is designed with specific goals in mind. These goals ultimately determine the operating system's policies. An operating system implements these policies through specific mechanisms.

• A monolithic operating system has no structure; all functionality is provided in a single, static binary file that runs in a single address space. Although such systems are difficult to modify, their primary benefit is efficiency.

• A layered operating system is divided into a number of discrete layers, where the bottom layer is the hardware interface and the highest layer is the user interface. Although layered software systems have had some success, this approach is generally not ideal for designing operating systems due to performance problems.

• The microkernel approach for designing operating systems uses a minimal kernel; most services run as user-level applications. Communication takes place via message passing.

• A modular approach for designing operating systems provides operating system services through modules that can be loaded and removed during run time. Many contemporary operating systems are constructed as hybrid systems using a combination of a monolithic kernel and modules. (A minimal loadable-module sketch follows this list.)

• A boot loader loads an operating system into memory, performs initialization, and begins system execution.

• The performance of an operating system can be monitored using either counters or tracing. Counters are a collection of system-wide or per-process statistics, while tracing follows the execution of a program through the operating system.
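
Learning objective 6 of this unit mentions kernel modules that interact with a Linux kernel, and the modular design approach summarized above relies on exactly this facility. The sketch below is the conventional minimal loadable module; it assumes a Linux machine with the kernel headers installed and the usual out-of-tree module Makefile (obj-m := hello.o), and it only logs messages when loaded and unloaded.

```c
#include <linux/init.h>    /* module_init(), module_exit() */
#include <linux/module.h>  /* MODULE_LICENSE, core module support */
#include <linux/kernel.h>  /* printk(), KERN_INFO */

/* Called when the module is loaded with insmod. */
static int __init hello_init(void)
{
        printk(KERN_INFO "hello: module loaded\n");
        return 0;               /* 0 means successful initialization */
}

/* Called when the module is removed with rmmod. */
static void __exit hello_exit(void)
{
        printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal illustrative kernel module");
```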

Inserted is a PowerPoint presentation: COMPUTER ARCHITECTURE.pptx

Activities/Assessments:

Answer the following:

References:

 OPERATING SYSTEM CONCEPTS, 7th edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS, 4th edition, by ANDREW S. TANENBAUM and HERBERT BOS

Lesson 2 Functions

Unit 1 Processes

Overview:

Early computers allowed only one program to be executed at a time. This program had complete
control of the system and had access to all the system’s resources. In contrast, contemporary
computer systems allow multiple programs to be loaded into memory and executed concurrently.
This evolution required firmer control and more compartmentalization of the various programs;
and these needs resulted in the notion of a process, which is a program in execution. A process
is the unit of work in a modern computing system.

The more complex the operating system is, the more it is expected to do on behalf of its users.
Although its main concern is the execution of user programs, it also needs to take care of various
system tasks that are best done in user space, rather than within the kernel. A system therefore
consists of a collection of processes, some executing user code, others executing operating
system code. Potentially, all these processes can execute concurrently, with the CPU (or CPUs)
multiplexed among them. In this chapter, you will read about what processes are, how they are
represented in an operating system, and how they work.

Learning Objectives:

After completion, you should be able to:

1. Identify the separate components of a process and illustrate how they are represented
and scheduled in an operating system.
2. Describe how processes are created and terminated in an operating system, including
developing programs using the appropriate system calls that perform these operations.
3. Describe and contrast inter-process communication using shared memory and
message passing.
4. Design programs that use pipes and POSIX shared memory to perform inter-process
communication.
5. Describe client–server communication using sockets and remote procedure calls.
6. Design kernel modules that interact with the Linux operating system.

Course Materials:

A process is a program in execution. As a process executes, it changes state. The state of a
process is defined by that process's current activity. Each process may be in one of the following
states: new, ready, running, waiting, or terminated. Each process is represented in the operating
system by its own process-control block (PCB).
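
The exact layout of a PCB is specific to each operating system (Linux, for example, uses struct task_struct), but a simplified C sketch of the kind of information a PCB records might look like the following; every field name here is illustrative only.

```c
#include <stdio.h>

/* Illustrative only: real PCBs contain many more fields than this sketch. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;              /* process identifier              */
    enum proc_state state;            /* current activity                */
    unsigned long   program_counter;  /* saved CPU context               */
    unsigned long   registers[16];
    int             priority;         /* CPU-scheduling information      */
    void           *page_table;       /* memory-management information   */
    int             open_files[16];   /* accounting and I/O status info  */
    struct pcb     *next;             /* link to the next PCB in a queue */
};

int main(void)
{
    struct pcb p = { .pid = 1, .state = READY, .priority = 10 };
    printf("pid=%d state=%d priority=%d\n", p.pid, p.state, p.priority);
    return 0;
}
```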

A process, when it is not executing, is placed in some waiting queue. There are two major
classes of queues in an operating system: I/0 request queues and the ready queue. The ready
queue contains all the processes that are ready to execute and are waiting for the CPU. Each
process is represented by a PCB, and the PCBs can be linked together to form a ready queue.
Long-term (job) scheduling is the selection of processes that will be allowed to contend for the
CPU. Normally, long-term scheduling is heavily influenced by resource allocation considerations,
especially memory management. Short-term (CPU) scheduling is the selection of one process
from the ready queue.

Operating systems must provide a mechanism for parent processes to create new child
processes. The parent may wait for its children to terminate before proceeding, or the parent and
children may execute concurrently. There are several reasons for allowing concurrent execution:
information sharing, computation speedup, modularity, and convenience.
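
The classic UNIX pattern for this parent-child relationship is sketched below: the parent calls fork() to create the child and wait() to block until the child terminates. fork() and wait() are standard POSIX system calls; the program itself is only a minimal demonstration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>     /* fork(), getpid() */
#include <sys/wait.h>   /* wait() */

int main(void)
{
    pid_t pid = fork();                 /* create a child process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(1);
    } else if (pid == 0) {              /* child executes this branch */
        printf("child:  pid=%d\n", getpid());
        /* A real child would typically call an exec() function here. */
        exit(0);
    } else {                            /* parent executes this branch */
        wait(NULL);                     /* wait for the child to terminate */
        printf("parent: child %d has terminated\n", pid);
    }
    return 0;
}
```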

The processes executing in the operating system may be either independent processes or
cooperating processes. Cooperating processes require an inter-process communication
mechanism to communicate with each other. Principally, communication is achieved through two
schemes: shared memory and message passing. The shared-memory method requires
communicating processes to share some variables. The processes are expected to exchange
information through the use of these shared variables. In a shared-memory system, the
responsibility for providing communication rests with the application programmers; the operating
system needs to provide only the shared memory. The message-passing method allows the
processes to exchange messages. The responsibility for providing communication may rest with
the operating system itself. These two schemes are not mutually exclusive and can be used
simultaneously within a single operating system.
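
As a concrete sketch of message passing between cooperating processes, the program below uses an ordinary POSIX pipe: the parent writes a short message and the child reads it. A corresponding shared-memory sketch would use shm_open() and mmap(); only the pipe version is shown here, and the message text is illustrative.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>     /* pipe(), fork(), read(), write(), close() */
#include <sys/wait.h>   /* wait() */

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                     /* child: the receiver */
        close(fd[1]);                   /* not writing */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    /* parent: the sender */
    close(fd[0]);                       /* not reading */
    const char *msg = "hello via a pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```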

Communication in client-server systems may use (1) sockets, (2) remote procedure calls (RPCs),
or (3) Java's remote method invocation (RMI). A socket is defined as an endpoint for
communication. A connection between a pair of applications consists of a pair of sockets, one at
each end of the communication channel. RPCs are another form of distributed communication.
An RPC occurs when a process (or thread) calls a procedure on a remote application. RMI is the
Java version of RPCs. RMI allows a thread to invoke a method on a remote object just as it
would invoke a method on a local object. The primary distinction between RPCs and RMI is that
in RPCs data are passed to a remote procedure in the form of an ordinary data structure,
whereas RMI allows objects to be passed in remote method calls.
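
A minimal sketch of one endpoint of socket communication is shown below: a C client creates a TCP socket and connects to a server assumed, purely for illustration, to be listening on port 9000 of the local machine. The address and port are hypothetical; the calls are standard POSIX socket calls.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>      /* close() */
#include <arpa/inet.h>   /* sockaddr_in, inet_pton(), htons() */
#include <sys/socket.h>  /* socket(), connect(), send() */

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);   /* one endpoint of the connection */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = { 0 };
    server.sin_family = AF_INET;
    server.sin_port   = htons(9000);              /* hypothetical server port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");                        /* most likely no server is listening */
        close(sock);
        return 1;
    }

    const char *req = "hello server\n";
    send(sock, req, strlen(req), 0);              /* the paired socket receives this */
    close(sock);
    return 0;
}
```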

A thread is a flow of control within a process. A multithreaded process contains several different
flows of control within the same address space. The benefits of multithreading include increased
responsiveness to the user, resource sharing within the process, economy, and the ability to take
advantage of multiprocessor architectures.

User-level threads are threads that are visible to the programmer and are unknown to the kernel.
The operating-system kernel supports and manages kernel-level threads. In general, user-level
threads are faster to create and manage than are kernel threads, as no intervention from the
kernel is required. Three different types of models relate user and kernel threads: The many-to-
one model maps many user threads to a single kernel thread. The one-to-one model maps each
user thread to a corresponding kernel thread. The many-to-many model multiplexes many user
threads to a smaller or equal number of kernel threads.

Most modern operating systems provide kernel support for threads; among these are Windows
98, NT, 2000, and XP, as well as Solaris and Linux.

Thread libraries provide the application programmer with an API for creating and managing
threads. Three primary thread libraries are in common use: POSIX Pthreads, Win32 threads for
Windows systems, and Java threads.
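
As a brief sketch of the Pthreads library named above, the program below creates two threads that run the same function and then joins them. pthread_create() and pthread_join() are standard POSIX calls; compile with the -pthread option.

```c
#include <stdio.h>
#include <pthread.h>    /* POSIX threads: pthread_create(), pthread_join() */

/* Work done by each thread: print the id it was given. */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    /* Two flows of control within the same address space. */
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);

    pthread_join(t1, NULL);            /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}
```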

Multithreaded programs introduce many challenges for the programmer, including the semantics
of the fork() and exec() system calls. Other issues include thread cancellation, signal handling,
and thread-specific data.

CPU scheduling is the task of selecting a waiting process from the ready queue and allocating
the CPU to it. The CPU is allocated to the selected process by the dispatcher.

First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause
short processes to wait for very long processes. Shortest job-first (SJF) scheduling is provably
optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult,
however, because predicting the length of the next CPU burst is difficult. The SJF algorithm is a
special case of the general priority scheduling algorithm, which simply allocates the CPU to the
highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a
technique to prevent starvation.

Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR
scheduling allocates the CPU to the first process in the ready queue for q time units, where q is
the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted,
and the process is put at the tail of the ready queue. The major problem is the selection of the
time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the
quantum is too small, scheduling overhead in the form of context-switch time becomes
excessive.
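
To make the waiting-time comparison concrete, the short program below computes the average waiting time for one set of CPU bursts served in arrival (FCFS) order and for the same bursts served shortest-first (SJF). The burst values are made up for illustration, and all processes are assumed to arrive at time 0.

```c
#include <stdio.h>
#include <stdlib.h>

#define N 4

/* Average waiting time when bursts are served in the given order:
 * each process waits for the sum of the bursts ahead of it. */
static double avg_wait(const int burst[], int n)
{
    int wait = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += wait;          /* waiting time of process i */
        wait  += burst[i];
    }
    return (double)total / n;
}

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int fcfs[N] = { 24, 3, 3, 7 };          /* illustrative burst times */
    int sjf[N]  = { 24, 3, 3, 7 };

    qsort(sjf, N, sizeof(int), cmp_int);    /* SJF = serve the shortest burst first */

    printf("FCFS average waiting time: %.2f\n", avg_wait(fcfs, N));
    printf("SJF  average waiting time: %.2f\n", avg_wait(sjf, N));
    return 0;
}
```

For these bursts the FCFS order averages 20.25 time units of waiting, while the SJF order averages 5.50, which illustrates why SJF minimizes average waiting time.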

The FCFS algorithm is nonpreemptive; the RR algorithm is preemptive. The SJF and priority
algorithms may be either preemptive or nonpreemptive. Multilevel queue algorithms allow
different algorithms to be used for different classes of processes. The most common model
includes a foreground interactive queue that uses RR scheduling and a background batch queue
that uses FCFS scheduling. Multilevel feedback queues allow processes to move from one
queue to another.

Many contemporary computer systems support multiple processors and allow each processor to
schedule itself independently. Typically, each processor maintains its own private queue of
processes (or threads), all of which are available to run. Issues related to multiprocessor
scheduling include processor affinity and load balancing.

Operating systems supporting threads at the kernel level must schedule threads, not processes,
for execution. This is the case with Solaris and Windows XP. Both of these systems schedule
threads using preemptive, priority-based scheduling algorithms, including support for real-time
threads. The Linux process scheduler uses a priority-based algorithm with real-time support as
well. The scheduling algorithms for these three operating systems typically favor interactive over
batch and CPU-bound processes.

The wide variety of scheduling algorithms demands that we have methods to select among
algorithms. Analytic methods use mathematical analysis to determine the performance of an
algorithm. Simulation methods determine performance by imitating the scheduling algorithm on a
"representative" sample of processes and computing the resulting performance. However,
simulation can at best provide an approximation of actual system performance; the only reliable
technique for evaluating a scheduling algorithm is to implement the algorithm on an actual
system and monitor its performance in a "real-world" environment.

Activities/Assessments:

References:

 OPERATING SYSTEM CONCEPTS


by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

Unit 2 Process Synchronization

Overview:

A system typically consists of several (perhaps hundreds or even thousands) of threads running
either concurrently or in parallel. Threads often share user data. Meanwhile, the operating
system continuously updates various data structures to support multiple threads. A race condition
exists when access to shared data is not controlled, possibly resulting in corrupt data values.

Process synchronization involves using tools that control access to shared data to avoid race
conditions. These tools must be used carefully, as their incorrect use can result in poor system
performance, including deadlock.

Learning Objectives:

After completion, you should be able to:

1. Describe the critical-section problem and illustrate a race condition.


2. Illustrate hardware solutions to the critical-section problem using memory barriers,
compare-and-swap operations, and atomic variables.
3. Demonstrate how mutex locks, semaphores, monitors, and condition variables can be
used to solve the critical-section problem.
4. Evaluate tools that solve the critical-section problem in low-, moderate-, and high-
contention scenarios.

Course Materials:

Given a collection of cooperating sequential processes that share data, mutual exclusion must be
provided. One solution is to ensure that a critical section of code is in use by only one process or
thread at a time. Different algorithms exist for solving the critical-section problem, with the
assumption that only storage interlock is available.

The main disadvantage of these user-coded solutions is that they all require busy waiting.
Semaphores overcome this difficulty. Semaphores can be used to solve various synchronization
problems and can be implemented efficiently, especially if hardware support for atomic
operations is available.
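
A minimal sketch of mutual exclusion using a Pthreads mutex is shown below: two threads repeatedly increment a shared counter, and the lock ensures that only one thread is inside the critical section at a time. The counter, loop count, and thread count are illustrative.

```c
#include <stdio.h>
#include <pthread.h>

static long counter = 0;                          /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                /* enter critical section */
        counter++;                                /* only one thread at a time here */
        pthread_mutex_unlock(&lock);              /* exit critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* Without the mutex, lost updates would make this total unpredictable. */
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```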

Various synchronization problems (such as the bounded-buffer problem, the readers-writers
problem, and the dining-philosophers problem) are important mainly because they are examples
of a large class of concurrency-control problems. These problems are used to test nearly every
newly proposed synchronization scheme.

The operating system must provide the means to guard against timing errors. Several language
constructs have been proposed to deal with these problems. Monitors provide the
synchronization mechanism for sharing abstract data types. A condition variable provides a
method by which a monitor procedure can block its execution until it is signaled to continue.

26
SUBJECT: COMP 202833 – OPERATING SYSTEM
PREPARED BY: VIRGILIO R. CUAJUNCO, JR., MEM
Operating systems also provide support for synchronization. For example, Solaris, Windows XP,
and Linux provide mechanisms such as semaphores, mutexes, spinlocks, and condition
variables to control access to shared data. The Pthreads API provides support for mutexes and
condition variables.
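
As a sketch of the condition-variable mechanism mentioned above, the fragment below shows the usual Pthreads wait/signal pattern: one thread blocks until a shared flag becomes true, and the main thread sets the flag and signals. The flag is illustrative; pthread_cond_wait() and pthread_cond_signal() are the standard calls.

```c
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t m    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static int ready = 0;                        /* shared condition (illustrative) */

static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&m);
    while (!ready)                           /* re-check: guards against spurious wakeups */
        pthread_cond_wait(&cond, &m);        /* atomically releases m and blocks */
    printf("waiter: condition is now true\n");
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);

    pthread_mutex_lock(&m);
    ready = 1;                               /* make the condition true ... */
    pthread_cond_signal(&cond);              /* ... and wake the blocked thread */
    pthread_mutex_unlock(&m);

    pthread_join(t, NULL);
    return 0;
}
```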

A transaction is a program unit that must be executed atomically; that is, either all the operations
associated with it are executed to completion, or none are performed. To ensure atomicity
despite system failure, we can use a write-ahead log. All updates are recorded on the log, which
is kept in stable storage. If a system crash occurs, the information in the log is used in restoring
the state of the updated data items, which is accomplished by use of the undo and redo
operations. To reduce the overhead in searching the log after a system failure has occurred, we
can use a checkpoint scheme.

To ensure serializability when the execution of several transactions overlaps, we must use a
concurrency-control scheme. Various concurrency-control schemes ensure serializability by
delaying an operation or aborting the transaction that issued the operation. The most common
ones are locking protocols and timestamp ordering schemes.

Inserted is a PowerPoint presentation: PROCESS SYNCHRONIZATION.

Activities/Assessments:

References:

 OPERATING SYSTEM CONCEPTS, 7th edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition, by ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS, 4th edition, by ANDREW S. TANENBAUM and HERBERT BOS

Unit 3 Deadlocks

Overview:

In a multiprogramming environment, several threads may compete for a finite number of
resources. A thread requests resources; if the resources are not available at that time, the thread
enters a waiting state. Sometimes, a waiting thread can never again change state, because the
resources it has requested are held by other waiting threads. This situation is called a deadlock.
We discussed this issue briefly as a form of liveness failure. There, we defined deadlock as a
situation in which every process in a set of processes is waiting for an event that can be caused
only by another process in the set.

Perhaps the best illustration of a deadlock can be drawn from a law passed by the Kansas
legislature early in the 20th century. It said, in part: “When two trains approach each other at a
crossing, both shall come to a full stop and neither shall start up again until the other has gone.”

In this chapter, we describe methods that application developers as well as operating-system
programmers can use to prevent or deal with deadlocks. Although some applications can identify
programs that may deadlock, operating systems typically do not provide deadlock-prevention
facilities, and it remains the responsibility of programmers to ensure that they design deadlock-
free programs. Deadlock problems—as well as other liveness failures—are becoming more
challenging as demand continues for increased concurrency and parallelism on multicore
systems.

Learning Objectives:

After completion, you should be able to:

1. Illustrate how deadlock can occur when mutex locks are used.
2. Define the four necessary conditions that characterize deadlock.
3. Identify a deadlock situation in a resource allocation graph.
4. Evaluate the four different approaches for preventing deadlocks.
5. Apply the banker’s algorithm for deadlock avoidance.
6. Apply the deadlock detection algorithm.
7. Evaluate approaches for recovering from deadlock.

Course Materials:

In concurrent computing, a deadlock is a state in which each member of a group is waiting for
another member, including itself, to take action, such as sending a message or more commonly
releasing a lock. Deadlock is a common problem in multiprocessing systems, parallel computing,
and distributed systems, where software and hardware locks are used to arbitrate shared
resources and implement process synchronization.

In an operating system, a deadlock occurs when a process or thread enters a waiting state
because a requested system resource is held by another waiting process, which in turn is waiting
for another resource held by another waiting process. If a process is unable to change its state
indefinitely because the resources requested by it are being used by another waiting process,
then the system is said to be in a deadlock.

In a communications system, deadlocks occur mainly due to lost or corrupt signals rather than
resource contention.

A deadlock state occurs when two or more processes are waiting indefinitely for an event that
can be caused only by one of the waiting processes. There are three principal methods for
dealing with deadlocks:
 Use some protocol to prevent or avoid deadlocks, ensuring that the system will never
enter a deadlock state.
 Allow the system to enter a deadlock state, detect it, and then recover.
 Ignore the problem altogether and pretend that deadlocks never occur in the system.

The third solution is the one used by most operating systems, including UNIX and Windows.

Necessary conditions
A deadlock situation on a resource can arise if and only if all of the following conditions
hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-shareable mode.
Otherwise, the processes would not be prevented from using the resource when
necessary. Only one process can use the resource at any given instant of time.
2. Hold and wait or resource holding: a process is currently holding at least one resource
and requesting additional resources which are being held by other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it.
4. Circular wait: each process must be waiting for a resource which is being held by another
process, which in turn is waiting for the first process to release the resource. In general,
there is a set of waiting processes, P = {P1, P2, …, PN}, such that P1 is waiting for a
resource held by P2, P2 is waiting for a resource held by P3 and so on until PN is waiting
for a resource held by P1.

These four conditions are known as the Coffman conditions from their first description in a 1971
article by Edward G. Coffman, Jr.

While these conditions are sufficient to produce a deadlock on single-instance resource systems,
they only indicate the possibility of deadlock on systems having multiple instances of resources.

A deadlock can occur only if four necessary conditions hold simultaneously in the system: mutual
exclusion, hold and wait, no preemption, and circular wait. To prevent deadlocks, we can ensure
that at least one of the necessary conditions never holds.
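
As an illustration of attacking one condition, the sketch below (an assumption-laden example, not
a prescribed mechanism) breaks circular wait by always acquiring a pair of mutexes in one fixed
global order, so no two threads can ever hold them in opposite orders.

/* Sketch: prevent circular wait by imposing a total order on lock
 * acquisition (here, by the locks' addresses).  Illustrative only. */
#include <pthread.h>
#include <stdint.h>

static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if ((uintptr_t)a < (uintptr_t)b) {   /* canonical order: lower address first */
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}

int main(void) {
    static pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;
    lock_pair(&m1, &m2);                 /* every caller locks in the same order */
    /* critical section over both resources */
    unlock_pair(&m1, &m2);
    return 0;
}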

A method for avoiding deadlocks that is less stringent than the prevention algorithms requires that
the operating system have a priori information on how each process will utilize system resources.
The banker's algorithm, for example, requires a priori information about the maximum number of each
resource class that may be requested by each process. Using this information, we can define a
deadlock-avoidance algorithm.
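
The heart of the banker's algorithm is a safety test that asks whether some ordering of the
processes lets every one of them finish. The sketch below uses small, made-up Allocation, Need,
and Available tables purely for illustration.

/* Safety test at the core of the banker's algorithm.  The matrices are
 * hypothetical; Need[i][j] = Max[i][j] - Allocation[i][j]. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define P 5   /* processes        */
#define R 3   /* resource classes */

int Allocation[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int Need[P][R]       = {{7,4,3},{1,2,2},{6,0,0},{0,1,1},{4,3,1}};
int Available[R]     = {3,3,2};

bool is_safe(void) {
    int  work[R];
    bool finish[P] = {false};
    memcpy(work, Available, sizeof work);

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (Need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {                        /* pretend Pi runs to completion  */
                for (int j = 0; j < R; j++)
                    work[j] += Allocation[i][j];  /* ... and releases what it holds */
                finish[i] = true;
                progress  = true;
                done++;
            }
        }
        if (!progress) return false;              /* nobody can finish: unsafe */
    }
    return true;                                  /* a safe sequence exists */
}

int main(void) {
    printf("state is %s\n", is_safe() ? "safe" : "unsafe");
    return 0;
}

A request is granted only if pretending to grant it still leaves the system in a safe state by this
test; otherwise the requesting process must wait.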

If a system does not employ a protocol to ensure that deadlocks will never occur, then a
detection-and-recovery scheme must be employed. A deadlock detection algorithm must be
invoked to determine whether a deadlock has occurred. If a deadlock is detected, the system
must recover either by terminating some of the deadlocked processes or by preempting
resources from some of the deadlocked processes.
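
For resource classes with a single instance, detection reduces to looking for a cycle in a wait-for
graph. The sketch below runs a depth-first search over a small, made-up graph; a back edge
means a cycle and hence a deadlock.

/* Sketch: deadlock detection for single-instance resources via DFS cycle
 * search in the wait-for graph.  The example graph is made up. */
#include <stdbool.h>
#include <stdio.h>

#define N 4                       /* number of processes */

/* waits_for[i][j] != 0 means Pi is waiting for a resource held by Pj. */
bool waits_for[N][N] = {
    {0,1,0,0},                    /* P0 -> P1                 */
    {0,0,1,0},                    /* P1 -> P2                 */
    {1,0,0,0},                    /* P2 -> P0 closes a cycle  */
    {0,0,0,0},
};

static bool has_cycle(int u, bool visited[], bool on_stack[]) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!waits_for[u][v]) continue;
        if (on_stack[v]) return true;                       /* back edge: cycle */
        if (!visited[v] && has_cycle(v, visited, on_stack)) return true;
    }
    on_stack[u] = false;
    return false;
}

int main(void) {
    bool visited[N] = {false}, on_stack[N] = {false};
    for (int i = 0; i < N; i++)
        if (!visited[i] && has_cycle(i, visited, on_stack)) {
            printf("deadlock detected\n");
            return 0;
        }
    printf("no deadlock\n");
    return 0;
}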

Where preemption is used to deal with deadlocks, three issues must be addressed: selecting a
victim, rollback, and starvation. In a system that selects victims for rollback primarily on the basis
of cost factors, starvation may occur: the same process may be picked as a victim repeatedly and
never complete its designated task.

Finally, researchers have argued that none of the basic approaches alone is appropriate for the
entire spectrum of resource-allocation problems in operating systems. The basic approaches can
be combined, however, allowing us to select an optimal approach for each class of resources in a
system.

Activities/Assessments:

Answer the following:


1.

2.

3.

References:

 OPERATING SYSTEM CONCEPTS, 7th edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS 4th edition


By ANDREW S. TANENBAUM and HERBERT BOS

Lesson 3 Memory, File, and I/O Management

Unit 1 Main Memory Management

Overview:

The main purpose of a computer system is to execute programs. These programs, together with
the data they access, must be at least partially in main memory during execution.

Modern computer systems maintain several processes in memory during system execution.
Many memory-management schemes exist, reflecting various approaches, and the effectiveness
of each algorithm varies with the situation. Selection of a memory-management scheme for a
system depends on many factors, especially on the system’s hardware design. Most algorithms
require some form of hardware support.

The memory management algorithms vary from a primitive bare-machine approach to a strategy
that uses paging. Each approach has its own advantages and disadvantages. Selection of a
memory-management method for a specific system depends on many factors, especially on the
hardware design of the system. As we shall see, most algorithms require hardware support,
leading many systems to have closely integrated hardware and operating-system memory
management.

Learning Objectives:

After completion, you should be able to:

1. Explain the difference between a logical and a physical address and the role of the
memory management unit (MMU) in translating addresses.
2. Apply first-, best-, and worst-fit strategies for allocating memory contiguously.
3. Explain the distinction between internal and external fragmentation.
4. Translate logical to physical addresses in a paging system that includes a translation
look-aside buffer (TLB).
5. Describe hierarchical paging, hashed paging, and inverted page tables.

Course Materials:

Memory-management algorithms for multiprogrammed operating systems range from the
simple single-user system approach to paged segmentation. The most important
determinant of the method used in a particular system is the hardware provided. Every
memory address generated by the CPU must be checked for legality and possibly
mapped to a physical address. The checking cannot be implemented (efficiently) in
software. Hence, we are constrained by the hardware available.
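
As a concrete (and deliberately simplified) picture of that hardware check, the sketch below shows
how a base and limit register pair validates a logical address and relocates it; the register values
are illustrative.

/* Sketch: legality check and relocation with a base/limit register pair.
 * The register values are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t base;    /* start of the partition in physical memory */
    uint32_t limit;   /* size of the partition in bytes            */
} mmu_regs;

/* Returns the physical address, or -1 to stand in for an addressing trap. */
long translate(mmu_regs r, uint32_t logical) {
    if (logical >= r.limit)          /* every CPU-generated address is checked */
        return -1;                   /* out of range: trap to the OS           */
    return (long)r.base + logical;   /* in range: relocate by the base         */
}

int main(void) {
    mmu_regs r = { .base = 300040, .limit = 120900 };
    printf("%ld\n", translate(r, 100));      /* 300140               */
    printf("%ld\n", translate(r, 130000));   /* -1: beyond the limit */
    return 0;
}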

The various memory-management algorithms (contiguous allocation, paging,
segmentation, and combinations of paging and segmentation) differ in many aspects. In
comparing different memory-management strategies, we use the following
considerations:

 Hardware support. A simple base register or a base-limit register pair is
sufficient for the single- and multiple-partition schemes, whereas paging and
segmentation need mapping tables to define the address map.

 Performance. As the memory-management algorithm becomes more complex,
the time required to map a logical address to a physical address increases. For
the simple systems, we need only compare or add to the logical address;
these operations are fast. Paging and segmentation can be as fast if the
mapping table is implemented in fast registers. If the table is in memory,
however, user memory accesses can be degraded substantially. A TLB can
reduce the performance degradation to an acceptable level.
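
The sketch below puts those pieces together for paging: a logical address is split into a page
number and an offset, a small TLB is consulted first, and the in-memory page table is walked only
on a miss. The page size, table contents, and TLB size are assumptions for illustration.

/* Sketch: paged address translation with a tiny TLB in front of the page
 * table.  All sizes and table contents are made up. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   4096u         /* 4 KB pages: offset is the low 12 bits */
#define TLB_ENTRIES 4

unsigned page_table[8] = {5, 9, 1, 7, 0, 3, 4, 6};    /* page -> frame */

typedef struct { unsigned page, frame; int valid; } tlb_entry;
tlb_entry tlb[TLB_ENTRIES];
unsigned next_victim;             /* simple FIFO replacement inside the TLB */

uint32_t translate(uint32_t logical) {
    unsigned page   = logical / PAGE_SIZE;
    unsigned offset = logical % PAGE_SIZE;

    for (int i = 0; i < TLB_ENTRIES; i++)             /* TLB hit: fast path   */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame * PAGE_SIZE + offset;

    unsigned frame = page_table[page];                /* miss: walk the table */
    tlb[next_victim] = (tlb_entry){ page, frame, 1 };
    next_victim = (next_victim + 1) % TLB_ENTRIES;
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    printf("0x%x\n", (unsigned)translate(0x2123));  /* page 2 -> frame 1, offset 0x123 */
    printf("0x%x\n", (unsigned)translate(0x2123));  /* same page again: TLB hit        */
    return 0;
}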

• Virtual memory abstracts physical memory into an extremely large uniform array of storage.

• The benefits of virtual memory include the following: (1) a program can be larger than
physical memory, (2) a program does not need to be entirely in memory, (3) processes can
share memory, and (4) processes can be created more efficiently.

• Demand paging is a technique whereby pages are loaded only when they are demanded
during program execution. Pages that are never demanded are thus never loaded into
memory.

• A page fault occurs when a page that is currently not in memory is accessed. The page
must be brought from the backing store into an available page frame in memory.

• Copy-on-write allows a child process to share the same address space as its parent. If
either the child or the parent process writes (modifies) a page, a copy of the page is made.

• When available memory runs low, a page-replacement algorithm selects an existing page in
memory to replace with a new page. Page replacement algorithms include FIFO, optimal,
and LRU. Pure LRU algorithms are impractical to implement, and most systems instead use
LRU-approximation algorithms.
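
The sketch below counts page faults for a short reference string under FIFO replacement with
three frames; the reference string and frame count are illustrative.

/* Sketch: FIFO page replacement, counting faults for one reference string. */
#include <stdbool.h>
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[]  = {7,0,1,2,0,3,0,4,2,3,0,3,2};
    int nrefs   = sizeof refs / sizeof refs[0];
    int frames[FRAMES];
    int used = 0, oldest = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        bool hit = false;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (hit) continue;

        faults++;
        if (used < FRAMES) {
            frames[used++] = refs[i];        /* a free frame is still available */
        } else {
            frames[oldest] = refs[i];        /* evict the oldest resident page  */
            oldest = (oldest + 1) % FRAMES;
        }
    }
    printf("%d page faults\n", faults);      /* 10 faults for this string */
    return 0;
}

Changing only the eviction rule to "evict the page unused for the longest time" turns the same
loop into LRU, which is why the algorithms are usually compared on the same reference strings.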

• Global page-replacement algorithms select a page from any process in the system for
replacement, while local page-replacement algorithms select a page from the faulting
process.

• Thrashing occurs when a system spends more time paging than executing.

• A locality represents a set of pages that are actively used together. As a process executes,
it moves from locality to locality. A working set is based on locality and is defined as the set
of pages currently in use by a process.

• Memory compression is a memory-management technique that compresses a number of
pages into a single page. Compressed memory is an alternative to paging and is used on
mobile systems that do not support paging.

• Kernel memory is allocated differently than user-mode processes; it is allocated in
contiguous chunks of varying sizes. Two common techniques for allocating kernel memory
are (1) the buddy system and (2) slab allocation.

• TLB reach refers to the amount of memory accessible from the TLB and is equal to the
number of entries in the TLB multiplied by the page size. One technique for increasing TLB
reach is to increase the size of pages.
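
For instance (figures assumed purely for illustration), a TLB with 64 entries and 4 KB pages has a
reach of 64 × 4 KB = 256 KB; doubling the page size to 8 KB doubles the reach to 512 KB without
adding TLB entries.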

• Linux, Windows, and Solaris manage virtual memory similarly, using demand paging and
copy-on-write, among other features. Each system also uses a variation of LRU
approximation known as the clock algorithm.

Activities/Assessments:

Answer the following

References:

 OPERATING SYSTEM CONCEPTS, 7th edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS 4th edition


By ANDREW S. TANENBAUM and HERBERT BOS

Unit 2 File Management

Overview:

The file system provides the mechanism for on-line storage and access to file contents, including
data and programs. The file system resides permanently on secondary storage, which is
designed to hold a large amount of data permanently. This chapter is primarily concerned with
issues surrounding file storage and access on the most common secondary-storage medium, the
disk. We explore ways to structure file use, to allocate disk space, to recover freed space, to track
the locations of data, and to interface other parts of the operating system to secondary storage.
Performance issues are considered throughout the chapter.

Learning Objectives:

After completion, you should be able to:

1. To describe the details of implementing local file systems and directory structures.
2. To describe the implementation of remote file systems.
3. To discuss block allocation and free-block algorithms and trade-off.

Course Materials:

Disks provide the bulk of secondary storage on which a file system is maintained. They have two
characteristics that make them a convenient medium for storing multiple files:
1. A disk can be rewritten in place; it is possible to read a block from the disk, modify the
block, and write it back into the same place.
2. A disk can access directly any given block of information it contains. Thus, it is simple
to access any file either sequentially or randomly, and switching from one file to another requires
only moving the read-write heads and waiting for the disk to rotate.

The file system resides permanently on secondary storage, which is designed to hold a large
amount of data permanently. The most common secondary-storage medium is the disk.

Physical disks may be segmented into partitions to control media use and to allow multiple,
possibly varying, file systems on a single spindle. These file systems are mounted onto a logical
file system architecture to make them available for use. File systems are often implemented in a
layered or modular structure. The lower levels deal with the physical properties of storage
devices. Upper levels deal with symbolic file names and logical properties of files. Intermediate
levels map the logical file concepts into physical device properties.

Any file-system type can have different structures and algorithms. A VFS layer allows the upper
layers to deal with each file-system type uniformly. Even remote file systems can be integrated
into the system's directory structure and acted on by standard system calls via the VFS interface.
The various files can be allocated space on the disk in three ways: through contiguous, linked, or
indexed allocation. Contiguous allocation can suffer from external fragmentation. Direct access is
very inefficient with linked allocation. Indexed allocation may require substantial overhead for its
index block. These algorithms can be optimized in many ways. Contiguous space can be
enlarged through extents to increase flexibility and to decrease external fragmentation. Indexed
allocation can be done in clusters of multiple blocks to increase throughput and to reduce the
number of index entries needed. Indexing in large clusters is similar to contiguous allocation with
extents.
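
The sketch below (with a made-up table and a hypothetical file) follows a file's chain of blocks
through a FAT; because every "next block" pointer lives in the table itself, the chain can be
followed without reading the data blocks, which is the property asked about in the first activity
below.

/* Sketch: walking one file's block chain through a FAT.  Table contents,
 * starting block, and the end-of-chain marker are all illustrative. */
#include <stdio.h>

#define FAT_EOF (-1)              /* end-of-chain marker */
#define NBLOCKS 32

int fat[NBLOCKS];                 /* fat[b] = block that follows block b */

int main(void) {
    /* Hypothetical file: its directory entry says it starts at block 9,
     * and its data occupies blocks 9 -> 16 -> 1 -> 25. */
    fat[9] = 16;  fat[16] = 1;  fat[1] = 25;  fat[25] = FAT_EOF;

    printf("blocks of the file:");
    for (int b = 9; b != FAT_EOF; b = fat[b])    /* follow the chain */
        printf(" %d", b);
    printf("\n");
    return 0;
}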

Free-space allocation methods also influence the efficiency of disk-space use, the performance
of the file system, and the reliability of secondary storage. The methods used include bit vectors
and linked lists. Optimizations include grouping, counting, and the FAT, which places the linked
list in one contiguous area.
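
A bit-vector sketch is shown below; it scans for the first free block, assuming one bit per block
with a set bit meaning "free" (the map contents are made up).

/* Sketch: first-free-block search in a free-space bit map.  One bit per
 * block; a set bit means the block is free.  Contents are made up. */
#include <stdint.h>
#include <stdio.h>

#define NBLOCKS 64

uint8_t bitmap[NBLOCKS / 8] = { 0x00, 0x00, 0x30, 0xFF, 0, 0, 0, 0 };

/* Returns the number of the first free block, or -1 if none is free. */
int first_free_block(void) {
    for (int i = 0; i < NBLOCKS / 8; i++) {
        if (bitmap[i] == 0) continue;           /* skip fully allocated bytes  */
        for (int bit = 0; bit < 8; bit++)
            if (bitmap[i] & (1u << bit))
                return i * 8 + bit;             /* block number of the set bit */
    }
    return -1;
}

int main(void) {
    printf("first free block: %d\n", first_free_block());   /* 20 */
    return 0;
}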

Directory-management routines must consider efficiency, performance, and reliability. A hash
table is a commonly used method as it is fast and efficient. Unfortunately, damage to the table or
a system crash can result in inconsistency between the directory information and the disk's
contents. A consistency checker can be used to repair the damage. Operating-system backup
tools allow disk data to be copied to tape, enabling the user to recover from data or even disk
loss due to hardware failure, operating system bug, or user error.

Network file systems, such as NFS, use client-server methodology to allow users to access files
and directories from remote machines as if they were on local file systems. System calls on the
client are translated into network protocols and retranslated into file-system operations on the
server. Networking and multiple-client access create challenges in the areas of data consistency
and performance.

Due to the fundamental role that file systems play in system operation, their performance and
reliability are crucial. Techniques such as log structures and caching help improve performance,
while log structures and RAID improve reliability. The WAFL file system is an example of
optimization of performance to match a specific I/O load.

Activities/Assessments:

1. What are the advantages of the variant of linked allocation that uses a FAT to chain
together the blocks of a file?

2. Discuss how performance optimizations for file systems might result in difficulties in
maintaining the consistency of the systems in the event of computer crashes.

References:

 OPERATING SYSTEM CONCEPTS, 7th edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 3rd edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS 4th edition


By ANDREW S. TANENBAUM and HERBERT BOS
Unit 3 I/O Management

Overview:

The two main jobs of a computer are I/O and processing. In many cases, the main job is I/O, and
the processing is merely incidental. For instance, when we browse a web page or edit a file, our
immediate interest is to read or enter some information, not to compute an answer.

The role of the operating system in computer I/O is to manage and control I/O operations and I/O
devices. Although related topics appear in other chapters, here we bring together the pieces to
paint a complete picture of I/O. First, we describe the basics of I/O hardware, because the nature
of the hardware interface places requirements on the internal facilities of the operating system.
Next, we discuss the I/O services provided by the operating system and the embodiment of these
services in the application I/O interface. Then, we explain how the operating system bridges the
gap between the hardware interface and the application interface. We also discuss the UNIX
System V STREAMS mechanism, which enables an application to assemble pipelines of driver
code dynamically. Finally, we discuss the performance aspects of I/O and the principles of
operating-system design that improve I/O performance.

Learning Objectives:

After completion, you should be able to:

1. Explore the structure of an operating system's I/O subsystem.
2. Discuss the principles of I/O hardware and its complexity.
3. Provide details of the performance aspects of I/O hardware and software.

Course Materials:

The control of devices connected to the computer is a major concern of operating-system
designers. Because I/O devices vary so widely in their function and speed (consider a mouse, a
hard disk, and a CD-ROM jukebox), varied methods are needed to control them. These methods
form the I/O subsystem of the kernel, which separates the rest of the kernel from the complexities
of managing I/O devices.

The basic hardware elements involved in I/O are buses, device controllers, and the devices
themselves. The work of moving data between devices and main memory is performed by the
CPU as programmed I/O or is offloaded to a DMA controller. The kernel module that controls a
device is a device driver. The system-call interface provided to applications is designed to handle
several basic categories of hardware, including block devices, character devices, memory-mapped
files, network sockets, and programmed interval timers. The system calls usually block the
process that issues them, but non-blocking and asynchronous calls are used by the kernel itself
and by applications that must not sleep while waiting for an I/O operation to complete.
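
The difference between a blocking and a non-blocking call can be seen with a few POSIX calls;
the sketch below (illustrative only) switches standard input to non-blocking mode so that read()
returns immediately instead of putting the process to sleep when no data is available.

/* Sketch: non-blocking I/O with fcntl() and read().  POSIX; illustrative. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);  /* request non-blocking mode */

    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n >= 0)
        printf("read %zd bytes\n", n);
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
        printf("no data ready; the call returned instead of blocking\n");  /* do other work here */
    else
        perror("read");
    return 0;
}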

The kernel's I/O subsystem provides numerous services. Among these are I/O scheduling,
buffering, caching, spooling, device reservation, and error handling. Another service, name
translation, makes the connection between hardware devices and the symbolic file names used
by applications. It involves several levels of mapping that translate from character-string names,
to specific device drivers and device addresses, and then to physical addresses of I/O ports or
bus controllers.

The kernel uses many similar structures to track network connections, character-device
communications, and other I/O activities. UNIX provides file-system access to a variety of
entities, such as user files, raw devices, and the address spaces of processes. Although each of
these entities supports a read() operation, the semantics differ. For instance, to read a user file,
the kernel needs to probe the buffer cache before deciding whether to perform a disk I/O. To
read a raw disk, the kernel needs to ensure that the request size is a multiple of the disk sector
size and is aligned on a sector boundary. To read a process image, it is merely necessary to
copy data from memory. UNIX encapsulates these differences within a uniform structure by
using an object-oriented technique. The open-file record contains a dispatch table that holds
pointers to the appropriate routines, depending on the type of file. Some operating systems use
object-oriented methods even more extensively.
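
A minimal sketch of that dispatch-table idea is shown below; the structure and function names are
illustrative, not the actual UNIX kernel definitions.

/* Sketch: an open-file record carrying a table of function pointers, so the
 * same read() entry point selects per-type behavior.  Names are made up. */
#include <stddef.h>
#include <stdio.h>
#include <sys/types.h>

struct open_file;                               /* forward declaration */

struct file_ops {                               /* the dispatch table  */
    ssize_t (*read)(struct open_file *f, void *buf, size_t len);
};

struct open_file {
    const struct file_ops *ops;                 /* filled in at open() time */
    /* per-type state (buffer-cache handle, device number, ...) would go here */
};

static ssize_t regular_read(struct open_file *f, void *buf, size_t len) {
    (void)f; (void)buf;
    printf("regular file: probe the buffer cache, maybe do disk I/O\n");
    return (ssize_t)len;
}

static ssize_t raw_dev_read(struct open_file *f, void *buf, size_t len) {
    (void)f; (void)buf;
    printf("raw device: enforce sector-aligned, sector-multiple requests\n");
    return (ssize_t)len;
}

int main(void) {
    struct file_ops regular = { regular_read };
    struct file_ops rawdev  = { raw_dev_read  };
    struct open_file a = { &regular }, b = { &rawdev };

    a.ops->read(&a, NULL, 512);                 /* dispatch depends on file type */
    b.ops->read(&b, NULL, 512);
    return 0;
}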

In summary, the I/O subsystem coordinates an extensive collection of services that are available
to applications and to other parts of the kernel. The I/O subsystem supervises these procedures:

 Management of the name space for files and devices
 Access control to files and devices
 Operation control (for example, a modem cannot seek())
 File-system space allocation
 Device allocation
 Buffering, caching, and spooling
 I/O scheduling
 Device-status monitoring, error handling, and failure recovery
 Device-driver configuration and initialization

Activities/Assessments:

Answer the following:


1. When multiple interrupts from different devices appear at about the same time, a priority
scheme could be used to determine the order in which the interrupts would be serviced.
Discuss what issues need to be considered in assigning priorities to different interrupts.
2. What are the advantages and disadvantages of supporting memory-mapped I/O to device
control registers?
3. Consider the following I/O scenarios on a single-user PC:
a. A mouse used with a graphical user interface
b. A tape drive on a multitasking operating system (with no device pre-allocation
available)
c. A disk drive containing user files
d. A graphics card with direct bus connection, accessible through memory-mapped
I/O.
For each of these scenarios, would you design the operating system to use buffering,
spooling, caching, or a combination? Would you use polled I/O or interrupt-driven I/O?
Give reasons for your choices.
4. In most multiprogrammed systems, user programs access memory through virtual
addresses, while the operating system uses raw physical addresses to access memory.
What are the implications of this design on the initiation of I/O operations by the user
program and their execution by the operating system?
5. What are the various kinds of performance overheads associated with servicing an
interrupt?
6. Describe three circumstances under which blocking I/O should be used. Describe three
circumstances under which nonblocking I/O should be used. Why not just implement
nonblocking I/O and have processes busy-wait until their device is ready?

References:

 OPERATING SYSTEM CONCEPTS, 7th edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN, GREG GAGNE

 OPERATING SYSTEM CONCEPTS, 10th edition


By ABRAHAM SILBERSCHATZ, PETER BAER GALVIN

 OPERATING SYSTEMS Notes and Lectures

 MODERN OPERATING SYSTEMS 4th edition


By ANDREW S. TANENBAUM and HERBERT BOS

For further readings:

Operating System Concepts, 10th edition
