2021-22 Term 1 - Operating Systems 5 - en


Machine Translated by Google

1906003052015
Operating Systems

Asst. Prof. Dr. Önder EYECİOĞLU
Computer Engineering

WEEK TOPICS

Week 1 : Introduction to operating systems, operating system strategies
Week 2 : System calls
Week 3 : Tasks, task management
Week 4 : Threads
Week 5 : Job scheduling algorithms
Week 6 : Inter-task communication and synchronization
Week 7 : Semaphores, monitors and applications
Week 8 : Midterm exam
Week 9 : Critical section problems
Week 10 : Deadlock problems
Week 11 : Memory management
Week 12 : Paging, segmentation
Week 13 : Virtual memory
Week 14 : File system, access and protection mechanisms, disk scheduling and management
Week 15 : Final exam

Course day and time: Wednesday, 13:00-16:00
• Practice: Unix (Linux) operating system
• Attendance requirement is 70%
• Applications will be carried out in the C programming language. Programming knowledge is expected from students.

Introduction

Sources:
– Modern Operating Systems, 3rd Edition, Andrew S. Tanenbaum, Prentice Hall, 2008.
– Computer Operating Systems (BİS), Ali Saatçi, 2nd Edition, Bıçaklar Kitabevi.

LESSON - 4

THREAD MANAGEMENT

THREADS
• The traditional (or heavyweight) process model has a single thread of control. It is based on two independent concepts: resource grouping and execution.
• A thread (also called a lightweight process, LWP) is the basic unit of execution. It consists of:
  o a thread ID,
  o a program counter,
  o a register set,
  o and a stack.
• All threads in a process share exactly the same address space, which means they share the same global variables.


THREADS
• Threads of the same process share its code section, its data section, and other OS resources such as open files and signals.
• If a process has more than one thread of control in the same address space, the threads can be thought of as separate processes running quasi-in-parallel.
• «Multithreading» works like multiple processes running together: the processor switches rapidly back and forth between threads, creating the impression that the threads run in parallel.
• In reality, if a process has 3 threads, CPU time is shared among the 3 threads depending on the speed of the CPU.


THREADS
• Although a thread runs within a process, thread and process are different concepts and can be considered separately:
• Threads share an address space, open files, and other resources.
• Processes share physical memory, disks, printers, and other resources.
• Since each thread can access every memory address in the address space of its process, there is no protection between threads:
  • it is impossible, because the threads share one address space;
  • it is also unnecessary, because the threads are partners, not rivals.

As in a traditional process (i.e. a process with only one thread), a thread can be in any of several states. Transitions between thread states are the same as transitions between process states.


BENEFITS

• The benefits of multithreaded programming can be divided into four categories:

1. Flexibility
2. Resource sharing
3. Overhead economy
4. Ability to use multiprocessor architectures


MULTITHREADING MODEL

Thread management can generally be implemented in two different ways:

In user space: The operating system is unaware of the existence of threads. Each task manages the switching between its own threads within its own time slice; the operating system is not notified of the switches. Example: POSIX Pthreads, Mach C-Threads.

In kernel space: The operating system manages the tasks as well as the threads under each task, and the operating system handles every thread switch itself. Example: Windows NT.


MULTITHREADING MODEL

There are three models that define the relationship between user-space threads and kernel-space threads:

1. Many-to-One model:
+ Thread management is done by the thread library in user space, so it is efficient;
- However, if a thread makes a blocking system call, the entire process is blocked.
- Multiple threads cannot run in parallel on multiprocessor systems, because only one thread can access the kernel at a time.

2. One-to-One model
3. Many-to-Many model


MULTITHREADING MODEL

2. One-to-One model:

• Provides greater concurrency than the many-to-one model by allowing another thread to execute while one thread makes a blocking system call;
• It also allows multiple threads to run in parallel on multiple processors.
• The only drawback of this model is that creating a user thread requires the creation of a corresponding kernel thread.
• Because the overhead of creating kernel threads can burden an application's performance, most implementations of this model limit the number of threads supported by the system.
• Linux implements this model, as does the Windows operating system family.



MULTITHREADING MODEL

3. Many-to-Many model:

• Multiplexes many user-level threads onto a smaller or equal number of kernel threads.
• The number of kernel threads may be specific to a particular application or machine.
• While the many-to-one model lets the developer create any number of user threads, true concurrency is not gained because the kernel can schedule only one thread at a time.
• The one-to-one model provides greater concurrency, but the developer must be careful not to create too many threads within an application.
• The many-to-many model suffers from neither of these shortcomings:
  → Developers can create as many user threads as needed, and the corresponding kernel threads can run in parallel on multiprocessors.
  → Additionally, when a thread performs a blocking system call, the kernel can schedule another thread to execute.


THREAD LIBRARIES
A thread library provides the programmer with an API for creating and managing threads.
There are two basic ways to implement a thread library.

1. The first approach is to provide a library entirely in user space, with no kernel support. All code and data structures for the library reside in user space. This means that calling a function in the library results in a local function call in user space, not a system call.

2. The second approach is to implement a kernel-level library supported directly by the OS. In this case, the code and data structures for the library reside in kernel space. Calling a function in the API for the library usually results in a system call to the kernel.


THREAD LIBRARIES
Three main thread libraries are in use today:
1. POSIX Pthreads. Pthreads, the thread extension of the POSIX standard, may be provided as either a user-level or a kernel-level library.
2. Win32. The Win32 thread library is a kernel-level library available on Windows systems.
3. Java. The Java thread API allows threads to be created and managed directly in Java programs. However, since in most cases the JVM runs on top of a host operating system, the Java thread API is typically implemented using a thread library available on the host system.


PTHREAD LIBRARY (POSIX)

• Pthreads refers to the POSIX standard (IEEE 1003.1c) that defines an API for thread creation and synchronization.
• This is a specification for thread behavior, not an implementation. Operating system designers may implement the specification in any way they wish.
• Many systems implement the Pthreads specification, including Solaris, Linux, Mac OS X, and Tru64 UNIX. Shareware implementations are also available in the public domain for various Windows operating systems.


PTHREAD LIBRARY (POSIX)


LESSON - 4

1. Introduction

The need for inter-process communication may arise for various reasons. Sometimes one process needs a result produced by another process; sometimes processes working together on a problem have to wait for each other. Inter-process communication can take different forms:

1. The communicating processes may be on the same machine or on different machines connected via a computer network.

2. Connection-oriented (data transfer) or connectionless (message transfer) communication can be established between processes.

3. Rendezvous method: how the communicating processes initiate communication:

a. An object created in the file system can be used.

b. An Internet address can be used.


1. Introduction

Inter-process communication mechanisms:

Mechanism       Characteristic                  Machine distance            Rendezvous
exit()          returns an integer              same machine                from child process to parent process
signal()        signal                          same machine                signal number
mmap            to a virtual memory region      same machine                virtual address
Pipe            queueing, first-in first-out    same machine                file descriptor
Message queue   message                         same machine                IPC ID
Shared memory   like mmap                       same machine                IPC ID
Semaphore       synchronization                 same machine                IPC ID
Socket          message                         same or different machine   file descriptor


2. Inter-Process Communication Mechanisms

The Unix operating system provides three basic structures for inter-process interaction:
1. Message passing
2. Shared memory
3. Semaphores

In the Unix operating system, the kernel uses a unique key for every resource. A unique key must be generated for the resources used by these three inter-process communication structures. The ftok() function is used for this.
ftok() - Creates an IPC key from a file name. (https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/apis/p0zftok.htm)
Identifier-based inter-process communication methods require you to pass a key to the msgget(), semget() and shmget() functions to obtain inter-process communication identifiers. The ftok() function is the mechanism used to generate this key.

Return values:
key value : ftok() succeeded.
(key_t)-1 : ftok() failed; the errno variable is set to indicate the error.

ftok1.c


Message Queues
The following functions are used to work with the message queue system:
• msgget() : used to access a message queue
• msgsnd() : used to send messages
• msgrcv() : used to receive messages
• msgctl() : used to manage a message queue

#include <sys/msg.h>
int msgget(key_t key, int msgflg);
key: unique key identifying the message queue

msgflg                        Meaning
0                             Returns the ID of the message queue.
IPC_CREAT | 0640              If the message queue does not exist, creates it; returns its ID.
IPC_CREAT | IPC_EXCL | 0640   If the message queue does not exist, creates it and returns its ID; if it already exists, an error is returned.


Message Queues
#include <sys/msg.h>
int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

Here msgp is the address of a memory area of type msgbuf that contains the message:

struct msgbuf {
    long mtype;
    char mtext[1];
};

The message can be of any nature; mtext[1] is used here only to mark the beginning of the data. msgsz defines the size of the message that the msgp pointer points to.

If the value of the msgflg flag is 0, the calling process is blocked when the message queue is full for an msgsnd call, or empty for an msgrcv call. If the value of this flag is IPC_NOWAIT, the call does not block: if the message queue is full for an msgsnd call, or empty for an msgrcv call, an error code is returned. The value of the error code, errno, is EAGAIN for an msgsnd call and ENOMSG for an msgrcv call.

msg_queue.c


Shared Memory
With shared memory, a process shares part of its memory space with another process (Figure-1). The shared memory area may be mapped to different regions of Process A's and Process B's memory address spaces. The system calls related to the shared memory system are listed below; we use these system calls to allocate, attach, and release shared memory.


Shared Memory
Now let us look at the parameters and usage of these functions in turn.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget(key_t key, size_t size, int shmflg);
key: unique key identifying the shared memory
size: defines the size of the shared memory segment. This value is ignored for an existing shared memory segment.

shmflg                        Meaning
0                             Returns the ID of the shared memory.
IPC_CREAT | 0640              Creates the shared memory if it does not exist; otherwise returns its ID.
IPC_CREAT | IPC_EXCL | 0640   If the shared memory does not exist, creates it and returns its ID; if it already exists, an error is returned.

#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);

shmflg        Meaning
0             The shared memory area can be used for both reading and writing.
SHM_RDONLY    The shared memory area can be used read-only.


Task Management
Scheduling Algorithms
1. FCFS (First Come First Served) Algorithm:
According to this algorithm, the task that requests the CPU first uses the processor first. It can be implemented with a FIFO queue. When a task enters the ready queue, its task control block (PCB) is added to the tail of the queue. When the CPU becomes free, the task at the head of the queue is given to the CPU and removed from the queue. With this algorithm, the waiting time of tasks tends to be high.

Example: Let us assume that tasks P1, P2, P3 are placed in the queue in that order:

Task    Run time (ms)
P1      24
P2      3
P3      3


Task Management
Scheduling in Batch Systems


Task Management
1. FCFS (First Come First Served) Algorithm:
Example: Let us assume that tasks P1, P2, P3 are placed in the queue in that order:

Task    Run time (ms)
P1      24
P2      3
P3      3

1. Suppose the tasks arrive in the order P1, P2, P3. The schedule is then P1, P2, P3:

Average waiting time: (0+24+27) / 3 = 17 ms

2. If the tasks arrive in the order P2, P3, P1, the schedule is P2, P3, P1:

Average waiting time: (0+3+6) / 3 = 3 ms


Task Management
2. SJF (Shortest Job First) Algorithm:
In this algorithm, when the CPU becomes idle, the task with the shortest running time among the remaining tasks is given to the processor to run. If the remaining times of two tasks are the same, the FCFS algorithm is applied. In this algorithm, each task is evaluated by the length of its next CPU burst, which is used to find the shortest job.

SJF variants:

1. Non-preemptive SJF: once the CPU is allocated to a task, the task cannot be interrupted until its CPU burst completes.
2. Preemptive SJF: if a new task arrives whose CPU burst time is less than the remaining processing time of the currently running task, the running task is preempted. This method is called SRTF (Shortest Remaining Time First).

SJF is optimal in the sense that it gives the minimum average waiting time for a given set of tasks.

Task Management
2. SJF (Shortest Job First) Algorithm:
Example: Assume tasks P1, P2, P3, P4 are presented with CPU burst times of 6, 7, 8 and 3 ms respectively. Let us find the average waiting time using the non-preemptive SJF method (SJF runs the tasks in the order P4, P1, P2, P3):

• SJF : t_avg = (tP1 + tP2 + tP3 + tP4) / 4 = (3+9+16+0) / 4 = 7 ms

• FCFS : t_avg = (tP1 + tP2 + tP3 + tP4) / 4 = (0+6+13+21) / 4 = 10 ms


Mission Control
3.Multi-Queue Scheduling Algorithm:
According to this algorithm, tasks are divided into certain classes and each class of tasks creates
its own queue. In other words, ready tasks are converted into a multi-level queue. Tasks are
placed in a certain queue according to the type of task, its priority, memory status or other
characteristics. The scheduling algorithm for each queue may be different. However, it is created
in the algorithm that allows tasks to be transferred from one queue to another.

According to this algorithm, tasks in the high priority queue are processed first. If this resource
is empty, lower level tasks can be run from it.


Task Management
4. Priority Scheduling Algorithm:
According to this algorithm, a priority value is assigned to each task, and the tasks use the processor in order of priority. Tasks with the same priority are run with the FCFS algorithm.


Task Management
5. Round Robin (RR) Algorithm:
Each task receives a small time slice of the CPU. When this time is up, the task is preempted and added to the end of the ready queue.
• Example: Let us assume that tasks P1, P2, P3, P4 are presented in the following order. If the time quantum is 20 ms, then:
