2021-22 / 1 - İşletim Sistemleri (Operating Systems) 5 - en
1906003052015
Operating Systems
WEEK TOPICS
Week 15 : Final
Machine Translated by Google
Introduction
Sources:
– Modern Operating Systems, 3rd Edition, Andrew S. Tanenbaum, Prentice Hall, 2008.
– Computer Operating Systems (BİS), Ali Saatçi, 2nd Edition, Bıçaklar Kitabevi.
LESSON - 4
THREAD MANAGEMENT
THREADS
• The traditional (heavyweight) process model is based on two independent concepts: resource grouping and execution.
• A thread (also called a lightweight process, LWP) is the unit of execution. It consists of:
o a thread ID,
o a program counter,
o a register set,
o and a stack.
• All threads in a process share exactly the same address space, which means they share the same global variables.
THREADS
• Threads share the resources of their process: the code section, the data section, and OS resources such as open files and signals.
• If a process has more than one thread of control in the same address space, the threads can be thought of as separate processes running quasi-parallel.
• Multithreading works the way multiprogramming does: the processor switches rapidly back and forth between the threads, creating the illusion that the threads run in parallel.
• In reality, if there are 3 threads in a process, the CPU time is shared among the 3 threads, depending on the speed of the CPU.
THREADS
• Although a thread runs within a process, thread and process are different concepts and can be considered separately:
• Threads share an address space, open files, and other resources.
• Processes share physical memory, disks, printers, and other resources.
• Since each thread can access every memory address within the process's address space, there is no protection between threads, because such protection
• is impossible, and
• is unnecessary: the threads are partners, not rivals.
As in a traditional process (i.e. a process with only one thread), a thread can be in any one of several states. The transitions between thread states are the same as the transitions between process states.
BENEFITS
1. Flexibility
2. Resource sharing
3. Economy of overhead
4. Ability to use multiprocessor architectures
MULTITHREADING MODEL
1. Many-to-One model: many user-level threads are mapped onto a single kernel thread.
2. One-to-One model: each user-level thread is mapped onto its own kernel thread.
3. Many-to-Many model: many user-level threads are multiplexed onto a smaller or equal number of kernel threads.
MULTITHREADING MODEL
[Figures: the One-to-One and Many-to-Many models]
THREAD LIBRARIES
A thread library provides the programmer with an API for creating and managing threads.
There are two basic ways to implement a thread library:
1. The first approach is to provide a user-level library entirely in user space, with no kernel support. The code and data structures for the library exist in user space, and calling a function in the library results in a local function call, not a system call.
2. The second approach is to implement a kernel-level library supported directly by the OS. In this case, the code and data structures for the library reside in kernel space, and calling a function in the API usually results in a system call to the kernel.
THREAD LIBRARIES
Three main thread libraries are in use today:
1. POSIX Pthreads. Pthreads, the thread extension of the POSIX standard, may be provided as either a user-level or a kernel-level library.
2. Win32. The Win32 thread library is a kernel-level library available on Windows systems.
3. Java. The Java thread API allows threads to be created and managed directly in Java programs. However, since in most cases the JVM runs on top of a host operating system, the Java thread API is typically implemented using a thread library available on the host system.
LESSON - 4
1. Introduction
The need for interprocess communication may arise for various reasons. Sometimes a process needs a result produced by another process; sometimes processes cooperating to solve a problem have to wait for each other. Interprocess communication can take place in different ways:
1. The communicating processes may be on the same machine or on different machines connected over a computer network.
1. Introduction
[Figure: IPC mechanisms such as shared memory (e.g. mmap) between processes on the same machine, identified by an IPC key/ID]
Return values:
• On success, ftok() returns the generated key value.
• On failure, ftok() returns (key_t)-1, and the errno variable is set to indicate the error.
Example: ftok1.c
Message Queues
You can use the following functions to work with the message queue system:
• msgget()
Used to access the message queue
• msgsnd()
Used to send messages
• msgrcv()
Used to receive messages
• msgctl()
Used to manage message queue
#include <sys/msg.h>
int msgget(key_t key, int msgflg);
key: the unique key identifying the message queue

msgflg                        Meaning
0                             Returns the ID of an existing message queue
IPC_CREAT | 0640              Creates the message queue if it does not exist and returns its ID
IPC_CREAT | IPC_EXCL | 0640   Creates the message queue if it does not exist and returns its ID; returns an error if it already exists
Message Queues
#include <sys/msg.h>
int msgsnd(int msqid, const void *msgp, size_t msgsz, int msgflg);
int msgrcv(int msqid, void *msgp, size_t msgsz, long msgtyp, int msgflg);

Here msgp is the address of a memory area of type struct msgbuf that contains the message:

struct msgbuf {
    long mtype;
    char mtext[1];
};

The message can be of any nature; mtext[1] is used here only to mark the beginning of the data. msgsz defines the size of the message that the msgp pointer points to.
If the value of the msgflg flag is 0, the calling process blocks when the message queue is full (for an msgsnd call) or empty (for an msgrcv call). If the value of this flag is IPC_NOWAIT, the call does not block: if the queue is full (msgsnd) or empty (msgrcv), an error code is returned instead. The errno value is EAGAIN for an msgsnd call and ENOMSG for an msgrcv call.
Example: msg_queue.c
Shared Memory
Shared memory means that a process shares part of its memory space with another process (Figure-1). The shared memory area may be mapped to different regions of Process A's and Process B's memory address spaces. The system calls related to the shared memory system are listed below; we use them to allocate, attach, detach, and release shared memory.
Shared Memory
Now let's look at the parameters and usage of these functions in turn.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
int shmget(key_t key, size_t size, int shmflg);
key: the unique key identifying the shared memory
size: defines the size of the shared memory segment. For an existing shared memory segment this value is ignored.

shmflg             Meaning
0                  Returns the ID of an existing shared memory segment
IPC_CREAT | 0640   Creates the shared memory segment if it does not exist and returns its ID

#include <sys/types.h>
#include <sys/shm.h>
void *shmat(int shmid, const void *shmaddr, int shmflg);
int shmdt(const void *shmaddr);

shmflg       Meaning
0            The shared memory segment can be used for both reading and writing
SHM_RDONLY   The shared memory segment can be used read-only
Task Scheduling
Scheduling Algorithms
1. FCFS (First Come, First Served) Algorithm:
According to this algorithm, the task that requests the CPU first uses the processor first. It can be implemented with a FIFO queue. When a task enters the ready queue, its process control block (PCB) is added to the tail of the queue. When the CPU becomes free, the task at the head of the queue is dispatched to the CPU and removed from the queue. With this algorithm, the waiting time of tasks can become high.
Example: Let's assume that tasks P1, P2, P3 are placed in the queue in that order:
Task Scheduling
Scheduling in Batch Systems
Task Scheduling
1. FCFS (First Come, First Served) Algorithm:
Example: Let's assume that tasks P1, P2, P3 are placed in the queue in that order:

Task   Run Time (sec)
P1     24
P2     3
P3     3

1. Let's assume the tasks arrive in the order P1, P2, P3. The schedule is then: P1 runs from 0 to 24, P2 from 24 to 27, P3 from 27 to 30, so the waiting times are 0, 24 and 27 sec (average 17 sec).
Task Scheduling
2. SJF (Shortest Job First) Algorithm:
In this algorithm, when the CPU becomes idle, the task with the shortest run time among the waiting tasks is given the processor. If the remaining times of two tasks are the same, the FCFS algorithm is applied between them. In this algorithm, each task is characterized by the length of its next CPU burst, which is used to find the shortest job.
SJF variants:
1. Non-preemptive SJF: once the CPU is allocated to a task, the task cannot be interrupted until its CPU burst finishes.
2. Preemptive SJF: if a new task arrives whose CPU burst time is shorter than the remaining time of the currently running task, the running task is preempted. This method is called SRTF (Shortest Remaining Time First).
SJF is optimal: it gives the minimum average waiting time for a given set of tasks.
Task Scheduling
2. SJF (Shortest Job First) Algorithm:
Example: Let's assume that tasks P1, P2, P3, P4 are presented in the given order. Let's find the average waiting time according to the non-preemptive SJF method:
Task Scheduling
3. Multilevel Queue Scheduling Algorithm:
According to this algorithm, tasks are divided into certain classes, and each class of tasks forms its own queue; in other words, the ready tasks are organized into a multilevel queue. Tasks are placed in a particular queue according to the task type, priority, memory requirements, or other characteristics. The scheduling algorithm may be different for each queue. Some variants of the algorithm also allow tasks to be moved from one queue to another.
According to this algorithm, the tasks in the high-priority queue are processed first. Only when that queue is empty can tasks from the lower-level queues be run.
Task Scheduling
4. Priority Scheduling Algorithm:
According to this algorithm, a priority value is assigned to each task, and the tasks use the processor in order of priority. Tasks with the same priority are run with the FCFS algorithm.
Task Scheduling
5. Round Robin (RR) Algorithm:
Each task gets a small time slice (quantum) of CPU time. When this time is up, the task is preempted and added to the end of the ready-task queue.
• Example: Let's assume that tasks P1, P2, P3, P4 are presented in the given order. If the time quantum is 20 ms, then: