Mohamed Abdelrahman Anwar - 20011634 - Sheet 4
Operating Systems
Sheet 4
Threads
4.1 Processes and Threads
1.
a. Creation & Deletion Overhead:
• A process is an instance of a program that is being
executed, with its own memory space, file descriptors, and
system resources. Creating a new process involves a
significant amount of overhead, such as allocating memory
and other resources, loading the program into memory,
and setting up communication channels with other
processes. Similarly, deleting a process involves freeing up
all the resources used by the process, including memory,
files, and other system resources. Thus, the creation and
deletion of a process is relatively expensive and time-
consuming.
• A thread, on the other hand, is a lightweight execution
unit that shares the same memory space as its parent
process. Creating a new thread involves much less
overhead than creating a new process, as most of the
resources are already allocated for the process. Similarly,
deleting a thread is less expensive than deleting a process,
as most of the resources are still being used by the parent
process. Thus, the creation and deletion of a thread is
relatively fast and inexpensive compared to a process.
Way of Communication and its speed:
• Processes are isolated from each other and cannot directly
access each other's memory space. To communicate
between processes, inter-process communication (IPC)
mechanisms such as pipes, sockets, or message queues
must be used. IPC has higher overhead than thread
communication, as it involves copying data between
different memory spaces and synchronization mechanisms
to avoid race conditions. Thus, the speed of
communication between processes is slower compared to
threads.
• Threads, on the other hand, can communicate with each other directly through shared memory. This is faster than IPC because threads read and write the same memory locations without any copying of data through the kernel. Synchronization is still required, however: if two or more threads access the same memory location concurrently without coordination, race conditions can lead to unexpected results.
4.2 Types of Threads
2.
a. Mapping:
• User-level threads are created and managed entirely by a threads library in the application program, without any support from the operating system; the kernel is not even aware that they exist. The mapping is therefore many-to-one: all of the user-level threads in a process are multiplexed onto the single kernel-scheduled entity that represents the process, and this mapping is maintained by the threads library.
• Kernel-level threads, on the other hand, are created and managed by the operating system. In a pure kernel-level approach the mapping is one-to-one: each thread the application creates is backed by its own kernel-level thread, which the kernel schedules directly. (Hybrid systems also exist in which several user-level threads are multiplexed onto a smaller or equal number of kernel-level threads; this is the many-to-many model.)
Dealing with multi-processor systems:
• With user-level threads, the kernel schedules the process as a single unit, so at most one of the process's threads can run at any instant regardless of how many processors are available. A purely user-level implementation therefore cannot exploit the parallelism of a multi-processor system.
• Kernel-level threads, on the other hand, are aware of the
underlying hardware and can be scheduled to run on
different processors or cores, allowing them to fully utilize
the processing power of a multi-processor system.
Overhead on the kernel:
• User-level threads impose very little overhead on the kernel, as all thread management (creation, deletion, switching, and scheduling) is done in user space by the threads library. The kernel becomes involved only when the process itself makes a system call.
• Kernel-level threads have higher overhead on the kernel,
as all thread management is done by the operating
system. The kernel is involved in every thread creation,
deletion, and context switch.
Portability:
• User-level threads are highly portable across different
operating systems, as they do not rely on any specific
features of the operating system. However, the
performance of user-level threads may be affected by
differences in hardware or system configuration.
• Kernel-level threads are less portable across different
operating systems, as they rely on specific features of the
operating system. However, the performance of kernel-
level threads is more consistent across different hardware
and system configurations.
Who does the dispatching and scheduling?
• User-level threads are dispatched and scheduled by the
application program. The program must implement its
own scheduling algorithm and determine which thread to
run next.
• Kernel-level threads are dispatched and scheduled by the
operating system. The operating system uses a kernel-
level scheduler to determine which thread to run next,
based on factors such as thread priority, time slice, and
processor affinity.
3. When a user-level thread makes a blocking system call, the process enters kernel mode to execute it. The kernel knows nothing about the user-level threads inside the process; it sees and schedules the process as a single unit. Consequently, when the call blocks, the kernel blocks the entire process, and with it every user-level thread it contains, including threads that are ready to run.
This does not happen in kernel-level threads, as each kernel-
level thread has its own kernel-level context and can execute
system calls independently of other threads in the process.
When a kernel-level thread makes a system call, only that
thread is blocked, and other kernel-level threads in the same
process can continue to execute.
4.
ULTs:
Advantages:
• Thread switching does not require kernel-mode privileges
because all of the thread management data structures are
within the user address space of a single process.
Therefore, the process does not switch to the kernel mode
to do thread management. This saves the overhead of two
mode switches (user to kernel; kernel back to user).
• Scheduling can be application specific. One application
may benefit most from a simple round-robin scheduling
algorithm, while another might benefit from a priority-
based scheduling algorithm. The scheduling algorithm can
be tailored to the application without disturbing the
underlying OS scheduler.
• ULTs can run on any OS. No changes are required to the
underlying kernel to support ULTs. The threads library is a
set of application-level functions shared by all applications.
Disadvantages:
• In a typical OS, many system calls are blocking. As a result,
when a ULT executes a system call, not only is that thread
blocked, but all of the threads within the process are
blocked as well.
• In a pure ULT strategy, a multithreaded application cannot
take advantage of multiprocessing. A kernel assigns one
process to only one processor at a time. Therefore, only a
single thread within a process can be executed at a time.
In effect, we have application-level multiprogramming
within a single process. While this multiprogramming can
result in a significant speedup of the application, there are
applications that would benefit from the ability to execute
portions of code simultaneously.
KLTs:
Advantages:
This approach overcomes the two principal drawbacks of the
ULT approach:
• First, the kernel can simultaneously schedule multiple
threads from the same process on multiple processors.
• Second, if one thread in a process is blocked, the kernel
can schedule another thread of the same process.
• Another advantage of the KLT approach is that kernel
routines themselves can be multithreaded.
Disadvantages:
The principal disadvantage of the KLT approach compared to
the ULT approach is that the transfer of control from one
thread to another within the same process requires a mode
switch to the kernel.
9.
a. The number of kernel threads allocated to the program is less than the number of processors: In this scenario, the scheduler can run the program on at most as many processors as it has kernel threads, so the remaining processors sit idle. The program therefore cannot exploit the full processing power of the multi-processor system.
b. The number of kernel threads allocated to the program is equal to the number of processors: In this scenario, each kernel thread can be scheduled on its own processor, so all processors can be busy simultaneously and context switching between kernel threads is minimized. The drawback is that if one kernel thread blocks in the kernel (for example, on I/O), its processor remains idle until that thread resumes.
c. The number of kernel threads allocated to the program is greater than the number of processors but less than the number of user-level threads: In this scenario, the kernel threads must be multiplexed onto the available processors, which adds some context-switch overhead. In return, when one kernel thread blocks, the kernel can schedule another kernel thread of the same program in its place, so the processors can be kept busy and overall utilization can actually improve compared to case (b).
10.
a. The program's main goal is to demonstrate how to use pthreads to create a simple multithreaded program in C, and how to share a global variable between two threads.
b.
• The output looks correct except for a single extra '.' in the first line. This is most likely a buffering effect: printf() does not necessarily flush the output buffer immediately, so output from the two threads can interleave in an unexpected order.
• The final output shows that the value of myglobal is 21, which is not the expected value. Both the main thread and the child thread increment myglobal 20 times each, for a total of 40 increments, so the final value should be 20 + 20 + 1 (the initial value) = 41. The discrepancy is due to a race condition: both threads read and modify myglobal concurrently, and since the program uses no synchronization mechanism such as a mutex or a semaphore, the order in which the threads access the variable is non-deterministic. Increments can be lost when one thread overwrites the other's update, so the final value is unpredictable and different runs of the program can produce different results.