
UNIVERSITY OF GONDAR

INSTITUTE OF TECHNOLOGY
Department: Electrical and Computer Engineering
Computer stream
Programming Language Group Assignment

Name: ID No
1 chuol kay ---------------------------------------------------------00311/09

2 Dawit Minuye-----------------------------------------------------00334/09

3 Bistrat Debalke-----------------------------------------------------00296/09

4 Buruk kassahun----------------------------------------------------00307/09

5 Derso Eshettie-------------------------------------------------------

Submission Date: 12/02/2021

Submitted To: Mr. Lake F.


1) Concurrency in programming

i. What is concurrency in programming?

Concurrency is the notion of multiple things happening at the same time. With the
proliferation of multicore CPUs and the realization that the number of cores in each processor
will only increase, software developers need new ways to take advantage of them.

It happens when multiple copies of the same program run at the same time and, in the course
of their execution, communicate with each other. In many simple concurrent applications we
use a single machine, and the program’s instruction code is loaded into memory only once; in
other words, a single process is created, but that process’s execution has multiple threads.
Each thread remembers which instruction it is on and executes that instruction before going on
to the next one; thus the various threads in a process each follow their own control flow, but
can make decisions based on information they receive from other threads.

In a concurrent program, several streams of operations may execute concurrently. Each stream of
operations executes as it would in a sequential program except for the fact that streams can
communicate and interfere with one another.

Concurrency can be very tricky. Two threads can communicate by having a shared memory
location that one writes to and the other reads from. But then the value read by the latter depends
on whether the read occurred before or after the former wrote to that memory location.

ii. List and explain the levels of concurrency.

 Instruction level: executing two or more machine instructions
simultaneously. This is concurrency at the hardware level, also
called concurrency at the level of instruction execution.
 Statement level: executing two or more high-level language statements
simultaneously. This means that several statements of a program are
executed concurrently on some physically parallel computer architecture.

 Unit level: executing two or more subprogram units simultaneously.
This level of concurrency is also called logical concurrency. In contrast
to physical concurrency, it does not imply that the logically concurrent
"units" are really executed in physical concurrency.
 Program (or task) level: often several programs are executed
concurrently (in an interleaved manner) on a computer within a
time-sharing operating system. The CPU of the computer is shared among
the programs by allocating the CPU to each program for a short time
slice at a time. The allocation is done by a scheduler. The same
technique of time sharing is usually also used for realizing concurrent
"units" within a given program.
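Unit-level (logical) concurrency can be sketched in Java, the language used later in this assignment. In this illustrative example (the class and method names are our own, not from any particular library), two subprogram units run as separate threads:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class UnitLevelDemo {
    static int runUnits() {
        AtomicInteger finished = new AtomicInteger();   // shared completion count
        Runnable unitA = () -> finished.incrementAndGet();
        Runnable unitB = () -> finished.incrementAndGet();

        Thread t1 = new Thread(unitA);
        Thread t2 = new Thread(unitB);
        t1.start();                 // both units are now logically concurrent
        t2.start();
        try {
            t1.join();              // wait for both units to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return finished.get();
    }

    public static void main(String[] args) {
        System.out.println(runUnits() + " units completed");
    }
}
```

Whether the two units actually execute in physical concurrency depends on the hardware and the scheduler; logically, they are concurrent either way.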

iii. Explain briefly the difference between concurrency and parallelism with
an example.

Concurrency and parallelism are related terms but not the same, and they are often mistaken
for one another. The crucial difference is that concurrency is about dealing with many things
at the same time (giving the illusion of simultaneity), or handling concurrent events while
essentially hiding latency. Parallelism, on the contrary, is about doing many things at the
same time in order to increase speed.

Concurrency is a technique used to decrease the response time of a system that has a single
processing unit (sequential processing). A task is divided into multiple parts, and those parts
are processed seemingly simultaneously, but not at the same instant. This produces the illusion
of parallelism, although the chunks of the task are not actually processed in parallel.
Concurrency is obtained by interleaving the operation of processes on the CPU, in other words
through context switching, where control is switched so swiftly between different threads or
processes that the switching is imperceptible. That is why it looks like parallel processing.

Concurrency grants multiple parties access to shared resources and therefore requires some
form of communication. The scheduler works on a thread while it is making useful progress;
when the thread can no longer make progress (for example, while waiting), it switches to a
different thread that can.

Parallelism is devised for the purpose of increasing computational speed by using multiple
processors. It is a technique of executing different tasks simultaneously, at the same instant.
It involves several independent processing units or computing devices operating in parallel,
performing tasks so as to increase computational speed-up and improve throughput.

Key differences between Concurrency and Parallelism:

1. Concurrency is the act of managing and making progress on multiple tasks at the same
time; parallelism, on the other hand, is the act of actually running multiple tasks simultaneously.
2. Parallelism is obtained by using multiple CPUs, like a multi-processor system and
operating different processes on these processing units or CPUs. In contrast, concurrency
is achieved by interleaving operation of processes on the CPU and particularly context
switching.
3. Concurrency can be implemented with a single processing unit, while this is not
possible for parallelism, which requires multiple processing units.
As an example, suppose that we need to compute n values and add them together. We know
that this can be done with the following serial code:

sum = 0;
for (i = 0; i < n; i++) {
    x = compute_next_value(...);
    sum += x;
}

Now suppose we also have p cores, and p is much smaller than n. Then each core can form a
partial sum of approximately n/p values:

// Here the prefix my_ indicates that each core uses its own private values,
// and each core can execute this block of code independently of the others.
my_sum = 0;
my_first_i = ...;
my_last_i = ...;
for (my_i = my_first_i; my_i < my_last_i; my_i++) {
    my_x = compute_next_value(...);
    my_sum += my_x;
}
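The partial-sum idea above can be sketched concretely in Java. This is an illustrative version, not the original pseudocode's exact implementation: since compute_next_value is left unspecified in the source, the value at index i stands in for it here. Each of p worker threads privately sums its own slice of the n values, then adds its partial result into a shared total.

```java
import java.util.concurrent.atomic.AtomicLong;

public class ParallelSum {
    static long parallelSum(int n, int p) {
        AtomicLong total = new AtomicLong();
        Thread[] workers = new Thread[p];
        int chunk = (n + p - 1) / p;               // ceil(n / p) values per core
        for (int core = 0; core < p; core++) {
            final int myFirstI = core * chunk;     // this worker's private range
            final int myLastI = Math.min(n, myFirstI + chunk);
            workers[core] = new Thread(() -> {
                long mySum = 0;                    // private partial sum
                for (int i = myFirstI; i < myLastI; i++) {
                    mySum += i;                    // stand-in for compute_next_value
                }
                total.addAndGet(mySum);            // combine partial sums atomically
            });
            workers[core].start();
        }
        try {
            for (Thread w : workers) w.join();     // wait for all partial sums
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total.get();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(1000, 4));  // sum of 0..999 = 499500
    }
}
```

Note that only the final combining step needs synchronization (the AtomicLong); each core's loop runs entirely on private data, which is exactly what the my_ prefix in the pseudocode indicates.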

iv. List and explain some of the design issues for programming concurrency in terms of
object-oriented paradigms.
 Concurrency encompasses a host of design issues, including communication among
processes; sharing of, and competition for, resources (such as memory, files, and I/O
access); synchronization of the activities of multiple processes; and allocation
of processor time to processes.
The design space for object-based concurrent programming emphasizes high-level
design alternatives in the areas of process structure, internal process
concurrency, synchronization, and inter-process communication.

● Process structure

Design issues of process structure include:

● Shared memory versus object model: process-oriented
architectures discard the shared-memory model that forms the basis
of procedure-oriented programming and replace it with an object
model in which each object is responsible for its own protection.

● Client and server interaction: objects may be viewed as server
processes that are activated by messages from their clients.

● Logical versus physical distribution

● Weakly versus strongly distributed systems

2) Thread in programming.
i. What is Thread in programming?
A thread is short for a thread of execution. Threads are a way for a program to divide ("split")
itself into two or more simultaneously (or pseudo-simultaneously) running tasks, each associated
with a single use of a program that can handle multiple concurrent users. From the program's
point of view, a thread is the information needed to serve one individual user or a particular
service request. If multiple users are using the program, or concurrent requests from other
programs occur, a thread is created and maintained for each of them. Threads allow a program
to know which user is being served as the program is alternately re-entered on behalf of
different users. (One way thread information is kept is by storing it in a special data area and
putting the address of that data area in a register.)
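The "one thread per request" idea described above can be sketched in Java with a thread pool. The class, method, and user names below are invented for illustration; a real server would read requests from a socket rather than hard-coding them:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PerRequestThreads {
    // Each call runs on its own worker thread with its own stack,
    // so per-user state does not clash between concurrent requests.
    static String serve(String user) {
        return "served " + user + " on " + Thread.currentThread().getName();
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);  // worker threads
        Future<String> a = pool.submit(() -> serve("alice"));    // request 1
        Future<String> b = pool.submit(() -> serve("bob"));      // request 2
        System.out.println(a.get());   // both users were served concurrently
        System.out.println(b.get());
        pool.shutdown();
    }
}
```

Using a fixed pool rather than creating a raw Thread per request bounds the number of threads, which matters when many users arrive at once.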

ii. Write the advantages and disadvantages of thread in programming?


Advantages
 Enhanced performance and decreased development time
 Simplified and streamlined program coding
 Improved GUI responsiveness
 Simultaneous and parallelized execution of tasks
 Better use of cache storage through better utilization of resources
 Decreased cost of maintenance
 Better use of CPU resources
Disadvantages
 Complex debugging and testing processes
 Context-switching overhead
 Increased potential for deadlock
 Increased difficulty in writing a program
 Unpredictable results
 Global (and heap) variables are shared between all of a process's threads. This creates a
safety issue, since any thread can read or modify shared global state.

3) Synchronization and parallelism in programming

What is the relationship between synchronization and parallelism in programming?


● Synchronization is one of the key problems in building shared-resource-based parallel
software. At the source-language level, shared resources are mostly shared variables,
while at the machine-language level they are registers, memory cells, status flags, etc.
To improve the productivity and the reliability of parallel software, source languages
should provide high-level abstractions for synchronization to ease parallel programming
(rather than simply providing low-level locks, etc.), and compilers and runtime systems
are required to provide accurate and efficient implementations of such abstractions.

● Most of the prevalent parallel programming paradigms use low-level synchronization
facilities such as locks for synchronizing shared variables, which makes parallel
programming difficult and error-prone and limits the benefit of programming on
multi-core processors.
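One of Java's higher-level synchronization abstractions mentioned above is the synchronized keyword (a built-in monitor). In this illustrative sketch, guarding the shared variable makes each increment atomic with respect to other threads, so the final count is deterministic:

```java
public class SafeCounter {
    private int count = 0;

    synchronized void increment() { count++; }   // one thread at a time
    synchronized int get() { return count; }

    static int runSafe() {
        SafeCounter c = new SafeCounter();
        Runnable bump = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.increment();                   // protected by the monitor
            }
        };
        Thread t1 = new Thread(bump);
        Thread t2 = new Thread(bump);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return c.get();                          // always 200_000
    }

    public static void main(String[] args) {
        System.out.println(runSafe());
    }
}
```

The trade-off is the one the text notes: the monitor serializes access to the counter, so threads contend for it; this is why high-level abstractions must also be implemented efficiently.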

How can parallelism be implemented in Java? Explain.

Parallel computing involves dividing a problem into subproblems, solving those subproblems
simultaneously (in parallel, with each subproblem running in a separate thread), and then
combining the results of the solutions to the subproblems. Java SE provides the Fork/Join
Framework, which helps you more easily implement parallel computing in your applications.
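The divide-solve-combine pattern maps directly onto the Fork/Join Framework's RecursiveTask. The sketch below sums an index range by splitting it into subtasks until each is small enough to solve directly; the threshold value is an illustrative choice:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final long THRESHOLD = 1_000;
    private final long lo, hi;                    // sum the half-open range [lo, hi)

    ForkJoinSum(long lo, long hi) { this.lo = lo; this.hi = hi; }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {               // small enough: solve directly
            long sum = 0;
            for (long i = lo; i < hi; i++) sum += i;
            return sum;
        }
        long mid = (lo + hi) / 2;                 // divide into two subproblems
        ForkJoinSum left = new ForkJoinSum(lo, mid);
        ForkJoinSum right = new ForkJoinSum(mid, hi);
        left.fork();                              // run left half asynchronously
        return right.compute() + left.join();     // combine the sub-results
    }

    static long sumTo(long n) {
        return ForkJoinPool.commonPool().invoke(new ForkJoinSum(0, n));
    }

    public static void main(String[] args) {
        System.out.println(sumTo(10_000));        // 0..9999 = 49995000
    }
}
```

Computing the right half directly while forking only the left keeps the current worker thread busy instead of idle, which is the framework's recommended idiom.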

One difficulty in implementing parallelism in applications that use collections is that collections
aren't thread-safe, which means that multiple threads can't manipulate a collection without
introducing thread interference or memory consistency errors. Note that parallelism isn't
automatically faster than performing operations serially, although it can be if you have enough
data and processor cores.

In Java, you can run streams in serial or in parallel. When a stream executes in parallel, the Java
runtime partitions the stream into multiple substreams.
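The serial/parallel distinction can be sketched with LongStream: the same pipeline runs serially or, after calling parallel(), on substreams scheduled over the common Fork/Join pool. The class and method names here are our own:

```java
import java.util.stream.LongStream;

public class StreamSum {
    // Serial pipeline: one thread processes every element in order.
    static long serialSum(long n) {
        return LongStream.rangeClosed(1, n).sum();
    }

    // Parallel pipeline: the runtime partitions the range into substreams
    // and sums them on the common Fork/Join pool.
    static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(serialSum(1000));    // 500500
        System.out.println(parallelSum(1000));  // same result, computed in parallel
    }
}
```

Because summation is associative, both pipelines produce the same result; operations that are not associative, or that touch shared mutable state, are not safe to parallelize this way.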

4) Which mode of program execution is better? Write your reason for each case.

Parallel and concurrent execution are better, for the reasons explained below.

i. Sequential Execution

A sequence is an ordered list of something. Sequential execution means that each command in a
program script executes in the order in which it is listed in the program. The first command in
the sequence executes first and when it is complete, the second command executes, and so on.

ii. Concurrent Execution

Concurrency means that an application is making progress on more than one task - at the same
time or at least seemingly at the same time (concurrently).

If the computer has only one CPU, the application may not make progress on more than one task
at exactly the same time, but more than one task is in progress at a time inside the application. To
make progress on more than one task concurrently, the CPU switches between the different tasks
during execution.

iii. Parallel Execution

Parallel execution is when a computer has more than one CPU or CPU core and makes progress
on more than one task simultaneously. However, parallel execution does not refer to the same
phenomenon as parallelism. Parallel execution is more efficient than the corresponding
sequential execution, so in terms of performance, parallel execution is better than sequential
execution.

iv. Concurrent, but not parallel execution.

An application can be concurrent but not parallel. This means that it makes progress on more
than one task seemingly at the same time (concurrently), but the application switches between
making progress on each of the tasks until the tasks are completed. There is no true parallel
execution of tasks on parallel threads / CPUs.

v. Parallel, not concurrent execution

An application can also be parallel but not concurrent. This means that the application only
works on one task at a time, and this task is broken down into subtasks which can be processed
in parallel. However, each task (+ subtask) is completed before the next task is split up and
executed in parallel.

vi. Concurrent and Parallel execution.

Finally, an application can also be both concurrent and parallel in two ways:

The first is simple parallel concurrent execution. This is what happens if an application starts up
multiple threads which are then executed on multiple CPUs.

The second way is that the application both works on multiple tasks concurrently and also
breaks each task down into subtasks for parallel execution. However, some of the benefits of
concurrency and parallelism may be lost in this scenario, as the CPUs in the computer are
already kept reasonably busy with either concurrency or parallelism alone. Combining them may
lead to only a small performance gain, or even a performance loss. Make sure you analyze and
measure before blindly adopting a concurrent parallel model.

