
Multithreading in C/C++:

Concurrent programming and parallelism are among the most widely used features at the industry level. As CPUs have advanced in performance and core count, developers are expected to take advantage of those cores at the software level too. To understand multithreading, we first need to understand concurrency. For example, consider a browser such as Firefox. While the Firefox application is open, you can use multiple tabs simultaneously: in one tab you stream music, in another you play a video, and in a third you browse for information. Though multiple tabs run simultaneously, they all run under a single application. The application is the main process, while the tabs can be thought of as threads running independently with some common shared resources. This is what is known as concurrency.

Now, let's look at it from the coding point of view. What can multithreading look like in code? Suppose you are reading a huge file line by line. If you read it in a single thread (without multithreading), reading line by line will take about n*x time, where n is the number of lines and x is the time needed to read a single line. Now say you opt for multithreading. You have a good number of CPU cores and you create a thread pool with a fixed number of threads, say y, chosen to match the number of CPU cores you have. You code it so that the first thread reads the 1st line, the second thread reads the 2nd line, and the yth thread reads the yth line. Since all threads run concurrently, you can read y lines in x seconds; the first thread then reads the (y+1)th line, and so on. The total time taken will be much less, roughly (n*x)/y.

This is the power of multithreading. It is not limited to independent tasks either: there are cases where you create two threads and need one thread to finish before the other proceeds, so that data integrity is maintained.

Implementation of Multithreading:
Multithreading in C is managed by the OS. C itself has no built-in support for multithreading. Rather, it relies on the POSIX thread APIs, which are available on operating systems like GNU/Linux, Solaris, and macOS.

The threading APIs are provided by the header <pthread.h>

1. Creating a thread in C
Each thread is represented by a pthread_t object, which holds a unique thread id; two different threads cannot have the same id. The API used to create a thread is pthread_create()

The syntax of the API is as below:

int pthread_create(
    pthread_t* id,
    const pthread_attr_t* attribute,
    void* (*function)(void*),
    void* arg
);
Where,

pthread_t* id = pointer to the thread id of this thread
const pthread_attr_t* attribute = sets the attributes of the thread; mostly we pass NULL here
void* (*function)(void*) = the worker function that you want to run on the created thread
void* arg = argument passed to the worker function
The first two arguments are more on the thread-management side, and the programmer rarely needs to worry about them: in most cases we pass the reference to the id and NULL, respectively.

The worker function is expected to have the signature void* (void*). If it does, you can pass it directly. If its signature differs, for example

int myworker(void*);

you will sometimes see it cast as (void* (*)(void*))&myworker, but calling a function through a mismatched pointer type is undefined behavior in C, so it is safer to give the worker the expected void* (void*) signature in the first place.
If the worker function needs more than one argument, create a struct holding the arguments and pass its address cast to void*. For example, if the worker needs one int and one char as arguments, create a struct:

struct myarg {
    int a;
    char b;
};
typedef struct myarg myarg;

Then declare an instance, say myarg arg, and pass (void*)&arg as the last argument of pthread_create().


This is how you tackle different scenarios while creating the thread.
2. Waiting for the child thread to finish
This is a very important step in threading. Say you have created a child thread from your main thread, and both are working on some common resources. You need the child thread to finish at some point, from where the parent thread can take over. Consider the following scenario.

At some point in the main thread, you create a child thread. Now both the main thread and the child thread are executing in parallel, but at some further point in the main thread you have to ensure that the child thread has finished; otherwise, there might be a problem with data integrity or something else. There can be two cases here. One is that the child thread finishes its execution before the main thread reaches that point; this case is not our concern. But there can be the case where the child thread is still executing when the main thread reaches that point. In such cases, we need to block the main thread until the child thread finishes. This is known as joining, and the API we use to implement it is pthread_join()

The syntax of the API is as below:

int pthread_join(pthread_t id, void** return_value);

Where,

pthread_t id = the thread id of the thread that has to finish (the child thread)
void** return_value = reference that receives the return value from the child thread (the worker function)
3. Exiting a thread
To exit a thread, pthread_exit() is used. If any value has to be returned from the thread, its reference is passed as the argument to this API. All local variables are destroyed when the thread exits, so only global or dynamically allocated variables can safely be returned.

C/C++ Implementation of Multithreading

Program:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static int i;

void* myworker(void* arg)
{
    // delay loop to strengthen the fact that the main thread
    // is waiting for the child thread to finish
    for (int p = 0; p < 10000000; p++)
        ;

    i = 5;

    return NULL;
}

int main()
{
    pthread_t id;

    i = 3;
    printf("In the main thread, i= %d \n", i);
    printf("Creating a new thread where we will change the value of i "
           "and we will print the changed value here\n");
    int failed = pthread_create(&id, NULL, myworker, NULL);
    if (failed) {
        printf("can't create a child thread, process killed\n");
        return 0;
    }
    else {
        printf("Successfully created a child thread\n");
    }

    // wait for the child thread to finish
    pthread_join(id, NULL);
    printf("Child thread has finished changing the value of i\n");
    printf("i is now %d \n", i);

    return 0;
}

Output

In the main thread, i= 3


Creating a new thread where we will change the value of i and we will print the changed
value here
Successfully created a child thread
Child thread has finished changing the value of i
i is now 5

How to run a multithreading program?

If you have tried to run this in some online compiler and failed, hold on a minute. The POSIX APIs are supported by operating systems like Linux, Solaris, etc. So, first of all, don't try it on an online compiler; try it on a dedicated Linux machine.

Still getting a compiler error saying it can't resolve pthread-related references? That's because you haven't passed the "-lpthread" option, which tells the compiler that the pthread library has to be linked. So to run it correctly, follow these steps:

Open a Linux terminal.
Enter the directory where your code is. Say the file name is "code.c".
To compile: gcc code.c -lpthread or g++ code.c -lpthread
To run: ./a.out
REMOTE PROCEDURE CALL

Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-
server-based applications. It is based on extending the conventional local procedure calling so
that the called procedure need not exist in the same address space as the calling procedure.
The two processes may be on the same system, or they may be on different systems with a
network connecting them.

When making a Remote Procedure Call:

1. The calling environment is suspended, procedure parameters are transferred across the
network to the environment where the procedure is to execute, and the procedure is executed
there.

2. When the procedure finishes and produces its results, its results are transferred back to the
calling environment, where execution resumes as if returning from a regular procedure call.

NOTE: RPC is especially well suited for client-server (e.g. query-response) interaction in
which the flow of control alternates between the caller and callee. Conceptually, the client
and server do not both execute at the same time. Instead, the thread of execution jumps from
the caller to the callee and then back again.

Working of RPC

The following steps take place during an RPC:

1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the client's own address space.
2. The client stub marshalls (packs) the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server machine.
4. On the server, the transport layer passes the message to a server stub, which demarshalls (unpacks) the parameters and calls the desired server routine using the regular procedure call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which marshalls the return values into a message. The server stub then hands the message to the transport layer.
6. The transport layer sends the result message back to the client transport layer, which hands the message back to the client stub.
7. The client stub demarshalls the return parameters and execution returns to the caller.
Key considerations for designing and implementing RPC systems are:

Security: Since RPC involves communication over the network, security is a major concern. Measures such as authentication, encryption, and authorization must be implemented to prevent unauthorized access and protect sensitive data.

Scalability: As the number of clients and servers increases, the performance of the RPC system must not degrade. Load balancing techniques and efficient resource utilization are important for scalability.

Fault tolerance: The RPC system should be resilient to network failures, server crashes, and other unexpected events. Measures such as redundancy, failover, and graceful degradation can help ensure fault tolerance.

Standardization: There are several RPC frameworks and protocols available, and it is important to choose a standardized and widely accepted one to ensure interoperability and compatibility across different platforms and programming languages.

Performance tuning: Fine-tuning the RPC system for optimal performance is important. This may involve optimizing the network protocol, minimizing the data transferred over the network, and reducing the latency and overhead associated with RPC calls.

RPC ISSUES:
Issues that must be addressed:

1. RPC Runtime:
RPC run-time system is a library of routines and a set of services that handle the network
communications that underlie the RPC mechanism. In the course of an RPC call, client-side
and server-side run-time systems’ code handle binding, establish communications over an
appropriate protocol, pass call data between the client and server, and handle communications
errors.

2. Stub:
The function of the stub is to provide transparency to the programmer-written application
code.

On the client side, the stub handles the interface between the client’s local procedure call and
the run-time system, marshalling and unmarshalling data, invoking the RPC run-time
protocol, and if requested, carrying out some of the binding steps.
On the server side, the stub provides a similar interface between the run-time system and the
local manager procedures that are executed by the server.
3. Binding: How does the client know who to call, and where the service resides?
The most flexible solution is to use dynamic binding and find the server at run time when the
RPC is first made. The first time the client stub is invoked, it contacts a name server to
determine the transport address at which the server resides.

Binding consists of two parts:

Naming:
Locating:
A Server having a service to offer exports an interface for it. Exporting an interface registers
it with the system so that clients can use it.
A Client must import an (exported) interface before communication can begin.
4. The call semantics associated with RPC:
It is mainly classified into the following choices:

Retry request message – whether to retry sending a request message when the server has failed or the receiver didn't receive the message.
Duplicate filtering – remove duplicate server requests.
Retransmission of results – resend lost results without re-executing the operations at the server side.
ADVANTAGES:
RPC provides ABSTRACTION, i.e., the message-passing nature of network communication is hidden from the user.
RPC often omits many of the protocol layers to improve performance. Even a small performance improvement is important because a program may invoke RPCs often.
RPC enables applications to be used in a distributed environment, not only in the local environment.
With RPC, code re-writing/re-developing effort is minimized.
Process-oriented and thread-oriented models are supported by RPC.
