
TYPES OF INSTRUCTIONS

FLYNN’S CLASSIFICATION

Flynn’s classification is based on the notion of a stream of information. Two types of information flow into the processor: instructions and data.

The instruction stream is defined as the sequence of instructions executed by the processing unit.

The data stream is defined as the sequence of data, including inputs and partial or temporary results, called for by the instruction stream.

TYPES OF FLYNN’S TAXONOMY

According to Flynn’s classification, each of the instruction and data streams can be single or multiple. Computer architecture can
be classified into four categories:

- SISD (Single Instruction stream, Single Data stream)
- SIMD (Single Instruction stream, Multiple Data streams)
- MISD (Multiple Instruction streams, Single Data stream)
- MIMD (Multiple Instruction streams, Multiple Data streams)

SISD Computers (Single Instruction stream, Single Data stream):

- An SISD computer system is a uni-processor machine which is capable of executing a single instruction, operating on a
single data stream.
- Single Instruction: only one instruction stream is being acted on by the CPU during any one clock cycle.
- Single Data: only one data stream is being used as input during any clock cycle.
- Conventional single-processor von Neumann computers are classified as SISD systems.
- It is a serial (non-parallel) computer.
- Instructions are executed sequentially, but their execution stages may be overlapped (pipelining). Most SISD uni-processor
systems are pipelined.
- SISD computers may have more than one functional unit, all under the supervision of the control unit.
- Example: most PCs, single-CPU workstations, minicomputers, and mainframes, such as the CDC-6600, VAX 11, and IBM 7001, are SISD
computers.

a = b + c → O(1)

V1 = V2 + V3 → O(n)

M1 = M2 + M3 → O(n²)
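
A minimal C sketch of these costs on a serial machine (the array size N and the sample values are illustrative):

#include <stdio.h>
#define N 4

int main(void)
{
    int a, b = 2, c = 3;
    int V1[N], V2[N] = {1, 2, 3, 4}, V3[N] = {5, 6, 7, 8};
    int M1[N][N], M2[N][N] = {{0}}, M3[N][N] = {{0}};

    a = b + c;                          /* O(1): a single add */

    for (int i = 0; i < N; i++)         /* O(n): the same add repeated per element */
        V1[i] = V2[i] + V3[i];

    for (int i = 0; i < N; i++)         /* O(n^2): n*n element-wise adds */
        for (int j = 0; j < N; j++)
            M1[i][j] = M2[i][j] + M3[i][j];

    printf("a = %d, V1[0] = %d, M1[0][0] = %d\n", a, V1[0], M1[0][0]);
    return 0;
}
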
SIMD (Single Instruction stream, Multiple Data streams): (available as GPUs)

- An SIMD system is capable of executing the same instruction on all the CPUs but operating on different data streams.
- Single Instruction: all processing units execute the same instruction at any given clock cycle.
- Multiple Data: each processing unit can operate on a different data element.
- An SIMD computer has a single control unit which issues one instruction at a time, but it has multiple ALUs or processing units to
operate on multiple data sets simultaneously.
- Well suited to scientific computing, since it involves lots of vector and matrix operations.
- Example: Array Processors and Vector Pipelines
o Array processors: ILLIAC-IV, MPP
o Vector Pipelines: IBM 9000, Cray X-MP, Y-MP & C90.
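
As a rough illustration in C, one add expression below is applied to all four data elements at once (this relies on GCC/Clang vector extensions, which are compiler-specific, not standard C):

#include <stdio.h>

typedef int v4si __attribute__((vector_size(16)));  /* four 32-bit ints */

int main(void)
{
    v4si V2 = {1, 2, 3, 4};
    v4si V3 = {10, 20, 30, 40};
    v4si V1 = V2 + V3;          /* single instruction, multiple data elements */

    for (int i = 0; i < 4; i++)
        printf("V1[%d] = %d\n", i, V1[i]);
    return 0;
}
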

VECTOR PROCESSING: (gaming industry)

V1 = V2 + V3 → O(1)

M1 = M2 + M3 → O(n)

https://www.youtube.com/watch?v=s9OA3Xnz4d0&list=PLz8TdOA7NTzQNlzLxRfsv2KexBzRSn3MF&index=82

ARRAY PROCESSING: (military, satellite image processing)

https://www.youtube.com/watch?v=gKYGA7fFad4&list=PLz8TdOA7NTzQNlzLxRfsv2KexBzRSn3MF&index=83

MISD (Multiple Instruction streams, Single Data stream):

- In MISD, multiple instructions operate on a single data stream.
- An MISD computing system is capable of executing different instructions on different PUs, but all of them operate on the
same data set.
- Multiple Instruction: each processing unit may be executing a different instruction stream.
- Single Data: every processing unit operates on the same data element.
- Machines built using the MISD model are practically not useful in most applications; a few machines have been built, but none
of them are available commercially.
- Example: Systolic Arrays

MIMD (Multiple Instruction streams, Multiple Data streams): (asynchronous)

- An MIMD system is capable of executing multiple instructions on multiple data sets.
- Multiple Instruction: every processor may be executing a different instruction stream.
- Multiple Data: every processor may be working with a different data stream.
- MIMD systems are parallel computers capable of processing several programs simultaneously.
- They can be categorized as loosely coupled or tightly coupled, depending on the sharing of data and control.
- Example:
o Most current supercomputers, networked parallel computing “grids”, and multi-processor SMP computers, including
some types of PCs.
o IBM-379, Cray-2, Cray X-MP, C.mmp, UNIVAC-1100/80

HYPER-THREADING: (a single bank with a single window dealing with two queues)

Hyper-Threading is a technology used by some Intel microprocessors that allows a single physical microprocessor to appear to the
operating system as two logical processors, so that two threads can be scheduled on one core.
MULTICORE: (one bank with two windows dealing with two queues each)

Multicore refers to an architecture in which a single physical processor incorporates the core logic of more than one processor.
It is cheaper than a multiprocessor system.

The speed of a processor is a relative term and can be measured in several ways.
Direct Memory Access (DMA) transfers the block of data between the memory
and peripheral devices of the system, without the participation of the processor.
The unit that controls the activity of accessing memory directly is called a DMA
controller.
In concurrent computing, a program is one in which multiple tasks can be in
progress at any instant.
In parallel computing, a program is one in which multiple tasks cooperate closely to solve a problem.
In distributed computing, a program may need to cooperate with other programs
to solve a problem.

SHARED MEMORY SYSTEMS:

In a shared-memory system a collection of autonomous processors is connected to a memory system via an interconnection network,

and each processor can access each memory location. In a shared-memory system, the processors usually communicate implicitly by
accessing shared data structures.

- In a shared memory architecture, multiple processors operate independently but share the common memory as a global address
space.
- They are tightly coupled systems, as the processors share a common memory.
- Only one processor can access the shared memory at a time.
- Changes in a memory location made by one processor are visible to all other processors.
- In shared-memory systems with multiple multicore processors, the interconnect can either connect all the processors directly
to main memory or each processor can have a direct connection to a block of main memory, and the processors can access
each other’s blocks of main memory through special hardware built into the processors.
- In a Uniform Memory Access, or UMA, system, the time to access any memory location is the same for all the cores.
- In a Non-Uniform Memory Access, or NUMA, system, a memory location to which a core is directly connected can be
accessed more quickly than a memory location that must be accessed through another chip.
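
A minimal pthreads sketch of implicit communication through a shared data structure (the counter and thread count are illustrative; compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>

static long counter = 0;              /* shared data structure in the global address space */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *work(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* only one thread touches the shared data at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000: both threads updated the same memory */
    return 0;
}
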

DISTRIBUTED-MEMORY SYSTEMS:

In a distributed-memory system, each processor is paired with its own private memory, and the processor-memory pairs
communicate over an interconnection network. So in distributed-memory systems the processors usually communicate explicitly, by
sending messages or by using special functions that provide access to the memory of another processor.

To work in a distributed-memory (DM) environment, we need to install certain software to implement a cluster or grid architecture.

SMP: Symmetric Multi-Processing

SPMD: Single Program Multiple Data (Distributed Computing)

LAM-MPI: Local Area Multicomputer – Message Passing Interface

CLUSTER COMPUTING: (100s)

It has the following characteristics:

- Homogenous computers
- Dedicated computers
- Comes under a single authority.

GRID COMPUTING: (millions or billions)

It has the following characteristics:

- Network connected to the internet. (non-homogenous)
- Allows heterogeneity
- Not much time constraint.

- The most widely available distributed-memory systems are called clusters.
- They are composed of a collection of commodity systems (for example, PCs) connected by a commodity interconnection
network (for example, Ethernet).
- In fact, the nodes of these systems, the individual computational units joined together by the communication network, are
usually shared-memory systems with one or more multicore processors.
- To distinguish such systems from pure distributed-memory systems, they are sometimes called hybrid systems.
- The grid provides the infrastructure necessary to turn large networks of geographically distributed computers into a unified
distributed-memory system.
- In general, such a system will be heterogeneous, that is, the individual nodes may be built from different types of hardware.
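
A rough C sketch of explicit message passing with MPI, the interface that LAM-MPI implemented (compile with mpicc and launch two processes with mpirun -np 2):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                        /* exists only in process 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("process 1 received %d\n", value);   /* copied over the interconnect */
    }

    MPI_Finalize();
    return 0;
}
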

MULTITASKING VS MULTITHREADING VS MULTIPROCESSING

1. Single tasking / Uni-programming System

2. Batch Processing System

3. Multitasking/Multiprogramming:

Most modern operating systems are multitasking. This means that the operating system provides support for the apparent simultaneous
execution of multiple programs.

- Multitasking is possible even on a system with a single core, since each process runs for a small interval of time, often called
a time slice.
- In a multitasking OS if a process needs to wait for a resource—for example, it needs to read data from external storage—it
will block.
- Threading provides a mechanism for programmers to divide their programs into more or
less independent tasks with the property that when one thread is blocked another thread
can be run.
- Threads are contained within processes, so they can use the same executable, and they
usually share the same memory and the same I/O devices.
- In fact, two threads belonging to one process can share most of the process's resources.
- If a process is the “master” thread of execution and threads are started and stopped by
the process, then we can envision the process and its subsidiary threads as lines: when a
thread is started, it forks off the process; when a thread terminates, it joins the process.

4. Multithreading:
Hardware multithreading provides a means for systems to continue doing useful work when the task being currently executed has
stalled—for example, if the current task has to wait for data to be loaded from memory.

In a single program there are multiple execution paths. If one of the threads makes an I/O call, then only that thread is not
scheduled, and the rest execute as scheduled.

Multiprocessing Environment

Multiprocessing, in computing, is a mode of operation in which two or more processors in a computer simultaneously process two or
more different portions of the same program (set of instructions).

T3 is a thread of another process. It is possible that two tasks (T0 and T3) are running at
the same time.

MULTITASKING USING FORK()

Shell Command

ps – the Linux equivalent of the Windows Task/Resource Manager; it lists processes.

kill – to kill a process. It can also kill a process tree (multiple processes).

Unix/Linux System Calls in C/C++

getpid() – returns the process ID.

getppid() – returns the parent process ID.

system() – lets us run a shell command from within our own program. A computer user’s instruction that calls for action by the computer's
executive program.

fork() – creates a duplicate or clone of the calling program. It creates a child process which runs simultaneously with the parent process.

exec() – used when program A has to call program B.

execv() / execl() / execve() / execvp() / execlp()

wait() – used to wait for a child process.

waitpid() – used to wait for a particular child process.

kill() – used to kill another process from within our own program.

IN DETAIL

System Call: Executing a command

int system (const char * command)

This function executes command as a shell command: it lets us run another program from within our own program, but it does not
return the command's output to the caller (system() only returns an exit status).

3760 is the ID of the terminal.

The terminal (3760) created the program (4028), p1_pid.out.
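
A minimal sketch (the ls -l command string is just an example):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int status = system("ls -l");      /* run by the shell; only the exit status comes back */
    printf("shell returned status %d\n", status);
    return 0;
}
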

Fork() System Call

- System call fork() is used to create processes.


- It takes no arguments and returns a process ID.
- The purpose of fork() is to create a new process, which becomes the child process of the caller.
- After a new child process is created, both processes will execute the next instruction following the fork() system call.
- Therefore, we have to distinguish the parent from the child. This can be done by testing the returned value of fork():
o If fork() returns a negative value, the creation of a child process was unsuccessful.
o fork() returns a zero to the newly created child process.
o fork() returns a positive value, the process ID of the child process, to the parent. The returned process ID is of type
pid_t defined in sys/types.h. Normally, the process ID is an integer. Moreover, a process can use function getpid() to
retrieve the process ID assigned to this process

Therefore, after the system call to fork(), a simple test can tell which process is the child. Please note that Unix will make an
exact copy of the parent's address space and give it to the child. Therefore, the parent and child processes have separate
address spaces.

child_pid → data type: pid_t (normally an integer)
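
A minimal sketch of this return-value test (the printed order of parent and child may vary between runs):

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* both processes continue from the next line */

    if (pid < 0)
        perror("fork failed");         /* negative return: creation failed */
    else if (pid == 0)                 /* fork() returned 0: we are the child */
        printf("child:  pid=%d, ppid=%d\n", getpid(), getppid());
    else                               /* fork() returned the child's ID: we are the parent */
        printf("parent: pid=%d, child=%d\n", getpid(), pid);
    return 0;
}
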

vfork()

It does not make a copy. Instead, the child process created with vfork shares its parent's address space until it calls _exit or one of the
exec functions. In the meantime, the parent process suspends execution.

Process Creation

Process creation is by means of the kernel system call, fork( ).

This causes the OS, in Kernel Mode, to:

- Allocate a slot in the process table for the new process.
- Assign a unique process ID to the child process.
- Copy the process image of the parent, with the exception of any shared memory.
- Increment the counters for any files owned by the parent, to reflect that an additional process now also owns those files.
- Assign the child process to the Ready to Run state.
- Return the ID number of the child to the parent process, and a 0 value to the child process.

After Creation

After creating the process the Kernel can do one of the following, as part of the dispatcher routine:

- Stay in the parent process.
- Transfer control to the child process.
- Transfer control to another process.

System Call: Executing a file

It executes the file named by filename as a new process image.

int execv (const char *filename, char *const argv[])

int execvp (const char *filename, char *const argv[])

int execve (const char *filename, char *const argv[], char *const env[])

int execl (const char *filename, const char *arg0, ...)

int execlp (const char *filename, const char *arg0, ...)
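
A minimal sketch using execvp (the ls program is just an example):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char *const argv[] = { "ls", "-l", NULL };

    execvp("ls", argv);        /* replaces this process image; searches PATH for "ls" */
    perror("execvp failed");   /* reached only if the exec itself failed */
    return 1;
}
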

System Call: Waiting for a Process Completion

• pid_t waitpid (pid_t pid, int *status-ptr, int options)

• pid_t wait (int *status-ptr)

The calling process is suspended until the child process makes status information available by terminating.
Zombie Process (the program has completed execution but has not yet been finished, i.e., reaped by its parent.)

In Unix and Unix-like computer operating systems, a zombie process or defunct process is a process that has completed execution (via
the exit system call) but still has an entry in the process table: it is a process in the "Terminated state."
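
A minimal sketch that deliberately leaves a zombie for about 30 seconds (the sleep length is arbitrary; while it runs, ps shows the child as <defunct>):

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    if (fork() == 0)
        exit(0);     /* child terminates immediately via the exit system call */
    sleep(30);       /* parent never calls wait(), so the child stays in the process table */
    return 0;
}
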

ITERATIVE VS CONCURRENT SERVER USING FORK (Watch again)

Iterative Server

- It processes one request at a time.
- It is easy to build.
- Unnecessary delay.

Concurrent Server

- It handles multiple requests at one time.
- It is more difficult to design and build.
- Better performance. (A fork-based sketch follows below.)
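
A rough sketch of a concurrent TCP echo server using fork() (the port number and buffer size are arbitrary; most error handling is omitted):

#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                /* arbitrary example port */
    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 8);

    for (;;) {
        int conn = accept(listener, NULL, NULL);
        if (fork() == 0) {                      /* child: handle this one client */
            char buf[256];
            ssize_t n;
            close(listener);
            while ((n = read(conn, buf, sizeof buf)) > 0)
                write(conn, buf, n);            /* echo the data back */
            close(conn);
            exit(0);
        }
        close(conn);                            /* parent: return to accept() at once */
        while (waitpid(-1, NULL, WNOHANG) > 0)  /* reap finished children (no zombies) */
            ;
    }
}

Because the parent loops straight back to accept() while each child serves its client, multiple requests are handled at one time.
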
INTRO TO MULTITHREADING

Process Image

text - code segment

stack – local variable

data – global and static variables

heap – dynamic memory allocation

At a time, a single task is performed: when main is executing, routine1 and
routine2 are not executing. When routine1 is called, it starts executing
and the control is in its hands. The stack holds the variables of routine1. Once
routine1 finishes executing, control is taken back by main and the stack discards
all the routine1 variables. main resumes executing and then calls routine2.
Now main and routine1 are idle and routine2 starts executing. An interruption in
between will cause the program to stop.

Even when the fork command is used and a copy of the process is built, still
only one process runs at a time in the processor, although the two have separate process images.

In multitasking, a process has a single thread, so it cannot have multiple traces of execution.

In multithreading, two threads can work simultaneously.

In this code segment, main will start executing and spawn routine1.
Simultaneously, main continues executing and spawns routine2. Now
all three threads are executing at the same time in the processor, and both
routines' variables are stored in the stack (each thread has its own stack).

Advantage

- An interruption (for example, an I/O call) in any one thread will not
affect the execution of the others.

Problems in Multitasking
- Time to create new process
- Memory requirements
- Switching Time for scheduling

In a concurrent server, as we increase the number of clients using the fork command, memory use increases. For example, if one
client process takes 50 MB and there are 5 such clients, then 250 MB will be required.

In the Thread Control Block, there is a separate PC (program counter) for each thread. All the thread IDs are also in this block.

In the PCB, process-related information is stored, for example, how many files are opened by this program.

Suppose this single-threaded process takes 50 MB and the fork command is run, making a copy of it; the copy has the exact same
contents and takes the same amount of memory. Every time a copy is made, another 50 MB is used up.

Creating a new Process

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

void do_nothing(void) { /* placeholder for the child's work */ }

int main(void)
{
    int status;
    pid_t pid = fork();

    if (pid < 0) {                 /* fork failed */
        printf("fork failed, Error = %d\n", pid);
        exit(1);
    } else if (pid == 0) {         /* this is the child of the fork */
        do_nothing();
        exit(0);
    } else {                       /* this is the parent of the fork */
        waitpid(pid, &status, 0);  /* waitpid takes a pointer to the status variable */
    }
    return 0;
}

Benefits of Threads

- Less time to
o create a new thread than a process
o terminate a thread than a process
o switch between two threads within the same process
- Low Memory Requirements than Multiprogramming
- Since threads within the same process share memory and files, they can communicate with each other without invoking the
kernel.
MULTITHREADING IN C#

The Thread class provides the Join() method, which allows one thread to wait until another thread completes its execution. If t is a
Thread object whose thread is currently executing, then t.Join() will make sure that t is terminated before the next instruction is
executed by the program.

When do we join threads?

1- Join is used when one thread must wait for another to finish (let’s say thread A prepares a file and thread B cannot
continue until the file is ready).
2- If the function/thread is returning some value.

In the output, first all even numbers will be printed, then “T1 has ended” will be printed, and finally the odd numbers will be
printed at the end.

The Thread.Sleep() method can be used to pause the execution of the current thread for a specified
time in milliseconds.

If main ends, then

- In C, the threads also end and the program terminates.
- In C#, the threads don't end and the program waits for the threads to end.
MULTITHREADING IN JAVA

In Java, multiple inheritance of classes is not possible; instead, we use interfaces.

JAVA THREAD
For multithreading support in Java, we have to write the method run().

THREAD FUNCTION AND STATES

MULTITHREADING IN C

Concurrency Control

A thread routine is a callback function. The thread is not invoked directly; it is created and then called by the OS.

A callback function is a function passed into another function as an argument, which is then invoked inside the outer function to
complete some kind of routine or action. Here it is invoked by the OS. For example: events in a C# Windows Form.
POSIX Threads

- Historically, hardware vendors have implemented their own proprietary versions of threads.
- These implementations differed substantially from each other, making it difficult for programmers to develop portable
threaded applications.
- In order to take full advantage of the capabilities provided by threads, a standardized programming interface was required.
- For UNIX systems, this interface has been specified by the IEEE POSIX 1003.1c standard (1995).
- Latest version (IEEE POSIX 1003.1-2008)

Thread Management

• Creating and Terminating Threads

– pthread_create (thread, attr, start_routine, arg)

attr: 0 (NULL) → default thread attributes; to change the attribute type we use “pthread_attr_init (attr)”

start_routine: the function the thread will execute is passed here.

arg: the argument for that function is passed here.

– pthread_exit (status)

If the thread calls pthread_exit(status) inside its start_routine, the thread itself requests to exit and can return a value.

– pthread_cancel (thread)

It is used externally to kill a thread. No matter what state the thread is in at the time, it is forcefully killed.

– pthread_attr_init (attr)

– pthread_attr_destroy (attr)

• Joining and detaching Threads

– By default threads are joinable.

– pthread_join (threadid,status)

– pthread_detach (threadid)

– pthread_attr_setdetachstate (attr,detachstate)

– pthread_attr_getdetachstate (attr,detachstate)
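
Putting these calls together, a minimal sketch (the worker routine and thread count are illustrative; compile with gcc -pthread):

#include <pthread.h>
#include <stdio.h>

/* start_routine: prints its argument, then exits returning a value */
void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    pthread_exit((void *)(id * 10));   /* the thread itself requests to exit */
}

int main(void)
{
    pthread_t t[2];
    void *status;

    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);   /* NULL attr = defaults (joinable) */

    for (int i = 0; i < 2; i++) {
        pthread_join(t[i], &status);   /* block until thread i terminates */
        printf("thread %d returned %ld\n", i, (long)status);
    }
    return 0;
}
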
