1.

When a sparse matrix is represented with a 2-dimensional array, we waste a lot of
space to represent that matrix. For example, consider a matrix of size 100 x 100 containing
only 10 non-zero elements. In this matrix, only 10 cells are filled with non-zero values,
and the remaining cells are filled with zeros.
Representing a sparse matrix by a 2D array therefore wastes a great deal of memory, since the
zeros in the matrix are of no use in most cases. So, instead of storing the zeros along with the
non-zero elements, we store only the non-zero elements, each as a triple (row, column, value).
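A minimal sketch in Python of the triple representation described above (the helper name and the example matrix are illustrative, not from the original text):

```python
# Sketch: storing a sparse matrix as a list of (row, column, value) triples.

def to_triples(matrix):
    """Return the non-zero elements of `matrix` as (row, col, value) triples."""
    triples = []
    for r, row in enumerate(matrix):
        for c, value in enumerate(row):
            if value != 0:
                triples.append((r, c, value))
    return triples

# A 4x4 sparse matrix with only 3 non-zero elements:
m = [
    [0, 0, 5, 0],
    [0, 8, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 2],
]
print(to_triples(m))  # [(0, 2, 5), (1, 1, 8), (3, 3, 2)]
```

Only 3 triples are stored instead of 16 cells; for the 100 x 100 example, 10 triples replace 10,000 cells.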

Algorithm for PUSH operation:

1. Algorithm for PUSH operation: PUSH(STACK, TOP, SIZE, ITEM)
   Step 1: if TOP >= SIZE - 1 then PRINT "stack overflow" and Exit.
   Step 2: TOP = TOP + 1.
   Step 3: STACK[TOP] = ITEM.
   Step 4: Return.
2. Algorithm for POP operation: POP(STACK, TOP, ITEM)
   Step 1: if TOP = -1 then PRINT "stack is empty" and Exit.
   Step 2: ITEM = STACK[TOP].
   Step 3: TOP = TOP - 1.
   Step 4: Return.
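The PUSH/POP steps above can be sketched as a small Python class (a minimal sketch assuming TOP starts at -1 for an empty stack and SIZE is the fixed capacity; the class name is an assumption):

```python
# Array-based stack following the PUSH/POP algorithms above.

class ArrayStack:
    def __init__(self, size):
        self.size = size
        self.stack = [None] * size
        self.top = -1                      # -1 means the stack is empty

    def push(self, item):
        if self.top >= self.size - 1:      # Step 1: overflow check
            raise OverflowError("stack overflow")
        self.top += 1                      # Step 2: advance TOP
        self.stack[self.top] = item        # Step 3: store ITEM

    def pop(self):
        if self.top == -1:                 # Step 1: underflow check
            raise IndexError("stack is empty")
        item = self.stack[self.top]        # Step 2: ITEM = STACK[TOP]
        self.top -= 1                      # Step 3: shrink
        return item                        # Step 4: return ITEM

s = ArrayStack(3)
s.push(10); s.push(20)
print(s.pop())  # 20
```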

Linked list:
push(value) - inserting an element into the stack
Step 1 - Create a newNode with the given value.
Step 2 - Check whether the stack is empty (top == NULL).
Step 3 - If it is empty, then set newNode → next = NULL.
Step 4 - If it is not empty, then set newNode → next = top.
Step 5 - Finally, set top = newNode.
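The five steps above can be sketched in Python like this (a minimal sketch; the class and attribute names are illustrative, with None playing the role of NULL):

```python
# Linked-list stack: push follows Steps 1-5 above.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedStack:
    def __init__(self):
        self.top = None                # empty stack: top == None (NULL)

    def push(self, value):
        new_node = Node(value)         # Step 1: create newNode
        if self.top is None:           # Step 2: is the stack empty?
            new_node.next = None       # Step 3: empty -> next = NULL
        else:
            new_node.next = self.top   # Step 4: not empty -> next = top
        self.top = new_node            # Step 5: top = newNode

s = LinkedStack()
s.push(1); s.push(2)
print(s.top.value)       # 2
print(s.top.next.value)  # 1
```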

Quicksort is a divide-and-conquer algorithm. It works by selecting a 'pivot' element from the
array and partitioning the other elements into two sub-arrays, according to whether they are less
than or greater than the pivot. For this reason, it is sometimes called partition-exchange sort.
Difference
The number of comparisons in Bubble Sort grows rapidly with the array size, since each
element must be compared with the other elements, giving O(n^2) comparisons. Quick Sort achieves
faster sorting by partitioning around a pivot element, and performs better on larger arrays
thanks to its O(n log n) average-case complexity.
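A minimal quicksort sketch following the description above (choosing the first element as pivot is an illustrative assumption; other pivot strategies are common):

```python
# Quicksort: pick a pivot, partition the rest into smaller and larger
# elements, and recurse on each sub-array.

def quicksort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]                                # pivot: first element
    smaller = [x for x in arr[1:] if x <= pivot]  # elements <= pivot
    larger = [x for x in arr[1:] if x > pivot]    # elements > pivot
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

Note that with a first-element pivot, an already-sorted input triggers the O(n^2) worst case, which is why randomized or median-of-three pivots are often preferred.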

1. HASHING - Hashing in a data structure can produce an array index that is already
occupied by another value (a collision). In such a case, the hash table performs a search
operation and probes linearly for the next empty cell. Linear probing is known to be the easiest
way to resolve collisions in hash tables.
Types of Hashing
There are many different types of hash algorithms, such as RipeMD, Tiger, xxHash and more, but
the most common types of hashing used for file integrity checks are MD5, SHA-2 and CRC32.

MD5 - An MD5 hash function encodes a string of information into a 128-bit
fingerprint. MD5 is often used as a checksum to verify data integrity.
SHA-2 - SHA-2, developed by the National Security Agency (NSA), is a cryptographic hash
function. SHA-2 includes significant changes from its predecessor, SHA-1.
CRC32 - A cyclic redundancy check (CRC) is an error-detecting code often used to detect
accidental changes to data. Encoding the same data string using CRC32 will always result in
the same hash output, so CRC32 is sometimes used as a hash algorithm for file integrity
checks.

Open addressing is a collision handling technique used in hashing where, when a collision occurs
(i.e., when two or more keys map to the same slot), the algorithm looks for another empty slot in
the hash table.
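A minimal sketch of open addressing with linear probing (the table size, class name, and use of Python's built-in hash() are illustrative assumptions):

```python
# Open addressing with linear probing: on a collision, step forward one
# slot at a time (wrapping around) until an empty slot is found.

class LinearProbingTable:
    def __init__(self, size=7):
        self.size = size
        self.slots = [None] * size   # None marks an empty slot

    def put(self, key, value):
        index = hash(key) % self.size
        for _ in range(self.size):
            # Empty slot, or the same key (update in place): store here.
            if self.slots[index] is None or self.slots[index][0] == key:
                self.slots[index] = (key, value)
                return
            index = (index + 1) % self.size   # probe the next slot
        raise RuntimeError("hash table is full")

    def get(self, key):
        index = hash(key) % self.size
        for _ in range(self.size):
            if self.slots[index] is None:     # hit a gap: key is absent
                return None
            if self.slots[index][0] == key:
                return self.slots[index][1]
            index = (index + 1) % self.size
        return None

t = LinearProbingTable()
t.put("a", 1)
t.put("b", 2)
print(t.get("a"), t.get("b"))  # 1 2
```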

2. KRUSKAL'S - Kruskal's algorithm is a concept introduced in the graph theory
of discrete mathematics. It is used to find a minimum spanning tree of a
connected weighted graph. This algorithm converts the given graph into a forest, considering
each node as a separate tree, and then repeatedly joins trees with the cheapest available edge.
EXAMPLE - Kruskal's Algorithm is a popular algorithm used in graph theory to find the
Minimum Spanning Tree (MST) of a weighted graph. The MST is the subset of edges
that forms the most efficient way to connect all the vertices while minimizing the total weight.
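The algorithm can be sketched with a simple union-find structure (the example graph and function names are illustrative assumptions):

```python
# Kruskal's MST: sort edges by weight, add each edge that joins two
# different trees, skip edges that would form a cycle.

def kruskal(num_vertices, edges):
    """edges: list of (weight, u, v). Returns the MST edges and total weight."""
    parent = list(range(num_vertices))

    def find(x):                        # root of x's tree, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for weight, u, v in sorted(edges):  # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # different trees: no cycle is formed
            parent[ru] = rv             # union the two trees
            mst.append((u, v, weight))
            total += weight
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 2, 3), (3, 1, 3)]
mst, total = kruskal(4, edges)
print(total)  # 6
```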

3. a-b+mn^*op+-q)rs^*t+z

Solve ^ first, as it has the highest priority; then multiplication and division, followed by
addition and subtraction, which have the next priority and follow left-to-right associativity.
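The precedence rules above can be sketched with a small infix-to-postfix converter (a shunting-yard sketch; the sample expression is an assumption, since the original expression appears garbled):

```python
# Infix-to-postfix conversion applying the stated precedence: ^ highest,
# then * and /, then + and -; ^ is right-associative, the rest left-associative.

PRECEDENCE = {'^': 3, '*': 2, '/': 2, '+': 1, '-': 1}

def to_postfix(tokens):
    output, ops = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # Pop operators of higher precedence; also equal precedence for
            # the left-associative operators (+ - * /), but not for ^.
            while ops and ops[-1] != '(' and (
                PRECEDENCE[ops[-1]] > PRECEDENCE[tok]
                or (PRECEDENCE[ops[-1]] == PRECEDENCE[tok] and tok != '^')
            ):
                output.append(ops.pop())
            ops.append(tok)
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                output.append(ops.pop())
            ops.pop()                  # discard the '('
        else:
            output.append(tok)         # operand
    while ops:
        output.append(ops.pop())
    return ' '.join(output)

print(to_postfix(list("a-b+c*d^e")))  # a b - c d e ^ * +
```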

4. What is a Stack?
A stack is a type of data structure that is similar to a linear list and is represented by a
sequential collection of objects.

A stack can be thought of as a physical stack or pile, in which the items are stacked one on top
of the other like a stack of books. The objects are placed in such a way that it is only possible to
add new items to or remove existing items from one end of the stack, which is referred to as
the top of the stack.

What is an Array?
An array is a form of linear data structure that is always defined as a collection of items that
have the same data type.

The value of the array is always stored at a place that has been predetermined and is referred
to as the array's index.

Arrays are not dynamic objects like stacks; rather, their sizes remain constant throughout their
use. This means that once space is allocated for an array, its dimensions cannot be changed.

5. Binary Tree Data Structure

A tree whose elements have at most 2 children is called a binary tree. Since each element in a
binary tree can have only 2 children, we typically name them the left and right children.

Binary Search Tree Data Structure


A binary Search Tree is a node-based binary tree data structure that has the following
properties:

The left subtree of a node contains only nodes with keys less than the node’s key.
The right subtree of a node contains only nodes with keys greater than the node’s key.
The left and right subtree each must also be a binary search tree.
There must be no duplicate nodes.
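The four properties above can be sketched as a small BST (a minimal sketch; the names are illustrative):

```python
# Binary search tree: smaller keys go left, larger keys go right,
# duplicates are rejected.

class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    # equal keys fall through unchanged: no duplicate nodes
    return root

def search(root, key):
    if root is None:
        return False
    if key == root.key:
        return True
    return search(root.left, key) if key < root.key else search(root.right, key)

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))  # True False
```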

6. What is a Complete Binary Tree?

A complete binary tree is a special type of binary tree where all the levels of the tree are filled
completely except possibly the lowest level, whose nodes are filled from the left as far as possible.
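This property lets a complete binary tree be stored compactly in an array, with the children of index i at 2i+1 and 2i+2. A sketch of a completeness check using that layout (the function name and examples are illustrative assumptions, with None marking a missing node):

```python
# A tree stored in array form is complete exactly when no present node
# appears after a missing one in level order.

def is_complete(tree, i=0, n=None):
    """tree: list of node values in level order (None for a missing node)."""
    if n is None:
        n = len(tree)
    if i >= n:
        return True                    # ran past the array: nothing to check
    if tree[i] is None:
        # After the first gap, every later slot must also be empty.
        return all(v is None for v in tree[i:])
    # Recurse into both children at 2i+1 and 2i+2.
    return is_complete(tree, 2 * i + 1, n) and is_complete(tree, 2 * i + 2, n)

print(is_complete([1, 2, 3, 4, 5]))      # True
print(is_complete([1, 2, 3, None, 5]))   # False: gap before node 5
```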

7. Quick Sort is based on the concept of the Divide and Conquer algorithm, which is also the
concept used in Merge Sort. The difference is that in Quick Sort the significant work is done
while partitioning the array before the recursive calls, whereas in Merge Sort it is done while
merging the sub-arrays after them.
An algorithm is considered efficient if its resource consumption, also known as
computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means:
it will run in a reasonable amount of time or space on an available computer, typically as a
function of the size of the input.
In the previous post, we discussed how Asymptotic analysis overcomes the problems of the
naive way of analyzing algorithms. But let’s take an overview of the asymptotic notation and learn
about What is Worst, Average, and Best cases of an algorithm:

Popular Notations in Complexity Analysis of Algorithms


1. Big-O Notation
We define an algorithm’s worst-case time complexity using Big-O notation, which
describes the set of functions that grow slower than or at the same rate as the given
expression. It gives the maximum amount of time an algorithm may require, considered over all
input values.

2. Omega Notation
Omega notation defines the best case of an algorithm’s time complexity: it describes the set
of functions that grow faster than or at the same rate as the given expression. It gives the
minimum amount of time an algorithm requires, considered over all input values.

3. Theta Notation
Theta notation defines the average case of an algorithm’s time complexity: a function lies in
Theta(expression) when it lies in both O(expression) and Omega(expression). This is how we
define an average-case time complexity for an algorithm.
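As a worked example of the three notations (a standard illustration, not taken from the original text), consider f(n) = 3n² + 2n + 1:

```latex
% f(n) = 3n^2 + 2n + 1 is O(n^2), \Omega(n^2), and therefore \Theta(n^2):
3n^2 + 2n + 1 \;\le\; 3n^2 + 2n^2 + n^2 \;=\; 6n^2
  \quad (n \ge 1) \;\Rightarrow\; f(n) = O(n^2)
3n^2 + 2n + 1 \;\ge\; 3n^2
  \quad (n \ge 0) \;\Rightarrow\; f(n) = \Omega(n^2)
f(n) = O(n^2) \text{ and } f(n) = \Omega(n^2)
  \;\Rightarrow\; f(n) = \Theta(n^2)
```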

Measurement of Complexity of an Algorithm


Based on the above three notations of Time Complexity there are three cases to analyze an
algorithm:

1. Worst Case Analysis (Mostly used)


In the worst-case analysis, we calculate an upper bound on the running time of an algorithm.
We must know the case that causes the maximum number of operations to be executed. For
linear search, the worst case happens when the element to be searched (x) is not present in
the array: the search() function then compares x with every element of arr[]
one by one. Therefore, the worst-case time complexity of linear search is O(n).
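A sketch of the linear search whose worst case is described above (the comparison counter is an addition for illustration, to make the O(n) worst case visible):

```python
# Linear search: when x is absent, every element of arr is compared,
# so the comparison count equals n -- the O(n) worst case.

def search(arr, x):
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == x:
            return i, comparisons      # found at index i
    return -1, comparisons             # worst case: compared all n elements

arr = [3, 7, 1, 9, 5]
print(search(arr, 9))   # (3, 4)
print(search(arr, 42))  # (-1, 5)
```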
