
What is a Data Structure, Exactly?

A data structure is a way of organizing and storing data to perform operations
efficiently. It defines a set of rules for organizing and managing data, providing a way to
store, access, and manipulate information. Data structures are essential components in
computer science and are used in the design and implementation of algorithms.

There are various types of data structures, each with its own set of rules and
characteristics. Some common data structures include:

1. Arrays: An ordered collection of elements, each identified by an index or a key.


2. Linked Lists: Elements are stored in nodes, and each node points to the next one in the
sequence.
3. Stacks: A last-in, first-out (LIFO) data structure where elements are added and removed
from the same end, resembling a stack of plates.
4. Queues: A first-in, first-out (FIFO) data structure where elements are added at the rear
and removed from the front.
5. Trees: A hierarchical data structure with a root element and subtrees of children with a
parent-child relationship.
6. Graphs: A collection of nodes connected by edges, representing relationships between
elements.
7. Hash Tables: A structure that uses a hash function to compute an index into an array of
buckets or slots, from which the desired value can be found.

The need for data structures


Data structures play a fundamental role in computer science and programming because
they provide efficient ways to organize, store, and manipulate data. Here are some key
reasons why data structures are essential:

1. Organization of Data:
 Efficient Storage: Data structures allow for the efficient storage of data, ensuring
that information is organized in a way that enables quick and easy access.
 Logical Representation: They provide a logical and hierarchical representation
of data, making it easier for programmers to conceptualize and work with
complex datasets.
2. Data Retrieval and Search:
 Fast Retrieval: Well-designed data structures facilitate fast retrieval of data. For
example, using data structures like arrays or hash tables can significantly speed
up the search process.
 Search Algorithms: Data structures are closely tied to search algorithms, which
are crucial for finding specific pieces of information within a dataset.
3. Memory Management:
 Dynamic Memory Allocation: Data structures allow for dynamic memory
allocation, enabling programs to adapt to changing data sizes during runtime.
 Memory Efficiency: They help in utilizing memory efficiently by avoiding
unnecessary storage and reducing memory fragmentation.
4. Algorithm Design:
 Algorithm Efficiency: Efficient algorithms often rely on the appropriate choice of
data structures. The same algorithm may have different time complexities based
on the underlying data structures used.

Data Structure Operations


Insertion
Deletion
Update
Searching
Sorting
Merging
1. Insertion:
 Array: Inserting an element in an array involves shifting elements to
accommodate the new element.
 Linked List: Adding a new node involves updating the pointers to maintain the
sequence.
 Binary Search Tree (BST): Inserting a node follows the rules of the BST to
maintain the ordering.
2. Deletion:
 Array: Deleting an element involves shifting elements to fill the gap.
 Linked List: Removing a node involves updating the pointers to bypass the node
being deleted.
 Binary Search Tree (BST): Deleting a node requires maintaining the BST
properties.
3. Update:
 Array: Modifying an element involves accessing and changing the value directly.
 Linked List: Updating a node involves modifying the value in the node.
 Binary Search Tree (BST): Updating involves deleting the old node and inserting
a new one.
4. Searching:
 Array: Linear search or binary search for sorted arrays.
 Linked List: Linear search as elements are not necessarily stored in contiguous
memory locations.
 Binary Search Tree (BST): Searching follows the rules of the BST, allowing for
efficient search operations.
5. Sorting:
 Array: Various sorting algorithms like Bubble Sort, Insertion Sort, Merge Sort,
Quick Sort, etc.
 Linked List: Merge Sort is often preferred for linked lists due to its structure.
 Binary Search Tree (BST): In-order traversal of a BST results in a sorted
sequence.
6. Merging:
 Array: Merging involves combining two sorted arrays into a single sorted array.
 Linked List: Merging two sorted linked lists involves rearranging pointers to
create a single sorted linked list.
 Binary Search Tree (BST): Merging two BSTs involves inserting all elements of
one tree into the other.
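The merge described for arrays in item 6 can be sketched in Python; this is a minimal sketch, and the function name `merge_sorted` is illustrative:

```python
def merge_sorted(a, b):
    """Merge two sorted lists into one sorted list in O(len(a) + len(b)) time."""
    result = []
    i = j = 0
    # Repeatedly take the smaller front element of the two lists.
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    # One list is exhausted; append the remainder of the other.
    result.extend(a[i:])
    result.extend(b[j:])
    return result

print(merge_sorted([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```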

Algorithm Definition
An algorithm is a step-by-step set of instructions or a well-defined computational
procedure that takes an input, performs a sequence of operations, and produces an
output. Essentially, it is a method or a systematic way of solving a problem or
performing a task. Algorithms are used in various fields such as computer science,
mathematics, and engineering, and they play a crucial role in computer programming.

Key characteristics of algorithms include:

1. Input: The algorithm takes some input data or parameters.


2. Output: It produces a result or output based on the input.
3. Definiteness: Each step of the algorithm must be precisely defined and unambiguous.
4. Finiteness: The algorithm should terminate after a finite number of steps.
5. Effectiveness: The steps of the algorithm should be practical and achievable using basic
operations.
6. Generality: An algorithm is designed to solve a general class of problems, not just
specific instances.
Good Algorithm

The term "good algorithm" can be subjective and context-dependent. The effectiveness of an
algorithm depends on the specific problem it aims to solve, the constraints of the system it
operates in, and the goals or criteria used to evaluate its performance.

1. Correctness: A good algorithm should produce the correct output for all valid inputs. It
should accurately solve the intended problem.
2. Efficiency: An efficient algorithm accomplishes the task using a reasonable amount of
resources such as time, memory, or energy.
3. Scalability: A good algorithm should be able to handle larger inputs or datasets without
a significant increase in resource requirements.
4. Clarity and Simplicity: A good algorithm is often simple, easy to understand, and
well-organized. Clarity in design and implementation contributes to maintainability.
5. Robustness: The algorithm should be able to handle unexpected inputs or edge cases
gracefully, avoiding crashes or incorrect results.
Various factors that impact performance
Hardware
Operating system
Compiler
Size of input
Nature of input
Algorithm
1. Hardware:
 The physical components of a computer, such as the CPU, RAM, storage, and
GPU, can significantly affect the performance of a program. Faster and more
powerful hardware generally leads to better performance.
2. Operating System:
 The choice of the operating system can influence how a program interacts with
the underlying hardware. Different operating systems have different performance
characteristics and may offer various optimizations.
3. Compiler:
 The compiler translates source code into machine code that can be executed by
the computer. The efficiency of the compiler and the optimizations it applies can
impact the performance of the resulting executable.
4. Size of Input:
 The amount of data or the size of the input can affect how efficiently a program
runs. Some algorithms may have different time or space complexities based on
the size of the input.
5. Nature of Input:
 The type of data and its characteristics can also impact performance. For
example, certain algorithms may perform differently on sorted or unsorted data.
6. Algorithm:
 The choice of algorithm is crucial. Different algorithms solve problems in different
ways, and their efficiency can vary based on the specific requirements of the task
at hand.

Be careful to differentiate between performance and complexity
1. Performance:
 Definition: Performance refers to how well a system, process, or product
accomplishes its intended function. It is often measured in terms of speed,
efficiency, responsiveness, throughput, or other relevant metrics.
 Examples:
 In computing, the performance of a software application may be assessed
based on its execution speed, response time, and resource utilization.
 In engineering, the performance of a car could be evaluated based on its
speed, fuel efficiency, and handling capabilities.
 Measurement: Performance is typically measured quantitatively using specific
metrics, and improvements are often sought to enhance the overall effectiveness
of a system.
2. Complexity:
 Definition: Complexity refers to the intricacy of a system, design, or process, or
the number of components involved in it. It is a qualitative measure
of how difficult a system is to understand, manage, or develop.
 Examples:
 In software development, the complexity of a codebase may be
determined by the number of lines of code, the level of nesting, and the
presence of intricate algorithms.
 In project management, the complexity of a project could be influenced by
the number of stakeholders, dependencies, and the overall structure of
tasks.
 Measurement: Complexity is often a subjective measure, although there are
tools and models that attempt to quantify it based on certain criteria. However,
it's not as straightforward to measure as performance.
Comparing Functions: Asymptotic Notation

Asymptotic notation is a mathematical tool used in computer science and mathematics
to describe the behavior of functions as their input approaches certain values, often
infinity. There are three main asymptotic notations: big O notation, omega notation, and
theta notation. These notations help in comparing the growth rates of functions.

1. Big O Notation (O):


 Definition: Let f(x) and g(x) be two functions defined on some subset of the
real numbers. We say that f(x) is O(g(x)) (read as "big O of g(x)") if there exist
constants C and k such that ∣f(x)∣≤C⋅∣g(x)∣ for all x>k.
 Meaning: f(x) grows at most as fast as g(x) for sufficiently large x.
2. Omega Notation (Ω):
 Definition: Let f(x) and g(x) be two functions defined on some subset of the
real numbers. We say that f(x) is Ω(g(x)) (read as "omega of g(x)") if there exist
constants C and k such that ∣f(x)∣ ≥ C⋅∣g(x)∣ for all x > k.
 Meaning: f(x) grows at least as fast as g(x) for sufficiently large x.
3. Theta Notation (Θ):
 Definition: Let f(x) and g(x) be two functions defined on some subset of the
real numbers. We say that f(x) is Θ(g(x)) (read as "theta of g(x)") if f(x) is both
O(g(x)) and Ω(g(x)).
 Meaning: f(x) grows at the same rate as g(x) for sufficiently large x.
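As a worked example of the big O definition: f(x) = 3x² + 2x is O(x²), because with the witness constants C = 5 and k = 1 we have 3x² + 2x ≤ 3x² + 2x² = 5x² for all x > 1. A quick Python sanity check of this claim (names are illustrative):

```python
def f(x):
    return 3 * x * x + 2 * x

# Claim: f(x) is O(x^2) with witness constants C = 5, k = 1,
# since 3x^2 + 2x <= 3x^2 + 2x^2 = 5x^2 whenever x >= 1.
C, k = 5, 1
assert all(abs(f(x)) <= C * x * x for x in range(k + 1, 1000))
print("f(x) = 3x^2 + 2x is O(x^2) for the tested range")
```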

Standard Analysis Techniques


constant time statements
analyzing loops
analyzing nested loops
analyzing sequences of statements
analyzing conditional statements
1. Constant Time Statements:
 Operations that take a constant amount of time, regardless of the input size.
 Examples include variable assignments, basic arithmetic operations, and function
calls that have constant time complexity.
2. Analyzing Loops:
 Time complexity of a loop is typically expressed in terms of the number of
iterations.
 For a loop with 'n' iterations, the time complexity is often O(n) if the loop body
has constant time complexity.
 Nested loops result in multiplied complexities. For example, if you have a loop
inside another loop, the time complexity may become O(n^2).
3. Analyzing Nested Loops:
 For nested loops, the time complexity is often the product of the complexities of
the individual loops.
 For two nested loops with sizes 'n' and 'm', the time complexity is O(n * m).
4. Analyzing Sequence of Statements:
 When statements are executed sequentially, the time complexity is the sum of
the complexities of each statement.
 For example, if you have statements with complexities O(a), O(b), and O(c) in
sequence, the overall time complexity is O(a + b + c).
5. Analyzing Conditional Statements:
 The time complexity of conditional statements (if-else) depends on the condition
and the complexity of the code inside each branch.
 If the condition check and the code inside each branch have constant time
complexity, the overall time complexity is often expressed as the maximum of the
complexities of the branches.
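The nested-loop rule above can be illustrated with a small sketch: a loop inside a loop whose body is constant time executes n · n times, giving O(n²) (the function name is illustrative):

```python
def count_pairs(items):
    """Count how often the body of a nested loop runs: n * n times, i.e. O(n^2)."""
    operations = 0
    n = len(items)
    for i in range(n):        # outer loop: n iterations
        for j in range(n):    # inner loop: n iterations per outer iteration
            operations += 1   # constant-time body
    return operations

# For n = 10 the body executes 10 * 10 = 100 times.
print(count_pairs(list(range(10))))  # 100
```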
What is a Linked List?

 A Linked List is a linear data structure which looks like a chain of nodes, where each
node is a different element. Unlike Arrays, Linked List elements are not stored at a
contiguous location.
 It is basically a chain of nodes; each node contains information such as data and a pointer
to the next node in the chain. A linked list has a head pointer, which points to
the first element of the linked list; if the list is empty, the head simply points to null or
nothing.
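A minimal sketch of a node and a traversal in Python (the `Node` class and names are illustrative):

```python
class Node:
    """A single element of a singly linked list."""
    def __init__(self, data):
        self.data = data
        self.next = None  # pointer to the next node; None marks the end of the chain

head = Node(1)            # head points to the first element
head.next = Node(2)
head.next.next = Node(3)

# Traverse the chain by following the next pointers until None.
node, values = head, []
while node is not None:
    values.append(node.data)
    node = node.next
print(values)  # [1, 2, 3]
```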

Why linked list data structure needed?

The following advantages of a linked list, listed below, help explain why this data structure
is needed.

Dynamic data structure: Memory can be allocated or de-allocated at run time as elements
are inserted or deleted.

Ease of Insertion/Deletion: The insertion and deletion of elements are simpler than in arrays, since
no elements need to be shifted after insertion or deletion; only the addresses (pointers) need to be
updated.

Efficient Memory Utilization: Because a linked list is a dynamic data structure, its size
increases or decreases as required, which avoids wasted memory.

Implementation: Various advanced data structures can be implemented using a linked list like a
stack, queue, graph, hash maps, etc.

Types of linked lists:


There are mainly three types of linked lists:
Singly linked list
Doubly linked list
Circular linked list

1. Singly-linked list

Traversal of items can be done in the forward direction only due to the linking of every node to its next
node.
Commonly used operations on Singly Linked List:

Insertion: The insertion operation can be performed in three ways. They are as follows…

 Inserting At the Beginning of the list


 Inserting At End of the list
 Inserting At Specific location in the list

Deletion: The deletion operation can be performed in three ways. They are as follows…
 Deleting from the Beginning of the list
 Deleting from the End of the list
 Deleting a Specific Node

Search: It is a process of determining and retrieving a specific node either from the front, the end or
anywhere in the list.

Display: This process displays the elements of a Single-linked list.

Inserting At the Beginning of the list


• To insert a node at the start/beginning/front of a Linked List, we need to:

• Make the new node point to the current first node of the Linked List

• Detach the head from the original first node of the Linked List

• Make the new node the head of the Linked List.


Inserting At Specific location in the list
• To insert a node after a given node in a Linked List, we need to:

• Check whether the given node exists.

• If it does not exist,

• terminate the process.

• If the given node exists,

• Make the element to be inserted a new node

• Point the next pointer of the new node to the original next node of the given node

• Change the next pointer of the given node to the new node

Inserting At End of the list


• To insert a node at the end of a Linked List, we need to:

• Go to the last node of the Linked List

• Change the next pointer of the last node from NULL to the new node
• Make the next pointer of the new node NULL to mark the end of the Linked List
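The three insertion cases above can be sketched in Python; this is a minimal sketch, and the function names (`insert_front`, `insert_after`, `insert_end`) are illustrative:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def insert_front(head, data):
    """Insert at the beginning: new node points to the old head and becomes the head."""
    node = Node(data)
    node.next = head
    return node

def insert_after(given, data):
    """Insert after a given node, if it exists."""
    if given is None:          # the given node does not exist: terminate
        return
    node = Node(data)
    node.next = given.next     # new node inherits the given node's successor
    given.next = node          # the given node now points to the new node

def insert_end(head, data):
    """Insert at the end: walk to the last node, then link the new node."""
    node = Node(data)          # node.next is already None, marking the new end
    if head is None:
        return node
    last = head
    while last.next is not None:
        last = last.next
    last.next = node
    return head

head = insert_front(None, 2)   # list: 2
head = insert_front(head, 1)   # list: 1 -> 2
head = insert_end(head, 4)     # list: 1 -> 2 -> 4
insert_after(head.next, 3)     # list: 1 -> 2 -> 3 -> 4
```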

2) Doubly linked list

Traversal of items can be done in both forward and backward
directions, as every node contains an additional prev pointer that
points to the previous node.

Commonly used operations on Double-Linked List:


Insertion: The insertion operation can be performed in four ways as follows:

1. Inserting At the Beginning of the list

2. Inserting after a given node.

3. Inserting at the end.

4. Inserting before a given node

Deletion: The deletion operation can be performed in three ways as follows…

1. Deleting from the Beginning of the list

2. Deleting from the End of the list

3. Deleting a Specific Node

Display: This process displays the elements of a double-linked list.

Add a node at the front in a Doubly Linked List:

1. Firstly, allocate a new node (say new_node).


2. Now put the required data in the new node.

3. Make the next of new_node point to the current head of the doubly linked list.

4. Make the previous of the current head point to new_node.

5. Lastly, point head to new_node.
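The five steps above can be sketched in Python (`DNode` and `push_front` are illustrative names):

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

def push_front(head, data):
    """Add a node at the front of a doubly linked list; returns the new head."""
    new_node = DNode(data)      # steps 1-2: allocate the node and store the data
    new_node.next = head        # step 3: new node's next is the current head
    if head is not None:
        head.prev = new_node    # step 4: old head's prev points back to the new node
    return new_node             # step 5: the new node is the new head

head = push_front(None, 3)
head = push_front(head, 2)
head = push_front(head, 1)      # forward order: 1 <-> 2 <-> 3
```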

Add a node in between two nodes:

1. Firstly create a new node (say new_node).

2. Now insert the data in the new node.

3. Point the next of new_node to the next of prev_node.

4. Point the next of prev_node to new_node.

5. Point the previous of new_node to prev_node.

6. Change the previous pointer of the node after new_node to point to new_node.


Add a node before a given node in a Doubly Linked List:

1. Allocate memory for the new node; let it be called new_node.

2. Put the data in new_node.

3. Set the previous pointer of new_node to the previous node of next_node.

4. Set the previous pointer of next_node to new_node.

5. Set the next pointer of new_node to next_node.

6. Now fix the pointer of the node before new_node:

1. If the previous node of new_node is not NULL, then set the next pointer of this
previous node to new_node.

2. Else, if the prev of new_node is NULL, new_node will be the new head node.
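The steps above can be sketched in Python, assuming we are given the node (`next_node`) before which to insert; the names are illustrative:

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

def insert_before(head, next_node, data):
    """Insert a new node before next_node; returns the (possibly new) head."""
    new_node = DNode(data)            # steps 1-2: allocate and fill the node
    new_node.prev = next_node.prev    # step 3: inherit next_node's old predecessor
    next_node.prev = new_node         # step 4
    new_node.next = next_node         # step 5
    if new_node.prev is not None:     # step 6.1: relink the previous node's next
        new_node.prev.next = new_node
        return head
    return new_node                   # step 6.2: no previous node, so new head

# Build 1 <-> 3, then insert 2 before the node holding 3.
a = DNode(1)
c = DNode(3)
a.next, c.prev = c, a
head = insert_before(a, c, 2)         # 1 <-> 2 <-> 3
```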

3. Circular linked lists


• A circular linked list is a type of linked list in which the first and the last nodes are also
connected to each other to form a circle, there is no NULL at the end.

Commonly used operations on Circular Linked List:


Insertion: The insertion operation can be performed in four ways:
1. Insertion in an empty list
2. Insertion at the beginning of the list
3. Insertion at the end of the list
4. Insertion in between the nodes
Deletion: The deletion operation can be performed in three ways:
• Deleting from the Beginning of the list
• Deleting from the End of the list
• Deleting a Specific Node
Display: This process displays the elements of a Circular linked list.

Implementation of circular linked list:


To implement a circular singly linked list, we take an external pointer that points to the last
node of the list. If we have a pointer last pointing to the last node, then last -> next will point to
the first node.
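A minimal Python sketch of this last-pointer implementation (the names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def append(last, data):
    """Insert at the end of a circular singly linked list.

    'last' is the external pointer to the last node; last.next is the first node.
    Returns the new last node.
    """
    node = Node(data)
    if last is None:           # empty list: the single node points to itself
        node.next = node
    else:
        node.next = last.next  # new last node points to the first node
        last.next = node       # old last node points to the new node
    return node

last = None
for value in [1, 2, 3]:
    last = append(last, value)

# last.next is the first node; traverse exactly once around the circle.
node, values = last.next, []
while True:
    values.append(node.data)
    node = node.next
    if node is last.next:
        break
print(values)  # [1, 2, 3]
```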

Locations and run times


The most obvious data structures for implementing an abstract list are arrays
and linked lists
• We will review the run time operations on these structures
We will consider the amount of time required to perform actions
such as finding, inserting new entries before or after, or erasing entries at
• the first location (the front)
• an arbitrary (kth) location
• the last location (the back or nth)
The run times will be Θ(1), O(n) or Θ(n)

Linked List vs. Array

Linked List vs. Array in Time Complexity

Operation                                      Linked list    Array

Random Access                                  O(N)           O(1)
Insertion and deletion at beginning            O(1)           O(N)
Insertion and deletion at end                  O(N)           O(1)
Insertion and deletion at a random position    O(N)           O(N)

Advantages of Linked Lists:


• Dynamic nature: Linked lists are used for dynamic memory allocation.

• Memory efficient: Memory consumption of a linked list is efficient because its size can grow
or shrink dynamically according to our requirements, which means effective memory
utilization and no memory wastage.

• Ease of Insertion and Deletion: Insertion and deletion of nodes are easily implemented
in a linked list at any position.

• Implementation: Linked lists are used for the implementation of stacks and queues and for the
representation of trees and graphs.

• The linked list can be expanded in constant time.

Disadvantages of Linked Lists:


• Memory usage: Linked lists make heavier use of pointers, which makes them more complex and
requires more memory.

• Accessing a node: Random access is not possible because nodes are not stored contiguously; a node can only be reached by traversal.

• Search operation costly: Searching for an element is costly and requires O(n) time
complexity.

• Traversing in reverse order: Traversing is more time-consuming, and reverse traversal
is not possible in singly linked lists.

Applications of Linked List:


Here are some of the applications of a linked list:
• Linear data structures such as stacks and queues, and non-linear data structures such as hash
maps and graphs, can be implemented using linked lists.

• Dynamic memory allocation: We use a linked list of free blocks.

• Implementation of graphs: The adjacency list representation of graphs is the most popular,
and it uses linked lists to store adjacent vertices.

• In web browsers and editors, doubly linked lists can be used to implement forward and
backward navigation buttons.

• A circular doubly linked list can also be used for implementing data structures like
Fibonacci heaps.

Introduction to stack data structure


• A stack is a linear data structure in which the insertion of a new element and removal of an
existing element takes place at the same end represented as the top of the stack.

• To implement the stack, it is required to maintain the pointer to the top of the stack, which is
the last element to be inserted because we can access the elements only on the top of the
stack.

LIFO( Last In First Out ):

• This strategy states that the element that is inserted last will come out first. You can take a pile
of plates kept on top of each other as a real-life example. The plate which we put last is on the
top and since we remove the plate that is at the top, we can say that the plate that was put last
comes out first.

Basic Operations on Stack

• In order to make manipulations in a stack, there are certain operations provided to us.

• push() to insert an element into the stack

• pop() to remove an element from the stack

• top() Returns the top element of the stack.

• isEmpty() returns true if stack is empty else false.

• size() returns the size of stack.


Push:
Algorithm for push:

begin
   if stack is full
      return
   else
      increment top
      stack[top] := value
   end if
end procedure

Pop:
Algorithm for pop:
begin
   if stack is empty
      return
   else
      value := stack[top]
      decrement top
      return value
   end if
end procedure

Top:
Returns the top element of the stack.

Algorithm for Top:


begin
return stack[top]
end procedure

isEmpty:
Returns true if the stack is empty, else false.
Algorithm for isEmpty:
begin
   if top < 1
      return true
   else
      return false
   end if
end procedure
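The four algorithms above can be combined into a small array-backed stack. This is a minimal Python sketch using 0-based indexing, so the empty stack has top = -1 (unlike the 1-based pseudocode); the class and method names are illustrative:

```python
class ArrayStack:
    """Fixed-capacity stack backed by an array, mirroring the algorithms above."""
    def __init__(self, capacity):
        self.stack = [None] * capacity
        self.top = -1                         # -1 means the stack is empty

    def push(self, value):
        if self.top == len(self.stack) - 1:   # stack is full
            raise OverflowError("stack overflow")
        self.top += 1                         # increment top
        self.stack[self.top] = value          # stack[top] := value

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        value = self.stack[self.top]          # store value of stack[top]
        self.top -= 1                         # decrement top
        return value

    def peek(self):
        if self.is_empty():
            raise IndexError("stack is empty")
        return self.stack[self.top]           # top element without removing it

    def is_empty(self):
        return self.top < 0

s = ArrayStack(3)
s.push(10)
s.push(20)
print(s.peek())      # 20
print(s.pop())       # 20
print(s.pop())       # 10
print(s.is_empty())  # True
```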

Types of Stacks:
• Fixed Size Stack: As the name suggests, a fixed size stack has a fixed size and cannot grow or
shrink dynamically. If the stack is full and an attempt is made to add an element to it, an
overflow error occurs. If the stack is empty and an attempt is made to remove an element from
it, an underflow error occurs.

• Dynamic Size Stack: A dynamic size stack can grow or shrink dynamically. When the stack is full,
it automatically increases its size to accommodate the new element, and when the stack is
empty, it decreases its size. This type of stack is implemented using a linked list, as it allows for
easy resizing of the stack.

Implementation of Stack:
• A stack can be implemented using an array or a linked list

• In an array-based implementation, the push operation is implemented by incrementing the
index of the top element and storing the new element at that index. The pop operation is
implemented by decrementing the index of the top element and returning the value stored at
that index.

• In a linked list-based implementation, the push operation is implemented by creating a new
node with the new element, setting the next pointer of the new node to the current top node,
and making the new node the top. The pop operation is implemented by moving the top
pointer to the next node and returning the value of the removed node.
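A minimal sketch of the linked list-based stack just described (the names are illustrative):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedStack:
    """Stack as a linked list: the head of the list is the top of the stack."""
    def __init__(self):
        self.top = None

    def push(self, value):
        node = Node(value)
        node.next = self.top      # new node points at the old top
        self.top = node           # new node becomes the top

    def pop(self):
        if self.top is None:
            raise IndexError("stack underflow")
        value = self.top.data
        self.top = self.top.next  # bypass the removed node
        return value

s = LinkedStack()
s.push(1)
s.push(2)
print(s.pop())  # 2
print(s.pop())  # 1
```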

Array implementation:
Advantages:

1. Random Access: Arrays provide constant-time access to elements based on their index.
This means you can directly access any element in the array by using its position, which
makes retrieval efficient.
2. Memory Efficiency: Arrays have a contiguous memory allocation, which makes them
memory-efficient. Elements are stored in adjacent memory locations, allowing for easy
traversal.
3. Simple Implementation: Arrays are simple and straightforward to implement. They are
supported by most programming languages and are often part of the core language
features.
4. Predictable Performance: Due to constant-time access and straightforward memory
organization, the performance characteristics of arrays are predictable and generally
reliable.

Disadvantages:

1. Fixed Size: The size of an array is fixed at the time of declaration, which can lead to
inefficiencies when the number of elements is not known in advance or changes
dynamically. This limitation is addressed by dynamic arrays or other data structures like
linked lists.
2. Insertion and Deletion Overhead: Inserting or deleting elements in an array, especially
in the middle, can be inefficient. This is because existing elements may need to be
shifted to accommodate the new element or fill the gap left by the deleted one.
3. Wasted Memory: If the array size is larger than the number of elements it contains,
there is wasted memory. This is particularly significant if memory is a critical resource.
4. Sequential Storage Requirement: Elements in an array must be stored in contiguous
memory locations. This requirement can limit the efficiency of memory utilization,
especially in scenarios where memory is fragmented.
5. Lack of Dynamic Resizing: Traditional arrays have a fixed size, and resizing them
requires creating a new array and copying elements. This operation can be
time-consuming, especially for large arrays.

Linked List implementation:


Advantages:

1. Dynamic Size:
 Linked lists can easily grow or shrink in size during program execution, as
memory can be allocated or deallocated dynamically.
2. Easy Insertion and Deletion:
 Inserting or deleting elements in a linked list is efficient and can be done in
constant time, given a reference to the node where the operation is to be
performed.
3. No Wastage of Memory:
 Linked lists do not waste memory like arrays do when there is a need to allocate a
fixed size in advance. Memory is allocated as needed.
4. No Need for Contiguous Memory:
 Unlike arrays, linked lists do not require contiguous memory. Each element can
be located at a different memory location, allowing for more efficient memory
utilization.
5. Ease of Implementation:
 Implementing a basic linked list is simpler than managing arrays, especially when
it comes to resizing or adding/removing elements.

Disadvantages:

1. Random Access is Slow:


 Unlike arrays, accessing elements in a linked list is not as efficient. To access an
element, you have to traverse the list from the beginning until you reach the
desired node, resulting in O(n) time complexity for random access.

2. Extra Memory Overhead:


 Each element in a linked list requires additional memory to store the reference or
pointer to the next element, leading to higher memory overhead compared to
arrays.
3. Sequential Access:
 Iterating through a linked list is efficient for sequential access but not as
cache-friendly as arrays. This can result in slower performance, especially in scenarios
where data locality is crucial.
4. More Complex Implementation:
 Implementing advanced features, such as doubly linked lists or circular linked
lists, can be more complex than managing arrays.
5. Not Suitable for Small Data Sets:
 For small data sets, the overhead associated with linked lists may outweigh their
benefits. In such cases, simpler data structures like arrays might be more
appropriate.
Queue
• A queue is a linear data structure that is open at both ends and the operations are
performed in First In First Out (FIFO) order.

• We define a queue to be a list in which all additions to the list are made at one end, and
all deletions from the list are made at the other end. The element that is pushed into
the queue first is the first element on which the delete operation is performed.

FIFO Principle of Queue:


• A Queue is like a line waiting to purchase tickets, where the first person in line is the first person
served. (i.e. First come first serve).

• The position of the entry in a queue ready to be served, that is, the first entry that will be removed
from the queue, is called the front of the queue (sometimes the head of the queue). Similarly, the
position of the last entry in the queue, that is, the one most recently added, is called the rear (or
the tail) of the queue.

Characteristics of Queue:

1. FIFO (First In, First Out): The foremost characteristic of a queue is that it follows the
FIFO principle. The first element added to the queue is the first one to be removed.
2. Enqueue: Elements are added to the rear or back of the queue. This operation is often
called "enqueue."
3. Dequeue: Elements are removed from the front or head of the queue. This operation is
often called "dequeue."
4. Two-ended Access: A queue is operated on at both ends: elements are inserted at
the rear and removed from the front.
5. Dynamic Size: Queues can dynamically adjust their size to accommodate varying
amounts of data.
6. Limited Access: Unlike arrays or linked lists, where you can access elements at any
position, queues allow access to only the front and rear elements.

Basic Operations on Queue:

• enqueue(): Inserts an element at the end of the queue i.e. at the rear end.

• dequeue(): This operation removes and returns an element that is at the front end of the queue.

• front(): This operation returns the element at the front end without removing it.

• rear(): This operation returns the element at the rear end without removing it.

• isEmpty(): This operation indicates whether the queue is empty or not.

• isFull(): This operation indicates whether the queue is full or not.

• size(): This operation returns the size of the queue i.e. the total number of elements it contains.

Enqueue():

Enqueue() operation in Queue adds (or stores) an element to the end of the queue.
The following steps should be taken to enqueue (insert) data into a queue:

• Step 1: Check if the queue is full.

• Step 2: If the queue is full, return overflow error and exit.

• Step 3: If the queue is not full, increment the rear pointer to point to the next empty space.

• Step 4: Add the data element to the queue location, where the rear is pointing.

• Step 5: return success


Dequeue():

• Removes (or access) the first element from the queue.


The following steps are taken to perform the dequeue operation:

• Step 1: Check if the queue is empty.

• Step 2: If the queue is empty, return the underflow error and exit.

• Step 3: If the queue is not empty, access the data where the front is pointing.

• Step 4: Increment the front pointer to point to the next available data element.

• Step 5: Return success.
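The enqueue and dequeue steps above can be sketched as a simple (linear) array queue; note that front only moves forward, which is the limitation the circular queue addresses (the names are illustrative):

```python
class SimpleQueue:
    """Linear (non-circular) queue on a fixed-size array, following the steps above."""
    def __init__(self, capacity):
        self.data = [None] * capacity
        self.front = 0
        self.rear = -1                     # -1: nothing has been enqueued yet

    def is_full(self):
        return self.rear == len(self.data) - 1

    def is_empty(self):
        return self.front > self.rear

    def enqueue(self, value):
        if self.is_full():                 # steps 1-2: check, overflow
            raise OverflowError("queue overflow")
        self.rear += 1                     # step 3: advance rear
        self.data[self.rear] = value       # step 4: store where rear points

    def dequeue(self):
        if self.is_empty():                # steps 1-2: check, underflow
            raise IndexError("queue underflow")
        value = self.data[self.front]      # step 3: read where front points
        self.front += 1                    # step 4: advance front
        return value

q = SimpleQueue(3)
q.enqueue(1)
q.enqueue(2)
print(q.dequeue())  # 1
print(q.dequeue())  # 2
```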

Simple Queue:

• Simple queue also known as a linear queue is the most basic version of a queue.
• Insertion of an element i.e. the Enqueue operation takes place at the rear end and removal of an
element i.e. the Dequeue operation takes place at the front end.

• The problem here is that after some items are popped from the front, the rear can still reach the
capacity of the queue. Even though there are empty spaces before the front (so the queue is not
actually full), the condition in the isFull() function will report that the queue is full. To solve this
problem we use a circular queue.

Circular Queue:

• In a circular queue, the elements of the queue act as a circular ring.

• The working of a circular queue is similar to the linear queue except for the fact that the last
element is connected to the first element.

• Its advantage is that memory is utilized in a better way: if a position in the queue has been
freed by a dequeue, a new element can be placed there by computing the index modulo the
capacity (% n), wrapping around to the start of the array.
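
A minimal circular-queue sketch is below: the front index and the computed rear index wrap around with modulo capacity (% n), so slots freed at the beginning of the array are reused. The names are illustrative.

```python
class CircularQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = [None] * capacity
        self.front = 0
        self.count = 0  # number of stored elements

    def is_full(self):
        return self.count == self.capacity

    def is_empty(self):
        return self.count == 0

    def enqueue(self, value):
        if self.is_full():
            raise OverflowError("queue overflow")
        # rear index wraps around using modulo capacity
        rear = (self.front + self.count) % self.capacity
        self.items[rear] = value
        self.count += 1

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue underflow")
        value = self.items[self.front]
        # front index also wraps around
        self.front = (self.front + 1) % self.capacity
        self.count -= 1
        return value
```

Unlike the linear queue, filling the queue, dequeuing one element, and enqueuing again succeeds, because the freed slot at the front is reused.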

Priority Queue:

• A priority queue is a special type of queue that arranges its elements based on some priority.

• The priority can be defined so that the element with the highest value has the highest priority,
which yields a queue in decreasing order of values. It can also be defined so that the element
with the lowest value has the highest priority, which yields a queue in increasing order of values.
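
In Python, the standard-library heapq module provides the second variant: it is a min-heap, so the lowest value has the highest priority and elements come out in increasing order.

```python
import heapq

pq = []
for value in [5, 1, 4, 2]:
    heapq.heappush(pq, value)  # insert with priority = value

# repeatedly removing the minimum yields increasing order
ordered = [heapq.heappop(pq) for _ in range(len(pq))]
# ordered is [1, 2, 4, 5]
```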

Deque:

• A deque is also known as a Double-Ended Queue.

• As the name suggests, an element can be inserted or removed from both ends of the queue,
unlike the other queues, in which each operation can be done only from one end.

• Because of this property, a deque does not necessarily obey the First In First Out property.
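
Python ships a double-ended queue as collections.deque, which supports insertion and removal at both ends:

```python
from collections import deque

d = deque([2, 3])
d.appendleft(1)   # insert at the front -> deque([1, 2, 3])
d.append(4)       # insert at the rear  -> deque([1, 2, 3, 4])
d.popleft()       # remove from the front (returns 1)
d.pop()           # remove from the rear (returns 4)
# d is now deque([2, 3])
```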

Applications of Queue:

• A queue is used when things don't have to be processed immediately, but have to be processed
in First In First Out order, as in Breadth-First Search. This property also makes queues useful in
the following kinds of scenarios.

• When a resource is shared among multiple consumers. Examples include CPU scheduling, Disk
Scheduling.

• When data is transferred asynchronously (i.e., not necessarily received at the same rate as it is
sent) between two processes. Examples include IO buffers, pipes, and file IO.

• A queue can be used as an essential component in various other data structures.
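
Breadth-First Search, mentioned above, illustrates the FIFO property directly: nodes are processed level by level in the order they were discovered. A minimal sketch, with a hypothetical example graph:

```python
from collections import deque

def bfs(graph, start):
    """Return the nodes reachable from start, in breadth-first order."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()          # dequeue the oldest discovered node
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)  # enqueue newly discovered nodes
    return visited

# illustrative adjacency-list graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
# bfs(graph, "A") visits A, then its neighbors B and C, then D
```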

Stack:
A stack is a linear data structure in which elements can be inserted and deleted only from one
side of the list, called the top. A stack follows the LIFO (Last In First Out) principle, i.e., the
element inserted at the last is the first element to come out. The insertion of an element into
the stack is called push operation, and the deletion of an element from the stack is called pop
operation. In stack, we always keep track of the last element present in the list with a pointer
called top.
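
As a quick illustration, a Python list can serve as a stack: append pushes onto the top, and pop removes from the top, giving LIFO behavior.

```python
stack = []
stack.append(1)    # push 1
stack.append(2)    # push 2
stack.append(3)    # push 3
top = stack.pop()  # pop returns 3, the last element pushed (LIFO)
# stack is now [1, 2]
```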

Queue:
A queue is a linear data structure in which elements can be inserted only at one end of the list,
called the rear, and deleted only at the other end, called the front. The queue data structure
follows the FIFO (First In First Out) principle, i.e., the element inserted first into the list is the
first element to be removed from it. Inserting an element into a queue is called an enqueue
operation, and deleting an element is called a dequeue operation. A queue maintains two
pointers: the front pointer, which points to the oldest element still present in the list, and the
rear pointer, which points to the most recently inserted element.
Difference between Stack and Queue Data Structures

1. A stack is a data structure that stores a collection of elements, with operations to push (add)
and pop (remove) elements from the top of the stack. A queue stores a collection of elements,
with operations to enqueue (add) elements at the back of the queue and dequeue (remove)
elements from the front of the queue.

2. Stacks are based on the LIFO principle: the element inserted last is the first to come out of
the list. Queues are based on the FIFO principle: the element inserted first is the first to come
out of the list.

3. Stacks are often used for tasks that require backtracking, such as parsing expressions or
implementing undo functionality. Queues are often used for tasks that involve processing
elements in a specific order, such as handling requests or scheduling tasks.

4. Insertion and deletion in stacks take place only at one end of the list, called the top.
Insertion and deletion in queues take place at opposite ends of the list: insertion at the rear
and deletion from the front.

5. The insert operation on a stack is called push; on a queue it is called enqueue.

6. A stack does not have any types, while a queue has three: the circular queue, the priority
queue, and the double-ended queue (deque).

7. A stack can be visualized as a vertical collection; a queue as a horizontal collection.

8. Examples of stack-based languages include PostScript and Forth. Examples of queue-based
algorithms include Breadth-First Search (BFS) and printing a binary tree level by level.

Applications of stack:
1. Function Call Management:
 Stacks are used to manage function calls in a program. When a function is called,
its local variables and execution context are pushed onto the stack.

2. Expression Evaluation:
 Stacks are employed in the evaluation of expressions, especially those involving
parentheses. Operators and operands are pushed onto and popped from the stack as
the expression is processed.
3. Undo Mechanisms:
 Many applications, such as text editors or graphic design software, use stacks to
implement undo mechanisms.
4. Browser History:
 Web browsers use stacks to maintain the history of visited pages. Each new page
visited is pushed onto the stack.
5. Parsing and Syntax Checking:
 Stacks are employed in parsing and syntax checking of programming languages.
6. Memory Management:
 The call stack is used for managing the execution of programs and plays a crucial
role in memory allocation and deallocation.
7. Undo and Redo in Software:
 Similar to undo mechanisms, redo functionality can also be implemented using
stacks.
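
Application 5 (parsing and syntax checking) can be sketched with a classic example: using a stack to verify that the brackets in an expression are balanced. This is a minimal illustration, not a full parser.

```python
def is_balanced(expr):
    """Check that (), [], {} in expr are properly nested and matched."""
    pairs = {")": "(", "]": "[", "}": "{"}
    stack = []
    for ch in expr:
        if ch in "([{":
            stack.append(ch)            # push each opening bracket
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False            # mismatched or missing opener
    return not stack                    # every opener must have been closed

# is_balanced("(a + [b * c])") is True; is_balanced("(a + b]") is False
```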
