DSA Sample2
Higher Nationals
Internal verification of assessment decisions – BTEC (RQF)
INTERNAL VERIFICATION – ASSESSMENT DECISIONS
Give details:
LO1 Examine different concrete data structures and their valid operations.
Pass, Merit & Distinction Descriptors: P1 P2 M1 M2 D1
LO2 Discuss the advantages and complexity of Abstract Data Types and important concepts of Object orientation.
Pass, Merit & Distinction Descriptors: P3 M3 D2
LO4 Examine the advantages of independent data structures and discuss the need for asymptotic analysis to assess the effectiveness of an algorithm.
Pass, Merit & Distinction Descriptors: P6 P7 M5 D4
* Please note that grade decisions are provisional. They are only confirmed once internal and external moderation has taken place and grade decisions have been agreed at the assessment board.
Assignment Feedback
Formative Feedback: Assessor to Student
Action Plan
Summative feedback
• A Cover page or title page – You should always attach a title page to your assignment. Use the previous page as your cover sheet and make sure all the details are accurately filled in.
• Attach this brief as the first section of your assignment.
• All the assignments should be prepared using word processing software.
• All the assignments should be printed on A4 sized paper. Use single-sided printing.
• Allow 1” for the top, bottom and right margins and 1.25” for the left margin of each page.
• The font size should be 12 point, and the font should be Times New Roman.
• Ensure that all the headings are consistent in terms of the font size and font style.
• Use the footer function in the word processor to insert your name, subject, assignment number, and page number on each page. This is useful if individual sheets become detached for any reason.
• Use the word processing application's spell check and grammar check functions to help edit your assignment.
Important Points:
• It is strictly prohibited to use textboxes to add text in the assignments, except for the compulsory information, e.g. figures, tables of comparison etc. Adding text boxes in the body except for the aforementioned compulsory information will result in rejection of your work.
• Carefully check the hand in date and the instructions given in the assignment. Late submissions
will not be accepted.
• Ensure that you give yourself enough time to complete the assignment by the due date.
• You must take responsibility for managing your own time effectively.
• If you are unable to hand in your assignment on time and have valid reasons such as illness, you
may apply (in writing) for an extension.
• Non-submission of work without valid reasons will lead to an automatic REFERRAL. You will then be asked to complete an alternative assignment.
• If you use other people’s work or ideas in your assignment, reference them properly using the HARVARD referencing system to avoid plagiarism. You have to provide both in-text citations and a reference list.
• If you are proven to be guilty of plagiarism or any academic misconduct, your grade could be reduced to a REFERRAL or, at worst, you could be expelled from the course.
I hereby declare that I know what plagiarism entails, namely to use another’s work and to present it as my own without attributing the sources in the correct form. I further understand what it means to copy another’s work.
• I know what the consequences will be if I plagiarize or copy another’s work in any of the
assignments for this program.
• I declare therefore that all work presented by me for every aspect of my program, will be my
own, and where I have made use of another’s work, I will attribute the source in the correct way.
• I acknowledge that the attachment of this document signed or not, constitutes a binding
agreement between myself and Pearson, UK.
• I understand that my assignment will not be considered as submitted if this document is not
attached to the assignment.
Submission format
The submission should be in the form of a report, which contains code snippets (which must be
described well), text-based descriptions, and diagrams where appropriate. References to external
sources of knowledge must be cited (reference list supported by in-text citations) using the Harvard
Referencing style.
LO1. Examine abstract data types, concrete data structures and algorithms.
LO2. Specify abstract data types and algorithms in a formal notation.
LO3. Implement complex data structures and algorithms.
LO4. Assess the effectiveness of data structures and algorithms.
Scenario
ABC Pvt Ltd is organizing a car racing event across the Western Province and has decided to allow a maximum of 6 cars (participants) to compete.
There are 3 rounds in total, and at the end of each round the lowest-ranked car is eliminated from the race.
Each car has a unique number, brand, sponsor, and driver details.
In order to manage this particular event, ABC Pvt Ltd decided to develop an application.
The application's functions are listed below.
Task 1: Examine and create a data structure by simulating the above scenario and explain the valid operations that can be carried out on this data structure.
Determine the operations of a queue and critically review how it is used to implement function calls related to the above scenario.
Task 2: Implement the above scenario using the selected data structure and its valid operations for the design specification given in Task 1, using Java programming. Use suitable error handling, and test the application using suitable test cases to illustrate the system. Provide evidence of the test cases and the test results.
Task 3: Registered car details are stored from oldest to newest. The management of ABC Pvt Ltd should be able to view the registered car details from newest to oldest. Using an imperative definition, specify the abstract data type for the above scenario, implement the specified ADT using Java programming, and briefly discuss the complexity of the chosen ADT algorithm. List the advantages of encapsulation and information hiding when using the ADT selected for the above scenario.
“Imperative ADTs are the basis for object orientation.” Discuss the above view, stating whether you agree or not. Justify your answer.
Task 4: ABC Pvt Ltd plans to visit all of these participants through the shortest path within a day.
Analyse the above operation using illustrations of two shortest path algorithms and specify how each operates using a sample graph diagram. Sort the cars based on their numbers with two different sorting algorithms and critically review the performance of the two algorithms by comparing them.
Task 5: Evaluate how asymptotic analysis can be used to assess the effectiveness of an algorithm and critically evaluate the different ways in which the efficiency of an algorithm can be measured. Critically review the sort of trade-offs that exist when you use an ADT for implementing programs. You also need to evaluate the benefits of using independent data structures for implementing programs.
Grading Rubric
Table of Contents
Acknowledgement.............................................................................................................................22
1. Data Structure and Algorithm.......................................................................................................23
1.1 Design Specifications for Data Structure..............................................................................23
1.2 Fundamental kinds of Data Structures..................................................................................25
1.3 Algorithms............................................................................................................................. 26
1.4 The following computer problems can be solved using Data Structures......................................28
2. Create data structure by simulating the above scenario and explain the valid operations that
can be carried out on this data structure............................................................................................29
2.1 Data structure for given scenario...........................................................................................29
2.1.1 Linked list Data structure.................................................................................................29
2.1.1.1 Linked List Representation.......................................................................................30
2.1.1.2 Uses of Linked list....................................................................................................30
2.1.1.3 Types of Linked list..................................................................................................31
2.1.1.4 Performing various operations on linked list............................................................32
2.1.2 LIFO principle of Stack....................................................................................................34
2.1.2.1 Basic operation of stack............................................................................................36
2.1.2.2 Flowchart for implementing stack using array..........................................................39
2.1.2.3 Representation of stack.............................................................................................42
2.1.3 Stack memory implementation for PCs............................................................................43
3. Illustrate, with an example, a concrete data structure for a First in First out (FIFO) queue........47
3.1 Queue Data Structure.............................................................................................................47
4. Illustrate, with an example, a concrete data structure for a First in First out (FIFO) queue........52
8.1.2 Space complexity.....................................................................................................................129
9. Conclusion....................................................................................................................................140
List of Figures
Figure 1: LIFO.................................................................................................................................. 34
Figure 2: Flowchart for Push () Operation........................................................................................ 39
Figure 3: Flowchart for Pop () Operations........................................................................................ 40
Figure 4: Flowchart for Peek () Operations...................................................................................... 41
Figure 5: Stack Representation......................................................................................................... 42
Figure 6: Stack functioning Principle............................................................................................... 45
Figure 7: Car class............................................................................................................................ 62
Figure 8: Main Class......................................................................................................................... 63
Figure 9: Function............................................................................................................................. 63
Figure 10: Case 1.............................................................................................................................. 64
Figure 11: Case 2.............................................................................................................................. 65
Figure 12: Case 3.............................................................................................................................. 66
Figure 13: Case 4.............................................................................................................................. 67
Figure 14: Case 5.............................................................................................................................. 68
Figure 15: Function list..................................................................................................................... 83
Figure 16: Case 1 Output.................................................................................................................. 83
Figure 17: Case 1 Output 2............................................................................................................... 84
Figure 18: Delete a car Case 2.......................................................................................................... 85
Figure 19: Case 3.............................................................................................................................. 86
Figure 20: Case 4..............................................................................................................................
Acknowledgement
First and foremost, I wish to thank my lord and my parents, who are my constant source of help, strength and support, and who truly are my all in all. A very big thank you to our course coordinator, Miss Chathurika, our Data Structures & Algorithms module lecturer, Miss Navodi, and all the other lecturers for all the support, patience, guidance and valuable advice they gave me throughout this assignment; without them this project would not have been possible. I sincerely acknowledge and thank my mother, my father and my friends who gave me moral support to complete the project duly. I am thankful and fortunate to have received constant encouragement, support and guidance from my family, who set aside their own work and helped me successfully complete my project work. I would also like to take this opportunity to thank Esoft for giving me such a great opportunity. Thank you all.
A set of instructions or procedures used to solve a problem or carry out a task is known as an
algorithm. In computer programming, algorithms are used to process data, carry out calculations,
and reach decisions. They can be used in a variety of programming languages and are an
essential component of computer science. Effective algorithms are crucial for tackling
complicated issues and can considerably boost the speed of computer applications. The study of
algorithms involves understanding their properties such as correctness, efficiency, and
complexity.
1. Purpose and Functionality: The purpose and functionality of the data structure should be
clearly defined. This includes what types of data it will store, how it will be accessed, and
what operations can be performed on it.
The design specifications for a data structure should, in general, give a clear and comprehensive
understanding of how the data structure is meant to operate as well as how it should be
implemented and evaluated.
1. Arrays: An array is a collection of elements of the same data type that are stored in
contiguous memory locations. It is used for storing and accessing a fixed number of
elements efficiently.
2. Linked Lists: A linked list is a collection of elements, where each element (node) contains
a value and a reference (pointer) to the next node. It is used for storing and accessing a
variable number of elements efficiently.
3. Stacks: A stack is a collection of elements that allows access only to the last element
inserted (LIFO – last in, first out). It is used for keeping track of the state of a program or
for solving problems that can be solved using the LIFO principle.
4. Queues: A queue is a collection of elements that allows access only to the first element
inserted (FIFO – first in, first out). It is used for managing resources, handling
interruptions, and scheduling tasks.
Many algorithms and computer programs use these data structures as their basic building
elements. Choosing the appropriate data structure for a given issue is essential for creating
effective algorithms and programs because each data structure has strengths and weaknesses.
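As a brief illustration of the four structures above, here is a sketch using Java's standard collections (the class name and sample values are purely illustrative):

```java
import java.util.ArrayDeque;
import java.util.LinkedList;

public class BasicStructures {
    public static void main(String[] args) {
        // Array: a fixed number of elements in contiguous storage
        int[] array = {10, 20, 30};

        // Linked list: a variable number of node-based elements
        LinkedList<Integer> list = new LinkedList<>();
        list.add(10);
        list.addFirst(5);                  // efficient insertion at the head

        // Stack (LIFO): only the last element inserted is accessible
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        System.out.println(stack.pop());   // 2 — last in, first out

        // Queue (FIFO): only the first element inserted is accessible
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.offer(1);
        queue.offer(2);
        System.out.println(queue.poll());  // 1 — first in, first out
    }
}
```

Note that Java provides one class, ArrayDeque, that can serve as either a stack or a queue depending on which operations are used.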
1.3 Algorithms
An algorithm is a set of clear instructions for resolving a dilemma or carrying out a particular
job. Algorithms are the fundamental units of computer programs and are crucial for effectively
tackling challenging computational issues.
Algorithms are a collection of rules intended to solve a particular issue or carry out a particular
job. They are an essential component of computer science and are widely used in data
processing, programming, and other technological fields.
Sorting algorithms, searching algorithms, graph algorithms, optimization algorithms, and many
other kinds of algorithms can all be categorized. They can be represented in a variety of ways,
such as programming languages, flowcharts, and pseudo code.
An algorithm's accuracy, running time, and memory utilization are just a few examples of the
factors that affect an algorithm's efficiency and effectiveness. Algorithm design is therefore a
fundamental component of computer science, and researchers are constantly trying to create new
and improved algorithms to address ever-more-complex issues.
2. Searching algorithms: Searching algorithms are used to find a specific element or value
in a collection of elements. Some popular search algorithms include linear search and
binary search.
3. Path finding algorithms: Path finding algorithms are used to find the shortest path
between two points in a graph or a grid. Some popular path finding algorithms include
Dijkstra's algorithm and A* algorithm.
4. Compression algorithms: Compression algorithms are used to reduce the size of a file
or data without losing significant information. Some popular compression algorithms
include Huffman coding and Lempel-Ziv-Welch (LZW) algorithm
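To make one of these categories concrete, binary search (mentioned under searching algorithms) can be sketched in Java as follows; the method name and the sample array of car numbers are illustrative, and the sketch assumes the input array is already sorted:

```java
public class BinarySearchDemo {
    // Returns the index of target in the sorted array, or -1 if absent.
    static int binarySearch(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;          // unsigned shift avoids overflow
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) low = mid + 1;
            else high = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] cars = {3, 7, 12, 25, 40, 58};       // sorted car numbers
        System.out.println(binarySearch(cars, 25)); // 3
        System.out.println(binarySearch(cars, 9));  // -1
    }
}
```

By halving the search range at each step, binary search runs in O(log n) time, compared with O(n) for linear search.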
• Knapsack problem
• Tower of Hanoi
• All-pairs shortest path (Floyd–Warshall)
• Shortest path (Dijkstra)
• Project scheduling
2. Create data structure by simulating the above scenario and explain the
valid operations that can be carried out on this data structure
Linked list can be visualized as a chain of nodes, where every node points to the next node.
As per the above illustration, the following are the important points to be considered.
2. Dynamic memory allocation: Linked lists are used to allocate and deallocate memory
dynamically at run-time. For example, in C programming language, the malloc and free
functions are used to allocate and deallocate memory for linked lists.
3. Implementing hash tables: Linked lists are used to resolve collisions in hash tables. In a
hash table, if two keys hash to the same index, they are stored in a linked list at that
index.
4. Representing sparse matrices: A sparse matrix is a matrix that contains mostly zeros.
Linked lists are used to represent sparse matrices because they can efficiently store only
the non-zero elements.
5. Music and video playlists: Linked lists can be used to implement music and video
playlists. Each node in the linked list represents a song or a video, and the links represent
the order in which they are played.
6. Navigation systems: Linked lists are used in navigation systems to represent the sequence
of directions to reach a destination. Each node in the linked list represents a direction, and
the links represent the order in which they are followed.
7. Graphs: Linked lists are used to represent graphs, where each node represents a vertex,
and the links represent the edges connecting the vertices.
Overall, linked lists are a versatile data structure that can be used in many different applications,
making them an essential tool for software developers and computer scientists.
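Collision resolution by chaining (use 3 in the list above) can be sketched in Java as follows; the class and method names are illustrative, and each bucket is simply a linked list of key–value pairs:

```java
import java.util.LinkedList;

public class ChainedHashTable {
    private final LinkedList<int[]>[] table;  // each bucket: linked list of {key, value} pairs

    @SuppressWarnings("unchecked")
    public ChainedHashTable(int buckets) {
        table = new LinkedList[buckets];
        for (int i = 0; i < buckets; i++) table[i] = new LinkedList<>();
    }

    public void put(int key, int value) {
        LinkedList<int[]> bucket = table[Math.floorMod(key, table.length)];
        for (int[] pair : bucket) {
            if (pair[0] == key) { pair[1] = value; return; }  // update existing key
        }
        bucket.add(new int[]{key, value});    // colliding keys share the same list
    }

    public Integer get(int key) {
        for (int[] pair : table[Math.floorMod(key, table.length)]) {
            if (pair[0] == key) return pair[1];
        }
        return null;                          // key not present
    }

    public static void main(String[] args) {
        ChainedHashTable h = new ChainedHashTable(4);
        h.put(1, 100);
        h.put(5, 500);                 // 5 % 4 == 1, so it collides with key 1
        System.out.println(h.get(1));  // 100
        System.out.println(h.get(5));  // 500
    }
}
```

Keys 1 and 5 hash to the same index here, yet both lookups succeed because the bucket's linked list holds every colliding pair.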
1. Singly linked list: In a singly linked list, each node has only one link pointing to the next
node in the list.
2. Doubly linked list: In a doubly linked list, each node has two links: one pointing to the
next node and another pointing to the previous node in the list. This allows for efficient
traversal in both directions.
M.S.MARYAM RUQSHA /COL00088920 – DATA STRUCTURES & ALGORITHMS 31
3. Circular linked list: In a circular linked list, the last node points to the first node in the list,
creating a circular structure. This allows for efficient traversal from the last node back to
the first node.
4. Singly linked list with a tail pointer: In this type of linked list, there is an additional pointer to the last node in the list, which allows for efficient insertion at the end of the list. A doubly linked list can also keep a tail pointer, combining the two-way traversal of a doubly linked list with efficient access to both ends.
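The difference between the first two variants shows up directly in the node definitions. A minimal Java sketch (field and class names are illustrative):

```java
// Node of a singly linked list: one link, forward traversal only.
class SinglyNode {
    int value;
    SinglyNode next;
    SinglyNode(int value) { this.value = value; }
}

// Node of a doubly linked list: two links, so the list
// can be traversed efficiently in both directions.
class DoublyNode {
    int value;
    DoublyNode next;
    DoublyNode prev;
    DoublyNode(int value) { this.value = value; }
}

public class NodeDemo {
    public static void main(String[] args) {
        DoublyNode a = new DoublyNode(1);
        DoublyNode b = new DoublyNode(2);
        a.next = b;
        b.prev = a;   // the backward link exists only in the doubly linked variant
        System.out.println(b.prev.value);  // 1 — walking backwards is possible
    }
}
```

The extra prev field is what a singly linked list lacks, which is why reverse traversal of a singly linked list requires either recursion or explicitly reversing the list.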
1. Traversing the linked list: This involves visiting each node in the list sequentially, starting
from the head node and following the links until the end of the list is reached.
2. Inserting a node: This involves creating a new node and inserting it into the list at a
specific position. The node can be inserted at the beginning of the list (as the new head
node), at the end of the list, or in the middle of the list (before or after a specific node).
3. Deleting a node: This involves removing a node from the list and updating the links of the neighbouring nodes so that the list remains connected. The node can be deleted from the beginning, the end, or the middle of the list.
4. Searching for a node: This involves locating a specific node in the list based on a given
value or key. The search can start from the head node and continue until the end of the list
is reached or the node is found.
5. Reversing the linked list: This involves reversing the order of the nodes in the list, so that
the last node becomes the head node and vice versa.
6. Concatenating two linked lists: This involves joining two linked lists together, so that the
last node of the first list points to the head node of the second list.
7. Sorting the linked list: This involves arranging the nodes in the list in a specific order,
such as ascending or descending order based on the node values.
8. Splitting the linked list: This involves dividing a linked list into two separate lists based
on a specific condition or position. For example, a list can be split into two lists with odd
and even values.
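As one concrete example of these operations, reversing a singly linked list can be sketched in Java as follows (the Node class here is a minimal hypothetical one, not taken from the implementation later in this report):

```java
public class ReverseDemo {
    static class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    // Reverses the list in place by redirecting each node's link,
    // so the last node becomes the new head.
    static Node reverse(Node head) {
        Node prev = null;
        while (head != null) {
            Node next = head.next;  // remember the rest of the list
            head.next = prev;       // flip the current link
            prev = head;
            head = next;
        }
        return prev;                // prev is now the new head
    }

    public static void main(String[] args) {
        Node head = new Node(1, new Node(2, new Node(3, null)));
        head = reverse(head);
        for (Node n = head; n != null; n = n.next) {
            System.out.print(n.value + " ");   // 3 2 1
        }
    }
}
```

The reversal visits each node exactly once, so it runs in O(n) time and O(1) extra space.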
Figure 1: LIFO
The LIFO principle, or "Last In, First Out," is the fundamental concept behind a stack data
structure. A stack is a linear data structure that allows elements to be added and removed from
one end only, which is called the top of the stack.
In a stack, elements are added to the top of the stack, and the most recently added element is
always at the top of the stack. Similarly, elements are removed from the top of the stack, so the
element that was most recently added is the first one to be removed.
The LIFO principle can be thought of as a stack of plates in a cafeteria. When plates are added to
the stack, they are placed on top of the other plates. When a customer wants a plate, they take the
top plate off the stack, which is the last plate that was added. As plates are used, they are
removed from the top of the stack in the reverse order that they were added.
In a computer program, stacks are often used to keep track of function calls, where each function
call creates a new stack frame that is added to the top of the call stack. When the function returns,
its stack frame is removed from the top of the call stack, allowing the program to continue
executing from the previous point in the program.
Here's an example to illustrate the LIFO principle of a stack:
Suppose you are at a cafeteria, and you need to stack a set of plates. The LIFO principle would
mean that you would place the most recently used plate at the top of the stack, and when a plate
is needed, it would be taken from the top of the stack, which is the last plate that was placed.
For example, imagine you have the following sequence of plate stacking and plate usage:
4. You take a plate from the top of the stack, which is the green plate.
6. You take a plate from the top of the stack, which is the yellow plate.
7. You take a plate from the top of the stack, which is the red plate.
In this example, the LIFO principle ensures that the most recently used plate is always at the top
of the stack. When a plate is needed, it is taken from the top of the stack, which is the last plate
that was placed. This is like how a stack data structure works in computer programming, where
elements are added to the top of the stack and removed from the top of the stack in reverse order
1. Push: This operation adds an element to the top of the stack. When an element is pushed
onto the stack, it becomes the new top element.
2. Pop: This operation removes the top element from the stack. When an element is popped
from the stack, the element just below it becomes the new top element.
3. Peek/Top: This operation returns the top element of the stack without removing it. It
simply returns the value of the element at the top of the stack.
4. Is Empty: This operation checks whether the stack is empty or not. It returns a Boolean
value (true or false) based on whether the stack is empty or not.
These four basic operations are sufficient to implement any other operations or algorithms that
may be required using a stack data structure. Here's an example implementation of a stack data
structure using these basic operations in Java:
import java.util.ArrayList;

public class Stack {
    private ArrayList<Integer> stack = new ArrayList<>();

    public void push(int value) {
        stack.add(value);
    }

    public int pop() {
        if (!isEmpty()) {
            int topIndex = stack.size() - 1;
            int topValue = stack.get(topIndex);
            stack.remove(topIndex);
            return topValue;
        }
        return -1; // Or throw an exception
    }

    public int peek() {
        if (!isEmpty()) {
            return stack.get(stack.size() - 1);
        }
        return -1; // Or throw an exception
    }

    public boolean isEmpty() {
        return stack.isEmpty();
    }
}
In this implementation, we have defined the basic operations of push, pop, peek/top, and isEmpty
for a stack data structure using an ArrayList. The push operation adds an element to the end of
the list, while the pop operation removes and returns the last element of the list if the list is not
empty.
The peek operation returns the last element of the list without removing it if the list is not empty.
Finally, the isEmpty operation returns a boolean value based on whether the list is empty or not.
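A short, self-contained usage sketch of these four operations, here using Java's built-in ArrayDeque so that it runs on its own:

```java
import java.util.ArrayDeque;

public class StackUsageDemo {
    public static void main(String[] args) {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(10);                       // stack: 10
        stack.push(20);                       // stack: 20 (top), 10
        System.out.println(stack.peek());     // 20 — returns the top without removing it
        System.out.println(stack.pop());      // 20 — the most recently pushed element leaves first
        System.out.println(stack.pop());      // 10
        System.out.println(stack.isEmpty());  // true
    }
}
```

The output order (20 before 10) is exactly the LIFO behaviour described above.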
The computer's stack resides in the upper regions of memory. As its name suggests, the stack is a stack data structure, with the top of the stack growing from high addresses towards low addresses. We use push S to push the source S (for example a register or memory location) onto the stack, and pop D to remove the top value from the stack and place it into a destination D. The stack pointer register, %esp, points to the top of the stack, which is also the top of the topmost stack frame.
Stack memory is a sequential memory with strict access restrictions that only allow writing to and reading from one location, known as the top of the stack. When new data is written to the top of the stack, all data previously stored there is conceptually shifted one position deeper, so the stack behaves as a line to which information can be added only at the top.
Reading from the stack is likewise possible only at the top. A readout removes the data from the top of the stack, and the data stored immediately beneath it takes its place, shifting all previously stored data one position upward. The line therefore operates in a Last-In-First-Out fashion.
The stack memory operates much like a rifle's magazine: inserting a new cartridge pushes the cartridges already in the magazine one position deeper, and firing a shot removes the top cartridge, whose place is taken by the cartridge that was stored immediately beneath it.
In a computer's memory, information can be kept in linear data structures called memory stacks. A stack should normally contain items of similar type. Every function in your program, including the main() function, creates temporary variables that are stored in a dedicated region of memory: the stack, a "LIFO" (last in, first out) data structure that the CPU manages closely. Each time a function declares a new variable, that variable is "pushed" onto the stack. When the function exits, every variable it pushed onto the stack is released (in other words, erased).
Once a stack variable is released, that region of memory becomes available for other stack variables. Using the stack to hold variables has the advantage that memory is managed for you: it does not need to be allocated manually, or freed when you no longer require it. Reading from and writing to stack variables is also very fast because of how efficiently the CPU organizes stack memory. The key point to understand about the stack is that when a function exits, all of its variables are popped off the stack (and lost forever). Stack variables are therefore local in nature. This relates to a concept we discussed earlier, variable scope, or local versus global variables. A common error in C programming is trying to access a variable that was created on the stack inside a function from somewhere in your program outside that function (for example, after that function has returned). Another aspect of the stack to keep in mind is that there is a limit (which varies with the OS) on the size of variables that can be stored on it. Variables allocated on the heap are not subject to this limit.
Queues are used in a variety of applications such as scheduling, resource allocation, and breadth-first search algorithms.
• FIFO: The first element added to the queue is the first one to be removed.
• Dynamic size: The size of the queue can be changed dynamically as elements are added
or removed.
• Limited access: The only way to access elements in the middle of the queue is to dequeue
elements until the desired element is reached.
Queues can be implemented using different data structures such as arrays, linked lists, or circular
buffers. The choice of data structure depends on the specific use case and performance
requirements.
Common operations on a queue include enqueue, dequeue, peek (viewing the element at the front
of the queue without removing it), checking if the queue is empty or full, and getting the size of
the queue.
Here's an example implementation of a First in First out (FIFO) queue in Java using an array-
based circular buffer:
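The original code listing is not reproduced in this copy, so the sketch below is a minimal reconstruction along the lines described next (the buffer, front, and rear names match the description; everything beyond the described enqueue, dequeue, and peek behaviour is an assumption):

```java
public class Queue {
    private final int[] buffer;
    private int front = 0;   // index of the element at the front of the queue
    private int rear = -1;   // index of the most recently added element
    private int size = 0;

    public Queue(int capacity) {
        buffer = new int[capacity];
    }

    // Adds an element at the rear; the index wraps around the end of the array.
    public void enqueue(int value) {
        if (size == buffer.length) throw new IllegalStateException("queue is full");
        rear = (rear + 1) % buffer.length;
        buffer[rear] = value;
        size++;
    }

    // Removes and returns the element at the front of the queue.
    public int dequeue() {
        if (size == 0) throw new IllegalStateException("queue is empty");
        int value = buffer[front];
        front = (front + 1) % buffer.length;
        size--;
        return value;
    }

    // Returns the front element without removing it.
    public int peek() {
        if (size == 0) throw new IllegalStateException("queue is empty");
        return buffer[front];
    }

    public int size() { return size; }

    public static void main(String[] args) {
        Queue q = new Queue(3);          // maximum capacity of 3
        q.enqueue(1);
        q.enqueue(2);
        q.enqueue(3);
        System.out.println(q.dequeue()); // 1 — first in, first out
        System.out.println(q.peek());    // 2
        System.out.println(q.size());    // 2
    }
}
```

Because the indices wrap with the modulo operation, the array slots freed by dequeue are reused by later enqueues without moving any elements.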
• enqueue(int value): Adds an element to the end of the queue by incrementing rear and
adding the new element to the buffer array.
• dequeue(): Removes and returns the element at the front of the queue by incrementing
front and returning the value at the corresponding index of the buffer array.
• peek(): Returns the element at the front of the queue without removing it.
The circular buffer allows for efficient use of memory and avoids the need to shift elements in
the array when the front pointer changes.
In this example, I create a new Queue object with a maximum capacity of 3, add three elements to the queue using enqueue, remove the first element using dequeue, and then peek at the front element and get the current size of the queue using peek and size, respectively.
For example, a web application software stack could include the following layers:
1. Operating system: The lowest layer of the stack that provides the foundation for running
the application. Examples of popular operating systems include Linux, Windows, and
macOS.
2. Server software: This layer includes software such as web servers, application servers,
and database servers. Examples of server software include Apache, Nginx, Tomcat, and
MySQL.
3. Frameworks and libraries: These are software components that provide pre-built
functionality to the application. Examples of frameworks include React, Angular, and
Django, while popular libraries include jQuery, React Native, and NumPy.
Overall, a software stack represents the complete set of technologies and software components
required to build, run, and manage a software application. It usually consists of several layers of
software, each of which provides a different level of functionality and interacts with the layers
above and below it.
The definition of an ADT includes the set of operations that can be performed on it, their input
and output parameters, and any preconditions or postconditions that apply to those operations.
The behavior and properties of an ADT are defined by the behavior of its operations and any
constraints or invariants that apply to its data.
ADTs let clients work with data through a well-defined set of operations, without concerning
themselves with the low-level details. This allows for modular, reusable, and extensible software design, as
ADTs can be used as building blocks to construct more complex data structures and algorithms.
Stacks are a useful abstract data type with a variety of applications in computer science and
programming. Here are some common uses of the stack ADT:
1. Function call stack: One of the most common uses of the stack is in the
implementation of function calls in a program. Each function call is pushed onto the
stack, and when the function completes, it is popped from the stack, allowing the program
to return to the previous point of execution.
2. Expression evaluation: Stacks can be used in expression evaluation, where operators and
operands are pushed onto the stack and then popped off in the correct order to evaluate
the expression.
3. Undo/redo: Stacks can be used to implement undo/redo functionality, where each
action is pushed onto a stack so that the most recent action can be reversed first.
4. Backtracking: Stacks can be used in backtracking algorithms, where choices are
pushed onto a stack so the algorithm can return to the most recent decision point.
5. Parsing: Stacks can be used in parsing, where tokens are pushed onto the stack and then
popped off as the parser processes them.
Overall, stacks are a versatile and powerful tool with many practical applications in computer
science and programming.
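As one concrete illustration of expression evaluation with a stack, here is a hedged sketch that evaluates a space-separated postfix (reverse Polish) expression using java.util.ArrayDeque as the stack. The token format and the class name are assumptions for illustration:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Evaluates a space-separated postfix expression such as "3 4 + 2 *".
// Operands are pushed; each operator pops two operands and pushes the result.
class PostfixEvaluator {
    static int evaluate(String expr) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : expr.trim().split("\\s+")) {
            switch (token) {
                case "+": case "-": case "*": case "/": {
                    int right = stack.pop(); // most recently pushed operand
                    int left = stack.pop();
                    switch (token) {
                        case "+": stack.push(left + right); break;
                        case "-": stack.push(left - right); break;
                        case "*": stack.push(left * right); break;
                        default:  stack.push(left / right); break;
                    }
                    break;
                }
                default:
                    stack.push(Integer.parseInt(token)); // operand
            }
        }
        return stack.pop();
    }
}
```

For example, "3 4 + 2 *" pushes 3 and 4, replaces them with 7, pushes 2, and finally replaces 7 and 2 with 14.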
The list ADT can be implemented using different data structures such as arrays, linked lists, and
dynamic arrays. The choice of data structure depends on the specific requirements of the
application. The list ADT is widely used in many applications such as databases, file systems,
and programming languages, making it a fundamental concept in computer science.
Here are some common operations that are typically supported by the List ADT:
1. Insert: Adds an element at a specified position within the list.
2. Delete: Removes the element at a specified position within the list.
3. Get: Returns the element at a specified position within the list.
4. Set: Changes the value of the element at a specified position within the list.
5. Find: Searches for an element within the list and returns its position.
6. Slice: Extracts a subset of elements from the list, based on a specified range of indices.
The stack ADT typically supports the following core operations:
1. Push: Adds an element to the top of the stack.
2. Pop: Removes the most recently added element from the top of the stack.
3. Peek: Returns the element at the top of the stack without removing it.
Stacks are used extensively in programming and computer science, and they serve as a
fundamental building block for many other data structures, such as queues, trees, and graphs.
They are implemented using arrays or linked lists, and the choice of implementation depends on
the requirements of the application and the performance characteristics of the data structure.
Here are some common operations that are typically supported by the Queue ADT:
1. Enqueue: Adds an element to the rear of the queue.
2. Dequeue: Removes and returns the element at the front of the queue.
3. Peek: Returns the element at the front of the queue without removing it.
The queue ADT is useful in a wide variety of applications, including task scheduling, message
passing, network packet routing, and event handling.
1. Structure: Stacks and queues are both linear data structures, meaning that their elements
are arranged in a linear sequence. Lists can also be linear, but they can also be
implemented as linked lists, which are more flexible in terms of their structure.
2. Principle of operation: Stacks follow the Last-In-First-Out (LIFO) principle, meaning that
the most recently added element is the first one to be removed. Queues follow the
First-In-First-Out (FIFO) principle, meaning that the first element added is the first one to be
removed. Lists do not have a fixed principle of operation.
3. Main operations: The main operations supported by stacks are push, pop, and peek. The
main operations supported by queues are enqueue, dequeue, and peek. The main
operations supported by lists include insertion, deletion, access, and search.
4. Use cases: Stacks are often used in applications that require backtracking, expression
evaluation, undo/redo functionality, or function call management. Queues are often used
in applications that require task scheduling, message passing, or network packet routing.
Lists are often used in applications that require the storage and manipulation of ordered
collections of elements.
Although stacks, queues, and lists share some similarities, they serve different purposes and
have different qualities that make each of them more suitable for certain types of applications.
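The LIFO/FIFO contrast described above can be demonstrated directly with java.util.ArrayDeque, which can act as either a stack or a queue. The helper methods below are illustrative assumptions, not part of the original program:

```java
import java.util.ArrayDeque;

// Shows which element comes out first after inserting items in order 1, 2, 3.
class LifoFifoDemo {
    // LIFO: push places each item on top, so the newest item is removed first.
    static int stackFirstOut(int[] items) {
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        for (int x : items) stack.push(x);
        return stack.pop();
    }

    // FIFO: addLast appends to the back, so the oldest item is removed first.
    static int queueFirstOut(int[] items) {
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int x : items) queue.addLast(x);
        return queue.pollFirst();
    }
}
```

Inserting 1, 2, 3 into both structures, the stack hands back 3 first while the queue hands back 1 first, which is exactly the LIFO versus FIFO distinction.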
M.S.MARYAM RUQSHA /COL00088920 – DATA STRUCTURES & ALGORITHMS 59
4.3 Selecting the suitable ADT for the scenario and implementing it
Stack data structures follow the "last in, first out" principle, so we can use one in our
application: if we enter the cars' details from oldest to newest and need the results in reverse
chronological order, a stack is a suitable ADT. For the Java version, we can implement it with a
linked list in this instance: because the number of entries the company receives cannot be known
in advance and must be arranged dynamically, the solution should be implemented as a linked
list.
This snippet of code illustrates importing the libraries, creating the Car class, and adding class
properties such as number, brand, sponsor, and driver. Using this class, I created objects to hold
the data about the cars; when objects are created, initializing the cars' properties is easier. The
main class contains all of the program's features. Here, I made an array of Car objects named
"c" with a size of 6, because only six cars may enter the competition, so there is a limit of 6 cars.
I created a Scanner object named "input" to read all user input. I also initialized the variables
choose, i, and r here: i counts the number of registered cars, r counts the number of completed
rounds (out of three), and "choose" selects a menu option.
Figure 9: Function
This code snippet displays the program's menu as well as the switch statement that handles each
choice case by case. I discuss the code snippets used to implement each feature in the following
paragraphs.
This code snippet shows how I register a car using the Car object array. The variable i holds the
count of registered cars, so if the count has already reached 6 the program displays an error,
saying, "You can only register 6 cars for the race!"
This is where I implemented the delete-car functionality. The car number is entered first,
followed by a check: if any registered number matches the car number that was entered, that car
is removed from the array and its index set to null. The car count drops by one, as indicated by
"i--". If the deleted position is in the middle of the array, the gap must then be filled, so I created
a for loop that shifts the remaining entries into the null index while preserving the order of the
registry's other information. If no car matches the number entered into the search tool, the
program outputs "No car found to delete!"
Here I implemented the round-results functionality: after each round, the cars are sorted by their
finishing positions. The range of the ongoing for loop also shrinks by 1 each round, because
once a round has concluded, the last-placed car can no longer compete against the other
vehicles. I applied the same approach for each of the race's rounds and rankings.
This shows the winners of the race, which are displayed only if all 3 rounds are complete. If the
user asks for the winners before all 3 rounds have finished, the program shows the current
ranking after the rounds completed so far.
This snippet of code illustrates the "search car" feature. We input the car's number and use a for
loop to find the car: if any object's number matches the number that we entered, all information
about that car is displayed. If there is no car matching that number, the program prints "No car
found."
import java.util.Scanner;

class Car {
    int number;
    String brand;
    String sponsor;
    String driver;
}

public class Race {
    public static void main(String[] args) {
        Car[] c = new Car[6];          // at most 6 cars can register
        Scanner input = new Scanner(System.in);
        int choose = 0, i = 0, r = 0;  // i = registered cars, r = completed rounds
        while (choose != 7) {
            System.out.println("1. Register a car");
            System.out.println("2. Delete a car");
            System.out.println("3. Enter round results");
            System.out.println("4. Find out winners");
            System.out.println("5. Search a car");
            System.out.println("6. Display registered cars");
            System.out.println("7. Exit");
            choose = input.nextInt();
            switch (choose) {
                case 1:
                    if (i < 6) {
                        c[i] = new Car();
                        System.out.print("Car number: ");
                        c[i].number = input.nextInt();
                        input.nextLine();
                        System.out.print("Brand: ");
                        c[i].brand = input.nextLine();
                        System.out.print("Sponsor: ");
                        c[i].sponsor = input.nextLine();
                        System.out.print("Driver: ");
                        c[i].driver = input.nextLine();
                        i++;
                    } else {
                        System.out.println("You can only register 6 cars for the race!");
                    }
                    break;
                case 2:
                    System.out.print("Car number to delete: ");
                    int carNum = input.nextInt();
                    for (int m = 0; m < i; m++) {
                        if (carNum == c[m].number) {
                            for (int k = m; k < i - 1; k++) c[k] = c[k + 1]; // close the gap
                            c[i - 1] = null;
                            i--;
                            break;
                        } else if (m + 1 == i) {
                            System.out.println("No car found to delete!");
                        }
                    }
                    break;
                case 3:
                    // One more round completed; the last-placed car of each
                    // round drops out, so each round has one place fewer.
                    r++;
                    for (int k = 0; k < 6 - (r - 1); k++) {
                        System.out.print("Car number in place " + (k + 1) + ": ");
                        int num = input.nextInt();
                        for (int m = k; m < i; m++) {
                            if (num == c[m].number) { // move that car to place k
                                Car temp = c[m];
                                c[m] = c[k];
                                c[k] = temp;
                            }
                        }
                    }
                    break;
                case 4:
                    if (r == 3) {
                        System.out.println("1st place: " + c[0].driver);
                        System.out.println("2nd place: " + c[1].driver);
                        System.out.println("3rd place: " + c[2].driver);
                    } else { // race not finished: show the current ranking
                        for (int k = 0; k < i; k++)
                            System.out.println((k + 1) + " place:" + c[k].number);
                    }
                    break;
                case 5:
                    System.out.print("Car number to search: ");
                    int n = input.nextInt();
                    for (int k = 0; k < i; k++) {
                        if (n == c[k].number) {
                            System.out.println("Brand: " + c[k].brand);
                            System.out.println("Sponsor: " + c[k].sponsor);
                            System.out.println("Driver: " + c[k].driver);
                            break;
                        } else if (k + 1 == i) {
                            System.out.println("No car found.");
                        }
                    }
                    break;
                case 6:
                    for (int m = 0; m < i; m++)
                        System.out.println(c[m].number + "\t" + c[m].brand
                                + "\t" + c[m].sponsor + "\t" + c[m].driver);
                    break;
                case 7:
                    System.out.println("Exit!");
                    break;
            }
        }
    }
}
import java.util.Scanner;

class Node {
    int number;
    String brand;
    String sponsor;
    String driver;
    Node next;
}

class Stack {
    Node top = null;

    public void push(int x, String br, String sp, String dr) { // enter at the beginning
        Node tmp = new Node(); // create new node tmp
        tmp.number = x;
        tmp.brand = br;
        tmp.sponsor = sp;
        tmp.driver = dr;
        tmp.next = top; // put top reference into tmp.next
        top = tmp;      // tmp becomes the new top of the stack
    }

    public void pop() { // remove from the beginning
        if (top == null) {
            System.out.print("\nStack is empty");
        } else {
            top = top.next;
        }
    }

    public String peek() {
        if (top == null) return null;
        return top.number + " " + top.brand + " " + top.sponsor + " " + top.driver;
    }

    public void display() {
        Node tmp = top;
        while (tmp != null) {
            System.out.println(tmp.number + "\t" + tmp.brand
                    + "\t" + tmp.sponsor + "\t" + tmp.driver);
            tmp = tmp.next;
        }
    }
}

// Implementing class
public class atd {
    public static void main(String[] args) {
        Stack obj = new Stack();
        int choose = 0;
        System.out.println("\t***ABC Racing***");
        Scanner input = new Scanner(System.in);
        while (choose != 3) {
            System.out.println("1. Register a car  2. View latest car  3. Exit");
            choose = input.nextInt();
            switch (choose) {
                case 1:
                    int n = input.nextInt();
                    input.nextLine();
                    String brand = input.nextLine();
                    String sponsor = input.nextLine();
                    String driver = input.nextLine();
                    obj.push(n, brand, sponsor, driver);
                    break;
                case 2:
                    System.out.println(obj.peek());
                    break;
                case 3:
                    System.out.println("Exit!");
                    break;
            }
        }
    }
}
As per the scenario, only 6 cars can register for the race; if more than 6 try to join the event, this
message is shown.
In the image above we can see the car details before and after a deletion: entering a car number
deletes that particular car.
The image above shows how the finishing places are entered at the end of each round, which is
what ultimately determines who the winner is.
As I said earlier, by entering the places at the end of each round we can finally identify the
winner, as shown in the picture above.
Searching a car by its number is very helpful for finding the details of the car's brand, sponsor,
and driver's name.
The picture above shows the details of all registered cars.
1. Array-based implementation:
• Push: O(1) amortized time complexity, O(n) worst-case time complexity when the array
needs to be resized.
2. Dynamic-array-based implementation:
• Push: O(1) amortized time complexity, O(n) worst-case time complexity when the
dynamic array needs to be resized.
In general, the stack operations have a time complexity of O(1) for push, pop, and peek.
However, the search operation has a time complexity of O(n) for all implementations.
5.6 Encapsulation
In object-oriented programming (OOP), the term "encapsulation" refers to the grouping of data
and the methods that act on it into a single entity, such as a class. Encapsulation is a technique
used to provide a clearly specified interface for interacting with an object while keeping the
implementation details of the object hidden from the outside world.
Encapsulation is the process of containing data and behavior within a singular entity and
allowing controlled access to that entity, to put it another way. By forbidding direct access to an
object's data and methods and requiring access through clearly defined interfaces, it allows the
developer to create reliable and secure software systems.
This approach ensures that the implementation details of an object are not exposed to the outside
world and that the object is responsible for maintaining its own integrity.
Encapsulation helps in achieving data hiding, which means that the internal representation of an
object is hidden from the outside world. This allows the developer to modify the internal
implementation of an object without affecting the code that uses it. Encapsulation also facilitates
code reuse and modularity by providing a clear separation between the interface and the
implementation of an object.
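The BankAccount example discussed in this section is not reproduced in this extract; the following is a minimal sketch consistent with the method names in the text (getAccountNumber, getBalance, withdraw). The field names, the validation checks, and the deposit method are assumptions, and transfer() is omitted because its signature is not given:

```java
// Sketch of encapsulation: private fields, access only via public methods.
class BankAccount {
    private final String accountNumber; // hidden internal state
    private double balance;

    BankAccount(String accountNumber, double openingBalance) {
        this.accountNumber = accountNumber;
        this.balance = openingBalance;
    }

    // Read-only, controlled access to the encapsulated data.
    String getAccountNumber() { return accountNumber; }
    double getBalance() { return balance; }

    // The balance can only change through validated operations.
    void withdraw(double amount) {
        if (amount <= 0 || amount > balance)
            throw new IllegalArgumentException("Invalid withdrawal amount");
        balance -= amount;
    }

    void deposit(double amount) {
        if (amount <= 0)
            throw new IllegalArgumentException("Invalid deposit amount");
        balance += amount;
    }
}
```

Because the fields are private, callers cannot corrupt the balance directly; every change goes through a method that enforces the account's invariants.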
By returning the contents of the private variables, the getAccountNumber() and getBalance()
methods allow controlled access to the encapsulated data. Other classes may use these
methods to obtain the account number and balance.
There are two ways to change the account balance: transfer() and withdraw(). These methods are
used to guarantee that the balance is updated in a controlled and consistent way and can only be
called from within the BankAccount class.
By encapsulating the data and methods within the BankAccount class, the developer can ensure
that the account information is accessed and modified in a controlled and secure way, reducing
the risk of errors and vulnerabilities in the software system.
5. Abstraction: Through the provision of a clearly specified interface for interacting with an
object, encapsulation makes abstraction easier. Because of this notion, it is simpler to
comprehend an object's behavior without having to comprehend how it is implemented
internally.
Error handling is the process used to deal with the possibility of failure. For instance, it would
be risky to keep using a file even after realizing it could not be read correctly; noticing and
explicitly managing such errors right away shields the remainder of the software from further
pitfalls.
The accompanying subchapters illustrate various approaches to error handling in Rust. They all
have different use cases and rather subtle differences. As a general guideline:
The Option type should be used when a value is optional or when the absence of a value is not
an error condition; for example, neither C:\ nor / has a parent directory. unwrap works well
when handling Options in quick experiments and in situations where a value is unquestionably
going to be present. expect is more useful because it lets you define an error message should
something go wrong.
1. Exception handling: This technique involves detecting errors and throwing exceptions to
handle them. It allows developers to control the flow of the program and handle errors
gracefully without crashing the application.
2. Logging: Logging is the process of recording events, errors, and other information that
occurs during the execution of an application. It helps developers identify errors and
troubleshoot issues that occur during runtime.
Overall, error handling is an essential part of software development, and developers should
implement robust error handling techniques to ensure that their applications are reliable and
resilient.
An empty string ("") could be treated like the missing-lemonade case. Because we are using
Rust, we would prefer the compiler to flag the cases where there isn't a drink at all.
The standard library's Option<T> enum is used when absence is a possibility. It manifests as one
of two "options": Some(T), a value of type T was found, or None, no value was found.
These cases can be handled either explicitly via match or implicitly with unwrap: either the
inner value is returned, or the program panics.
// The adult has seen it all, and can handle any drink well.
fn give_adult(drink: Option<&str>) {
    match drink {
        Some("lemonade") => println!("Yuck! Too sugary."),
        Some(inner) => println!("{}? How nice.", inner),
        None => println!("No drink? Oh well."),
    }
}
// `unwrap` panics when it receives a `None`.
fn drink(drink: Option<&str>) {
    if drink.unwrap() == "lemonade" { panic!("AAAaaaaa!!!!"); }
}
fn main() { give_adult(Some("lemonade")); give_adult(None); drink(Some("water")); }
6.1.2 Panic
Panic is the simplest error-handling mechanism to understand. It prints an error message, starts
unwinding the stack, and usually exits the program. Here, we explicitly call panic! on our error
condition:
fn drink(beverage: &str) {
    // You shouldn't drink too many sugary beverages.
    if beverage == "lemonade" { panic!("AAAaaaaa!!!!"); }
    println!("Some refreshing {} is all I need.", beverage);
}
fn main() {
    drink("water");
    drink("lemonade"); // this call panics and unwinds the stack
}
6.1.3 Result
Result is a richer version of the Option type that describes a possible error rather than a possible
absence. (Anon., n.d)
Like Option, Result has a variety of associated methods. For instance, unwrap() either yields
the element T or panics, and there are many combinators available for handling the cases.
While working with Rust, you will undoubtedly come across methods that return the Result
type, such as the parse() method. Since it is not always possible to parse a string into the target
type, parse() returns a Result indicating possible failure.
Let's see what happens when we successfully and unsuccessfully parse() a string:
// Let's try using `unwrap()` to get the number out. Will it bite us?
fn multiply(first: &str, second: &str) -> i32 {
    let first_number = first.parse::<i32>().unwrap();
    let second_number = second.parse::<i32>().unwrap();
    first_number * second_number
}
fn main() { println!("double is {}", multiply("10", "2")); }
When parse() fails, an error is left behind for unwrap() to panic on. Additionally, the panic exits
our application and issues an unpleasant error message. (Anon., n.d)
To improve the quality of our error message, we should be more specific about the return type
and consider handling the error explicitly.
In the accompanying code, two calls to unwrap generate different error types. Vec::first returns
an Option, while parse::<i32> returns a Result<i32, ParseIntError>:
fn double_first(vec: Vec<&str>) -> i32 {
    let first = vec.first().unwrap(); // Generate error 1
    2 * first.parse::<i32>().unwrap() // Generate error 2
}
let strings = vec!["tofu", "93", "18"];
let numbers: Vec<_> =
    strings.into_iter().map(|s| s.parse::<i32>()).collect();
println!("Results: {:?}", numbers);
With filter_map, the failed results are simply filtered out:
let strings = vec!["tofu", "93", "18"];
let numbers: Vec<i32> =
    strings.into_iter().filter_map(|s| s.parse::<i32>().ok()).collect();
println!("Results: {:?}", numbers);
With collect(), a vector of Results can also be turned into a Result holding a vector
(Result<Vec<T>, E>). Once a Result::Err is found, iteration ends and that error is returned:
fn main() {
    let strings = vec!["tofu", "93", "18"];
    let numbers: Result<Vec<i32>, _> =
        strings.into_iter().map(|s| s.parse::<i32>()).collect();
    println!("Results: {:?}", numbers);
}
let strings = vec!["tofu", "93", "18"];
let (numbers, errors): (Vec<_>, Vec<_>) =
    strings.into_iter().map(|s| s.parse::<i32>()).partition(Result::is_ok);
println!("Numbers: {:?}", numbers);
println!("Errors: {:?}", errors);
Looking at the results, we note that everything is still wrapped in Result. A little more
boilerplate is needed to unwrap the values:
let numbers: Vec<i32> = numbers.into_iter().map(Result::unwrap).collect();
let errors: Vec<_> = errors.into_iter().map(Result::unwrap_err).collect();
println!("Numbers: {:?}", numbers);
println!("Errors: {:?}", errors);
do {
    try expression
    statements
} catch pattern 1 {
    statements
} catch pattern 2 where condition {
    statements
} catch pattern 3, pattern 4 where condition {
    statements
} catch {
    statements
}
let y: Int?
do {
    y = try someThrowingFunction()
} catch {
    y = nil
}
int result = 1;
for (int i = 0; i < exponent; i++)
    result *= base;
return result;
The code above shows what the program looked like without error handling. In the scenario
described, a user might accidentally enter a string value where the system asks for an integer,
which would cause the system to malfunction as shown in the picture above. Let's see how I
corrected this mistake.
In the image above we can see that using a try-catch block avoids this error: the system stays
alive no matter what kind of input the user gives.
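A hedged sketch of the kind of guard described above, catching a non-integer entry with java.util.InputMismatchException so the program can recover instead of crashing. The helper class, method name, and prompt text are assumptions:

```java
import java.util.InputMismatchException;
import java.util.Scanner;

class SafeInput {
    // Keeps asking until the user supplies a valid integer.
    static int readInt(Scanner input, String prompt) {
        while (true) {
            System.out.print(prompt);
            try {
                return input.nextInt();
            } catch (InputMismatchException e) {
                input.nextLine(); // discard the bad line of input
                System.out.println("Please enter a whole number!");
            }
        }
    }
}
```

The catch block consumes the offending input and loops back to the prompt, so a stray string like "abc" produces a message rather than a crash.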
These are just a few examples of what could be included in a test report. The specific contents of
the report will depend on the testing objectives, the testing methodology used, and the needs of
the stakeholders who will be reviewing the report.
In summary, test reports are important because they provide stakeholders with the information
they need to make informed decisions about the software, ensure the quality of the software, and
comply with regulatory and industry standards.
1. Introduction: This section provides an overview of the testing process, including the
objectives, scope, and approach used.
2. Testing Environment: This section describes the testing environment, including the
hardware, software, and network configurations used during the testing process.
3. Test Cases: This section provides a list of the test cases that were executed during the
testing process, including the expected and actual results.
4. Defects and Issues: This section provides a list of any defects or issues that were
identified during the testing process, including their severity, priority, and steps to
reproduce them.
5. Test Metrics: This section provides quantitative data about the testing process, including
the number of test cases executed, the number of defects identified, and the test coverage
achieved.
7. Conclusions: This section provides a summary of the key findings of the test report, and
any conclusions or recommendations based on those findings.
Depending on the goals, scope, and methodology of the testing process as well as the
requirements of the stakeholders who will be evaluating the report, a test report's precise contents
may change. To give a thorough overview of the testing procedure and results, most test reports
will include the data listed above.
Asymptotic analysis is a method used in computer science to assess an algorithm's effectiveness
as the size of its input approaches infinity. It helps evaluate an algorithm's efficiency and predict
how it will scale as the input size increases.
Big O notation is used in asymptotic analysis to indicate an upper bound on the time and space
complexity of an algorithm. It expresses the growth rate of an algorithm's running time as the
input size approaches infinity.
Think about an algorithm that sorts an array with n items, for instance. As the input size (n)
grows, the algorithm's sorting time for the array will also grow. Using asymptotic analysis, we
can predict how fast the amount of time needed to sort the array will increase as n grows. If we
observe that the time the algorithm needs to sort an array of size n is proportional to n², we say
the algorithm has O(n²) time complexity, meaning the sorting time rises quadratically as n
increases. If the time is instead proportional to n, the complexity is O(n), and the time needed to
sort the array rises only linearly as n increases.
Asymptotic analysis is a powerful tool for analyzing the efficiency of algorithms and comparing
the performance of different algorithms for a given problem. It helps in choosing the most
efficient algorithm for a particular use case and optimizing the performance of existing
algorithms.
The most pessimistic form of analysis is known as worst-case analysis. It examines an
algorithm's behavior under the assumption that the input is in the worst possible configuration,
the one for which the algorithm needs the most time or space to finish. By considering the
worst case, we can determine an upper bound on the algorithm's time and space complexity.
This upper bound ensures that the algorithm won't perform worse than that threshold.
Average-case analysis, on the other hand, examines an algorithm's performance under the
assumption that the input is random and uniformly distributed. Using typical inputs, this method
estimates how well the algorithm should work on average: we determine the algorithm's
expected time or space complexity across all feasible inputs. Worst-case analysis is typically
more conservative and offers a firmer upper limit on the algorithm's performance, while
average-case analysis offers a more accurate assessment of the algorithm's real performance on
typical inputs.
The constraints of both worst-case and average-case analyses must be kept in mind. The
worst-case situation might not accurately depict the algorithm's typical behavior, and the
average case might not accurately depict how the algorithm responds to particular inputs. As a
result, it is frequently necessary to combine worst-case and average-case analysis to get a
complete grasp of an algorithm's effectiveness.
While measuring an algorithm's efficiency for a particular input size can provide us with a
precise indication of how well it performs, it may not be very helpful to predict how the
algorithm will perform for bigger input sizes. This is where asymptotic analysis comes in, as it
gives us a method to forecast an algorithm's overall effectiveness as the input size increases.
Think about an algorithm that sorts an array with n items, for instance. We can state that the
algorithm has O(n²) time complexity if the amount of time it takes to sort the array is
proportional to n², which indicates that the time needed to sort the array rises quadratically as n
increases. Similarly, we can state that the algorithm's space complexity is O(n) if the amount of
space it needs to sort the array is proportional to n.
Asymptotic analysis allows us to compare the effectiveness of various algorithms and identify
the one that is most appropriate for a particular issue. The actual performance of an algorithm
may be better or worse depending on the particular input data and the implementation of the
algorithm, and asymptotic analysis only gives an upper limit on the growth rate of an algorithm's
efficiency. Therefore, in order to obtain a more accurate estimate of an algorithm's performance,
it is always advised to also carry out real measurements of an algorithm's efficiency for a
particular input size.
1. System load: The system load, which can change based on the other processes operating
on the system, can have an impact on how well a program performs. The timing code
results may be inaccurate and deceptive if other resource-intensive operations are running
on the system at the same time.
2. Warm-up time: When timing code, it's crucial to give enough warm-up time to make sure
that any JIT (Just-in-Time) compilation has taken place and the code has been completely
loaded into memory. Inaccurate findings can result from not allowing enough warm-up
time because early executions might take longer because of compilation or memory
allocation overhead.
3. Measurement overhead: The process of determining the amount of time it takes for an
algorithm to run can add extra overhead, which can skew the results. The operating
system's time-keeping device or the measurement code itself may be the source of this
overhead.
5. Input data variability: The raw data that an algorithm is processing can also affect how
well it performs. Timing code may not correctly capture an algorithm's performance for a
particular set of inputs, producing inaccurate results.
To address these problems, it is crucial to measure the execution time several times and average
the findings. Controlling system load, warming up the code, and reducing measurement
overhead are also essential. To better understand the code's performance characteristics, it can
also be useful to test it on various hardware and input data sets.
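Those precautions can be sketched as a simple timing harness: untimed warm-up iterations first (to let JIT compilation happen), then several measured runs averaged with System.nanoTime(). The class name and the run counts are illustrative assumptions:

```java
class TimingHarness {
    // Times a task: warm up first, then average several measured runs.
    static double averageNanos(Runnable task, int warmups, int runs) {
        for (int w = 0; w < warmups; w++) {
            task.run(); // untimed warm-up: triggers JIT and loads the code
        }
        long total = 0;
        for (int r = 0; r < runs; r++) {
            long start = System.nanoTime();
            task.run();
            total += System.nanoTime() - start; // per-run elapsed time
        }
        return (double) total / runs; // averaging smooths out system-load noise
    }
}
```

A call such as averageNanos(() -> sortCopyOfData(), 5, 20) would report an average over 20 runs after 5 warm-ups; the nanoTime() calls themselves add a small, unavoidable measurement overhead.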
1. Define the problem: Any analysis framework should start with a precise definition of the
issue or research question. This entails determining the analysis's goals, objectives, and
scope. Finding patterns, trends, and relationships in the data that can help us
better comprehend the issue is the goal of this step.
4. Identify solutions: The next stage is to find potential solutions or suggestions to deal with
the issue based on the findings of the analysis. To assess various scenarios and their
potential effects, this may entail using brainstorming, modeling, or simulation techniques.
5. Implement and monitor: The next stage after finding a solution is to put it into practice
and track its success over time. This might entail creating an execution strategy, keeping
an eye on the outcomes, and making changes as necessary.
6. Evaluate the results: Evaluating the findings of the analysis and the viability of the
solution is the last stage in the analysis framework. This entails evaluating how the
problem has been resolved in comparison to the initial goals and objectives, comparing
the results to those goals and objectives, and identifying areas that could use
improvement.
1. Big O notation
• Big O notation is a type of mathematical writing used to express the maximum growth
rate of a function. It is frequently used in computer science to characterize an algorithm's
worst-case time complexity in terms of the size of its input.
• If an algorithm, for instance, has a time complexity of O(n), where n is the size of the
input, this indicates that the algorithm's worst-case running time will not increase any
faster than a linear function of n. In other words, the algorithm's running time only grows
maximally linearly with the size of the input.
• An algorithm's worst-case running time will not increase any quicker than a quadratic
function of n if it has a time complexity of O(n²). The algorithm's running time grows at
most quadratically with the size of the input.
• "Order of" is what the "O" in Big O writing stands for. Therefore, when we say that an
algorithm has a time complexity of O(f(n)), we imply that as the input size approaches
infinity, the algorithm's running time grows at a rate on the order of f(n).
• It's essential to remember that Big O notation does not specify the precise running time of
an algorithm; rather, it only gives an upper bound on the growth rate of a function.
Furthermore, Big O notation does not take into account constant factors or lower-order
terms, which means that two algorithms with the same time complexity could still operate
very differently in reality.
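The difference between linear and quadratic growth can be made concrete by counting basic operations rather than timing them. The counting scheme below is an illustration, not a formal proof:

```java
class GrowthDemo {
    // O(n): one pass over the input performs n basic operations.
    static long linearOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) ops++;
        return ops;
    }

    // O(n^2): a nested pass, as in a naive pairwise comparison, performs n*n operations.
    static long quadraticOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) ops++;
        return ops;
    }
}
```

Doubling n doubles linearOps but quadruples quadraticOps, which is exactly what the O(n) and O(n²) growth rates predict.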
2. Omega notation:
• Omega notation is the mathematical notation used to express the lower bound of a
function's growth rate. In computer science it is used to express an algorithm's
best-case time complexity in terms of the size of its input.
• If an algorithm, for instance, has a time complexity of Ω(n), where n is the size of the
input, it indicates that its best-case running time increases no more slowly than a linear
function of n. In other words, the algorithm's execution time grows at least linearly as the
size of the input rises.
• In a similar vein, if an algorithm has a time complexity of Ω(n²), it implies that its
best-case running time increases no more slowly than a quadratic function of n. The
algorithm's execution time rises at least quadratically as the input size does.
• Omega notation does not specify the precise running time of an algorithm; rather, it only
gives a lower bound on the growth rate of a function. It also does not take into account
constant factors or lower-order terms, which means that two algorithms with the same
time complexity could still perform very differently in practice.
3. Theta notation:
• Theta notation is the mathematical notation used to express both the upper and lower
bounds of a function's growth rate. In computer science it is frequently used to
characterize the average-case time complexity of an algorithm in terms of the size of its
input.
• For instance, if an algorithm's time complexity is Θ(n), where n is the size of the input, it
implies that the algorithm's running time increases linearly with n. In other words, the
algorithm's running time increases at most linearly and at least linearly as the input size
grows.
• In a similar vein, if an algorithm's time complexity is Θ(n²), it implies that its running
time increases at the same rate as a quadratic function of n. The algorithm's running time
rises at most quadratically and at least quadratically as the input size increases.
• The "Θ" in Theta notation stands for "theta". Therefore, when we say an algorithm has a
time complexity of Θ(f(n)), we mean that the growth rate of the algorithm's running time
is on the order of f(n) as the input size approaches infinity, both in the best and worst case
scenarios.
In computer science, the effectiveness of an algorithm is frequently assessed in terms of its time
and space complexity. Time complexity is the amount of time an algorithm needs to solve a
problem as the size of the input increases, whereas space complexity is the amount of memory
an algorithm needs to solve a problem as the size of the input increases.
Efficient algorithms are desirable because they produce results more quickly, conserve computational resources, use less energy, and allow bigger data sets to be processed faster. They matter across many fields of computer science, such as artificial intelligence and computer graphics.
An algorithm's time complexity plays a significant role in deciding how effective it is. Because it
can handle larger inputs or finish tasks quicker, an algorithm with a lower time complexity will
typically be more effective than an algorithm with a higher time complexity.
For instance, a linear search algorithm checks each element of an input list sequentially until a match is found or the whole list has been searched, so its worst-case time complexity is O(n). A binary search algorithm, by contrast, can find a target element in a sorted list by splitting the search space in half at each step, giving it a time complexity of O(log n) and a much faster search time for big input sizes.
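The O(log n) behaviour described above can be sketched as follows (a minimal illustration assuming an ascending sorted array; the function name is ours):

```c
#include <stddef.h>

/* Binary search over a sorted int array. Each iteration halves the
 * search range [lo, hi), so at most O(log n) comparisons are made.
 * Returns the index of the target, or -1 if it is absent. */
int binary_search(const int *arr, size_t n, int target)
{
    size_t lo = 0, hi = n;        /* half-open search range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (arr[mid] == target)
            return (int)mid;
        else if (arr[mid] < target)
            lo = mid + 1;         /* discard the lower half */
        else
            hi = mid;             /* discard the upper half */
    }
    return -1;                    /* not found */
}
```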
The time complexity of an algorithm should be taken into account when developing and
analyzing it to make sure it is suitable for the issue at hand and the anticipated input size.
Algorithms with lower time complexity are better suited for larger input sizes or when faster
performance is necessary, while algorithms with greater time complexity may be acceptable for
small input sizes or when only a small number of computations are needed.
Similar to time complexity, space complexity is typically expressed using Big O notation, which
sets a limit on how quickly the algorithm's memory utilization will increase. A common metric for space complexity is the amount of extra memory the algorithm needs above and beyond the size of the input itself.
Since they use the least amount of memory to solve a problem, efficient algorithms generally
have low space complexity. An algorithm's ability to manage larger input sizes, use less
expensive hardware or cloud resources, and execute more quickly and reliably are just a few
advantages that can result from reducing an algorithm's space complexity.
A straightforward algorithm that adds up a list of numbers, for instance, has a space complexity of O(1): it only requires a fixed quantity of memory to store the sum as it is computed. A recursive algorithm for computing the factorial of a number, however, has a space complexity of O(n), because it must store intermediate results for each recursive call, leading to a linear rise in memory utilization as the input size increases.
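The two examples above can be sketched as follows (illustrative names, assuming the numbers fit in an int array): the iterative sum keeps only one accumulator, so its extra space is O(1), while the recursive factorial keeps one stack frame per call, so its extra space is O(n).

```c
/* Iterative sum: a single accumulator, so O(1) extra space. */
long sum_list(const int *arr, int n)
{
    long sum = 0;
    for (int i = 0; i < n; ++i)
        sum += arr[i];
    return sum;
}

/* Recursive factorial: one stack frame per call, so O(n) extra space. */
long factorial(int n)
{
    if (n <= 1)
        return 1;                       /* base case */
    return (long)n * factorial(n - 1);  /* intermediate result per call */
}
```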
It is crucial to take space complexity into account when creating and analyzing algorithms, and to make sure that it is suitable for the problem at hand and the anticipated input size. Algorithms with lower space complexity are better suited for larger input sizes or when memory usage must be kept to a minimum.
Three asymptotic notations are used to express the time complexity of an algorithm. They are:
1. Θ Notation (theta)
2. Big O Notation
3. Ω Notation
o Θ Notation (theta)
Theta notation expresses what can be considered an algorithm's tight bound: it defines both an upper bound and a lower bound, and the algorithm's running time falls between these two levels. For a function g(n), the theta representation is written Θ(g(n)) and the relation is defined as:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

The above expression can be read as: theta of g(n) is the set of all functions f(n) for which there exist positive constants c1, c2, and n0 such that c1·g(n) is less than or equal to f(n), and f(n) is less than or equal to c2·g(n), for all n greater than or equal to n0.

For example, if f(n) = 3n + 2 and g(n) = n, then for c1 = 3, c2 = 5, and n0 = 1 we have 3n ≤ 3n + 2 ≤ 5n for all n ≥ 1, so f(n) = Θ(n).
o Ω Notation
The Ω notation denotes the lower bound of an algorithm: the time taken by the algorithm cannot be lower than this. In other words, it is the fastest time in which the algorithm will return a result, i.e. the time taken when given its best-case input. For a function g(n), the omega representation is written Ω(g(n)) and the relation is defined as:

Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

The above expression can be read as: omega of g(n) is the set of all functions f(n) for which there exist constants c and n0 such that c·g(n) is less than or equal to f(n), for all n greater than or equal to n0.
o Big O notation
Big O notation defines the upper bound of an algorithm: the algorithm cannot take more time than this. In other words, Big O notation denotes the maximum time taken by an algorithm, i.e. its worst-case time complexity, which is why it is the most widely used notation for the time complexity of an algorithm. For a function g(n), the Big O representation is written O(g(n)) and the relation is defined as:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

The above expression can be read as: Big O of g(n) is the set of functions f(n) for which there exist constants c and n0 such that f(n) is greater than or equal to 0 and f(n) is less than or equal to c·g(n), for all n greater than or equal to n0. For example, if f(n) = 2n² + 3n + 1 and g(n) = n², then for c = 6 and n0 = 1 we have 2n² + 3n + 1 ≤ 6n² for all n ≥ 1, so f(n) = O(n²).
Big O is the most popular notation for expressing an algorithm's time complexity. We will now examine the Big O notation of several algorithms.
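As a quick numerical sanity check of the example bound f(n) = 2n² + 3n + 1 ≤ 6n² (with c = 6 and n0 = 1), the following illustrative helper evaluates both sides of the inequality:

```c
/* Check the Big O bound numerically: with c = 6 and n0 = 1,
 * f(n) = 2n^2 + 3n + 1 should never exceed c * g(n) = 6n^2 for n >= 1. */
int bound_holds(long n)
{
    long f = 2 * n * n + 3 * n + 1;
    long cg = 6 * n * n;
    return f <= cg;   /* 1 if the bound holds, 0 otherwise */
}
```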
In this example, we want to find the sum of the first n numbers. For instance, if n = 4, the output should be 1 + 2 + 3 + 4 = 10; if n = 5, the output should be 1 + 2 + 3 + 4 + 5 = 15. Let us try several solutions to this problem and compare them.
O(1) solution
int findSum(int n)
{
    return n * (n + 1) / 2;   // -----------------> takes some constant time "c1"
}
In the above code there is only one statement, and we know that a statement takes some constant time to execute. The basic idea is that if a statement takes constant time, then it takes the same amount of time for any input size, and we denote this as O(1).
O(n) solution
int findSum(int n)
{
    int sum = 0;                  // -----------------> takes some constant time "c"
    for (int i = 1; i <= n; ++i)
        sum += i;                 // -----------------> constant time "c0", repeated n times
    return sum;
}
In this solution, we run a loop from 1 to n and add each value to a variable named "sum". The Big O notation of the above code is O(c0*n) + O(c), where c and c0 are constants. So, the overall time complexity can be written as O(n).
O(n²) solution
int findSum(int n)
{
    int sum = 0;                       // -----------------> takes some constant time "c3"
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= i; ++j)
            ++sum;                     // -----------------> runs 1 + 2 + ... + n times
    return sum;
}
In this solution, the sum variable is incremented i times for each value of i: for i = 1 it is incremented once, for i = 2 it is incremented twice, and so on. The Big O notation of the above algorithm is O(c1*n²) + O(c2*n) + O(c3). Since we keep only the highest order of growth in Big O, the expression reduces to O(n²).
So far, we have seen three solutions to the same problem. Which method would you prefer for finding the sum of the first n numbers? We would prefer the O(1) solution, because the algorithm's execution time is constant regardless of the size of the input.
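The three findSum solutions can be placed side by side in one self-contained sketch (names suffixed here so all three fit in a single file); despite their different time complexities, they must return the same result:

```c
/* The three approaches to summing the first n numbers, side by side.
 * They differ only in time complexity, not in the answer they produce. */
int find_sum_const(int n)      /* O(1): closed-form formula */
{
    return n * (n + 1) / 2;
}

int find_sum_linear(int n)     /* O(n): one loop */
{
    int sum = 0;
    for (int i = 1; i <= n; ++i)
        sum += i;
    return sum;
}

int find_sum_quadratic(int n)  /* O(n^2): nested loops */
{
    int sum = 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= i; ++j)
            ++sum;
    return sum;
}
```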
Different methods are used to evaluate algorithms. We examine an algorithm's effectiveness through both time and space analysis. Sometimes it is necessary to look at how much memory or space is being used, for instance when managing vast amounts of data or developing embedded devices. Time efficiency, in turn, reflects that the quicker a program or function completes a job, the better.
9. Conclusion
We summarize each module's key ideas and go over its subjects once more in these concluding
notes. You should have gained a deeper understanding of programming from this lesson, with a
focus on the item-centered approach to programming. It ought to encourage you to think about
programming in terms of what you want to do to solve a problem or illustrate a situation rather
than just the features of the programming language you're using, and then how you would go
about doing that in your chosen programming language. You "invent your own instructions," or
write methods, and you "invent your own types," or write classes, to organize a program. Most of
the time, after considering the situation you're trying to portray, you'll determine what kind of
instructions you need. You will then engage in programming on two levels: the classes that define your own types, and the code that uses those types to solve the problem.
This paper presents a design specification for data structures that describes the legitimate operations that can be carried out on them, as well as an explanation of how a memory stack functions and how it is used to execute function calls in a computer. Reports on the test outcomes, error handling, and a challenging ADT implementation are also included. The paper shows how to assess an algorithm's effectiveness using asymptotic analysis; finally, the author offered two approaches for assessing an algorithm's effectiveness, with an illustration to support the answer.
This subject covers a lot of material, and trying to absorb all of it at once can be difficult. Since this is a practical subject, however, you can use the executable code that has been provided to test your own ideas. The most efficient way to study this material is to create your own code, or modify already-existing code, and see how what is covered in these notes
is used in practice. Like many practical abilities, these skills become simpler the more you use them; what seems difficult to describe in words turns out to be much simpler once you put it to use.