Unit-1 DAA


UNIT-1

1. Asymptotic Notations for Time and Space Complexity:

- Big O Notation (O): Represents an upper bound on the running time or space usage of
an algorithm, bounding the worst-case scenario. For example, O(n) signifies that the
algorithm's running time or space usage grows at most linearly with the input size.

- Omega Notation (Ω): Represents a lower bound on the running time or space usage,
bounding the best-case scenario. For example, Ω(n) indicates that the algorithm's
performance is at least linear.

- Theta Notation (Θ): Represents both upper and lower bounds. For example, Θ(n) means
the algorithm has linear time complexity, and there is a tight relationship between the input
size and the running time.
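As an illustrative sketch (not part of the original notes), counting basic operations in Python makes the difference between O(n) and O(n^2) growth concrete:

```python
def linear_ops(n):
    """One unit of work per element -> O(n)."""
    count = 0
    for _ in range(n):
        count += 1
    return count

def quadratic_ops(n):
    """Nested loops over the input -> O(n^2)."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

# Doubling n doubles linear_ops(n) but quadruples quadratic_ops(n).
```
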

2. Methods for Solving Recurrence Relations:

- Substitution Method: This method involves guessing a solution to the recurrence relation
and then proving it correct using mathematical induction. For instance, if you guess that T(n)
= O(n^2), you must prove this by induction.

- Master Theorem: It provides a convenient way to analyze the time complexity of
divide-and-conquer algorithms with a specific form. It deals with recurrence relations of the
form T(n) = aT(n/b) + f(n), where a ≥ 1 and b > 1 are constants.

- Recurrence Tree Method: This approach entails creating a tree to visualize the recursive
calls made by an algorithm. By summing up the work at each level of the tree, you can
determine the overall time complexity.
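The recurrence-tree idea can be checked numerically. As a small sketch (not from the notes), evaluating the merge-sort recurrence T(n) = 2T(n/2) + n with T(1) = 1 reproduces the closed form n·log₂(n) + n for powers of two:

```python
def T(n):
    """Evaluate T(n) = 2*T(n/2) + n with base case T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For n a power of two, T(n) equals n*log2(n) + n, i.e. Theta(n log n).
```
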

3. Brief Review of Graphs:

- Graph Components: Nodes (vertices) and edges are the basic components of graphs.
Edges can be directed or undirected, and they can have weights.

- Types of Graphs:
- Directed Graphs (DiGraphs): Edges have a direction.
- Undirected Graphs: Edges are bidirectional.
- Weighted Graphs: Edges have associated weights.
- Trees: A connected, acyclic graph.
- Forests: A collection of trees.
- Cyclic vs. Acyclic: Graphs can have cycles (cyclic) or not (acyclic).
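A common in-memory representation of these components is an adjacency list. A minimal Python sketch (the graph and helper name are illustrative, not from the notes):

```python
# Directed, weighted graph as an adjacency list:
# each vertex maps to a list of (neighbor, edge weight) pairs.
graph = {
    'A': [('B', 3), ('C', 1)],
    'B': [('C', 7)],
    'C': [],
}

def neighbors(g, v):
    """Return the vertices directly reachable from v."""
    return [u for (u, _) in g[v]]
```

For an undirected graph, each edge would simply be stored in both endpoints' lists.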

4. Sets and Disjoint Sets:

- Sets: Collections of unique elements. Operations include union, intersection, and
difference.
- Disjoint Sets: Sets that have no common elements. They are often used in various
algorithms and applications.

5. Union-Find (Disjoint Set) Data Structure:

- A data structure that efficiently represents disjoint sets.
- It includes two primary operations: union (joining two sets) and find (determining which
set an element belongs to).
- Useful for algorithms like Kruskal's Minimum Spanning Tree and cycle detection in
graphs.
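A minimal Union-Find sketch in Python, using the standard path-compression and union-by-rank optimizations (the class name is illustrative, not from the notes):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point nodes closer to the root as we walk up.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False  # already in the same set (signals a cycle in Kruskal's)
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

In Kruskal's algorithm, a `union` that returns False means the edge would close a cycle and is skipped.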

6. Sorting Algorithms and Their Analysis:

- Bubble Sort: Compares adjacent elements and swaps them if they are in the wrong order.
Inefficient with O(n^2) time complexity.

- Selection Sort: Selects the minimum element and swaps it with the first unsorted
element. O(n^2) time complexity.

- Insertion Sort: Builds the sorted array one item at a time. Efficient for small datasets,
O(n^2) time complexity.

- Merge Sort: A divide-and-conquer algorithm that divides the array into smaller subarrays,
sorts them, and then merges them. O(n log n) time complexity in the worst and average
cases.

- Quick Sort: Also a divide-and-conquer algorithm, it partitions the array and recursively
sorts subarrays. It has an average-case O(n log n) time complexity but can degrade to
O(n^2) in the worst case.

7. Searching Algorithms and Their Analysis:

- Linear Search: Sequentially checks each element in the dataset to find the target
element. O(n) time complexity.

- Binary Search: Requires a sorted dataset and repeatedly halves the search space. O(log
n) time complexity.

- Hashing: Uses a hash function to map elements to specific locations in a data structure
for efficient retrieval. The time complexity depends on the quality of the hash function and
data distribution.
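As an illustrative sketch of the hashing idea (a toy separate-chaining table; the class and method names are not from the notes), a hash function picks a bucket and collisions are resolved by scanning a short list:

```python
class ChainedHashTable:
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _index(self, key):
        # The hash function maps a key to one of the buckets.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)
```

With a good hash function and load factor, each bucket stays short and lookups average O(1); with many collisions, lookups degrade toward O(n).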

Divide and Conquer: General Method

- Divide and Conquer (D&C) is a general algorithmic paradigm that involves breaking a
problem into smaller subproblems of the same type.
- The three key steps in the D&C approach are:
1. Divide: Break the problem into smaller subproblems.
2. Conquer: Solve the subproblems recursively.
3. Combine: Combine the solutions of the subproblems to solve the original problem.

Binary Search:

- Binary Search is an efficient algorithm for finding a specific element in a sorted array.
- Algorithm:
- Compare the target value with the middle element.
- If the middle element matches the target, return its index.
- If the middle element is smaller than the target, search the right half; otherwise, search
the left half.
- Repeat the process until the element is found or the search space is empty.
- Time Complexity: O(log n) in the worst case.
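The steps above can be sketched as a short iterative Python function (a minimal version, returning -1 when the target is absent):

```python
def binary_search(arr, target):
    """Return the index of target in sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                 # loop until the search space is empty
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1            # target can only lie in the right half
        else:
            hi = mid - 1            # target can only lie in the left half
    return -1
```
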

Merge Sort:

- Merge Sort is a sorting algorithm that uses D&C.
- Algorithm:
1. Divide the unsorted list into n sublists, each containing one element.
2. Repeatedly merge sublists to produce new sorted sublists until there's only one sublist.
- Time Complexity: O(n log n) in the worst, average, and best cases.
- Stable and guarantees consistent performance across inputs.
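A compact top-down sketch in Python (splitting on the midpoint rather than down to n single-element sublists, but the same D&C idea):

```python
def merge_sort(arr):
    """Return a sorted copy of arr using divide and conquer."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # divide and conquer each half
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # <= keeps equal elements in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```
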

Quick Sort:

- Quick Sort is another sorting algorithm that uses D&C.
- Algorithm:
1. Choose a pivot element from the array.
2. Partition the array into two subarrays: elements less than the pivot and elements greater
than the pivot.
3. Recursively sort the subarrays.
4. Combine the sorted subarrays.
- Time Complexity: O(n^2) in the worst case, but O(n log n) on average (with good pivot
selection strategies).
- Not stable but has a small constant factor.
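The four steps above can be sketched in Python. This version partitions into new lists for clarity (in-place partitioning, as in Hoare's or Lomuto's scheme, is what gives the small constant factor in practice):

```python
def quick_sort(arr):
    """Return a sorted copy of arr via pivot partitioning."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]                   # choose a pivot
    less = [x for x in arr if x < pivot]         # partition around it
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    # Recursively sort the subarrays and combine.
    return quick_sort(less) + equal + quick_sort(greater)
```
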

Selection Sort:

- Selection Sort is a simple sorting algorithm.
- Algorithm:
- Repeatedly select the minimum element from the unsorted portion and move it to the
sorted portion.
- Time Complexity: O(n^2) in the worst, average, and best cases.
- Not suitable for large datasets but easy to implement.
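A minimal Python sketch of the selection step (working on a copy so the input is left untouched):

```python
def selection_sort(arr):
    """Return a sorted copy of arr by repeated minimum selection."""
    a = list(arr)
    for i in range(len(a)):
        m = i
        for j in range(i + 1, len(a)):
            if a[j] < a[m]:
                m = j              # index of the smallest unsorted element
        a[i], a[m] = a[m], a[i]    # move it to the end of the sorted portion
    return a
```
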

Strassen's Matrix Multiplication:

- Strassen's Matrix Multiplication is a more efficient way to multiply matrices using D&C.
- Algorithm:
- Divide each input matrix into four submatrices.
- Use these submatrices to calculate seven multiplications and 18 additions/subtractions.
- Combine the results to form the product matrix.
- Time Complexity: O(n^log2(7)) ≈ O(n^2.81) using Strassen's method. It's faster for large
matrices, but practical applications are limited due to constant factors.
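A recursive sketch of Strassen's seven products in plain Python, assuming square matrices whose size is a power of two (real implementations pad to a power of two and switch to naive multiplication below a cutoff):

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) given as nested lists."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def quad(M, r, c):  # extract an h x h submatrix starting at (r, c)
        return [row[c:c + h] for row in M[r:r + h]]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)

    # The seven multiplications (each recursive), replacing the naive eight.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))

    # Combine into the quadrants of the product matrix.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)

    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```
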

Analysis of Algorithms for These Problems:

1. Binary Search:
- Time Complexity: O(log n) in the worst case, where n is the size of the array.
- Space Complexity: O(1) as it only requires a few variables.

2. Merge Sort:
- Time Complexity: O(n log n) in the worst, average, and best cases.
- Space Complexity: O(n) for additional memory to store sublists.

3. Quick Sort:
- Time Complexity: O(n^2) in the worst case (e.g., when a bad pivot is repeatedly
selected), but O(n log n) on average.
- Space Complexity: O(log n) on average for the recursive call stack (O(n) in the worst
case).

4. Selection Sort:
- Time Complexity: O(n^2) in the worst, average, and best cases.
- Space Complexity: O(1) as it sorts the elements in-place.

5. Strassen's Matrix Multiplication:
- Time Complexity: O(n^2.81) using Strassen's method, which is faster than the naive
matrix multiplication method for large matrices.
- Space Complexity: O(n^2) for the additional matrices created during the computation.
