
CHAPTER FOUR

COMPUTATIONAL COMPLEXITY

Introduction to Computational Complexity
 In computer science, the computational complexity or simply complexity of an
algorithm is the amount of resources required to run it.
 Particular focus is given to time and memory requirements.
 The complexity of a problem is the complexity of the best algorithm that solves it.
 The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory.
 Computational complexity theory focuses on classifying computational problems
according to their resource usage, and relating these classes to each other.
 Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm.
 Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved.
 Also, in most cases, the only thing that is known about the complexity of a problem is that it is at most the complexity of the most efficient known algorithm.
 Therefore, there is a large overlap between analysis of algorithms and complexity theory.
Complexity Classes
 A complexity class is a collection of languages determined by three things:
 A model of computation (such as a deterministic Turing machine, a nondeterministic TM, or a parallel random-access machine).
 A resource (such as time, space or number of processors).
 A set of bounds. This is a set of functions that are used to bound the amount of resource we can use.
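 As a concrete illustration (a standard example, not spelled out on the slide): taking the model to be a deterministic Turing machine, the resource to be time, and the bounds to be all polynomials yields the class P,

\[
\mathrm{P} \;=\; \bigcup_{k \ge 1} \mathrm{DTIME}\left(n^{k}\right),
\]

the collection of languages decidable by a deterministic Turing machine in O(n^k) steps for some constant k.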


Polynomial Bounds
 By making the bounds broad enough, we can make our definitions fairly independent of
the model of computation.
 The collection of languages recognized in polynomial time is the same whether we
consider Turing machines, register machines, or any other deterministic model of
computation.
 The collection of languages recognized in linear time, on the other hand, differs between a one-tape and a two-tape Turing machine.
 We can say that being recognizable in polynomial time is a property of the language,
while being recognizable in linear time is sensitive to the model of computation.
Basics of Algorithm Analysis
 In theoretical analysis of algorithms, it is common to estimate their complexity in the
asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input.
 Algorithm analysis is an important part of computational complexity theory, which
provides theoretical estimation for the required resources of an algorithm to solve a specific
computational problem.
 Most algorithms are designed to work with inputs of arbitrary length.
 Analysis of algorithms is the determination of the amount of time and space resources required to execute them.
 Usually, the efficiency or running time of an algorithm is stated as a function relating the
input length to the number of steps, known as time complexity, or volume of memory,
known as space complexity.

 Analysis of an algorithm is the process of analyzing its problem-solving capability in terms of the time and space required.
 However, the main concern of analysis of algorithms is the required time, or performance.
Generally, we perform the following types of analysis (illustrated by the sketch after this list):
 Worst-case: the maximum number of steps taken on any instance of size n.
 Best-case: the minimum number of steps taken on any instance of size n.
 Average-case: the average number of steps taken over all instances of size n.
 Amortized: the cost of a sequence of operations on an input of size n, averaged over the sequence.
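 A minimal sketch in Python (illustrative, not from the slides): linear search makes 1 comparison in the best case (target first), n in the worst case (target absent), and about n/2 on average, so a single algorithm yields different counts under each type of analysis.

def linear_search(items, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1              # one step per element inspected
        if value == target:
            return i, comparisons
    return -1, comparisons

print(linear_search([7, 3, 5, 9], 7))   # best case:  (0, 1)
print(linear_search([7, 3, 5, 9], 4))   # worst case: (-1, 4)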
 To solve a problem, we need to consider space as well as time complexity, because the program may run on a system where memory is limited but adequate time is available, or vice versa.
 In this context, compare bubble sort and merge sort.
 Bubble sort does not require additional memory, but merge sort requires additional space.
 Though the time complexity of bubble sort is higher than that of merge sort, we may need to use bubble sort if the program has to run in an environment where memory is very limited (see the sketch below).
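 The trade-off is visible in code. The sketch below (an illustration, not taken from the slides) is an in-place bubble sort: it uses only a constant amount of extra memory, whereas a typical merge sort allocates auxiliary arrays proportional to the input size.

def bubble_sort(a):
    """Sort the list a in place using O(1) extra memory."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # swap in place, no extra array
                swapped = True
        if not swapped:        # no swaps: the list is already sorted
            break

data = [5, 1, 4, 2, 8]
bubble_sort(data)
print(data)   # [1, 2, 4, 5, 8]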
Big-O Notation
 Big O notation is a mathematical notation that describes the limiting
behavior of a function when the argument tends towards a particular value
or infinity.
 In computer science, big O notation is used to classify algorithms
according to how their run time or space requirements grow as the input
size grows.
Is Big-O Useful?
 Big-O notation is most useful for large n.
 The suppression of low-order terms and leading constants can be misleading for small n, as the comparison below illustrates.
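 A quick numerical comparison (the constants are made up for the example): an algorithm taking 100n steps is O(n) and one taking n² steps is O(n²), yet for small n the quadratic algorithm does fewer steps; only for large n does the Big-O classification reflect the actual cost.

# f(n) = 100n is O(n); g(n) = n*n is O(n^2). Constants are illustrative only.
for n in (10, 50, 100, 1000):
    f, g = 100 * n, n * n
    print(f"n={n:5d}  100n={f:8d}  n^2={g:8d}  fewer steps: {'n^2' if g < f else '100n'}")
# For n < 100 the quadratic count is smaller; from n = 100 onwards the
# linear algorithm is at least as fast, as the Big-O comparison predicts.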
 Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
 The letter O is used because the growth rate of a function is also referred to as the order of the function.
 A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.

Polynomial-Time Algorithms
 A polynomial-time algorithm is an algorithm whose execution time is either given by a polynomial on the size of the input, or can be bounded by such a polynomial.
 Problems that can be solved by a polynomial-time algorithm are called tractable problems.
 For example, most algorithms on arrays can use the array size, n, as the input size.
 To find the largest element in an array requires a single pass through the array, so the algorithm for doing this is O(n), or linear time (a sketch follows this list).
 Sorting algorithms usually require either O(n log n) or O(n²) time.
 Bubble sort takes linear time in the best case, but O(n²) time in the average and worst cases.
 Heap sort takes O(n log n) time in all cases.
 Quicksort takes O(n log n) time on average, but O(n²) time in the worst case.
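 A sketch of the single-pass maximum search mentioned above (illustrative code, not from the slides): it inspects each of the n elements exactly once, so it runs in O(n) time.

def find_max(a):
    """Return the largest element of a non-empty list in a single pass."""
    best = a[0]
    for x in a[1:]:        # n - 1 further comparisons
        if x > best:
            best = x
    return best

print(find_max([3, 41, 26, 52, 38, 57, 9, 49]))   # 57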

 Regarding O(n log n) time, note that:
 The base of the logarithms is irrelevant, since the difference is a constant factor, which we ignore; and
 Although n log n is not, strictly speaking, a polynomial, n log n is bounded above by n², which is a polynomial (both points are made precise after this list).
 Probably all the programming tasks you are familiar with have polynomial-time
solutions.
 This is not because all practical problems have polynomial-time solutions.
 Rather, it is because your courses and your day-to-day work have avoided problems
for which there is no known practical solution.
 An algorithm is said to be of polynomial time if its running time is upper bounded
by a polynomial expression in the size of the input for the algorithm.
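 The two remarks above follow from two standard identities (stated here for completeness; they are not on the slides):

\[
\log_a n = \frac{\log_b n}{\log_b a}
\qquad\text{(change of base: different bases differ only by the constant factor } 1/\log_b a\text{)},
\]
\[
n \log_2 n \le n \cdot n = n^2 \quad\text{for all } n \ge 1, \text{ since } \log_2 n \le n,
\]

so O(n log n) lies between the polynomial bounds O(n) and O(n²).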
Polynomial Time Reduction
 In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another.
 One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times.
 If both the time required to transform the first problem to the second, and the number of times the subroutine is called, are polynomial, then the first problem is polynomial-time reducible to the second.
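 As a small worked example (a standard reduction, sketched in code; it is not part of the slides): a graph G with n vertices has an independent set of size at least k if and only if it has a vertex cover of size at most n - k. So a hypothetical subroutine has_vertex_cover(graph, size) needs to be called only once, on a transformed instance, to decide the independent-set question, and the transformation itself takes polynomial time.

def has_independent_set(graph, k, has_vertex_cover):
    """Decide 'independent set of size >= k' via a vertex-cover subroutine.

    graph: dict mapping each vertex to the set of its neighbours.
    has_vertex_cover(graph, size): hypothetical oracle deciding whether the
    graph has a vertex cover of at most `size` vertices.
    """
    n = len(graph)
    # Transform the instance (G, k) into (G, n - k) and call the oracle once.
    return has_vertex_cover(graph, n - k)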
 A polynomial-time reduction proves that the first problem is no more difficult than the second one, because whenever an efficient algorithm exists for the second problem, one exists for the first problem as well.
 By contraposition, if no efficient algorithm exists for the first problem, none exists for the second either.
 Polynomial-time reductions are frequently used in complexity theory for defining both complexity classes and complete problems for those classes.
Types of reductions
 The three most common types of polynomial-time reduction, from the most to the least
restrictive, are:
 many-one reductions,
 truth-table reductions, and
 Turing reductions

 The most frequently used of these are the many-one reductions, and in some cases the phrase "polynomial-time reduction" may be used to mean a polynomial-time many-one reduction.
 The most general reductions are the Turing reductions, the most restrictive are the many-one reductions, and truth-table reductions occupy the space in between.

Definitions and Ideas
 A decision problem is in NP if it can be solved by a non-deterministic algorithm in polynomial time.
 An instance of the Boolean satisfiability problem is a Boolean expression that combines Boolean variables using Boolean operators.
 An expression is satisfiable if there is some assignment of truth values to the variables that makes the entire expression true (a brute-force checker is sketched after this list).
 Given any decision problem in NP, construct a non-deterministic machine that solves it in polynomial time.
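 A minimal brute-force satisfiability checker (illustrative only; the names and helper are hypothetical): it tries every truth assignment and reports whether some assignment makes the expression true. Checking a single assignment is fast; the difficulty of SAT lies in the exponentially many assignments that may need to be tried.

from itertools import product

def is_satisfiable(variables, expr):
    """expr maps a dict of truth values to a bool; return (satisfiable, model)."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if expr(assignment):               # verifying one assignment is cheap
            return True, assignment
    return False, None

# (x OR y) AND (NOT x OR y) is satisfied by x = False, y = True.
print(is_satisfiable(["x", "y"],
                     lambda v: (v["x"] or v["y"]) and ((not v["x"]) or v["y"])))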

 For each input to that machine, build a Boolean expression which computes whether that specific input is passed to the machine, the machine runs correctly, and the machine halts and answers "yes".
 Then the expression can be satisfied if and only if there is a way for the machine to run correctly and answer "yes", so the satisfiability of the constructed expression is equivalent to asking whether or not the machine will answer "yes".

Examples of NP-Complete problems
 The list below contains some well-known problems that are NP-
complete when expressed as decision problems.
 Boolean satisfiability problem (SAT)
 Knapsack problem.
 Hamiltonian path problem.
 Travelling salesman problem (decision version)
 Subgraph isomorphism problem.
 Subset sum problem.
 Clique problem.
 Vertex cover problem.

Knapsack Problem
 The knapsack problem is a problem in combinatorial optimization:
 Given a set of items, each with a weight and a value, determine the number of each
item to include in a collection so that the total weight is less than or equal to a
given limit and the total value is as large as possible.
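 A sketch of the standard dynamic-programming solution to the 0/1 variant, in which each item may be taken at most once (not from the slides). It runs in O(n·W) time for n items and capacity W, which is pseudo-polynomial because W is a number encoded in the input; the decision version of the problem remains NP-complete.

def knapsack(weights, values, capacity):
    """Maximum total value achievable with total weight <= capacity."""
    best = [0] * (capacity + 1)        # best[w] = best value within weight limit w
    for wt, val in zip(weights, values):
        for w in range(capacity, wt - 1, -1):   # downwards: each item used at most once
            best[w] = max(best[w], best[w - wt] + val)
    return best[capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))   # 9 (take the items of weight 3 and 4)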

Cook's theorem
 In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete.
 That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem.
 An important consequence of this theorem is that if there exists a deterministic polynomial-time algorithm for solving Boolean satisfiability, then every NP problem can be solved by a deterministic polynomial-time algorithm.
 The question of whether such an algorithm for Boolean satisfiability exists is thus equivalent to the P versus NP problem, which is widely considered the most important unsolved problem in theoretical computer science.

Problem Solvability
 A problem is solvable if there is a program that always stops
and gives the answer.
 The number of steps it takes depends on the input.
 Solvable means that there’s a program that takes an input, runs
for a while, but eventually stops and gives the answer.
 There are lots of programs for any given problem.
Some are faster than others.
We can always artificially slow them down.
Summary
 Computational complexity is a measure of the amount of computing resources (time and space) that a particular algorithm consumes when it runs.
 Computer scientists use mathematical measures of complexity that allow them to predict, before writing the code, how fast an algorithm will run and how much memory it will require.
 Such predictions are important guides for programmers implementing and selecting algorithms for real-world applications.
 Computational complexity is a continuum, in that some algorithms require linear time (that is, the time required increases directly with the number of items or nodes in the list, graph, or network being processed),
 whereas others require quadratic or even exponential time to complete (that is, the time required increases with the number of items squared or with the exponential of that number).
Individual Assignment (10%) Due Date: 20/06/2022
1. Define the following terms/terminologies and elaborate with
examples.
a. Solvability and Un-solvability of the Problem
b. Computational Complexity
c. Computable Functions
d. Decidability and Un-decidability
e. Computability
Note: Mode of submission is softcopy only.