
Ethio-Lens College

Complexity Theory Lecture Notes, Department CS, Class Year IV


1. Introduction to complexity theory
1.1 Introduction
Complexity theory is concerned with the resources, such as time and space,
needed to solve computational problems. After the success of the general
theory of computability, that gave us a precise definition of the notion of
algorithm and fundamental insights into the notion of mathematical proof
and undecidability, it was only natural to also consider the notion of efficient
computation, and ask why some problems that are computable in principle
resist all attempts to compute them in a reasonable amount of time. There
is an abundance of problems from mathematics and computer science that
are trivially solvable by brute force search of an exponential number of instances,
but for which currently no efficient algorithm is known. Complexity
theory is the appropriate setting for the study of such problems. It is also the
home of one of the most fundamental open problems in mathematics, namely
the famous NP versus P problem. Some 40 years after the discovery of this
problem, complexity theory has matured into an extremely rich and fascinating
subject, despite the lack of progress on some of its most fundamental
problems.

o In computer science, computational complexity theory is the branch of the theory of computation that studies the resources, or cost, of the computation required to solve a given computational problem.
o The relative computational difficulty of computable functions is the subject matter of
computational complexity.
o Complexity theory analyzes the difficulty of computational problems in terms of many
different computational resources.
o Example: Mowing grass has linear complexity because it takes double the time to mow double the area. Looking up a word in a dictionary, however, has only logarithmic complexity, because a dictionary of double the size only has to be opened one more time (open it exactly in the middle and the problem is reduced to half).
1.2 Examples of Problems
a. Problem 1: Addition. Consider a simple problem familiar to most schoolchildren: adding two numbers, say 10234843 and 2345639. Starting with the rightmost column, add the two digits in that column. If the sum is less than ten, write it below the two digits and repeat on the next column to the left; otherwise write the last digit of the sum below the two digits, retain a carry of 1, and repeat with the next column to the left. This description is roughly what an algorithm is.
Analysis of the above algorithm for adding numbers. Suppose the numbers to be added are each at most n digits long; how many simple steps does computing their sum take? You might say it takes about n steps (one for each column of the input), or n + 1 steps (to write down the possibly n + 1 digits of the output), or 5n + 1 steps, where each column of the input takes at most five elementary operations (adding the digits, adding the carry-in if necessary, checking whether the sum is less than ten, splitting the result into two digits, writing the lesser digit and moving the rest into the carry) and the final column of the output takes one step. However you count the steps, the total has linear form a·n + b, where a and b are constants. What is important is that the answer cannot be found in fewer than n steps (not, say, √n steps, which would be much smaller than n), nor does it require n^2 steps. This leads to a crude but extremely useful approximation to the time complexity of this algorithm: it takes Θ(n) steps, where the Θ-notation suppresses the a and the b above and other such lesser-order terms and describes only the leading term.
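As a concrete illustration, the column-by-column procedure above can be sketched in a few lines of Python (the string-of-digits representation and the function name are illustrative choices, not part of the notes):

```python
# A sketch of the column-by-column addition algorithm described above.
# Numbers are represented as strings of decimal digits, most significant first.

def school_add(x: str, y: str) -> str:
    """Add two decimal numbers digit by digit, right to left, with a carry."""
    # Pad the shorter number with leading zeros so both have n digits.
    n = max(len(x), len(y))
    x, y = x.zfill(n), y.zfill(n)
    digits, carry = [], 0
    for i in range(n - 1, -1, -1):          # rightmost column first
        s = int(x[i]) + int(y[i]) + carry   # a constant number of operations
        digits.append(str(s % 10))          # write the lesser digit
        carry = s // 10                     # retain a carry of 0 or 1
    if carry:
        digits.append(str(carry))           # the possible (n+1)-th output digit
    return "".join(reversed(digits))

print(school_add("10234843", "2345639"))   # 12580482
```

Each pass of the loop performs a constant number of elementary operations, matching the 5n + 1 count above; the total is Θ(n).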
b. Problem 2: Multiplication. Now let us consider a somewhat more complex computational problem, that of multiplying two numbers. The input to the task is two numbers, say X and Y, each at most n digits long. The goal of the task is to output the product X × Y. The naive method is repeated addition: X × Y = X + X + ... + X (Y copies). If Y is n digits long it can be as large as 10^n, so this requires up to 10^n additions; at about n steps per addition, that is roughly n · 10^n steps — for ten-digit numbers, 10 · 10^10 = 100 billion steps, far too slow. The quicker, grade-school method computes n numbers — the first being the product of X with the least significant digit of Y, the second being ten times the product of X with the next significant digit, and so on — and then adds these n numbers together. This takes Θ(n^2) steps, in this case about 100 steps for ten-digit numbers, still noticeably more than adding two numbers. But no algorithm has been found that multiplies in linear time Θ(n); multiplication appears to be more complex than addition.
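A minimal sketch of the grade-school method, again using a string-of-digits representation (the helper names are invented for illustration). `times_digit` performs the roughly n digit steps of one partial product; for brevity, Python's built-in integer addition stands in for the digit-by-digit addition of Problem 1 when the partials are summed:

```python
def times_digit(x: str, d: int) -> str:
    """Multiply an n-digit number by a single digit in about n steps."""
    out, carry = [], 0
    for ch in reversed(x):
        s = int(ch) * d + carry
        out.append(str(s % 10))
        carry = s // 10
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

def school_multiply(x: str, y: str) -> str:
    """Grade-school method: n shifted partial products, then their sum.

    n partial products of about n digit steps each gives Θ(n^2) in total."""
    total = "0"
    for pos, d in enumerate(reversed(y)):             # least significant first
        partial = times_digit(x, int(d)) + "0" * pos  # shift by the position
        # Summing the partials is the Θ(n) addition from Problem 1;
        # Python's int addition stands in for it here.
        total = str(int(total) + int(partial))
    return total.lstrip("0") or "0"
```

For example, `school_multiply("10234843", "2345639")` produces the same result as Python's built-in multiplication, using only single-digit products and additions.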
Computational complexity
Two very broad classes of running-time complexity that turn out to take a central role in computing are the "polynomial time computable problems" (also known by the single letter P) and the "exponential time computable problems". The former include all problems that can be solved in time Θ(n^c) for some constant c (independent of the input size n), while the latter is a broader class and includes all problems that can be solved in time 2^Θ(n^c) for some constant c.
To understand the difference between the two cases, consider some problem X whose complexity is in P (polynomial time). Let n0 be the largest input length for which this problem can be solved in an hour on a computer. Now suppose we double the amount of time available. How will the size of the largest solvable input grow? The characteristic feature of P is that this input size grows by a constant multiplicative factor. For problems with algorithms running in time Θ(n), n0 will double. For problems with algorithms running in time Θ(n^3), n0 will only grow by a factor of the cube root of 2 (about 1.26). If this seems bad, now consider what happens for a problem that is only solvable in exponential time. For such a problem the corresponding input length grows only additively when the running time grows by a multiplicative factor. Indeed, for algorithms running in time 2^n, doubling the running time only adds 1 to n0.
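This contrast can be made concrete with a small experiment (the budget of 10^6 steps is an arbitrary choice for illustration): for each running time T(n), find the largest n0 with T(n0) within budget, then double the budget and look again.

```python
# How does the largest solvable input n0 grow when the time budget doubles?

def largest_input(time_fn, budget):
    """Largest n with time_fn(n) <= budget, found by simple linear search."""
    n = 0
    while time_fn(n + 1) <= budget:
        n += 1
    return n

budget = 10 ** 6
for name, fn in [("n", lambda n: n),
                 ("n^3", lambda n: n ** 3),
                 ("2^n", lambda n: 2 ** n)]:
    n0 = largest_input(fn, budget)
    n1 = largest_input(fn, 2 * budget)
    print(f"T(n) = {name:>4}: n0 = {n0:>7} -> {n1:>7} after doubling")
```

For T(n) = n the bound doubles (1000000 → 2000000), for T(n) = n^3 it grows by about 2^(1/3) (100 → 125), and for T(n) = 2^n it grows by exactly one (19 → 20), just as the paragraph above predicts.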
Complexity Classes
o A complexity class is the set of all of the computational
problems which can be solved using a certain amount of a certain computational resource.
o The complexity class P is the set of decision problems that can be solved by a deterministic machine in polynomial time. This class corresponds to an intuitive idea of the problems that can be solved efficiently, even in the worst case.
o The complexity class NP is the set of decision problems that can be solved by a
nondeterministic machine in polynomial time. This class contains many problems that people
would like to be able to solve effectively. All the problems in this class have the property that
their solutions can be checked effectively.
Complexity Class P

 P is the class already described informally above, i.e., all computational problems that can be solved in polynomial time. (Strictly speaking, the class P only contains computational problems described by Boolean functions, i.e., by functions whose output is either 0 or 1, but the difference is not significant for these notes.)
o P is the complexity class containing decision problems which can be solved by a deterministic
Turing machine using a polynomial amount of computation time, or polynomial time.
o P is often taken to be the class of computational problems which are "efficiently solvable" or "tractable". Problems that are solvable in theory, but cannot be solved in practice, are called intractable.
o There exist problems in P which are intractable in practical terms; for example, some require at least n^1000000 operations.
o P is known to contain many natural problems, including the decision versions of linear programming, calculating the greatest common divisor, and finding a maximum matching. In 2002, it was shown that the problem of determining whether a number is prime is in P (the AKS primality test).
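For instance, the greatest common divisor can be computed with Euclid's algorithm, whose number of iterations is polynomial in the number of digits of the input, so the problem is in P:

```python
# Euclid's algorithm: each step replaces (a, b) by (b, a mod b), and the
# remainders shrink fast enough that the number of iterations is
# polynomial in the number of digits of the input.

def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))   # 21
```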
Complexity Class NP

 In computational complexity theory, NP ("Nondeterministic Polynomial time") is the set of decision problems solvable in polynomial time on a nondeterministic Turing machine.
o It is the set of problems that can be "verified" by a deterministic Turing machine in polynomial
time.
o All the problems in this class have the property that one can search for a solution satisfying various constraints (and possibly optimizing some objective function), and that a proposed solution can be checked efficiently for validity.
Example: Map Coloring. This is the cartographer's problem: given a map of potential nations in some region of the world, can you color every nation with one of three colors so that nations sharing a border do not have the same color? To feed such a problem as input to a computer, one only needs to list all the nations and, for each nation, all the other nations that share a border with it. It is well known (a major result of 20th-century mathematics) that if nations are connected regions then every map can be colored with four colors. But some maps can be colored with three or fewer colors, while for others four colors are necessary. The decision version of the problem simply asks the computer to output true if the map is colorable with 3 colors and false otherwise. The search version asks for an actual coloring of the nations with the three colors whenever the decision version returns true.
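The decision and search versions, together with the polynomial-time check that makes this an NP problem, can be sketched as follows (the four-nation example map and all names are invented for illustration):

```python
from itertools import product

# The map as an adjacency list: each nation maps to the nations it borders.
borders = {
    "A": ["B", "C"], "B": ["A", "C", "D"],
    "C": ["A", "B", "D"], "D": ["B", "C"],
}

def is_valid_coloring(borders, coloring):
    """Check a proposed coloring in polynomial time -- the NP 'verifier':
    no two bordering nations may share a color."""
    return all(coloring[u] != coloring[v]
               for u in borders for v in borders[u])

def three_color_search(borders):
    """Brute-force search over all 3^n color assignments -- exponential time."""
    nations = list(borders)
    for colors in product("RGB", repeat=len(nations)):
        coloring = dict(zip(nations, colors))
        if is_valid_coloring(borders, coloring):
            return coloring          # decision version would output true
    return None                      # decision version would output false
```

`is_valid_coloring` runs in polynomial time — this is the efficient checking that every NP problem admits — while `three_color_search` may try up to 3^n assignments before answering.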

1.3 Turing Machines

Turing machines were described in 1936 by Alan Turing. They were not meant to be a practical computing technology, but a thought experiment about the limits of mechanical computation; thus they were never actually constructed.
o Studying their abstract properties yields many insights into computer science and complexity
theory.
o Turing machines capture the informal notion of effective method in logic and mathematics, and
provide a precise definition of an algorithm or 'mechanical procedure'.
Alan Turing characterized the computable functions by describing a machine. Though theoretical, this gave rise to the idea of computers.
A Turing machine (TM) is an abstract model of computation with a finite number of states, each labeled Y, N, H, L, or R, and transitions between states, each labeled with a read/write pair of symbols.
• Read an input symbol, move to the indicated state, and write the indicated output symbol on the tape.
• If the state is L, the machine moves left; if it is R, it moves right.
• The TM lights "YES" if it recognizes the string, "NO" otherwise.
• The TM may halt, leaving the result of the computation on the tape.

An ordinary (deterministic) Turing machine (DTM) has a transition function that, for a given state and symbol under the tape head, specifies three things:
 the symbol to be written to the tape
 the direction (left or right) in which the head should move
 the subsequent state of the finite control
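A deterministic TM with such a transition function can be simulated in a few lines of Python. The table format `delta[(state, symbol)] = (write, move, next_state)` mirrors the three items above but is an illustrative encoding, not a standard one, and the example machine is invented for this sketch:

```python
# A minimal deterministic Turing machine simulator.

def run_dtm(delta, start, accept, tape, blank="_", max_steps=10_000):
    """Simulate a DTM; return True iff it halts in the accepting state."""
    cells = dict(enumerate(tape))      # sparse tape; unwritten cells are blank
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, cells.get(head, blank))
        if key not in delta:           # no transition defined: halt and reject
            return False
        write, move, state = delta[key]
        cells[head] = write                # write the indicated symbol
        head += 1 if move == "R" else -1   # move the head right or left
    raise RuntimeError("machine did not halt within max_steps")

# Example machine: accept exactly the strings of 0s and 1s containing no 1.
delta = {
    ("scan", "0"): ("0", "R", "scan"),   # keep scanning over 0s
    ("scan", "_"): ("_", "R", "yes"),    # reached the blank: accept
}
print(run_dtm(delta, "scan", "yes", "000"))   # True
print(run_dtm(delta, "scan", "yes", "010"))   # False
```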
2. Space complexity
Complexity theory classifies problems by the resources they require: sets of problems that take more time are put in one group, sets of problems taking less time in another, and similarly for space. Formally, for a halting TM M and a function t : N → N, we say M runs in time t if, on every input of length n, M halts within O(t(n)) steps; similarly, for s : N → N, M runs in space s if, on every input of length n, it uses at most O(s(n)) cells of its working space.

2.1 Introduction

2.2 The class PSPACE

Problems in P can be solved efficiently; we can readily develop algorithms for them. NP consists of problems whose solutions can be verified efficiently: they may only be solvable in an inefficient manner, but if someone gave us a candidate solution, it would be easy to check whether that solution is correct.

2.3 PSPACE Completeness

2.4 NL completeness

3. Circuit Complexity

3.1 Polynomial size circuits

4. Polynomial Hierarchy
4.1 Polynomial Hierarchy

5. Randomized computations

5.1 One sided error: RP or CoRP and ZPP
