Lecture #2: P & NP Problems
Imagine, for instance, that you have an unsorted list of numbers, and you want to write
an algorithm to find the largest one. The algorithm has to look at all the numbers in the
list: there’s no way around that. But if it simply keeps a record of the largest number it’s
seen so far, it has to look at each entry only once. The algorithm’s execution time is
thus directly proportional to the number of elements it’s handling, which is designated as
n. Of course, most algorithms are more complicated, and thus less efficient, than the
one for finding the largest number in a list; but many common algorithms have
execution times proportional to n², or n times the logarithm of n, or the like.
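The one-pass approach described above can be sketched in Python (a minimal illustration; the function name `find_max` is our own):

```python
def find_max(numbers):
    """Return the largest element, examining each entry exactly once."""
    largest = numbers[0]
    for x in numbers[1:]:
        if x > largest:  # keep a record of the largest seen so far
            largest = x
    return largest

print(find_max([7, 3, 19, 4, 11]))  # 19
```

The loop body does a constant amount of work per element, so the running time is directly proportional to n, the length of the list.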
A mathematical expression that involves n's and n²'s and n's raised to other powers is
called a polynomial, and that's what the "P" in "P = NP" stands for. P is the set of
problems whose solution times are proportional to polynomials involving n's. Obviously,
an algorithm whose execution time is proportional to n³ is slower than one whose
execution time is proportional to n. But such differences dwindle to insignificance
compared to another distinction: between polynomial expressions, where n is the
number being raised to a power, and expressions where a number is raised to the nth
power, like, say, 2ⁿ. If an algorithm whose execution time is proportional to n takes a
second to perform a computation involving 100 elements, an algorithm whose execution
time is proportional to n³ takes almost three hours. But an algorithm whose execution
time is proportional to 2ⁿ takes 300 quintillion years. And that discrepancy gets much,
much worse the larger n grows.
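Those figures can be reproduced with a little arithmetic. Here we assume, as in the text, a machine on which the n-proportional algorithm handles 100 elements in one second (so roughly 100 steps per second; the exact figure for 2ⁿ depends on rounding, but it lands on the order of hundreds of quintillions of years):

```python
n = 100
ops_per_second = 100  # linear algorithm: 100 steps in 1 second

linear_secs = n / ops_per_second        # 1 second
cubic_secs = n**3 / ops_per_second      # 10,000 seconds, almost 3 hours
exp_secs = 2**n / ops_per_second        # astronomically large

print(f"n:   {linear_secs:.0f} s")
print(f"n^3: {cubic_secs / 3600:.1f} h")
print(f"2^n: {exp_secs / (3600 * 24 * 365.25):.1e} years")
```

The point of the exercise is not the exact constants but the shape of the growth: doubling n multiplies the cubic time by eight, while it squares the exponential term.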
NP (which stands for nondeterministic polynomial time) is the set of problems whose
solutions can be verified in polynomial time. But as far as anyone can tell, many of
those problems take exponential time to solve. Perhaps the most famous exponential-
time problem in NP is finding the prime factors of a large number. Verifying a
solution requires just a multiplication, but solving the problem seems to require
systematically trying out lots of candidates.
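That asymmetry can be illustrated with a small sketch (the function names are ours, and trial division stands in for "systematically trying out candidates"):

```python
def verify_factorization(n, p, q):
    """Verification is a single multiplication: polynomial time."""
    return p * q == n

def factor(n):
    """Find a nontrivial factor by trial division.

    The loop runs up to sqrt(n) times, which is exponential
    in the number of digits of n.
    """
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

print(factor(3127))                        # (53, 59)
print(verify_factorization(3127, 53, 59))  # True
```

For a 4-digit number this is instant; for the numbers used in cryptography, with hundreds of digits, no known classical algorithm finishes in a reasonable time, yet checking a proposed factorization stays trivial.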
Formal definitions:
Informally, NP is the set of decision problems that can be solved in polynomial time by
a "lucky algorithm": a magical algorithm that always makes the right guess among the
given set of choices. Problems such as TSP (the travelling salesman problem) or subset
sum (given a set of numbers, does there exist a subset whose sum is zero?) are not
known to be solvable in polynomial time. But NP problems are checkable in polynomial
time, which means that given a candidate solution to a problem, we can check whether
the solution is correct or not in polynomial time.
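For subset sum, the contrast between solving and checking looks like this (a brute-force sketch; function names are ours):

```python
from itertools import combinations

def verify_subset(numbers, subset):
    """Checking a candidate: a membership test plus one sum (polynomial time)."""
    return (len(subset) > 0
            and all(x in numbers for x in subset)
            and sum(subset) == 0)

def solve_subset_sum(numbers):
    """Solving by brute force: may try all 2^n - 1 non-empty subsets."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == 0:
                return list(subset)
    return None

nums = [3, -7, 1, 8, 4, -5]
print(solve_subset_sum(nums))           # [3, -7, 4]
print(verify_subset(nums, [3, -7, 4]))  # True
```

The verifier runs in time polynomial in the input size, while the solver's search space doubles with every extra number, exactly the P-versus-NP gap in miniature.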
Since deterministic algorithms are just a special case of nondeterministic ones, it can be
concluded that P ⊆ NP. What we do not know, and what has become perhaps the most
famous unsolved problem in computer science, is whether P = NP or P ≠ NP.
Before diving into the concepts of NP-hard and NP-complete, let’s define reducibility.
Reducibility:
Let L1 and L2 be two problems. Problem L1 reduces to L2 (also written L1 ∝ L2) if and only
if there is a way to solve L1 by a deterministic polynomial-time algorithm that uses, as a
subroutine, a deterministic algorithm that solves L2 in polynomial time.
NP-Hard: a problem X is NP-hard if every problem in NP reduces to X in polynomial time.
NP-Complete: a problem X is NP-complete if and only if
X ∈ NP and
X ∈ NP-Hard.
Satisfiability:
Let x1, x2, x3, … denote Boolean variables (their value is either true or false). Let xi' denote
the negation of xi. A literal is either a variable or its negation. A formula in the
propositional calculus is an expression that can be constructed using literals and the
operations AND and OR. Examples of such formulas are (x1 ∧ x2) ∨ (x3 ∧ x4') and (x3 ∧ x4') ∨
(x1 ∧ x2'). The symbol ∨ denotes OR and ∧ denotes AND. The satisfiability problem is to
determine whether a formula is true for some assignment of truth values to the
variables. In a 3-SAT problem, every clause contains at most three literals. Cook proved
in 1971 that satisfiability is NP-complete, and 3-SAT is NP-complete as well.
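A polynomial-time check of a candidate assignment can be sketched as follows. The encoding is our own choice: a formula in conjunctive normal form (the standard form for 3-SAT) is a list of clauses, each clause a list of signed integers, where i stands for xi and -i for xi':

```python
def satisfies(formula, assignment):
    """Check an assignment against a CNF formula in polynomial time.

    formula: list of clauses; each clause is a list of integers
             (i means x_i, -i means the negation of x_i).
    assignment: dict mapping each variable index to True or False.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

# (x1 OR x2 OR x3') AND (x1' OR x3 OR x4)
formula = [[1, 2, -3], [-1, 3, 4]]
print(satisfies(formula, {1: True, 2: False, 3: True, 4: False}))  # True
```

Checking visits each literal once, so it is linear in the formula's size; deciding whether *any* satisfying assignment exists, by contrast, seems to require examining up to 2ⁿ assignments of the n variables.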