
ANALYSIS OF ALGORITHMS AND BIG-O
CS16: Introduction to Algorithms & Data Structures

Outline
1) Running time and theoretical analysis
2) Big-O notation
3) Big-Ω and Big-Θ
4) Analyzing Seamcarve runtime
5) Dynamic programming
6) Fibonacci sequence

What does it mean for an algorithm to be fast?
•Ways to measure speed:
• Low memory usage?
• Small amount of time measured on a
stopwatch?
• Low power consumption?
•We’ll revisit this question after developing
the fundamentals of algorithm analysis

Running Time
•The running time of an algorithm varies
with the input and typically grows with the
input size
•The average case is often difficult to determine
•In most of computer science we focus on
the worst case running time
• Easier to analyze
• Crucial to many applications: what would
happen if an autopilot algorithm ran drastically
slower for some unforeseen, untested inputs?

How to measure running time


• Experimentally
• Write a program implementing
the algorithm
• Run the program with inputs of
varying size
• Measure the actual running
times and plot the results

• Why not?
• You have to implement the algorithm, which isn’t always feasible!
• Your chosen inputs may not fully exercise the algorithm
• The running time depends on the particular computer’s
hardware and software speed

Theoretical Analysis
•Uses a high-level description of the
algorithm instead of an implementation
•Takes into account all possible inputs
•Allows us to evaluate speed of an algorithm
independent of the hardware or software
environment
•By inspecting pseudocode, we can
determine the number of statements
executed by an algorithm as a function of
the input size

Elementary Operations
• Algorithmic “time” is measured in elementary operations
• Math (+, -, *, /, max, min, log, sin, cos, abs, ...)
• Comparisons ( ==, >, <=, ...)
• Variable assignment
• Variable increment or decrement
• Array allocation
• Creating a new object
• Function calls and value returns
• (Careful: an object's constructor and any functions it calls may perform elementary ops of their own!)
• In practice, all of these operations take different amounts of time
• For the purpose of algorithm analysis, we assume each of these
operations takes the same time: “1 operation”

Example: Constant Running Time

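A constant-time function performs the same small number of operations no matter how large its input is. As a minimal sketch in Python, consistent with the later references to first (T(n) = 2):

def first(array):
    # Input: a non-empty array
    # Output: its first element
    return array[0]   # one array access + one return: 2 ops, regardless of input size

Whether the array has ten elements or 100,000, first always performs the same two operations.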

Example: Linear Running Time


function argmax(array):
// Input: an array
// Output: the index of the maximum value
index = 0 // assignment, 1 op
for i in [1, array.length): // 1 op per loop
if array[i] > array[index]: // 3 ops per loop
index = i // 1 op per loop, sometimes
return index // 1 op

• How many operations if the list has ten elements? 100,000 elements?
• The count varies in proportion to the size of the input list: roughly 5n + 2
• We’ll be in the for loop longer and longer as the input list grows
• If we were to plot it, the runtime would increase linearly (see the sketch below)
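As an illustrative sketch (the helper argmax_with_count is hypothetical, not from the slides), the same loop can be instrumented in Python to tally operations the way the slide counts them. The exact total depends on how often the if-branch fires, but it grows linearly, in line with the 5n + 2 estimate:

def argmax_with_count(array):
    ops = 0
    index = 0
    ops += 1                               # assignment
    for i in range(1, len(array)):
        ops += 1                           # loop bookkeeping
        ops += 3                           # two array accesses + one comparison
        if array[i] > array[index]:
            index = i
            ops += 1                       # assignment (only sometimes)
    ops += 1                               # return
    return index, ops

# argmax_with_count(list(range(100)))[1] comes out close to 5 * 100 + 2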

Example: Quadratic Running Time
function possible_products(array):
// Input: an array
// Output: a list of all possible products
// between any two elements in the list
products = [] // make an empty list, 1 op
for i in [0, array.length): // 1 op per loop
for j in [0, array.length): // 1 op per loop per loop
products.append(array[i] * array[j]) // 4 ops per loop per loop
return products // 1 op

• Requires about 5n² + n + 2 operations (okay to approximate!)
• If we were to plot this, the number of operations executed grows quadratically!
• Consider adding one element to the list: the added element must be multiplied with every other element in the list
• Notice that the linear algorithm on the previous slide had only one for loop, while this quadratic one has two for loops, nested. What would be the highest-degree term (in number of operations) if there were three nested loops?

Summarizing Function Growth


• For very large inputs, the growth rate of a function becomes less affected by:
• constant factors
• lower-order terms

• Examples
• 10⁵n² + 10⁸n and n² both grow with the same slope despite differing constants and lower-order terms
• 10n + 10⁵ and n both grow with the same slope as well

[Figure: log-log plot of T(n) against n for 10⁵n² + 10⁸n, n², 10n + 10⁵, and n. In this graph (log scale on both axes), the slope of a line corresponds to the growth rate of its respective function.]

Big-O Notation
• Given any two functions f(n) and g(n), we say that
f(n) is O(g(n))
if there exist positive constants c and n₀ such that
f(n) ≤ cg(n) for all n ≥ n₀

• Example: 2n + 10 is O(n)
• Pick c = 3 and n₀ = 10
2n + 10 ≤ 3n
2(10) + 10 ≤ 3(10)
30 ≤ 30
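• Another worked example: 5n² + n + 2 (the count from possible_products) is O(n²)
• Pick c = 6 and n₀ = 2
5n² + n + 2 ≤ 6n² whenever n² ≥ n + 2, which holds for all n ≥ 2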

Big-O Notation (continued)


•Example: n² is not O(n)
• We would need n² ≤ cn, that is, n ≤ c, to hold for all sufficiently large n
• This cannot be satisfied because c must be a constant: for any n > c the inequality is false

Big-O and Growth Rate


• Big-O notation gives an upper bound on the
growth rate of a function
• We say “an algorithm is O(g(n))” if the growth
rate of the algorithm is no more than the
growth rate of g(n)
• We saw on the previous slide that n² is not O(n)
• But n is O(n²)
• And n² is O(n³)
• Why? Because Big-O is an upper bound!

Summary of Big-O Rules


•If f(n) is a polynomial of degree d, then f(n) is O(nᵈ). In other words:
• forget about lower-order terms
• forget about constant factors

•Use the smallest possible degree


• It’s true that 2n is O(n⁵⁰), but that’s not a helpful upper bound
• Instead, say it’s O(n), discarding the constant
factor and using the smallest possible degree

Constants in Algorithm Analysis


• Find the number of primitive operations executed as a
function (T) of the input size
• first: T(n) = 2
• argmax: T(n) = 5n + 2
• possible_products: T(n) = 5n² + n + 2
• In the future we can skip counting operations and replace any constants with c since they become irrelevant as n grows
• first: T(n) = c
• argmax: T(n) = c₀n + c₁
• possible_products: T(n) = c₀n² + n + c₁

•Easy to express T in big-O by dropping constants and lower-order terms
• In big-O notation:
• first is O(1)
• argmax is O(n)
• possible_products is O(n²)
•The convention for representing T(n) = c in big-O is O(1).

Big-Omega (Ω)
• Given any two functions f(n) and g(n), we say that f(n) is Ω(g(n)) if there exist positive constants c and n₀ such that f(n) ≥ cg(n) for all n ≥ n₀
• Big-Ω gives a lower bound on the growth rate of a function

Big-Theta (Θ)
• We say that f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)): there exist positive constants c₁, c₂, and n₀ such that c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀
• Big-Θ gives a tight bound: f(n) grows at the same rate as g(n)

Some More Examples

Function f(n)      Big-O    Big-Ω    Big-Θ
an + b             ?        ?        Θ(n)
an² + bn + c       ?        ?        Θ(n²)
a                  ?        ?        Θ(1)
3ⁿ + an⁴⁰          ?        ?        Θ(3ⁿ)
an + b log n       ?        ?        Θ(n)

How fast is the seamcarve algorithm?
•How many distinct seams are there for a w × h image?
• At each row, a particular seam can go down to the left, straight down, or down to the right: three options
• Since a given seam chooses one of these three options at each row (and there are h rows), from the same starting pixel there are 3^h possible seams!
• Since there are w possible starting pixels, the total number of seams is: w × 3^h
•For a square image with n total pixels (so w = h = √n), that means there are √n × 3^√n possible seams
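• To get a feel for the size: a 1000 × 1000 image (n = 10^6 pixels) has about 1000 × 3^1000 ≈ 10^480 seams, far too many to examine one by one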

Seamcarve
•An algorithm that considers every possible
solution is known as an exhaustive
algorithm
•One solution to the seamcarve problem
would be to consider all possible
seams and choose the minimum
•What would be the big-O running time of
that algorithm in terms of n input pixels?
• O(√n × 3^√n): exponential and not good

Seamcarve
• What’s the runtime of the solution we went over last
class?
• Remember: constants don’t affect big-O runtime
• The algorithm:
• Iterate over every pixel from bottom to top to populate the
costs and dirs arrays
• Create a seam by choosing the minimum value in the top row
and tracing downward
• How many times do we evaluate each pixel?
• A constant number of times
• Therefore the algorithm is linear, or O(n), where n is the
number of pixels
• Hint: we also could have looked back at the pseudocode and counted the number of nested loops (a rough sketch follows below)!
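A rough Python sketch of that structure, assuming a 2D importance grid as input (the names costs and dirs follow the slides; everything else here is illustrative, not the code from last class):

def carve_seam(importance):
    # importance: h rows by w columns of pixel "importance" values
    h, w = len(importance), len(importance[0])
    costs = [[0.0] * w for _ in range(h)]   # cheapest seam cost starting at each pixel
    dirs = [[0] * w for _ in range(h)]      # which way that cheapest seam goes next

    # Bottom row: a seam ending here costs just the pixel's own importance
    costs[h - 1] = list(importance[h - 1])

    # Populate costs and dirs from bottom to top: each pixel looks at the
    # three pixels below it (down-left, straight down, down-right)
    for row in range(h - 2, -1, -1):
        for col in range(w):
            best = col
            for c in (col - 1, col + 1):
                if 0 <= c < w and costs[row + 1][c] < costs[row + 1][best]:
                    best = c
            dirs[row][col] = best - col                      # -1, 0, or +1
            costs[row][col] = importance[row][col] + costs[row + 1][best]

    # Choose the minimum cost in the top row and trace the seam downward
    col = min(range(w), key=lambda c: costs[0][c])
    seam = [col]
    for row in range(h - 1):
        col += dirs[row][col]
        seam.append(col)
    return seam   # seam[row] is the column the seam passes through in that row

Every pixel is visited a constant number of times in the nested loops, so the running time is O(n) for n = w × h pixels, matching the analysis above.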

Seamcarve: Dynamic Programming


• How did we go from an exponential algorithm to a
linear algorithm!?
• By avoiding recomputing information we already
calculated!
• Many seams cross paths, and we don’t need to
recompute the sum of importances for a pixel if we’ve
already calculated it before
• That’s the purpose of the additional costs array
• This strategy, storing computed information to
avoid recomputing later, is what makes the
seamcarve algorithm an example of dynamic
programming

Fibonacci: Recursive
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, …
• The Fibonacci sequence is usually defined by
the following recurrence relation:
F₀ = 0, F₁ = 1
Fₙ = Fₙ₋₁ + Fₙ₋₂
• This lends itself very well to a recursive
function for finding the nth Fibonacci number
function fib(n):
if n == 0:
return 0
if n == 1:
return 1
return fib(n-1) + fib(n-2)

Fibonacci: Recursive
• In order to calculate fib(4), how many times does
fib() get called?
fib(4)
├── fib(3)
│   ├── fib(2)
│   │   ├── fib(1)
│   │   └── fib(0)
│   └── fib(1)
└── fib(2)
    ├── fib(1)
    └── fib(0)

fib(1) alone gets recomputed 3 times!

• At each level of recursion, the algorithm makes twice as many recursive calls as the last. So for fib(n), the number of recursive calls is approximately 2ⁿ, making the algorithm O(2ⁿ)!

Fibonacci: Dynamic Programming
• Instead of recomputing the same Fibonacci numbers
over and over, we’ll compute each one only once,
and store it for future reference.
• Like most dynamic programming algorithms, we’ll
need a table of some sort to keep track of
intermediate values.
function dynamicFib(n):
fibs = [] // make an array of size n+1
fibs[0] = 0
fibs[1] = 1

for i from 2 to n:
fibs[i] = fibs[i-1] + fibs[i-2]

return fibs[n]
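A runnable Python sketch of the same idea, handling the base cases n = 0 and n = 1 explicitly (something the pseudocode above glosses over):

def dynamic_fib(n):
    if n < 2:
        return n                      # fib(0) = 0, fib(1) = 1
    fibs = [0] * (n + 1)              # table of intermediate results
    fibs[1] = 1
    for i in range(2, n + 1):
        fibs[i] = fibs[i - 1] + fibs[i - 2]
    return fibs[n]

# dynamic_fib(10) == 55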

Fibonacci: Dynamic Programming (2)
•What’s the runtime of dynamicFib()?

•Since it only performs a constant number of operations to calculate each Fibonacci number from 0 to n, the runtime is clearly O(n).

•Once again, we have reduced the runtime of an algorithm from exponential to linear using dynamic programming!

Readings
• Dasgupta Section 0.2, pp 12-15
• Goes through this Fibonacci example (although without
mentioning dynamic programming)
• This section is easily readable now
• Dasgupta Section 0.3, pp 15-17
• Describes big-O notation far better than I can
• If you read only one thing in Dasgupta, read these 3
pages!
• Dasgupta Chapter 6, pp 169-199
• Goes into detail about Dynamic Programming, which it
calls one of the “sledgehammers of the trade” – i.e.,
powerful and generalizable.
• This chapter builds significantly on earlier ones and will
be challenging to read now, but we’ll see much of it this
semester.
