
ECS 36C

Data Structures
Spring 2024 - Instructor Siena Saltzen
Administrative
https://canvas.ucdavis.edu/courses/877554/assignments/syllabus
● No discussion/OH the first week.
● This week I am deciding the homework format, and I will update Canvas when applicable. It will either be ~8 HWs with 2 drops or ~5 HWs with one drop.
● There will be a getting-started survey, worth 2 points, up this evening.
● And a practice homework assignment, up this weekend.

Let’s do Lists
An array is a series of elements of the same type placed in contiguous memory
locations that can be individually referenced by adding an index to a unique identifier.

A typical declaration for an array in C++ is: ```type name [elements];```


where type is a valid type (such as int, float...), name is a valid identifier, and the
elements field (always enclosed in square brackets []) specifies the length of
the array in terms of the number of elements.
Using Arrays

Arrays need to have a predefined size that needs to be specified in their declaration and
can't change afterwards.

If we try to read an element before we've written one, the value we get back (the a in the
slide's example) will be undefined, since the memory in the array is not initialized to any
value when it's created.

The same as with any other type, the previous declaration already reserves memory for 10 ints;
we don't need to use new, and that memory will be uninitialized. To access them, you can just
index into the array.

But if we try to reach past the end, most probably our application will crash, because the
memory address at arr + 25 is outside the memory that the operating system has assigned to
our application. (See the sketch below.)
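The slide's code isn't shown here, so as a minimal sketch of these behaviors (the names arr and a mirror the discussion above; they're illustrative):

```
#include <iostream>

int main() {
    int arr[10];     // reserves memory for 10 ints; no new needed

    int a = arr[3];  // undefined! this memory was never initialized
    std::cout << a << '\n';

    arr[0] = 7;      // normal access: assign and read by index
    std::cout << arr[0] << '\n';

    // arr[25] = 1;  // out of bounds: arr + 25 may lie outside the memory
                     // the OS assigned to us, most probably crashing us
    return 0;
}
```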
Why this format?

int: describes the size of the “chunks” needed


10: describes how many chunks

int = 4 bytes
size of array = 40 bytes of memory

If it starts at 0x1000, then addresses 0x1000 through 0x1027 (40 bytes) are reserved for that array.


Accessing Stuff Review
When you say just arr, you are actually referring to the memory address of arr[0],
because the variable name for an array is always a pointer to the first element in the
array (if you don't specify the index). Putting an asterisk before a pointer dereferences
it: it gives you the contents of memory at that address.

So *(arr) will return the contents of memory at 0x1000, which happens to be the same
as arr[0].

When you do arr + i, the compiler is smart enough to know how many bytes to add to
arr because it knows the type of the array. Since it is int, it will add 4 * i.
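A quick sketch of that arithmetic (actual addresses will differ from the 0x1000 example, of course):

```
#include <iostream>

int main() {
    int arr[4] = {10, 20, 30, 40};

    std::cout << arr << '\n';         // address of arr[0]
    std::cout << *arr << '\n';        // contents at that address: 10
    std::cout << *(arr + 2) << '\n';  // compiler adds 2 * 4 bytes: 30
    std::cout << arr[2] << '\n';      // exactly the same as *(arr + 2)
    return 0;
}
```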
Okay Cool, so what?

C++ Program to add an element in the array at the beginning

Given array A = [10, 14, 65, 85, 96, 12, 35, 74, 69]

After inserting 23 at the “beginning”, the array will look like this:

[23, 10, 14, 65, 85, 96, 12, 35, 74, 69]

https://www.tutorialspoint.com/cplusplus-program-to-add-an-element-in-the-array-at-the-beginning
Adding a Thing
So, we have an array A with nine elements in it. We are going to insert another element 23, at the
beginning of array A.
To insert an element at the beginning, we must shift all elements one place to the right;
then the first slot will be empty, and we put the new element in that position.
Show Code Here
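The linked tutorialspoint page has a full program; here is a minimal sketch of the same shift-and-insert idea (the capacity of 10 is an assumption so the new element fits):

```
#include <iostream>

int main() {
    // capacity 10: room for one extra element at the end
    int A[10] = {10, 14, 65, 85, 96, 12, 35, 74, 69};
    int n = 9;

    // shift every element one place to the right, starting from the back
    for (int i = n; i > 0; --i)
        A[i] = A[i - 1];

    A[0] = 23;  // the first slot is now empty; drop the new element in
    ++n;

    for (int i = 0; i < n; ++i)
        std::cout << A[i] << ' ';  // 23 10 14 65 85 96 12 35 74 69
    std::cout << '\n';
    return 0;
}
```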
Okay Great
Could you use vectors? Yes. Not the point.

However, consider these concepts:


● Is it efficient to add to the front of an array?
● Is shifting values efficient?
● What about deleting?
● What about assigning memory for very large arrays?
● Reversing an array in place?
Array vs Linked List

What is a Linked List?

A linked list is a fundamental data structure in computer science. It consists of nodes,
where each node contains data and a reference (link) to the next node in the sequence.
https://www.geeksforgeeks.org/data-structures/linked-list/#
Why is this a helpful concept?

Array: Arrays store elements in contiguous memory locations, resulting in easily
calculable addresses for the elements stored, and this allows faster access to an
element at a specific index.

Linked List: Linked lists are less rigid in their storage structure and
elements are usually not stored in contiguous locations, hence they
need to be stored with additional tags giving a reference to the next
element.
Let’s Look at Linked Lists

https://www.explainxkcd.com/wiki/index.php/2483:_Linked_List_Interview_Problem
For now, let's start with singly linked lists

A singly linked list is a linear data structure in which the elements are
not stored in contiguous memory locations and each element is
connected only to its next element using a pointer.
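Here's what one node might look like in C++, as a minimal sketch (the name Node is ours, not from the slides; later sketches reuse it):

```
// One node of a singly linked list of chars
struct Node {
    char data;   // the element stored in this node
    Node* next;  // link to the next node; nullptr marks the end
};
```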
How to Insert a Node at the Front/Beginning of LL

To insert a node at the start/beginning/front of a Linked List, we need to:

Make the new node link to the original first node of the Linked List
Remove the head from the original first node of the Linked List
Make the new node the Head of the Linked List.
Add to the head
How many steps was that?
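Just a couple of pointer assignments. A minimal sketch, reusing the Node struct from above (pushFront is our name for it):

```
// Insert a new node at the front of the list.
// 'head' is passed by reference so the caller's head pointer is updated.
void pushFront(Node*& head, char value) {
    Node* newNode = new Node{value, head};  // link new node to old first node
    head = newNode;                         // new node becomes the Head
}
```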

We will come back to LLs in more detail and work through the other operations;
however, I find that a general understanding of these two structures helps lay the
foundation for ALGORITHM ANALYSIS.
List Representations

What's this? It's a logical representation of this list:
[ 'B', 'F', 'G', 'D', 'A', 'C', 'E' ]
List

You may not have seen the first representation before, but you’ve
certainly seen the second representation. The list with the square
brackets represents a series of contiguous locations in memory, each
containing a pointer to its respective list element (we could call them
“objects”).
Given this list and a character, how do you determine if the
character is in the list?
Given this list and a character, how do you
determine if the character is in the list?
1. Start with the first element of the list, look at
the value there,
a. if it’s what I’m looking for then success
2. Else move to the next element of the list and
a. do it again.
3. If I run out of list
a. then failure.

What does that look like in code?

https://www.geeksforgeeks.org/cpp-program-for-linear-search/
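A sketch along the lines of the linked GeeksforGeeks program (the function name is ours):

```
#include <cstddef>

// Linear search: return the index of target in arr[0..n-1], or -1 on failure.
int linearSearch(const char arr[], std::size_t n, char target) {
    for (std::size_t i = 0; i < n; ++i)
        if (arr[i] == target)  // one comparison per element visited
            return static_cast<int>(i);
    return -1;                 // ran out of list: failure
}
```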
B

How many times does the comparison happen when looking for the letter "B"?

E

How many times does the comparison happen when looking for the letter "B"? How
about "E"? How about "Q"? What if the list is bigger?
How many comparisons to find “B” now? To find “X”?
Both
Can we generalize over these two examples?
In both lists, the best case number of comparisons to find what we’re looking for
is 1.
In both lists, the worst case number of comparisons would be n, where n is the
length of the list.

What do you think the average number of comparisons would be? Something like n/2?
Linked List

Now let’s look at a different linear collection of items. This is a singly linked list (sometimes called
a one-way linked list).
You can create this kind of list in C++; we just did! But it is definitely not the default type list()
It is, on the other hand, how fundamental lists are represented in other programming
languages, such as Lisp, Scheme, and Haskell.
Linked lists are more flexible and adaptable and are best suited for situations where the size of
the collection is not known.
Linked List Cont.

In this list model, the pairs of squares represent small memory blocks, each
containing two pointers, one to the corresponding list object and one to the
next item (i.e., the next memory block) in the list.
What to do?

Now, given this new list and a character, how do you determine if the
character is in the list? We don’t expect you to know how to do this…yet.
So here’s a pseudo-code function that might work:
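Something like this (written as C++ rather than pseudo-code, reusing the Node struct from earlier):

```
// Walk the list one link at a time; nullptr means we ran out of list.
bool contains(const Node* head, char target) {
    for (const Node* cur = head; cur != nullptr; cur = cur->next)
        if (cur->data == target)
            return true;  // success
    return false;         // failure
}
```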
How many comparisons now will be required to find B?
How many comparisons to find E?
What happens when the list is bigger? Generalize again…
Two different data structures, with associated algorithms for finding
something in the structure. Which one is better?

If we accept that the number of comparisons is an approximation of time, then
the time to find something in either of these structures is the same:
proportional to the size of the structure itself. So given either list of length n,
the time to find something is proportional to n.

Can we make the search substantially more efficient?


● What feature of a dictionary makes it easy to find the meaning of a

given word?

● What feature of a phone book makes it easy to find someone’s phone

number?

● They are both lists whose contents are sorted.


Describe a search algorithm that could take advantage of
this sorting. Where would you start the search?
FIND THE THING

If you have run out of list (i.e. empty list) then failure
• Look at the item in the middle of the list
• If that’s the target then success
• Otherwise, does the target come before or after what’s at the
middle?
• Based on the answer to that question, the sublist to the left
or right of the middle becomes the new list to search
• Go back to the top and do it again
Binary Search!
Binary Search

This is a type of divide-and-conquer search. If the search always begins in the
middle, it's called a binary search.
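A minimal iterative sketch (the slides defer the recursive version until later; the name is ours):

```
// Binary search over a sorted array: return the index of target, or -1.
int binarySearch(const char arr[], int n, char target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {                 // run out of list? then failure
        int mid = lo + (hi - lo) / 2;  // look at the item in the middle
        if (arr[mid] == target)
            return mid;                // success
        if (arr[mid] < target)
            lo = mid + 1;              // target comes after the middle
        else
            hi = mid - 1;              // target comes before the middle
    }
    return -1;
}
```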

How many comparisons to find E?


Binary Search

Note to me: Do some examples


How many comparisons to find E?
Is it here? No.
Is it here? No.
Is it here? Yes.
Best case: 1 comparison
Worst case: 3 comparisons
What if the sorted list is bigger?
How many comparisons to find L? 1 comparison - best
How many comparisons to find E? 4 comparisons - worst
Can you figure out the relationship between the size of the
list and the worst case search performance?
list size = 7 → worst case comparisons = 3
list size = 15 → worst case comparisons = 4
list size = 31 → worst case comparisons = 5
The time T to do the search (which correlates to the number of
comparisons) on the sorted list of size n is some function f of n. What’s
the function f? T(n) = some f(n)
T(n) = some f(n)

f(n) = log2(n + 1)

T(n) = f(n) = log2(n + 1)
That’s pretty good search performance. Each time we double the size of
the list being searched, we only incur at most one extra comparison!

(And if you’re not comfortable with logarithms, start brushing up. We’ll be
using them in this class.)
Big O

This is the basis for Analysis of Algorithms! Often referred to as Big O in
computer science.

"Big O notation is a mathematical notation that describes the limiting
behavior of a function when the argument tends towards a particular value
or infinity."
Back to LL

Do we gain the same benefit by sorting the elements of this list? (Bonus Point)
What would be the algorithm for performing a binary style of search on
this list?
Let’s pose the question a different way: is there a simple one-step way
to get at the middle element of this list?
Do you see a problem now with getting to the middle of this list in one
simple step?
With the original Python list, finding the midpoint of the unsearched
remainder of the list was simple arithmetic.
Here it requires traversing half the links, and if we get to the middle how
do we go left? And wouldn’t we have already searched to the left anyway?
This isn’t promising.
Just for fun! Let's think about how to get to the middle of the list.

In regular pythonese, it's simple arithmetic. n//2*

Can we get to the center of this list with some more thinking? (+0.1 bonus points)

*(Can use different calculation to determine where the split falls in even len lists)
Reminder
Unlike arrays, linked list elements are not stored at a contiguous location;
the elements are linked using pointers. Each node in the linked list will have
two things, one is the data and the other is a reference to the next element.
The last node will have a null in the reference.
Pointers! (python style)
Tortoise and Hare

In this method, two pointers are used to traverse a linked list.


One pointer (slow) should be moved one space, while the other pointer (fast) should be
moved two spaces.
When the fast pointer reaches the end of the linked list, the slow pointer will reach the
center.
Return the slow pointer value to get the middle element.
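A sketch of the same idea in C++ (middleElement is our name; it reuses the Node struct from earlier and assumes a non-empty list):

```
// Two pointers: when 'fast' reaches the end, 'slow' is at the middle.
char middleElement(const Node* head) {
    const Node* slow = head;  // moves one space at a time
    const Node* fast = head;  // moves two spaces at a time
    while (fast != nullptr && fast->next != nullptr) {
        slow = slow->next;
        fast = fast->next->next;
    }
    return slow->data;  // middle element (second of the two middles if even length)
}
```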

● Time Complexity: O(n)

● Why?


Let’s compare the sorted Python list to the sorted singly linked list.
Now which one is better? Why?
Binary Search Array vs LL

Array = O(log(n)) vs. LL = O(n)


Why? Well, to fully understand, we would want to code out both
options, but that requires Recursion which we will get to later.
Just thinking about it, it makes sense though. In an array, you get to
throw out half of the list every time you make a comparison with a
simple calculation for the middle, reducing the problem set.
In a LL, you need to traverse the list each time to find the middle, which really
does not give you any benefit over a linear search, and may take more actual time.

*I tend to use array/list interchangeably when talking about python. Please interrupt me if
there is any confusion.
LL

So. Are Linked Lists Legitimately Lame?


Well for binary search /probably/.
But what if you want to insert a new element, C, in this
sorted list? What’s the algorithm for insertion? What if
the list only has room for 7 elements? And what if you
want to delete an element from the middle of the list?
Normal Array
Think of how a normal array is created and used.
● You set aside a chunk of memory of your requested size
● Stuff is added into it.
● Oh no! No more room! RESIZE
○ Allocate a larger chunk of memory
■ potentially need to find room
○ Copy all previous values
○ Rewrite them in the new space
○ How much extra space did you allocate? Is it enough? Will you do this again?

What about inserting an Item?


● Have a full array
● Find location of new item
● Copy Everything after it and rewrite in new location
○ Is there room? Do we need to resize?
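A rough sketch of that grow-and-copy dance (doubling the capacity is one common policy, not the only one; the function name is ours):

```
#include <algorithm>  // std::copy

// Allocate a larger chunk, copy all previous values into it,
// and release the old chunk. Returns the new block.
int* grow(int* old, int size, int& capacity) {
    capacity *= 2;                       // how much extra space? enough?
    int* bigger = new int[capacity];     // potentially need to find room
    std::copy(old, old + size, bigger);  // copy and rewrite everything
    delete[] old;
    return bigger;
}
```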
What if you want to insert a new element, C, in this sorted list? How do
you do the insertion in this case?
Start at the front of the list and traverse to the insertion point, create the new list
element, copy the link from B into the link from C, and finally modify the link from B
so that it points to C.
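Here is that algorithm as a sketch (insertSorted is our name; Node as before):

```
// Insert value into a sorted singly linked list, keeping it sorted.
void insertSorted(Node*& head, char value) {
    if (head == nullptr || value < head->data) {  // new first element
        head = new Node{value, head};
        return;
    }
    Node* cur = head;  // traverse to the insertion point (e.g. B)
    while (cur->next != nullptr && cur->next->data < value)
        cur = cur->next;
    // copy the link from B into the link from C, then point B at C
    cur->next = new Node{value, cur->next};
}
```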
If you’re doing lots of insertions, which data structure would you rather be
using now?

The question of “which is better?” is an intentional red herring.

Which is “better” depends on the context: how will you be using the data structure
and its associated algorithms?

Lots of searching/few insertions? The Python list seems better.

Many insertions/little searching? The linked list seems better.

Realistically, if you want both (and you probably do, plus deletions) there are better
choices. More about these in the future.
Questions like “which data structure is better?” or “which algorithm is
better?” may not have absolute answers.
“It depends...” may be the beginning of many such analyses.
When you try to answer these questions, you may have to wrestle with
trade-offs, just like the trade-off in the previous example:
the linked list representation trades space (i.e., uses extra memory) to
gain time (i.e., faster insertion and deletion).
Time-space trade-offs are common.
Why
We just talked about why LLs are kinda lame, right? So why are we studying the classic
algorithms when there are better options out there?
Modern programming languages usually provide abstractions which interact with the
sequential data at the memory level, providing access to this data while using arrays, linked
lists, hybrids of the aforementioned technologies, or other approaches, and the programmer
doesn't necessarily need to care one way or another.
Knowing the underlying concepts is still useful, however, when creating fast running code
which scales well to large data, avoiding (e.g.) traversing the list over and over again, or
performing particularly inefficient operations.
They are useful! - like prepackaged intelligence in a can - don’t have to work hard to come
up with your own solution.
Also… The interviewers still love this stuff.
O of What

So, at this point we’ve talked about complexity a little bit, and showed how to start
generalizing some runtimes.
If you read your textbook (or paid attention yesterday), you've seen things like O(n)
and O(1).
What does O(n) mean? O(1)?
They're how we compare algorithms regardless of operating system/hardware.

In computer science, the time complexity is the computational complexity that describes the amount of time it takes to run an
algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm,
supposing that each elementary operation takes a fixed amount of time to perform https://en.wikipedia.org/wiki/Time_complexity
Background
Computational complexity is a field of computer science that analyzes
algorithms based on the amount of resources required to run them. The
amount of required resources varies based on the input size, so the
complexity is generally expressed as a function of n, where n is the size of
the input.
It is important to note that when analyzing an algorithm we can consider both
time complexity and space complexity. Space complexity is basically the
amount of memory space required to solve a problem in relation to the input
size. Even though space complexity is important when analyzing an
algorithm, here we will focus only on time complexity.
Best Worst Average

When analyzing the time complexity of an algorithm we may find three
cases: best-case, average-case, and worst-case. Let's understand what
they mean. Consider our previous linear search example:
● Best-case: this is the complexity of solving the problem for the best
input.
● Average-case: this is the average complexity of solving the problem.
This complexity is defined with respect to the distribution of the values
in the input data.
● Worst-case: this is the complexity of solving the problem for the worst
input of size n.

Usually, when describing the time complexity of an algorithm, we are talking about the worst-case.
Big-O notation, sometimes called “asymptotic notation”, is a mathematical
notation that describes the limiting behavior of a function when the argument
tends towards a particular value or infinity.

In computer science, Big-O notation is used to classify algorithms according to how
their running time or space requirements grow as the input size (n) grows. This
notation characterizes functions according to their growth rates: different functions
with the same growth rate may be represented using the same O notation.

This is independent of the actual time that the algorithm may take. For example, I
could rewrite the linear search to print out each value, ask the user what they think
of each item, and then calculate each item to the 100th power. This would still be an O(n)
function, as the growth rate is still linearly proportional to the input…

https://towardsdatascience.com/understanding-time-complexity-with-python-examples-2bda6e8158a7
Common Complexities and Their Names

● We can assume that the Big-O notation gives us the
algorithm's approximate run time in the worst case.
There is a lot more math involved in the formal definition
(luckily I will only make you do some of it…)
● When using the Big-O notation, we describe the
algorithm's efficiency based on the increasing size of the
input data (n).
● We will keep talking about algorithm efficiency like this
throughout the class while showing you some of the flagship
examples.
Visual Representation
O of What
Why learn about comparing algorithms?
• You want to make intelligent choices. A poor choice may prevent the software
you develop from completing its task in reasonable time or in reasonable space
What do you judge them on?
• Time (how long to run?)
• Space (how much memory does it use?)
• Other attributes - Expensive operations (e.g. Input/Output) - Elegance,
cleverness - Energy, power (ask Google or bitcoin miners) - Ease of programming
Additionally, Big O is extremely important when trying to communicate with other
programmers.
How Long Do Things Take
An intuitive way of stating this problem (the Travelling
Salesman Problem) is that given a list of cities and the
distances between pairs of them, the task is to find the
shortest possible route that visits each city exactly once
and then returns to the origin city.

A naïve solution solves the problem in O(n!) time


(where n is the size of the list), simply by checking
all possible routes, and selecting the shortest
one. However, this approach will take a long time
as the number of possible routes increases
superexponentially as more nodes are included.
* Show Timing Module Python
Just for fun, how can you find how long a series of
operations takes in Python?

https://www.explainxkcd.com/wiki/index.php/399:_Travelling_Salesman_Problem
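The slides demo Python's timing module; the same idea in C++ (the course language) uses the standard <chrono> clocks. A sketch:

```
#include <chrono>
#include <iostream>

int main() {
    auto start = std::chrono::steady_clock::now();

    long long sum = 0;
    for (int i = 0; i < 1'000'000; ++i)  // the work being timed
        sum += i;

    auto stop = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    std::cout << "sum = " << sum << " took " << us.count() << " us\n";
    return 0;
}
```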
Concrete Example Time
Timing and Fibonacci
How long does this take? A second? A
minute?
And what is this Fibonacci thing you’re
talking about?
An algorithm for computing the nth
Fibonacci number.
Why Fibonacci numbers? No special
significance...it's just a program that's
easy to analyze.
Where did this Fibonacci thing come
from? I'm glad you asked...
Bunny Math

Leonardo of Pisa, or Leonardo Pisano, or Leonardo


Bonacci (1170 - 1250), also known as Fibonacci,
came up with a model of growth in an idealized
bunny (really) population.
Assuming that
● In the first month there is just one newly-born
pair
● New-born pairs can have kids in their second month
● Each month every pair spawns a new pair,
● and the bunnies never die

*No promises of historically accurate art or bunny depictions


Infinite Bunnies
SO:
● if we have A pairs of breeding and newly-born bunnies in month N
● and we have B pairs in month N+1,
● then in month N+2 we'll have A+B pairs.
The numbers are 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 and so on.
Fibonacci was wrong about the growth of bunny populations, but his numbers live on in mathematical history.
Here's how to compute the Fibonacci numbers*:
fib(1) = 1
fib(2) = 1
fib(n) = fib(n - 1) + fib(n - 2)

This recursive version is horrifyingly slow, so for efficiency we use an iterative
version. Plus, you may not know recursion yet. (And we're saving the really slow version for later.)
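The slide's code isn't shown here, so here is a sketch of the iterative version, with the operation counts we're about to use marked in comments:

```
// Iterative Fibonacci: fib(1) = fib(2) = 1, fib(n) = fib(n-1) + fib(n-2)
long long fib(int n) {
    long long a = 1, b = 1;        // 2 operations going into the loop
    for (int i = 3; i <= n; ++i) { // the loop runs n - 2 times
        long long next = a + b;    // ~4 operations per iteration
        a = b;
        b = next;
    }
    return b;                      // 1 operation returning
}
```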
Fib Cont
How long does this take? A second? A minute? It depends on n. So
we'll approximate the runtime as a function of the value of the input, n.
There are lots of factors we could consider:
● What machine is it running on?
● What language is it written in?
● What compiler was used?
● How was it programmed?
But we want a basis for comparison that’s independent of these
implementation details. Consequently, we want to count just “basic
operations” like arithmetic, and memory access, while ignoring the
details given above.
How many operations when this algorithm is run?
2 going into the loop
4 each time the loop is executed
the loop is executed n - 2 times
1 returning from the function
Total: 2 + (4)(n - 2) + 1
If we’re ignoring details, does it make sense to be so
precise? It’s educational now, but later we’ll see that we
can do this more simply and ignore the details.
Run as fun() of input
The run time of iterative Fibonacci is (depending on the details of how we count and our
implementation):
2 + (4)(n - 2) + 1
which simplifies to
4n - 5
Since we’ve abstracted away exactly how long the different operations take, and on
what computer we’re running, does it make sense to say 4n - 5 instead of 4n - 6 or 5n -
3 or ...? What matters here is n. As n gets bigger, the run time grows linearly in
proportion to n. (We'll formalize this in the very near future.) The 4 and the 5 are just
details.
What if there are many possible inputs, as in the case of linear search in a
list? Does that change anything?

What if the item is the first in the list? The last in the list? Not in the list?
Which run time should we report?
There are different kinds of analysis we could report:
• Best Case
• Worst Case
• Average Case
• Common Case
• Amortized
• and so on...

Amortized:
In accounting, amortization refers to expensing the acquisition cost minus the residual value of
intangible assets in a systematic manner over their estimated "useful economic lives" so as to
reflect their consumption, expiry, and obsolescence, or other decline in value as a result of use
or the passage of time. (Wikipedia)

In computer science and algorithms, amortized analysis is a technique used to estimate the
average time complexity of an algorithm over a sequence of operations, rather than the
worst-case complexity of individual operations.

For example, for a dynamic array that doubles in size when needed, normal asymptotic analysis
would only conclude that adding an item to it costs O(n), because it might need to grow and
copy all elements to the new array. Amortized analysis takes into account that in order to have
to grow, n/2 items must have been added without causing a grow since the previous grow, so
adding an item really only takes O(1) (the cost of O(n) is amortized over n/2 actions).
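A sketch of that doubling dynamic array (DynArray is our toy name, not std::vector):

```
// Dynamic array with doubling growth: most pushes are O(1);
// the occasional O(n) grow is amortized over the pushes between grows.
struct DynArray {
    int* data = new int[1];
    int size = 0;
    int capacity = 1;

    void push(int value) {
        if (size == capacity) {       // full: grow (rare, O(n))
            capacity *= 2;
            int* bigger = new int[capacity];
            for (int i = 0; i < size; ++i)
                bigger[i] = data[i];  // copy all elements over
            delete[] data;
            data = bigger;
        }
        data[size++] = value;         // common case: O(1)
    }

    ~DynArray() { delete[] data; }
};
```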
Big-O notation

Going back to the iterative Fibonacci example, we calculated the function for the run
time to be 4n - 5. Then we argued that the 4 and the 5 are noise for our purposes.
One way to say all that is “The running time for the Fibonacci algorithm finding the
nth Fibonacci number is on the order of n.”
In Big-O notation, that’s just T(n) is O(n)
T(n) is our shorthand for the runtime of the function being analyzed.
The O in O(n) means “order of magnitude”, so Big-O notation is clearly, and
intentionally, not precise. It’s a formal notation for the approximation of the time (or
space) requirements of running algorithms.
Formal Definition

There's a formal mathematical definition for Big-O that we're obliged to discuss:

T(n) is O(f(n)) if there are two positive constants, n0 and c, and a
function f(n) such that cf(n) >= T(n) for all n > n0
In other words, as n gets sufficiently large (larger than n0), there is some
constant c for which the processing time will always be less than or equal
to cf(n), so cf(n) is an upper bound on the performance. The performance
will never be worse than cf(n) and may be better.

positive means greater than zero...it does not include zero


https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/
One more time. Here's what this:

T(n) is O(f(n)) if there are two positive constants, n0 and c, and a function f(n)
such that cf(n) >= T(n) for all n > n0

really means in a practical sense:

If you want to show that T(n) is O(f(n)), then find two positive constants, n0 and c,
and a function f(n) that satisfy the constraints above. For example...
Big-O arithmetic

For our iterative Fibonacci example, T(n) = 4n - 5. We want to show that T(n) is O(n).
So we set up our inequality like this:
4n - 5 <= cn
Then we do some algebra to isolate n:
4n <= cn + 5
n <= cn/4 + 5/4
Now pick some value for c. Let's use 4 to cancel the denominator.
n <= 4n/4 + 5/4
n <= n + 5/4
c = 4
Great.
Now we pick some value for n0, substitute it for n, and see if the inequality holds.

* Desmos
Hmmm, let's try 1.
1 <= 1 + 5/4

1 is always less than or equal to 1 + 5/4, so in choosing c = 4 and n0 = 1,
we have shown that 4n - 5 is O(n),
because 4n - 5 <= 4n for n >= 1. T(n) is O(n). Yippee!


This is an easy example, but it is what we expect you to do on your homework.

For the first section: for each problem in code, find T(n), the same way we
did for the Fibonacci sequence and search examples. Then decide on and
prove the O bound by providing c and n0.

Let's do another example to find T(n):

https://nedbatchelder.com/text/bigo.html
Okay. Next Steps again:
Examining some code, we determine that T(n) = 3n^2 + 6n and we think that
T(n) is O(n^2). How do we prove it?
3n^2 + 6n <= cn^2
3n(n + 2) <= cn^2
n + 2 <= cn/3
n <= cn/3 - 2
Now we let c = 3 to cancel the denominator. That gives us
n <= n - 2. That won't work.
Can we pick another c? Remember, the proof only requires that a c exists
that allows the inequality.
Let's try c = 6, a multiple of 3 that will also simplify things by
cancelling the denominator.
That gives n <= 2n - 2. That's much more promising...
3n^2 + 6n <= cn^2
3n(n + 2) <= cn^2
n + 2 <= cn/3
n <= cn/3 - 2
n <= 2n - 2 with c = 6
If we let n0 = 1 and substitute it for n, we get
1 <= 2 - 2
1 <= 0
That won’t do.
3n^2 + 6n <= cn^2
3n(n + 2) <= cn^2
n + 2 <= cn/3
n <= cn/3 - 2
n <= 2n - 2 with c = 6
If we let n0 = 2 and substitute it for n, we get
2 <= 4 - 2
2 <= 2
That will do nicely.
Recap
Examining some code, we determine that T(n) = 3n^2 + 6n and we think that T(n) is
O(n^2). How do we prove it?
3n^2 + 6n <= cn^2
3n(n + 2) <= cn^2
n + 2 <= cn/3
n <= cn/3 - 2
n <= 2n - 2 with c = 6
Choosing c = 6 and n0 = 2, we have shown that 3n^2 + 6n is O(n^2) because
3n^2 + 6n <= 6n^2 for n >= 2.
But wait! This is a little bit misleading, as was the previous example. Are we sure that
this works for all n >= 2?

It is possible that you find values for n and c where it all appears to work, but then you
try out larger values of n and the inequality no longer holds. In that case, your proof
isn't a proof.

So in this class, make sure that, once you have found a working n and c, you try some
larger values of n. There's an example of how you might be deceived coming up.
The Big-O definition simply(?) says that there is a point n0
such that for all values of n that are past this point, T(n) is
bounded by some multiple of f(n). Thus, if the running time
T(n) of an algorithm is O(n^2), we are guaranteeing that at
some point we can bound the running time by a quadratic
function (a function whose high-order term involves n^2).

Big-O says there’s a function that is an upper bound to the worst-case performance
for the algorithm.
Note however that if T(n) is linear and not quadratic, you
could still say that the running time is O(n^2).

It’s technically correct because the inequality holds. However, O(n) would
be the more precise claim because it’s an even lower upper bound.

Desmos Example
Big-O is for expressing how run time or memory requirements grow as a function of the
problem size. Your book* has a nice table listing commonly-encountered rates.
O(1) Constant
O(log n) Logarithmic
O(n) Linear
O(n log n) Log-linear
O(n^2) Quadratic
O(n^3) Cubic
O(n^k) Polynomial – k is constant
O(2^n) Exponential
O(n!) Factorial
