Class Notes


# Data Structure: Section-B: 7 Questions

- MCQ-based & mostly all questions are concept-oriented
- 1-2 questions based on pseudocode/algorithms
-------------------------------------------------------------------------
- To learn Data Structures is not to learn any programming language; to learn DS is to learn
"algorithms".
- As we can implement DS algorithms by using any programming language, we will implement the
algorithms by using the "C programming language".
------------------------------------------------------------------------------------
Q. Why is there a need for data structures?

Q. What is a computer?
- A computer is a machine/hardware/digital device that can be used to perform different tasks
efficiently and effectively.
- basic functions of a computer:
1. data storage
2. data processing
3. data movement
4. control

- there is a need for "data structures" to achieve three things in programming:
1. efficiency
2. abstraction
3. reusability

Q. What is a data structure?
- it is a way to store data in memory (i.e. in the main memory/RAM) in an organized manner,
so that operations (like addition, deletion, searching, sorting, traversal etc...) can be performed on it
efficiently.
- in short, the "structure of data".

- there are two types of data structures:

1. "linear/basic data structures": data elements get stored in memory in a linear manner and hence
can be accessed in a linear manner.
- array
- structure & union
- class
- linked list
- stack
- queue

2. "non-linear/advanced data structures": data elements get stored in memory in a non-linear
manner and hence can be accessed in a non-linear manner.
- tree (hierarchical)
- graph (network)
- hash table (associative i.e. key-value pairs)
- binary heap
etc...

+ "array": it is a collection/list of logically related similar type of elements which gets stored into
the memory in a contiguos manner.
+ "structure": it is a collection of logically related similar and disimilar type of
elements, gets stored into the memory collectively.

+ "union":

+ "class": it is a collection of logically related data members as well as member functions


--------------------------------------------------------------------------------
Q. What is an algorithm?
- an algorithm is a finite set of instructions written in a human-understandable language (e.g. English)
which, if followed, accomplishes a given task.

- "pseudocode": an algorithm written in a human-understandable language (e.g. English) with some
programming constructs which, if followed, accomplishes a given task; such an
algorithm is referred to as "pseudocode".

- "program": it is a set of instructions written in any programming language, given
to the machine to do a specific task.

e.g. Algorithm to compute the sum of array elements

Algorithm ArraySum(A, n)
{
    sum = 0;
    for( index = 1 ; index <= n ; index++ )
        sum += A[index];

    return sum;
}

//program - equivalent C function
int ArraySum(int *arr, int size)
{
    int sum = 0;
    int index;

    for( index = 0 ; index < size ; index++ )
        sum += arr[index];

    return sum;
}
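A hypothetical driver showing how the function above could be called (the array contents are just an
example):

#include <stdio.h>

int main(void)
{
    int data[] = { 10, 20, 30, 40, 50 };//example data (hypothetical)
    int total = ArraySum(data, 5);

    printf("sum = %d\n", total);//prints: sum = 150
    return 0;
}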

- an algorithm is a solution for a given problem
- "algorithm" = "solution"

- Problem: Sorting: it is the process of arranging the data elements in a collection/list either in
ascending order or in descending order.
A1: Selection Sort
A2: Bubble Sort
A3: Insertion Sort
A4: Quick Sort
A5: Merge Sort
A6: Bucket Sort
A7: Shell Sort
A8: Radix Sort
A9: Binary Sort
A10: Heap Sort
etc...

- One problem may have many solutions; when one problem has many solutions we need to select an
efficient solution/algo.
- to decide the efficiency of algos we need to do their analysis.
- analysis of an algo: it is the work of calculating how much "time" i.e. computer time
and "space" i.e. computer memory it needs to run to completion.
- there are two measures of an analysis of an algo:
1. time complexity
2. space complexity

1. "time complexity": time complexity of an algo is the amount of "time" i.e. computer time it needs
to run to completion.

2. "space complexity": space complexity of an algo is the amount of "space" i.e. computer memory
it needs to run to completion.

-------------------------------------------------------------------------------------
# Array Operations:

"Searching": to search/find/check a particular key ele in a given collection/list (either in an array or


linked list or in other data sturcture as well) of elements.

- Two searching algorithms can be applied on an array:


1. Linear Search
2. Binary Search

1. Linear Search:
- in this algo, we start comparing the key element with the first element in the collection/list, and we compare
the key element with each element in the collection/list sequentially, either till a match is found or at most till
the last element.

Algorithm LinearSearch(A, n, key)
{
    for( index = 1 ; index <= n ; index++ )
    {
        if( key == A[index] )
            return true;
    }

    return false;
}
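The same algorithm as a runnable C function (0-based indexing; a minimal sketch):

#include <stdbool.h>

//returns true if key exists in arr[0..size-1], else false
bool linear_search(const int *arr, int size, int key)
{
    int index;

    for( index = 0 ; index < size ; index++ )
    {
        if( key == arr[index] )
            return true;
    }
    return false;
}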
- In Linear Search:
* "best case": occurs if the key is found at the very first position; in this case only 1 comparison takes
place --> running time of the algo in this case = O(1).
best case : Big Omega(1)

* "worst case": occurs either if the key is found at the last position or the key does not exist; in this case "n"
no. of comparisons take place, whereas n is the size of the array
--> running time of the algo in this case = O(n).
worst case: Big Oh(n)

* "average case": occurs if the key is found at some in-between position; in this case about
n/2 comparisons take place --> running time of the algo in this case = O(n/2) = O(n).
average case: Big Theta(n)

# "Asymptotic Analysis": it is a "mathematical" way to calculate time complexity and space


complexity of an algorithm without implementing it in any programming lanaguage.

Asymptotic Notations:
1. Big Oh( O ): this notation can be used to represent the worst case time complexity of an algo.

2. Big Omega( Ω ): this notation can be used to represent the best case time complexity of an algo.

3. Big Theta( Θ ): this notation can be used to represent the average case time complexity of an
algo.

- if the running time of an algo has any additive/subtractive/multiplicative/divisive
constant, it can be neglected.
e.g.
O(n+2) => O(n)
O(n-3) => O(n)
O(2*n) => O(n)
O(n/2) => O(n)

2. Binary Search:
- this algo follows the "divide-and-conquer" strategy
- if we want to apply binary search on an array, the prerequisite is that the
array elements must be in sorted order

Algorithm BinarySearch(A, key, n)//whereas A is an array of size "n"
{
    left = 1
    right = n

    while( left <= right )//till subarray is valid
    {
        mid = (left + right) / 2;

        if( key == A[mid] )
            return true;

        if( key < A[mid] )
            right = mid - 1;
        else
            left = mid + 1;
    }

    return false;
}
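An equivalent C function (0-based indexing; assumes the array is sorted in ascending order; a minimal
sketch):

#include <stdbool.h>

//prerequisite: arr[0..size-1] is sorted in ascending order
bool binary_search(const int *arr, int size, int key)
{
    int left = 0;
    int right = size - 1;

    while( left <= right )//till the subarray is valid
    {
        int mid = left + (right - left) / 2;//avoids overflow of (left + right)

        if( key == arr[mid] )
            return true;

        if( key < arr[mid] )
            right = mid - 1;//search the left half
        else
            left = mid + 1;//search the right half
    }
    return false;
}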

- if the key is found in the very first iteration -- best case
- if the key is found at the last (leaf) level / the key does not exist -- worst case
- if the key is found somewhere in between -- average case

- In Binary Search:
* "best case": occurs if the key is found at the mid position in the very first iteration; in this case only 1
comparison takes place --> running time of the algo in this case = O(1).
best case : Big Omega(1)

* "worst case": running time of the algo in this case = O(log n).
worst case: Big Oh(log n)

* "average case": Big Theta(log n)

- while comparing two algos, we always decide the efficiency of an algo based on its worst case time
complexity.
worst case time complexity of linear search = O(n)
worst case time complexity of binary search = O(log n)

O(log n) < O(n)
=> the binary search algo is more efficient than linear search
----------------------------------------------------------------------------------

1. selection sort:
- in this algo, in the first iteration the first position gets selected and the element at the selected position
gets compared with the elements at all remaining positions; if we find any element smaller than the element at
the selected position then we swap them, so in the first iteration the smallest element gets
settled/fixed at the first position.
- in the second iteration, the second position gets selected and gets compared with the elements at all its next
positions; if we find any element smaller than the element at the selected position then we swap
them, so in the second iteration the second smallest element gets settled/fixed at the second position, and so
on.... in max (n-1) iterations all array elements get arranged in sorted order.

Algorithm SelectionSort(A, n)
{
    for( sel_pos = 1 ; sel_pos <= n-1 ; sel_pos++ )//O(n)
    {
        for( pos = sel_pos + 1 ; pos <= n ; pos++ )//O(n*n)
        {
            if( A[ sel_pos ] > A[ pos ] )
                SWAP(A[sel_pos], A[pos]);
        }
    }
}
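The same idea as a C function (0-based indexing, with the SWAP written out; a minimal sketch of the
variant described above):

void selection_sort(int *arr, int size)
{
    int sel_pos, pos, temp;

    for( sel_pos = 0 ; sel_pos < size - 1 ; sel_pos++ )
    {
        for( pos = sel_pos + 1 ; pos < size ; pos++ )
        {
            if( arr[sel_pos] > arr[pos] )//smaller element found after sel_pos
            {
                temp = arr[sel_pos];//swap them
                arr[sel_pos] = arr[pos];
                arr[pos] = temp;
            }
        }
    }
}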

2. Bubble Sort:
- in this algo, elements at two consecutive positions get compared with each other; if they are
in order then no swapping is needed, but if they are not in order then they get swapped.
- in the first iteration the largest element gets fixed/settled at its appropriate position i.e. the last position; in the
second iteration the next largest element gets settled at its appropriate position i.e. the second last position, and so
on.... in max (n-1) iterations all array elements get arranged in sorted order.

Algorithm BubbleSort(A, size)
{
    for( iteration = 1 ; iteration <= size-1 ; iteration++ )//O(n)
    {
        for( pos = 1 ; pos <= size - iteration ; pos++ )//O(n*n)
        {
            if( A[ pos ] > A[ pos+1 ] )//if elements are not in order
                SWAP(A[ pos ], A[ pos+1 ] );//swap them
        }
    }
}
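A C version of the same algorithm (0-based indexing; a minimal sketch):

void bubble_sort(int *arr, int size)
{
    int iteration, pos, temp;

    for( iteration = 1 ; iteration <= size - 1 ; iteration++ )
    {
        for( pos = 0 ; pos < size - iteration ; pos++ )
        {
            if( arr[pos] > arr[pos + 1] )//if consecutive elements are not in order
            {
                temp = arr[pos];//swap them
                arr[pos] = arr[pos + 1];
                arr[pos + 1] = temp;
            }
        }
    }
}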
----------------------------------------------------------------------------------------
3. Insertion Sort:

Algorithm InsertionSort(A, size)
{
    for( i = 1 ; i < size ; i++ )
    {
        key = A[i];
        j = i-1;

        while( j >= 0 && key < A[j] )
        {
            A[j+1] = A[j];//shift element towards its right by 1 position
            j--;
        }
        //insert key element at its appropriate position
        A[j+1] = key;
    }
}
- insertion sort algo is an efficient algo for smaller input size array.

4. Merge Sort:
- this algo follows the "divide-and-conquer" strategy
- we need to divide the big array into subarrays of the smallest size,
and after dividing, merge all the subarrays back into a single array in sorted order (see the sketch below).
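A compact C sketch of this divide-and-merge idea (recursive; the helper and variable names are my own):

#include <stdlib.h>
#include <string.h>

//merge two sorted halves arr[left..mid] and arr[mid+1..right]
static void merge(int *arr, int left, int mid, int right)
{
    int n = right - left + 1;
    int *temp = malloc(n * sizeof(int));
    int i = left, j = mid + 1, k = 0;

    while( i <= mid && j <= right )
        temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while( i <= mid )
        temp[k++] = arr[i++];
    while( j <= right )
        temp[k++] = arr[j++];

    memcpy(arr + left, temp, n * sizeof(int));//copy the merged result back
    free(temp);
}

//divide: keep splitting till the subarray size is 1, then merge back in sorted order
void merge_sort(int *arr, int left, int right)
{
    if( left < right )
    {
        int mid = (left + right) / 2;
        merge_sort(arr, left, mid);
        merge_sort(arr, mid + 1, right);
        merge(arr, left, mid, right);
    }
}

Calling merge_sort(arr, 0, n-1) sorts arr[0..n-1] in ascending order.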

5. Quick Sort:
- this algo follows the "divide-and-conquer" strategy
- "partitioning" is the main logic which gets applied on the array in this algo
- this algo can be applied only on the array data structure

- in the first pass, we apply "partitioning" to the whole array
- in partitioning, we need to select any one element as the "pivot" element,
i.e. we can select either the leftmost element, the rightmost element or the mid-position element as the
pivot element. after selecting the pivot element, we need to shift all elements which are
smaller than the pivot towards its left side and the elements which are greater than the pivot
towards its right side. by doing this, in the first pass the pivot element
gets fixed/settled at its appropriate position and the array gets divided into two
subarrays: whichever subarray is to the left of the pivot element is referred to as the "left
partition" and the subarray to the right side of the pivot is referred to as the "right
partition".
- then we need to apply partitioning on the left partition as well as on the right
partition, and we need to repeat the same logic, i.e. apply
partitioning on each subarray as well, till the size of a subarray is greater than
1 (see the sketch below).
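A minimal C sketch of partitioning and quick sort using the rightmost element as the pivot (the Lomuto
partition scheme, one of the pivot choices mentioned above; names are my own):

//places the pivot (arr[right]) at its final position and returns that position
static int partition(int *arr, int left, int right)
{
    int pivot = arr[right];
    int i = left - 1;//boundary of the "smaller than pivot" region
    int j, temp;

    for( j = left ; j < right ; j++ )
    {
        if( arr[j] < pivot )//move smaller elements to the left side
        {
            i++;
            temp = arr[i]; arr[i] = arr[j]; arr[j] = temp;
        }
    }
    temp = arr[i + 1]; arr[i + 1] = arr[right]; arr[right] = temp;//settle the pivot
    return i + 1;
}

//apply partitioning on each subarray till its size is greater than 1
void quick_sort(int *arr, int left, int right)
{
    if( left < right )
    {
        int p = partition(arr, left, right);
        quick_sort(arr, left, p - 1); //left partition
        quick_sort(arr, p + 1, right);//right partition
    }
}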

1. selection sort
2. bubble sort
3. insertion sort
4. quick sort
5. merge sort

- for a smaller input size array, "insertion sort" is an efficient algo
- for a larger input size array, "quick sort" is an efficient algo

# Limitations of the array data structure:
- an array is "static", i.e. the size of an array is fixed, so it cannot grow or shrink
during runtime.
e.g. int arr[10];
- addition & deletion operations on an array are not efficient and not convenient either,
and hence, to overcome these limitations of the array data structure, the "linked list"
data structure has been designed.

# Asymptotic Analysis:
- Asymptotic Notations:
1. Big Oh (O): this notation can be used to represent worst case time complexity
i.e. asymptotic upper bound.

2. Big Omega (Ω): this notation can be used to represent best case time complexity,
i.e. the asymptotic lower bound.

3. Big Theta (Θ): this notation can be used to represent average case time complexity,
i.e. the asymptotic tight bound.

------------------------------------------------------------------------------------
Q. What is a Linked List?
It is a collection/list of logically related similar type of elements in which,
- the address of the first element in the list gets stored in a pointer variable
referred to as "head", and
- each element contains the "actual data" (of any primitive/non-primitive
type) and the addr of its next (as well as prev) element in it.

- In a linked list an element is also referred to as a "node".


- Basically there are two types of linked list:
1. "Singly Linked List": in this type of linked list each node contains only the addr
of its next node.
- further, a singly linked list is divided into two types:
I. singly linear linked list
II. singly circular linked list

2. "Doubly Linked List": in this type of linked list each node contains the addr of
its next node as well as the addr of its prev node.
- further, a doubly linked list is divided into two types:
I. doubly linear linked list
II. doubly circular linked list

-----------------------------------------------------------------------------------

i. singly linear linked list
ii. singly circular linked list
iii. doubly linear linked list
iv. doubly circular linked list

i. singly linear linked list: it is a linked list in which,
- head always contains the address of the first node in the list, if the list is not empty
- each node has two parts:
1. data part: contains the actual data of any primitive/non-primitive type
2. pointer part (next): contains the address of its next node
- the last node points to NULL, i.e. the next part of the last node contains NULL.

struct node
{
int data;
struct node *next;//self-referential pointer
};

- to create a linked list we need to add nodes into it.
- we can add a node into the slll in three ways:
1. add node into the list at the last position
2. add node into the list at the first position
3. add node into the list at a specific position

- we can delete a node from the slll in three ways:
1. delete node from the list at the first position
2. delete node from the list at the last position
3. delete node from the list at a specific position
(a sketch of two of these operations follows below)
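A minimal C sketch of add_last() and delete_first() on a singly linear linked list (using the struct
node defined above; head is passed by address so the functions can update it; names are my own):

#include <stdlib.h>

//add a node at the last position of the list
void add_last(struct node **head, int data)
{
    struct node *new_node = malloc(sizeof(struct node));
    new_node->data = data;
    new_node->next = NULL;

    if( *head == NULL )//list is empty: the new node becomes the first node
    {
        *head = new_node;
        return;
    }

    struct node *trav = *head;
    while( trav->next != NULL )//traverse till the last node
        trav = trav->next;
    trav->next = new_node;//attach the new node after the last node
}

//delete the node at the first position of the list
void delete_first(struct node **head)
{
    if( *head == NULL )//list is empty: nothing to delete
        return;

    struct node *temp = *head;
    *head = (*head)->next;//head now points to the second node
    free(temp);
}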

* the limitation of a singly linear linked list is that we cannot revisit any node
in it; to overcome this limitation the singly circular linked list has been
designed.

ii. singly circular linked list: it is a linked list in which,
- head always contains the address of the first node in the list, if the list is not empty
- each node has two parts:
1. data part: contains the actual data of any primitive/non-primitive type
2. pointer part (next): contains the address of its next node
- the last node points to the first node, i.e. the next part of the last node contains the addr of the first node.

- limitations of a singly linked list (singly linear as well as singly circular):
1. we can traverse a singly linked list only in the forward direction
2. we cannot access the prev node of any node from it

and hence to overcome these limitations the doubly linked list has been designed.

iii. doubly linear linked list: it is a linked list in which,
- head always contains the address of the first node in the list, if the list is not empty
- each node has three parts:
1. data part: contains the actual data of any primitive/non-primitive type
2. pointer part (prev): contains the address of its prev node
3. pointer part (next): contains the address of its next node
- the next part of the last node and the prev part of the first node point to NULL.

typedef struct node
{
    struct node *prev;//4 bytes
    int data;//4 bytes
    struct node *next;//4 bytes
}node_t;
- we need to maintain the forward link as well as the backward link.

iv. doubly circular linked list: it is a linked list in which,
- head always contains the address of the first node in the list, if the list is not empty
- each node has three parts:
1. data part: contains the actual data of any primitive/non-primitive type
2. pointer part (prev): contains the address of its prev node
3. pointer part (next): contains the address of its next node
- the next part of the last node contains the addr of the first node, and the prev part of the first node contains
the addr of the last node.

Q. Difference between an array & a linked list
- an array is "static", whereas a linked list is "dynamic"
- array elements get stored in memory at contiguous locations, whereas linked list elements get stored
anywhere in memory and hence the programmer needs to maintain the links between them.
- array elements can be accessed by using random access, whereas linked list elements can be accessed by
using sequential access.
- in a linked list extra memory is required to store elements, as there is a need to maintain the links between them.
- (local) array elements get stored in the stack section, whereas linked list nodes get stored in the heap section.
- the searching operation is faster on an array, as we cannot apply binary search on a linked list
- quick sort cannot be applied on a linked list.

=======================================================================
# Stack: it is a collection/list of logically related similar type of elements in which
elements can be added as well as deleted from only one end, referred to as the "top" end.
- in this list the element which was inserted last can only be deleted first, so this list works in a "last in
first out" manner; hence it is also called a "LIFO" list/"FILO" list.
- we can perform three basic operations on the stack in O(1) time:
1. Push: to insert/add an element into the stack at the top end/position
2. Pop : to remove/delete the element from the stack which is at the top end/position
3. Peek: to get the value of the element which is at the top end/position.

- we can implement a stack in two ways:
1. static implementation of a stack by using an array
2. dynamic implementation of a stack by using a linked list

1. static implementation of stack by using array:

#define SIZE 5

typedef struct
{
int arr[SIZE];
int top;
}stack_t;
arr: int [] -> non-primitive data type
top: int -> primitive data type

- we can perform three basic operations on the stack in O(1) time:
1. Push: to insert/add an element into the stack at the top end/position:
step1: check the stack is not full
step2: increment the value of top by 1
step3: insert/add the element into the stack at the top position/end

2. Pop : to remove/delete the element from the stack which is at the top end/position:
step1: check the stack is not empty
step2: decrement the value of top by 1, i.e. we are deleting the element from the stack

3. Peek: to get the value of the element which is at the top end/position:
step1: check the stack is not empty
step2: return the value of the element which is at the top position/end
(without pushing or popping an element)
(a C sketch of these three operations follows below)
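A minimal C sketch of these three operations, assuming the stack_t and SIZE definitions above and that
top is initialized to -1:

#include <stdbool.h>
#include <stdio.h>

bool is_stack_full(stack_t *s)  { return s->top == SIZE - 1; }
bool is_stack_empty(stack_t *s) { return s->top == -1; }

//Push: add an element at the top end
void push(stack_t *s, int data)
{
    if( is_stack_full(s) )//step1: check the stack is not full
    {
        printf("stack overflow\n");
        return;
    }
    s->top++;             //step2: increment top by 1
    s->arr[s->top] = data;//step3: insert the element at the top position
}

//Pop: delete the element at the top end
void pop(stack_t *s)
{
    if( is_stack_empty(s) )//step1: check the stack is not empty
    {
        printf("stack underflow\n");
        return;
    }
    s->top--;//step2: decrement top by 1
}

//Peek: return the value of the top element without pushing or popping
int peek(stack_t *s)
{
    return s->arr[s->top];//assumes the caller has checked the stack is not empty
}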

2. dynamic implementation of a stack by using a linked list:

Push: add_last()
Pop: delete_last()

OR

Push: add_first()
Pop: delete_first()

# applications of the stack data structure:
- a stack is used by an OS to control the flow of execution of programs (the call stack).
- a stack gets used to implement "undo" & "redo" functionalities.
- a stack is used to implement advanced data structure algos like
"dfs traversal" (depth first search) in a tree & graph.
- a stack is used to implement algos like:
- conversion of an infix expression into its equivalent prefix and postfix expression
- postfix/prefix evaluation
- conversion of a prefix expression into its equivalent postfix expression
etc...
- a stack can be used in any application wherever a collection/list of elements should work in a last in
first out manner.

# what is an expression?
- an expression is a combination of operands and operators
- there are three types of expressions:
1. infix expression : a+b
2. prefix expression : +ab
3. postfix expression : ab+

infix expression: a*b/c*d*e+f/g-h

prefix expression => a * b / c * d * e + f / g - h
prefix expression => *ab / c * d * e + f / g - h
prefix expression => /*abc * d * e + f / g - h
prefix expression => */*abcd * e + f / g - h
prefix expression => **/*abcde + f / g - h
prefix expression => **/*abcde + /fg - h
prefix expression => +**/*abcde/fg - h
prefix expression => -+**/*abcde/fgh

prefix expression : -+**/*abcde/fgh

-------------------------------------------------

infix expression: a*b/c*d*e+f/g-h

postfix expression => a * b / c * d * e + f / g - h
postfix expression => ab* / c * d * e + f / g - h
postfix expression => ab*c/ * d * e + f / g - h
postfix expression => ab*c/d* * e + f / g - h
postfix expression => ab*c/d*e* + f / g - h
postfix expression => ab*c/d*e* + fg/ - h
postfix expression => ab*c/d*e*fg/+ - h
postfix expression => ab*c/d*e*fg/+h-

- the result obtained by paperwork, by the algo, and by converting the prefix result to postfix all match:
ab*c/d*e*fg/+h-

- while converting infix to postfix with a stack, an operator is popped from the stack as long as
priority( peek_element(&s) ) >= priority( in[i] )
e.g. topmost ele = '*', cur ele = '+' => 2 >= 1, so '*' gets popped to the output before '+' is pushed.
+ conversion of prefix to postfix:
step1: start scanning the prefix expression/string from right to left
if( cur ele is an operand )
{
    push it onto the stack
}
else//if the cur ele is an operator
{
    opr = cur ele
    - pop two elements from the stack, in such a way that:
    first popped ele = op1
    second popped ele = op2
    - concatenate these three into a string:
    string = op1 + op2 + opr
    - push the string onto the stack again
}
- repeat step1 till the end of the prefix string
- pop the final result, which is in the stack (see the C sketch below)
prefix expression: - + * * / * a b c d e / f g h
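A minimal C sketch of this prefix-to-postfix conversion using a small stack of strings (assumes
single-character operands and no spaces in the input; names and buffer sizes are my own):

#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define MAX 64

//a tiny stack of strings, only for this sketch
static char stack[MAX][MAX];
static int top = -1;

static void push_str(const char *s) { strcpy(stack[++top], s); }
static void pop_str(char *out)      { strcpy(out, stack[top--]); }

//converts a prefix expression (single-character operands) to postfix
void prefix_to_postfix(const char *prefix, char *postfix)
{
    int i;
    top = -1;
    for( i = (int)strlen(prefix) - 1 ; i >= 0 ; i-- )//scan right to left
    {
        char ch = prefix[i];
        if( isalnum((unsigned char)ch) )//operand: push it as a 1-char string
        {
            char s[2] = { ch, '\0' };
            push_str(s);
        }
        else//operator: pop op1, op2 and push "op1 op2 opr"
        {
            char op1[MAX], op2[MAX], res[MAX];
            pop_str(op1);
            pop_str(op2);
            snprintf(res, MAX, "%s%s%c", op1, op2, ch);
            push_str(res);
        }
    }
    pop_str(postfix);//the final result is the only string left on the stack
}

e.g. prefix_to_postfix("-+**/*abcde/fgh", buf) leaves "ab*c/d*e*fg/+h-" in buf.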
--------------------------------------------------------------------------------------
- postfix evaluation:

infix expression: "4 + 5 * 6 / 7 + 8 * 3"

postfix expression => 4 + 5 * 6 / 7 + 8 * 3
=> 4 + 56* / 7 + 8 * 3
=> 4 + 56*7/ + 8 * 3
=> 4 + 56*7/ + 83*
=> 456*7/+ + 83*
=> 456*7/+83*+

postfix expression: 4 5 6 * 7 / + 8 3 * +

step1: start scanning the postfix expression from left to right
step2: if( cur ele is an operand )
{
    push it onto the stack
}
else//if cur ele is an operator
{
    opr = cur ele
    - pop two elements from the stack, in such a way that
    first popped ele = op2
    second popped ele = op1
    - perform the "opr" operation between op1 & op2
    and push the result back onto the stack
}
step3: repeat step1 & step2 till the end of the postfix expression.
step4: pop the final result from the stack and return it to the calling function.

e.g. evaluating 456*7/+83*+ leaves a single value on the stack: 32 (see the C sketch below)
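A minimal C sketch of postfix evaluation for single-digit operands and the + - * / operators (names
are my own):

#include <ctype.h>

#define MAX 64

//evaluates a postfix expression with single-digit operands, e.g. "456*7/+83*+"
int postfix_eval(const char *postfix)
{
    int stack[MAX];
    int top = -1;
    int i;

    for( i = 0 ; postfix[i] != '\0' ; i++ )
    {
        char ch = postfix[i];
        if( isdigit((unsigned char)ch) )//operand: push its value
        {
            stack[++top] = ch - '0';
        }
        else//operator: pop op2 then op1, apply, push the result back
        {
            int op2 = stack[top--];
            int op1 = stack[top--];
            switch( ch )
            {
                case '+': stack[++top] = op1 + op2; break;
                case '-': stack[++top] = op1 - op2; break;
                case '*': stack[++top] = op1 * op2; break;
                case '/': stack[++top] = op1 / op2; break;
            }
        }
    }
    return stack[top];//final result, e.g. 32 for "456*7/+83*+"
}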
=======================================================================
# queue: it is a collection/list of logically related similar type of elements into
which elements can be added from one end, referred to as the "rear" end, whereas elements can be deleted from
the other end, referred to as the "front" end.
- in this list the element which was inserted first can be deleted first, so this list works in a "first in first out"
manner, and hence this list is also called a "FIFO"/"LILO" list.
- we can perform two basic operations on this queue in O(1) time:
1. enqueue: to insert/add/push an element into the queue from the rear end/position
2. dequeue: to delete/remove/pop the element from the queue which is at the front position/end

- there are different types of queue:
1. linear queue
2. circular queue
3. "priority queue": it is a type of queue in which elements can be added from the rear end randomly
(i.e. without checking the priority of an element), whereas the element having the highest priority can only be
deleted first.
- a priority queue can be implemented by using a linked list, and it can be implemented efficiently by
using a "binary heap".
4. double ended queue (deque): it is a type of queue in which elements can be added as well as deleted
from both the ends.
- we can perform four operations on a deque:
1. push_back() - add_last()
2. push_front() - add_first()
3. pop_back() - delete_last()
4. pop_front() - delete_first()
- a deque can be implemented by using a doubly circular linked list
- there are two types of deque:
1. input restricted deque: it is a type of deque in which we can insert elements only from one end,
whereas elements can be deleted from both the ends.

2. output restricted deque: it is a type of deque in which we can insert elements from both the
ends, but we can delete elements only from one end.

1. implementation of a linear queue by using an array:

#define SIZE 5

typedef struct
{
int arr[SIZE];
int front;
int rear;
}queue_t;

arr : int []
front : int
rear : int

is_queue_full : rear == SIZE-1
is_queue_empty: rear == -1 || front > rear

- we can perform two basic operations on this queue in O(1) time:
1. enqueue: to insert/add/push an element into the queue from the rear end/position:
step1: check the queue is not full
step2: increment the value of rear by 1
step3: insert the element into the queue at the rear position
step4: if( front == -1 )
           front = 0

2. dequeue: to delete/remove/pop the element from the queue which is at the front position/end:
step1: check the queue is not empty
step2: increment the value of front by 1, i.e. we are deleting the element from the queue

- limitation of a linear queue: vacant places cannot be reutilized.

2. implementation of a circular queue by using an array:

is_queue_full : front == (rear + 1) % SIZE
is_queue_empty: rear == -1 && front == rear

- we can perform two basic operations on this queue in O(1) time:
1. enqueue: to insert/add/push an element into the queue from the rear end/position:
step1: check the queue is not full
step2: increment the value of rear by 1 [ rear = (rear+1)%SIZE ]
step3: insert the element into the queue at the rear position
step4: if( front == -1 )
           front = 0
2. dequeue: to delete/remove/pop the element from the queue which is at the front position/end:
step1: check the queue is not empty
step2: if( front == rear )//we are deleting the last element
{
    front = rear = -1;
}
else
{
    increment the value of front by 1, i.e. we are deleting the element from the queue
    [ front = (front+1)%SIZE ]
}
(a C sketch of these two operations follows below)
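A minimal C sketch of these two operations on an array-based circular queue (mirrors the queue_t above;
front and rear are assumed to start at -1; names are my own):

#include <stdbool.h>
#include <stdio.h>

#define SIZE 5

typedef struct
{
    int arr[SIZE];
    int front;
    int rear;
}cqueue_t;

bool is_cqueue_full(cqueue_t *q)  { return q->front == (q->rear + 1) % SIZE; }
bool is_cqueue_empty(cqueue_t *q) { return q->rear == -1 && q->front == q->rear; }

//enqueue: add an element at the rear end
void enqueue(cqueue_t *q, int data)
{
    if( is_cqueue_full(q) )//step1: check the queue is not full
    {
        printf("queue is full\n");
        return;
    }
    q->rear = (q->rear + 1) % SIZE;//step2: increment rear circularly
    q->arr[q->rear] = data;        //step3: insert at the rear position
    if( q->front == -1 )           //step4: first element added, so front starts at 0
        q->front = 0;
}

//dequeue: delete the element at the front end
void dequeue(cqueue_t *q)
{
    if( is_cqueue_empty(q) )//step1: check the queue is not empty
    {
        printf("queue is empty\n");
        return;
    }
    if( q->front == q->rear )//deleting the last remaining element
        q->front = q->rear = -1;
    else
        q->front = (q->front + 1) % SIZE;//increment front circularly
}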

- checking the full condition front == (rear + 1) % SIZE for SIZE = 5:

for front=0, rear=4 => 0 == (4+1)%5 => 0 == 5%5 => 0 == 0 => full
for front=1, rear=0 => 1 == (0+1)%5 => 1 == 1%5 => 1 == 1 => full
for front=2, rear=1 => 2 == (1+1)%5 => 2 == 2%5 => 2 == 2 => full
for front=3, rear=2 => 3 == (2+1)%5 => 3 == 3%5 => 3 == 3 => full
for front=4, rear=3 => 4 == (3+1)%5 => 4 == 4%5 => 4 == 4 => full

- rear++ i.e. rear = rear + 1 becomes rear = (rear+1)%SIZE so that the index wraps around:

if rear=0 => rear = (0+1)%5 => 1%5 => 1
if rear=1 => rear = (1+1)%5 => 2%5 => 2
if rear=2 => rear = (2+1)%5 => 3%5 => 3
if rear=3 => rear = (3+1)%5 => 4%5 => 4
if rear=4 => rear = (4+1)%5 => 5%5 => 0

2. dynamic queue can be implemented by using a linked list:

enqueue: add_last()
dequeue: delete_first()

OR

enqueue: add_first()
dequeue: delete_last()

+ applications of queue:
- queue gets used to implement os data structures like job queue, ready queue, message queue etc...
- queue is used to implement os algorithms like priority cpu sched, FIFO page replacement algo,
fcfs cpu sched etc...
- queue is used to implement advanced data structure algo's like "bfs traversal" in tree & graph.
BFS: Breadth First Search
- queue can be used in any application/system program where collection/list of ele's should work in
first in first out manner.
=======================================================================
+ tree: it is a non-linear, advanced data structure which is a collection of a finite no. of logically
related similar type of elements in which,
- there is a first element which is a specially designated element referred to as the "root" element, and
- all the remaining elements are connected to the root element in a hierarchical manner, following a parent-child
relationship.

- in the tree data structure an element is also referred to as a "node".

* root node: the specially designated first node in the collection
* parent node/father
* child node/son
* grand parent/grand father
* grand child/grand son
* siblings/brothers: child nodes of the same parent
* ancestors: all the nodes which are on the path from the root node to that node are called its
ancestors.
- the root node is an ancestor of all nodes
* descendants: all the nodes which are accessible from that node are referred to as its descendants.
- all the nodes are descendants of the root node.
* degree of a node: the no. of child nodes that node has
* degree of a tree: the max degree of any node in the given tree
* leaf node/external node/terminal node: a node which does not have any child node,
OR a node having degree zero
* non-leaf/internal/non-terminal node: a node which has at least one child node, OR a node having non-zero
degree.

* level of a node = level of its parent node + 1,
if we assume level of the root node = 0

* level of a tree = max level of any node in the given tree
* depth of a tree = max level of the tree

- the tree data structure is "dynamic" in nature, i.e. any node can have any no. of child nodes and the tree
data structure can grow in a downward direction at any level.
- as in the tree data structure any node can have any no. of children and it can grow at any level,
operations like addition, deletion, searching, traversal etc... on it cannot be performed efficiently,
so restrictions can be applied on the tree data structure
to achieve efficiency, and hence there are different types of tree data structures.

* "binary tree": it is a tree in which each node can have max 2 no. of child nodes
i.e. each node can have either 0 OR 1 OR 2 no. of child nodes.
binary tree is a tree in which degree of each node is either 0 OR 1 OR 2.
- binary tree: it is tree which is a set of finite no. of ele's having three subsets
1. root node
2. left subtree(may be empty)
3. right subtree(may be empty)

* "binary search tree"/BST: it is a binary tree in which left child is always smaller than its parent
and right child is always greater or equal to its parent.

- there are three tree traversal methods (see the sketch below):
1. PREORDER (VLR) : visit the node first, then traverse the Left subtree, then the Right subtree
2. INORDER (LVR) : traverse the Left subtree, visit the node, then traverse the Right subtree
3. POSTORDER (LRV): traverse the Left subtree, then the Right subtree, then visit the node
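A minimal C sketch of a binary tree node and the three recursive traversals:

#include <stdio.h>

typedef struct treenode
{
    int data;
    struct treenode *left;
    struct treenode *right;
}treenode_t;

//PREORDER (VLR): visit, then left subtree, then right subtree
void preorder(treenode_t *node)
{
    if( node == NULL )
        return;
    printf("%d ", node->data);
    preorder(node->left);
    preorder(node->right);
}

//INORDER (LVR): left subtree, visit, then right subtree
//(for a binary search tree this prints the elements in ascending order)
void inorder(treenode_t *node)
{
    if( node == NULL )
        return;
    inorder(node->left);
    printf("%d ", node->data);
    inorder(node->right);
}

//POSTORDER (LRV): left subtree, right subtree, then visit
void postorder(treenode_t *node)
{
    if( node == NULL )
        return;
    postorder(node->left);
    postorder(node->right);
    printf("%d ", node->data);
}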
