
UNIT 01 – INTRODUCTION TO PROBLEM SOLVING TECHNIQUES USING C

Introduction:
What is a Problem?
The perceived gap between the Current State and the Desired State.

A computer is a machine that is not only used to develop software; it is also used to solve
many real-world problems.
Computers cannot solve a problem by themselves. They solve a problem on the basis of the
step-by-step instructions given by us.
Thus, the success of a computer in solving a problem depends on how correctly and precisely
we –
• Identify (define) the problem,
• Design and develop an algorithm, and
• Implement the algorithm (solution) to develop a program using any programming
language.
Steps for Problem Solving
There are seven steps in solving a problem:
1. Identifying the Problem
2. Defining Goals
3. Brainstorming solutions
4. Assessing alternatives
5. Choosing the Solution
6. Active Execution
7. Evaluation
STEP 1: IDENTIFYING THE PROBLEM
Ask yourself what the problem is. There may be multiple issues within a single situation.
Make a list of these issues and define why each one is a problem to you.
STEP 2: DEFINING GOALS
Try to define your goals specifically, while making them as realistic and attainable as
possible. An example of a poor or broad goal is “I want to be happy.” First, define what
happiness means to you and what you can do to feel happier overall. Try to form your goals
in the sense of actions you can take to achieve the desired goal.

STEP 3: BRAINSTORMING
Take time to brainstorm possible ways to resolve the problem. Do not rush this process;
people often want to prevent and solve problems before they even appear. Write down all
ideas, even the ones that seem absurd or bizarre. Try to find 6-8 different alternatives when
resolving a particular problem.
STEP 4: ASSESSING ALTERNATIVES
For every alternative you formed in the previous step, weigh the positive effects and negative
consequences that each solution would bring. For every and any option, determine its
advantages and its risks.
STEP 5: CHOOSING THE SOLUTION
Carefully weigh all solutions. The best solution is not necessarily the option with the most
pros and/or the least cons. Think about what means more to you, which solution can highlight
the positive effects that matter the most to you, and which solution produces the mildest
consequences. When you decide on a solution, it is important to create a timeline of when
you intend to achieve your ultimate goal.
STEP 6: ACTIVE EXECUTION OF THE CHOSEN SOLUTION
Don’t worry about failure. In this phase, concentrate on the journey that will lead you to your
goal; don’t preoccupy yourself with potential problems.
STEP 7: EVALUATION
It’s time to evaluate your success. If you were successful, congratulations! If not, no worries.
Maybe you didn’t quite choose the right solution or the situation changed. You have
definitely learned something. Take this newfound knowledge, return to the beginning steps,
and try again!
The Role of Algorithms in Computing
Algorithms are fundamental to computing and play a crucial role in many aspects of the field.
Some of the key needs and applications of algorithms in computing include:
1. Data processing: Algorithms are used to process and analyze large amounts of data, such
as sorting and searching algorithms.
2. Problem solving: Algorithms are used to solve computational problems, such as
mathematical problems, optimization problems, and decision-making problems.
3. Computer graphics: Algorithms are used to create and process images and graphics, such
as image compression algorithms and computer-generated graphics algorithms.
4. Artificial Intelligence: Algorithms are used to develop intelligent systems, such as
machine learning algorithms, natural language processing algorithms, and computer vision
algorithms.
5. Database management: Algorithms are used to manage and organize large amounts of
data in databases, such as indexing algorithms and query optimization algorithms.
6. Network communication: Algorithms are used for efficient communication and data
transfer in networks, such as routing algorithms and error correction algorithms.
7. Operating systems: Algorithms are used in operating systems for tasks such as process
scheduling, memory management, and disk management.
In computing, algorithms are essential for solving complex problems and tasks, improving
efficiency and performance, and enabling new technologies and applications.

Algorithms as a technology
Algorithms are used to find the best possible way to solve a problem, based on data storage,
sorting and processing, and machine learning. In doing so, they improve the efficiency of a
program. Algorithms are used in all areas of computing.
Analysing algorithms
What is meant by Algorithm Analysis?
Algorithm analysis is an important part of computational complexity theory, which provides
theoretical estimation for the required resources of an algorithm to solve a specific
computational problem. Analysis of algorithms is the determination of the amount of time
and space resources required to execute it.
Types of Algorithm Analysis:
1. Best case
2. Worst case
3. Average case
Why Analysis of Algorithms is important?
• To predict the behavior of an algorithm without implementing it on a specific
computer.
• It is much more convenient to have simple measures for the efficiency of an algorithm
than to implement the algorithm and test the efficiency every time a certain parameter
in the underlying computer system changes.
• It is impossible to predict the exact behavior of an algorithm. There are too many
influencing factors.
• The analysis is thus only an approximation; it is not perfect.
• More importantly, by analyzing different algorithms, we can compare them to
determine the best one for our purpose.
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the running time of an algorithm.
We must know the case that causes a maximum number of operations to be executed. For
Linear Search, the worst case happens when the element to be searched (x) is not present in
the array. When x is not present, the search() function compares it with all the elements of
arr[] one by one. Therefore, the worst-case time complexity of the linear search would be
O(n).
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate the lower bound on the running time of an algorithm.
We must know the case that causes a minimum number of operations to be executed. In the
linear search problem, the best case occurs when x is present at the first location. The number
of operations in the best case is constant (not dependent on n). So time complexity in the best
case would be Ω(1).
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs and calculate the computing time for all
of the inputs. Sum all the calculated values and divide the sum by the total number of inputs.
We must know (or predict) the distribution of cases. For the linear search problem, let us
assume that all cases are uniformly distributed (including the case of x not being present in
the array). So we sum all the cases and divide the sum by (n+1).
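To make these three cases concrete, here is a minimal linear search sketch in C (written for illustration; the array contents and search keys are assumed, not taken from the notes):

#include <stdio.h>

/* returns the index of key in arr[], or -1 if it is not present */
int search(int arr[], int n, int key)
{
    for (int i = 0; i < n; i++)
    {
        if (arr[i] == key)      /* one comparison per element examined */
            return i;
    }
    return -1;
}

int main(void)
{
    int arr[] = {10, 20, 30, 40, 50};
    int n = 5;

    printf("%d\n", search(arr, n, 10));   /* best case: found at index 0, Ω(1) */
    printf("%d\n", search(arr, n, 99));   /* worst case: not present, O(n) */
    return 0;
}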

Designing algorithms
An important part of software engineering is designing algorithms and programs to solve
problems. The first step in creating a good algorithm is a robust design process. You need to
be able to break problems down and sequence the parts together in the right way to create a
solution.
You also need to be able to read and interpret algorithms to understand how they work and
evaluate their design. There are sometimes different algorithms that solve the same problem;
one algorithm may be more efficient than another at completing the given task.
1. Algorithmic thinking
An algorithm is a repeatable set of steps that can be used by anyone to perform a task and
solve part of a problem. The steps must have clear starting and end points and produce a
reliable output based on any appropriate input. An algorithm must be well defined and not
vague, with clearly communicated steps that could be understood and replicated by anyone,
not just the original designer.
In order to design an algorithm that successfully accomplishes a task, you have to think about
the problem that it is trying to solve in a certain way. This frame of mind is often
called algorithmic thinking. In order to produce repeatable and clearly defined steps, you
must be able to decompose the problem. You have to consider the inputs and outputs of your
algorithm, and the precise processes that are required to produce the desired output based on
incoming data.
Another part of algorithmic thinking is the ability to represent your design
and communicate your algorithm to other people. Design tools such
as pseudocode, flowcharts, or structured English make algorithms easy to interpret, whilst
also not requiring an understanding of any particular programming language (this is called
'language agnostic'). You can also communicate an algorithm in actual program code, but it is
important that the code be well documented. You can use both comments and formal
technical documentation, to help anyone who is not familiar with the particular language to
understand it.
All of these skills will also be of use to you when you read and interpret algorithms created
by other people. You need to be able to read pseudocode, structured English, flowcharts, and
well-documented program code so that you can understand algorithms that are presented to
you.

One way to start the process of designing an algorithm to solve a problem is to think about
the data available for the algorithm to take in (input), how that data needs to be manipulated
(process), and the desired resolution to the problem (output).
Taking a data-driven view of algorithm design will help you answer some key questions
about the way that you want to solve the problem. When considering a problem in this way, it
is often best to consider the input and output first, before looking at the process in the middle.
Input
• What type of data will the algorithm start with?
• What data is available to be used?
• Does the data need to be input by a user? Or could it be read from a file, or stored in
an array or list?
Answering these questions will help you identify the inputs for the algorithm that you are
designing. It will also help you define the first parts of the process that the algorithm will
need to follow. For example, if the problem requires user input, you know that the first step
of the algorithm is going to be prompting a user for input.
Pay particular attention to the data types of the inputs, as you may need to use
some casting to change the data types.
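As an example, here is a small sketch in C of reading input as text and converting it to a number (the prompt and variable names are illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buffer[32];

    printf("Enter your age: ");
    if (fgets(buffer, sizeof buffer, stdin) != NULL)
    {
        int age = atoi(buffer);        /* convert the text input to an int */
        double years = (double)age;    /* cast the int to a double if a decimal type is needed */
        printf("In ten years you will be %.0f\n", years + 10);
    }
    return 0;
}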
Output
• What format or data type is required for the output?
• Does the output need to be displayed to the user? Or does it need to be saved in a file?
Understanding the format and data type of an output is an important step in defining the end
point of your solution. You need to understand how you can take any given input and arrive
at that desired end point.
Process
• In what order do you need to do things?
• What conditions or other forms of selection are required?
• Is there a need for iteration?
• What are the start and end points of the algorithm?
In order to design the processes required for a potential solution, you need to take both the
inputs and outputs and plot a course between them. Consider how you will use the core
concepts of sequence, selection, and iteration in your solution. Pay particular attention to the
order in which you perform your processes, as certain operations will need to be performed
first (e.g. casting input to another data type), while some others will need to be performed last
(e.g. formatting an output to display to a user).
You could create many algorithms to solve the same problem — in programming, there are
always multiple ways of doing something. It can be very difficult to decide which algorithm
is best for a given problem. One way of comparing algorithms is by comparing
how efficiently they solve the problem.
When considering the efficiency of an algorithm, you need to think about two things: time
and space (memory). Every computer has a certain amount of resources available to it, such
as CPU power and memory. More efficient algorithms will use less of these resources.
There are a few factors that impact how efficient an algorithm is, described below.
• Number of iterations
• Number of comparisons

The number of iterations


One of the ways of improving the efficiency of an algorithm is to ensure that the algorithm
loops no more than is necessary. The number of iterations in an algorithm will have a direct
impact on the time it takes to complete the algorithm. Less efficient algorithms will use
count-controlled loops where a condition-controlled loop could be used, leading to
unnecessary iterations over code blocks that are no longer useful.
A good example of this is in the bubble sort algorithm used to sort a list of items into
ascending or descending order. One version of this algorithm continuously loops over the
items in a list a fixed number of times, regardless of whether the items are already in the
correct order. A more efficient version would only loop until there is no change in the list (i.e.
it is sorted), which would avoid unnecessary repetition.
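This idea can be sketched in C as follows (the array contents are illustrative); the swapped flag ends the outer loop as soon as a complete pass makes no changes:

#include <stdio.h>

/* bubble sort that stops as soon as the list is already sorted */
void bubble_sort(int arr[], int n)
{
    int swapped = 1;
    while (swapped)
    {
        swapped = 0;
        for (int i = 0; i < n - 1; i++)
        {
            if (arr[i] > arr[i + 1])
            {
                int temp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = temp;
                swapped = 1;     /* a swap happened, so another pass is needed */
            }
        }
    }
}

int main(void)
{
    int a[] = {5, 1, 4, 2, 8};
    bubble_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);
    return 0;
}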
The number of comparisons
Another way of improving the efficiency of an algorithm is to choose the more efficient form
of selection so that comparisons are only made when strictly necessary. A comparison is any
time two pieces of data are compared. Each of these comparisons requires a lot of work from
a computer, so limiting the number of comparisons will decrease the time it takes to complete
the algorithm, and make it more efficient.
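A small illustration in C (the marks and grade boundaries are made up): with else if, later comparisons are skipped as soon as one condition matches, whereas separate if statements would always perform every comparison.

#include <stdio.h>

int main(void)
{
    int mark = 85;

    /* chained selection: comparisons stop at the first condition that matches */
    if (mark >= 80)
        printf("Grade A\n");
    else if (mark >= 60)
        printf("Grade B\n");
    else
        printf("Grade C\n");

    return 0;
}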
An example comparison
To show you the difference between an inefficient and an efficient solution to a problem, here
are two algorithms that both complete the same task. The task in question is adding up all the
numbers up to a given number n, commonly called 'sum of n'.
INEFFICIENT
total = 0
FOR i = 1 TO n
total = total + i
NEXT i
PRINT total
EFFICIENT
total = n * (n + 1) / 2
PRINT total
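The same comparison written as a C sketch (n is hard-coded here purely for illustration):

#include <stdio.h>

int main(void)
{
    int n = 100;
    int total = 0;

    /* inefficient: the loop runs n times */
    for (int i = 1; i <= n; i++)
        total = total + i;
    printf("%d\n", total);

    /* efficient: a single calculation using the formula n * (n + 1) / 2 */
    total = n * (n + 1) / 2;
    printf("%d\n", total);

    return 0;
}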
Tracing the Algorithm
One way to test an algorithm, whether you have written it yourself or it has been given to you
by someone else, is to trace the steps involved. Tracing involves following the steps of an
algorithm and keeping track of the values stored in variables and the output as you do so.
This process allows you to spot logic errors in your own algorithms, and also helps you
understand any algorithm that has been presented to you.
A common tool used for tracing is the appropriately named trace table. These tables give
you a format to follow when tracing an algorithm. Columns in a trace table are used to track
data in the algorithm as you trace the steps. Typically, the data recorded is the values stored
in variables and the output of the algorithm. As you go through the algorithm, when the value
of a variable changes or data is output, you create a new row and write the new value in the
appropriate column.
Take the following algorithm written in pseudocode as an example:
count = 3
PRINT(count)
FOR i = 1 TO count
    PRINT(count - i)
NEXT i
The trace table that you would use for this algorithm would look like this:
count | i | OUTPUT
  3   |   |   3
  3   | 1 |   2
  3   | 2 |   1
  3   | 3 |   0

There are two columns for variables, and one for any output generated by the program.
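For reference, a direct C translation of the traced pseudocode (written here for illustration) produces the same output as the trace: 3, 2, 1, 0.

#include <stdio.h>

int main(void)
{
    int count = 3;
    printf("%d\n", count);
    for (int i = 1; i <= count; i++)
        printf("%d\n", count - i);
    return 0;
}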
Growth of Functions
Growth functions are used to estimate the number of steps an algorithm uses as its input
grows. The largest number of steps needed to solve the given problem using an algorithm on
input of a specified size is the worst-case complexity.
Asymptotic notations
Asymptotic notations are mathematical notations used to analyze an algorithm’s
running time by describing its behavior as the input size grows.
• This is also referred to as an algorithm’s growth rate.
• Exact running times depend on the machine, so you can’t compare two algorithms head to head by timing them.
• Instead, you compare space and time complexity using asymptotic analysis.
• It compares two algorithms based on changes in their performance as the input size is increased or decreased.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it represents the upper and
the lower bound of the running time of an algorithm, it is used for analyzing the average-
case complexity of an algorithm.
Theta (average case): you add the running times for each possible input and take the
average.
Let g and f be functions from the set of natural numbers to itself. The function f is said to
be Θ(g), if there are constants c1, c2 > 0 and a natural number n0 such that c1* g(n) ≤ f(n) ≤
c2 * g(n) for all n ≥ n0

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤
c2 * g(n) for all n ≥ n0}
Note: Θ(g) is a set
The above expression can be described as if f(n) is theta of g(n), then the value f(n) is always
between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also
requires that f(n) must be non-negative for values of n greater than n0.
The execution time serves as both a lower and an upper bound on the algorithm’s time
complexity.
It acts as both the greatest and the least bound for a given input size.
A simple way to get the Theta notation of an expression is to drop low-order terms and ignore
leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3); dropping
lower-order terms is always fine because there will always be a number n0 after which n^3 has
higher values than n^2, irrespective of the constants involved. For a given function g(n),
Θ(g(n)) denotes the following set of functions.
Examples:
{ 100, log(2000), 10^4 } belongs to Θ(1)
{ (n/4), (2n+3), (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
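As a quick worked illustration: take f(n) = 3n + 2 and g(n) = n. Choosing c1 = 3, c2 = 4 and n0 = 2 gives 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2, so 3n + 2 = Θ(n).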

2. Big-O Notation (O-notation):


Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it
gives the worst-case complexity of an algorithm.
• It is the most widely used notation for asymptotic analysis.
• It specifies the upper bound of a function.
• It gives the maximum time required by an algorithm, i.e., the worst-case time complexity.
• It returns the highest possible output value (big-O) for a given input.
• Big-O (worst case) is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.

If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive
constant c and n0 such that 0 ≤ f(n) ≤ c*g(n) for all n ≥ n0.
The execution time serves as an upper bound on the algorithm’s time complexity.

Mathematical Representation of Big-O Notation:


O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥
n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and
quadratic time in the worst case. We can safely say that the time complexity of Insertion
Sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two
statements for the best and worst cases:
• The worst-case time complexity of Insertion Sort is Θ(n^2).
• The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of
an algorithm. Many times we easily find an upper bound by simply looking at the algorithm.
Examples:
{ 100, log(2000), 10^4 } belongs to O(1)
U { (n/4), (2n+3), (n/100 + log(n)) } belongs to O(n)
U { (n^2+n), (2n^2), (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O provides exact
or upper bounds.
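As a quick worked illustration: take f(n) = n^2 + 10n. Choosing c = 2 and n0 = 10 gives n^2 + 10n ≤ 2n^2 for all n ≥ 10, so n^2 + 10n is O(n^2).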
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s time complexity.
It is defined as the condition that allows an algorithm to complete statement execution
in the shortest amount of time.
Let g and f be functions from the set of natural numbers to itself. The function f is said to
be Ω(g), if there is a constant c > 0 and a natural number n0 such that c*g(n) ≤ f(n) for all n ≥
n0.

Mathematical Representation of Omega notation:


Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥
n0 }
Let us consider the same Insertion sort example here. The time complexity of Insertion Sort
can be written as Ω(n), but it is not very useful information about insertion sort, as we are
generally interested in worst-case and sometimes in the average case.
Examples:
{ (n^2+n), (2n^2), (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4), (2n+3), (n/100 + log(n)) } belongs to Ω(n)
U { 100, log(2000), 10^4 } belongs to Ω(1)

Note: Here, U represents union; we can write it in this manner because Ω provides exact
or lower bounds.
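As a quick worked illustration: for f(n) = n^2 + 10n, choosing c = 1 and n0 = 1 gives n^2 ≤ n^2 + 10n for all n ≥ 1, so n^2 + 10n is Ω(n^2); combined with the Big-O result above, it is also Θ(n^2).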
Order of Algorithms
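In general, common orders of algorithms, ranked from the slowest-growing (most efficient) to the fastest-growing, are:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)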

Fundamental Algorithms: Exchanging the values of two variables


Swap two numbers without using the third variable
This program swaps/exchanges two numbers without using a third variable, as shown
below:
Example: Suppose, there are two numbers 25 and 23.
Let X= 25 (First number), Y= 23 (second number)
Swapping Logic:
X = X + Y = 25 + 23 = 48
Y = X - Y = 48 - 23 = 25
X = X - Y = 48 - 25 = 23, and the numbers are swapped as X = 23 and Y = 25.
Algorithm
• STEP 1: START
• STEP 2: ENTER x, y
• STEP 3: PRINT x, y
• STEP 4: x = x + y
• STEP 5: y= x - y
• STEP 6: x =x - y
• STEP 7: PRINT x, y
• STEP 8: END
Program
#include <stdio.h>

int main()
{
    int x, y;
    printf("Enter the value of x and y?");
    scanf("%d %d", &x, &y);
    printf("before swapping numbers: %d %d\n", x, y);

    /* swapping */
    x = x + y;
    y = x - y;
    x = x - y;
    printf("After swapping: %d %d", x, y);
    return 0;
}

Output:
Enter the value of x and y? 13 22
before swapping numbers: 13 22
After swapping: 22 13
Swapping Two Numbers Using Third Variable
Algorithm
Let’s find out how we should draw a solution step-by-step −
START
Var1, Var2, Temp
Step 1 → Copy value of Var1 to Temp
Step 2 → Copy value of Var2 to Var1
Step 3 → Copy value of Temp to Var2
STOP
Pseudo code
From the above algorithm, we can draw pseudo code for this program −
procedure swap(a, b)
set temp to 0
temp ← a
a ← b // a holds value of b
b ← temp // b holds value of a stored in temp
end procedure
Program
#include <stdio.h>

int main()
{
    int a, b, temp;
    a = 11;
    b = 99;
    printf("Values before swapping - \n a = %d, b = %d \n\n", a, b);
    temp = a;
    a = b;
    b = temp;
    printf("Values after swapping - \n a = %d, b = %d \n", a, b);
    return 0;
}
Output
Values before swapping - a = 11, b = 99
Values after swapping - a = 99, b = 11
Summation of digits of a number
Algorithm:
• Step 1: Get a number from the user
• Step 2: Get the modulus/remainder of the number
• Step 3: Add the remainder to the sum
• Step 4: Divide the number by 10
• Step 5: Repeat steps 2 to 4 while the number is greater than 0.

Program
#include <stdio.h>

int main()
{
    int n, sum = 0, m;
    printf("Enter a number:");
    scanf("%d", &n);
    while (n > 0)
    {
        m = n % 10;
        sum = sum + m;
        n = n / 10;
    }
    printf("Sum is=%d", sum);
    return 0;
}
Output:
Enter a number:654
Sum is=15
Enter a number:123
Sum is=6
Factorial Computation
Algorithm:
Step 1: Start
Step 2: Declare Variable n, fact, i
Step 3: Read number from User
Step 4: Initialize Variable fact=1 and i=1
Step 5: Repeat steps 5.1 and 5.2 while i <= number
    5.1: fact = fact * i
    5.2: i = i + 1
Step 6: Print fact
Step 7: Stop
Program
#include <stdio.h>

int main()
{
    int i, fact = 1, number;
    printf("Enter a number: ");
    scanf("%d", &number);
    for (i = 1; i <= number; i++)
    {
        fact = fact * i;
    }
    printf("Factorial of %d is: %d", number, fact);
    return 0;
}

Output:
Enter a number: 5
Factorial of 5 is: 120
Generating of the Fibonacci sequence
Algorithm:
Step 1: Start
Step 2: Declare variables i, a,b , show
Step 3: Initialize the variables, a=0, b=1, and show =0
Step 4: Enter the number of terms of Fibonacci series to be printed
Step 5: Print first two terms of series
Step 6: Use loop for the following steps
6.1: show=a+b
6.2: a=b
6.3: b=show
6.4: increase value of i each time by 1
6.5: print the value of show
Step 7: End
Program:
#include <stdio.h>

int main()
{
    int a, b, c, i, n;
    n = 4;
    a = b = 1;
    printf("%d %d ", a, b);
    for (i = 1; i <= n - 2; i++)
    {
        c = a + b;
        printf("%d ", c);
        a = b;
        b = c;
    }
    return 0;
}
Output
1 1 2 3
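The algorithm above reads the number of terms and starts the series from 0 and 1, while this program fixes n = 4 and starts from 1 and 1. A sketch of a variant that follows the algorithm more closely (assuming the series begins 0, 1 and that n ≥ 2) would be:

#include <stdio.h>

int main(void)
{
    int i, n, a = 0, b = 1, show;
    printf("Enter the number of terms: ");
    scanf("%d", &n);
    printf("%d %d ", a, b);          /* print the first two terms */
    for (i = 3; i <= n; i++)         /* generate the remaining n - 2 terms */
    {
        show = a + b;
        printf("%d ", show);
        a = b;
        b = show;
    }
    return 0;
}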
Reversing the digits of an integer
#include <stdio.h>

int main()
{
    int n, reverse = 0, rem;
    printf("Enter a number: ");
    scanf("%d", &n);
    while (n != 0)
    {
        rem = n % 10;
        reverse = reverse * 10 + rem;
        n /= 10;
    }
    printf("Reversed Number: %d", reverse);
    return 0;
}
Output:
Enter a number: 123
Reversed Number: 321
Character to number conversion
The program below reads an integer and prints each of its digits as a word, from left to right
(for example, 4321 is printed as "four three two one"). Note that the reversal step used here
drops any trailing zeros in the input.
#include <stdio.h>
#include <stdlib.h>

int main()
{
    long int n, sum = 0, r;
    system("cls");
    printf("enter the number=");
    scanf("%ld", &n);
    while (n > 0)
    {
        r = n % 10;
        sum = sum * 10 + r;
        n = n / 10;
    }
    n = sum;
    while (n > 0)
    {
        r = n % 10;
        switch (r)
        {
        case 1:
            printf("one ");
            break;
        case 2:
            printf("two ");
            break;
        case 3:
            printf("three ");
            break;
        case 4:
            printf("four ");
            break;
        case 5:
            printf("five ");
            break;
        case 6:
            printf("six ");
            break;
        case 7:
            printf("seven ");
            break;
        case 8:
            printf("eight ");
            break;
        case 9:
            printf("nine ");
            break;
        case 0:
            printf("zero ");
            break;
        default:
            printf("tttt");
            break;
        }
        n = n / 10;
    }
    return 0;
}
Output:
enter the number=4321
four three two one
