
Introduction to Divide and Conquer Algorithm:

An early two-subproblem divide-and-conquer algorithm that was specifically developed for
computers and properly analyzed is the merge sort algorithm, invented by John von Neumann in
1945.
The divide-and-conquer paradigm is often used to find an optimal solution to a given problem.
Its basic idea is to divide a given problem into two or more similar, but simpler,
subproblems. Problems of sufficient simplicity are solved directly. A divide-and-conquer
algorithm recursively breaks down a problem into two or more sub-problems of the
same or a related type, until these become simple enough to be solved directly. The solutions to the
sub-problems are then combined to give a solution to the original problem. Divide and conquer is
a powerful tool for solving conceptually difficult problems: all it requires is a way of breaking
the problem into sub-problems, of solving the trivial cases, and of combining the sub-problem
solutions into a solution to the original problem. The divide-and-conquer paradigm often helps in
the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast
multiplication method and to the quicksort and mergesort algorithms.

Fundamentals of the Given Paradigm


Karatsuba’s algorithm and Strassen’s algorithm are both examples of a “divide and conquer”
paradigm: break the problem into subproblems, solve each subproblem independently, and then
argue that the solutions to the subproblems can be stitched together into a solution to the original
problem. Merge-sort is probably the most iconic divide and conquer algorithm. Divide and
conquer is a basic but important algorithmic technique, always worth thinking about when you
have a new algorithmic problem to solve.

Notable Algorithms in the Given Paradigm:


The divide-and-conquer technique is the basis of efficient algorithms for many problems, such
as sorting (e.g. quicksort and merge sort) and multiplying large numbers.

Introduction to Karatsuba's Algorithm:


The Karatsuba algorithm is a fast multiplication algorithm. It was discovered by Anatoly
Karatsuba in 1960 and published in 1962. It reduces the multiplication of two n-digit numbers to
at most n^(log_2 3) ≈ n^1.585 single-digit multiplications in general (and exactly n^(log_2 3) when n is
a power of 2). It is therefore asymptotically faster than the traditional algorithm, which
requires n^2 single-digit products. For example, the Karatsuba algorithm requires 3^10 = 59,049
single-digit multiplications to multiply two 1024-digit numbers (n = 1024 = 2^10), whereas the
traditional algorithm requires (2^10)^2 = 1,048,576 (a speedup of 17.75 times).
The standard procedure for multiplying two n-digit numbers requires a number of
elementary operations proportional to n^2, i.e. O(n^2) in big-O notation. In 1960, Kolmogorov
organized a seminar on mathematical problems in cybernetics at Moscow State University,
where he stated his Omega(n^2) conjecture along with other problems in the complexity of
computation. Within a week, Karatsuba, then a 23-year-old student, found an algorithm (later
called "divide and conquer") that multiplies two n-digit numbers in O(n^(log_2 3)) elementary
steps, thus disproving the conjecture. Kolmogorov was very excited about the discovery; he
communicated it at the next meeting of the seminar, which was then terminated. Kolmogorov
gave lectures on the Karatsuba result at conferences all over the world and published the method
in 1962.

Pseudocode:

karatsuba(num1, num2)
    if (num1 < 10) or (num2 < 10)
        return num1 * num2

    *calculate the size of the numbers*
    m = max(size_base10(num1), size_base10(num2))
    m2 = m / 2

    *split the digit sequences about the middle*
    high1, low1 = split_at(num1, m2)
    high2, low2 = split_at(num2, m2)

    *three recursive calls on numbers of roughly half the size*
    z0 = karatsuba(low1, low2)
    z1 = karatsuba(low1 + high1, low2 + high2)
    z2 = karatsuba(high1, high2)

    return (z2 * 10^(2*m2)) + ((z1 - z2 - z0) * 10^m2) + z0
Implementation

#include <algorithm>
#include <cmath>
#include <iostream>

using namespace std;

// number of base-10 digits in value
int getLength(long long value) {
    int counter = 0;
    while (value != 0) {
        counter++;
        value /= 10;
    }
    return counter;
}

// Karatsuba multiplication; works for inputs whose product fits in long long.
long long multiply(long long x, long long y) {
    int xLength = getLength(x);
    int yLength = getLength(y);

    int N = max(xLength, yLength);

    // base case: fall back to ordinary multiplication
    if (N < 10)
        return x * y;

    // split position: half the digits, rounded up
    N = (N / 2) + (N % 2);

    long long multiplier = (long long)pow(10, N);

    long long b = x / multiplier;        // high half of x
    long long a = x - (b * multiplier);  // low half of x
    long long d = y / multiplier;        // high half of y
    long long c = y - (d * multiplier);  // low half of y

    long long z0 = multiply(a, c);
    long long z1 = multiply(a + b, c + d);
    long long z2 = multiply(b, d);

    return z0 + ((z1 - z0 - z2) * multiplier)
              + (z2 * (long long)pow(10, 2 * N));
}

int main()
{
    long long a, b;
    cout << "Enter Two Numbers : ";
    cin >> a >> b;

    cout << multiply(a, b) << endl;
    return 0;
}

Complexity
Replacing two of the multiplications with only one makes the program faster.
The question is how much faster.
Karatsuba improves the multiplication process by replacing the initial complexity of O(n^2) with
O(n^(log_2 3)), which is much faster for big n. The running time satisfies the recurrence

T(n) = 3*T(n/2) + O(n)

and O(n^2) grows much faster than O(n^(log_2 3)).

Master Theorem:

T(n) = 3*T(n/2) + O(n)

Here a = 3, b = 2, k = 1.

Since a > b^k,

T(n) = O(n^(log_b a)) = O(n^(log_2 3))

Substitution Method:

T(n) = 3T(n/2) + n

     = 9T(n/4) + 3n/2 + n

Expanding k levels:

T(n) = 3^k T(n/2^k) + n(1 + 3/2 + (3/2)^2 + ... + (3/2)^(k-1))

The recursion bottoms out when n/2^k = 1, i.e. k = log_2 n. Then

3^k = 3^(log_2 n) = n^(log_2 3)

and the geometric sum is O((3/2)^k) = O(n^(log_2 3) / n), so

T(n) = O(n^(log_2 3))

Use cases, partial uses, and where and for what purpose Karatsuba's algorithm is
being used:
The Karatsuba algorithm performs fast multiplication using divide and conquer. Given two
binary strings that represent the values of two integers, find the product of the two strings. For
example, if the first bit string is "1100" and the second bit string is "1010", the output should be 120.
For simplicity, let the lengths of the two strings be the same, say n.

A naive approach is to follow the process we study in school: one by one, take all bits of the
second number and multiply them by all bits of the first number, then add up all the partial
products. This algorithm takes O(n^2) time.
Writing X = Xh * 2^(n/2) + Xl and Y = Yh * 2^(n/2) + Yl, the product is

X*Y = Xh*Yh * 2^n + (Xh*Yl + Xl*Yh) * 2^(n/2) + Xl*Yl

If we take a look at the above formula, there are four multiplications of size n/2, so we
basically divided the problem of size n into four sub-problems of size n/2. But that doesn't help,
because the solution of the recurrence T(n) = 4T(n/2) + O(n) is O(n^2). The tricky part of this
algorithm is to change the middle two terms to some other form so that only one extra
multiplication suffices. The tricky expression for the middle two terms is

Xh*Yl + Xl*Yh = (Xh + Xl)(Yh + Yl) - Xh*Yh - Xl*Yl

Recommendation:
I would recommend the Karatsuba algorithm for multiplication. Karatsuba's algorithm is an
obvious choice for integer multiplication because it is very efficient, and that is not its only
advantage: it can also be used for polynomial multiplication.
Comparison with similar algorithms:
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on
the size of the numbers, different algorithms are used. There are many types of multiplication
algorithm (Karatsuba, long multiplication, polynomial methods). Karatsuba is much faster than
long multiplication for large inputs, but there are other algorithms which are better suited to
other settings.

Long Multiplication:
Multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted
results. Long multiplication is sometimes called the standard algorithm: it requires memorization
of the multiplication table for single digits.
Example:

      23958233
  ×       5830
  ————————————
      00000000  ( = 23,958,233 ×     0)
     71874699   ( = 23,958,233 ×    30)
   191665864    ( = 23,958,233 ×   800)
+ 119791165     ( = 23,958,233 × 5,000)
  ————————————
  139676498390  ( = 139,676,498,390 )
The pseudocode below describes the process of the above multiplication. It keeps only one row to
maintain the sum, which finally becomes the result. Note that the '+=' operator is used to denote
sum-to-existing-value and store (akin to languages such as Java and C) for
compactness.

multiply(a[1..p], b[1..q], base)
    product = [1..p+q]
    for b_i = 1 to q
        carry = 0
        for a_i = 1 to p
            product[a_i + b_i - 1] += carry + a[a_i] * b[b_i]
            carry = product[a_i + b_i - 1] / base
            product[a_i + b_i - 1] = product[a_i + b_i - 1] mod base
        product[b_i + p] = carry
    return product

Karatsuba Multiplication:
For systems that need to multiply numbers in the range of several thousand digits, such
as computer algebra systems and bignum libraries, long multiplication is too slow. These
systems may employ Karatsuba multiplication. The heart of Karatsuba's method lies in the
observation that two-digit multiplication can be done with only three rather than the four
multiplications classically required. This is an example of what is now called a divide-and-
conquer algorithm. Suppose we want to multiply two 2-digit base-m numbers, x1 m +
x2 and y1 m + y2:

1. compute x1 · y1, call the result F
2. compute x2 · y2, call the result G
3. compute (x1 + x2) · (y1 + y2), call the result H
4. compute H − F − G, call the result K; this number is equal to x1 · y2 + x2 · y1
5. compute F · m^2 + K · m + G.

To compute these three products of half-size numbers, we can employ the same trick again,
effectively using recursion. Once the products are computed, we need to add them together (steps
4 and 5), which takes about n operations.
Karatsuba multiplication has a time complexity of O(n^(log_2 3)) ≈ O(n^1.585), making this method
significantly faster than long multiplication; however, Karatsuba's multiplication is slower than
long multiplication for small values of n.
Conclusion:
Above, the performance of the Karatsuba multiplication algorithm is analysed. The
performance analysis can be carried out for different numbers. According to the results, the
number of multiplications rises with the length of the inputs owing to the processing of the
Karatsuba algorithm: the more the length increases, the more the total processing time rises. The
Karatsuba algorithm gives better results than the classical multiplication method in terms of
both the number of multiplications and the total processing time, because the number of
multiplications and the cost of performing them are lower than in the classical method.
