
2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies

Indexed Array Algorithm for Sorting


Varinder Kumar Bansal
Department of Computer Science and Engineering,
Motilal Nehru National Institute of Technology, Allahabad, 211004.
varinder008@gmail.com

Rupesh Srivastava
Department of Computer Science and Engineering,
Motilal Nehru National Institute of Technology, Allahabad, 211004.
srirupesh@gmail.com

Pooja
Department of Computer Science and Engineering,
Shaheed Bhagat Singh College of Engineering and Technology
poojabansal90@gmail.com

Abstract—In this paper we present a sorting algorithm that uses the methodology of indexing in insertion sort to give much better performance than the existing sorting algorithms of the O(n²) class. We prove the correctness of the algorithm and give a detailed time complexity analysis and applications of the algorithm.

I. INTRODUCTION

There are mainly two classes of comparison-based sorting algorithms: (i) O(n²) and (ii) O(n log n). In general, O(n²) sorting algorithms run slower than O(n log n) algorithms, but their importance still cannot be ignored. Since O(n²) algorithms are non-recursive in nature, they require much less space in RAM. Another advantage is that they can be used in conjunction with O(n log n) algorithms, and the combined algorithm can be tailored to a suitable situation, for example by dividing a big array into smaller arrays. A further application of O(n²) sorting algorithms is in sorting small arrays: since O(n log n) sorting algorithms are recursive in nature, their use is not recommended for small arrays, where they perform poorly.

It is known that among O(n²) sorting algorithms, selection sort and insertion sort are the best performing on general data distributions. For some data distributions insertion sort performs better than selection sort, and vice versa. Other algorithms are suited only to very limited and particular data distributions.

In this paper we devise a sorting algorithm in which we store the indices of the numbers smaller than the pivot element. Since the number of iterations required is smaller, the overhead is reduced, thus decreasing the running time.

The paper is organized as follows: Section 2 gives the basic algorithm and an example to illustrate its working. Section 3 proves the correctness of the algorithm using the loop-invariant method, Section 4 gives a detailed time complexity analysis of the algorithm and the running time comparison graphs, Section 5 gives applications of IAS, Section 6 concludes and gives an overview of future work, and finally Section 7 gives important references.

II. THE ALGORITHM

The Indexed-Array Sorting algorithm (IAS) is described below:

• Input: An unsorted array A[ ] of size n.
• Output: A sorted array A[ ] of size n.

IAS ( A[ ], n )
1.  j ← 0, B[j] ← 0
2.  for i ← 1 to n
3.      do if (B[j] <= i)
4.          j ← 0
5.          B[j] ← i
6.          for k ← i to n
7.              do if (A[k] <= A[B[j]])
8.                  j ← j + 1
9.                  B[j] ← k
10.     else
11.         for k ← B[j+1] to n
12.             do if (A[k] <= A[B[j]])
13.                 j ← j + 1
14.                 B[j] ← k
15.     exchange A[i] ↔ A[B[j]]
16.     do if (B[j] > 1)
17.         j ← j – 1

The procedures in lines 7–9 and 12–14 represent the indexing procedure, while line 15 is simply the exchange procedure. To simplify the code we define the following:

• Lines 7–9: Insert k into the array B at position j+1 if A[k] <= A[B[j]]. This procedure saves indices of array A so that we do not have to start from the i-th element in the next iteration. We call it INSERT(k, B[ ], j+1).
• Lines 12–14: Insert k into the array B at position j+1 if A[k] <= A[B[j]]. We again call it INSERT(k, B[ ], j+1).
• Line 15: Exchange A[i] and A[B[j]], since B[j] holds the index of the minimum element found by the INSERT procedure. We call it EXCHANGE(A[ ], i, B[j]).
• Lines 16–17: Decrement j, since the index stored at the top of B has been used for an exchange and the number at that index no longer remains the same.
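The pseudocode above can be transliterated into 0-indexed Python as a sketch; the function name `ias_sort` and the bound n+1 on the index stack B (the first scan can push up to n+1 entries) are our additions, not from the paper:

```python
# A direct 0-indexed transliteration of the IAS pseudocode.
# B stores indices of successively smaller candidates in A[i..n-1];
# B[j] ends each scan holding the index of the minimum of that suffix.
def ias_sort(A):
    n = len(A)
    B = [0] * (n + 1)   # the index stack can grow to n+1 entries
    j = 0
    for i in range(n):
        if B[j] <= i:   # stored candidate is stale: rebuild the stack
            j = 0
            B[0] = i
            for k in range(i, n):          # lines 6-9: INSERT
                if A[k] <= A[B[j]]:
                    j += 1
                    B[j] = k
        else:           # reuse stored indices, scan only the tail
            for k in range(B[j + 1], n):   # lines 11-14: INSERT
                if A[k] <= A[B[j]]:
                    j += 1
                    B[j] = k
        A[i], A[B[j]] = A[B[j]], A[i]      # line 15: EXCHANGE
        if B[j] > 1:                        # lines 16-17: pop the used index
            j -= 1
    return A
```

On the paper's example input, `ias_sort([7, 8, 5, 3, 9, 15, 6, 1, 10, 2])` returns the fully sorted array.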

978-0-7695-3915-7/09 $26.00 © 2009 IEEE


DOI 10.1109/ACT.2009.18

Now the shorter form of the algorithm can be stated as follows:

IAS ( A[ ], n )
1.  j ← 0, B[j] ← 0
2.  for i ← 1 to n
3.      do if (B[j] <= i)
4.          j ← 0
5.          B[j] ← i
6.          for k ← i to n
7.              INSERT(k, B[ ], j+1)
8.      else
9.          for k ← B[j+1] to n
10.             INSERT(k, B[ ], j+1)
11.     EXCHANGE(A[ ], i, B[j])
12.     do if (B[j] > 1)
13.         j ← j – 1

The working of the algorithm can be understood by the following example. Consider the input array:

7 8 5 3 9 15 6 1 10 2

Array A and array B (used to save the indices of array A that will be used in sorting) during the iterations of the INSERT loop of line 7 are as follows:

A[]                      B[]
7 8 5 3 9 15 6 1 10 2    0
7 8 5 3 9 15 6 1 10 2    0 2
7 8 5 3 9 15 6 1 10 2    0 2 3
7 8 5 3 9 15 6 1 10 2    0 2 3 7

Now we show the array status after line 14 of the full algorithm for each further iteration of the outermost for loop. After each iteration, the element at the index stored at the end of array B and the element at index i are exchanged.

A[]                      B[]
1 8 5 3 9 15 6 7 10 2    0 2 3 9
1 2 5 3 9 15 6 7 10 8    0 2 3
1 2 3 5 9 15 6 7 10 8    3
1 2 3 5 9 15 6 7 10 8    4 6
1 2 3 5 6 15 9 7 10 8    5 6 7
1 2 3 5 6 7 9 15 10 8    5 6 9
1 2 3 5 6 7 8 15 10 9    7 8 9
1 2 3 5 6 7 8 9 10 15    8

Completely sorted data after all iterations:

1 2 3 5 6 7 8 9 10 15

The time required to sort is less than that of the other existing algorithms because:
1. The number of iterations is reduced by saving the indices of the elements smaller than the pivot element, thus reducing the overhead and the running time.
2. As the percentage of sorted data in the array increases, the number of memory writes required decreases compared to other existing O(n²) algorithms.

III. CORRECTNESS OF THE ALGORITHM

Theorem – The necessary and sufficient condition for an array A[ ] of size n to be sorted is that for any indices p and q, p ≤ q ↔ A[p] ≤ A[q], where p, q ∈ [1, n].

We now prove that after completion of the algorithm the resultant array satisfies this condition.

Proof: We use the loop-invariant method to prove the correctness of IAS( ), referring to its shorter form.

Loop invariant: Before and after each iteration, the subarray A[1 … (i – 1)] is sorted.

• Initialization:
We take an array B and initialize its first element with 0. Before the first iteration the subarray A[1 … (i – 1)] is empty, hence the loop invariant holds before the first loop iteration.

• Maintenance:
Starting with B[0] = 0, in lines 6–9 we store in B[j] the index of an element that is smaller than the element of A whose index is stored in B[j–1]. After storing the indices in array B we exchange A[i] ↔ A[B[j]]; this places the smallest element of the subarray A[i … n] at its required position, i.e. at A[i]. In the next iteration, to find the smallest element of the remaining array we start from B[j–1]. Hence the loop invariant is maintained for each iteration of the for loop.

• Termination:
It remains to note that when the outer loop terminates, i = n + 1, so A[1 … (i – 1)] is A[1 … n], and thus from the reasoning of the maintenance part we can infer that A[1 … n] is sorted. Hence the loop invariant also holds at termination of the algorithm.

Hence, we have proved that the algorithm IAS( ) correctly sorts the input array.

IV. COMPLEXITY OF THE ALGORITHM

We compute the time complexity of IAS( ) using cost–times analysis. Since the number of constants involved would be large, we skip some details of the analysis, but this does not affect its flow.

The complexity is given by the time function f(n).
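The loop-invariant argument of Section III can also be exercised mechanically. The sketch below is our 0-indexed Python transliteration of IAS (not code from the paper), instrumented to assert, before every outer iteration, that the already-placed prefix is sorted:

```python
# IAS transliteration with the Section III loop invariant checked:
# before each outer iteration, the prefix A[0..i-1] must be sorted.
def ias_sort_checked(A):
    n = len(A)
    B = [0] * (n + 1)   # index stack, sized for the worst-case scan
    j = 0
    for i in range(n):
        # loop-invariant check (0-indexed form of A[1..i-1] sorted)
        assert all(A[t] <= A[t + 1] for t in range(i - 1)), \
            f"invariant broken before iteration i={i}"
        if B[j] <= i:                       # rebuild the index stack
            j = 0
            B[0] = i
            for k in range(i, n):
                if A[k] <= A[B[j]]:
                    j += 1
                    B[j] = k
        else:                               # extend the saved stack
            for k in range(B[j + 1], n):
                if A[k] <= A[B[j]]:
                    j += 1
                    B[j] = k
        A[i], A[B[j]] = A[B[j]], A[i]       # EXCHANGE
        if B[j] > 1:                        # pop the used index
            j -= 1
    return A
```

Running it on the example array of Section II completes without any assertion firing, consistent with the maintenance argument above.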


IAS ( A[ ], n )                            cost    times
1.  j ← 0, B[j] ← 0                        c1      1
2.  for i ← 1 to n                         c2      n
3.      do if (B[j] <= i)                  c3      1
4.          j ← 0
5.          B[j] ← i
6.          for k ← i to n
7.              do if (A[k] <= A[B[j]])    c4      n
8.                  j ← j + 1
9.                  B[j] ← k
10.     else                               c5      1
11.         for k ← B[j+1] to n
12.             do if (A[k] <= A[B[j]])
13.                 j ← j + 1              c6      n
14.                 B[j] ← k
15.     exchange A[i] ↔ A[B[j]]            c7      1
16.     do if (B[j] > 1)                   c8      1
17.         j ← j – 1

We can aggregate the constants into single constants, without loss of generality, making the derivation more comprehensible. We then compute f(n) by multiplying the costs with the corresponding times. So we have

f(n) = an² + bn + c

for some constants a, b and c. Hence we see that f(n) varies as a quadratic function of n. Thus we have

f(n) = Ω(n) and f(n) = O(n²).

Thus we prove that the algorithm IAS( ) belongs to the O(n²) complexity class of sorting algorithms.

We now give the running time comparison chart, in which our algorithm is compared with selection, bubble and insertion sort. In the following graph we have taken time units on the Y-axis and the number of data elements on the X-axis.

[Figure: running-time comparison of IAS with selection, bubble and insertion sort; time units (Y-axis) vs. number of data elements (X-axis).]

It is clear from the above graph that IAS runs faster than the existing sorting algorithms of the same complexity class. Hence we can conclude that IAS( ) is the best performing amongst selection, bubble, insertion and IAS. One important thing to note about our algorithm is that its running time decreases rapidly as the percentage of sorted data in the input increases.

V. APPLICATIONS

The present algorithm can be used efficiently to sort small arrays, typically of size 10–50 elements. The algorithm can also be used efficiently in conjunction with the O(n log n) sorting algorithms.

Our algorithm can further be used efficiently in all the places where the percentage of sorted data among the input elements is high. The algorithm can also be used to find the percentage of sorted data in an input data set by checking the extent of indexing in array B.

In single-objective optimization there is only one global optimum, but in multiobjective optimization there is a set of solutions, called the Pareto-optimal (PO) set, which are considered equally important; all of them constitute global optimum solutions. An important goal in multiobjective optimization is to find a good set of non-dominated solutions that is both well-distributed and well-converged. Since we store the indices of the elements smaller than the pivot element, this stored data can be used to find a measure of the amount of domination and, in non-dominated sorting, to find the sets of dominating and dominated elements.

VI. CONCLUSION AND FUTURE WORK

In this paper we presented our algorithm, which gives a better running time than the existing sorting algorithms of the same complexity class.

The important thing about this algorithm is that its best-case running time is linear, and for generally distributed data sets it is the best candidate amongst the O(n²) sorting algorithms. As a matter of fact, we have not yet found the worst case of the algorithm.

Future work includes the study of the performance of the algorithm in conjunction with merge sort when applied in external sorting applications. We are also currently working on reducing the number of swaps; the challenge is to propose a method which introduces a minimal overhead.

VII. IMPORTANT REFERENCES

[1] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974.
[2] Donald E. Knuth. Sorting and Searching, volume 3 of The Art of Computer Programming. Addison-Wesley, 1981.
[3] E. Horowitz and S. Sahni. Fundamentals of Computer Algorithms. Computer Science Press, Woodland Hills, Cal., 1978.
[4] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press.
[5] Richard Neapolitan and Kumarss Naimipour. Foundations of Algorithms. Narosa, 2005.
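The "conjunction with O(n log n) algorithms" use suggested in Sections I, V and VI follows a familiar hybrid pattern: a recursive merge sort that hands subarrays below a cutoff to a quadratic sort. The sketch below uses insertion sort as the stand-in small-array sorter (the paper's IAS could be substituted); the names `hybrid_sort`, `insertion_sort` and the cutoff value 32 are our illustrative choices, not from the paper:

```python
# Hybrid sort: merge sort that switches to an O(n^2) sort for small
# subarrays, illustrating the "use in conjunction" idea of Section V.
CUTOFF = 32  # illustrative threshold for the small-array sorter

def insertion_sort(a, lo, hi):
    # sort a[lo:hi] in place by insertion
    for i in range(lo + 1, hi):
        v = a[i]
        k = i
        while k > lo and a[k - 1] > v:
            a[k] = a[k - 1]
            k -= 1
        a[k] = v

def hybrid_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)   # small subarray: quadratic sort
        return a
    mid = (lo + hi) // 2
    hybrid_sort(a, lo, mid)
    hybrid_sort(a, mid, hi)
    # merge the two sorted halves back into a[lo:hi]
    merged = []
    i, k = lo, mid
    while i < mid and k < hi:
        if a[i] <= a[k]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[k]); k += 1
    merged.extend(a[i:mid])
    merged.extend(a[k:hi])
    a[lo:hi] = merged
    return a
```

This is the same design used by standard library sorts that fall back to insertion sort on short runs; swapping the small-array sorter is a one-line change, which is what makes the combination studied in the future-work section easy to experiment with.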

