
Received: 14 June 2018    Revised: 27 August 2018    Accepted: 4 September 2018    Published on: 4 October 2018

DOI: 10.1002/spy2.46

ORIGINAL ARTICLE

A comparative study and analysis of some pseudorandom number generator algorithms

Shobhit Sinha1, SK Hafizul Islam1, Mohammad S. Obaidat2,3

1 Department of Computer Science and Engineering, Indian Institute of Information Technology Kalyani, West Bengal, India
2 King Abdullah II School of Information Technology, University of Jordan, Amman, Jordan
3 School of Engineering, Nazarbayev University, Astana, Kazakhstan

Correspondence
Mohammad S. Obaidat, King Abdullah II School of Information Technology, University of Jordan, Amman, Jordan; and School of Engineering, Nazarbayev University, Astana, Kazakhstan
Email: msobaidat@gmail.com

ABSTRACT
In this article, we have investigated the statistical nature of popular pseudorandom number generators (PRNGs) present in the literature and have analyzed their performance against the battery of tests prescribed in NIST SP800-22rev1a. The different tests performed in this article provide insight into the PRNGs and reveal whether they are statistically random, which is the first criterion for being cryptographically secure. In our study, we have considered the following PRNGs: (a) Linear Congruential Generator, (b) Wichmann-Hill algorithm, (c) Well Equidistributed Long-period Linear (WELL) generator, and (d) MIXMAX.

KEYWORDS
LCG, MIXMAX, NIST test suite, PRNG, WELL, Wichmann-Hill

1 INTRODUCTION

“Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin”
—John von Neumann.1

Randomness is an essential criterion in many fields of computer science, including simulations, cryptography, statistics, and randomized algorithms. In cryptography, randomness is generally used for the generation of keys, initialization vectors (IVs), or nonces for an encryption algorithm. The key is an integral part of the security, as modern cryptographic algorithms are built on the principle that the security of a system depends entirely on the key and not on the system's design.2 This means that all modern security algorithms and protocols have their cryptographic strength expressed in terms of the size of the key that an attacker needs to guess before breaching the security of the system.
This expression of strength implicitly assumes that the attacker has no knowledge of any bits of the original key used. The effective strength of an algorithm is reduced when better attacks against it are found and more bits of the key can be derived from looking at a portion of the output data. With advances in computational speed and the advent of quantum computing, there is always the looming danger of a partial/reduced-round brute-force attack on such systems if a portion of the key can be guessed.
How do we create this "randomness"? Let us have an informal discussion. Assume that there is a function f (x) that produces a random number for a given input x, following a series of operations on x and involving some fixed, well-defined constants. The function has a remarkable property: it produces a different output every time we feed it the same input. Intuitively, and to the best of our knowledge, such a function does not exist. This is because f (x) is completely deterministic in nature. If x and the definition of f (x) are known, then for a given input x or a series of inputs (that may include time as a parameter), f (x) will always give the same output. Hence, algorithms, being deterministic sets of instructions, cannot produce true randomness. This is beautifully summarized by John von Neumann1 in his famous quote, which is mentioned at the start of this paper.


TABLE 1 Description of different pseudorandomness tests in NIST suite

No. Test n Description

1 T1 100 Checks the proportion of 1's and 0's in the whole sequence generated by the PRNG
2 T2 100 Similar to the "frequency test", but performed within every block of uniform size
3 T3 100 Checks the uninterrupted length of identical bits in a sequence generated by the PRNG
4 T4 100 Checks the deviation of the distribution of long runs of 1's in a sequence generated by the PRNG
5 T5 38 912 Checks the deviation of the rank distribution of a sequence generated by the PRNG from that of a truly random sequence
6 T6 1000 Checks for the presence of periodic features across the entire sequence
7 T7 100 Checks the occurrences of non-periodic and fixed templates in a sequence generated by the PRNG
8 T8 10^6 Checks the occurrences of m-bit runs of 1's, where m is the window size
9 T9 387 840 Checks the compressibility of a sequence generated by the PRNG
10 T10 10^6 Checks the length of the smallest LFSR required to generate the given sequence generated by the PRNG
11 T11 100 Checks the frequency of all possible overlapping m-bit patterns across the entire sequence generated by the PRNG
12 T12 100 Compares the frequency of overlapping blocks of two adjacent templates of length m bits and m + 1 bits
13 T13 100 Checks for too many 0's or 1's at the beginning of the sequence generated by the PRNG
14 T14 10^6 Checks the deviation from the distribution of the number of visits of a random walk to a certain state
15 T15 10^6 Similar to the "random excursions" test, but the distribution of visits is across many random walks

T1, frequency test; T2, frequency test within a block; T3, runs test; T4, longest-run-of-ones in a block test; T5, binary matrix rank test; T6, discrete
Fourier transform (spectral) test; T7, non-overlapping template matching test; T8, overlapping template matching test; T9, Maurer's "Universal
Statistical" test; T10, linear complexity test; T11, serial test; T12, approximate entropy test; T13, cumulative sums (Cusums) test; T14, random
excursions test; T15, random excursions variant test; n, minimum length of the sequence obtained from a PRNG.

Then how can we get this randomness for the robust design of a cryptosystem? The following techniques are generally used:
i We can integrate a physical source of randomness, such as thermal noise in a resistor, into the cryptosystem. In such systems, the speed of the generator can be an issue and can create a bottleneck in the overall performance of the system.
ii We can use a cryptographically secure PRNG (CSPRNG), either standalone or with a physical source acting as a seed generator for the CSPRNG.
For a pseudorandom number generator (PRNG) to be cryptographically robust, it should satisfy the following conditions:3
a Statistical difference: The chance of finding distinctions between the statistical properties of the PRNG sequence and those of a truly random sequence, by any polynomial-time algorithm, should not be significantly greater than 0.5.
b Next-bit prediction: Given the first k bits of the PRNG sequence, the chance of predicting the next bit, that is, the (k + 1)th bit, by any polynomial-time algorithm, should not be significantly greater than 0.5.
It is interesting to note that both conditions need to hold simultaneously and one does not imply the other.

2 ORGANIZATION OF THE PAPER

The paper is organized as follows. In Section 3, we briefly discuss the different tests proposed by NIST and the parameters used in these tests. In Sections 4-7, we discuss the LCG,4 Wichmann-Hill,5 WELL6 and MIXMAX7 PRNGs and their performance based on the NIST Test Suite. In Section 8, we conclude the paper with comparative comments on the LCG, Wichmann-Hill, WELL and MIXMAX PRNGs.

3 STATISTICAL TESTING AND THE NIST STANDARD

There are many test packages available in the literature that detect particular kinds of weaknesses present in a sample sequence obtained from a PRNG algorithm. Some well-known packages can be found in References 8-12. All these tests have their own respective places on the testing and strictness spectrum. Marsaglia's tests9 are not very rigorous, whereas TestU01,12 proposed in 2007, is so strict that statisticians opine that, given a large enough sample, every generator gets disqualified by it. We, therefore, chose the NIST standard for our comparative analysis for its strict but not overly extreme approach. Table 1 summarizes the tests proposed by NIST10 along with their details for a given bit sequence.
The NIST Test Suite mainly targets condition (a) required for a CSPRNG. The tests, in general, produce a P-value.10 The P-value is compared with the significance level (α), which is defined by the tester. In this study, α has been taken as 0.01. If, for a given test, the P-value of a sequence is less than α, the sequence is said to have failed that test. Tests like "Non-overlapping Template Matching", "Random Excursions" and "Random Excursions Variant" do not generate a single P-value.10 For these three tests, the description of what to do if one of the P-values indicates non-randomness is quite vague. In this study, we assume that a PRNG is not secure if it fails one or more tests described in Table 1. For all the PRNGs considered, we have tested 1000 sequences, with each sequence being 1 000 000 bits long. For each test, we calculate S, the number of sequences passing the test, and PR, the passing ratio, that is, PR = S/1000. If PR lies in the confidence interval 0.99 ± 0.0094392, the PRNG is said to have passed the test.
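The half-width 0.0094392 of this interval is the three-standard-deviation bound for a binomial proportion recommended in the NIST documentation.10 The following minimal Python sketch (the function names and structure are our own, not part of the NIST suite) reproduces the bound and the pass/fail decision:

```python
import math

def pass_interval(alpha=0.01, num_sequences=1000):
    """Three-sigma confidence interval for the proportion of passing sequences,
    following the NIST SP800-22 recommendation for interpreting results."""
    p_hat = 1 - alpha                                  # expected pass proportion
    half_width = 3 * math.sqrt(p_hat * (1 - p_hat) / num_sequences)
    return p_hat - half_width, p_hat + half_width

def verdict(S, num_sequences=1000):
    """Return 'Pass' if PR = S/num_sequences lies inside the interval."""
    low, high = pass_interval(num_sequences=num_sequences)
    return "Pass" if low <= S / num_sequences <= high else "Fail"

print(pass_interval())   # ~ (0.980561, 0.999439), that is, 0.99 +/- 0.0094392
print(verdict(934))      # 'Fail' (e.g., the T2 count for the GNU Scientific Library LCG)
```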

4 LCG PRNG

The LCG represents one of the oldest and best-known techniques for producing a pseudorandom sequence of bits. The LCG is defined by the tuple (m, a, c) and the recurrence relation given below:

X_{n+1} = (a X_n + c) mod m.    (1)

Here, X_n represents the nth term of the sequence, a is the multiplier, c is the increment, and m is the modulus. The performance of the LCG is very sensitive to the choices of the parameters m, a, and c. Let us see how it fares against the NIST Test Suite.10 In Table 2, we list some popular implementations of different LCGs.
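To make the recurrence in Equation (1) concrete, here is a minimal sketch; the class layout and method names are our own, and the default parameters are taken from row 1 of Table 2 (the GNU Scientific Library generator):

```python
class LCG:
    """Minimal linear congruential generator: X_{n+1} = (a*X_n + c) mod m."""
    def __init__(self, seed, m=2**32, a=1664525, c=1013904223):
        # Default parameters follow row 1 of Table 2 (GNU Scientific Library).
        self.m, self.a, self.c = m, a, c
        self.state = seed % m

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

gen = LCG(seed=123456789)
print([gen.next_int() for _ in range(3)])
```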
The performance of these LCGs is shown in Tables 3-21. The results indicate that all the LCGs given in Table 2 perform poorly against the NIST Test Suite.

TABLE 2 LCGs with different m, a, and c in common use in runtime libraries of various compilers4

No. Source m a c
1. GNU Scientific Library 2^32 1 664 525 1 013 904 223
2. Borland C/C++ 2^32 22 695 477 1
3. glibc (used by GCC) 2^31 − 1 1 103 515 245 12 345
4. ANSI C: Watcom, Digital Mars, CodeWarrior, IBM VisualAge C/C++, C99, C11 2^31 1 103 515 245 12 345
5. Borland Delphi, Virtual Pascal 2^32 134 775 813 1
6. Microsoft Visual/Quick C/C++ 2^32 214 013 2 531 011
7. Microsoft Visual Basic (6 and earlier) 2^24 1 140 671 485 12 820 163
8. RtlUniform from Native API 2^31 − 1 2 147 483 629 2 147 483 587
9. Apple CarbonLib, C++11's minstd_rand0 2^31 − 1 16 807 0
10. C++11's minstd_rand 2^31 − 1 48 271 0
11. MMIX 2^64 6 364 136 223 846 793 005 1 442 695 040 888 963 407
12. Newlib, Musl 2^64 6 364 136 223 846 793 005 1
13. VMS's MTHRANDOM, old versions of glibc 2^32 69 069 1
14. Java's java.util.Random, POSIX rand48, glibc rand48[_r] 2^48 25 214 903 917 11
15. random0 134 456 8 121 28 411
16. POSIX [de]rand48, glibc [de]rand48[_r] 2^48 25 214 903 917 11
17. cc65, Sydney 2016 2^23 65 793 4 282 663
18. cc65 2^32 16 843 009 826 366 247
19. RANDU 2^31 65 539 0

TABLE 3 Test results for GNU scientific library


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 934 991 974 990 993 009 773 992 991 963 608 000 000 829 897
PR 0.000 0.934 0.991 0.974 0.99 0.993 0.009 0.773 0.992 0.991 0.963 0.608 0.00 0.00 0.829 0.897
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 4 Test results for Borland C/C++


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 948 991 966 991 989 009 819 988 995 968 588 000 000 819 888
PR 0.000 0.948 0.991 0.966 0.991 0.989 0.009 0.819 0.988 0.995 0.968 0.588 0.000 0.000 0.819 0.888
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 5 Test results for glibC (used by GCC)


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 943 995 969 988 994 009 794 990 988 973 537 000 000 822 913
PR 0.000 0.943 0.995 0.969 0.988 0.994 0.009 0.794 0.99 0.988 0.973 0.537 0.000 0.000 0.822 0.913
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 6 Test results for ANSI C: Watcom, digital Mars, etc.


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 938 988 960 991 983 005 783 981 991 974 574 000 000 818 903
PR 0.000 0.938 0.988 0.96 0.991 0.983 0.005 0.783 0.981 0.991 0.974 0.574 0.000 0.00 0.818 0.903
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 7 Test results for Borland Delphi, and virtual Pascal


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 932 994 966 992 988 010 787 980 979 969 572 000 000 818 899
PR 0.000 0.932 0.994 0.966 0.992 0.988 0.01 0.787 0.98 0.979 0.969 0.572 0.000 0.000 0.818 0.899
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Fail Fail Fail Fail Fail Fail Fail Fail

TABLE 8 Test results for Microsoft visual/quick C/C++


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 933 991 968 983 986 008 800 990 993 968 581 000 000 827 895
PR 0.000 0.933 0.991 0.968 0.983 0.986 0.008 0.8 0.99 0.993 0.968 0.581 0.000 0.000 0.827 0.895
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 9 Test results for Microsoft visual basic (6 and earlier)


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 958 997 941 991 000 004 775 997 994 000 874 000 000 794 857
PR 0.000 0.958 0.997 0.941 0.991 0.000 0.004 0.775 0.997 0.994 0.000 0.874 0.000 0.000 0.794 0.857
Verdict Fail Fail Pass Fail Pass Fail Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 10 Test results for RtlUniform from native API


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 989 000 992 989 986 000 986 986 987 928 014 000 000 859 917
PR 0.000 0.989 0.000 0.992 0.989 0.986 0.000 0.986 0.986 0.987 0.928 0.014 0.000 0.000 0.859 0.917
Verdict Fail Pass Fail Pass Pass Pass Fail Pass Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 11 Test results for apple CarbonLib, C++11’s minstd_rand0


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 945 994 974 988 995 011 796 990 991 975 548 000 000 853 913
PR 0.000 0.945 0.994 0.974 0.988 0.995 0.011 0.796 0.99 0.991 0.975 0.548 0.000 0.000 0.853 0.913
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 12 Test results for C++11's minstd_rand


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 939 990 975 990 990 10 773 986 994 969 568 000 000 841 903
PR 0.000 0.939 0.99 0.975 0.99 0.99 0.01 0.773 0.986 0.994 0.969 0.568 0.000 0.000 0.841 0.903
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 13 Test results for MMIX


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 951 992 970 992 989 008 795 983 991 970 536 000 000 839 909
PR 0.000 0.951 0.992 0.97 0.992 0.989 0.008 0.795 0.983 0.991 0.97 0.536 0.000 0.000 0.839 0.909
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 14 Test results for Newlib, Musl


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 928 993 969 993 989 005 799 991 986 958 575 000 000 806 873
PR 0.000 0.928 0.993 0.969 0.993 0.989 0.005 0.799 0.991 0.986 0.958 0.575 0.000 0.000 0.806 0.873
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 15 Test results for VMS’s MTHRandom


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 933 988 963 988 980 010 788 985 993 976 580 000 000 830 882
PR 0.000 0.933 0.988 0.963 0.988 0.98 0.01 0.788 0.985 0.993 0.976 0.58 0.000 0.000 0.83 0.882
Verdict Fail Fail Pass Fail Pass Fail Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 16 Test results for Java's java.util.Random


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 673 531 022 757 000 000 123 522 987 000 000 000 000 814 859
PR 0.000 0.673 0.531 0.022 0.757 0.000 0.000 0.123 0.522 0.987 0.000 0.000 0.000 0.000 0.814 0.859
Verdict Fail Fail Fail Fail Fail Fail Fail Fail Fail Pass Fail Fail Fail Fail Fail Fail

TABLE 17 Test results for random0


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 1000 1000 000 986 000 000 000 165 982 000 000 000 000 841 896
PR 0.000 1.000 1.000 0.000 0.986 0.000 0.000 0.000 0.165 0.982 0.000 0.000 0.000 0.000 0.841 0.896
Verdict Fail Fail Fail Fail Pass Fail Fail Fail Fail Pass Fail Fail Fail Fail Fail Fail

TABLE 18 Test results for POSIX [de]rand48, glibc rand48


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 941 989 972 997 991 010 779 990 985 971 567 000 000 830 835
PR 0.000 0.941 0.989 0.972 0.997 0.991 0.010 0.779 0.99 0.985 0.971 0.567 0.000 0.000 0.83 0.835
Verdict Fail Fail Pass Fail Pass Pass Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 19 Test results for cc65, Sydney 2016


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 1000 1000 000 598 000 000 000 676 987 000 000 000 000 854 877
PR 0.000 1.000 1.000 0.000 0.598 0.000 0.000 0.000 0.676 0.987 0.000 0.000 0.000 0.000 0.854 0.877
Verdict Fail Fail Fail Fail Fail Fail Fail Fail Fail Pass Fail Fail Fail Fail Fail Fail

TABLE 20 Test results for cc65


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 940 992 971 994 568 015 765 985 985 000 832 000 000 820 851
PR 0.000 0.94 0.992 0.971 0.994 0.568 0.015 0.765 0.985 0.985 0.000 0.832 0.000 0.000 0.82 0.851
Verdict Fail Fail Pass Fail Pass Fail Fail Fail Pass Pass Fail Fail Fail Fail Fail Fail

TABLE 21 Test results for RANDU


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 000 941 994 976 988 974 014 879 984 990 981 547 000 000 817 876
PR 0.000 0.941 0.994 0.976 0.988 0.974 0.014 0.879 0.984 0.99 0.981 0.547 0.000 0.000 0.817 0.876
Verdict Fail Fail Pass Fail Pass Fail Fail Fail Pass Pass Pass Fail Fail Fail Fail Fail

5 WICHMANN-HILL PRNG

The Wichmann-Hill generator, also known as AS-183, is a PRNG that combines three LCGs with different m, a, and c.5 The outputs of the three LCGs (each lying between 0 and 1) are summed modulo 1 to produce the result, which is simply the fractional part of the sum. The values of m, a, and c are fixed5 and the procedure is given as Algorithm 1. The seed values s1, s2, and s3 should lie between 1 and 30 000.
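Algorithm 1 itself is not reproduced here; the following is a minimal sketch of the AS-183 combination, assuming the standard multipliers (171, 172, 170) and moduli (30 269, 30 307, 30 323) from the Wichmann-Hill paper5 (the class and method names are our own):

```python
class WichmannHill:
    """Sketch of AS-183: three multiplicative LCGs combined modulo 1."""
    def __init__(self, s1, s2, s3):
        # Seeds are expected to lie in the recommended range (1 to 30 000).
        self.s1, self.s2, self.s3 = s1, s2, s3

    def next_float(self):
        # The three component LCGs with fixed (a, m) pairs.
        self.s1 = (171 * self.s1) % 30269
        self.s2 = (172 * self.s2) % 30307
        self.s3 = (170 * self.s3) % 30323
        # Sum of the three normalized outputs, taken modulo 1 (fractional part).
        return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

gen = WichmannHill(1, 10000, 3000)
print([round(gen.next_float(), 6) for _ in range(3)])
```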

The performance of the Wichmann-Hill PRNG against the NIST Test Suite is shown in Table 22. Clearly, the Wichmann-Hill algorithm performs better than a single LCG.

6 WELL PRNG

The WELL PRNG6 is a form of linear feedback shift register (LFSR) generator specifically suited for 32-bit machines. The WELL can be defined by the parameters (k, w, r, p, m1, m2, m3, M0, M1, M2, M3, M4, M5, M6, M7), where k = rw − p, r > 0 and 0 ≤ p < w, mp is the bit-mask, and each Mi is a transformation matrix of size w × w. There can be six possible transformations, as defined in Reference 6. It is also interesting to note that if M1 = M2 = M3 = M6 × M2 ⊕ M7 × M2 = M6 × M3 ⊕ M7 × M3, M0 = M5 × M1 ⊕ M7 × M1 = I (where I is the identity matrix), and M4 is a matrix whose only nonzero elements lie on the first line and on the first sub-diagonal (which contains all 1's), then WELL reduces to the "Mersenne Twister".13 The general algorithm for WELL6 is reproduced as Algorithm 2.
The NIST Test Suite was applied to WELL512a, WELL1024a, WELL19937a, WELL19937c, WELL44497a and WELL44497b. The PR corresponding to each test is shown in Table 23.

7 MIXMAX PRNG

The MIXMAX PRNG is based on k-mixing Kolmogorov systems.7,14 The generator is described mathematically as follows:


u_i(t + 1) = ( Σ_{j=1}^{N} A_{ij} u_j(t) ) mod 1.    (2)

TABLE 22 Test results for Wichmann-Hill PRNG


Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 995 994 985 989 988 988 213 991 985 992 976 993 994 992 915 910
PR 0.995 0.994 0.985 0.989 0.988 0.988 0.213 0.991 0.985 0.992 0.976 0.993 0.994 0.992 0.915 0.91
Verdict Pass Pass Pass Pass Pass Pass Fail Pass Pass Pass Fail Pass Pass Pass Fail Fail

T13(1), Cusums forward test; T13(2), Cusums backward test.

The initial seed value u0 may be any vector with at least one non-zero component. An efficient implementation is presented in Reference 15, with the size of the matrix being N = 240, the special entry in the matrix being 487 013 230 256 099 140, and the special multiplier being m = 2^51 + 1. We have run the NIST Test Suite on the MIXMAX PRNG, and the results are presented in Table 24.
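To make Equation (2) concrete, here is a toy sketch of a matrix-recurrence generator over the unit interval. The 3 × 3 matrix below is an arbitrary illustration of our own and is NOT the actual MIXMAX matrix, whose construction and efficient update are given in References 7, 14, and 15.

```python
from fractions import Fraction

def matrix_step(A, u):
    """One step of u_i(t+1) = (sum_j A_ij * u_j(t)) mod 1, in exact arithmetic."""
    return [sum(a * x for a, x in zip(row, u)) % 1 for row in A]

# Toy 3x3 integer matrix -- purely illustrative, not the real MIXMAX matrix.
A = [[1, 1, 1],
     [1, 2, 1],
     [1, 1, 2]]

# Seed vector on the unit interval with at least one non-zero component.
u = [Fraction(1, 3), Fraction(1, 7), Fraction(1, 11)]
for _ in range(3):
    u = matrix_step(A, u)
    print([float(x) for x in u])
```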

8 CONCLUSION

Based on the NIST Test Suite results on different LCG PRNGs, we observed that all the LCGs performed very poorly compared to the other PRNGs. However, a linear combination of LCGs, defined as the Wichmann-Hill PRNG, has fared well against the NIST Test Suite, although it fails the "Serial Test". The different implementations of the WELL PRNG perform better than the Wichmann-Hill PRNG, though certain implementations such as WELL512a and WELL44497b also fail the "Serial Test". The MIXMAX PRNG outperforms the above three generators convincingly, as it does not fail the "Serial Test". However, due to the strong failure assumption for the "Non-overlapping Template Matching Test" (it generates 148 P-values), the "Random Excursions Test" (it generates 8 P-values) and the "Random Excursions Variant Test" (it generates 18 P-values), all the generators considered in this paper failed these tests.

TABLE 23 Test results for WELL PRNG


Type T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
512a 0.987 0.983 0.992 0.989 0.989 0.987 0.246 0.99 0.991 0.992 0.976 0.989 0.987 0.986 0.897 0.922
1024a 0.991 0.99 0.985 0.988 0.99 0.988 0.256 0.989 0.988 0.989 0.981 0.987 0.991 0.988 0.912 0.929
19937a 0.99 0.989 0.99 0.988 0.991 0.982 0.229 0.991 0.983 0.99 0.982 0.986 0.993 0.993 0.901 0.929
19937c 0.985 0.99 0.993 0.99 0.984 0.982 0.253 0.993 0.991 0.987 0.989 0.983 0.989 0.99 0.915 0.928
44497a 0.99 0.985 0.99 0.99 0.99 0.993 0.262 0.991 0.985 0.995 0.983 0.99 0.99 0.99 0.922 0.917
44497b 0.989 0.992 0.994 0.986 0.995 0.988 0.203 0.99 0.988 0.991 0.977 0.993 0.99 0.99 0.911 0.907

TABLE 24 Performance of the MIXMAX PRNG against the NIST test suite
Test T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13(1) T13(2) T14 T15
S 990 985 988 993 987 985 244 992 990 991 984 984 984 991 905 910
PR 0.99 0.985 0.988 0.993 0.987 0.985 0.244 0.992 0.99 0.991 0.984 0.984 0.984 0.991 0.905 0.91
Verdict Pass Pass Pass Pass Pass Pass Fail Pass Pass Pass Pass Pass Pass Pass Fail Fail

The PR value was exceptionally low for the "Non-overlapping Template Matching Test" because the chance of failure is much higher (any 1 out of 148 P-values) than for the "Random Excursions Test" (any 1 out of 8) and the "Random Excursions Variant Test" (any 1 out of 18), under the assumption that a single failed P-value implies failure of the whole test. Therefore, based on these test results, none of the generators considered in this paper should be used for cryptographic applications.
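As a rough sanity check of this interpretation (our own back-of-envelope estimate, not part of the NIST documentation): if each individual P-value of a truly random sequence independently fails with probability α = 0.01, then the expected pass ratio of a test that fails on any single failing P-value is (1 − α)^k for k P-values.

```python
# Expected pass ratio if a test produces k independent P-values and a single
# failure (P-value < alpha) fails the whole test -- a rough estimate only.
alpha = 0.01
for name, k in [("Non-overlapping template matching", 148),
                ("Random excursions", 8),
                ("Random excursions variant", 18)]:
    print(f"{name}: {(1 - alpha) ** k:.3f}")
# -> roughly 0.226, 0.923, 0.835, consistent with the low PR values observed
#    for these three tests in Tables 22-24.
```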

CONFLICT OF INTEREST
The authors declare no potential conflict of interests.

ORCID

Shobhit Sinha https://orcid.org/0000-0001-7555-0785


SK Hafizul Islam https://orcid.org/0000-0002-2703-0213
Mohammad S. Obaidat https://orcid.org/0000-0002-1569-9657

REFERENCES
1. von Neumann J. Various techniques used in connection with random digits, Collected Works. Vol. 5, New York: Macmillan; 1963.
2. Kerckhoffs A. La cryptographie militaire. J Mil Sci. 1883;IX:48-83.
3. Menezes AJ, van Oorschot PC, Vanstone SA. Handbook of Applied Cryptography. 5th ed. USA: CRC Press, Inc.; 2001.
4. Frieze AM, Kannan R, Lagarias JC. Linear Congruential generators do not produce random sequences. Proceedings of the 25th Annual Symposium on Foundations
of Computer Science; USA: IEEE; 1984:480-484.
5. Wichmann BA, Hill ID. Correction: algorithm AS 183: an efficient and portable pseudo-random number generator. J R Stat Soc Ser C Appl Stat. 1984;33(1):123.
6. Panneton F, L’Ecuyer P, Matsumoto M. Improved long-period generators based on linear recurrences modulo 2. ACM Trans Math Softw. 2006;32(1):1-16.
7. Savvidy KG, Ter-Arutyunyan-Savvidy NG. On the Monte Carlo simulation of physical systems. J Comput Phys. 1991;97(2):566-572.
8. Knuth DE. The Art of Computer Programming. Vol 2. 2nd ed. Reading, MA: Addison-Wesley; 1981.
9. Marsaglia G. Diehard: A Battery of Tests of Randomness. http://stat.fsu.edu/pub/diehard/
10. Rukhin A, Soto J, Nechvatal J, et al. A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications. USA: NIST Special
Publication; 2001.
11. Gustafson H, Dawson E, Nielsen L, Caelli W. A computer package for measuring strength of encryption algorithms. J Comput Sec. 1994;13(8):687-697.
12. L’Ecuyer P, Simard R. TestU01: a software library in ANSI C for empirical testing of random number generators. ACM Trans Math Softw. 2007;33(4):22-40.
13. Matsumoto M, Nishimura T. Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator. ACM Trans Model Comput Simul. 1998;8(1):3-30.
14. Savvidy KG. The MIXMAX random number generator. Comput Phys Commun. 2015;196:161-165.
15. Savvidy K, Savvidy G. Spectrum and entropy of C-systems MIXMAX random number generator. Chaos Solitons Fractals. 2016;91:33-38.

How to cite this article: Sinha S, Islam SKH, Obaidat MS. A comparative study and analysis of some pseudorandom
number generator algorithms. Security and Privacy 2018;1:e46. https://doi.org/10.1002/spy2.46
