
Applications of Chaos and Fractals to Cryptology

A dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy

By

Juan Carlos Córdova Zeceña, Ingeniero Electrónico, M.S.E.E.


Universidad de San Carlos de Guatemala, 1993
University of Arkansas, 1997

August, 1999
University of Arkansas
This dissertation is approved for
recommendation to the
Graduate Council

Dissertation Director:

___________________________
Dr. Edwin Engin Yaz

Dissertation Committee:

___________________________
Dr. Kraig J. Olejniczak

___________________________
Dr. Dwight F. Mix

___________________________
Dr. Mitchell A. Thornton

___________________________
Dr. David L. Andrews
Carpe Diem
Applications of Chaos and Fractals to Cryptology

Abstract of dissertation submitted in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy

By

Juan Carlos Córdova Zeceña, Ingeniero Electrónico, M.S.E.E.


Universidad de San Carlos de Guatemala, 1993
University of Arkansas, 1997

August, 1999
University of Arkansas
This abstract is approved by:

Dissertation Director:

_______________________
Dr. Edwin Engin Yaz
The applications of chaotic systems and fractals to cryptology are examined. Several
methods are proposed with the intention to produce better pseudorandom numbers and
increase the level of confusion and diffusion in encoded messages. A complexity measure is
adopted as a means to assess the performance of each method. The proposed schemes are
compared against each other and against the results of the crypt algorithm, available from the
Unix operating system.

The schemes are divided into pseudorandom number generators, permutation matrix
generators, substitution schemes, and secret sharing schemes. Computer simulations suggest
that, in the case of pseudorandom generators, a technique we call indirect thresholding
should be favored over a more straightforward approach called direct thresholding. Also,
the proposed substitution schemes, from the viewpoint of our complexity measure, compare
favorably against the crypt command. Permutation matrix generators are found to produce
weaker results when used directly for encoding purposes, and simulations suggest their proper
role in cryptography is that of cryptographic primitives whose function is to increase the
diffusion in the ciphertext.

Iterated function systems are used in both permutation matrix generation and secret
sharing schemes. In the latter, a method is devised by which a secret is broken into shares
(represented by functions of an iterated function system) in such a manner that only when the
shareholders agree on combining their information is it possible to decode the secret
message. Examples are shown in which individual attempts by a single shareholder to decode
the secret are unfruitful.

Some other issues are also treated, such as the assignment of a dimension to structured
languages, the general concepts of fractals and chaotic systems, and how unpredictability in
these systems suggests their application to cryptography.

Dedication

Somehow Queen's old hit, "Another One Bites the Dust", comes to my mind. I have finished
a work that started in 1997 when I formally entered the doctoral program at this university. I
had come here in the autumn of 1995, enjoying the benefits of a Fulbright scholarship, to
pursue a Master's degree in Electrical Engineering and by the time I was awarded that degree
I was also sure I did not want to stop there.

I am deeply grateful to the Electrical Engineering department of this university. Working
for this department has allowed me to complete my doctorate degree and has also provided
me with the opportunity to exercise and improve my teaching skills. A most enjoyable
experience. It is through teaching and research that the wonders of science come alive and
one cannot but stand in rapture, seeing how beauty reveals itself in all the odd places, even in
chaos.

One starts writing a dissertation and, without knowing it, one embarks on a very
demanding but at the same time very rewarding relationship. Long nights are spent in front of
a flickering computer screen that you know cannot be healthy for your eyes. On Friday
nights, the thought "this is not my life!" crosses your mind, and you refrain from abandoning
it all and drive to the nearest pub. But not without a hard stare your computer is oblivious to.
And yet, this is your life. One does not pursue a PhD without first falling in love with
science. Curiosity is our drive, the quest for answers to old and new questions, but also
playfulness, the need to solve puzzles or make up our own riddles. And in our field,
scribbling equations on a napkin and simulating systems on a computer are the tools we have
to find those answers, be this any day of the week.

Excitement and frustration alternate as your brain's child painstakingly grows in pages.
You may think (or at least hope) you are changing blank pages into meaningful stuff, but by
the end of the process you also find that you have changed too. You know yourself better,
your strengths and weaknesses. You also know that much lies ahead.

Now that I am writing these last words on my "Lady D" (as I nicknamed my dissertation)
I feel like another one bites the dust, but there are still plenty of things to come and do. A
vast sea is in front of us; we will set sail and see where the wind and the tides take us. Let
our journey be extraordinary, we ask for nothing less.

This dissertation is dedicated to life. I have seen the dark in some of your deepest valleys
and also gazed at the bright skies on your highest mountains. I want to see more.

Acknowledgments

Quoting Antoine de Saint-Exupéry: "what is essential is invisible to the eye. . .". To be
here, in this particular place, at this particular time, may seem like a random event. In truth,
it is the result of a personal inclination towards studying and the huge support I have received
from many, many people who, at one time or another, believed in me and were willing to bet
on me. All of them have been essential, even when it is impossible to name them all, and
their identities vanish in this mist we call the past.

I would like to express my gratitude to my family. For knowing how to let me pursue my
ways and knowing how to keep in touch with me, reminding me without words that family is
forever. This silent support has been the foundation on which my personal growth has
become possible. Thank you. I carry you with me anywhere I go.

My extended family, my friends, have also been a very important stabilizing factor in my
life. To my pleasure, now they span the world and there is probably not a continent I can visit
where I cannot find a friend. University life makes this miracle possible and it changes one's
perspective on the world: we are One and caring for each other amounts to caring for
ourselves and vice versa.

To the immediate and most influential persons on this work, the members of my doctoral
committee, my deepest thanks. Meeting you and having you as part of my life honors me.

Starting with Dr. Edwin Engin Yaz, the director of this dissertation and my major
professor almost since my arrival at this university: working with him has always been
interesting, and I cannot think of any other person I would rather have to direct my academic
steps.

My thanks to Dr. Mitch Thornton, whose intuition and thinking I found most clear, and
who seems to be especially gifted in the art of posing problems one cannot resist. Working
with him on different projects and on different occasions has been fun and stimulating.

Finally, I would like to extend my appreciation to Dr. David Andrews, Dr. Dwight Mix
and Dr. Kraig Olejniczak, the other members of my committee, with whom I have either
taken classes or worked, and who have, perhaps unknowingly, encouraged me and fed
my enthusiasm in the quest for knowledge.

Nomenclature

F - An abstract field
R - The set of real numbers
C - The set of complex numbers
I - The set of integers
N - The set of natural numbers
|x| - Magnitude if x belongs to a field F;
      set diameter if x is a set;
      determinant if x is a matrix
∋ - Such that
≈ - Approximately equal to
≡ - Identically equal to
∝ - Proportional to
∞ - Infinity
∀ - For all
∃ - There exists
∈ - Belongs to
⊆ - Subset
⊂ - Proper subset
⇒ - Implies
⇔ - If and only if
∂f/∂x - Partial derivative of f with respect to x
∧ - And
∨ - Or
∪ - Union
∩ - Intersection

Chapter 1

Introduction

Cryptology derives from the Greek words κρυπτός and λογική, meaning hidden and
treatise respectively, i.e., the science of secrecy. In modern usage, however, the intended
significance of the word is not limited to those things that are to be kept secret but to the
broader field of information integrity [1]. This includes (but is not limited to) privacy, where
information is kept hidden from all but the intended parties and authentication, where the
authenticity of the information is confirmed. See Figure 1.1. The main focus of cryptology is
to ensure the availability of genuine information to those for whom the information was
originally intended to be provided.

The privacy aspect of cryptology concentrates on delivering information to its rightful
recipients through a possibly insecure medium called the channel. Since having access to the
channel does not imply being authorized to receive the information available on it, this must
be encoded or hidden in such a way that only legitimate users can retrieve it. Cryptography
refers to the case where encoding is used, whereas steganography alludes to the case when it
is the very existence of the information that is kept secret [2]. Assessing the level of security
of a channel or a scheme is done by means of cryptanalysis.

Authentication, on the other hand, deals with the problem of the legitimacy of the
information and its affiliation. Common authentication problems include recognizing a
legitimate user (identification), recognizing a genuine document (validation), and linking a
user to a document (signature).

Schemes are devised in which these cryptologic functions take place. They consist of
protocols, cryptoalgorithms, and implementations. Each of these is a link in a chain that is
supposed to ensure data integrity, and when designing a practical cryptosystem one must take
all of them into account. Protocols consist of a set of rules and procedures for handling and
processing information without compromising its integrity. Cryptoalgorithms, the algorithms
that actually process data, are at the core of a cryptosystem; they rely on some mathematical
problem that is easy to solve when a piece of information, called the key, is known and very
difficult if this information is missing. A cryptographic attack is an attempt to gain
unauthorized access to the information. The enemy or adversary or opponent is any party that
undertakes such an attack. Attacks may be directed to protocols [3], to the cryptographic
algorithms (by means of cryptanalysis) [4] or to the actual implementation on a particular
platform where the hardware and software involved give away information on keys,
passwords or pairs of unencrypted and encrypted information [5]. So, for a cryptosystem to
be secure, it needs to be designed taking all these factors into account and also envisioning
the kind of adversaries and allies (trusted parties) it might be dealing with [6].

Cryptosystems, by their nature, are difficult to analyze. There is no standard procedure to
estimate the level of security of a system, although there are some requirements involving
physical access and functionality [7]. Cryptoalgorithms are usually analyzed in terms of
theoretical and practical measures. In the case of the former, a mathematical analysis may
reveal how difficult it is for an all-powerful adversary to gain unauthorized access to the
system, whereas in the latter, a similar estimate is formulated on the assumption of limited
resources available to the adversary. This is a weaker form of a security measure, but its use
is validated by the fact that some cryptoalgorithms do not lend themselves to an explicit
mathematical description.

In any case, it is important to stress that when designing cryptosystems in general, and
cryptoalgorithms in particular, some immediate practical concerns should be considered,
such as: where the system operates, what kind of data it should protect, for how long a time,
what liberties we are granting the adversary; and in the particular case of data encoding, if it
is a lossy or lossless algorithm we are using and if there is any bandwidth or other physical
limitation.

In this dissertation, we will deal primarily with cryptoalgorithms because protocols and
implementations are system specific, while algorithms are not. Once a cryptographic
algorithm is devised, it is the task of the systems engineer to ensure the hardware
implementation and protocols do not undermine the security level of the cryptoalgorithm.

Also, of all cryptologic functions, cryptography is the oldest and most useful, so we will
focus on it.

The rest of this chapter introduces general aspects of cryptography, chaotic systems,
fractals, and some of the terminology we will be using later on.

Chapter 2 will deal with the issue of complexity, a measure of how intricate a structure is.
This will be important in later chapters as a means to assess the performance of different
cryptoalgorithms and derive conclusions on their usefulness and applications. In Chapter 2 we
apply this complexity measure to text fragments written in English. This is done to gain an
understanding of the underlying structure of this language and to set a reference point from
which we may better understand how much a cryptoalgorithm obscures this structure.

In Chapter 3, we resume our analysis of chaotic systems and fractals to add a level of
detail and definitions that will empower us to make assertions on the local and also eventual
behavior of these systems. We also analyze a natural association of a language with a fractal
dimension. It remains a conjecture that this association may point out fractals that are better
suited for the purpose of encoding messages in this language.

Several schemes that involve pseudorandom number generation, permutation matrix
generation, substitution, and secret sharing are proposed and analyzed in Chapter 4. They are
also compared against a known popular cryptographic algorithm.

Finally, in Chapter 5 we draw conclusions on the performance of the proposed schemes,
their applicability, and possible extensions to them.

The main contribution of this work is to show diverse ways in which chaotic systems and
fractals can be applied to encode information without increasing the physical requirements
needed to transmit it. Of course, the same techniques used to guarantee safe transmission of
information can be used to ensure safe storage and retrieval of it in a computer system. So the
work presented here may find applications in communication systems, computer systems,
electronic commerce, etc.

1.1 Cryptography

The main goal in cryptography is to transfer information between two or more points,
through a possibly insecure channel, in such a way that even in the presence of eavesdroppers
or active intruders, only the parties to which the information is addressed will be able to
decode it. This is depicted in Figure 1.2, which is essentially the same diagram Shannon [8]
used to start his mathematical description of secrecy systems.

The field has evolved a somewhat colorful naming of the different participants, a
convention we will follow here. Alice is the sender and originator of a message, Bob is an
intended receiver (and so will Carol and Dave be if more than two parties are involved in the
transfer), and Oscar will be any opponent in general.

So, referring to Figure 1.2, the goal is for Alice to communicate with Bob without
disclosing any information to Oscar. To accomplish this, Alice encodes the original
information, the text m, before transmitting it to Bob. Both Alice and Bob agree, a priori, to
share a piece of information K, called the key, that allows Alice to encode m into the
ciphertext c and Bob to decode m from c. Oscar is assumed not to know the key, so he faces
the problem of inferring what the key is based upon the knowledge he can gather from the
ciphertext.

However, not knowing K does not necessarily stop Oscar from trying to estimate what m
was. He will use any means at his disposal (statistics, heuristics, side information) to discard all
but those texts that could have given rise to c with an appreciable probability. In doing so, he
will produce an estimate of the text, m̂. He would have succeeded in breaking the system if
he can come up with estimates that are arbitrarily close to the original message for all
possible ciphertexts. The goal of the cryptosystem, on the other hand, is to leak as little
information as possible about the operation of the system (specifically, the key) so that
Oscar's knowledge of c does not enable him to create better estimates of m.

Of course, for the system to operate properly it is necessary that the encoding and
decoding processes be invertible:

$$c = T_K(m), \qquad m = T_K^{-1}(c)$$

and that, given a particular ciphertext, there exists more than one K that will index an inverse
transformation that will produce a plausible text. If this is the case, the particular ciphertext is
unbreakable, and it is breakable otherwise [9].

A key K is a weak key for the encryption system if the ciphertexts obtained by encoding
texts using this key have the property that very few plausible texts can be associated to them;
choosing among fewer possibilities would make Oscar's task easier. However, this is not the
only way a key can be weak. It may be that the particular transformation indexed by a key
preserves, in the ciphertext, a great deal of the statistical characteristics of the original text,
so correlation with known trends in a language can be used to decipher the ciphertext without
actually finding the key.

According to one of Kerckhoffs' rules [10], in a good cryptosystem the security of the
system must rely on the key alone and not on the particular algorithm chosen or the details of
its implementation. This does not mean that the algorithm or its implementation are
unimportant; they certainly are important, but they are also public knowledge, so one should
not expect an opponent to be unaware of them. Instead, the only vital piece of information
Oscar lacks is the key, and this ignorance, in conjunction with a good choice of the family of
transformations and key space, should be enough to render his efforts useless.

The amount of liberty we assign Oscar will determine the kind of attacks he may launch
against the system [11]:

a) ciphertext only, when he only has access to the channel through which encrypted
information is transmitted;

b) corresponding text-ciphertext, when he has access to pairs of corresponding original
text and resulting ciphertext, but he has no control over the particular text used;

c) chosen plaintext, when he can induce Alice into enciphering a text of his choice;

d) chosen ciphertext, when he can induce Bob to decipher a ciphertext of his choice.

A cryptographic system is more robust the greater the number and diversity of attacks it
can withstand without being weakened. The strength of a cryptosystem is partially (but not
entirely) associated with the size of its key space: large key spaces prevent brute force
attacks that involve trial and error of every key. Another contributor to a cryptosystem's
strength is the type of transformations and indexing system that has been selected, and yet
another part is determined by cryptographic protocols and implementation details.

The keys used for enciphering and deciphering need not be the same. If they are, the
unique key is called a secret key and the algorithm that uses it, a symmetric algorithm (or
secret-key algorithm or one-key algorithm or single-key algorithm). A symmetric algorithm
requires the exchange of the secret key through a secure channel; the reason for not using
such a channel for information exchange is usually related to availability and expenses.
Public-key algorithms, on the other hand, do not require the existence of a secure channel.
Two keys are generated: one called the private key, used for decryption, and another called
the public key, used in encryption. Although related, it should be computationally infeasible
to calculate one key from the knowledge of the other. The system operates in the following
manner: Alice generates a pair <private key, public key> and publishes her public key;
anyone that wants to communicate privately with Alice uses her public key to encode a
message, Alice will be the only one able to decipher the message because she is the only one
that knows the private key. This system is not without flaws. If Oscar is not a simple
eavesdropper but can actively intercept messages through the channel, he can replace
Alice's public key with his own, so he can read any message originally intended for Alice.
Since he also knows Alice's original public key, he can choose to re-encode these messages,
with or without tampering, so that Alice is unaware that her communications are being
monitored. This man-in-the-middle attack can be prevented by using more elaborate
protocols [12].

Encoding-decoding schemes can also be classified as stream-ciphers and block-ciphers
based upon the way they transform the original text. Stream-ciphers transform element by
element in the text, whereas block-ciphers transform chunks of elements. What constitutes an
element is still a bit subjective, since one can consider, for example, an algorithm operating
on one ASCII character at a time as a stream-cipher operating on an alphabet consisting of
256 elements or as a block-cipher operating over chunks of eight bits. The distinction is not
rigorous because stream-ciphers can always be considered block-ciphers of block length
equal to unity, and block-ciphers can always be considered as stream-ciphers over a much
larger alphabet. The terminology applies to the more practical cases where information
generation rates can be compared with information encryption rates, such as in secure
telephone communications, where the production of ciphertext occurs simultaneously with
the feeding of input text (speech); here, the cipher employed is usually a stream-cipher. Block
ciphers are more often used when time is not a major concern and the whole text is available
at once.
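
To make the distinction concrete, here is a toy Python sketch of our own (not a scheme from
this dissertation, and not a secure cipher): the same text is encoded element by element with a
running keystream, and in chunks of eight bytes with a block-dependent mask.

import hashlib

def keystream(key: bytes):
    """Derive an endless byte keystream from the key by repeated hashing."""
    state = hashlib.sha256(key).digest()
    while True:
        for b in state:
            yield b
        state = hashlib.sha256(state).digest()

def stream_encrypt(key: bytes, text: bytes) -> bytes:
    """Stream-cipher style: transform one element (byte) at a time."""
    ks = keystream(key)
    return bytes(t ^ next(ks) for t in text)

def block_encrypt(key: bytes, text: bytes, block: int = 8) -> bytes:
    """Block-cipher style: transform chunks of 'block' bytes at once."""
    out = bytearray()
    for i in range(0, len(text), block):
        chunk = text[i:i + block]
        mask = hashlib.sha256(key + i.to_bytes(8, "big")).digest()[:len(chunk)]
        out.extend(c ^ m for c, m in zip(chunk, mask))
    return bytes(out)

key, msg = b"shared secret", b"attack at dawn"
# XOR constructions are involutions: applying them twice recovers the text.
assert stream_encrypt(key, stream_encrypt(key, msg)) == msg
assert block_encrypt(key, block_encrypt(key, msg)) == msg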

Cryptoalgorithms in general rely on cryptographic transformations to hide information.
These transformations are selected based on their ability to hide the statistical structure of the
original text (confusion), obscure the statistical connection between the text and the
ciphertext (diffusion), and the difficulty in determining them when the key is unknown. The
last is related to the time or space complexity of any cryptanalytic algorithm (one used by an
opponent) to search over the key space.

Cryptosystems that cannot be broken, no matter how much ciphertext is analyzed and no
matter how much computational power the opponent has at his disposition, are dubbed
perfect systems. Such systems attain perfect secrecy; their existence was proved by Shannon
[8], but unfortunately they are impractical since they require keys that convey as much
information as the message they are used to encode, and the keys are, therefore, at least as
long as the original text. In more realistic cryptosystems, unique solutions for a particular
ciphertext can be found, in principle, after examining only u cipher digits; this quantity is
called the unicity distance. Perfect systems have u equal to infinity; other systems, called
ideal systems also have u equal to infinity but the ciphertext can still be related to a finite (but
not unique) number of plaintexts. Imperfect ciphers have a finite unicity distance that in
many cases is a small number, yet this does not imply it is necessarily easy to break them or
that only u characters have to be collected to break them. More often than not, the opponent
has to gather many more than u characters before cryptanalysis is feasible, and breaking the
code is a matter of time only. But this computational time may be enormous, given the state-
of-the-art in computers, and that is where the strength of the system comes from. This also
illustrates the fact that cryptographic methods are dependent on current technology and with
more powerful computers we may expect more powerful algorithms to be conceived and
used that are not possible today because of the time requirements involved. Particularly, the
advent of quantum computers might have a striking impact on cryptography and
cryptanalysis, since they are expected to run through exponential search spaces in polynomial
time.

1.2 Fractals

The word fractal was coined by Benoit Mandelbrot from the Latin adjective fractus [13]
to refer to certain geometrical objects and sets that he thought could represent more closely
the geometry of natural objects such as clouds and mountains. Fractals were known,
however, a long time before Mandelbrot christened them [14]. The Cantor middle-third set
appeared in 1883 [15]; the Peano curve, in 1890 [16]; the Hilbert curve, in 1891 [17]; and the
works of Sierpinski, Julia, and Hausdorff date back to the early nineteen hundreds. But it
was not until Mandelbrot started using computers to visualize fractals and proposed a
'Geometry of Nature' that the field stood up on its own.

The basic idea underlying the concept of a fractal is that of self-similarity; a fractal set is
one for which any subset is a scaled down and possibly distorted version of the original.
Often, such sets can be associated with a dimension, a fractal dimension, different from their
topological dimension, and in the majority of the cases this dimension is, amazingly, a
fractional number. This leads to dust-like sets that have a dimension greater than zero,
continuous curves with dimension greater than one and surfaces with dimensions greater than
two, to mention a few examples.

Topological dimension corresponds to our intuitive concept of dimension, i.e., the
number of degrees of freedom necessary to specify a point within a set. More technically
[18], we may define the topological dimension of a set recursively in terms of the dimension
of the intersection between the set and the boundary of neighborhoods of points within the
set.

Definition 1.1 A set S has topological dimension 0 if every point within the set has
arbitrarily small neighborhoods whose boundaries do not intersect the set.

Definition 1.2 A set S has topological dimension k if each point in S has arbitrarily small
neighborhoods whose boundaries' intersection with S is (k-1)-dimensional, and k is the least
nonnegative integer for which this holds.

There is no unique definition of what constitutes a fractal dimension, and in fact there are
several ways in which such a dimension may be defined [19]. Fortunately enough, in many
cases the various definitions lead to the same result, so choosing a particular one is
determined by the type of the available data concerning the set and by the ease with which a
particular definition can be applied. Commonly these fractal dimension definitions, of which
the Hausdorff dimension and the box counting dimension are important instances, rely on
some power law that relates the set to its self-similar elements.

Self-similarity may be geometric or statistical or defined by some other properties of the
set and the particular measure involved. It may also be strong, in which case the properties
involved are strictly preserved, or weak, in which case only some properties are preserved or
all present a mild degree of distortion.
Since self-similarity is a fundamental characteristic of a fractal, it too can be used to
create them. Take, for example, a geometric fractal such as the Koch curve [20]: here a single
segment is scaled down by a factor of three and four copies of it are laid down replacing it
(see Figure 1.3). If we continue doing this kind of substitution wherever we find a straight line
segment, we generate the Koch curve (Figure 1.4): a curve with infinite length, no tangents,
delimiting a finite area between it and the abscissa, and with a fractal dimension of
log(4)/log(3) ≈ 1.26, a strange mathematical object indeed!

The Koch curve can be described in several manners. First, we can view the process of
generating the curve as that of a production system or a grammar in formal languages [21];
in such systems, a production is a rule that tells us how to rewrite (or transform) an element in
the set. L-systems, named after the biologist Aristid Lindenmayer, who proposed them to
model growth in living organisms [22], are essentially grammars of this type. In the case of
the Koch curve there is a single production that tells us that a simple straight line (Figure
1.3.A) should be replaced or rewritten by a wedge-like figure (Figure 1.3.B). Repeating this
process ad infinitum renders the fractal of which Figure 1.4 is an approximation.

The production that gives rise to the Koch curve can also be considered as the combined
result of applying four different mathematical transformations at once to the original set and
repeating this process ad infinitum. These transformations can be described verbally and as
mathematical transformations of the plane.

If we let x denote a vector in R² lying on the Koch curve, the four transformations implied
by the production rule are:

1) contract the image to one third of its size;

$$x_{k+1} = w_1(x_k) = \begin{bmatrix} \tfrac{1}{3} & 0 \\ 0 & \tfrac{1}{3} \end{bmatrix} x_k$$

2) contract the image to one third of its size and displace it two units to the right;

$$x_{k+1} = w_2(x_k) = \begin{bmatrix} \tfrac{1}{3} & 0 \\ 0 & \tfrac{1}{3} \end{bmatrix} x_k + \begin{bmatrix} 2 \\ 0 \end{bmatrix}$$

3) contract the image to one third of its size, rotate it 60° counter-clockwise and displace
it one unit to the right;

$$x_{k+1} = w_3(x_k) = \begin{bmatrix} \tfrac{1}{6} & -\tfrac{\sqrt{3}}{6} \\ \tfrac{\sqrt{3}}{6} & \tfrac{1}{6} \end{bmatrix} x_k + \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

4) contract the image to one third of its size, rotate it 60° clockwise and displace it to the
right by 1.5 units and upward by $\sqrt{3}/2$ units;

$$x_{k+1} = w_4(x_k) = \begin{bmatrix} \tfrac{1}{6} & \tfrac{\sqrt{3}}{6} \\ -\tfrac{\sqrt{3}}{6} & \tfrac{1}{6} \end{bmatrix} x_k + \begin{bmatrix} \tfrac{3}{2} \\ \tfrac{\sqrt{3}}{2} \end{bmatrix}$$

The functions w1, w2, w3, and w4 comprise what is known as an iterated function system
or IFS. In particular, since the w's only involve linear scaling, rotations and translations, this
IFS consists of affine transformations. Iterating the functions within the IFS renders an
invariant set or attractor which, in this case, is the Koch curve. A surprising fact is that the
order in which the functions are selected, and the image we start with, bear no significance:
the final image (the attractor) is independent of them. This will hold true as long as the
mappings in the IFS are all contractive [22].
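
The following Python sketch of ours plays the 'chaos game' with the four maps above:
starting from an arbitrary point, it repeatedly applies a randomly chosen wi, and the generated
points accumulate on the Koch curve (here with L0 = 3, so the attractor spans 0 ≤ x ≤ 3).

import math
import random

S, A = 1.0 / 3.0, math.pi / 3.0  # contraction ratio and 60-degree angle

def w1(x, y): return (S * x, S * y)
def w2(x, y): return (S * x + 2.0, S * y)
def w3(x, y):
    c, s = math.cos(A), math.sin(A)      # rotate +60 degrees
    return (S * (c * x - s * y) + 1.0, S * (s * x + c * y))
def w4(x, y):
    c, s = math.cos(-A), math.sin(-A)    # rotate -60 degrees
    return (S * (c * x - s * y) + 1.5, S * (s * x + c * y) + math.sqrt(3) / 2)

random.seed(0)
x, y, pts = 0.0, 0.0, []
for _ in range(100000):
    x, y = random.choice((w1, w2, w3, w4))(x, y)
    pts.append((x, y))
pts = pts[100:]                                        # discard a short transient
print(min(p[0] for p in pts), max(p[0] for p in pts))  # ~0 ... ~3

As the text notes, neither the starting point nor the order in which the maps are applied
matters: the resulting attractor is the same as long as all four maps are contractive.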

In the case of the Koch curve, we may find it interesting to observe how the number of
segments Nk and the length Lk vary at the different stages k of its construction, provided we
measure the length using at least a resolution δk. In the beginning, k = 0, we start with a
single segment of length L0 (in the particular curve depicted in Figure 1.4, L0 equals 3, but as
we will see, this is not too important), N0 = 1, and our resolution does not need to be better
than δ0 = L0. In the first iteration we substitute this segment with four different segments,
each 1/3 of the length of the original, so N1 = 4; at this scale we need a resolution δ1 = L0/3
to be able to follow the curve and, in doing so, we find L1 = (4/3) L0. Proceeding in this way,
we find:

$$N_k = 4^k, \qquad \delta_k = \frac{L_0}{3^k}, \qquad L_k = \left(\frac{4}{3}\right)^k L_0$$

we notice Nk → ∞ as δk → 0, so we conjecture a power law between those quantities:

$$N = c\,\delta^{-s}$$

If such is the case, we might say the set has self-similarity dimension s [19], and may find
it as

$$s = \lim_{\delta \to 0} \frac{\log N}{-\log \delta}$$

δ will approach zero as k approaches infinity. Using the expressions we have for Nk and δk
we find:

$$s = \lim_{k \to \infty} \frac{k \log 4}{k \log 3} = \frac{\log 4}{\log 3} = 1.2618\ldots$$

and this is the self-similarity dimension for the Koch curve.
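
A quick numerical check of this limit (assuming L0 = 3, as in Figure 1.4): the estimated
exponent approaches log 4 / log 3 from above as k grows.

import math

L0 = 3.0
for k in (2, 5, 10, 50, 200):
    N = 4 ** k                  # number of segments at stage k
    delta = L0 / 3 ** k         # resolution needed at stage k
    print(k, math.log(N) / -math.log(delta))
print("limit:", math.log(4) / math.log(3))   # 1.2618...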

Not all sets coming from the 'real world' follow a power law, and those that do may
follow it only for resolutions within a certain range. Self-similarity of real objects usually
breaks down well before the atomic level is reached; we may say, however, that we are dealing
with imperfect fractals. This happens even for virtual objects within a computer, since finite
precision prevents us from carrying out some operations after a finite number of iterations,
but it may not necessarily happen in a continuous-time analog system on which we may make
measurements. More important than the shortcomings of a particular information processing
system is to be aware of what conditions make them show up, so that no invalid conclusions
are drawn from them.

1.3 Chaotic systems

Dynamical Systems are mathematical models whose evolution can be described using
differential equations if they are continuous-time systems, and difference equations if they
are discrete-time systems. The state of the system is some short, but complete enough,
description of the system that enables us to predict how the system will evolve, given that we
know what forces are acting upon it [23].

In finite dimensional systems, the state of the system is encoded in a finite number of
variables, called state variables, whose particular values determine the actual state. These
variables may be grouped under a single structure called the state vector. The state vector can
be visualized as an object existing in a space whose dimension equals the number of state
variables. This space constitutes the state space or phase space of the system. The evolution
of the system itself is a trajectory in state space.

Traditional control theory deals with a particular type of dynamical systems called linear
time-invariant in which the parameters of the system do not change in time and the response
of a combined excitation to the system equals the combination of the responses to individual
excitations. Loosely speaking, this implies that if we know how the system reacts to a
particular driving signal, we know how it reacts to any signal.

A system is autonomous if there is no external force driving it. In such case the system
evolves by virtue of its initial state. If there is an external driving force, the system is forced,
and its evolution depends generally on the initial state and the particular nature of the driving
force. Quite often, we can associate an energy to a state, i.e., a scalar function of the state
whose value is in proportion to the amount of work that may be done by the system. This
value, although not giving us complete information on the state of a system, can be used to
estimate what regions of the state space are immediately accessible to it. Systems in which
this quantity decreases monotonically with time in the absence of a driving force are called
dissipative systems, those for which it does not change are conservative, and those that
actually increase it are unstable since they make certain components of the state vector
increase without bound. That a certain component can increase without limit is only a
problem when it implies that the energy of the system is also unbounded, since that will
mean the system has an infinite capacity to produce work (or to store energy) and no physical
system has such a property.

Consider a mass-spring system as shown in Figure 1.5. For such a system the potential
energy U and the kinetic energy K are given by:

$$U = \tfrac{1}{2}\,\kappa x^2, \qquad K = \tfrac{1}{2}\,m v^2$$

where x is the displacement from the equilibrium position and v is the velocity.

We may recall the Euler-Lagrange equations [24]:

$$L = K - U, \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0$$

so, upon substitution, we come up with:

$$m\,\frac{dv}{dt} + \kappa x = 0$$

If we wish to account for a frictional force proportional to the velocity we would have:

$$m\,\frac{dv}{dt} + \eta v + \kappa x = 0$$

and making the substitutions

$$x_1 = x, \qquad x_2 = \frac{d x_1}{dt} = v$$

we arrive at the following system model in state-space form:

$$\frac{d x_1}{dt} = x_2, \qquad \frac{d x_2}{dt} = -\alpha x_1 - \beta x_2$$

where

$$\alpha = \frac{\kappa}{m}, \qquad \beta = \frac{\eta}{m}$$

For this system, three types of movement are physically possible, and a fourth one,
corresponding to 'negative friction' (friction that aids the movement), is at least
mathematically possible (the four regimes are sketched numerically after this list):

a) β = 0: the system is conservative and never loses energy; its trajectory in
phase space is a closed curve, Figure 1.6.a;

b) 0 < β ≤ 2α^{1/2}: the system is dissipative and underdamped (critically damped if
equality holds), so it approaches the origin in phase space asymptotically, Figure 1.6.b;

c) 2α^{1/2} < β: the system is dissipative and overdamped, so it approaches the origin in
phase space without encircling the origin, Figure 1.6.c;

d) β < 0: the system is unstable; it creates energy (physically impossible for an
isolated mass-spring system like this), so its trajectory in phase space
grows without bounds; it may spiral out if -2α^{1/2} < β < 0, Figure 1.6.d.
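
The following Python sketch of ours integrates the state-space model above with a
fourth-order Runge-Kutta scheme and tracks the energy-like quantity α x1² + x2²
(proportional to 2E/m); α = 1 and the β values are illustrative choices only.

def deriv(x1, x2, alpha, beta):
    """State-space model: dx1/dt = x2, dx2/dt = -alpha*x1 - beta*x2."""
    return x2, -alpha * x1 - beta * x2

def simulate(alpha, beta, x1=1.0, x2=0.0, h=0.01, steps=2000):
    for _ in range(steps):
        k1 = deriv(x1, x2, alpha, beta)
        k2 = deriv(x1 + h / 2 * k1[0], x2 + h / 2 * k1[1], alpha, beta)
        k3 = deriv(x1 + h / 2 * k2[0], x2 + h / 2 * k2[1], alpha, beta)
        k4 = deriv(x1 + h * k3[0], x2 + h * k3[1], alpha, beta)
        x1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        x2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x1, x2

alpha = 1.0
for label, beta in [("a) conservative", 0.0), ("b) underdamped", 0.5),
                    ("c) overdamped", 4.0), ("d) unstable", -0.5)]:
    x1, x2 = simulate(alpha, beta)
    print(label, alpha * x1 ** 2 + x2 ** 2)  # constant, decaying, or growing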

Autonomous linear systems do not present more complicated behavior than this. In
contrast, let us look at the following set of equations:

$$\dot{x}_1 = \frac{d x_1}{dt} = \sigma (x_2 - x_1)$$
$$\dot{x}_2 = \frac{d x_2}{dt} = (\rho - x_3) x_1 - x_2$$
$$\dot{x}_3 = \frac{d x_3}{dt} = x_1 x_2 - \beta x_3$$


These are the equations Edward N. Lorenz proposed in 1962 as a model to predict the
weather. The paper [25] (which was not published until 1963) describes the first strange
attractor known to science: the Lorenz attractor (see Figure 1.7). Trajectories of the
dynamical system are certain to remain in a particular region of the phase space (i.e., within
the attractor), but all periodic solutions are unstable, so with probability 1 a trajectory or orbit
is never a closed curve. Such an orbit is called a chaotic orbit, and a dynamical system that
exhibits such orbits is called a chaotic system. In particular, the Lorenz attractor portrayed in
Figure 1.7, obtained with parameter values σ = 16, ρ = 45.92, and β = 4, is chaotic.
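
Purely as an illustration, the following Python sketch of ours integrates these equations
with the quoted parameter values using a fixed-step fourth-order Runge-Kutta method; two
orbits started a millionth apart separate rapidly, yet both remain confined to the attractor.

SIGMA, RHO, BETA = 16.0, 45.92, 4.0   # the parameter values of Figure 1.7

def f(x):
    x1, x2, x3 = x
    return (SIGMA * (x2 - x1), (RHO - x3) * x1 - x2, x1 * x2 - BETA * x3)

def rk4_step(x, h=0.001):
    k1 = f(x)
    k2 = f(tuple(xi + h / 2 * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + h / 2 * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h / 6 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

xa, xb = (1.0, 1.0, 1.0), (1.0, 1.0, 1.000001)
for step in range(20001):
    if step % 5000 == 0:
        dist = sum((a - b) ** 2 for a, b in zip(xa, xb)) ** 0.5
        print(step, dist)          # local divergence of nearby orbits
    xa, xb = rk4_step(xa), rk4_step(xb)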

One of the necessary (but not sufficient) conditions for a system to be chaotic is that it
be nonlinear; if it is linear, as we saw previously, its behavior is rather simple and its
orbits are either periodic, decaying periodic, increasing periodic, or simply increasing or
simply decreasing. Nonlinearity, however, is not enough, and for chaos to be possible the
system must be nonconservative and of order greater than two if it is continuous-time
(discrete-time nonlinear maps of any order can be found that are chaotic) [26].

Although we will refine what we mean by chaotic system in later chapters, for the
moment it will be enough to think of them as systems whose orbits are non-periodic, locally
diverging (two trajectories that are close at one moment get separated after a short time), but
limited to a finite region in phase space, any subregion of which is visited by any orbit at
some point in time.

1.4 Interrelationships between cryptography, fractals, and chaotic systems

Fractals and chaotic systems are related in seemingly incidental ways. The strange
attractor of a chaotic system is a subset of the phase space and in many cases can be
associated with a fractal dimension lower than the topological dimension of the phase space.
The actual dimension is a measure of how 'normal' or 'well defined' the evolution of the
chaotic system is within the attractor. This 'normality' is closely related to the entropy of the
chaotic system which, in turn, is indicative of the degree of certainty we might attain when
predicting the evolution of the system. On the other hand, fractals in the form of IFSs can be
considered chaotic systems, since the number of iterations necessary to reveal the attractor is
infinite and therefore all orbits (periodic and nonperiodic) of any point within the set are
present. The fact that this implies chaotic dynamics is proved in [22].

Having a deterministic chaotic system opens up the possibility for spread spectrum
communications because it implies that it is possible to calculate (or by some means
generate) points along an orbit that, for all intents and purposes, look and behave like random
numbers. This also can be taken advantage of in secure communications where we may use
these pseudo-random numbers to mask information we may wish to transmit from Alice to
Bob. If only Alice and Bob know the particular chaotic system involved (and this becomes
the key), then only they can decode each other's ciphertext.

Modulation and communication schemes based on chaos fall, to date, into one of five
categories [27]: chaotic masking, inverse systems, Predictive Poincaré Control modulation
(PPC), Chaos Shift Keying (CSK), and Differential CSK (DCSK). The possibility of using
chaotic systems in communications arises as a result of having means for synchronizing and
controlling such systems [28][29][30]. Through synchronization, we may have two or more
systems that would normally be diverging follow almost identical orbits in state space,
which enables the receiver to perform an inverse operation on the data (encrypted) signal so
as to decipher the message (plaintext) embedded in it.

In the case of chaotic masking, we add a chaotic component to a comparatively weak
signal, so the whole appearance of the signal is that of a completely random (or at least
chaotic) one. If the receiver can synchronize with the original system by use of this masked
signal, then it can recreate the original, signal-less, masking component, so it can subtract it
from the incoming data stream and recreate the original message.
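
A toy discrete-time version of this idea (our illustration; the analog schemes in [27] recover
the mask through synchronization rather than a shared seed): a logistic map, whose parameter
and initial condition play the role of the key, supplies the masking component.

import math

def mask_sequence(n, x0=0.31, r=3.99):
    """Chaotic mask from the logistic map x <- r*x*(1 - x); (x0, r) is the key."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

n = 200
message = [0.01 * math.sin(2 * math.pi * k / 50) for k in range(n)]  # weak signal
transmitted = [m + c for m, c in zip(message, mask_sequence(n))]     # looks chaotic
# A receiver holding the same key regenerates the mask and subtracts it.
recovered = [t - c for t, c in zip(transmitted, mask_sequence(n))]
print(max(abs(a - b) for a, b in zip(message, recovered)))           # ~0.0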

When using chaotic shift keying, the signal is used to modulate parameters of the
transmitting chaotic system and some sort of estimator is used in the receiver end to detect
the changes in those parameters. Detection of a binary stream can often be done without
performing any actual estimation, but by noticing when the receiver system cannot synchronize
with the transmitter, indicating an actual change in parameters large enough to prevent
synchronization from taking place. The Lorenz attractor, for example, has been proposed as a
system in which both types of modulation can be accomplished [31].

Using inverse systems involves designing a couple of systems, the transmitter and the
receiver, in such a way that when the transmitter is driven by the information signal it
generates a chaotic data stream which, when driving the receiver, causes a suitable output to
replicate the original information signal.

Finally, in predictive Poincaré control modulation, the chaotic system is forced to follow
a certain path according to the values of the information signal. The receiver decodes the
signal by identifying those paths, or more exactly, the crossings of those trajectories with
suitably selected Poincaré surfaces (a generalization of the concept of projection onto a
surface).

There is another kind of modulation suggested by Abarbanel et al. [32], in which the
density of periodic orbits is utilized to encode information. In phase space modulation, a
particular region of phase space encodes information in the following way: any (unstable)
periodic orbit is bound to go by this particular region because of the characteristics of a
chaotic system and when it does, it will exhibit (in some suitably selected output) a particular
sequence (points along the orbit). We encode information by choosing to transmit the whole
sequence or just part of it. For example, the absence of one of the points in the sequence may
signify a 0 was transmitted and the presence of the whole sequence that a 1 was sent.

Other schemes resort to coding messages in basins of attraction of chaotic systems and
selecting as the ciphertext any initial point whose orbit would be attracted by this particular
message [33]. Thus there is a many-to-one correspondence between ciphertexts and
plaintexts, and although this may help hide information, it also results in the need for a
larger channel bandwidth.

1.5 General outline

In this chapter we have highlighted the problem we are concerned with, namely that of
secure communications, and introduced fractals and chaotic systems as possible means to
solve it. In Chapter 2, we will focus our attention on the notion of complexity, to understand
what we mean by it and which means are available to quantify it. Our interest is in
developing a tool that will help assess how complicated a structure is and how well a
particular encoding scheme is hiding information (by transforming information in such a way
that it bears little resemblance to its original form).

Later, in Chapter 3, we will focus on those characteristics of fractals and dynamical
systems that can be exploited for the benefit of secure communications. After these
foundations are laid down, we propose, in Chapter 4, several schemes that involve trajectory
encoding of information and the use of fractals and chaotic systems to come up with
permutation matrices that will scramble the statistics of the original plaintext in order to hide
its true meaning. One of the aims is to do so without increasing the bandwidth that would be
necessary if the information was to be transmitted without encoding.

Also proposed are schemes for secret sharing in which the key is distributed among
different users in such a way that only when all parties agree on cooperating are they able to
decipher the message.

Examples of the operation of each scheme are given and analyzed under the light of the
complexity tools developed in Chapter 2, and fractal and dynamical systems results of
Chapter 3. Finally, conclusions are drawn in Chapter 5 and further areas of research are also
pointed out.

Chapter 2

Complexity

The notion of complexity is important in computer science and in communication theory.
In the former, it addresses the resource requirements of an algorithm designed to solve a
particular problem; in the latter, it attempts to describe the degree of intricacy inherent to an
object. In this chapter we will introduce a working definition of complexity that will allow us
to compare sequences and decide on which one exhibits a greater apparent complexity (that
we will simply call complexity). The aim is to generate a tool to help us assess the
performance of the cryptographic schemes to be introduced in Chapter 4.

Also, language considerations will be addressed, so as to familiarize ourselves with the
structure of texts written in English and later better understand how a particular encoding
scheme obscures this structure.

2.1 Problem classes

Problems are classified according to the type of the algorithms available to solve them.
Algorithms are described as programs for some particular computer model, such as a Turing
machine, and the problem itself is presented to the computer under a convenient or
reasonable encoding scheme [34]. An algorithm can be classified according to time
complexity, i.e., the time it takes to solve the problem, or according to space complexity, i.e.,
the amount of memory it uses when solving it. Sometimes a special distinction is made
concerning how many arithmetic operations a run of the program involves, referred to as
computational complexity. However, this can be regarded as a particular case of time
complexity.

Time complexity is perhaps the most important measure of complexity of an algorithm. It
is a function of the length n of the input to the program. If the time t required to solve a
problem is upper-bounded by a polynomial function of the input length, then the problem is
said to belong to the class P of problems, i.e., solvable in polynomial time by a deterministic
Turing machine. Solving P problems can be accomplished efficiently. On the other hand,
there are some problems for which no polynomial time algorithm to solve them is known, but
if we were given a solution there is a polynomial time algorithm that could check its validity.

These problems are said to belong to the class NP, i.e., problems that are solvable by means
of a nondeterministic Turing machine. Such a machine was proposed by Turing in his 1938
doctoral thesis at Princeton University [35]. It is an ordinary Turing machine enhanced with a
"guessing" module (Turing called it an oracle, and the machine an O-machine). The guessing
module provides the answer and it is the task of the deterministic part to check that this
solution indeed satisfies the problem. The class P is a subset of the class NP, but it is not yet
known if this is a proper relationship or if P = NP, although it seems unlikely that this would
be the case.

In cryptography, the class NP of problems is particularly important because several
cryptographic methods depend on the fact that problems in this class are very difficult to
solve with deterministic machines (we have no access to a nondeterministic machine at this
moment, yet the development of practical quantum computers may change this). Knapsack
cryptosystems fall into this category and solving the knapsack problem is NP-hard [4].

2.2 Complexity of data sequences

Complexity also enters the picture when referring to the structure of an object [63]. The
particular objects we are interested in are data sequences, and their complexity is associated
with the amount of information they carry or the difficulty of their description.

We need a measure for an object's complexity to assess whether a given transformation
acting upon it renders a simpler or a more complex one. For example, when encoding a text
we would like to know if the resulting ciphertext is indeed more difficult to analyze than the
original. If not, the encoding scheme is of no value and we would discard it. If the
ciphertext is more difficult to analyze then we may wish to compare it against the complexity
obtained by using other schemes and base our choice of encoding on this measure.

Unfortunately, there seems to be no unique way in which to describe complexity [36],
although some definitions seem to capture important aspects of the basic idea. From classic
information theory's point of view [37][38], we may equate complexity with information
content: an object has greater complexity the larger its entropy is. This makes sense since
sequences in which few messages appear with high probabilities are intuitively easier to
predict (are less complex in some sense) and give rise to low entropies. This measure of
complexity, however, is more related to the difficulty in communicating a data sequence than
to the intrinsic complexity of it.

Another, perhaps more appealing, measure of complexity is the Turing-Kolmogorov-
Chaitin complexity, which relates the complexity of a sequence to the size of the smallest
program capable of generating it. A random sequence would be one whose description
(program) requires at least as many bits as the sequence itself. However, the lack of an
algorithm to calculate this complexity and its dependence on a particular computing model
restrict its practical applications [39]. The latter may not constitute a transcendental
limitation, since complexities calculated using different models may differ only by an additive
constant [40]. However, we still face the problem of arriving at a minimum program, and an
exhaustive search for such is usually a prohibitive approach. Formally speaking, classical
information theory and algorithmic information theory are identical [41].

The linear complexity of a sequence may be more practical, at least in the sense that there
exists an efficient algorithm to compute it [42] and it is a monotone non-decreasing function
of the sequence length. A sequence’s linear complexity is the length of the shortest Linear
Feedback Shift-Register (LFSR) that can generate the sequence [43]. A drawback of using the
linear complexity as an absolute measure of complexity is that sequences that are time-mirror
images of each other do not possess the same linear complexity yet, intuitively, they should. A
mirror image of a text may seem more complicated than the actual text but meaning, which is
a semantic property, should not be confused with complexity, which is a structural attribute.

So, no single definition seems to completely capture what we mean by complexity.
Things get even worse when we realize that whatever definition we come up with relies on
a priori assumptions about the nature of the object we are dealing with. Take for example a
640x400 pixel computer image representing a black circle over a white background: viewed
as a two-dimensional picture we need no more than the radius of the circle and its center to
describe the image, whereas if we look at the sequential data stream that actually drives the
CRT it would look like a long sequence of zeros and ones devoid of any pattern, whose
description and interpretation would not be quite clear. From this we may conclude that an
information preserving projection of an object should end up producing a more complex
object just because of the loss in dimensionality (things that are easy to describe in n
dimensions require a more complex description in n-1 dimensions). However, for
cryptographic purposes this kind of increased complexity is of no value unless the projection
mechanism itself is one among many possible and the selection of the particular scheme is
determined by extra information (a key). In any case, trying to compare the complexity of
objects of different dimensionality, or to assess the value of a transformation on these objects,
may be unjustified, equivocal, and meaningless. We will refrain from doing so and will
direct our interest to comparing objects considered to coexist in the same dimension and to
assessing the usefulness of transformations applied to these objects which render objects of
equal dimensionality.

Although any object made up of finitely many constituent parts can be described in many
different dimensions, there is usually a natural dimension associated with it and we may
expect that a suitably defined complexity measure be minimal in this dimension. This natural
dimension would not be difficult to identify if the source is known (for cryptographic
purposes the ciphertext should reveal as little information as possible about this dimension):
images are two-dimensional, spatial objects are three-dimensional, and normal text is
basically sequential and, therefore, one-dimensional; some objects may even be
considered to have a fractal dimension.

Our primary concern will be one-dimensional objects or sequences. We will operate on
a source object, called the text, and produce a corresponding encrypted version of it, which
we will call the ciphertext following standard practice. The transformation that renders the
ciphertext from the text is called the encoding scheme and the one that recovers the original
text from the ciphertext is called the decoding scheme. A piece of information, called the key,
controls how these schemes operate and it is not required that the key used in encoding be the
same one used in decoding.

Clearly a working definition of complexity is necessary to evaluate the performance of an
encoding scheme. We propose the following:

Definition 2.1. A finite sequence S of length N is an ordered collection of N atoms or
letters s0 s1 ... sN-1 that belong to a field Zq containing q elements with values running from 0
to q-1.

Definition 2.2. A symbol α is any sequence we may wish to regard as a single unit.

Definition 2.3. A standard ordering of symbols is the indexed set {αi}k consisting of all
possible symbols of length k ordered according to their numerical value when interpreted as
a number in base q.

Definition 2.4. A probabilistic profile Pki of order k of a sequence S is a discrete probability
distribution in S of the symbols of a standard ordering of length k. Pki is the probability of αi,
of length k, appearing in S.

Definition 2.5. The kth complexity component Hk (or k-complexity or k-entropy) is:

$$H_k = -\sum_i P_{ki} \log_2(P_{ki})$$

where the sum is taken over all sequences of length k ≥ 1.

Definition 2.6. The total complexity C of a sequence of length N is:

$$C = \sum_{k=1}^{N} H_k$$

The motivation behind these definitions is that complexity, so defined, would be
insensitive to time reversals (a sequence and its time-reversed mirror image would have the
same complexity) and that a sequence is examined at all possible scales, each one adding to
the measure of complexity according to how "disordered" the sequence appears at that scale.
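
A direct Python rendering of Definitions 2.4 through 2.6 follows; estimating the
probabilistic profile from the N - k + 1 overlapping length-k symbols of the sequence is our
reading, since the text does not fix the estimator.

import math
from collections import Counter

def k_complexity(seq, k):
    """H_k = -sum_i P_ki * log2(P_ki), with P_ki estimated by counting
    the length-k symbols occurring in seq (Definitions 2.4 and 2.5)."""
    symbols = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    counts, total = Counter(symbols), len(symbols)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def total_complexity(seq):
    """C = sum of H_k for k = 1 .. N (Definition 2.6)."""
    return sum(k_complexity(seq, k) for k in range(1, len(seq) + 1))

ordered = [0, 1, 0, 1, 0, 1, 0, 1]
scrambled = [0, 1, 1, 0, 1, 0, 0, 1]
print(total_complexity(ordered), total_complexity(scrambled))
# Time reversal leaves the measure unchanged, as intended:
assert math.isclose(total_complexity(scrambled),
                    total_complexity(scrambled[::-1]))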

2.3 Upper bounds on sequence complexity

Definition 2.5 is essentially that of the entropy of a system that generates messages or
symbols of length k. This entropy reaches its maximum when all messages are equally likely,
or when the probabilistic profile for these messages is a constant. Therefore,
$$P_{ki} = \frac{1}{q^k}$$

and

$$H_k \le \log_2(N_k)$$
where Nk is the number of equally-likely messages, or symbols, of length k present in the
string. For infinite strings, the number of messages of length k taken out of an alphabet with
q elements is:

$$N_k = q^k$$

So the upper bound on the k-complexity of an infinite string increases linearly with k:

$$H_k \le k \log_2(q)$$
The total complexity of an infinite, totally random, sequence approaches infinity at a
quadratic rate:

$$\lim_{N \to \infty} C = \lim_{N \to \infty} \sum_{k=1}^{N} k \log_2(q) = \lim_{N \to \infty} \frac{N(N+1)}{2} \log_2(q)$$

For complex signals of finite length the scenario is different because not all symbols of
length k are realizable. We still get a maximum when all realizable symbols are equally
probable, so (2.4) still gives us an upper bound on $H_k$, but $N_k < q^k$ if k is big enough.

If N is the length in atoms of a sequence S then:

$$N_k \le q^k$$

and since we may only form N/k non-overlapping subsequences of length k out of S then,

$$N_k \le \frac{N}{k}$$

so

$$N_k = \min\left( q^k,\; \frac{N}{k} \right)$$

Let kT be the value of k that satisfies:

$$k_T\, q^{k_T} = N$$

all $H_k$ complexity components with $k < k_T$ are bounded by (2.6), whereas those with $k > k_T$ are
bounded by $\log_2(N/k)$. So the total possible complexity of a sequence of length N is bounded by:

$$C \le \sum_{k=1}^{k_T} k \log_2(q) + \sum_{k=k_T+1}^{N} \log_2\left(\frac{N}{k}\right)$$
And this can be rewritten in a more convenient form as:

$$C \le \frac{k_T (k_T + 1)}{2} \log_2(q) + (N - k_T) \log_2(N) - \log_2\left(\frac{N!}{k_T!}\right)$$

Let us define $C_m$ to be the maximum possible complexity for a sequence of length N; then,
according to (2.14):

$$C_m = \frac{k_T (k_T + 1)}{2} \log_2(q) + (N - k_T) \log_2(N) - \log_2\left(\frac{N!}{k_T!}\right)$$
we also have:

$$k_T \le \log_q(N)$$
and

$$C_m \approx \frac{\log_q(N) \left( \log_q(N) + 1 \right)}{2} \log_2(q) + \left( N - \log_q(N) \right) \log_2(N) - \log_2\left( \frac{N!}{(\log_q(N))!} \right)$$

This expression is $\Theta(N \ln(N))$, meaning that constants $\alpha$ and $\beta$ can be found such that:

$$\alpha N \ln(N) \le C_m \le \beta N \ln(N)$$

for large enough N.
An interesting fact that can be inferred from equations (2.17) and (2.18) is that for large
enough N the role of q diminishes. This implies that most of the maximum possible
complexity of a finite sequence is due to its length and not the size of its alphabet.

Maximum complexity curves are shown in Figure 2.1 for values of q equal to 2, 4, 8, 16, 32
and 64. The sequence size ranges from 1 up to 1000 elements.
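As a quick numeric check of these bounds, the sketch below (our own illustrative code, not the program that produced Figure 2.1) solves $k_T q^{k_T} = N$ for the largest integer $k_T$ and evaluates the bound (2.15), using log-gamma to avoid computing huge factorials.

```python
from math import lgamma, log, log2

def max_complexity(N, q):
    # Largest integer k_T with k_T * q**k_T <= N (cf. the defining relation).
    kT = 1
    while (kT + 1) * q ** (kT + 1) <= N:
        kT += 1
    log2_fact_ratio = (lgamma(N + 1) - lgamma(kT + 1)) / log(2)  # log2(N!/kT!)
    return kT * (kT + 1) / 2 * log2(q) + (N - kT) * log2(N) - log2_fact_ratio

for q in (2, 4, 8, 16, 32, 64):
    print(q, round(max_complexity(1000, q), 1))   # grows only slowly with q
```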

2.4 Complexity at different scales

Suppose we look at the sequence at a different scale and we make up a new set of atoms
by grouping r original atoms together. In this case the number of elements in our alphabet
would increase and the length of a given sequence, in atoms, would decrease according to:

$$q' = q^r$$
and

$$N' = \frac{N}{r}$$
and

kT ′ q T = N ′ _ kT′ ≤ kT
k ′

So all terms and factors of (2.15) have been reduced and:

$$C_m' < C_m \quad \text{if } r > 1$$
Therefore, the maximum possible complexity of a sequence decreases if we look at it as
composed of larger "blocks". The opposite also holds true: if we look at sequences at a finer
scale, their maximum possible complexity increases. All this is in accordance with our
experience: objects from a distance (a coarser scale) seem simpler than when viewed at close
range, where their details are revealed. In computers and digital processing, however, the
finest scale is that of q = 2, where our atoms equal the bits we use to encode
information, and this sets an upper limit on the maximum possible complexity attainable by
any binary encoded sequence. However, as we have seen before, complexity is more
sensitive to sequence length than it is to alphabet size.
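A small sketch of this effect, under the same assumptions as the previous bound: regrouping r = 8 bits into byte-sized atoms (q' = 256, N' = N/8) lowers the maximum possible complexity.

```python
from math import lgamma, log, log2

def max_complexity(N, q):
    # Same bound as before: C_m with k_T solving k_T * q**k_T = N.
    kT = 1
    while (kT + 1) * q ** (kT + 1) <= N:
        kT += 1
    return (kT * (kT + 1) / 2 * log2(q) + (N - kT) * log2(N)
            - (lgamma(N + 1) - lgamma(kT + 1)) / log(2))

N, q, r = 8000, 2, 8                   # 8000 bits regrouped into 1000 bytes
print(max_complexity(N, q))            # fine scale: q = 2
print(max_complexity(N // r, q ** r))  # coarse scale: q' = 256, smaller bound
```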

2.5 Information distance

A measure of the distance between sequences is needed, especially when comparing
sequences that have undergone a transformation (possibly encoding). Two sequences may
have the same total complexity, but that does not imply that they are alike; at most, this
property implies that they are equally difficult to analyze or appear equally random.

As is the case for complexity, there are several proposed ways of calculating the distance
between two sequences [44][45], some based on the classical idea of mutual information and
others on the notion of algorithmic complexity: the length of the shortest program that can
turn one sequence into the other [46]. In every case the purpose is to measure the elusive
property of likeness and use it in practical applications such as pattern recognition,
thermodynamics of computation, communication systems, and cryptography.

Given our definition of complexity it seems natural to define the information distance in
terms of our complexity components:

Definition 2.7. The complexity vector V of a sequence of length N is a vector, also of length N,
whose kth component is $H_k$.

Definition 2.8. The information distance or complexity distance d between two sequences $s_1$
and $s_2$ with corresponding complexity vectors $V_1$ and $V_2$ is the 1-norm of the difference of
these vectors:

$$d(s_1, s_2) = \sum_{k=1}^{N} | V_1(k) - V_2(k) |$$

Definition 2.9. The k-complexity difference or k-complexity distance is defined to be:

$$d_k(s_1, s_2) = | V_1(k) - V_2(k) |$$

Cryptographically, these definitions are appealing because they are in the same scale as
the total complexity; indeed, the distance between a perfectly constant (and therefore
predictable) sequence and a more complex sequence is just the complexity of the latter. They
are also appealing because simple symbol substitution renders a new sequence that is no more
difficult to analyze than the original text: from the point of view of this measure, both
ciphertext and text are equivalent and their information distance is zero. Caesar's substitution
is an example of the kind of encoding that renders ciphertexts that are as easy to analyze as
the original text itself.

Equation (2.23) states that two sequences generate a null distance when they are equally
complex at every level. Also, the time necessary to calculate this distance is linear in N.

Figures 2.2 - 2.5 show the magnitudes of the components of six maximum complexity
vectors for sequences composed of 100, 1000, 10000, and 100000 atoms, respectively. The
alphabet size for the vectors in each graph runs from q = 2 to q = 64.

Notice how all vectors' components coincide after a certain value of k is reached. This
value is equal to $k_T$ for the sequence with smallest q. Since the total complexity of the
vectors is basically the area under the components, we can see, once again, that the behavior
up to $k_T$ is determined by the value of q, but for most of the remaining components it is
independent of q and solely dependent on N; so we expect the total maximum sequence
complexity to be mostly dependent on sequence length and not on alphabet size. In addition,
we would expect the difference of two sequences of length N that are maximal in complexity
to be due only to their first $k_T$ components and not to their sheer size.

If we fix the alphabet size for a maximal complexity vector, $H_k$ will increase
monotonically for all $k \le k_T$. From then on, $H_k$ will decrease monotonically because the
number of different symbols of length k that can be formed diminishes rapidly. The fact that
$k_T$ is proportional to N implies that the peak of $H_k$ shifts towards larger values of k
as the sequence size becomes larger. See Figure 2.6.

2.6 Language-related considerations

Now we turn our attention to the effect of a language's structure on the entropy (and, for
us, complexity) of sequences generated within this language. The problem was originally
studied by Shannon [47], and our interest in it stems from our need to understand the
complexity profiles of typical language sequences, to be able to assess how much scrambling
a particular encoding scheme has done on the original text.

Shannon defines a quantity $F_k$ he calls the k-gram entropy as:

$$F_k = -\sum_{i,j} p(b_i, j) \log_2\left( p(j / b_i) \right)$$

where $p(b_i, j)$ is the probability of a sequence that starts with a subsequence $b_i$, consisting of
k-1 atoms, and is followed by the letter j; $p(j/b_i)$ is the probability of j appearing after the
block $b_i$. Now, this can be rewritten as:

$$F_k = -\sum_{i,j} p(b_i, j) \log_2\left( p(b_i, j) \right) - \left( -\sum_i p(b_i) \log_2\left( p(b_i) \right) \right)$$

The first term in this equation is the entropy of sequences consisting of k atoms, and the
second is that of sequences consisting of k-1 atoms. So (2.26) is equivalent to:

$$F_k = H_k - H_{k-1}$$

We will define

$$H_0 \equiv 0, \qquad F_0 \equiv \log_2(q)$$

so that (2.27) will be valid for k ≥ 1.
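In code, the k-gram entropies are just successive differences of the complexity components; a minimal sketch follows (the H values below are illustrative placeholders, not measurements from our test texts):

```python
from math import log2

q = 27                           # e.g., letters plus the space character
H = [0.0, 4.1, 7.3, 9.2]         # H_0 = 0 by definition, then H_1, H_2, H_3
F = [log2(q)] + [H[k] - H[k - 1] for k in range(1, len(H))]
print(F)                         # F_0 = log2(q), then F_1, F_2, F_3
```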

Shannon defined the entropy $H_L$ of a language as the limit of $F_k$ when k approaches
infinity. $H_L$ is a measure of how many bits of information per letter are necessary, on average,
to communicate a message, given that one knows the underlying statistical structure of the
language. However, from a practical point of view, $H_L$ can only be approximated up to a
certain point because (2.27) is basically the slope of a numeric sequence whose values are
those of the components of the complexity vector associated with a string within the language.
Since, for any string, there is a limit to the maximum complexity it can exhibit and,
furthermore, $H_k$ decreases with increasing k after the $k_T$th component, $F_k$ would be negative
for $k > k_T$. This does not make sense and should be discarded as a byproduct of
analyzing sequences of finite length, not an intrinsic property of the language.

Figures 2.7-9 show the general tendency of the complexity profile for strings 100,
1000, and 10000 letters long extracted from Edwin A. Abbot's Flatland: A Romance in many
Dimensions. This and the other texts used for comparison were obtained in their ASCII
electronic form, available on the Internet through the Project Gutenberg Association at the
Illinois Benedictine College, an effort to publish public domain works on the Internet
initiated by Professor Michael S. Hart [48]. Table 2.1 shows the various texts used and their
abbreviations for referencing in this dissertation.

Table 2.1: Test texts.

Name   Author                 Title                                        Size
TXT1   Edwin A. Abbot         Flatland: A Romance in many Dimensions       218569
TXT2   George Bernard Shaw    Mrs Warren's Profession                      210786
TXT3   Charles Babbage        The Economy of Machinery and Manufactures    270148
TXT4   Edgar Allen Poe        The Fall of the House of Usher;              92312
                              The Cask of Amontillado; The Black Cat

The staircase-like appearance of the complexity profile for TXT1 is due to the fact that
the algorithm that calculates entropies partitions the sequence into ⌊N/k⌋ symbols of size
k instead of exactly N/k symbols of this size. The graphs show the average complexity
components of sequences 100, 1000, and 10000 letters long. The maximum complexity
profiles for alphabets consisting of 16 and 32 elements are shown for comparison. As can be
seen, for values of k ≥ $k_T$ the complexity profile of sequences taken out of TXT1 matches
(except for the 'staircase effect') that of the maximum complexity vectors, and this, as we
know, is a consequence of limited sequence length and not alphabet size. So we would
expect this to be a common trait of 'typical' (we will define what we mean by this more
precisely in the next chapter) sequences coming from any stochastic process.

Moreover, any differences in complexity profiles for sequences generated by the same
source (in our case this might as well mean meaningful sequences of a given natural
language) should be noticeable only in the lower complexity components, for k ≤ kT, and
should not be excessive on average. That this is the case can be seen in Figures 2.10-12,
where the average complexity profiles of sequences of 100, 1000, and 10000 elements long
coming from our four text files TXT1, TXT2, TXT3 and TXT4 are plotted. The graphs are
very close to each other for k ≤ kT, and they coincide for k > kT. This will imply that the
summation in (2.23) does not need to be carried out for more terms than kT.

Although, as mentioned before, to calculate information distances we do not need to take
into account complexity components beyond $k_T$, it may be useful to do so up to a component
$k_c$, just because this number is easy to obtain and is not much larger than $k_T$:

$$k_c = \log_q(n)$$

So we have

$$d(s_1, s_2) = \sum_{k=1}^{k_c} | V_1(k) - V_2(k) |$$

as a more practical formula for calculating information distances between sequences.
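A sketch of this practical distance, assuming the same non-overlapping-block estimate of $H_k$ as before (illustrative code only, with $k_c$ rounded up to an integer): note how a Caesar-style substitution leaves the distance at essentially zero, as argued above.

```python
from collections import Counter
from math import ceil, log, log2

def Hk(seq, k):
    # k-complexity estimated from non-overlapping blocks of length k.
    blocks = [tuple(seq[i:i + k]) for i in range(0, len(seq) - k + 1, k)]
    n = len(blocks)
    return sum(c / n * log2(n / c) for c in Counter(blocks).values())

def distance(s1, s2, q):
    kc = ceil(log(min(len(s1), len(s2)), q))    # k_c = log_q(n), rounded up
    return sum(abs(Hk(s1, k) - Hk(s2, k)) for k in range(1, kc + 1))

text = list(b"the quick brown fox jumps over the lazy dog " * 20)
caesar = [(c + 3) % 256 for c in text]          # simple symbol substitution
print(distance(text, caesar, q=256))            # 0.0: statistically unchanged
```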

Now, we return to the issue of expected complexity in a language. The first 10
components of the average complexity vectors for TXT1, TXT2, TXT3, and TXT4 are given
in Table 2.2. From them, we can calculate $F_k$ using (2.27) and (2.28). In Table 2.3 we compare
these results against Shannon's reported k-gram entropies.

Table 2.2: Average complexity components.

n H1 H2 H3 H4 H5 H6 H7 H8 H9 H10
100 4.02 5.26 5.00 4.60 4.30 4.07 3.89 3.69 3.57 3.31
1000 4.23 7.02 7.80 7.72 7.53 7.33 7.13 6.94 6.79 6.63
10000 4.27 7.54 9.57 10.31 10.49 10.46 10.34 10.21 10.07 9.93

Table 2.3: k-gram entropies.*


n F0 F1 F2 F3 F4 kc
100 4.75 4.02 1.24 - - 1.41
1000 4.75 4.23 2.79 0.78 - 2.12
10000 4.75 4.27 3.27 2.03 0.74 2.82
** 4.76 4.03 3.32 3.1 - -
* Empty entries correspond to negative or unavailable data.
** Transcribed from [47].

From the data presented in these tables we can see that conclusions based upon the
observation of higher-order complexities should be taken cautiously, since there is not
enough experimental information to support them. Below the value of $k_c$ ($k_T$, to be precise),
however, there are enough symbols in a sequence to accurately calculate k-entropies and k-
gram entropies.

For the moment, the maximum complexity profiles of sequences of a specific length have
been determined, the average complexity profiles for sequences written in English have been
plotted and conclusions related to sequence size and entropy meaning have been drawn. In
later chapters, we will turn to the aspect of complexity profiles of transformed sequences to
assess how much a given encoding scheme hides the statistical characteristics of a message.

Chapter 3

Chaotic Systems and Fractals

Finite-dimensional dynamical systems are systems whose evolution can be described by a
finite set of differential equations or a finite set of difference equations. In the former case,
the system is called a continuous-time system, whereas in the latter, a discrete-time system.
Moreover, by a suitable choice of state variables, the differential equations do not need to be
of an order greater than one, and the difference equations do not have to refer to values of
state variables more than one discrete-time step into the past.

Research in the field of chaotic systems started in the late 1800's with Henri Poincaré's
effort to solve the famous n-body problem: to calculate the trajectories of n celestial bodies
whose only interactions are by virtue of their gravitational attraction. Interestingly enough,
the n-body problem has not found a closed-form mathematical solution for n greater than
two, and numerical calculations show that almost identical initial conditions lead to very
different trajectories, that an orbit can be found that passes through any two given points in
space, and that there are infinitely many orbits passing nearby any given point. All these are
the hallmarks of chaos.

Most physical systems are of the continuous-time type, but if we are willing to ignore
some details of their evolution we may arrive at a simpler model for the system that requires
only the sampling of events at specific times, and thus at a discrete-time model.
Mathematically, however, discrete- and continuous-time systems have equal standing and
one is not a subset of the other. Furthermore, to numerically integrate continuous-time
differential equations we must first turn them into discrete-time (difference) equations,
effectively substituting the original continuous-time system by a discrete equivalent a
computer can solve.

Use of a digital computer introduces not only the necessity for time discretization, but
also for parameter discretization, in the sense that all values are kept within the computer's
memory with limited precision. This will set a limit on how long we can follow an orbit of a
dynamical system and still expect the results to be close to the real orbit the dynamical
system we are simulating is following. Loosely speaking, this is the predictability horizon of
the system, i.e., how much time into the future we can meaningfully ask about the system's
evolution.

In this chapter, we will adopt a definition for the characteristics that, if possessed by a
dynamical system, will make it a chaotic dynamical system. We will also present several
examples of chaotic systems that will be used later, in Chapter 4, to encode information.
Also, we will examine the issue of chaotic dynamics on a fractal, the idea of fractal
dimension, and the notion of strings within a language as points belonging to a fractal set of a
specific dimension. The idea behind the latter is that, to 'fit' information directly to a fractal,
the fractal must have a dimension greater than or equal to that of the language.

3.1 Chaotic systems

As mentioned before, continuous-time dynamical systems are modeled by a finite set of
first-order differential equations, and discrete-time systems by a finite set of difference
equations, as follows:

$$\dot{x}_0 = f_0(x_0, \ldots, x_{n-1}, t)$$
$$\dot{x}_1 = f_1(x_0, \ldots, x_{n-1}, t)$$
$$\vdots$$
$$\dot{x}_{n-1} = f_{n-1}(x_0, \ldots, x_{n-1}, t)$$

$$x_0[k+1] = f_0(x_0[k], \ldots, x_{n-1}[k], k)$$
$$x_1[k+1] = f_1(x_0[k], \ldots, x_{n-1}[k], k)$$
$$\vdots$$
$$x_{n-1}[k+1] = f_{n-1}(x_0[k], \ldots, x_{n-1}[k], k)$$
which can be compressed somewhat if we define x to be a vector, the state vector, in an n-
dimensional space called the phase space and f a vector function of the same dimension:

$$\dot{x} = f(x, t)$$
$$x[k+1] = f(x[k], k)$$
If time does not appear explicitly in these equations, the system is autonomous or self-
regulating. In this case the solution for the state vector depends only on the elapsed time and
on the initial conditions, but not on the particular choice for the origin of time.

For continuous-time systems, the function f, also called the velocity field, defines
streamlines in phase space that are tangential to the actual trajectory of the system. In the
discrete case it is f(x[k], k) - x[k] that is actually tangential to the trajectory. In either case,
the knowledge of f is sufficient to determine flows in phase space, as we will see.

Suppose a solution for (3.3) can be found for a given initial state $x_0$,

$$x(t) = \phi(x_0, t)$$

then we can locally analyze the behavior of the system by means of the Jacobian:

$$J(t) = \frac{\partial \phi(x_0, t)}{\partial x_0}$$

or, for discrete systems,

$$J[k] = \frac{\partial f(x[k], k)}{\partial x[k]}$$
In particular, understanding how a collection of nearby points in phase space behaves
along a streamline will be useful. Defining the set of points to be those in a volume element
$\delta\Omega_k$ at time k (or $\delta\Omega_0$ at time $t_0$), we will have [26]:

$$\delta\Omega_t = |J(t)|\, \delta\Omega_0, \qquad \delta\Omega_{k+1} = |J[k]|\, \delta\Omega_k$$

where |J| denotes the determinant of the Jacobian matrix.

The condition J = 1 guarantees that a volume in phase space does not shrink nor expand,
and such systems are termed conservative. Systems for which J < 1 are called dissipative,
and those for which J > 1 are called expansive or unstable. This condition can be checked
directly in a discrete-time system, but it appears as though checking for it in a continuous-
time system will require solving the system equations beforehand. This is not actually so.
Since J satisfies the following equation (for continuous-time systems) [26]:

| Jɺ |= | J | ∆ • f
the condition J = constant is guaranteed if ∇⋅f = 0, and according to (3.6) J(t0) = 1, so
J(t) = 1, for all t > t0. Therefore, it is enough to check the divergence of f in a continuous
system to tell if it is conservative, dissipative, or expansive.

The attractor of a system is the point set in phase space to which all orbits starting within
what is called the basin of attraction tend to asymptotically. For dissipative systems, the
dimension of the attractor has to be less than that of the phase space because (3.8) implies
volume elements along any orbit approach zero asymptotically, so at least one of the
coordinates that produces a volume element is collapsing. The way in which this collapse
occurs, however, and the change in shape of volume elements, actually determine the
reduction of dimensionality in the attractor, and it could be the case that the attractor has a
non-integer or fractal dimension.

Invariant sets of a transformation are those sets of points in phase space on which the
transformation has no effect, i.e., the sets remain unchanged. Strictly speaking, the sets must
also be compact, i.e., there must be a finite collection of open sets that covers them. Fixed
points are invariant sets consisting of a single point. Periodic orbits are invariant sets
consisting of orbits that close on themselves, and quasi-periodic orbits are orbits that never
close on themselves but nevertheless remain confined to a specific region in phase space in
such a manner that any finite, arbitrarily small (but non-zero) volume element within this
region is eventually visited by the orbit. Attractors correspond to any invariant set that has a
basin of attraction that is a proper superset of the invariant set itself. Repellors are invariant
sets for which the only (local) basin of attraction is the invariant set itself. And there is also
the case of invariant sets that are neutral, i.e., attracting for points in certain regions in phase
space and repelling for points in other regions.

It should be clear that given the nature of invariant sets, in physical experiments or in
computer simulations, if any invariant set is detected it must be an attractor, for if it is not,
any noise or round-off error will drive the system far away from the invariant set. This
statement, however, should be taken with certain caution: computer arithmetic may play
tricks due to finite precision and inevitable feedback loops in the algorithm that runs the
simulation. For example, a computer simulation may show what appears to be a periodic
orbit of a dynamical system that is not supposed to have any, the reasons for this to occur
being not unlike the kind of errors digital hardware introduces in infinite impulse response
filters [49], which is not surprising because digital filters are discrete-time dynamical
systems. A possible solution would be to consider the computer itself, running the particular
simulation we are interested in, as a new dynamical system to analyze, but this is a far more
complicated problem than that of analyzing the original system, especially if floating-point
arithmetic is being used. Another approach is to use finite (but unlimited) precision, where
we increase the number of bits necessary to store a number as needed so that no round-off or
truncation errors ever occur. However, this eventually requires an impossible amount of
memory and time. Finally, we may rely on the computer for most results and when one is of
particular importance, change the precision to detect if it is indeed a property of the
dynamical system under consideration or just the result of imperfect arithmetic.

If we define H(X) to be the space of all compact subsets of the space X to which the state
vector x belongs and A to be a subset of H(X), then the study of a particular dynamical system
is the study of all the points in H(X) that are fixed points of the transformations induced by
the dynamical system. So, if we have

$$g(A) = \{\, y \mid y = g(x) \ \forall\, x \in A \,\}$$

then we must solve

$$A = f(A), \qquad A \in H(X)$$

for a discrete-time system and

$$A = \phi(A)$$

for a continuous-time system.

The spaces we will deal with are metric spaces, i.e., there exists a function d, called a
metric, that gives a measure of the distance between two elements in the space.

Definition 3.1 A metric d(x,y) ≥ 0 is a function defined over a set X, such that if x, y, and z
belong to X [23]:

(i) d(x, y) = 0 ⟺ x = y
(ii) d(x, y) = d(y, x)
(iii) d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality)

To typify a dynamical system as chaotic, we will need to define what a dense set is, what
transitivity implies, and what sensitivity to initial conditions is. We will adopt these
definitions from [18].

Definition 3.2 A set Y ⊂ X is dense in X if, for any point x in X and any ε > 0, there is a
point y in Y such that d(x,y) < ε.

Definition 3.3 A dynamical system is transitive if for any pair of points x and y and any ε > 0
there is a third point z within ε of x whose orbit comes within ε of y.

Definition 3.4 A dynamical system depends sensitively on initial conditions if there is a β >
0 such that for any x and any ε > 0 there is a y within ε of x such that eventually the orbit of x
is β apart from that of y.

Connected to the idea of sensitivity to initial conditions is that of Liapunov exponents,
which are an average measure of how fast, locally, nearby orbits separate:

$$|\delta x| \approx e^{\sigma t} |\delta x_0|$$

Positive Liapunov exponents imply divergence of nearby orbits, negative ones imply
convergence to an attracting orbit, and zero exponents imply constant separation.
Furthermore, if σ is the largest Liapunov exponent of a system, and L is a measure of the size
of the attractor of a dynamical system, then the prediction horizon $t_H$, i.e., the amount of time
into the future we can extrapolate or predict the behavior of the dynamical system, satisfies:

$$t_H \approx \frac{1}{\sigma} \ln\left( \frac{L}{|\delta x_0|} \right)$$

the quantity $\delta x_0$ being our uncertainty in the initial conditions.

Definition 3.5 A dynamical system is chaotic if:

- periodic orbits of the dynamical system are dense;
- the dynamical system is transitive;
- the dynamical system depends sensitively on initial conditions.

As a result, we may expect the dynamical system to wander almost everywhere within its
attractor, since transitivity and denseness allow any fluctuation in parameters to drive
the system to a new, unstable orbit. Also, sensitivity to initial conditions makes two
different initial points, no matter how close together they are, give rise to orbits that
eventually are very different.

For continuous-time autonomous systems to exhibit chaos, their dimensionality must
exceed two. In two dimensions, the only kinds of invariant sets are equilibrium points and
periodic orbits [19]. If the system has only equilibrium points, any orbit will approach them
or escape from them asymptotically, depending on whether they are attracting, repelling, or
neutral. If it has a periodic orbit, this will divide phase space into two parts, the interior of the
region enclosed by the periodic orbit containing an equilibrium point. So a nearby orbit will
be either attracted or repelled by the periodic orbit, and finally it will either approach infinity,
stay in the periodic orbit, or home in on the equilibrium point, all depending on the periodic
orbit being repelling, attractive, or neutral. In any case, the kind of behavior an orbit follows is
pretty much determined, and no possibilities for dense invariant sets exist. In higher
dimensions, however, a periodic orbit does not partition phase space in two, so more
complex dynamics can occur.

Contrary to the continuous-time case, chaos can arise in discrete-time systems of any
order. In the specific case of a dynamical system represented by a function f:R→R, we have
the following result [18], originally published in 1975 by T. Y. Li and J. Yorke [50]:

Theorem 3.1 The Period 3 Theorem. Suppose f:R→R is continuous. Suppose also that f
has a periodic point of prime period 3. Then f also has periodic points of all other periods.

If $f^{(n)}$ denotes the n-fold functional composition of f with itself,

$$f^{(n)}(x) = f(f(\ldots f(x) \ldots)),$$

then f has a periodic orbit of prime period n if an x can be found such that n is the smallest
integer for which $x = f^{(n)}(x)$.

The period 3 theorem is a particular case of a much more general result stated by A. N.
Sarkovskii in 1964 [18]. To state Sarkovskii's theorem we must first order the natural
numbers in the following way, known as the Sarkovskii ordering of the natural numbers:

$$3, 5, 7, 9, \ldots \text{ (all odd numbers)}$$
$$2 \cdot 3,\; 2 \cdot 5,\; 2 \cdot 7,\; 2 \cdot 9, \ldots$$
$$2^2 \cdot 3,\; 2^2 \cdot 5,\; 2^2 \cdot 7,\; 2^2 \cdot 9, \ldots$$
$$2^3 \cdot 3,\; 2^3 \cdot 5,\; 2^3 \cdot 7,\; 2^3 \cdot 9, \ldots$$
$$\vdots$$
$$\ldots, 2^n, \ldots, 2^3, 2^2, 2^1, 1.$$

Theorem 3.2 Sarkovskii's Theorem. Suppose f:R→R is continuous. Suppose also that f has
a periodic point of prime period n and that n precedes k in the Sarkovskii ordering. Then f
also has periodic points of prime period k.

Here we can see how having prime period 3 implies all other periods: every number
follows 3 in the Sarkovskii ordering. Moreover, the only requirement on the function f is its
continuity (not even differentiability), so we may construct a function that, by design, has a
particular prime period and we are guaranteed that it will also present periodic orbits of all
orders that follow this particular prime period in the Sarkovskii ordering.

From here on, we will concentrate on discrete-time autonomous dynamical systems of the
form:

$$x[k+1] = f(x[k])$$

and, on occasion, we will use subscripts to denote time indices and superscripts within
parentheses to denote functional composition:

$$x_{k+1} = f(x_k)$$
3.2 The logistic function

One of the functions we will be using is the logistic function $f_\lambda$. Originally, the function
arose as a refinement made by P. F. Verhulst in 1844 to R. T. Malthus' 1798 exponential
growth model [51]. To us, this fact bears only historical interest. Of much more relevance is
that the proposed model is quadratic, and therefore nonlinear, and that it depends on a parameter λ
that can be varied continuously:

$$x_{k+1} = f_\lambda(x_k) = \lambda x_k (1 - x_k)$$
The mapping exhibits a maximum at x = ½, and to keep x in the range [0,1], λ has to be
in the range [0,4]. Also, the two fixed points of (3.19) are:

$$x_- = 0, \qquad x_+ = \frac{\lambda - 1}{\lambda}$$
A fixed point $x^*$ of f is stable, or attractive, if

$$| f'(x^*) | < 1$$

where the priming of a function denotes differentiation with respect to its free variable. So,
for $f_\lambda$ we have

$$f_\lambda'(x) = \lambda (1 - 2x)$$

and

$$f_\lambda'(x_-) = \lambda, \qquad f_\lambda'(x_+) = 2 - \lambda$$

Then, for 0 ≤ λ < 1, $x_-$ is stable and $x_+$ is not, so any orbit starting at $x_0$ in the range [0,1] will
eventually reach $x_- = 0$. For 1 < λ < 3, $x_+$ is stable. For 3 < λ ≤ 4, neither point is stable. The
bifurcation diagram for 0 ≤ λ < 3 is shown in Figure 3.1; at λ = λ₁ = 1, the system undergoes
a Hopf bifurcation, i.e., a stable fixed point becomes unstable, or vice versa, without the
appearance of any new stable cycles.

A similar kind of bifurcation, called a saddle-node (or tangent) bifurcation, occurs at a
critical value λ₀ of the parameter λ if there is an ε > 0 and an open interval I in which there
are no fixed points for λ = λ₀ - ε, a single, neutral, fixed point appears in I for λ =
λ₀, and two new fixed points (one stable and the other unstable) appear for λ = λ₀ + ε.

The complete bifurcation diagram, displaying fixed points and stable orbits, is shown in
Figure 3.2. What happens shortly after λ = λ₂ = 3 is that a 2-cycle appears, i.e., $f_\lambda^{(2)}$ exhibits a
stable fixed point. This is a period-doubling bifurcation, because a stable fixed point of f
becomes unstable and, in turn, is replaced by two stable fixed points of $f^{(2)}$.

At λₙ, n = 1, 2, 3, ..., marked in the diagram, the system experiences a period-doubling
bifurcation. At those parameter values where $f_\lambda^{(2^n)}$ has a stable fixed point, $f_\lambda$ has a $2^n$-cycle.
The values of λₙ and λₙ₊₁ are related. It was Mitchell J. Feigenbaum who first noticed that the
ratio of these parameter values approaches a definite limit:

$$\delta_F = \lim_{n \to \infty} \frac{\lambda_{n+1} - \lambda_n}{\lambda_{n+2} - \lambda_{n+1}} = 4.6692016\ldots$$
Moreover, the limit is universal for all functions that exhibit period doubling, and the
ratios converge to δF geometrically, so pretty soon any period-doubling system behaves
exactly alike, no matter what the original function f was. Another consequence of (3.24) is
that there is a finite value of λ for which all cycles are unstable (repelling), i.e., the system
achieves chaos via period-doubling at a specific parameter value. In the case of the logistic
function this value is λ = 4.

There are other aspects of this kind of dynamical systems that lead to the existence of
universal, invariant, functions that describe the behavior of any period-doubling system in the
limits of high iterates, but, for our purposes, the fundamental properties we have described so
far will suffice.
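The period-doubling route is easy to observe numerically; the following minimal sketch (our own illustrative code, not the program behind the figures in this chapter) iterates $f_\lambda$ past a transient and counts the distinct values the orbit settles on:

```python
def attractor(lam, x0=0.3, transient=1000, keep=64):
    # Iterate the logistic map past a transient, then record the orbit.
    x = x0
    for _ in range(transient):
        x = lam * x * (1 - x)
    pts = set()
    for _ in range(keep):
        x = lam * x * (1 - x)
        pts.add(round(x, 6))
    return pts

for lam in (2.8, 3.2, 3.5, 3.9):
    print(lam, len(attractor(lam)))
# expected: 1 (fixed point), 2 (2-cycle), 4 (4-cycle), ~64 (chaotic band)
```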

3.3 The quadratic map

Until now, we have implicitly assumed that the set on which the function f operates is
either R, the set of real numbers, or Rⁿ, the set of vectors in n dimensions whose components
are real. In general, any vector space X over a field for which a metric d exists will allow
us to define dynamical systems and discuss their invariant sets. Such a pair (X,d) constitutes
a metric space, and in particular we will be using complete metric spaces, for which the set X
contains all its limit points.
Another function we will use is the quadratic map $Q_c$ defined by (3.25), with the
dynamical system (3.26):

$$Q_c(x) = x^2 + c$$
$$x_{k+1} = Q_c(x_k)$$

The set of numbers on which this dynamical system is defined is R, and the metric d can
be taken as

$$d(x_1, x_2) = |x_1 - x_2|$$
As in the case of the logistic map, we show the bifurcation diagram for Qc in Figure 3.3.
This graph should be compared with Figure 3.2. By scaling and rotation operations, they
could be made to overlap. This can be seen in Figures 3.4 and 3.5, where both bifurcation
maps have been plotted after the first period-doubling bifurcation, and the bifurcation map
for Qc has also been mirrored in both horizontal and vertical axes.

Associated to any mapping f, there is a set, called the Julia set J(f), defined to be the
closure of the set of repelling periodic orbits of f [19]. The complement of the Julia set is
called the Fatou set or stable set of f. In addition, f is chaotic on J(f).

The quadratic map $Q_c$ is actually representative of any second-order polynomial
transformation $p_2(y)$:

$$p_2(y) = a_2 y^2 + a_1 y + a_0$$

because we may relate x in (3.25) to y in (3.28) by a linear transformation y = h(x):

$$y = \alpha x + \beta$$

so that the dynamical system (3.26) in terms of the variable y becomes:

$$y_{k+1} = \frac{1}{\alpha} y_k^2 - \frac{2\beta}{\alpha} y_k + \left( c\alpha + \beta + \frac{\beta^2}{\alpha} \right)$$

which has the form of (3.28) if we select

$$\alpha = \frac{1}{a_2}, \qquad \beta = -\frac{a_1}{2 a_2}, \qquad c = a_0 a_2 + \frac{a_1}{2} - \frac{a_1^2}{4}$$

In particular, we can transform the logistic map $f_\lambda(y)$ into its corresponding quadratic map
$Q_c(x)$ by choosing

$$\alpha = -\frac{1}{\lambda}, \qquad \beta = \frac{1}{2}, \qquad c = \frac{\lambda}{2} \left( 1 - \frac{\lambda}{2} \right)$$

so periodic points $x_p$ of $Q_c(x)$ correspond to periodic points $h(x_p)$ of $f_\lambda(y)$. Actually, the
quadratic map is representative of any dynamical system of the form (3.17) where the
function f is related to $Q_c$ via a continuous, invertible function h, such that:

$$f(x) = h( Q_c( h^{-1}(x) ) )$$

Fully developed chaos (FDC) occurs in the logistic map at parameter value λ = 4 and,
according to (3.32), it will occur in the quadratic map at c = -2. That this is the case can
be seen (by the absence of stable periodic orbits) in Figures 3.4 and 3.5.

3.4 The shift map

Now we turn our attention to a new metric space (Σ, $d_S$), where Σ consists of all infinite
sequences $s_0 s_1 s_2 s_3 \ldots$, where each $s_i$ is either 0 or 1, and $d_S$ is given by:

$$d_S(s, t) = \sum_{i=0}^{\infty} \frac{|s_i - t_i|}{2^i}$$

The shift map σ:Σ→Σ is defined to be

$$\sigma(s) = \sigma(s_0 s_1 s_2 s_3 \ldots) = (s_1 s_2 s_3 s_4 \ldots)$$


so σ drops the first element of the sequence it operates on and shifts the whole sequence one
position to the left. This is a continuous map because if two sequences s and t are closer than
$\varepsilon = 1/2^n$, implying that at least their first n elements are the same, their images will have at
least n-1 elements in common, and therefore the distance between them cannot exceed $\delta =
1/2^{n-1}$. So, for every s, t ∈ Σ such that $d_S(s,t) < \varepsilon$, we have $d_S(\sigma(s), \sigma(t)) < \delta$; thus the
conditions for continuity are met.

The shift map is sensitively dependent on initial conditions. This can be seen by
picking two arbitrarily close sequences s and t that share a block of n digits but
differ at least in the (n+1)th digit. In this case we have $d_S(s,t) \le \varepsilon = 1/2^n$, but after following their
orbits through n steps we get $d_S(\sigma^n(s), \sigma^n(t)) \ge \beta = 1/2$. It is fairly easy to demonstrate that there
is a point in Σ whose orbit under σ forms a dense subset of Σ and that transitivity conditions
also apply [18]; in other words: the shift map is chaotic.

Returning to the quadratic map, the fixed points of $Q_c$ are:

$$p_+ = \frac{1}{2}\left( 1 + \sqrt{1 - 4c} \right), \qquad p_- = \frac{1}{2}\left( 1 - \sqrt{1 - 4c} \right)$$
and all important dynamics occur in the interval $I = [-p_+, p_+]$. If c > 1/4 then there are no
real fixed points, and whatever x we start with will eventually reach infinity. For -2 ≤ c ≤ 1/4,
$Q_c$ will map I onto itself, but for c < -2 there is an open interval A that splits I into
two closed intervals $I_0$ and $I_1$, such that if x ∈ A it will escape I in the first iteration of $Q_c$.

Definition 3.6 The itinerary S(x) of a point x (within I) under $Q_c$ is defined to be the
sequence $(s_0 s_1 s_2 s_3 \ldots)$ such that $s_i = 0$ if $Q_c^i(x)$ visits $I_0$, or $s_i = 1$ if $Q_c^i(x)$ visits $I_1$.

If we let $\Lambda = I_0 \cup I_1$, then S(x):Λ→Σ is a homeomorphism for values of c less
than -2, and the shift map σ on Σ is conjugate to the quadratic map $Q_c$ on Λ. That is:

$$S( Q_c(x) ) = \sigma( S(x) )$$

which allows for the commutative diagram shown in Figure 3.6 and implies that these two
spaces are essentially the same.

So, to any point s in Σ we can associate (uniquely) a point x in Λ, and if the orbit of s is
chaotic, then the orbit of x must necessarily be chaotic too. Take, for example,

$$s = (0100011011000001010011100101110111 \ldots)$$
2, 3, 4, . . . bits. The orbit of (3.38) comes arbitrarily close to any point in Σ and thus there is
an x whose orbit comes arbitrarily close to any point in Λ. Moreover, s given by (3.38) is not
the only point whose orbit forms a dense subset of Σ. In fact there are infinitely many such
points. Now, one can carry out calculations of true orbits up to n iterations and with a
precision of p digits only if the original point is specified with at least p+n digits, which
means that part of the limitation in calculating chaotic behavior lies in the precision at
which numbers are stored and arithmetic is done. Another limitation is the time actually
spent in the calculation. For the shift map both resource requirements increase linearly with
orbit length, whereas for the quadratic map they increase exponentially with orbit length,
which is one of the reasons the shift map is easier to study.

3.5 Ad hoc chaotic functions

We are now in a position to define functions that will behave chaotically. To do this we
may, by design, create a continuous function that has a given prime period p; by
Sarkovskii's theorem, it will also possess periods of all orders that follow p in the Sarkovskii
ordering of the natural numbers.

For example, if we require a function f to map the interval I = [0, 1] into itself and have
an unstable period-3 orbit (0.2, 0.5, 0.8), satisfying the following conditions:

(a) I = [0, 1]
(b) f : I → I
(c) $x_0 = 0.2$
(d) $x_1 = 0.5$
(e) $x_2 = 0.8$
(f) $x_1 = f(x_0)$
(g) $x_2 = f(x_1)$
(h) $x_0 = f(x_2)$
(i) 0 = f(0)
(j) 1 = f(1)
(k) $f'(x_0) = 1.5$
(l) $f'(x_1) = -1.5$
(m) $f'(x_2) = 1.5$

Since (3.39) imposes eight conditions on f, we may synthesize it as a seventh-degree
polynomial

$$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5 + a_6 x^6 + a_7 x^7$$

with

$$f'(x) = a_1 + 2 a_2 x + 3 a_3 x^2 + 4 a_4 x^3 + 5 a_5 x^4 + 6 a_6 x^5 + 7 a_7 x^6$$

so the conditions in (3.39) and our assumption of a polynomial form for f lead to the
following matrix equation:

1 0 0 0 0 0 0 0   a0 
 0  
   1 x0 2 3 4 5 6 7  
x0 x0 x0 x0 x0 x0   a 1 
 x1   
   1 x1 7  
1  a2
2 3 4 5 6
x 1 x 1 x 1 x 1 x1 x
 x2   
   1 x2 x2
2
x2
3
x2
4
x2
5 6
x2 x2   a3
7
 x0  =   
  1 1 1 1 1 1 1 1  a4 
 1  
 1.5 0 1 2 x0 3 x02 4 x03 5 x04 6 x05 7 x60   a5 
    
 
- 1.5 0 1 2 x1 3 x 2 4 x3 5 x4 6 x5 7 x6  a6 
   1 1 1 1 1
 
 1.5 0 1 2 6 a
 x 2 3 x 2 4 x2 5 x 2 6 x 2 7 x 2   7 
2 3 4 5

whose solution is:

$$A = [\, a_0 \;\; a_1 \;\; a_2 \;\; a_3 \;\; a_4 \;\; a_5 \;\; a_6 \;\; a_7 \,]^T = [\, 0,\; 7.4778,\; -58.3875,\; 249.6354,\; -483.7500,\; 376.3021,\; -46.8750,\; -43.4028 \,]^T$$

If we define

$$X(x) = [\, 1 \;\; x \;\; x^2 \;\; x^3 \;\; x^4 \;\; x^5 \;\; x^6 \;\; x^7 \,]^T$$

then

$$f(x) = A^T X(x)$$

That this f maps I into I can be seen in Figure 3.7. Alternatively, we could have specified
f as a continuous function defined over subintervals of I in such a way as to meet the
requirements of (3.39). One possible description of f in this manner is given by (3.46) and
shown in Figure 3.8.
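The coefficient vector is straightforward to reproduce numerically; the sketch below (illustrative code, assuming numpy is available) assembles the matrix of (3.42) row by row and solves the system:

```python
import numpy as np

x0, x1, x2 = 0.2, 0.5, 0.8
value_conds = [(0.0, 0.0), (x0, x1), (x1, x2), (x2, x0), (1.0, 1.0)]  # f(p) = v
deriv_conds = [(x0, 1.5), (x1, -1.5), (x2, 1.5)]                      # f'(p) = v

M, b = [], []
for p, v in value_conds:
    M.append([p ** j for j in range(8)])              # row of 1, p, ..., p^7
    b.append(v)
for p, v in deriv_conds:
    M.append([j * p ** (j - 1) if j > 0 else 0.0 for j in range(8)])
    b.append(v)

A = np.linalg.solve(np.array(M), np.array(b))
print(np.round(A, 4))     # should reproduce (3.43) up to rounding
```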

The functions defined by (3.45) and (3.46) meet the conditions in (3.39), and because
they exhibit a period-3 orbit and are continuous, they will also have orbits of any other
prime period. Both functions have three repelling fixed points, which are the points where
the graph of f(x) intersects the line y = x in Figures 3.7 and 3.8. They are repelling because
f'(x) > 1 at those points. Finally, for almost any choice of initial conditions, the orbits are
chaotic in a subset of I. Figure 3.9 shows a typical histogram for an orbit of (3.45); this
particular orbit has $x_0 = 0.4$. Figure 3.10 illustrates sensitivity to initial conditions, since
points that are fairly close eventually become very distant.

The selection of a particular function among the infinite number of functions that satisfy
a finite number of constraints is largely arbitrary, but might be influenced by resource
considerations: the amount of memory necessary to store the description of the function, and
the time it takes to calculate its iterates. From a theoretical point of view what matters is that
functions that exhibit a particular chaotic behavior can be found.

Another approach to chaotic dynamical system design is to start with a series of known
chaotic functions $g_i$ and design a new function f that behaves like appropriately scaled
versions $h_i$ of the $g_i$ in the vicinity of points $x_i$. This may be accomplished by using sifting
functions $w_i$ and defining

$$f(x) = \sum_{i=1}^{n} w_i(x)\, h_i(x)$$

where the fundamental requirement on the $w_i$'s is that they evaluate to one in the vicinity of $x_i$,
are flat enough in this region, and vanish for values of x far from $x_i$:

for all i, j such that 1 ≤ i ≤ n and 1 ≤ j ≤ n:

$$w_i(x_j) = \delta_{ij}$$

$$\left. \frac{d^{\mu}}{dx^{\mu}}\, w_i(x) \right|_{x_j} = 0, \qquad \mu = 1, \ldots, m$$

where $\delta_{ij}$ is Kronecker's delta (equal to unity if i = j, and vanishing otherwise) and m
determines the level of constancy of $w_i$ near $x_i$.

If $q_i(x)$ is an invertible function that maps the attractor $G_i$ of $g_i$ into the desired attractor $H_i$
of $h_i$, then

$$H_i = q_i(G_i)$$

$$h_i(x) = q_i\left( g_i\left( q_i^{-1}(x) \right) \right)$$
Equation (3.48) imposes n(m+1) different conditions on each $w_i$, so that it is
possible to synthesize $w_i$ as a polynomial in x of degree n(m+1)-1:

$$w_i(x) = a_{w0} + a_{w1} x + \ldots + a_{w\,n(m+1)-1}\, x^{n(m+1)-1}$$

Care must be taken so that the size of the Hi attractor fits in the flat region of the
corresponding wi. This can be controlled somewhat by adjusting the transformation qi and/or
the value of m in (3.48).

As an example, consider the case of the quadratic map $Q_c$ with its domain extended over
C, the set of complex numbers, and let c = -0.75 + j0.1. The escape map for this
transformation is a map of the points within a region of the complex plane, colored according
to how fast they escape the invariant set of the transformation. Those points that lie on the
border between points that eventually reach infinity and points that remain close to the
attractor form the Julia set of the transformation. So we have:

$$g(z) = z^2 + (-0.75 + j\, 0.1)$$


The escape map for g(z) is shown in Figure 3.11, and the attractor is seen to be enclosed
by the rectangular region whose corners extend approximately from -1.5 - j1.5 to 1.5 + j1.5.
Let us say that we wish to construct a function f which essentially behaves like g(z) around
the points z₁ = 0, z₂ = 1, and z₃ = 2, with basins of attraction that are a thousand times
smaller than that of g(z). A simple linear transformation $q_i(z)$ transforms the original attractor
into those of the functions $h_i(z)$:

$$q_i(z) = \alpha z + z_i$$

with

$$\alpha = \frac{1}{1000}, \qquad z_1 = 0, \qquad z_2 = 1, \qquad z_3 = 2$$

The functions $h_i$ follow immediately from (3.50). The remaining problem is the selection
of the functions $w_i$, for which we select m = 2 in (3.48). The resulting functions, of eighth
degree, are given below and shown in Figure 3.12.

$$w_1(z) = 1.00 - 24.75 |z|^3 + 67.68 |z|^4 - 77.06 |z|^5 + 44.56 |z|^6 - 12.93 |z|^7 + 1.50 |z|^8$$
$$w_2(z) = 32.00 |z|^3 - 96.00 |z|^4 + 120.00 |z|^5 - 76.00 |z|^6 + 24.00 |z|^7 - 3.00 |z|^8$$
$$w_3(z) = -7.25 |z|^3 + 28.31 |z|^4 - 42.93 |z|^5 + 31.46 |z|^6 - 11.06 |z|^7 + 1.50 |z|^8$$

The resulting functions $h_i(z)$ and the solution f(z) are given by (3.56) and (3.57). The
escape map for f(z) is shown in Figure 3.13. The small spots at the $z_i$ are the attractors of f(z),
which, at the scale of the graph, are hardly noticeable. Figures 3.14-16 are amplifications of
the neighborhoods of the $z_i$, in which we can clearly see how the attractors are indeed scaled-
down copies of the attractor of g(z).

$$h_1(z) = 1000\, z^2 - 7.5 \times 10^{-4} + j\, 10^{-4}$$
$$h_2(z) = 1000\, (z - 1)^2 + 0.99925 + j\, 10^{-4}$$
$$h_3(z) = 1000\, (z - 2)^2 + 1.99925 + j\, 10^{-4}$$

and

$$f(z) = 1000\, z^2 + \left( -35000 |z|^3 + 78750 |z|^4 - 68250 |z|^5 + 26250 |z|^6 - 3750 |z|^7 \right) z + 3017.50 |z|^3 + 17210.62 |z|^4 - 51715.87 |z|^5 + 49736.87 |z|^6 - 20248.12 |z|^7 + 3000 |z|^8 - 7.50 \times 10^{-4} + j\, 10^{-4}$$
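Escape maps like those in Figures 3.11 and 3.13 can be produced with a few lines of code; a crude character-based sketch for g(z) follows (illustrative code; the bailout radius and iteration budget are our own choices):

```python
def escape_time(z, c=-0.75 + 0.1j, bailout=2.0, max_iter=50):
    # Count iterations of z -> z**2 + c before |z| exceeds the bailout.
    for n in range(max_iter):
        if abs(z) > bailout:
            return n
        z = z * z + c
    return max_iter          # never escaped: near or on the invariant set

shades = " .:-=+*#%@"
for i in range(24):                       # rows: Im(z) from 1.6 down to -1.6
    y = 1.6 - i * (3.2 / 23)
    row = ""
    for j in range(60):                   # cols: Re(z) from -1.6 to 1.6
        x = -1.6 + j * (3.2 / 59)
        row += shades[min(escape_time(complex(x, y)) // 6, len(shades) - 1)]
    print(row)
```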

3.6 Fractals

In chapter 1 we introduced fractals as self-similar sets, i.e., sets whose subsets present
scaled-down versions of the properties of the whole. These properties could be geometric
shape, statistical characteristics, or any other defining aspect of the set we might be
interested in. We also introduced the definitions of iterated function systems and of
a self-similarity dimension. In this section we will introduce probabilistic iterated function
systems (PIFS) and we will expand the notion of dimension to include the Hausdorff
dimension and the box-counting dimension. Finally, we will see how a natural language
can be associated with a fractal dimension.

3.6.1 Iterated Function Systems

An IFS is a collection of mappings $\{w_i\}$ defined over a complete metric space (X,d). If the
IFS consists only of contractive mappings, then there is a unique invariant set G:

$$G = \{\, x \in X \mid x \in G \Rightarrow w_i(x) \in G \,\}$$

If we let H(X) stand for the set of all compact subsets of X (not including the empty
subset), then the Hutchinson operator W associated with an IFS operates on points of H(X):

$$W(A) = \bigcup_i w_i(A)$$

The parallel body $A_\delta$ of a set A is the set of all points that lie within a distance δ of some
point in A:

$$A_\delta = \{\, y \mid d(x, y) \le \delta \text{ for some } x \in A \,\}$$


The Hausdorff metric $h_m(A,B)$ between two points of H(X) is:

$$h_m(A, B) = \inf \{\, \delta \mid A \subset B_\delta \wedge B \subset A_\delta \,\}$$
If the functions within an IFS are all contractions with contractivity factors si, then the
Hutchinson operator (3.59) is also a contraction mapping (or simply a contraction) on the
space H(X). Furthermore, W has contractivity equal to max{si}. This way, repetitive
applications of the IFS over an arbitrary initial set will eventually lead to the invariant set G.
Therefore, the Hutchinson operator also satisfies

G = W(G)
A probabilistic iterated function system (PIFS) is a set $\{w_i, p_i\}$ of functions $w_i: X \to X$ and
numbers $0 < p_i \le 1$ such that

$$\sum_i p_i = 1$$

At each iteration of a PIFS, a single function $w_i$ within the PIFS is picked with
probability $p_i$ and applied to the set of points generated in the previous iteration. The set of
all points generated in this fashion eventually converges to G, because the Hutchinson
operator for the PIFS is exactly the same as for a deterministic IFS consisting of only the
functions $\{w_i\}$. What varies, however, is the frequency at which different parts of G are
visited.
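A minimal PIFS sketch (illustrative code): the three affine contractions below generate the Sierpinski triangle, and changing the $p_i$ changes only how often each part of G is visited, not G itself.

```python
import random

# Three contractions of the unit square; their invariant set G is the
# Sierpinski triangle regardless of the probabilities used below.
maps = [lambda x, y: (0.5 * x, 0.5 * y),
        lambda x, y: (0.5 * x + 0.5, 0.5 * y),
        lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5)]
probs = [0.2, 0.3, 0.5]                  # the p_i must sum to one

random.seed(0)
x, y = 0.0, 0.0
points = []
for _ in range(10000):
    w = random.choices(maps, weights=probs)[0]   # pick w_i with probability p_i
    x, y = w(x, y)
    points.append((x, y))
print(points[-3:])            # after a transient, all points lie on G
```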

When the numbers pi are replaced by functions that may depend on the transformations
applied in the last iterations, the number of the iteration, or points which were most recently
generated, we have the case of a recurrent iterated function system (RIFS). Such iterated
function systems are useful in image compression [54].

3.6.2 Hausdorff measure and Hausdorff dimension

A measure µ over a set X is a function that assigns a non-negative number, possibly ∞, to
any subset of X such that the following conditions are met:

(a) $\mu(\emptyset) = 0$
(b) $\mu(A) \le \mu(B)$ if $A \subseteq B$
(c) $\mu\left( \bigcup_i A_i \right) \le \sum_i \mu(A_i)$ for any countable sequence of sets

equality in (3.64.c) occurs only in the case of all sets being disjoint.

The size |A| of a set A is the largest possible distance between two points that belong to
the set:

$$|A| = \sup \{\, d(x, y) \mid x, y \in A \,\}$$

A countable collection $\{A_i\}$ of sets, each of size at most δ, forms a δ-cover of a set B if
$B \subset \bigcup_i A_i$. A function $H_\delta^s$ is defined as:

$$H_\delta^s(B) = \inf \left\{ \sum_i |A_i|^s \;\middle|\; \{A_i\} \text{ is a } \delta\text{-cover of } B \right\}$$
The s-dimensional Hausdorff measure $H^s(F)$ is defined as:

$$H^s(F) = \lim_{\delta \to 0} H_\delta^s(F)$$

$H^s(F)$ usually evaluates either to zero or to infinity, depending on the value of s. The
particular value of s at which this transition occurs is called the Hausdorff-Besicovitch (or
simply Hausdorff) dimension of F [19]:

$$\dim_H(F) = \inf \{\, s \mid H^s(F) = 0 \,\} = \sup \{\, s \mid H^s(F) = \infty \,\}$$

Sets for which the Hausdorff measure itself has a finite value at s = dimH(F) are termed s-
sets.

3.6.3 Box counting dimension

This kind of dimension is much easier to visualize than the Hausdorff dimension. It relies
on the idea that the number of pieces $N_\delta$ of size δ into which we can break down a set A
depends on the scale δ in a characteristic way. The box-counting dimension $\dim_B(A)$ of a set A
is defined as:

$$\dim_B(A) = \lim_{\delta \to 0} \frac{\log N_\delta(A)}{-\log \delta}$$

when such a limit exists.

The name of this dimension comes from one of the ways it can be calculated. Take a set
A that is contained in Rⁿ and set up an n-dimensional grid of resolution δ, so that the elements of
the grid are n-dimensional cubes of volume δⁿ. Then count the number of cubes that have a
non-empty intersection with A. This is $N_\delta$ in (3.69), so by repeating this process at smaller
values of δ we get better approximations of $N_\delta$, and therefore of $\dim_B(A)$.
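A box-counting sketch (illustrative code): points on the Sierpinski triangle are generated by a chaos game, and the occupied-box counts at shrinking δ estimate a dimension near log 3 / log 2 ≈ 1.585.

```python
import random
from math import log

random.seed(0)
pts, x, y = [], 0.0, 0.0
for _ in range(200000):           # chaos game toward three fixed vertices
    vx, vy = random.choice([(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)])
    x, y = (x + vx) / 2, (y + vy) / 2
    pts.append((x, y))

for k in (4, 5, 6, 7):
    delta = 2.0 ** -k             # grid resolution
    boxes = {(int(px / delta), int(py / delta)) for px, py in pts}
    print(k, log(len(boxes)) / (k * log(2)))   # estimates of dim_B
```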

3.7 Fractal dimension of a language

A language in the general sense is any means of communicating ideas and particularly the
way set up and agreed upon by a large conglomerate of people. This communication takes
place serially in written language by concatenating letters into words and words into
sentences as often as necessary to convey a particular idea or set of ideas.

The word language in mathematical terms is restricted to a simpler class of concatenation
of letters: the set L of all possible strings generated by a particular mathematical model forms
a language in the mathematical sense.

True or natural languages may be approximated by mathematical models that take into
account observed constraints present in them. A way of doing this is to produce models that
mimic the statistics exhibited by the natural language. As we will see, by taking into account
more of the statistical behavior of a language we will find that it resides quite naturally in a
smaller fractal dimension.

However, to be able to show this, we must associate all the strings that belong to the
language (all elements of L) with a mathematical object we can quantify. This is done by first
noting that

$$L \subseteq \Gamma^*$$

where Γ is the alphabet, consisting of m letters or atoms from which any string in L can be
formed:

$$\Gamma = \{ \gamma_i \}, \quad 0 \le i \le m-1, \qquad m = |\Gamma|$$
To each letter from Γ present in a string S ∈ L we associate an absolute value:

$$\gamma_i = i$$

and we interpret S itself as the base-m expansion of a number in [0,1) with the $\gamma_i$ as digits:

$$S = s_1 s_2 s_3 \ldots \;\Rightarrow\; 0.s_1 s_2 s_3 \ldots |_{\text{base } m} = \sum_{i=1}^{\infty} s_i m^{-i}$$

This representation is clearly unique. We will require a measure to assess the size of a
particular subset of L, and we will use the probability mass of this subset as such. We will
consider the case in which the probabilities of letters in the language are independent and the
case in which the underlying structure of the language (its grammar) introduces a conditional
element in these probabilities.

3.7.1 Case 1: letters in the language appear with specified probabilities

The analysis in this section follows the same structure as that presented in [19] in
relation to subsets of real numbers in the interval [0,1). Because of the correspondence made
previously, we will speak of texts, strings, and numbers indistinctly. In our case, L becomes:

$$L = \left\{ S : S \in [0,1) \;\wedge\; \lim_{k \to \infty} \frac{n_i(S|_k)}{k} = p_i \right\}$$

where ni(Sk) is the number of times the digit γi appears in the first k digits of S. The
probability of a number starting with a specific ordering of k digits (the probability of a text
starting with a specific string consisting of k letters) would be:

$$P(I_{i_1 i_2 \ldots i_k}) = p_{i_1}\, p_{i_2} \cdots p_{i_k}$$

and the length of this interval is:

$$|I_{i_1 i_2 \ldots i_k}| = m^{-k}$$

Now, for a number in L and large enough k:

$$P(I_{i_1 i_2 \ldots i_k}) = p_0^{\,k p_0}\, p_1^{\,k p_1} \cdots p_{m-1}^{\,k p_{m-1}}$$

so that:

$$\log_2 \left( \frac{P(I_{i_1 i_2 \ldots i_k})}{|I_{i_1 i_2 \ldots i_k}|^s} \right) = k \left( \sum_{i=0}^{m-1} p_i \log_2(p_i) + s \log_2(m) \right)$$

Define

$$\theta = -\frac{1}{\log_2(m)} \sum_{i=0}^{m-1} p_i \log_2(p_i)$$

and in the limit when k tends to infinity, we obtain

$$\lim_{k \to \infty} \frac{P(I_{i_1 i_2 \ldots i_k})}{|I_{i_1 i_2 \ldots i_k}|^s} = \begin{cases} 0 & s < \theta \\ \infty & s > \theta \end{cases}$$

This critical value of θ defines the Hausdorff dimension of the set L:

$$\dim_H L = -\frac{1}{\log_2(m)} \sum_{i=0}^{m-1} p_i \log_2(p_i)$$

If all letters are chosen with equal probability, we have L consisting of all possible
random texts,

$$p_i = m^{-1}$$

and

$$\dim_H L = 1$$
Notice that (13) can be expressed in terms of the 1-gram complexity or letter entropy
$H_1(L)$ of the language L,

$$\dim_H L = \frac{H_1(L)}{\log_2(m)}$$

and since $H_1 \le \log_2(m)$, the Hausdorff dimension for more structured languages is
correspondingly smaller than 1. This also implies that the number of typical texts belonging
to a language is proportional to $\dim_H(L)$. According to [55], the number $n_T$ of typical
sequences of length n, when the entropy is measured in bits instead of Nepers, is

$$n_T \cong e^{\,n \ln(2)\, H_1(L)}$$

so $n_T$ is related to $\dim_H(L)$ by

$$n_T \cong e^{\,n \ln(m)\, \dim_H(L)} = m^{\,n \dim_H(L)}$$

If $\dim_H(L) < 1$, then $n_T \ll m^n$, indicating that of all possible texts that can be composed
using the letters in the alphabet, only a few actually convey meaning. This is advantageous
for cryptographic purposes, because a scheme can be envisioned that maps meaningful texts
into rare or non-typical ones so as to hide the information contained in the original text.
The smaller $\dim_H(L)$, the larger the ratio of rare to typical texts.
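A sketch of this first-order estimate (illustrative code; the sample string is a stand-in, not one of the test texts of Chapter 2):

```python
from collections import Counter
from math import log2

sample = ("it was the best of times it was the worst of times "
          "it was the age of wisdom it was the age of foolishness")
n = len(sample)
counts = Counter(sample)
H1 = sum(c / n * log2(n / c) for c in counts.values())   # letter entropy
m = len(counts)                                          # observed alphabet
print(H1, H1 / log2(m))         # H_1 and the dimension estimate, below 1
```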

3.7.2 Case 2: more restrictive languages

If we wish to approximate natural languages, we have to take into account the fact that not
only do letters appear by themselves in a text with a specific probability, but also pairs of letters
(and triples, and so forth) have a particular probability distribution. Assuming we know the
probabilities of 2-grams, and following an analysis similar to the previous case, we have
equation (7) changing to

$$P(I_{i_1 i_2 \ldots i_k}) = p_{i_1}\, p(i_2 / i_1) \cdots p(i_k / i_{k-1})$$

Let $p_{ij}$ stand for the probability of the event "the digit $\gamma_i$ is followed by the digit $\gamma_j$"; then, by
the definition of conditional probability:

$$p(i_u / i_v) = \frac{p_{uv}}{p_v}$$

Also, the number of times $n_{ij}$ we expect the pair $(\gamma_i, \gamma_j)$ to appear in the first k letters of
our text is:

$$\lim_{k \to \infty} n_{ij} = k\, p_{ij}$$

and the number of times we expect $\gamma_i$ to appear is:

$$\lim_{k \to \infty} n_i = k\, p_i$$

Using (20), (19) becomes:

$$P(I_{i_1 i_2 \ldots i_k}) = p_{i_1}\, \frac{p_{i_1 i_2}}{p_{i_1}} \cdots \frac{p_{i_{k-1} i_k}}{p_{i_{k-1}}}$$

and if k is large enough we can substitute the numerator and denominator of (23) by:

$$p_{i_1 i_2} \cdots p_{i_{k-1} i_k} = \prod_{i,j} p_{ij}^{\,k p_{ij}}$$

$$p_{i_1} \cdots p_{i_{k-1}} = \prod_i p_i^{\,k p_i}$$

$$\lim_{k \to \infty} P(I_{i_1 i_2 \ldots i_k}) = p_{i_1} \frac{\prod_{i,j} p_{ij}^{\,k p_{ij}}}{\prod_i p_i^{\,k p_i}}$$

The size of the interval has not changed and is still given by (8), so:

$$\log_2 \left( \frac{P(I_{i_1 \ldots i_k})}{|I_{i_1 \ldots i_k}|^s} \right) = k \left( \sum_{i,j} p_{ij} \log_2(p_{ij}) - \sum_i p_i \log_2(p_i) + s \log_2(m) \right) + \log_2(p_{i_1})$$

and

where

$$\theta = \dim_H(L) = -\frac{1}{\log_2(m)} \left( \sum_{i,j} p_{ij} \log_2(p_{ij}) - \sum_i p_i \log_2(p_i) \right)$$

and this can be expressed in terms of 1-gram and 2-gram entropies:

$$\dim_H(L) = \frac{H_2 - H_1}{\log_2(m)}$$

This result can be extended to the case where conditional probabilities involving r
elements are known:

$$\log_2 \left( \frac{P(I_{i_1 \ldots i_k})}{|I_{i_1 \ldots i_k}|^s} \right) = k \left( \sum_{i_1, \ldots, i_r} p_{i_1 i_2 \ldots i_r} \log_2(p_{i_1 i_2 \ldots i_r}) - \sum_{i_1, \ldots, i_{r-1}} p_{i_1 i_2 \ldots i_{r-1}} \log_2(p_{i_1 i_2 \ldots i_{r-1}}) + s \log_2(m) \right)$$

$$\dim_H(L) = \frac{H_r - H_{r-1}}{\log_2(m)}$$

for s = dimH(L) we also have

0 < lim_{k→∞} P( I_{i1 i2 ... ik} ) / |I_{i1 i2 ... ik}|^s = p_{i1 i2 ... i_{r-1}} < ∞

so that L is what is called an s-set and for these sets having dimH(L) fractional implies that
they are totally disconnected (dust-like), their lower density is 0 and their upper density is
somewhere in the range [2-s,1] [19].

Chapter 4

Chaos and Fractals in Cryptography

In this chapter we propose several ways in which chaotic systems and fractals can be
used to generate pseudorandom sequences, permutation matrices, substitution systems and
secret sharing schemes.

For each proposed method, a series of computer simulations are run, and from these,
conclusions on their performance are inferred.

4.1 Pseudorandom sequence generation

A pseudorandom number (or sequence) generator, PRNG, is a cryptographic primitive on its own that is usually used in conjunction with cryptographic algorithms and protocols. The
quality of the PRNG is of prime importance because a badly designed one may undermine
the security of the encoding scheme, rendering an otherwise sound encryption algorithm
useless [56].

A strong PRNG is one that generates sequences that will pass a large number of
statistical tests that a truly random sequence would be expected to satisfy. Or, from our
complexity viewpoint, one that generates maximum complexity sequences. Complexity,
here, is the practical measure we introduced in Chapter 2. Theoretical complexity cannot
usually be proved for particular sequences, even though most sequences are maximally complex [57], and therefore we are relegated to statistical and empirical tests, including
computer simulations, in order to estimate the complexity of a sequence [58].

A chaotic system, like the quadratic map, seems to be the kind of dynamical system from which we may generate pseudorandom numbers. Using the parameter value c = -2, we know the mapping to be at FDC:

x_{k+1} = x_k² - 2
The state variable x may take values between -2 and 2 and almost all orbits are dense and
those that are not are unstable. From such a system, we may generate pseudorandom
sequences by direct or indirect thresholding. Of course, the quadratic map is just one of infinitely many chaotic systems to which this procedure applies and for which the conclusions to be derived hold.
4.1.1 Direct thresholding.

In this method, we decide first on the number of different values a particular digit within the sequence may exhibit. We divide the whole range of x into that number of intervals, i.e., we set that number of thresholds, and assign a value to the digit sk according to the interval into which xk falls.
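In symbols (our compact restatement, matching the prngdt program in Appendix A, where LO and HI denote the endpoints of the range of x):

sk = ⌊ q ( xk - LO ) / ( HI - LO ) ⌋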

We illustrate the scheme in Figure 4.1. A typical orbit of x is shown in Figure 4.1(a); if we were to generate a pseudorandom sequence from this orbit with an alphabet consisting of four letters, we would divide the range of x into four intervals (Figure 4.1(b)) and assign s according to which interval x is in at that time (Figure 4.1(c)).

We generated ten different orbits of the quadratic map, whose initial points were evenly
distributed between -2 and 2, and then applied direct thresholding to produce sequences with
alphabet sizes of 2, 4, 8, 16, and 26 letters. The results are recorded in Table 4.1.

Table 4.1: Average Hk complexities for direct thresholding.

q\k 1 2 3 4 5 6 7
2 0.9999 1.9998 2.9996 3.9991 4.9981 5.9960 6.9916
4 1.9958 3.1396 4.2511 5.3171 6.3246 7.3255 8.3169
8 2.9544 4.1829 5.2805 6.3384 7.3684 8.3693 9.3446
16 3.8955 5.2553 6.4168 7.4915 8.5092 9.4862 10.4150
26 4.5555 5.9393 7.0693 8.1175 9.1074 10.0606 10.9479

4.1.2 Indirect thresholding

In the case of indirect thresholding we only divide the range of x in two intervals and we
assign a 0 or a 1 to an intermediate sequence s'. Then, to form s from an alphabet of size q
(assumed, for simplicity, to be a power of 2), we group log2(q) digits of s' and s would
simply be the numerical value of the resulting symbol.
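To make the grouping concrete, here is a minimal sketch of one symbol's worth of indirect thresholding, condensed from the prngit program in Appendix A; the function name next_symbol is ours.

/* One symbol of s (alphabet size q = 2^k) from k binary threshold   */
/* decisions along the orbit of x; x is updated in place.            */
static double map(double x) { return x * x - 2.0; }  /* quadratic map at FDC */

int next_symbol(double *x, int k)
{
    int j, sym = 0, bit = 1;
    for (j = 0; j < k; j++)
    {
        if (*x > 0.0)     /* single threshold in the middle of the range */
            sym += bit;
        bit *= 2;
        *x = map(*x);     /* advance one point along the orbit */
    }
    return sym;
}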

To produce m digits of s using direct thresholding, we need only to follow the orbit of x
for m steps, whereas to produce the same amount of digits using indirect thresholding we
need orbits of x that consist of m log2(q) points. Again, we record the average complexity
components of ten different pseudorandom sequences generated using indirect thresholding.
See Table 4.2.

Table 4.2: Average Hk complexities for indirect thresholding.

q\k 1 2 3 4 5 6 7
2 0.9999 1.9999 2.9999 3.9998 4.9995 5.9991 6.9982
4 1.9999 3.9995 5.9981 7.9924 9.9691 11.8750 13.4480
8 2.9996 5.9969 8.9782 11.8122 13.5659 13.9617 14.0165
16 3.9991 7.9844 11.7373 13.4219 13.5967 13.6082 13.6089
32 4.9977 9.9261 13.0009 13.2779 13.2868 13.2870 13.2868

4.1.3 Comparison between direct and indirect thresholding

It can be seen, by comparing Tables 4.1 and 4.2, that except for the case q=2, all
corresponding entries in indirect thresholding exceed those of direct thresholding. When q=2
there is no difference between the methods.
The maximum possible complexity for sequences of infinite length depends only on the alphabet size (see equation (2.6)) and is shown in Table 4.3. Compared against it, both methods fall short, especially in the case of larger alphabet sizes and k-gram lengths. However, as we saw in Chapter 2, maximum complexity is limited by sequence length. The series of sequences that were generated contain 50000 elements each, so for small alphabet sizes (q = 2, 4) maximum k-gram complexity is attainable, but for larger alphabet sizes the sequence
length becomes the limiting factor. So a better model for comparison would be Table 4.4: the
maximum finite length sequence complexities.

As we can see, indirect thresholding comes closer to reaching the maximum possible complexity and should therefore be the method of choice when generating pseudorandom sequences with a chaotic system.

Table 4.3: Maximum Hk complexities for infinite length sequences.

q\k 1 2 3 4 5 6 7

2 1.0000 2.0000 3.0000 4.0000 5.0000 6.0000 7.0000

4 2.0000 4.0000 6.0000 8.0000 10.0000 12.0000 14.0000

8 3.0000 6.0000 9.0000 12.0000 15.0000 18.0000 21.0000

16 4.0000 8.0000 12.0000 16.0000 20.0000 24.0000 28.0000

26 4.7004 9.4009 14.1013 18.8017 23.5022 28.2026 32.9031

32 5.0000 10.0000 15.0000 20.0000 25.0000 30.0000 35.0000

The reason why indirect thresholding achieves a better complexity lies in the fact that under binary thresholding less information is revealed about the structure of the underlying chaotic system. The finer the thresholding grid in direct thresholding (number of decision levels and intervals), the better we can estimate the mapping that is being applied, and the less random the numbers coming from such a mapping appear.

Table 4.4: Maximum Hk complexities for finite length sequences.

q\k 1 2 3 4 5 6 7

2 1.0000 2.0000 3.0000 4.0000 5.0000 6.0000 7.0000

4 2.0000 4.0000 6.0000 8.0000 10.0000 12.0000 14.0000

8 3.0000 6.0000 9.0000 12.0000 15.0000 14.0242 14.0242

16 4.0000 8.0000 12.0000 16.0000 13.6092 13.6091 13.6089

26 4.7004 9.4009 14.1013 13.2873 13.2871 13.2870 13.2868

32 5.0000 10.0000 15.0000 13.2873 13.2871 13.2870 13.2868

The only drawback of indirect thresholding is the increase in the number m log2(q) of
orbit points that are necessary to compute in order to generate m sequence digits. However,
this is not a serious limitation because we can either choose a chaotic map that is not
computationally overwhelming, or we may design ad-hoc hardware to handle the chore.

4.2 Permutation matrix schemes

If we are to encode a text that consists of a certain number of letters into another one of the same size, we may do this by effecting first a permutation on the original text and following it with a substitution. The former implies the position of individual letters is altered, whereas the second implies the letters themselves are changed under a certain rule. At bit level, it appears that almost all texts and almost all ciphertexts contain the same number of ones and zeros so, at bit level, permutation and substitution seem to be equivalent. However, considering alphabets of size larger than two, there is a distinction between the two, and conceptually it will be convenient to treat them separately.

In this section we use chaotic systems and fractals to generate permutation matrices that
are then used to scramble the data within the original text. We propose algorithms for
encoding and decoding data, then apply them to a large number of text fragments and finally
compute the average, maximum and minimum complexity differences between the original
texts and their corresponding ciphertexts obtained by these schemes.

4.2.1 Permutation matrix generation by means of a chaotic system (PMG/CS)

In this case both Alice and Bob agree on a particular chaotic system to be used. This and
the initial state x0 constitute the secret key. Suppose Alice wants to encrypt a message of
length m. To do so, she needs to find an m×m permutation matrix P such that, if we represent
the original text and the ciphertext by column vectors T and C respectively, the following
holds:
C = P T
Since P contains a single "1" per column and row, a more compact way to represent P
would be by means of a string π of length m whose components are non-repeated numbers in
the range 1 to m.

P ⇔ π = ( π1 π2 ... πm ),  i ≠ j ⇒ πi ≠ πj

the meaning of πi being that the i-th element in T becomes the πi-th element of C. In other words, P has a "1" in the πi-th row and i-th column.

The encoding scheme amounts to calculating an orbit of the chaotic system with m-1 points in it, and setting up m-i+1 intervals in the range of x at time i. The interval Ij to which x_{i-1} belongs determines πi. The last element, πm, is the only number that is left after having chosen the first m-1 elements of π. From π we can compute P, although in practice this is not even necessary.

Upon reception of the ciphertext, Bob uses his own chaotic system (a copy of Alice's) to
build the same permutation, and from it he can easily construct its inverse and apply it to the
ciphertext to decipher the original information:

P⁻¹ C = P⁻¹ P T = T

The last state of the system becomes the updated new initial state, and both Alice and
Bob are ready to do the same thing again. Since the system is chaotic, the next permutation
matrix will be a new one. There are at most m! permutations of size m, so eventually Alice and Bob will be forced to reuse a permutation matrix, no matter the dynamics of the chaotic system they are using. However, the number of permutations increases rapidly with m. There are more than 3.5×10^6 permutations of 10 elements, and over 1.3×10^12 when m = 15, so the chances of repeating permutation matrices are slim.

4.2.1.1 PMG/CSE encoding scheme

a) Alice creates an available positions vector A with m positions:

A(i) = i,  i ∈ [1, m]

b) Alice generates an orbit of the chaotic system, starting with x0 and consisting of m-1
points (x0 . . . xj . . . xm-2);

c) for i taking values between 1 and m-1,

c.1) Alice divides the range of x into m-i+1 intervals I1, I2, . . . , Im-i+1;

c.2) Alice assigns the value of the i-th element of the permutation π

πi = A(j)  iff  x_{i-1} ∈ Ij


c.3) πi is deleted from A and the size of A is reduced by 1;

d) by this time, there is only one element in A, so:

π m= A
e) from π, Alice can construct P and then use (4.2) to generate the ciphertext C.

4.2.1.2 PMG/CSD decoding scheme

a) Bob does exactly the same as Alice, but instead of generating P, he computes P-1:

b) Bob applies P-1 to the ciphertext to recover the original message.

T = P -1 C

4.2.1.3 Example of using PMG/CS

We now proceed to illustrate PMG/CS on a small text T. Suppose Alice and Bob agree
on using the logistic map at FDC with initial condition x0, that is:

x_{i+1} = 4 x_i ( 1 - x_i ),  x0 = 0.3
and the range of x is the interval (0,1).

Alice wants to encode the message Hello!, so:

T = [H e l l o ! ] T
(Actually, T would have as components the ASCII code for the letters involved) Following
procedure PMG/CSE (a) she creates the A vector:

A = [1 2 3 4 5 6 ] T
Then she generates an orbit of x0 consisting of 5 points (PMG/CSE (b)):
x0 = 0.3000
x1 = 0.8400
x 2 = 0.5376
x3 = 0.9943
x4 = 0.0225

Now she lets i=1 so, by PMG/CSE (c.1) she divides the interval (0,1) into six intervals:
I 1 = (0,0.1666)
I 2 = [0.1666,0.3333)
I 3 = [0.3333,0.5)
I 4 = [0.5,0.6666)
I 5 = [0.6666,0.8333)
I 6 = [0.8333,1)

Since x0 belongs to I2, then

π 1 = A(2) = 2
and A is updated:

A = [1 3 4 5 6 ] T
Now she lets i=2 and sets up five intervals:

I 1 = (0,0.2)
I 2 = [0.2,0.4)
I 3 = [0.4,0.6)
I 4 = [0.6,0.8)
I 5 = [0.8,1)
Since x1 belongs to I5, then

π 2 = A(5) = 6
and A is updated:

A = [1 3 4 5 ] T
Now she lets i=3 and sets up four intervals:

I 1 = (0,0.25)
I 2 = [0.25,0.5)
I 3 = [0.5,0.75)
I 4 = [0.75,1)
Since x2 belongs to I3 then

π 3 = A(3) = 4
and A is updated:

A = [1 3 5 ] T

Now she lets i=4 and sets up three intervals:

I 1 = (0,0.3333)
I 2 = [0.3333,0.6666)
I 3 = [0.6666,1)
Since x3 belongs to I3, then

π 4 = A(3) = 5
and A is updated:

A = [1 3 ] T

Now she lets i=5 and sets up two intervals:

I 1 = (0,0.5)
I 2 = [0.5,1)
Since x4 belongs to I1, then

π 5 = A(1) = 1
and A is updated:

A = [3]T
Now, by PMG/CSE (d),

π 6 = A= 3
So the permutation is:

π = (2 6 4 5 1 3)
And according to (4.8) the permutation matrix P becomes:

0 0 0 0 1 0
 
1 0 0 0 0 0
0 0 0 0 0 1
P=  
0 0 1 0 0 0
 
0 0 0 1 0 0
 
0 1 0 0 0 0
Alice will now produce the ciphertext:

C = P T = [o H ! l l e ] T
and updates her initial state:

x0 = 0.0225
Bob, following PMG/CSD (a), will arrive at the same permutation (4.31), but by PMG/CSD (b) he will compute the matrix P⁻¹ as the transpose of P, because any permutation matrix is a unitary matrix and this is a property of such matrices [59].

P⁻¹ =
0 1 0 0 0 0
0 0 0 0 0 1
0 0 0 1 0 0
0 0 0 0 1 0
1 0 0 0 0 0
0 0 1 0 0 0

And upon application of this matrix to C he will recover the original text. Also, he will
update his initial state as in (4.34) to be in synchronism with Alice.

From a practical standpoint, operations (4.2) and (4.4) do not need to be carried out. In fact, it is much more efficient to construct the permutation π and use it to re-order the elements in T while encoding, or those in C while decoding. The effect is the same, but moving elements in computer memory is faster than performing even simple arithmetic.
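A minimal sketch of this shortcut, with function names of our own choosing, follows. It builds π from an orbit of the logistic map exactly as in steps (a)-(d) of PMG/CSE and re-orders the message directly, without ever forming P; for brevity it assumes m is at most 64. Run on the example above, it reproduces the ciphertext oH!lle.

#include <stdio.h>

/* Build the permutation pi[0..m-1] (1-based values, as in the text) */
/* from an orbit of the logistic map at FDC, following PMG/CSE.      */
void build_pi(double x0, int m, int *pi)
{
    int A[64];                       /* available positions, m <= 64 here */
    int i, j, n = m;
    double x = x0;
    for (i = 0; i < m; i++) A[i] = i + 1;
    for (i = 0; i < m - 1; i++)
    {
        j = (int)(x * n);            /* which of the n intervals of (0,1) */
        if (j >= n) j = n - 1;       /* guard the endpoint */
        pi[i] = A[j];
        for (; j < n - 1; j++) A[j] = A[j + 1];  /* delete A(j) */
        n--;
        x = 4.0 * x * (1.0 - x);     /* next orbit point */
    }
    pi[m - 1] = A[0];                /* last remaining position */
}

int main(void)
{
    char T[] = "Hello!", C[7] = {0};
    int pi[6], i;
    build_pi(0.3, 6, pi);
    for (i = 0; i < 6; i++)
        C[pi[i] - 1] = T[i];         /* i-th element becomes the pi_i-th */
    printf("%s\n", C);               /* prints oH!lle */
    return 0;
}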

4.2.1.4 Computer simulation and results for PMG/CS

We used the data in TXT4 (see Table 2.1) to test the performance of the PMG/CS encoding scheme. TXT4, when stripped of excess space characters, is a little over 80K bytes in length. We fragmented it into eight text files of 10K bytes and subjected each
fragment to fifteen independent runs of PMG/CSE. After this, we calculated the k-gram
entropies of the original fragments and their corresponding ciphertexts. What we found was
that the complexities of all the ciphertexts corresponding to the same text fragment (but with
chaotic orbits that started at different points) were very close, so an average of them is
representative of the effect of the encoding scheme on the ciphertext complexity,
independent of the initial point of the orbit.

Figure 4.2 shows the complexity of the original text fragment and that of the fifteen different encoded versions of it. These fifteen curves overlap, and they all lie above the complexity of the original text. Thus, encoding using PMG/CS increases the complexity of the original text. Figure 4.3 shows a curve for the complexity difference between the text and the ciphertext. Even though the figures are those of one of the fragments, the general behavior is the same, as can be seen in Table 4.5.

It is noteworthy that, as revealed by Figures 4.2 and 4.3 and more generally in Table 4.5,
encoding by means of a permutation matrix does not alter the value of 1-gram complexities,
since the number and frequency of 1-grams remains unchanged under a permutation. The
other k-grams, however, do increase due to increased confusion introduced by the shuffling
of the original data.

Table 4.5: Complexity results for PMG/CS.

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
txt1.crp 4.4079 8.5869 11.0202 11.2388 10.9634 10.7030 10.4808 10.2877 10.1189 9.9658
txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
txt2.crp 4.2456 8.3582 10.9510 11.2297 10.9627 10.7027 10.4808 10.2877 10.1189 9.9658
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
txt3.crp 4.2258 8.3218 10.9459 11.2289 10.9619 10.7027 10.4808 10.2877 10.1189 9.9658
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
txt4.crp 4.2416 8.3506 10.9624 11.2295 10.9610 10.7028 10.4808 10.2877 10.1189 9.9658
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
txt5.crp 4.2531 8.3619 10.9500 11.2268 10.9620 10.7028 10.4808 10.2877 10.1189 9.9658
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
txt6.crp 4.3558 8.5543 11.0414 11.2378 10.9631 10.7029 10.4808 10.2877 10.1189 9.9658
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
txt7.crp 4.3072 8.4564 11.0051 11.2361 10.9621 10.7030 10.4808 10.2877 10.1189 9.9658
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
txt8.crp 4.2654 8.3987 10.9619 11.2250 10.9620 10.7028 10.4808 10.2877 10.1189 9.9658

4.2.2 Permutation matrix generation using an IFS (PMG/IFS)

Alice and Bob agree on using the same IFS {wi}, with i ∈ [1,m] . Not only do they agree
on the functions that comprise the IFS, they also agree on their ordering. Now, the method
we are about to explain works for one-dimensional objects (linear sequences), but it can
equally be applied to two-dimensional images and higher dimensional entities. So we will
assume Alice wants to communicate a picture to Bob. Another advantage of this method is
that it allows for both lossless and lossy transmission of information, at the encoder's
discretion.

So, we are to transmit a bi-dimensional array of numbers that represents an image. For
this, let W be a "basic" IFS, the one Alice and Bob agree on:

W = { wi } i ∈ [1, m]
An important requirement on W is that its attractor G must contain a closed, simply
connected subset S, homeomorphic to the interior of a square. To somewhat simplify the
following discussion, we will assume that by design of W, G is, in fact, a square.

From this W, Alice designs a new probabilistic IFS W', with attractor G identical to that
of W, but whose probabilities are such that the different regions of G are visited with
probabilities that are proportional to the intensity of the corresponding regions in the original
image. She then transmits only these probabilities (or a series of numbers proportional to
them) to Bob. Upon reception of the probabilities for the functions within W', Bob
reconstructs this IFS and uses it to regenerate the image.

Before we actually state the algorithm used by Alice and Bob to encode and decode the
image, we will show the intermediate steps that are required: the design of W, the procedure
that renders G and the design of W'.

4.2.2.1 Choosing a basic IFS

Suppose we are transmitting gray-scale images that are 64×64 pixels in size. We would
like to construct a basic IFS whose attractor G is a rectangle of the same size. By the collage
theorem [22], such an IFS can be found. To do so, we must find a series of transformations
such that the union, or collage, of the images of G form G.
Theorem 4.1 The collage theorem [22]. Let (X,d) be a complete metric space. Let A ∈
H(X) (the space of compact subsets of X ) be given and let ε ≥0 be given. Choose an IFS {wi}
with contractivity factor 0 ≤ s < 1, so that

h( A, ∪_i wi(A) ) ≤ ε

where h(⋅,⋅) is the Hausdorff metric. Then

h( A, G ) ≤ ε / ( 1 - s )
where G is the attractor of the IFS, and A is the set we wish to approximate.

So, in our case, A is a 64×64 pixels square, and we want G to be as close to this square as
possible. There are infinitely many solutions to this problem, one of which is depicted in
Figure 4.4.

We can exactly match the square by the following IFS:

4.2.2.2 The attractor G and the final IFS W'

Now consider an image, Figure 4.5. Each of the functions in (4.39) is responsible for a
region of the attractor. The frequency at which they are applied determines the intensity of
that part of the attractor, so regions in the original image that are darker should be visited
more often. If we define µ(a) as some measure of the intensity of the image in a subset a,
then the probability p(a) of visiting such region should be proportional to µ(a).

More precisely, let A be the image and a a simply connected subset of A; then the probability of visiting a is given by

p(a) = µ(a) / µ(A)

where

µ(a) = ∑ (values of the pixels ∈ a)
But a particular region within the attractor G of the IFS is reached only after applying the
functions in W in a specific way. For example, by successive applications of w1, we get closer
and closer to the upper-right corner of G and the whole attractor is mapped into a region that
shrinks in size with every iteration. In Figure 4.6 we can see a partial diagram that shows
how a particular region in the attractor corresponds to a certain combination of functions
within the IFS. In the figure, adjacency stands for functional composition, so w1w2 really
means w1(w2(x)), but we do not write it this way on the graph to avoid cluttering.

To regenerate the attractor G, we can proceed in two different manners: deterministically or stochastically. In both cases we start with any non-empty compact subset of the plane, s0. This might as well be a single point. Now, when generating G deterministically we proceed to construct the set s1 as follows:

s1 = ∪_i wi( s0 )

and, for the k-th iteration we have

sk = ∪_i wi( s_{k-1} )

then, in the limit,

G = lim_{k→∞} sk

We only need to plot the points in sk to exhibit the k-th approximation of G.

Each of the subsets in sk corresponds to the image of s0 under one series of choices of functions within the IFS, i.e., to one of the branches in the computation chart shown in Figure 4.7. If we take s0 to equal G, then sk is composed of m^k different regions rn, where m is the number of functions W is comprised of, and 1 ≤ n ≤ m^k.

In a deterministic system, the computation on each branch is performed once and the corresponding region is visited a single time. In stochastic reconstruction, also known as the chaos game [14], functions are applied according to designated probabilities, and this allows different regions to be visited more often and, therefore, appear with more intensity. This is what allows us to use a deterministic IFS to scan the image and a probabilistic one to reconstruct it. Actually, we may as well use a deterministic IFS to reconstruct the image, provided we shade the region rk proportionally to its probability:

p( rk ) = µ( A ∩ rk ) / µ(A)
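A minimal sketch of the chaos game follows; the affine maps, probabilities and 64×64 grid are illustrative stand-ins of our own (a square-tiling IFS in the spirit of (4.39)), and the visit counts play the role of the density pattern used later for secret sharing.

#include <stdio.h>
#include <stdlib.h>

#define SIZE 64

/* One affine map w(x,y) = (a x + b y + e, c x + d y + f), with      */
/* probability p of being chosen in the chaos game.                  */
struct map { double a, b, c, d, e, f, p; };

void chaos_game(const struct map *w, int m, long iters,
                long count[SIZE][SIZE])
{
    double x = 0.5, y = 0.5, xn, yn, r, acc;
    long t;
    int i, px, py;
    for (t = 0; t < iters; t++)
    {
        r = (double)rand() / RAND_MAX;   /* pick a map according to p_i */
        acc = 0.0;
        for (i = 0; i < m - 1; i++)
        {
            acc += w[i].p;
            if (r < acc) break;
        }
        xn = w[i].a * x + w[i].b * y + w[i].e;   /* apply w_i */
        yn = w[i].c * x + w[i].d * y + w[i].f;
        x = xn; y = yn;
        px = (int)(x * SIZE); py = (int)(y * SIZE);
        if (px >= 0 && px < SIZE && py >= 0 && py < SIZE)
            count[py][px]++;             /* accumulate the density pattern */
    }
}

int main(void)
{
    /* four maps that tile the unit square, each chosen with p = 1/4 */
    struct map w[4] = {
        {0.5, 0, 0, 0.5, 0.0, 0.0, 0.25}, {0.5, 0, 0, 0.5, 0.5, 0.0, 0.25},
        {0.5, 0, 0, 0.5, 0.0, 0.5, 0.25}, {0.5, 0, 0, 0.5, 0.5, 0.5, 0.25}
    };
    static long count[SIZE][SIZE];
    chaos_game(w, 4, 1000000L, count);
    printf("visits near the center: %ld\n", count[SIZE / 2][SIZE / 2]);
    return 0;
}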

So, the pieces to implement the encoding scheme are laid down, and we are now in
position to state the encoding and decoding schemes.

4.2.2.3 PMG/IFSE

Alice and Bob agree on a basic IFS W. Alice wishes to send m^k points of an image A to Bob, where m is the number of functions in W, so she proceeds the following way:

a) she sets

s0 = G
b) she follows each possible path of the computation chart depicted in Figure 4.7 to
compute region rn

r1 = w1( w1( w1( ... w1( s0 ) ... )))    (k-fold composition)
r2 = w2( w1( w1( ... w1( s0 ) ... )))
...
rm = wm( w1( w1( ... w1( s0 ) ... )))
r_{m+1} = w1( w2( w1( ... w1( s0 ) ... )))
...
r_{m^k} = wm( wm( wm( ... wm( s0 ) ... )))

and, in general,

rn = w_{a0}( w_{a1}( ... w_{a_{k-1}}( s0 ) ... )),  ai ∈ [1, m]

n = ∑_{i=0}^{k-1} ( ai - 1 ) m^i + 1

(a sketch of this correspondence between n and the composition sequence appears after the list);

c) for each region Alice registers the sequence of compositions and the probability

p( rn ) = µ( A ∩ rn ) / µ(A)
these become part of the new IFS W'.

d) Alice transmits the m^k probabilities thus found (or numbers that are proportional to them).

In fact, for perfect reconstruction, k should be selected in such a way that

max_i { Area( ri ) } ≤ 1
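The sketch below makes the correspondence of step (b) concrete: it recovers, from a region index n, the sequence a0, a1, ..., a_{k-1} of function labels whose composition produces rn. It is a small helper of our own, not part of the dissertation's appendices.

/* Recover the composition sequence a[0..k-1] (labels in 1..m) for   */
/* region r_n, inverting n = sum_i (a_i - 1) m^i + 1: this is just   */
/* the base-m expansion of n-1, offset by one.                       */
void region_to_labels(long n, int m, int k, int *a)
{
    int i;
    long v = n - 1;
    for (i = 0; i < k; i++)
    {
        a[i] = (int)(v % m) + 1;
        v /= m;
    }
}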

4.2.2.4 PMG/IFSD

a) Bob, upon reception of N data points computes

k = log_m( N )

b) he then proceeds to create m^k (or N) different functions by traversing the tree in
Figure 4.7 and associates to each of them the probabilities that Alice sent. These
functions compose W'.
c) he can use the chaos game with the probabilistic IFS to reconstruct the image or he
can simply find the regions rn as in (4.47) or (4.48) and shade them with an intensity
proportional to their corresponding probability.

This method reconstructs the original image because Alice and Bob have an agreement on the functions, and on the order of the functions, in W. If the functions' order were unknown to Bob, he would have to sort through all possible permutations before he could actually decode the image.

The reason why this is equivalent to a permutation matrix scheme is that when (4.50) is met with equality, each and every point in A is sampled once and a number proportional to the value of the pixel is transmitted, so basically only the ordering of the samples has been changed, i.e., we have a permutation.

4.2.2.5 Extensions and modifications to PMG/IFS

The same method can be applied to "linear" data, like the letters in a text file. In this case
the basic IFS Alice and Bob agree on should have a line for an attractor. The method is
otherwise the same.

Concerning the type of IFS, it would seem that it is possible to add flexibility and a range of choice if the basic IFS can be made to depend on one or more parameters. In this case, suitable for public use of the method, the general form of the basic IFS is known to everybody, but the parameter α is only known to the parties that wish to communicate. From this key it is possible for both Alice and Bob to construct the basic IFS and then proceed as described earlier.

In Figure 4.8 we illustrate this concept. Here we let α be the ratio between the length l and height h of the rectangular regions within the attractor:

α = l / h

And given that the rectangle is 64×64,

l = 64 α / ( 1 + α )

h = 64 / ( 1 + α )
The five functions that comprise the IFS are given by:

This reduces to (4.39) for the special case α = 1, and the factor u by which the area is compressed is:

u = max{ α/(1+α)² , ((α-1)/(1+α))² }

and perfect reconstruction is achieved when

(64×64) uᵏ ≤ 1

that is, following (4.50), when k is large enough that no region ri covers more than one pixel.

As an example of constructing an IFS suitable for encoding linear data, let us say we are interested in sending 10000 letters in our text (this is not a stringent requirement, and with minor modifications we can send as much or as little text as we want). We will divide the line segment between 1 and 10000 into ten equal length segments, as shown in Figure 4.9. We also show in this figure the particular labeling of the functions we have. Since ordering is important, this is one of 10! possible labelings, so the key space (given no other free parameter) contains a little over three and a half million keys. Testing this by brute force would not be a problem, but it is easy to see that for a moderately larger number of functions this increases astronomically. For example, had we chosen to divide the line segment in 20 parts, we would have a key space of over 2.4×10^18 keys!

The IFS W that will reconstruct the line segment shown in Figure 4.9 consists of the
following ten functions:

This IFS can now be used in PMG/IFSE with the text vector playing the role of the image A.

4.2.2.6 Examples of PMG/IFS and simulation results

If we consider the IFS given by (4.39) but allow for re-labeling of the functions and let the key determine this, then there are m! keys. In this simple example this amounts to 4!. The graphical results of applying these IFS's to the image of the butterfly are shown in Figure 4.10, where we present the original image and the 24 possible encodings of it.

The graph of the complexity components for the original figure and the encrypted versions is shown in Figure 4.11. As we can see, there is an improvement in the apparent complexity of the encrypted versions of the image. The average values of those complexity components are shown in Table 4.6, along with the complexity components of the original image and the maximum possible complexity given a sequence length of 64×64 = 4096 atoms. Figure 4.12 shows the components of the complexity difference vector. As we have seen earlier, there is no improvement in the complexity of 1-grams because PMG/IFS is a permutation matrix scheme.

Comparing Figure 4.11 with Figure 4.2, and Figure 4.12 with Figure 4.3, we see that even though the absolute values of the k-gram complexities Hk are smaller for this image than for the text analyzed in Section 4.2.1.4, the complexity differences are larger, indicating a greater amount of confusion introduced by the IFS-produced permutation matrix.

Table 4.6:Complexity results when using PMG/IFS on an image.

k-gram 1 2 3 4 5 6 7 8 9 10
Hk 2.3075 2.8897 3.2584 3.3977 3.5684 3.5420 3.7435 3.8561 3.7394 3.8336
Avg(Hk) 2.3075 3.7622 4.9309 5.0835 5.6694 5.5227 5.6285 4.7439 5.5303 5.3683
Max(Hk) 8.0000 16.0000 11.9993 11.9989 11.9986 11.9982 11.9979 11.9975 11.9972 11.9968

If we choose to encode text by means of the permutation matrices induced by (4.58), in which we allow the key to be a particular labeling of the ten functions that comprise the IFS,
we find that the results match closely those for PMG/CS, as can be seen in Table 4.7 and
Figures 4.13 and 4.14. This suggests that, for almost all choices of permutation matrices, the
final complexity of the ciphertext is a factor times the original complexity. This factor seems
to be around the value of 1.1 for the k-grams that are not limited by sequence length.

Table 4.7: Complexity results when using PMG/IFS.

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
txt1.crp 4.4079 8.5838 11.0344 11.2395 10.9629 10.7030 10.4808 10.2877 10.1189 9.9658
txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
txt2.crp 4.2456 8.3527 10.9521 11.2293 10.9629 10.7027 10.4808 10.2877 10.1189 9.9658
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
txt3.crp 4.2258 8.3248 10.9487 11.2297 10.9613 10.7028 10.4808 10.2877 10.1189 9.9658
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
txt4.crp 4.2416 8.3484 10.9577 11.2285 10.9620 10.7026 10.4808 10.2877 10.1189 9.9658
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
txt5.crp 4.2531 8.3575 10.9500 11.2304 10.9632 10.7027 10.4808 10.2877 10.1189 9.9658
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
txt6.crp 4.3558 8.5302 11.0495 11.2351 10.9626 10.7030 10.4808 10.2877 10.1189 9.9658
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
txt7.crp 4.3072 8.4207 10.9873 11.2161 10.9603 10.7029 10.4808 10.2877 10.1189 9.9658
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
txt8.crp 4.2654 8.3869 10.9621 11.2245 10.9620 10.7029 10.4808 10.2877 10.1189 9.9658

4.3 Substitution systems

Substitution systems are systems in which the ciphertext c is obtained by substituting each element ti that appears in the text t by a corresponding ci according to a prescribed rule.
When each letter in the alphabet has only one possible substitution, the system is called a
monoalphabetic substitution system, otherwise it becomes a polyalphabetic substitution
system [10].

For monoalphabetic substitution we may express the correspondence between
cipherdigits and text digits as in (4.59), without attention to the position within the text (and
ciphertext) at which the digit is located. For polyalphabetic systems the position influences
the transformation, hence the substitution is of the form (4.60).

ci = f k ( t i )

ci = f k ( t i ,i)

In both cases, the particular transformation employed is determined by the key k in (4.59) or (4.60). Substitution systems add to the confusion of the ciphertext and not to its diffusion (see Chapter 1).

The simplest, almost trivial, substitution system is Caesar's substitution Ck:

ci = C k ( t i ) = t i + k ( mod m)
where m is the size of the alphabet. Elementary number theory shows that decoding is
accomplished by basically the same operation:

t i = ci - k ( mod m)
This kind of system is too simple to be used in practice because the key space contains only m elements. Slightly more complicated systems that are related to this can be found in the literature [1][10][12][39], but for all of them, cryptanalysis is simple. The amount of confusion introduced by monoalphabetic substitution is only apparent: it neither destroys nor hides the statistical relationships among symbols in the text, so correlating them to known distributions in a language makes discovering the individual letter transformations not only a feasible, but an easy task.
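As a concrete illustration, here is a minimal sketch of Caesar's substitution (4.61)-(4.62) over an m-letter alphabet; the function names are ours.

/* Caesar's substitution and its inverse; digits t and c are in 0..m-1. */
int caesar_encode(int t, int k, int m) { return (t + k) % m; }
int caesar_decode(int c, int k, int m) { return (c - k % m + m) % m; }

The extra + m in the decoder guards against C's negative remainder; mathematically it is the same mod-m subtraction as (4.62).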

Polyalphabetic substitution systems, on the other hand, are much more effective in hiding
the statistical characteristics of the text, so they are not evident in the ciphertext, and some
polyalphabetic systems are perfect and unbreakable, no matter the computing power
available.

An example of a perfect polyalphabetic system is the one-time pad, in which the key k is a string that contains as many randomly chosen letters as the text we want to encode (this, in fact, is one of the major drawbacks of the one-time pad). The ciphertext digits are obtained as follows:

ci = t i + k i ( mod m)
Even though there is a striking similarity with (4.61), the one-time pad cannot be broken. In (4.61) it is enough to know the correspondence between one digit of the text and the ciphertext to deduce k and, therefore, break the system. But in (4.63) the knowledge of one such correspondence reveals nothing of how, later on in the text, the same digit is going to be encoded, because the key digit at that position is unrelated and random.

The key used in the one-time pad is as lengthy as the original text, and it has to be
produced at random every time, otherwise cryptanalysis becomes feasible when enough
cryptograms have been accumulated. These operations, namely key generation and
distribution, are expensive, therefore methods in which the key length can be reduced are
sought. One of them is the Vigenère substitution, where a short key, r-digits long, is used to
encode blocks of plaintext of length r.
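A minimal sketch of the Vigenère substitution, with names of our own choosing:

/* Vigenère substitution: the r-digit key is reused cyclically,      */
/* so c_i = t_i + key[i mod r] (mod m).                              */
void vigenere_encode(const int *t, int *c, long n,
                     const int *key, int r, int m)
{
    long i;
    for (i = 0; i < n; i++)
        c[i] = (t[i] + key[i % r]) % m;
}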

Chaotic systems provide a natural way to extend a short key into a running key that has the length of the original text and no discernible pattern. In essence, the PRNG's discussed in Section 4.1 can produce the digits of a running key, the actual key being the starting point x0 of the chaotic system. Using this approach we implement two simple encoding schemes, SSPRNG/DT and SSPRNG/IT, that employ the pseudorandom number generators of Section 4.1 and combine their output sequence with the plaintext as in (4.63) to produce the ciphertext.
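The combining step shared by both schemes can be sketched as follows; next_symbol() is the indirect-thresholding sketch from Section 4.1.2 (or any equivalent generator from Appendix A), and the orbit state x doubles as the secret key.

/* Running-key substitution (4.63): c_i = t_i + k_i (mod m), with the */
/* key digits k_i drawn from a chaotic PRNG seeded by the secret x0.  */
/* next_symbol() is assumed to return digits in 0..m-1 and to advance */
/* the orbit state x, as the generators of Section 4.1 do.            */
void ssprng_encode(const int *t, int *c, long n, int m, double *x, int k)
{
    long i;
    for (i = 0; i < n; i++)
        c[i] = (t[i] + next_symbol(x, k)) % m;
}

void ssprng_decode(const int *c, int *t, long n, int m, double *x, int k)
{
    long i;
    for (i = 0; i < n; i++)
        t[i] = (c[i] - next_symbol(x, k) % m + m) % m;   /* (4.64) */
}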

4.3.1 SSPRNG/DT

In this case, direct thresholding is employed to form a pseudorandom sequence of the same length as the text to be encoded. Then a simple substitution is effected on the plaintext to produce the ciphertext.

4.3.1.1 SSPRNG/DTE

a) Alice and Bob agree on a starting point x0 of a known chaotic system. This is the key;

b) Alice uses PRNG/DT to generate as many points as necessary to encode the text;

c) Alice then uses (4.63) to form the ciphertext.

4.3.1.2 SSPRNG/DTD

a) Bob uses his knowledge of the key to form the same pseudorandom sequence Alice used in encoding;

b) Bob then applies (4.64) to recover the original text.

ti = ci - ki ( mod m )

4.3.1.3 Simulations and results

As we did for previous methods, we test this scheme by applying it to eight different text files, and encrypting each one of them with fifteen randomly chosen keys. As can be seen in Figures 4.15 and 4.16, and from Table 4.8, this scheme noticeably alters all of the Hk components.

Table 4.8: Complexity results when using SSPRNG/DT.

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
txt1.crp 7.8213 11.9710 11.6750 11.2737 10.9573 10.6971 10.4790 10.2899 10.1262 9.9761
txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
txt2.crp 7.8168 11.9488 11.6755 11.2759 10.9550 10.6989 10.4805 10.2923 10.1236 9.9731
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
txt3.crp 7.8171 11.9444 11.6713 11.2735 10.9576 10.6970 10.4815 10.2912 10.1242 9.9741
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
txt4.crp 7.8189 11.9519 11.6757 11.2775 10.9551 10.7007 10.4818 10.2924 10.1242 9.9747
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
txt5.crp 7.8184 11.9499 11.6723 11.2751 10.9585 10.6970 10.4820 10.2918 10.1246 9.9781
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
txt6.crp 7.8261 11.9645 11.6717 11.2730 10.9560 10.6974 10.4810 10.2881 10.1221 9.9744
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
txt7.crp 7.8266 11.9529 11.6659 11.2699 10.9554 10.6951 10.4794 10.2900 10.1226 9.9744
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
txt8.crp 7.8247 11.9584 11.6720 11.2723 10.9567 10.7003 10.4796 10.2878 10.1244 9.9755

4.3.2 SSPRNG/IT

In this case, indirect thresholding is employed to form a pseudorandom sequence of the same length as the text to be encoded. Then a simple substitution is effected on the plaintext to produce the ciphertext.

4.3.2.1 SSPRNG/ITE

a) Alice and Bob agree on a starting point x0 of a known chaotic system. This is the key;

b) Alice uses PRNG/IT to generate as many points as necessary to encode the text;

c) Alice then uses (4.63) to form the ciphertext.

4.3.2.2 SSPRNG/ITD

a) Bob uses his knowledge of the key to form the same pseudorandom sequence Alice used in encoding;

b) Bob then applies (4.64) to recover the original text.

4.3.2.3 Simulations and results

We test this scheme by applying it to eight different text files and encrypting each one of them with fifteen randomly chosen keys. It is also the case that this scheme alters the lower k-gram complexities the most. This can be seen in Figures 4.17 and 4.18, and in Table 4.9. The performance of the indirect thresholding scheme improves over the one employing direct thresholding. This was to be expected, since we know from Section 4.1 that PRNG/IT provides better sequences of pseudorandom numbers.

Table 4.9: Complexity results when using SSPRNG/IT.

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
txt1.crp 7.9812 12.1808 11.6746 11.2700 10.9510 10.6940 10.4779 10.2909 10.1210 9.9764

txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
txt2.crp 7.9809 12.1856 11.6780 11.2720 10.9551 10.6967 10.4791 10.2890 10.1212 9.9784
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
txt3.crp 7.9816 12.1800 11.6760 11.2668 10.9505 10.6933 10.4750 10.2870 10.1220 9.9716
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
txt4.crp 7.9816 12.1799 11.6788 11.2721 10.9552 10.6944 10.4797 10.2911 10.1237 9.9743
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
txt5.crp 7.9817 12.1798 11.6774 11.2681 10.9539 10.6914 10.4769 10.2849 10.1238 9.9763
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
txt6.crp 7.9815 12.1845 11.6792 11.2721 10.9569 10.6949 10.4772 10.2925 10.1209 9.9782
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
txt7.crp 7.9815 12.1796 11.6729 11.2678 10.9542 10.6892 10.4749 10.2879 10.1188 9.9769
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
txt8.crp 7.9818 12.1846 11.6774 11.2703 10.9548 10.6944 10.4754 10.2865 10.1158 9.9736

4.4 Secret sharing schemes

In this kind of system the members of a group, or participants, are given pieces of a secret K, called shares, by a special member, called the dealer. This is done in such a way that knowledge of a partial number of shares is insufficient to determine the secret. Secret sharing schemes can be very complicated according to how many members there are and what combinations of shares are sufficient to decode the secret. An m/m system requires the shares of all participants to determine K, whereas an m/n system needs only the shares of m members (out of n participants).

The simplest secret sharing system is a 2/2 system, and this is the one we will deal with here. In particular, we will show how an IFS can be used to implement visual cryptography [60]. In this case the human visual system performs part of the reconstruction of the original image, so this is not a lossless scheme. The task at hand, however, is to encode an image T in such a way that, given the cryptoimage C and the two shares corresponding to the two participants, decoding is possible, whereas if only one of the shares is available the image cannot be recovered.

Iterated function systems provide a natural way to perform this operation. Say the
participants are Alice, Bob, and the dealer Dave. Then Dave produces two different IFS's,
one for Alice and one for Bob, but with the same attractor G. Now, because of differences in
their given IFS's, Alice's IFS visits certain regions of G with a different frequency than Bob's.
Let Alice's IFS be WA and Bob's be WB . Dave forms his own IFS as

W D =W A ∪W B
To encode T, Dave must first iterate his IFS long enough for a density pattern P to show up on G; then he adds corresponding elements between T and P to form C. A density pattern is a matrix of the same size and resolution as the original image, whose entries are proportional to the frequency at which corresponding regions of G are visited.

To decode C, Alice and Bob must get together, form an IFS from their shares, and then compute the density pattern that needs to be subtracted from C to yield T. The reason why they cannot do this by themselves is that, even though they can each reconstruct G, they cannot create the appropriate density pattern: the visiting frequency of a region in G is determined uniquely by the functions within the IFS, and each is lacking the functions of the other's IFS.
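A minimal sketch of Dave's side of the scheme follows. It reuses chaos_game() and struct map from the sketch in Section 4.2.2.2, and the mod-256 addition of image and density pattern is our illustrative choice, since the text leaves the exact addition rule open.

#include <string.h>

/* 2/2 secret sharing sketch. The density pattern must be regenerated */
/* identically when decoding; here rand() is left unseeded, so        */
/* repeated runs of the chaos game reproduce the same pattern.        */
void encode_secret(const unsigned char T[SIZE][SIZE],
                   unsigned char C[SIZE][SIZE],
                   const struct map *wd, int m, long iters)
{
    static long P[SIZE][SIZE];
    int i, j;
    memset(P, 0, sizeof P);
    chaos_game(wd, m, iters, P);              /* density pattern of W_D */
    for (i = 0; i < SIZE; i++)
        for (j = 0; j < SIZE; j++)
            C[i][j] = (unsigned char)((T[i][j] + P[i][j]) % 256);
}

void decode_secret(const unsigned char C[SIZE][SIZE],
                   unsigned char T[SIZE][SIZE],
                   const struct map *wjoint, int m, long iters)
{
    static long P[SIZE][SIZE];
    int i, j;
    memset(P, 0, sizeof P);
    chaos_game(wjoint, m, iters, P);          /* needs W_A ∪ W_B: one   */
    for (i = 0; i < SIZE; i++)                /* share alone rebuilds   */
        for (j = 0; j < SIZE; j++)            /* the wrong pattern      */
            T[i][j] = (unsigned char)
                      (((C[i][j] - P[i][j]) % 256 + 256) % 256);
}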

4.4.1 Secret sharing example

Suppose Dave decides on using IFS's of the form given in (4.55), so he selects two values,

α1 = 2.30
α2 = 2.50

and gives one to Alice and the other to Bob. Alice forms WA as in (4.55) with α = α1, Bob forms WB as in (4.55) with α = α2, and Dave then forms his own IFS as WD = WA ∪ WB.

Figure 4.19(a) shows the original image, and Figure 4.19(b)-(d) the attractors for WA, WB, and WD, respectively. The cipherimage, encoded by Dave, is shown in Figure 4.20(a), along with the decipherments attempted by Alice and Bob alone (Figure 4.20(b)-(c)), and together (Figure 4.20(d)).

The complexity of the cipherimage, under this scheme, is also much larger than the
original image's complexity. This can be seen in Table 4.10 and Figures 4.21 and 4.22, where
we plot complexity and complexity differences.

Table 4.10: Complexity results when using secret sharing.

k-gram: 1 2 3 4 5 6 7 8 9 10
image 2.3075 2.8897 3.2584 3.3977 3.5684 3.5420 3.7435 3.8561 3.7394 3.8336
cipherimage 6.5163 9.9988 10.3125 9.9423 9.6246 9.3693 9.1504 8.9645 8.7989 8.6404

However, although Table 4.10 and Figures 4.21 and 4.22 suggest a great amount of scrambling has obscured the image in the ciphertext, one must keep in mind that the complexity measures applied and depicted in these were intended for one-dimensional sequences and not bi-dimensional data. Caution must be taken when interpreting these results.

4.5 Hybrid systems

The ability of substitution schemes and permutation matrices to hide the statistics of the
plaintext compels us to ask the question: can we increase the apparent complexity by
combining these operations? In other words, do we expect better results if we follow a
substitution by a permutation? The answer is related to the notion of a group [61][62].

Definition 4.1. A group is an ordered pair (G, °) such that G is a set and ° is an associative binary operation in G for which the following holds:

(i) ∃ I ∈ G such that ∀ a ∈ G: a ° I = I ° a = a
(ii) ∀ a ∈ G, ∃ a⁻¹ ∈ G such that a ° a⁻¹ = a⁻¹ ° a = I

The set of all possible encoding operations on finite length sequences, with functional composition as the associative binary operation, forms a group E. The set P of all permutations on sequences of length n is a subgroup of E, and so is the set S of all (invertible) substitutions on sequences of length n. If S ⊂ P or P ⊂ S, then there would be no point in encoding first using a permutation and then using a substitution (or vice versa), since a single substitution (or permutation) can be found that will perform the same encoding in a single step.

In our case, we have devised schemes that employ substitutions (SSPRNG/DT and SSPRNG/IT) and permutations (PMG/CS and PMG/IFS). Now, any permutation can be considered as a special case of substitution, but the converse does not hold true. So

P ⊆ S

and because the orbits of chaotic systems are dense, eventually any of our substitution methods will produce any possible substitution.

Therefore, the answer to our question is no. We do not expect a combination of permutation matrix generation and simple substitution to give rise to sequences that are more complex than the ones we could come up with by using only substitution.

The results of encoding the same eight text fragments using a total of fifteen different
combinations of substitutions and permutation matrices are shown in Figures 4.23 and 4.24,
and are tabulated in Table 4.11. They corroborate our expectations.

Table 4.11: Complexity results when using a hybrid system.

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
txt1.crp 7.9817 12.1776 11.6761 11.2625 10.9519 10.6897 10.4737 10.2817 10.1212 9.9726
txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
txt2.crp 7.9818 12.1831 11.6765 11.2696 10.9516 10.6909 10.4803 10.2923 10.1205 9.9726
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
txt3.crp 7.9811 12.1768 11.6732 11.2646 10.9488 10.6911 10.4718 10.2847 10.1172 9.9723
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
txt4.crp 7.9818 12.1807 11.6791 11.2679 10.9527 10.6957 10.4780 10.2879 10.1222 9.9734
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
txt5.crp 7.9834 12.1877 11.6818 11.2733 10.9600 10.6952 10.4805 10.2912 10.1264 9.9793
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
txt6.crp 7.9807 12.1856 11.6775 11.2740 10.9537 10.6949 10.4798 10.2916 10.1230 9.9745
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
txt7.crp 7.9814 12.1825 11.6807 11.2708 10.9546 10.6980 10.4777 10.2887 10.1236 9.9735
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
txt8.crp 7.9839 12.1836 11.6779 11.2716 10.9496 10.6918 10.4701 10.2871 10.1213 9.9696

4.6 Overall comparisons

We are now in a position to compare the performances of the different encoding schemes that have been proposed. Nevertheless, before we do so, it would be useful to also present the results for some known encoding scheme different from ours. The Unix operating system provides the user with a cryptographic command, crypt, that allows for encryption under a chosen key. We proceeded to analyze our text fragments with fifteen randomly chosen keys. The results are shown in Figures 4.25 and 4.26 and Table 4.12.

From the information compiled in Table 4.12 and Table 4.13, we observe that permutation matrix based systems are the ones that show the poorest performance. This does not imply they are useless, but it does imply that they should be used as cryptographic primitives: procedures that do not stand alone, but aid in the cryptographic goal of information concealment. They do help to increase the diffusion in the ciphertext, but this is not measured accurately by our complexity measure, and that is the reason why it does not show up in the tables. This also suggests that more than a single metric is necessary to assess the overall performance of a cryptographic system. Which functions are cryptographically useful still remains an open question.

Table 4.12: Complexity results when using "crypt".

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
txt1.crp 7.9808 12.1712 11.6778 11.2680 10.9530 10.6960 10.4787 10.2863 10.1226 9.9745
txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
txt2.crp 7.9808 12.1742 11.6783 11.2740 10.9571 10.6961 10.4790 10.2921 10.1200 9.9761
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
txt3.crp 7.9816 12.1716 11.6790 11.2685 10.9557 10.6985 10.4766 10.2866 10.1231 9.9781
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
txt4.crp 7.9814 12.1704 11.6736 11.2713 10.9505 10.6945 10.4747 10.2879 10.1155 9.9746
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
txt5.crp 7.9806 12.1729 11.6759 11.2708 10.9501 10.6911 10.4801 10.2877 10.1190 9.9730
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
txt6.crp 7.9811 12.1704 11.6806 11.2730 10.9588 10.6963 10.4796 10.2885 10.1237 9.9769
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
txt7.crp 7.9811 12.1724 11.6796 11.2699 10.9555 10.6956 10.4743 10.2886 10.1179 9.9763
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
txt8.crp 7.9811 12.1696 11.6771 11.2666 10.9498 10.6956 10.4764 10.2869 10.1217 9.9722

Table 4.13: Comparison of proposed methods.

k-gram: 1 2 3 4 5 6 7 8 9 10
txt1 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
PMG/CS 4.4079 8.5869 11.0202 11.2388 10.9634 10.7030 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.4079 8.5838 11.0344 11.2395 10.9629 10.7030 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8213 11.9710 11.6750 11.2737 10.9573 10.6971 10.4790 10.2899 10.1262 9.9761
SSPRNGIT 7.9812 12.1808 11.6746 11.2700 10.9510 10.6940 10.4779 10.2909 10.1210 9.9764
HYBRID 7.9817 12.1776 11.6761 11.2625 10.9519 10.6897 10.4737 10.2817 10.1212 9.9726
crypt 7.9808 12.1712 11.6778 11.2680 10.9530 10.6960 10.4787 10.2863 10.1226 9.9745
txt2 4.2456 7.5508 9.6295 10.4018 10.5250 10.4933 10.3701 10.2302 10.0913 9.9470
PMG/CS 4.2456 8.3582 10.9510 11.2297 10.9627 10.7027 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.2456 8.3527 10.9521 11.2293 10.9629 10.7027 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8168 11.9488 11.6755 11.2759 10.9550 10.6989 10.4805 10.2923 10.1236 9.9731
SSPRNGIT 7.9809 12.1856 11.6780 11.2720 10.9551 10.6967 10.4791 10.2890 10.1212 9.9784
HYBRID 7.9818 12.1831 11.6765 11.2696 10.9516 10.6909 10.4803 10.2923 10.1205 9.9726
crypt 7.9808 12.1742 11.6783 11.2740 10.9571 10.6961 10.4790 10.2921 10.1200 9.9761
txt3 4.2258 7.5407 9.6280 10.4232 10.5884 10.5089 10.4204 10.2481 10.1064 9.9510
PMG/CS 4.2258 8.3218 10.9459 11.2289 10.9619 10.7027 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.2258 8.3248 10.9487 11.2297 10.9613 10.7028 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8171 11.9444 11.6713 11.2735 10.9576 10.6970 10.4815 10.2912 10.1242 9.9741
SSPRNGIT 7.9816 12.1800 11.6760 11.2668 10.9505 10.6933 10.4750 10.2870 10.1220 9.9716
HYBRID 7.9811 12.1768 11.6732 11.2646 10.9488 10.6911 10.4718 10.2847 10.1172 9.9723
crypt 7.9816 12.1716 11.6790 11.2685 10.9557 10.6985 10.4766 10.2866 10.1231 9.9781
txt4 4.2416 7.5456 9.6651 10.4372 10.5927 10.5021 10.3712 10.2663 10.0949 9.9550
PMG/CS 4.2416 8.3506 10.9624 11.2295 10.9610 10.7028 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.2416 8.3484 10.9577 11.2285 10.9620 10.7026 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8189 11.9519 11.6757 11.2775 10.9551 10.7007 10.4818 10.2924 10.1242 9.9747
SSPRNGIT 7.9816 12.1799 11.6788 11.2721 10.9552 10.6944 10.4797 10.2911 10.1237 9.9743
HYBRID 7.9818 12.1807 11.6791 11.2679 10.9527 10.6957 10.4780 10.2879 10.1222 9.9734
crypt 7.9814 12.1704 11.6736 11.2713 10.9505 10.6945 10.4747 10.2879 10.1155 9.9746
txt5 4.2531 7.5203 9.6058 10.3133 10.5379 10.5088 10.3849 10.2353 10.0956 9.9498
PMG/CS 4.2531 8.3619 10.9500 11.2268 10.9620 10.7028 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.2531 8.3575 10.9500 11.2304 10.9632 10.7027 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8184 11.9499 11.6723 11.2751 10.9585 10.6970 10.4820 10.2918 10.1246 9.9781
SSPRNGIT 7.9817 12.1798 11.6774 11.2681 10.9539 10.6914 10.4769 10.2849 10.1238 9.9763
HYBRID 7.9834 12.1877 11.6818 11.2733 10.9600 10.6952 10.4805 10.2912 10.1264 9.9793
crypt 7.9806 12.1729 11.6759 11.2708 10.9501 10.6911 10.4801 10.2877 10.1190 9.9730
txt6 4.3558 7.5808 9.5761 10.2559 10.3899 10.3337 10.2297 10.0650 9.9262 9.7838
PMG/CS 4.3558 8.5543 11.0414 11.2378 10.9631 10.7029 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.3558 8.5302 11.0495 11.2351 10.9626 10.7030 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8261 11.9645 11.6717 11.2730 10.9560 10.6974 10.4810 10.2881 10.1221 9.9744
SSPRNGIT 7.9815 12.1845 11.6792 11.2721 10.9569 10.6949 10.4772 10.2925 10.1209 9.9782
HYBRID 7.9807 12.1856 11.6775 11.2740 10.9537 10.6949 10.4798 10.2916 10.1230 9.9745
crypt 7.9811 12.1704 11.6806 11.2730 10.9588 10.6963 10.4796 10.2885 10.1237 9.9769
txt7 4.3072 7.4460 9.4270 10.1137 10.3181 10.2369 10.1150 9.9761 9.8472 9.7047
PMG/CS 4.3072 8.4564 11.0051 11.2361 10.9621 10.7030 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.3072 8.4207 10.9873 11.2161 10.9603 10.7029 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8266 11.9529 11.6659 11.2699 10.9554 10.6951 10.4794 10.2900 10.1226 9.9744
SSPRNGIT 7.9815 12.1796 11.6729 11.2678 10.9542 10.6892 10.4749 10.2879 10.1188 9.9769
HYBRID 7.9814 12.1825 11.6807 11.2708 10.9546 10.6980 10.4777 10.2887 10.1236 9.9735
crypt 7.9811 12.1724 11.6796 11.2699 10.9555 10.6956 10.4743 10.2886 10.1179 9.9763
txt8 4.2654 7.4383 9.4133 10.2396 10.4001 10.3885 10.2652 10.1193 9.9743 9.8457
PMG/CS 4.2654 8.3987 10.9619 11.2250 10.9620 10.7028 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.2654 8.3869 10.9621 11.2245 10.9620 10.7029 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8247 11.9584 11.6720 11.2723 10.9567 10.7003 10.4796 10.2878 10.1244 9.9755
SSPRNGIT 7.9818 12.1846 11.6774 11.2703 10.9548 10.6944 10.4754 10.2865 10.1158 9.9736
HYBRID 7.9839 12.1836 11.6779 11.2716 10.9496 10.6918 10.4701 10.2871 10.1213 9.9696
crypt 7.9811 12.1696 11.6771 11.2666 10.9498 10.6956 10.4764 10.2869 10.1217 9.9722

Table 4.14 condenses the information present in the previous one. The overall
considerations we have formulated hold for both tables.

Table 4.14: Comparison of proposed methods (averages).

k-gram: 1 2 3 4 5 6 7 8 9 10
txt 4.4079 7.7456 9.6895 10.3266 10.4857 10.4090 10.3203 10.1330 10.0414 9.9090
PMG/CS 4.2878 8.4236 10.9797 11.2316 10.9623 10.7028 10.4808 10.2877 10.1189 9.9658
PMG/IFS 4.2878 8.4131 10.9802 11.2291 10.9622 10.7028 10.4808 10.2877 10.1189 9.9658
SSPRNGDT 7.8212 11.9552 11.6724 11.2739 10.9565 10.6979 10.4806 10.2904 10.1240 9.9751
SSPRNGIT 7.9815 12.1818 11.6768 11.2699 10.9539 10.6935 10.4770 10.2887 10.1209 9.9757
HYBRID 7.9820 12.1822 11.6778 11.2693 10.9529 10.6934 10.4765 10.2882 10.1219 9.9735
crypt 7.9810 12.1716 11.6777 11.2703 10.9538 10.6955 10.4774 10.2881 10.1204 9.9752

4.7 Oscar's point of view

From the standpoint of an intruder, Oscar, the difficulty in cracking a message lies partly in how much he knows about the cryptosystem, partly in the kind of attack he can launch, and partly in the cryptoalgorithm being used. For most of the proposed schemes, Oscar's uncertainty on the key is proportional to the size of the key space, and this itself is determined by the number of physical bits allotted to the key.

If the dynamical system itself is the key, then Oscar, no matter what kind of attack he can launch, will do no better than if he resorted to an exhaustive search over the key space. An exception to this is PMG/IFS: here, the system is insecure and breakable under chosen plaintext attacks, because the permutation generated by this scheme does not change in successive encodings (unless Alice and Bob choose a new key), so PMG/IFS, by itself, should only be used once with a particular key.

If Oscar knows the type of chaotic system used in PMG/CS or in SSPRNG, then known-
plaintext attacks would allow him to determine, within a small margin of error, the
parameters and initial conditions, and he could build a copy of the system employed by
Alice and Bob. This, however, would not be enough to break the system, because its chaotic
behavior guarantees that these small errors eventually lead to orbits far from the true ones,
so Oscar's dynamical system would not remain synchronized with Alice and Bob's. A system
of this type forces Oscar into a search over the key space, and the dynamical system
can be designed in such a way that this search is practically impossible.
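
The synchronization argument can be checked numerically. The short program below is a
minimal sketch, not part of the schemes themselves; the true initial state 0.3 and the
estimation error of 10^-12 are illustrative values. It iterates two copies of the quadratic map
of Appendix A, one from the true key and one from Oscar's estimate, and prints the distance
between the orbits every ten steps. Since the map stretches small errors by a factor of about
two per iteration, the 10^-12 discrepancy reaches the size of the attractor after roughly forty
iterations, at which point Oscar's copy carries no information about the true orbit.

/* diverge.c: a minimal sketch of synchronization loss under the */
/* quadratic map x -> x*x - 2. All numerical values illustrative. */

#include <stdio.h>
#include <math.h>

int main(void)
{
double xa = 0.3; /* Alice and Bob's true initial state */
double xb = 0.3 + 1e-12; /* Oscar's estimate of the key */
int n;

for (n = 1; n <= 60; n++)
{
xa = xa*xa - 2.0;
xb = xb*xb - 2.0;
if (n % 10 == 0)
{
printf("n = %2d |xa - xb| = %e\n", n, fabs(xa - xb));
}
}
return 0;
}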

Appendix A

Pseudorandom Number Generators
PRNG

/* This program produces a pseudorandom sequence using the quadratic map at */
/* FDC and direct thresholding. xo is the initial state, q is the alphabet */
/* size, and N is the sequence length. The results are written to the */
/* standard output. */
/* */
/* usage: prngdt xo q N */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Define the ranges for the chaotic map */


#define LO -2.0
#define HI 2.0

double map(double xi)


/* This function applies the chaotic mapping */
{
double xf;
xf = xi*xi - 2.0;
return xf;
}

void main(int argc, char *argv[])


{
char *xostr,*qstr,*Nstr;
double xo,y;
long q,N,i;
char offset,output;

if (argc != 4)
{
printf("\n usage: prngdt xo q N\n");
exit(0);
}

xostr = *++argv;
qstr = *++argv;
Nstr = *++argv;

xo = atof(xostr);

q = atol(qstr);
N = atol(Nstr);
if (q>256)
{
printf("Maximum alphabet size cannot exceed 256\n");
exit(0);
}

/* Adjust offset so as to start with the letter "A" if possible */


if (q<190)
{
offset = 65;
}
else
{
offset = 0;
}
for (i=0;i<N;i++)
{
y = floor(q*(xo-LO)/(HI - LO));
output = (char)y + offset;
printf("%c",output);

/* Get new point along the orbit */


xo = map(xo);
}
}
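
/* A worked example of the direct thresholding step above (values */
/* chosen for illustration only): with q = 26 and an orbit point */
/* xo = 0.7, the emitted index is floor(26*(0.7+2)/4) = 17 and, */
/* with the offset of 65, the character printed is 'R'. */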

/* This program produces a pseudorandom sequence using the quadratic map at */
/* FDC and indirect thresholding. xo is the initial state, the alphabet */
/* size is 2^k and N is the sequence length. The results are written to the */
/* standard output. */
/* */
/* usage: prngit xo k N */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Define the ranges for the chaotic map */


#define LO -2.0
#define HI 2.0

double map(double xi)


/* This function applies the chaotic mapping */
{
double xf;
xf = xi*xi - 2.0;
return xf;
}

void main(int argc, char *argv[])


{
char *xostr,*kstr,*Nstr;
double xo;
long k,N,i;
char offset,output;
int j,aux,bpwr; /* int, not char, so that aux and bpwr do not overflow when k = 8 */

if (argc != 4)
{
printf("usage: prngit xo k N alphabet size is 2^k\n");
exit(0);
}

xostr = *++argv;
kstr = *++argv;
Nstr = *++argv;

xo = atof(xostr);

k = atol(kstr);
N = atol(Nstr);

if (k>8)
{
printf("Maximum alphabet size cannot exceed 256 (k<=8)\n");
exit(0);
}

/* Adjust offset so as to start with the letter "A" if possible */


if (k<=7)
{
offset = 65;
}
else
{
offset = 0;
}
for (i=0;i<N;i++)
{
aux = 0;
bpwr = 1;
for (j=0;j<k;j++)
{
if (xo>0)
{
aux = aux + bpwr;
}

bpwr = bpwr*2;

/* Get new point along the orbit */


xo = map(xo);
}
output = (char)(aux + offset);
printf("%c",output);
}
}
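
/* A worked example of indirect thresholding (illustrative signs): */
/* with k = 3, if three successive orbit points have signs +,-,+, */
/* the extracted bits are 1,0,1 (least significant first) and the */
/* symbol is 1*1 + 0*2 + 1*4 = 5, to which the offset is added. */
/* Note that each output symbol consumes k map iterations, versus */
/* one iteration per symbol under direct thresholding. */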

Appendix B

Permutation Matrix Generation Using Chaotic Systems
PMG/CS

/* This program encodes information by generating a permutation matrix */
/* by means of the logistic map, as described in 4.2.1 of the dissertation. */
/* */
/* usage: pmgcse filename xo */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

/* Define the ranges for the chaotic map */


#define RMIN 0.0
#define RMAX 1.0

double map(double xi)


/* This function applies the chaotic mapping */
{
double xf;
xf = 4*xi*(1-xi);
return xf;
}

void main(int argc, char *argv[])


{
char *filein,*fileout,*xostring,*T,*C;
FILE *fin, *fout;
double xo,delta;
long i,j,k,m,check,*A,*P;

if (argc != 3)
{
printf("\n usage: pmgcse filename xo \n");
exit(0);
}

filein = *++argv;

xostring = *++argv;
xo = atof(xostring);

fileout = malloc(strlen(filein)+5); /* room for ".crp" and the terminating null */

sprintf(fileout,"%s.crp",filein);

/* printf("filein %s fileout %s xo %f\n",filein,fileout,xo); */

if ((fin = fopen(filein,"r"))==NULL)
{
printf("%s does not exist! \n",filein);
exit(0);
}

/* Get file size */


fseek(fin,0,SEEK_END);
m = ftell(fin);
rewind(fin);

T = malloc(m*sizeof(*T));
check = fread(T,sizeof(*T),m,fin);

if (check != m)
{
printf("Error: %d elements read, %d bytes in file",check,m);
exit(0);
}

P = malloc(m*sizeof(*P));
A = malloc(m*sizeof(*A));
C = malloc(m*sizeof(*C));

/* Initialize A */
for (i=1;i<=m;i++)
{
*(A+i-1) = i;
}

/* Create the permutation P (pi in text) */


for (i=1;i<m;i++)
{
/* Refer to 4.2.1.1 (c) */
delta = (RMAX-RMIN)/(m-i+1);
j = (long) floor(xo/delta) + 1;

if (j>(m-i+1))
{
printf("Warning: fixed point reached\n");
j = m-i+1;
}

*(P+i-1) = *(A+j-1);

/* Now adjust A */
for (k=j-1;k<(m-i);k++)
{
*(A+k) = *(A+k+1);
}

/* Get new point along the orbit */


xo = map(xo);
}

/* Refer to 4.2.1.1 (d) */


*(P+m-1) = *A;

/* Generate ciphertext C */
for (i=1;i<=m;i++)
{
*(C+*(P+i-1)-1) = *(T+i-1);
}

/* Save file with "crp" extension */


fout = fopen(fileout,"w");
fwrite(C,sizeof(*C),m,fout);

printf("%f\n",xo);

fclose(fout);
free(C);
free(A);
free(P);
free(T);
fclose(fin);
free(fileout);
}

/* This program decodes information by generating a permutation matrix */
/* by means of the logistic map, as described in 4.2.1 of the dissertation. */
/* */
/* usage: pmgcsd filename xo */
/* */
/* This is the inverse of pmgcse */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

/* Define the ranges for the chaotic map */


#define RMIN 0.0
#define RMAX 1.0

double map(double xi)


/* This function applies the chaotic mapping */
{
double xf;
xf = 4*xi*(1-xi);
return xf;
}

void main(int argc, char *argv[])


{
char *filein,*fileout,*xostring,*C,*T;
FILE *fin, *fout;
double xo,delta;
long i,j,k,m,check,*A,*P;

if (argc != 3)
{
printf("\n usage: pmgcse filename xo \n");
exit(0);
}

filein = *++argv;

xostring = *++argv;
xo = atof(xostring);

fileout = malloc(strlen(filein)+5); /* room for ".txt" and the terminating null */
sprintf(fileout,"%s.txt",filein);

/* printf("filein %s fileout %s xo %f\n",filein,fileout,xo); */

if ((fin = fopen(filein,"r"))==NULL)
{
printf("%s does not exist! \n",filein);
exit(0);
}

/* Get file size */


fseek(fin,0,SEEK_END);
m = ftell(fin);
rewind(fin);

C = malloc(m*sizeof(*C));
check = fread(C,sizeof(*C),m,fin);

if (check != m)
{
printf("Error: %d elements read, %d bytes in file",check,m);
exit(0);
}

P = malloc(m*sizeof(*P));
A = malloc(m*sizeof(*A));
T = malloc(m*sizeof(*T));

/* Initialize A */
for (i=1;i<=m;i++)
{
*(A+i-1) = i;
}

/* Create the permutation P (pi in text) */


for (i=1;i<m;i++)
{
/* Refer to 4.2.1.1 (c) */
delta = (RMAX-RMIN)/(m-i+1);
j = (long) floor(xo/delta) + 1;

if (j>(m-i+1))
{
printf("Warning: fixed point reached\n");
j = m-i+1;
}

*(P+i-1) = *(A+j-1);

/* Now adjust A */
for (k=j-1;k<(m-i);k++)
{
*(A+k) = *(A+k+1);
}

/* Get new point along the orbit */


xo = map(xo);
}

/* Refer to 4.2.1.1 (d) */


*(P+m-1) = *A;

/* Generate text T */
for (i=1;i<=m;i++)
{
*(T+i-1) = *(C+*(P+i-1)-1);
}

/* Save file with "txt" extension */


fout = fopen(fileout,"w");
fwrite(T,sizeof(*T),m,fout);

printf("%f\n",xo);

fclose(fout);
free(T);
free(A);
free(P);
free(C);
fclose(fin);
free(fileout);
}

Appendix C

Permutation Matrix Generation Using Iterated Function Systems
PMG/IFS

/* This program encodes a 64x64 image by generating a permutation matrix */
/* by means of the IFS described in (4.39) of the dissertation. */
/* The key is a 4-digit string, indicating a permutation of the four */
/* functions within the IFS. */
/* For example, 1243 leaves functions w1 and w2 unchanged but relabels */
/* w3 as w4 and w4 as w3. */
/* */
/* usage: pmgifse filename key */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

/* Define the limits of the attractor of (4.39) */


#define LO 1
#define HI 64

void IFSP(int k,int ordr[],char A[64][64],char B[64][64],double x,double y,long *i,long *j)
/* This function traverses the computation chart of Figure 4.7 */
{
int r,c,cnt;
double xk,yk;

if (k == 0)
{
r = (int) floor(y)-1;
c = (int) floor(x)-1;

/* These "if" statements should never activate, used to check for bugs */
if (*i>63)
{
printf("Error index i exceeds 63\n");
exit(0);
}
if (*j>63)
{
printf("Error index j exceeds 63\n");
exit(0);
}
if (r>63)
{
printf("Error index r exceeds 63\n");
exit(0);
}
if (c>63)
{
printf("Error index c exceeds 63\n");
exit(0);
}
B[*i][*j] = A[r][c];

/* Uncomment the following line for debugging purposes */


/* printf("(%d,%d) -> (%d,%d)\n",r,c,*i,*j); */

*j = *j+1;
if (*j>63)
{
*j = 0;
*i = *i+1;
}
}
else
{
for (cnt=0;cnt<4;cnt++)
{
switch (ordr[cnt])
{
case 1 : xk = x/2 + 32.5;
yk = y/2 + 32.5;
break;
case 2 : xk = x/2 + 0.5;
yk = y/2 + 32.5;
break;
case 3 : xk = x/2 + 0.5;
yk = y/2 + 0.5;
break;
case 4 : xk = x/2 + 32.5;
yk = y/2 + 0.5;
break;
}
IFSP(k-1,ordr,A,B,xk,yk,i,j);
}
}

return;
}

void main(int argc, char *argv[])


{
char *filein,*fileout,*key,*auxstr,TI[64][64],CI[64][64];
FILE *fin, *fout;
double x=1.5,y=1.5,auxreal;
long i,j,k,m,side,check,*A,*P;
int auxint1,auxint2,auxint3,order[4];
div_t auxdiv;

if (argc != 3)
{
printf("\n usage: pmgifse filename key \n");
exit(0);
}

filein = *++argv;
key = *++argv;

/* Check key: a valid key is a permutation of the digits 1,2,3,4. */
/* Four decimal digits have product 24 and sum 10 only when they */
/* are exactly {1,2,3,4}, so both tests are applied; the product */
/* test alone would also accept improper keys such as 2223. */
auxint1 = atoi(key);
if ((auxint1 > 4321)||(auxint1 < 1234))
{
printf("key must be selected using the digits 1,2,3,4 only\n");
exit(0);
}
auxint2 = 1;
auxint3 = 0;
for (i=3;i>=0;i--)
{
auxdiv = div(auxint1,10);
auxint1 = auxdiv.quot;
order[i] = auxdiv.rem;
auxint2 = auxdiv.rem*auxint2;
auxint3 = auxdiv.rem + auxint3;
}
if ((auxint2 != 24)||(auxint3 != 10))
{
printf("Improper key\n");
exit(0);
}

fileout = malloc(strlen(filein)+5); /* room for ".crp" and the terminating null */
sprintf(fileout,"%s.crp",filein);

if ((fin = fopen(filein,"rb"))==NULL)
{
printf("%s does not exist! \n",filein);
exit(0);
}

/* Get file size */


fseek(fin,0,SEEK_END);
m = ftell(fin);
rewind(fin);

if (m != 4096)
{
printf("This version only encodes images that are 64x64x1byte\n");
exit(0);
}

/* Set the number of compositions */


k = 6;

/* Read-in file */
fread(TI,sizeof(char),m,fin);

/* Make sure the cryptofile is initialized to zero */


for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
CI[i][j] = 0;
}
}

i=0;
j=0;

IFSP(k,order,TI,CI,x,y,&i,&j);

/* Save file with "crp" extension */
fout = fopen(fileout,"wb");
fwrite(CI,sizeof(char),m,fout);

fclose(fout);
fclose(fin);
free(fileout);
}
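
/* A typical round trip with these two programs (file name and key */
/* illustrative): "pmgifse image.raw 2413" writes the scrambled image */
/* to image.raw.crp, and "pmgifsd image.raw.crp 2413" (the decoder */
/* below) recovers the pixels in image.raw.crp.txt. */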

/* This program decodes a 64x64 image by generating a permutation matrix */
/* by means of the IFS described in (4.39) of the dissertation. */
/* The key is a 4-digit string, indicating a permutation of the four */
/* functions within the IFS. */
/* For example, 1243 leaves functions w1 and w2 unchanged but relabels */
/* w3 as w4 and w4 as w3. */
/* */
/* usage: pmgifsd filename key */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

/* Define the limits of the attractor of (4.39) */


#define LO 1
#define HI 64

void IFSP(int k,int ordr[],char A[64][64],char B[64][64],double x,double y,long *i,long *j)
/* This function traverses the computation chart of Figure 4.7 */
{
int r,c,cnt;
double xk,yk;

if (k == 0)
{
r = (int) floor(y)-1;
c = (int) floor(x)-1;

/* These "if" statements should never activate, used to check for bugs */
if (*i>63)
{
printf("Error index i exceeds 63\n");
exit(0);
}
if (*j>63)
{
printf("Error index j exceeds 63\n");
exit(0);
}
if (r>63)
{
printf("Error index r exceeds 63\n");
exit(0);
}
if (c>63)
{
printf("Error index c exceeds 63\n");
exit(0);
}
B[r][c] = A[*i][*j];

/* Uncomment the following line for debugging purposes */


/* printf("(%d,%d) -> (%d,%d)\n",r,c,*i,*j); */

*j = *j+1;
if (*j>63)
{
*j = 0;
*i = *i+1;
}
}
else
{
for (cnt=0;cnt<4;cnt++)
{
switch (ordr[cnt])
{
case 1 : xk = x/2 + 32.5;
yk = y/2 + 32.5;
break;
case 2 : xk = x/2 + 0.5;
yk = y/2 + 32.5;
break;
case 3 : xk = x/2 + 0.5;
yk = y/2 + 0.5;
break;
case 4 : xk = x/2 + 32.5;
yk = y/2 + 0.5;
break;
}
IFSP(k-1,ordr,A,B,xk,yk,i,j);
}
}

return;
}

void main(int argc, char *argv[])


{
char *filein,*fileout,*key,*auxstr,CI[64][64],TI[64][64];
FILE *fin, *fout;
double x=1.5,y=1.5,auxreal;
long i,j,k,m,side,check,*A,*P;
int auxint1,auxint2,auxint3,order[4];
div_t auxdiv;

if (argc != 3)
{
printf("\n usage: pmgifsd filename key \n");
exit(0);
}

filein = *++argv;
key = *++argv;

/* Check key: a valid key is a permutation of the digits 1,2,3,4. */
/* Four decimal digits have product 24 and sum 10 only when they */
/* are exactly {1,2,3,4}, so both tests are applied; the product */
/* test alone would also accept improper keys such as 2223. */
auxint1 = atoi(key);
if ((auxint1 > 4321)||(auxint1 < 1234))
{
printf("key must be selected using the digits 1,2,3,4 only\n");
exit(0);
}
auxint2 = 1;
auxint3 = 0;
for (i=3;i>=0;i--)
{
auxdiv = div(auxint1,10);
auxint1 = auxdiv.quot;
order[i] = auxdiv.rem;
auxint2 = auxdiv.rem*auxint2;
auxint3 = auxdiv.rem + auxint3;
}
if ((auxint2 != 24)||(auxint3 != 10))
{
printf("Improper key\n");
exit(0);
}

fileout = malloc(strlen(filein)+5); /* room for ".txt" and the terminating null */
sprintf(fileout,"%s.txt",filein);

if ((fin = fopen(filein,"rb"))==NULL)
{
printf("%s does not exist! \n",filein);
exit(0);
}

/* Get file size */


fseek(fin,0,SEEK_END);
m = ftell(fin);
rewind(fin);

if (m != 4096)
{
printf("This version only decodes images that are 64x64x1byte\n");
exit(0);
}

/* Set the number of compositions */


k = 6;

/* Read-in file */
fread(CI,sizeof(char),m,fin);

/* Make sure TI is initialized to zero first */


for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
TI[i][j] = 0;
}
}

i=0;
j=0;

IFSP(k,order,CI,TI,x,y,&i,&j);

/* Save file with "txt" extension */
fout = fopen(fileout,"wb");
fwrite(TI,sizeof(char),m,fout);

fclose(fout);
fclose(fin);
free(fileout);
}

/* This program encodes a 10000-letter text by generating a */
/* permutation matrix */
/* by means of the IFS described in (4.58) of the dissertation. */
/* The key is a 10-digit string, indicating a permutation of the ten */
/* functions within the IFS. */
/* For example 0123456798 leaves functions w0-w7 unchanged but relabels */
/* w8 as w9 and w9 as w8. */
/* */
/* usage: pmgLifse filename key */
/* */
/* key contains unrepeated digits from 0 to 9 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

void IFSP(int k,int ordr[],char A[],char B[],double x,long *j)


/* This function traverses the computation chart of Figure 4.7 */
{
int c,cnt;
double xk,b;

if (k == 0)
{
c = (int) floor(x)-1;

/* These "if" statements should never activate, used to check for bugs */
if (*j>9999)
{
printf("Error index j exceeds 9999\n");
exit(0);
}
if (c>9999)
{
printf("Error index c exceeds 9999\n");
exit(0);
}
B[*j] = A[c];

/* Uncomment the following line for debugging purposes */


/* printf("(%d) -> (%d)\n",c,*j); */

*j = *j+1;
if (*j>9999)
{
*j = 0;
}
}
else
{
for (cnt=0;cnt<10;cnt++)
{
b = (float)ordr[cnt]*1000.0+0.9;
xk = x/10 + b;
IFSP(k-1,ordr,A,B,xk,j);
}
}
return;
}

void main(int argc, char *argv[])


{
char *filein,*fileout,*key,*auxstr,T[10000],C[10000];
FILE *fin, *fout;
double x=1.5,auxreal;
long i,j,k,m,side,check;
int auxint1,auxint2;
int order[10],seen[10];

if (argc != 3)
{
printf("\n usage: pmgifse filename key \n");
exit(0);
}

filein = *++argv;
key = *++argv;

/* Check key: the key must be 10 characters long and each digit */
/* 0,...,9 must appear exactly once. A tally is used because a */
/* product test alone would accept some non-permutations. */
if (strlen(key) != 10)
{
printf("key must be 10 digits long\n");
exit(0);
}
for (i=0;i<10;i++)
{
seen[i] = 0;
}
for (i=0;i<10;i++)
{
order[i] = *(key+i)-48;
if ((order[i] > 9)||(order[i] < 0))
{
printf("key must be selected using the digits 0, . . . ,9 only\n");
exit(0);
}
seen[order[i]]++;
}
for (i=0;i<10;i++)
{
if (seen[i] != 1)
{
printf("Improper key\n");
exit(0);
}
}

fileout = malloc(strlen(filein)+5); /* room for ".crp" and the terminating null */
sprintf(fileout,"%s.crp",filein);

if ((fin = fopen(filein,"r"))==NULL)
{
printf("%s does not exist! \n",filein);
exit(0);
}

/* Get file size */


fseek(fin,0,SEEK_END);
m = ftell(fin);
rewind(fin);

if (m != 10000)
{
printf("This version only encodes 10000 character text files. \n");
exit(0);
}

/* Set the number of compositions */


k = 4;

/* Read-in file */
fread(T,sizeof(char),m,fin);

/* Make sure the cryptofile is initialized to zero */
for (i=0;i<10000;i++)
{
C[i] = 0;
}

j=0;

IFSP(k,order,T,C,x,&j);

/* Save file with "crp" extension */


fout = fopen(fileout,"w");
fwrite(C,sizeof(char),m,fout);

fclose(fout);
fclose(fin);
free(fileout);
}
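
/* Example (file name and key illustrative): "pmgLifse message.txt */
/* 8391650274" writes the permuted text to message.txt.crp; running */
/* the decoder below on that file with the same key recovers the */
/* original message. */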

/* This program decodes a 10000-letter text by generating a */
/* permutation matrix */
/* by means of the IFS described in (4.58) of the dissertation. */
/* The key is a 10-digit string, indicating a permutation of the ten */
/* functions within the IFS. */
/* For example 0123456798 leaves functions w0-w7 unchanged but relabels */
/* w8 as w9 and w9 as w8. */
/* */
/* usage: pmgLifsd filename key */
/* */
/* key contains unrepeated digits from 0 to 9 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

void IFSP(int k,int ordr[],char A[],char B[],double x,long *j)


/* This function traverses the computation chart of Figure 4.7 */
{
int c,cnt;
double xk,b;

if (k == 0)
{
c = (int) floor(x)-1;

/* These "if" statements should never activate, used to check for bugs */
if (*j>9999)
{
printf("Error index j exceeds 9999\n");
exit(0);
}
if (c>9999)
{
printf("Error index c exceeds 9999\n");
exit(0);
}
B[c] = A[*j];

/* Uncomment the following line for debugging purposes */


/* printf("(%d) -> (%d)\n",*j,c); */

*j = *j+1;
if (*j>9999)
{
*j = 0;
}
}
else
{
for (cnt=0;cnt<10;cnt++)
{
b = (float)ordr[cnt]*1000.0+0.9;
xk = x/10 + b;
IFSP(k-1,ordr,A,B,xk,j);
}
}
return;
}

void main(int argc, char *argv[])


{
char *filein,*fileout,*key,*auxstr,C[10000],T[10000];
FILE *fin, *fout;
double x=1.5,auxreal;
long i,j,k,m,side,check;
int auxint1,auxint2;
int order[10],seen[10];

if (argc != 3)
{
printf("\n usage: pmgifsd filename key \n");
exit(0);
}

filein = *++argv;
key = *++argv;

/* Check key: the key must be 10 characters long and each digit */
/* 0,...,9 must appear exactly once. A tally is used because a */
/* product test alone would accept some non-permutations. */
if (strlen(key) != 10)
{
printf("key must be 10 digits long\n");
exit(0);
}
for (i=0;i<10;i++)
{
seen[i] = 0;
}
for (i=0;i<10;i++)
{
order[i] = *(key+i)-48;
if ((order[i] > 9)||(order[i] < 0))
{
printf("key must be selected using the digits 0, . . . ,9 only\n");
exit(0);
}
seen[order[i]]++;
}
for (i=0;i<10;i++)
{
if (seen[i] != 1)
{
printf("Improper key\n");
exit(0);
}
}

fileout = malloc(strlen(filein)+5); /* room for ".txt" and the terminating null */
sprintf(fileout,"%s.txt",filein);

if ((fin = fopen(filein,"r"))==NULL)
{
printf("%s does not exist! \n",filein);
exit(0);
}

/* Get file size */


fseek(fin,0,SEEK_END);
m = ftell(fin);
rewind(fin);

if (m != 10000)
{
printf("This version only encodes 10000 character text files. \n");
exit(0);
}

/* Set the number of compositions */


k = 4;

/* Read-in file */
fread(C,sizeof(char),m,fin);

/* Make sure the text file is initialized to zero */

for (i=0;i<10000;i++)
{
T[i] = 0;
}

j=0;

IFSP(k,order,C,T,x,&j);

/* Save file with "txt" extension */


fout = fopen(fileout,"w");
fwrite(T,sizeof(char),m,fout);

fclose(fout);
fclose(fin);
free(fileout);
}

Appendix D

Substitution Encoding
SSPRNG

/* This program takes two files as its input and writes to the */
/* standard output, character by character, their sum modulo 256. */
/* Combined with prngit and prngdt it can implement the substitution */
/* scheme of Section 4.3. */
/* */
/* usage: combine file1 file2 */
/* */
/* Result = file1 + file2 (mod 256) goes to the standard output. */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

void main(int argc, char *argv[])


{
char *file1,*file2,*F1,*F2;
FILE *f1, *f2;
long i,m1,m2;
char rslt;

if (argc != 3)
{
printf("\n usage: combine file1 file2 produces file1+file2 \n");
exit(0);
}

file1 = *++argv;
file2 = *++argv;

if ((f1 = fopen(file1,"rb"))==NULL)
{
printf("%s does not exist! \n",file1);
exit(0);
}

if ((f2 = fopen(file2,"rb"))==NULL)
{
printf("%s does not exist! \n",file2);
exit(0);
}

/* Get file sizes */

fseek(f1,0,SEEK_END);
m1 = ftell(f1);
rewind(f1);
fseek(f2,0,SEEK_END);
m2 = ftell(f2);
rewind(f2);

if (m1 != m2)
{
printf("files must be equal size \n");
exit(0);
}

/* Allocate memory */
F1 = malloc(m1*sizeof(*F1));
F2 = malloc(m2*sizeof(*F2));

/* Read-in files */
fread(F1,sizeof(*F1),m1,f1);
fread(F2,sizeof(*F2),m2,f2);
for (i=0;i<m1;i++)
{
rslt = *(F1+i) + *(F2+i);
printf("%c",rslt);
}
free(F2);
free(F1);
fclose(f2);
fclose(f1);
}

/* This program takes two files as its input and writes to the */
/* standard output, character by character, their difference modulo 256. */
/* Combined with prngit and prngdt it can implement the substitution */
/* scheme of Section 4.3. */
/* */
/* usage: decombine file1 file2 */
/* */
/* Result = file1 - file2 (mod 256) goes to the standard output. */

#include <stdio.h>
#include <stdlib.h>

void main(int argc, char *argv[])


{
char *file1,*file2,*F1,*F2;
FILE *f1, *f2;
long i,m1,m2;
char rslt;

if (argc != 3)
{
printf("\n usage: decombine file1 file2 result = file1 - file2 \n");
exit(0);
}

file1 = *++argv;
file2 = *++argv;

if ((f1 = fopen(file1,"rb"))==NULL)
{
printf("%s does not exist! \n",file1);
exit(0);
}

if ((f2 = fopen(file2,"rb"))==NULL)
{
printf("%s does not exist! \n",file2);
exit(0);
}

/* Get file sizes */


fseek(f1,0,SEEK_END);
m1 = ftell(f1);
rewind(f1);
fseek(f2,0,SEEK_END);
m2 = ftell(f2);
rewind(f2);

if (m1 != m2)
{
printf("files must be equal size \n");
exit(0);
}

/* Allocate memory */
F1 = malloc(m1*sizeof(*F1));
F2 = malloc(m2*sizeof(*F2));

/* Read-in files */
fread(F1,sizeof(*F1),m1,f1);
fread(F2,sizeof(*F2),m2,f2);
for (i=0;i<m1;i++)
{
rslt = *(F1+i) - *(F2+i);
printf("%c",rslt);
}
free(F2);
free(F1);
fclose(f2);
fclose(f1);
}
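
/* A complete substitution round trip (key and file names are */
/* illustrative): */
/* */
/* prngit 0.43 8 10000 > pad */
/* combine message.txt pad > message.crp */
/* decombine message.crp pad > message.rec */
/* */
/* decombine subtracts exactly what combine added, character by */
/* character, modulo 256, so message.rec matches the 10000-byte */
/* message.txt. */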

Appendix E

Secret Sharing Schemes

/* This program implements the iterated function systems of (4.55) */
/* and (4.66) to produce the images necessary for example 4.4.1. */
/* */
/* usage: genimg alpha1 alpha2 N basename */
/* */
/* alpha1 and alpha2 define the parameters for the two IFS WA and */
/* WB, N is the number of points to iterate, and basename is the */
/* base name for the binary files that will contain the data for */
/* the attractors. Four of them are calculated: basenameA is the */
/* attractor for the system with alpha1 as parameter, basenameB is */
/* the file for GB, basenameDE is for GD, and basenameDD is for GD */
/* as used in decoding. */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

void IFSAB(double *x,double *y, double alpha)


/* This function implements the IFS of (4.55) */
{
double xk,yk,xi,yi,rrnum;
int irnum,select;

xi = *x;
yi = *y;

irnum = rand();
/* Dividing by RAND_MAX+1.0 keeps select strictly below 5 even when */
/* rand() returns RAND_MAX. */
rrnum = (double)irnum*5.0/((double)RAND_MAX+1.0);
select = (int)floor(rrnum);

switch (select)
{
case 0: xk = alpha*xi + 65;
yk = yi + 65*alpha;
break;
case 1: xk = xi + alpha;
yk = alpha*yi + 65;
break;
case 2: xk = alpha*xi + 1;
yk = yi + alpha;
break;

case 3: xk = xi + 65*alpha;
yk = alpha*yi + 1;
break;
case 4: xk = (alpha-1)*xi + 66;
yk = (alpha-1)*yi + 66;
break;
}
*x = xk/(1+alpha);
*y = yk/(1+alpha);
return;
}

void IFSWD(double *x, double*y, double alpha1, double alpha2)


/* This function implements the IFS of (4.66) */
{
double rrnum;
int irnum,select;

irnum = rand();
/* Dividing by RAND_MAX+1.0 keeps select strictly below 2. */
rrnum = (double)irnum*2.0/((double)RAND_MAX+1.0);
select = (int)floor(rrnum);

switch (select)
{
case 0: IFSAB(x,y,alpha1);
break;
case 1: IFSAB(x,y,alpha2);
break;
}
return;
}

void main(int argc, char *argv[])


{
char *basestr, *alpha1str, *alpha2str, *Nstr,*name;
long N,count;
double x=1.5,y=1.5;
double alpha1, alpha2;
double rGA[64][64],rGB[64][64],rGDE[64][64],rGDD[64][64];

double minga,maxga,mingb,maxgb,mingde,maxgde,mingdd,maxgdd;
char GA[64][64],GB[64][64],GDE[64][64],GDD[64][64];
int i,j;
FILE *fp;

if (argc != 5)
{
printf("usage: genimg alpha1 alpha2 N basename\n");
exit(0);
}

/* Read parameters */

alpha1str = *++argv;
alpha2str = *++argv;
Nstr = *++argv;
basestr = *++argv;

alpha1 = atof(alpha1str);
alpha2 = atof(alpha2str);
N = atol(Nstr);

/* Clear rawimages */
for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
rGA[i][j] = 0.0;
rGB[i][j] = 0.0;
rGDE[i][j] = 0.0;
rGDD[i][j] = 0.0;
}
}

/* Generate rGA */
for (count=0;count<N;count++)
{
IFSAB(&x,&y,alpha1);
i = (int) floor(x)-1;
j = (int) floor(y)-1;
if ((i<0)||(i>63)||(j<0)||(j>63))
{
printf("index exceeds matrix dimensions i = %d j = %d\n",i,j);
continue; /* skip out-of-range points instead of writing past the array */
}
rGA[i][j] = rGA[i][j] +1;
}

/* Generate rGB */
for (count=0;count<N;count++)
{
IFSAB(&x,&y,alpha2);
i = (int) floor(x)-1; /* the -1 matches rGA and keeps indices in 0..63 */
j = (int) floor(y)-1;
rGB[i][j] = rGB[i][j] +1;
}

/* Generate rGDE */
for (count=0;count<N;count++)
{
IFSWD(&x,&y,alpha1,alpha2);
i = (int) floor(x)-1;
j = (int) floor(y)-1;
rGDE[i][j] = rGDE[i][j] +1;
}

/* Generate rGDD */
for (count=0;count<N;count++)
{
IFSWD(&x,&y,alpha1,alpha2);
i = (int) floor(x)-1;
j = (int) floor(y)-1;
rGDD[i][j] = rGDD[i][j] +1;
}

/* Get maximums and minimums */


maxga = rGA[0][0];
minga = rGA[0][0];
maxgb = rGB[0][0];
mingb = rGB[0][0];
maxgde = rGDE[0][0];
mingde = rGDE[0][0];
maxgdd = rGDD[0][0];
mingdd = rGDD[0][0];

for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
if (rGA[i][j] > maxga) maxga = rGA[i][j];
if (rGA[i][j] < minga) minga = rGA[i][j];
if (rGB[i][j] > maxgb) maxgb = rGB[i][j];
if (rGB[i][j] < mingb) mingb = rGB[i][j];
if (rGDE[i][j] > maxgde) maxgde = rGDE[i][j];
if (rGDE[i][j] < mingde) mingde = rGDE[i][j];
if (rGDD[i][j] > maxgdd) maxgdd = rGDD[i][j];
if (rGDD[i][j] < mingdd) mingdd = rGDD[i][j];
}
}

/* Normalize the attractors */

for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
rGA[i][j] = (rGA[i][j]-minga)*255.9/(maxga-minga);
rGB[i][j] = (rGB[i][j]-mingb)*255.9/(maxgb-mingb);
rGDE[i][j] = (rGDE[i][j]-mingde)*255.9/(maxgde-mingde);
rGDD[i][j] = (rGDD[i][j]-mingdd)*255.9/(maxgdd-mingdd);
}
}

/* create character matrices */


for (i=0;i<64;i++)
{
for (j=0;j<64;j++)
{
GA[i][j] = (char)floor(rGA[i][j]);
GB[i][j] = (char)floor(rGB[i][j]);
GDE[i][j] = (char)floor(rGDE[i][j]);
GDD[i][j] = (char)floor(rGDD[i][j]);
}
}

/* Save files */

name = malloc(strlen(basestr)+3);

sprintf(name,"%sA",basestr);
fp = fopen(name,"wb");
fwrite(GA,sizeof(char),4096,fp);
fclose(fp);

sprintf(name,"%sB",basestr);
fp = fopen(name,"wb");
fwrite(GB,sizeof(char),4096,fp);

fclose(fp);

sprintf(name,"%sDE",basestr);
fp = fopen(name,"wb");
fwrite(GDE,sizeof(char),4096,fp);
fclose(fp);

sprintf(name,"%sDD",basestr);
fp = fopen(name,"wb");
fwrite(GDD,sizeof(char),4096,fp);
fclose(fp);

free(name);
}

Appendix F

Complexity Analysis Programs

/* This program calculates the k-order entropy of a sequence whose basic */
/* elements are ASCII characters. */
/* */
/* usage: kentropy ni [nf] infile */
/* */
/* ni - is the minimum order or size of the ngrams considered */
/* nf - is the maximum order or size of the ngrams considered */
/* infile - is the name of the file to analyze. */
/* */
/* Symbols are grouped with no overlap. */

# include "tcomplexity.h"

void main(int argc, char *argv[])


{
int ni,nf,k;
char *ninumber,*nfnumber,*inname;
char *ngram,*inchar;
FILE *infile;
double H;

link root;

if (argc < 3)
{
printf("usage: kentropy ni [nf] infile\n");
}
else
{
ninumber = *++argv;
ni = atoi(ninumber);

if (argc == 4)
{
nfnumber = *++argv;
inname = *++argv;
nf = atoi(nfnumber);
}
else
{
inname = *++argv;
nf = ni;
}

/* Open files */
infile = fopen(inname,"r");

if (infile == NULL)
{
printf("Non-existent input file \"%s\"\n",inname);
}
else
{
if (ni <= 0)
{
printf("n-grams should be larger than zero\n");
}
else
{
/* This is where the actual processing takes place */

for (k = ni; k <= nf; k++)


{
/* Initialize n-gram */
/* the "k+1" in the following allocation accommodates */
/* the terminating null character */
ngram = malloc(k+1);
rewind(infile);
inchar = fgets(ngram,k+1,infile);

root = makeroot(ngram);

while (!feof(infile))
{
inchar = fgets(ngram,k+1,infile);
if (inchar != NULL)
{
addgram(&root,ngram);
}
}
calcprob(&root);
sortchain(&root,1);
/* Now proceed to calculate the entropy */
H = chainentropy(&root);

destroylink(&root);
free(ngram); /* release the n-gram buffer before the next k */
/* printf("%d,%f\n",k,H); */
printf("%f\n",H);
}
}
}
if (infile != NULL) fclose(infile);
}
}

/* This library contains functions necessary to analyze the complexity of */
/* sequences of elementary messages. */
/* */
/* tcomplexity.h */

# include <stdio.h>
# include <stdlib.h>
# include <string.h>
# include <ctype.h>
# include <math.h>

typedef struct
{
char *key;
long int f;
double p;
void *rlink;
} link;

link makeroot(char *key)


/* Creates the root link of a chain */
{
int ksize;
link newlink;
ksize = strlen(key)+1;
newlink.key = malloc(ksize);
strcpy(newlink.key,key);
newlink.rlink = NULL;
newlink.f = 1;
newlink.p = 0.0;
return newlink;
}

void appendlink(link *root,char *key)


/* Appends a link to the right-side of a chain */
{
int ksize;
link *plink;
plink = root;

while (plink->rlink != NULL)


{
plink = plink->rlink;

}
plink->rlink = malloc(sizeof(link));
plink = plink->rlink;
ksize = strlen(key)+1;
plink->key = malloc(ksize);
strcpy(plink->key,key);
plink->f = 1;
plink->p = 0.0;
plink->rlink = NULL;
return;
}

void showlink(link slink)


/* Shows all right-side links in a chain */
{
link *plink;
printf("ngram: %s f: %d p: %f\n",slink.key,slink.f,slink.p);
if (slink.rlink != NULL)
{
plink = slink.rlink;
showlink(*plink);
}
return;
}

void savelink(link slink, FILE *out)


/* This saves all links after slink into a specified file */
{
link *plink;
fprintf(out,"%s %f\n",slink.key,slink.p);
/* fprintf(out,"\"%s\",\"%f\"\n",slink.key,slink.p); */
if (slink.rlink != NULL)
{
plink = slink.rlink;
savelink(*plink,out);
}
return;
}

void addgram(link *root,char *key)


/* Looks for the place to insert information about an ngram */
{

link *plink;
plink = root;
while ((plink != NULL)&&(strcmp(key,plink->key) != 0))
{
plink = plink->rlink;
}
if (plink == NULL)
{
appendlink(root,key);
}
else
{
plink->f++;
}
return;
}

void calcprob(link *root)


/* This function calculates the probabilities of the ngrams in a chain */
{
long int N;
link *plink;

plink = root;
N = 0;

while (plink != NULL)


{
N = N + plink->f;
plink = plink->rlink;
}

plink = root;

while (plink != NULL)


{
plink->p = (double) plink->f / N;
plink = plink->rlink;
}
return;
}

void sortchain(link *root, int method)
/* Sorts the chain that starts in root, if method is 1 according to */
/* probability, if it is 0 according to ngram value. */
{
link *plink1,*plink2;
int sorted=0;
char *tkey;
long int tf;
double tp;

while (!sorted)
{
plink1 = root;
sorted = 1;

while ((plink1->rlink) != NULL)


{
plink2 = plink1->rlink;
if (method == 1)
{
/* Take decision according to probability value */
if (plink1->f < plink2->f)
{
sorted = 0;
tkey = plink1->key;
tf = plink1->f;
tp = plink1->p;

plink1->key = plink2->key;
plink1->f = plink2->f;
plink1->p = plink2->p;

plink2->key = tkey;
plink2->f = tf;
plink2->p = tp;
}
}
else
{
/* Take decision according to key value */
if (strcmp(plink1->key,plink2->key) > 0)
{
sorted = 0;

tkey = plink1->key;
tf = plink1->f;
tp = plink1->p;

plink1->key = plink2->key;
plink1->f = plink2->f;
plink1->p = plink2->p;

plink2->key = tkey;
plink2->f = tf;
plink2->p = tp;
}
}
plink1 = plink1->rlink;
}
}
return;
}

void destroylink(link *root)


/* Deallocates memory associated to a chain. The root link itself */
/* lives in the caller (see kentropy), so only its key and the */
/* heap-allocated links that follow it are freed. */
{
link *plink,*next;

free(root->key);
plink = root->rlink;

while (plink != NULL)
{
next = plink->rlink;
free(plink->key);
free(plink);
plink = next;
}

return;
}

double chainentropy(link *root)


/* Calculates the entropy of a list of ngrams */
{
double H=0.0;
link *plink;
if (root != NULL)
{
plink = root->rlink;
H = (-1.0*root->p)* log(root->p)/log(2) + chainentropy(plink);
}

return H;
}

void shiftin(char *new, char ins, int sz)
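/* Shifts the contents of the buffer new one position to the left and */
/* inserts ins as its last character; sz is the buffer length. */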


{
int i;
for (i=0;i<sz-1;i++)
{
*(new+i) = *(new+i+1);
}
*(new+sz-1) = ins;
}

/* This unit contains several definitions and procedures that are */
/* useful when dealing with sequences. */
/* sequence.h */

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <math.h>

#define BIN 2
#define ASC 3
#define HEX 16
#define DEC 10

typedef struct
{
char *p; /* p is the base address of the sequence */
long int unsigned asize; /* Stores the number of atoms */
char unsigned unit; /* Sets the size in bits of an atom */
long int unsigned element; /* Points to an atom in the sequence */
long int unsigned size; /* Stores the size of the sequence (bytes)*/
char unsigned bunit; /* Minimum number of bytes necessary/atom */
} Sequence;

/**************** Procedures associated to sequences ******************/

void adjust_bunit(Sequence *s)


/* Determines and fixes the minimum amount of space necessary to store */
/* an atom. */
{
if ((s->unit % 8) > 0) s->bunit = s->unit / 8 +1;
else s->bunit = s->unit/8;
}

void create_sequence(Sequence *s, long int unsigned asize, char unsigned unit)


/* This function creates a sequence s and reserves memory for it */
{
s->asize = asize;
s->unit = unit;
s->size = (asize*unit)/8;

if (((asize*unit) % 8) > 0) s->size++;
adjust_bunit(s);
s->p = malloc(s->size);
s->element = 0;
/* printf("symbol size %d\n",s->size); */
}

void init_sequence(Sequence *s, char *string)


/* This function initializes a sequence with the contents of a string */
/* Note: since the string is null terminated, the symbol \0 becomes */
/* the last element of the sequence */
{
s->size = strlen(string)+1;
s->asize = s->size;
s->p = malloc(s->size);
strcpy(s->p,string);
s->element = 0;
s->unit = 1;
adjust_bunit(s);
}

void load_sequence(Sequence *s, char *filename)


/* This function loads a sequence with the contents of a file */
{
FILE *fp;
long int unsigned fsize,count;

fp = fopen(filename , "rb");
if (fp == NULL)
{
printf("Error: Could not open %s for input\n", filename);
assert(fp);
}
/* Get the size of the file */
fseek(fp,0,SEEK_END);
fsize = ftell(fp);
printf("File %s is %lu bytes in size \n",filename,fsize);
rewind(fp); /* rewind instead of closing and reopening the file */

s->p = malloc(fsize);

if (s->p == NULL)
{
printf("Error: unable to allocate %lu bytes for the sequence\n",fsize);
assert(s->p);
}

s->size = fsize;
s->asize = fsize;
s->element = 0;
s->unit = 1;
adjust_bunit(s);
count = fread(s->p,1,fsize,fp);
if (count != fsize)
{
printf("Warning: number of bytes read and filesize are different.\n");
}
fclose(fp);
}

void unload_sequence(Sequence s, char *filename)


/* This function unloads a sequence into file */
{
FILE *fp;
long int unsigned count;

fp = fopen(filename , "wb");
if (fp == NULL)
{
printf("Error: Could not open %s for output\n", filename);
assert(fp);
}
fwrite(s.p,1,s.size,fp);
fclose(fp);
}

void set_unit(Sequence *s, char usz)


/* This function sets the length, in bits, of an atom */
{
s->unit = usz;

s->asize = s->size*8 / usz;
adjust_bunit(s);
}

void set_position(Sequence *s, long int unsigned pos)


/* Sets the element pointer to pos */
{
if (pos < s->asize)
{
s->element = pos;
}
else
{
printf("Warning: position not updated, off-limits\n");
}
}

long int unsigned get_position(Sequence *s)


/* Returns current position of the sequence pointer */
{
return s->element;
}

void reset_position(Sequence *s)


/* Resets the position indicator to zero */
{
s->element = 0;
}

int end_of_sequence(Sequence s)
/* Returns a 1 if at end of sequence and zero otherwise */
{
if (s.element == s.asize-1) return 1;
else return 0;
}

char get_bit(Sequence s, long int unsigned bitnum)


/* Retrieves bit number bitnum in the sequence */

{
long int unsigned index;
char remainder,cchar,shift,count;

/* printf("Bit number : %d\n",bitnum); */


index = bitnum/8;
/* printf("Offset : %d\n",index); */
remainder = bitnum % 8;
/* printf("Remainder : %d\n",remainder); */
shift = 7-remainder;
/* printf("Shift : %d\n",shift); */
cchar = *(s.p+index);
/* printf("Actual char: %02X \n",cchar); */
cchar = cchar>>shift;
/* printf("After shift: %02X \n",cchar); */
cchar = cchar&0x01;
/* printf("After AND: %02X \n",cchar); */
return cchar;
}

void get_subsequence(Sequence *d,Sequence *s, long int unsigned base)


/* This function fetches a subsequence from the sequence s and stores */
/* it in the sequence d. The size in atoms of the sequence d */
/* determines the number of atoms fetched. No update of the current */
/* element pointer is done. */
{
int aux,index,rem;
char unsigned ibit;
long int totbits;

if ((d->unit) != (s->unit))
{
printf("Error: Destination and Source atom sizes differ\n");
assert(0);
}

totbits = (d->asize)*(d->unit);

if ((base+d->asize) <= s->asize)


{

/* Make sure reserved space is initialized to zero */

for (aux=0;aux<d->size;aux++)
{
*(d->p+aux) = 0;
}

for (aux=0; aux<totbits;aux++)


{
index = aux/8;
rem = aux % 8;
ibit = get_bit(*s,base*(s->unit)+aux);
ibit = ibit << (7-rem);
*(d->p+index)=(*(d->p+index)|ibit);

}
}
else
{
printf("Warning: no subsequence retrieved\n");
}
}

int same_symbol(Sequence s1, Sequence s2)


/* This function returns 1 if s1 and s2 are the same symbol */
/* and zero otherwise. */
{
int tflag=1;
long int unsigned count=0;
/* check that unit sizes and lengths are the same */

if ((s1.size != s2.size)||(s1.unit != s2.unit)) tflag = 0;

while ((tflag ==1) && (count < s1.size))


{
if (*(s1.p+count)!= *(s2.p+count)) tflag = 0;
count++;
}
return tflag;
}

int is_greater_symbol(Sequence s1, Sequence s2)


/* This function returns 1 if s1 > s2 and zero otherwise */
{
int gflag=0,eflag=1;
long int unsigned count=0;
/* check that unit sizes and lengths are the same */

if ((s1.size != s2.size)||(s1.unit != s2.unit))


{
printf("Warning: comparing symbols of different size\n");
gflag = 0;
}

while ((gflag ==0) && (count < s1.size) &&(eflag == 1))


{
if ((char unsigned)*(s1.p+count) > (char unsigned)*(s2.p+count)) gflag = 1;
if ((char unsigned)*(s1.p+count) < (char unsigned)*(s2.p+count)) eflag = 0;
count++;
}
return gflag;
}

void get_next_subsequence(Sequence *d,Sequence *s)


/* This function fetches a subsequence from the sequence s and stores */
/* it in the sequence d. The size in atoms of the sequence d */
/* determines the number of atoms fetched. The subsequence is fetched */
/* from the current pointer, which is afterward incremented BY ONE */
/* (and not by the length of the data read). */
{
get_subsequence(d,s,s->element);
if (s->element < s->asize-1) s->element++;
}

void binary_rep(char unsigned *snum,char num)


/* This function takes a decimal integer and produces its binary */
/* representation. */
{
int aux;
char unsigned anum;

anum = num;
for (aux=7; aux>=0;aux--)
{
if ((anum%2) == 0) *(snum+aux) = '0'; else *(snum+aux) = '1';
anum = anum /2;
}
*(snum+8) = '\0'; /* aux is -1 after the loop; the terminator belongs at index 8 */
}

void show_sequence(Sequence s, char mode)


/* This function displays a sequence as a binary string, an ASCII */
/* stream, a series of hexadecimal numbers or a series of decimal */
/* numbers according to the value of mode. */
{
int aux,num;
char unsigned snum[10];
char unsigned tnum;

for (aux=0;aux<s.size;aux++)
{
if (mode == BIN)
{
if ((aux % 8) == 0) printf("\n");
num = *(s.p+aux);
binary_rep(snum,num);
printf("%s ",snum);
}
else
{
if (mode == ASC)
{
if ((aux % 72) == 0) printf("\n");
printf("%c",*(s.p+aux));
}
else
{
tnum = *(s.p+aux);
if (mode == HEX)
{
if ((aux % 24) == 0) printf("\n");
printf("%02X",tnum);

if (((aux+1) % 4) == 0) printf(" "); else printf("-");
}
else
{
if ((aux % 16) == 0) printf("\n");
printf("%03d",tnum);
if (((aux+1) % 4) == 0) printf(" "); else printf("-");
}
}
}
}
printf("\n");
}

void destroy_sequence(Sequence *s)


/* De-allocate resources for the sequence */
{
free(s->p); /* the Sequence struct itself belongs to the caller */
s->p = NULL;
s->size = 0;
s->asize = 0;
s->element = 0;
s->unit = 0;
}

/***********************************************************************/

typedef struct
{
char *s;
long int frequency;
void *next;
} Data_element;

/************************ List methods ****************************/

Data_element *create_list(Sequence symbol)


/* This function returns a pointer to the beginning of a list of */
/* elements whose type is Data_element */
{

Data_element *temp;

temp = malloc(sizeof(Data_element));
temp->s = malloc(symbol.size);
temp->next = NULL;
temp->frequency = 1;
if (memcpy(temp->s,symbol.p,symbol.size) == NULL)
{
printf("Warning: couldn't copy symbol into list\n");
}
return temp;
}

void add_to_list(Data_element **root, Sequence symbol)


/* Adds an element to a list that begins at root */
{
Data_element *tpointer, *tnew;
Sequence tsymbol,tnsymbol;

if (*root == NULL)
{
*root = create_list(symbol);
}
else
{
/* a list exists and must be updated */

tpointer = *root; /* point to the first element in the list */


tsymbol = symbol; /* copy size and characteristics of the symbol */
tsymbol.p = tpointer->s; /* point to information in list */

tnsymbol = symbol; /* prepare the "next" symbol in the list */


if (tpointer->next != NULL) tnsymbol.p = ((Data_element *)(tpointer->next))->s;

if (is_greater_symbol(tsymbol,symbol) == 1)
{
/* Insert at the beginning */
tnew = create_list(symbol);
tnew->next = *root;
*root = tnew;
}
else

{
/* Advance pointer until the end of the list is reached, or a symbol */
/* match has occured or an insertion point has been encountered. */

while ((tpointer->next!=NULL)&&(same_symbol(tsymbol,symbol)==0)&&(is_greater_symbol(tnsymbol,symbol)==0))
{
tpointer = tpointer->next;
tsymbol.p = tpointer->s;
if (tpointer->next != NULL) tnsymbol.p = ((Data_element *)(tpointer->next))->s;
}
/* We are at the insertion point, determine action to follow */
if (same_symbol(tsymbol,symbol) == 1)
{
/* Update frequency */
tpointer->frequency++;
}
else
{

if (tpointer->next == NULL)
{
/* At end of the list and NOT the same symbol, so append symbol to the end */
tnew = create_list(symbol);
tpointer->next = tnew;
}
else
{
/* Somewhere in between the list, insert new symbol */
tnew = create_list(symbol);
tnew->next = tpointer->next;
tpointer->next = tnew;
}
}
}
}
}

void destroy_list(Data_element *root)

/* This function releases the memory used in a list */
{
Data_element *troot;
if (root != NULL)
{
destroy_list(root->next);
free(root->s);
free(root);
}
}

long int list_total(Data_element *root)


/* Returns the sum of the frequencies of the elements in a list */
{
if (root != NULL)
{
return root->frequency + list_total(root->next);
}
else
{
return 0;
}
}

void display_list(Data_element *root, Sequence symbol,char mode )


/* This function displays the contents of a list according to the */
/* formating information present in symbol, specifically the number */
/* of bytes a symbol employs is of importance. Unpredictable results */
/* occur when called with a symbol whose size is actually larger than */
/* the memory space allocated for the symbols within the list */
{
if (root != NULL)
{
symbol.p = root->s;
show_sequence(symbol,mode);
printf("With frequency: %d\n",root->frequency);
display_list(root->next,symbol,mode);
}
}

void save_f_data(Data_element *root,char *filename)
/* This function saves the frequency information in a list as ASCII numbers */
{
FILE *fp;
Data_element *tpointer;
long int count=1,Nk;
float p;

fp = fopen(filename , "w");
if (fp == NULL)
{
printf("Error: Could not open %s for output\n", filename);
assert(fp);
}

Nk = list_total(root);
tpointer = root;
while (tpointer != NULL)
{
p = (float)tpointer->frequency/Nk;
fprintf(fp,"%5d , %5f\n",count,p);
tpointer = tpointer->next;
count++;
}
fclose(fp);
}

void save_s_data(Data_element *root, Sequence symbol, char *filename)


/* This function saves the symbols in a list as ASCII characters */
{
FILE *fp;
Data_element *tpointer;
long int aux;

fp = fopen(filename , "w");
if (fp == NULL)
{
printf("Error: Could not open %s for output\n", filename);
assert(fp);
}

tpointer = root;
while (tpointer != NULL)

{
for (aux=0;aux<symbol.size;aux++)
{
fprintf(fp,"%c",*(tpointer->s+aux));
}
fprintf(fp,"\n");
tpointer = tpointer->next;
}
fclose(fp);
}

Data_element *form_list(Sequence *s, long int symsize)


/* This function forms a probabilistic distribution list of symbols of */
/* symsize atoms in length. It is understood that the sequence s has */
/* its unit size fixed by a previous call to set_unit(). */
{
Sequence symbol;
Data_element *Pk;
create_sequence(&symbol,symsize,s->unit);
reset_position(s);
get_next_subsequence(&symbol,s);
Pk = create_list(symbol);
while ((get_position(s) < s->asize-symsize))
{
get_next_subsequence(&symbol,s);
add_to_list(&Pk,symbol);
}
reset_position(s);
destroy_sequence(&symbol); /* release the scratch symbol buffer */
return Pk;
}

double entropy(Data_element *root)


/* This function returns the entropy (using logarithm base 2) of a list */
/* of frequencies. */
{
Data_element *tpointer;
long int Nk;
double p,H=0.0;

Nk = list_total(root);
tpointer = root;
while (tpointer != NULL)
{
p = (double)tpointer->frequency/Nk;
H = H - p*log(p)/log(2);
tpointer = tpointer->next;
}
return H;
}

double complexity(Sequence *s,long int unsigned hterm)


/* This function calculates the actual complexity of a sequence s */
/* up to its hterm component. The whole complexity is obtained by */
/* setting hterm equal to the size in atoms of the sequence. */
{
Data_element *Pk;
long int count;
double C=0.0;
if (hterm > s->asize)
{
printf("Warning: re-adjusting highest term in complexity count\n");
hterm = s->asize;
}
for (count=1;count <= hterm;count++)
{
Pk=form_list(s,count);
C = C + entropy(Pk);
printf("calculating entropy for symbols of size: %d, cum C: %f\n",count,C);
destroy_list(Pk);
}
return C;
}

void save_complexity_vector(Sequence *s,long int unsigned hterm,char *filename)


/* Saves the components up to hterm of the complexity vector in a file */
{
FILE *fp;

Data_element *Pk;
long int count;
double Ck=0.0;

fp = fopen(filename , "w");
if (fp == NULL)
{
printf("Error: Could not open %s for output\n", filename);
assert(fp);
}

if (hterm > s->asize)


{
printf("Warning: re-adjusting highest term in complexity count\n");
hterm = s->asize;
}
for (count=1;count <= hterm;count++)
{
Pk=form_list(s,count);
Ck = entropy(Pk);
fprintf(fp,"%f\n",Ck);
destroy_list(Pk);
}
fclose(fp);
}

double get_k(long int unsigned n,int q)


/* returns k as a solution to q^k + k = n - 1, where q is the length */
/* of the alphabet (for our purposes q could be 2^unit) and n is the */
/* length of a sequence. k is the "turning point" in the sequence */
/* Newton-Raphson's method is used to obtain k. */
{
double xo, xn;
double rn, rq;

rn = (double)n;
rq = (double)q;

xn = log(rn+1)/log(rq);
do
{
xo = xn;
xn = xo - (pow(rq,xo)+xo-rn+1)/(pow(rq,xo)*log(rq)+1);

} while (fabs(xn-xo)>1e-4);
xn = ceil(xn);
return xn;
}

double get_k2(long int unsigned n,int q)


/* returns k as a solution to k*q^k = n , where q is the length */
/* of the alphabet (for our purposes q could be 2^unit) and n is the */
/* length of a sequence. k is the "turning point" in the sequence */
/* Newton-Raphson's method is used to obtain k. */
{
double xo, xn;
double rn, rq;
int count=0;

rn = (double)n;
rq = (double)q;

xn = log(rn)/log(rq);
xo = 0;

while ((fabs(xn-xo)>1e-4)&&(count<10))
{
count ++;
xo = xn;
xn = xo - (xo*pow(rq,xo)-rn)/(pow(rq,xo)+pow(xo,2)*pow(rq,xo-1));
}

if (fabs(xn-xo)>1e-4)
{
xn = 0;
while (xn*pow(rq,xn)<rn)
{
xn = xn+1;
}
if (xn*pow(rq,xn)>rn)
{
xn = xn-0.5;
}
}
/* xn = ceil(xn); */
return xn;
}

double max_complexity(long int unsigned n, int q)
/* Returns the maximum possible complexity of a sequence of length n */
/* formed from characters of an alphabet with q elements. */
{
double t1, t2, kt;
double rn, rq, rk;
long int unsigned k;

rn = (double)n;
rq = (double)q;

kt = get_k(n,q);

t1 = kt*(kt+1)*log(rq)/(2*log(2));
t2 = 0;
for (k=kt+1;k<=n;k++)
{
rk = (double)k;
t2 = t2+log(rn-rk+1.0)/log(2);
}
return t1+t2;
}

double max_complexity2(long int unsigned n, int q)


/* Returns the maximum possible complexity of a sequence of length n */
/* formed from characters of an alphabet with q elements. */
/* No overlap */
{
double t1, t2, t3, kt;
double rn, rq, rk;
long int unsigned k;

rn = (double)n;
rq = (double)q;

kt = get_k2(n,q);

t1 = kt*(kt+1)*log(rq)/(2*log(2));
t2 = (rn-kt)*log(rn)/log(2);
t3 = 0;

for (k=kt+1;k<=n;k++)
{
rk = (double)k;
t3 = t3-log(rk)/log(2);
}
return t1+t2+t3;
}

/* This file calculates the maximum complexity a sequence of length */
/* n can achieve, given an alphabet of size q */
/* This particular program is used in creating figure 1 of chapter 2 */
/* in the dissertation: maximum complexity vs. sequence length and */
/* alphabet size. */
/* mcomplex.c */

# include <stdio.h>
# include "sequence.h"

main()
{
long int nm =1000;
int q[6] = {2,4,8,16,32,64};
int i,n;
char *s;
FILE *fp;
double cmplx;

s = malloc(9);

for (i=0;i<=5;i++)
{

sprintf(s,"Cq%d.ASC",q[i]);
fp = fopen(s,"w");
for (n=1;n<=nm;n++)
{
cmplx = max_complexity2(n,q[i]);
fprintf(fp,"%f\n",cmplx);
}
fclose(fp);
}
}
