

Quantum Chaos and the Brain

Manahel A Thabet – IBCHN


March 2020

On the most basic level, the entire universe runs on the rules of physics, which are mathematical concepts to which our reality conforms. On a similarly basic level, our entire being is governed by our brain. The need to explore the brain using the tools of our most basic reality therefore emerges naturally. In this report, we shall first lay down some theoretical groundwork, and then see in what ways quantum mechanics and chaos emerge in the brain.

Chaos
Differential equations
The basis of chaos is the behaviour of a system of differential equations. A differential equation is an equation which describes the change of a variable with respect to time. Consider one of the most basic differential equations, Hooke's law:

$\frac{d^2 x}{dt^2} = -kx$  (1.1)

For the sake of clarity, a first time-derivative will be marked with a single dot above the variable, and a second time-derivative with two dots, reformulating Hooke's law as:

$\ddot{x} = -kx$  (1.2)

In essence, Hooke's law describes the behaviour of a mass oscillating on a spring: the more it is displaced from the point of origin, the greater the acceleration it experiences back towards the point of origin.
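For completeness, a worked solution (not spelled out above) makes the oscillation explicit. With the convention used here, in which the mass is absorbed into $k$, the general solution of equation 1.2 is

$x(t) = x(0)\cos(\sqrt{k}\,t) + \frac{\dot{x}(0)}{\sqrt{k}}\sin(\sqrt{k}\,t)$

so the mass oscillates about the point of origin with angular frequency $\sqrt{k}$.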

A system of differential equations will be of the form:

$\dot{x} = f(x) + g(y) + L(x,y) + C$
$\dot{y} = k(y) + h(x) + P(x,y) + D$  (1.3)
In this form, you can see how the current state of x and y influence each other. Such a system
will describe the behaviour of two variables. The classical example is predator vs. prey,
where x describes the number of prey, and y describes the number of predators:

$\dot{x} = f(x) - g(y)$
$\dot{y} = -k(y) + h(x)$  (1.4)
Model assumptions:
• There is infinite food
• The populations of prey and predators are controlled purely by their own populations and not by outside forces
• Prey meeting will produce offspring
• Predators have infinite appetite
• Predators starve and die

• We can have fractions of animals

Explanations of the equations:


• $f(x)$: This describes the increase in the prey population. When prey meet, they produce offspring, so the more prey there are, the more rapidly the prey population grows.
• $g(y)$: This is a limiting factor on the prey population: the greater the predator population, the more prey they hunt, and the more quickly the prey population decreases.
• $k(y)$: This describes predator starvation. The more predators there are, the more of them starve, and the more quickly the predator population decreases.
• $h(x)$: This is the prey being eaten, allowing the predators to increase their population. The more prey there are, the more quickly the predator population increases.

You will probably notice that this model is a bit basic, as it doesn't allow for direct predator-prey interaction. A more realistic model is the Lotka–Volterra equations:

$\dot{x} = \alpha x - \beta xy$
$\dot{y} = -\gamma y + \delta xy$  (1.5)

Here, the $xy$ term allows for direct predator-prey interaction.

With our equations in place, we can start our simulation. But how do we run the simulation?

We draw a grid called the “phase space”, where the x-axis is our x value and the y-axis is our y value. We choose an initial value for x and y, and at each step calculate $\dot{x}$ and $\dot{y}$. The two values give us a pseudo-vector of the form $(\dot{x}, \dot{y})$. It is a pseudo-vector since it is still dependent on a time-differential. In order to turn it into a proper vector we must decide on the length of our time-step: the smaller it is, the more accurate our simulation will be, but the more calculation-intensive; the larger it is, the less accurate our simulation will be, but the less intensive.

With our time-step decided, we multiply $\dot{x}$ and $\dot{y}$ by it, obtaining a proper vector of the form $(dx, dy)$. We take this vector, add it to our current point, and repeat the process.
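As a concrete illustration of this stepping procedure, the following minimal Python sketch integrates the Lotka–Volterra system (equation 1.5) with the fixed time-step method just described; the parameter values, time-step, and initial populations are arbitrary choices for illustration, not values taken from the text.

import numpy as np
import matplotlib.pyplot as plt

# Arbitrary illustrative parameters for equation (1.5)
alpha, beta, gamma, delta = 1.0, 0.4, 1.0, 0.1

def derivatives(x, y):
    # The pseudo-vector (x_dot, y_dot) at the current point of the phase space
    x_dot = alpha * x - beta * x * y
    y_dot = -gamma * y + delta * x * y
    return x_dot, y_dot

dt = 0.001            # time-step: smaller is more accurate but more work
steps = 100_000
x, y = 10.0, 5.0      # arbitrary initial prey and predator populations

trajectory = np.empty((steps, 2))
for i in range(steps):
    x_dot, y_dot = derivatives(x, y)
    x += x_dot * dt   # the proper vector (dx, dy) added to the current point
    y += y_dot * dt
    trajectory[i] = x, y

plt.plot(trajectory[:, 0], trajectory[:, 1])
plt.xlabel("prey (x)")
plt.ylabel("predators (y)")
plt.show()

Plotting the trajectory in the phase space traces out a loop of the kind shown in the figures that follow.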

What will our simulation look like? It will be something in the form of this:

Figure 1.1 Predator and prey simulation

But this is just the picture from a very limited view of the entire phase space. A more
complete picture of the space can be seen in figure 1.2:

Figure 1.2 Predator-prey phase space

It may not be clear from the figure, but there is a stable cycle hiding in the phase space:

Figure 1.3 Stable cycle highlighted

This stable cycle is one where any point on it will result in the simulation staying on it regardless of how long it is run. In this particular case, points outside and inside the cycle will tend to move away from it. Another feature one would notice is the dot in the middle of the cycle: this dot is a fixed point at which the system remains (i.e. $\dot{x}$ and $\dot{y}$ are zero).
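As a quick check of this fixed point, setting $\dot{x} = \dot{y} = 0$ in the Lotka–Volterra equations (1.5) and assuming non-zero populations gives

$\alpha x - \beta x y = 0 \;\Rightarrow\; y = \frac{\alpha}{\beta}, \qquad -\gamma y + \delta x y = 0 \;\Rightarrow\; x = \frac{\gamma}{\delta}$

which is the single interior point at which both populations remain constant.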

Chaos proper

This will be a casual description of chaos. For a more formal definition refer to appendix 1.

Chaos has two properties:


1. The phase space does not have non-repelling stable cycles
2. Sensitivity to initial conditions

Strange attractor
"Attractor" is a general term for points or cycles which attract the phase space towards them (much like the point in figure 1.2). Chaotic dynamics introduce a unique type of attractor, a
strange attractor. A strange attractor possesses the sensitivity to initial conditions of chaos. Additionally, it has a fractal structure in which self-similarity can be observed. If one is familiar with the Cantor set, a strange attractor's self-similarity manifests in a similar fashion to that set.

A phase space of a certain dimension may be influenced by a strange attractor of a higher dimension. To demonstrate how that may be done, observe figure 1.4. The figure shows arbitrary data of the changes in a population from one generation to another. Note that while the figure is two-dimensional, the phase space is in fact one-dimensional. Without analysis (or looking at the legend), it is impossible to distinguish between the chaotic and the random pattern. If the phase space is "expanded" to a higher dimension, it may be possible to observe the strange attractor acting on the phase space.

Figure 1.4 Two time series, one chaotic and one random

In order to bring the phase space to a higher dimension, we shall define two new axes: the population at t+1 and the population at t+2. In short, the x-axis is P(t), the y-axis is P(t+1), and the z-axis is P(t+2). Figure 1.5 shows the higher-dimensional phase space. Note how, while the two-dimensional representation does show some structure, it is still lacking when compared to the three-dimensional representation.
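To make this embedding concrete, the short Python sketch below builds one chaotic series (the logistic map, used here purely as a stand-in for the unspecified data behind figure 1.4) and one random series, and plots each in the delay coordinates (P(t), P(t+1), P(t+2)); the map and its parameter are illustrative assumptions, not the data used in the figures.

import numpy as np
import matplotlib.pyplot as plt

def logistic_series(n, r=3.9, p0=0.5):
    # A chaotic "population" series from the logistic map (illustrative stand-in)
    p = np.empty(n)
    p[0] = p0
    for t in range(n - 1):
        p[t + 1] = r * p[t] * (1.0 - p[t])
    return p

n = 2000
series_pairs = [("chaotic", logistic_series(n)), ("random", np.random.rand(n))]

fig = plt.figure()
for i, (name, series) in enumerate(series_pairs):
    ax = fig.add_subplot(1, 2, i + 1, projection="3d")
    # Delay coordinates: x = P(t), y = P(t+1), z = P(t+2)
    ax.scatter(series[:-2], series[1:-1], series[2:], s=1)
    ax.set_title(name)
plt.show()

The chaotic series collapses onto a thin curve (the attractor), while the random series fills the whole cube.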

Figure 1.5 2D and 3D representations of the time series

Quantum Mechanics
The Wavefunction
Let us first begin with the time-independent Schrödinger equation:
$H\Psi = E\Psi$  (2.1)
This equation states that the energy of a system can be described by an operator called the Hamiltonian. $\Psi$ is the Greek letter psi, and represents the wavefunction, which describes the particles in the system. For example, in a trivial system, $\Psi$ can take on the form:

$\Psi(x) = A\sin(x)$  (2.2)

$\Psi$ has two main properties. One is that it is continuous, meaning that it is a smooth curve drawn by a single uncut line. The second is normalization. Normalization states that the area under the curve described by $\Psi^*\Psi$ is 1, and is formulated as:

$\int \Psi^* \Psi \, d\tau = 1$  (2.3)
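As a small numerical illustration of equation 2.3, the sketch below checks the normalization of a Gaussian wavefunction; the Gaussian is an illustrative choice of ours, not a system discussed in the text.

import numpy as np

# Illustrative normalized Gaussian wavefunction psi(x) = pi^(-1/4) * exp(-x^2 / 2)
x = np.linspace(-10.0, 10.0, 100_001)
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)

# Riemann-sum approximation of the integral of psi* psi dx (equation 2.3)
norm = np.sum(np.conj(psi) * psi) * (x[1] - x[0])
print(norm)   # approximately 1.0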

The Hamiltonian in one dimension, for a time-independent system, usually takes the form

$H = -\left(\frac{\hbar^2}{2m}\right)\frac{d^2}{dx^2} + V(x)$  (2.4)
Where:
• $\hbar$ is Planck's constant divided by 2π
• m is the mass
• V is the potential energy along the x-axis
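To see the Hamiltonian of equation 2.4 in action, here is a minimal numerical sketch that discretizes it for a particle confined to a box of width 1 (with $\hbar = m = 1$ and $V = 0$ inside the box); the box, the units, and the grid size are illustrative assumptions of ours, not a system treated in the text.

import numpy as np

N = 500                      # number of interior grid points
L = 1.0                      # width of the box (illustrative)
dx = L / (N + 1)

# Finite-difference form of H = -(hbar^2 / 2m) d^2/dx^2 with hbar = m = 1:
# a tridiagonal matrix acting on psi sampled at the interior points
main = np.full(N, 1.0 / dx ** 2)
off = np.full(N - 1, -0.5 / dx ** 2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)
print(energies[:3])                                  # lowest numerical eigenvalues
print([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])     # analytic levels n^2 pi^2 / 2

The numerical eigenvalues approach the analytic levels as the grid is refined, a quick check that the discretized Hamiltonian behaves as equation 2.4 says it should.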

To explore this equation, let us test it in a simple system: a completely free particle moving in one dimension. In this system, the potential energy is constant, so we can choose it to be zero. Additionally, we shall assume that the particle has been travelling in this system for a while, making the problem time-independent. In such a system, the Hamiltonian will look like
$H = -\left(\frac{\hbar^2}{2m}\right)\frac{d^2}{dx^2}$  (2.5)
Resulting in the Schrödinger equation looking like:
$-\left(\frac{\hbar^2}{2m}\right)\frac{d^2\psi}{dx^2} = E\psi$  (2.6)
The solution for this equation is
$\psi(x) = Ae^{ikx} + Be^{-ikx}, \quad k = \sqrt{\frac{2mE}{\hbar^2}}$  (2.7)
Since $e^{ix} = \cos x + i\sin x$, an alternative form is

$\psi(x) = C\cos(kx) + iD\sin(kx)$  (2.8)

But we'll stick to equation 2.7.

Now, imagine a barrier. We'll set the barrier's energy to be higher than that of our particle. In a classical system, our particle will not be able to cross the barrier, but in our quantum system the math tells a different story.

The system we shall explore is that described in figure 2.1.

Figure 2.1 2-dimensional potential barrier

Using the definitions of the Hamiltonian in equations 2.4 and 2.5, we divide the three zones
as such:
Zone A: $H = -\left(\frac{\hbar^2}{2m}\right)\frac{d^2}{dx^2}$

Zone B: $H = -\left(\frac{\hbar^2}{2m}\right)\frac{d^2}{dx^2} + V(x)$

Zone C: $H = -\left(\frac{\hbar^2}{2m}\right)\frac{d^2}{dx^2}$  (2.9)
Where V is the energy of the potential barrier.

Therefore, the solutions of the form of equation 2.7 become

Zone A: $\psi(x) = Ae^{ikx} + Be^{-ikx}, \quad k = \sqrt{\frac{2mE}{\hbar^2}}$

Zone B: $\psi(x) = A'e^{ik'x} + B'e^{-ik'x}, \quad k' = \sqrt{\frac{2m(E-V)}{\hbar^2}}$

Zone C: $\psi(x) = A''e^{ikx} + B''e^{-ikx}, \quad k = \sqrt{\frac{2mE}{\hbar^2}}$  (2.10)

Since E < V, k' is imaginary. This means that equation 2.10 for zone B can be rewritten as

$\psi(x) = A'e^{-kx} + B'e^{kx}, \quad k = \sqrt{\frac{2m(V-E)}{\hbar^2}}$  (2.11)
This is a mixture of exponentially decaying and exponentially increasing functions. Note that for situations where E < V the wavefunction does not oscillate. Now, assume that zone B extends infinitely: $A'e^{-kx}$ will tend to zero and $B'e^{kx}$ will tend to infinity. Here we see that, because of equation 2.3, the wavefunction must consist only of the decaying exponential. The important point is that even in an infinitely long potential barrier, the wavefunction will still slightly penetrate it.
Going back to the system in figure 2.1: here, since the barrier isn't infinite, we cannot ignore the exponentially increasing part of equation 2.11. Using the continuity property of the wavefunction, we can match the coefficients at the points where the zones meet. We shall call the x = 0 coordinate Boundary 1, and the x = l coordinate Boundary 2:

Boundary 1: $A + B = A' + B'$
Boundary 2: $A'e^{-kl} + B'e^{kl} = A''e^{ikl} + B''e^{-ikl}$  (2.12)
And the continuity of the slopes at the two points:

Boundary 1: $ikA - ikB = -kA' + kB'$
Boundary 2: $-kA'e^{-kl} + kB'e^{kl} = ikA''e^{ikl} - ikB''e^{-ikl}$  (2.13)
With these two equations it is possible to find the coefficients in some real systems.

Calculating tunnelling
Let us consider the scenario in which a particle is prepared with momentum carrying it towards the right. We can infer that the coefficient B'' is zero. This is because the term $B''e^{-ikx}$ denotes a particle travelling to the left on the right side of the barrier, and as the momentum is to the right, this is impossible.
On the left side of the barrier, a particle can be found going to the left, as the barrier can reflect incoming particles. We can therefore posit a relationship between B and the probability of reflection: $P_{ref} = |B|^2$. By experimentally measuring the reflection probability, we can find B.
Alternatively, we can identify $|A''|^2$ as the probability that a particle penetrates the barrier and emerges on the right of it.
Now, consider a particle incident on the barrier: what is the probability that it will be carried through? This transmission probability is defined as $P_{trans} = |A''|^2 / |A|^2$, the probability that a particle incident on the left of the barrier emerges on its right.
By manipulating equations 2.12 and 2.13, we arrive at the result:

$P_{trans} = \frac{1}{1+G}; \quad G = \frac{(e^{kl} - e^{-kl})^2}{4\left(\frac{E}{V}\right)\left(1 - \frac{E}{V}\right)}, \quad k = \sqrt{\frac{2m(V-E)}{\hbar^2}}$  (2.14)
Notice that $P_{trans}$ tends towards but never reaches zero.
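To get a feel for equation 2.14, the short sketch below evaluates the transmission probability numerically; the particle (an electron) and the barrier height and width are illustrative assumptions, not values taken from the text.

import numpy as np

# Physical constants (SI units)
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
eV = 1.602176634e-19      # joules per electronvolt

def transmission(E_eV, V_eV, width_m, m=m_e):
    # Equation (2.14): P_trans = 1 / (1 + G) for a particle with E < V
    E, V = E_eV * eV, V_eV * eV
    k = np.sqrt(2 * m * (V - E)) / hbar
    G = (np.exp(k * width_m) - np.exp(-k * width_m)) ** 2 / (4 * (E / V) * (1 - E / V))
    return 1.0 / (1.0 + G)

# Illustrative case: a 0.5 eV electron meeting a 1 eV barrier, 1 nm wide
print(transmission(0.5, 1.0, 1e-9))   # a small but non-zero probability

Making the barrier wider or the energy gap larger drives the probability down rapidly, but, as noted above, it never reaches exactly zero.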

Chaos in the brain


Strange attractors in EEG signals
In research done by J. Roschke and E. Basar, it was shown that multiple strange attractors are present in EEG signals. It had been thought that the random-looking fluctuations in the signal were just that, noise, but the two showed that this 'noise' in fact reflects the presence of a strange attractor.
The issue of distinguishing between noise and a strange attractor in an EEG signal is mainly one of dimensionality. Strange attractors have finite dimensionality, while noise does not. By calculating the dimensionality of the 'noise' in the signal, one can tell whether the signal is merely noisy or is governed by a strange attractor.
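A common way to put a number on this, in the spirit of (though not identical to) the dimension estimates of Roschke and Basar, is a correlation-dimension estimate: embed the signal in delay coordinates, count the fraction of point pairs closer than a radius r, and read off the slope of log C(r) against log r. The Python sketch below applies this to an illustrative chaotic signal and to pure noise; it is a rough sketch of the general technique, not their exact procedure, and the logistic map stands in for real EEG data.

import numpy as np

def delay_embed(signal, dim, tau=1):
    # Build delay vectors [s(t), s(t + tau), ..., s(t + (dim - 1) * tau)]
    n = len(signal) - (dim - 1) * tau
    return np.column_stack([signal[i * tau : i * tau + n] for i in range(dim)])

def correlation_dimension(signal, dim, radii):
    # Slope of log C(r) vs log r, where C(r) is the fraction of pairs closer than r
    points = delay_embed(signal, dim)
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(points), k=1)]
    C = np.array([np.mean(dists < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Illustrative chaotic signal (logistic map), standing in for an EEG trace
x = np.empty(1000)
x[0] = 0.4
for t in range(len(x) - 1):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

radii = np.logspace(-1.5, -0.5, 10)
print(correlation_dimension(x, dim=3, radii=radii))                     # low, finite value
print(correlation_dimension(np.random.rand(1000), dim=3, radii=radii))  # near the embedding dimension

For a signal governed by a strange attractor the estimate saturates at a finite value as the embedding dimension grows, whereas for noise it keeps climbing with the embedding dimension.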
When Roschke and Basar evaluated their data, they observed attractors with dimensions in the ranges of 3.5-5 and 8-9. From this they concluded that while there may be some noise in EEG signals, a contributing factor to the unpredictable behaviour of the signal is chaos. (For a broader explanation, refer to appendix 2.)

Self-Similarity in Hyperchaotic Data
How does one distinguish between a circular noisy signal and a strange attractor? One utilizes the Cantor-set properties of the strange attractor. In essence, a strange attractor forms structures which are similar to themselves but decreasing in size (similar to a fractal). Figure 3.1 demonstrates such behaviour:

Figure 3.1 Cross section of a strange attractor

In this figure, a cross section of a strange attractor is shown. See how each U shape is essentially a smaller copy of the same pattern. A chaotic pattern in the brain will demonstrate similar behaviour.
Now, look at figure 3.2. In this figure, a brain signal was recorded and mapped; each slide is a different cross section of the signal. In each slide you can see self-similarities, showing the chaotic nature of brain signals.

Figure 3.2 Cross sections of a signal

Chaotic Dynamics Mediate Brain State Transitions

In their paper “Chaotic Dynamics Mediate Brain State Transitions, Driven by Changes in Extracellular Ion Concentrations,” Rasmussen et al. (2017) drew a connection between brain state and the dynamics of the phase space of ion concentration and membrane potential (Vm) of cortical neurons. They explored three different brain states: asleep, quiet awake, and active awake. Previous studies demonstrated that Vm experiences stable oscillations during the sleeping state, but during the waking states the oscillations are suppressed and Vm is maintained closer to threshold.

They chose to use a neural model called the “averaged neuron model,” where the behaviour
of the neuron is regulated by the concentration of three ions, K+, Na+, and Ca2+ (Figure 3.3).
Note the voltage-gated Ca2+ channel; that channel shows how while Ca2+ concentration can
affect Vm, Vm also affects the concentration of Ca2+. This results in a highly dynamic system,
which is difficult to explore.

Figure 3.3 Averaged neuron model with its ion channels

Experimentally, they determined that changes to the concentration of Ca2+ had the greatest effect on the transitions between the three states, and in particular from asleep to quiet awake. In the different brain states, they monitored the Ca2+ concentration and Vm. The phase space diagrams are shown in figures 3.4-3.6.

As can be seen, the asleep brain state is cyclical, with figure 3.4 showing a stable cycle. This is unsurprising, as previous observations showed Vm to oscillate, a behaviour that is not chaotic. The interesting observation is in the waking states, with both figures 3.5 and 3.6 showing chaotic dynamics. The justification offered in the paper for this behaviour is the waking state's need for a wider variety of firing patterns.

Figure 3.4 Asleep phase space

Figure 3.5 Quiet awake phase space

Figure 3.6 Active awake phase space

Quantum in the Brain


Quantum amplification
As is widely understood, the brain is a highly complex system with billions of interacting parts. Modelling the state of the brain in a phase space will result in one of billions of dimensions. Such a system will be highly chaotic.

Let us assume that the brain is a purely classical system. This would mean that the brain is a hierarchical system with each layer governed by non-linear interactions. At the base of the hierarchy are the molecules and atoms. It is uncontroversial to say that the quantum fluctuations of these atoms and molecules will result in vastly different states in the phase space of a chaotic system modelling them. It has been proposed that these quantum fluctuations average each other out and do not have any effect on the brain state. Let us propose the alternative: the hierarchical structure of the brain, combined with the high variability in the initial conditions at the base of the hierarchy, will result in vastly different brain states even if two identical brain states were simulated. Or, as stated by Jeffrey Satinover:

“[Q]uantum dynamics alters the final outcomes of computation at all levels – not by
producing classically impossible solutions but by having a profound effect on which
of many possible solutions are actually selected”

The justification for this proposition lies in each level of the brain’s hierarchy inheriting the
chaos of the lower layer, amplifying it, and passing it along to the higher layer, all the way to
the resulting brain state.
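A minimal numerical sketch of this amplification argument, using the logistic map as a stand-in for a single chaotic layer of the hierarchy (an illustrative choice of ours, not a model from the text): two trajectories whose starting points differ by a 'quantum-scale' amount quickly end up in entirely different states.

import numpy as np

def iterate(x0, steps, r=3.9):
    # One chaotic "layer": repeated application of the logistic map
    x = x0
    history = np.empty(steps)
    for i in range(steps):
        x = r * x * (1 - x)
        history[i] = x
    return history

a = iterate(0.4, 120)
b = iterate(0.4 + 1e-15, 120)   # nudged by one part in 10^15

# The separation grows roughly exponentially until it is of the same order
# as the states themselves
for t in (0, 30, 60, 119):
    print(t, abs(a[t] - b[t]))

The same tiny nudge, fed through layer after layer of non-linear dynamics, is the mechanism by which the proposal above would let quantum fluctuations tip the system into one macroscopic brain state rather than another.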

Non-trivial contribution to neural processing


Surprisingly, processes involving long-lasting quantum coherence have been found in biology; in particular, quantum coherence is involved in the electron transfer required for photosynthesis. It has been observed that photopigments utilize quantum mechanics for the light-excitation of electrons. The mechanism they utilize allows light to find the fastest path to the electron, experiencing reduced scattering and delivering more energy to the electron. This phenomenon has been observed in both photosynthetic bacteria and marine algae, showing that evolution is capable of selecting for quantum mechanical processes that enhance biological functions. In humans, our photoreceptors have been shown to rely on quantum mechanics: rhodopsin, a protein found in our photoreceptors, has been shown to possess coherent quantum states. As Werner Loewenstein states: “Quantum mechanics, not classical mechanics, rules the roost at this sensory outpost of the brain.”

So, it has been shown that quantum mechanics can be exploited by organisms, humans included. But can the brain itself utilize quantum mechanical phenomena? Paul Glimcher proposes that the electrical signals in neurons may be affected by quantum phenomena, saying:

“[T]hese data suggest that membrane voltage is the product of interactions at the
atomic level, many of which are governed by quantum physics and thus are truly
indeterminate events. Because of the tiny scale at which these processes operate,
interactions between action potentials and transmitter release as well as interactions
between transmitter molecules and post-synaptic receptors may be, and indeed seem
likely to be, fundamentally indeterminate”

Johnjoe McFadden shares Glimcher’s opinion:

“If neurons poised on the dynamics of individual membrane proteins are critical to
the initiation of a particular course of motor action or cognitive process, then the
consequent action or cognitive processes will be subject to non-deterministic
quantum dynamics”

Quantum neurobiology
Overall, quantum mechanical phenomena in the brain are proposed but not yet observed. The old "here be dragons" of the maps of old has been replaced with "here be quantum mechanics." The most notorious invocation of quantum mechanics is as an explanation for consciousness. This explanation errs on the preposterous side, as it would require quantum coherent states far larger than a few molecules (on a scale within the magnitude of a neuron), and coherence times long enough to retain and transmit information. The combination of the two in the highly noisy environment of the brain is highly unlikely. As a side note, current quantum computers require strong isolation and temperatures close to absolute zero to retain coherence long enough for algorithms to be run, and those usually involve only a few molecules, a few orders of magnitude smaller than what would be required for the brain to make use of such effects.

A note on physics and the brain
In the sciences there is a saying: "biology is applied chemistry, chemistry is applied physics, and physics is applied math." This saying can be extended to neuroscience: "neuroscience is applied biology." While facetious, there is truth to it; the brain, like everything else, is subject to the laws of physics, but it is far too big and complex to be studied directly with the laws of physics. Of course, there are cases such as ephaptic coupling, where electric fields generated by the internal workings of a cell can affect the behaviour of neighbouring cells, but these are yet to be demonstrated to have any noticeable effect on the workings of the brain.

The greatest use of physics in the field of neuroscience is in the development of techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET). Outside of these contributions, there be dragons.

Sources
Chaos
J. Guckenheimer, & P. Holmes. (1983). Non-linear oscillations, dynamical systems, and
bifurcations of vector fields. Springer
R.L. Devaney. (1986). An introduction to chaotic dynamical systems. Benjamin/Cummings
Hirsch, M. W., Smale, S., Devaney, R. L., & Hirsch, M. W. (2004). Differential equations,
dynamical systems, and an introduction to chaos. San Diego, CA: Academic Press
Boeing, Geoff. (2016). Visual Analysis of Nonlinear Dynamical Systems: Chaos, Fractals,
Self-Similarity and the Limits of Prediction. Systems. 4. 37
Quantum mechanics
Griffiths, D. J. (2005). Introduction to quantum mechanics. Upper Saddle River, NJ: Pearson
Prentice Hall
Peter W. Atkins, and Ronald S. Friedman. (2011). Molecular Quantum Mechanics. OUP
Oxford
Sakurai, J. J., & Napolitano, J. (2017). Modern Quantum Mechanics. Cambridge: Cambridge
University Press
Chaos in the brain
Rössler O.E., Hudson J.L. (1990) Self-Similarity in Hyperchaotic Data. In: Başar E. (eds)
Chaos in Brain Function. Springer, Berlin, Heidelberg
Röschke J., Başar E. (1990) The EEG is Not a Simple Noise: Strange Attractors in
Intracranial Structures. In: Başar E. (eds) Chaos in Brain Function. Springer, Berlin,
Heidelberg
Rasmussen, Rune & Jensen, Mogens & Heltberg, Mathias. (2017). Chaotic Dynamics
Mediate Brain State Transitions, Driven by Changes in Extracellular Ion Concentrations.
Cell Systems. 5. 1-13
Quantum Mechanics in the brain
Jedlicka Peter. (2017). Revisiting the Quantum Brain Hypothesis: Toward Quantum
(Neuro)biology? Frontiers in Molecular Neuroscience 10, 366-374
Glimcher, P. W. (2005). Indeterminacy in brain and behaviour. Annu. Rev. Psychol.56, 25–
56.
McFadden, J. (2002). The conscious electromagnetic information (cemi) field theory: the
hard problem made easy? J. Conscious. Stud. 9, 45–60
Anastassiou, Costas & Perin, Rodrigo & Markram, Henry & Koch, Christof. (2011). Ephaptic
coupling of cortical neurons. Nature neuroscience. 14. 217-23

Appendices
Appendix 1: Formal definition taken from the Encyclopaedia of Math

The dynamical systems (or models describing deterministic evolution, cf. Dynamical system) considered are differential equations $\dot{x} = X(x)$, with $x \in M$, $M$ a differentiable manifold and $X$ a vector field on $M$, and differentiable mappings $\varphi: M \to M$ which may or may not be invertible. For a given initial state $x(0)$, the corresponding evolution is the solution $x(t)$ of the differential equation with initial value $x(0)$ or, in the case of a mapping, the function given by $x(n) = \varphi^n(x(0))$. The last case is the discrete-time situation, the first case that of continuous time. Even if the evolutions can be defined for negative time, only the part with positive time is considered. Also, only bounded evolutions are considered here, i.e. evolutions $x(t)$, $t \geq 0$, whose closure, as a subset of $M$, is compact. It is assumed that there is a metric defined on $M$.

One says that such a dynamical system is chaotic if there is a subset $A$ of the state space which has positive measure (for every measure in the Lebesgue measure class) and which is invariant in the sense that every evolution starting in $A$ stays in $A$, and such that the evolutions in $A$ have the following properties:

1) no evolution starting in $A$ is periodic or quasi-periodic; an evolution $x(t)$ is quasi-periodic if it can be written as $x(t) = f(\omega_1 t, \ldots, \omega_k t)$ with the frequencies $\omega_1, \ldots, \omega_k$ independent over the rationals and $f$ periodic with period 1 in all its variables;

2) no evolution in $A$ tends to a periodic or quasi-periodic evolution as time tends to infinity;

3) (sensitive dependence on initial conditions) there is some positive constant $\varepsilon$ such that for each $x \in A$ and each $\delta > 0$, there is some $y$ in a $\delta$-neighbourhood of $x$ such that for some positive time the evolutions starting in $x$ and $y$ are more than $\varepsilon$ apart.

Appendix 2: J. Roschke and E. Basar reasoning for the presence of strange attractors in
EEG signals
Let us now imagine the periodic movement of a metronome or a pendulum swinging
from left to right and back again. From the viewpoint of geometry, this motion is said to
remain within a fixed cycle forever. This is the second kind of attractor, the limit cycle. All of
the various types of limit cycles share one important characteristic: regular, predictable
motion. The third variety, the strange attractor, is irregular, unpredictable, or simply
strange. For example, when a heated or moving fluid moves from a smooth, or laminar, flow
to wild turbulence, it switches to a strange attractor.
Chaotic behavior in deterministic systems usually occurs through a transition from an
orderly state when an external parameter is changed. In studies of these systems, particular
attention has been devoted to the question of the route by which the chaotic state is
approached. An increasing body of experimental evidence supports the belief that apparently
random behavior observed in a wide variety of physical systems is caused by underlying
deterministic dynamics of a low-dimensional chaotic (strange) attractor. The behavior

exhibited by a chaotic attractor is predictable on short time scales and unpredictable (random)
on long time scales.
The unpredictability, and so the attractor's degree of chaos, is effectively measured by
the parameter "dimension". Dimension is important to dynamics because it provides a precise
way of speaking of the number of independent variables inherent in a motion. For a
dissipative dynamical system, trajectories that do not diverge to infinity approach an attractor.
The dimension of an attractor may be much less than the dimension of the phase
space that it sits in. In other words, once transients die out, the number of independent
variables to the motion is much less than the number of independent variables required to
specify an arbitrary initial condition. With the help of the concept of dimension it is possible
to discuss this precisely. For example, if the attractor is a fixed point, there is no variation in
the final space position; the dimension is zero. If the attractor is a limit cycle (pendulum) the
phase space varies along a curve; the dimension is one. Similarly, for quasi-periodic motion with n incommensurate frequencies, motion is restricted to an n-dimensional torus and the dimension is n. The dimension of a chaotic attractor, by contrast, is generally non-integer (fractal).
Noise is a common phenomenon in systems with many degrees of freedom. Under the influence of noise, observables show irregular behaviour in the time domain and broadband Fourier spectra. There is an important difference between a noise signal and chaotic fluctuations resulting from the motion of a limited number of system dimensions. The noise signal does not have a finite dimension, whereas chaotic systems governed by differential equations show finite dimensionality. This difference can be shown by evaluating the dimension.
The field potentials of the cat brain showed almost stable mean values of 5.06, 4.58, and 4.37 for the GEA, the HI, and the RF, respectively. In other words, various structures of the brain indicate the existence of various chaotic attractors with fractal dimensions. The signals measured in these different structures do not reflect properties of noise signals, but rather the behaviour of strange attractors of quasi-low dimension. The measured dimensions in these various structures correspond to seemingly different attractors.
Our computations, which are not yet finished and may be theoretically imperfect,
showed that the spontaneous activity of the cat cortex depicted a dimension of around 8-9.
The acoustical evoked potentials during the waking stage showed much lower dimensionality
than did the spontaneous EEG during the same waking state: the dimension of the evoked
potential usually varied between 3.5 and 5.
