
Random walk

Introduction
A random walk is a mathematical concept that describes a path formed by a
series of random steps on some mathematical space.
The idea is used widely across scientific fields, including physics,
economics, and biology, to model phenomena that evolve through a
sequence of seemingly random steps.

In its simplest form, consider a one-dimensional random walk on a line
where, at each step, a walker moves either one unit to the right or one unit to
the left with equal probability. Each step is independent of the previous
steps, making the process a good example of a Markov process, where the
future state depends only on the current state and not on the sequence of
events that preceded it.

The concept can be extended to more complex scenarios, such as walks in
multiple dimensions or walks whose step probabilities vary. In finance, for
instance, the random walk hypothesis is used to model the unpredictability of
stock market prices. In physics, it is a foundational model for diffusion processes.

Random walks also form the basis for more advanced computational
algorithms, like the Monte Carlo method used in numerical simulations.
Moreover, they have interesting connections to other mathematical theories
like probability theory, graph theory, and percolation theory.
How can we describe this mathematically?

The simplest random walk to understand is a one-dimensional
walk. Suppose that the black dot below is sitting on a number
line. The black dot starts in the center.
Then, it takes a step, either forward or backward, with equal
probability. It keeps taking steps either forward or backward
each time. Let's call the first step a1, the second step a2, the third
step a3, and so on. Each "a" is either equal to +1 (if the step is
forward) or -1 (if the step is backward). The picture below
shows a black dot that has taken 5 steps and ended up at -1 on
the number line.

Suppose we put the black dot at 0 and then let it take N steps
(where N is any number). Now we want to know how far the
black dot travels after it has taken N steps. Of course, the
distance traveled after N steps will vary each time we repeat
the experiment, so what we want to know is: if we repeat the
experiment many, many times, how far will the black dot have
traveled on average? Let's call the distance that the black dot
has traveled "d". Keep in mind that d can be either positive
or negative, depending on whether the black dot ends up to
the right or left of 0. We know that for any one run of the
experiment,

d = a1 + a2 + a3 + ... + aN

Now we use the notation <d> to mean "the average of d if we
repeated the experiment many times":

<d> = <(a1 + a2 + a3 + ... + aN)> = <a1> + <a2> + <a3> + ... + <aN>

But <a1> = 0: since a1 has an equal probability of being -1 or +1,
if we repeated the experiment many, many times we would expect
the average of a1 to be 0. The same is true of every other step. So then,

<d> = <a1> + <a2> + <a3> + ... + <aN> = 0 + 0 + 0 + ... + 0 = 0

This isn't too surprising if you think about it. After all, <d> is
the average location of the black dot after N steps, and since
the dot is equally likely to move forward or backward, we
expect d to be 0 on average. The average alone doesn't tell the
whole story, though. Because the steps are independent and each
ai^2 = 1, the average of ai times aj is 0 whenever i and j are
different, so <d^2> = <(a1 + a2 + ... + aN)^2> = N. The typical
(root-mean-square) distance from the starting point after N steps
is therefore sqrt(N). This square-root growth is what the examples
below rely on.
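
To see both facts numerically, here is a minimal Python sketch (not part of the original slides; the function names are just illustrative) that simulates many 100-step walks and estimates the average of d and its root-mean-square value:

import random

def simulate_walk(num_steps):
    # One walk: the sum of num_steps independent +1/-1 steps, each direction equally likely.
    return sum(random.choice((1, -1)) for _ in range(num_steps))

def walk_statistics(num_steps, trials=100_000):
    walks = [simulate_walk(num_steps) for _ in range(trials)]
    mean_d = sum(walks) / trials                          # estimates <d>
    rms_d = (sum(d * d for d in walks) / trials) ** 0.5   # estimates sqrt(<d^2>)
    return mean_d, rms_d

mean_d, rms_d = walk_statistics(100)
print(mean_d)   # close to 0
print(rms_d)    # close to sqrt(100) = 10

Running this typically prints a mean near 0 and an rms near 10, consistent with <d> = 0 and a typical distance of sqrt(N) for N = 100.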

What does a random walk have to do with real life?

Whew! Now that we're through with all of that math, how can we
relate random walks to real life? Random walks and the
mathematics that govern them are found everywhere in nature.
When gas particles bounce around in a room, changing direction
every time they collide with another particle, it is random walk
mathematics that determines how long it will take them to travel
from one location to another. The particles in a drop of food
coloring added to a glass of water will spread out, partially due to
currents in the water, and partially due to a random walk.
Random walks and baseball????

Random walks can even apply to baseball. Consider a baseball team
that is perfectly average -- that is, they have a 50% chance of
winning or losing each game. The outcome of each game can be +1
(if they win) or -1 (if they lose). Then, as the baseball season
progresses, this perfectly average team hops forward and backward
on the number line. Their position "d" on the number line indicates
how many more games they have won than lost (a negative "d"
means that they have lost more games than won). There are 162
games in a baseball season, so for a perfectly average team, a gap of
about sqrt(162) (approximately 12-13) games between wins and losses
is completely normal. A difference of 12 would be a record of 87-75
or 75-87. Sometimes a single game will determine who advances to
the playoffs, yet roughly 12 games out of every season are decided
purely by chance!
What about the World Series? The World Series is at most 7 games long.
For two equally matched teams (50% probability of either team
winning each game), we would expect the difference between wins and
losses to be roughly sqrt(7), or about 2.6, in size. So roughly 2-3 games
out of 7 are determined completely by chance. No wonder it's so hard to
predict the winner!
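
For readers who want to check the sqrt(162) figure, a short simulation along the same lines (illustrative only, not part of the original slides) is:

import random

def season_margin(games=162):
    # Wins minus losses for a team that wins each game with probability 1/2.
    return sum(random.choice((1, -1)) for _ in range(games))

trials = 50_000
margins = [season_margin() for _ in range(trials)]
rms_margin = (sum(m * m for m in margins) / trials) ** 0.5
print(rms_margin)   # typically close to sqrt(162), about 12.7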
Examples
An elementary example of a random walk is the random walk on the integer
number line Z, which starts at 0 and at each step moves +1 or −1 with
equal probability.

Brownian motion is the random motion of particles suspended in a
medium (a liquid or a gas). This motion pattern typically consists of
random fluctuations in a particle's position inside a fluid sub-domain,
followed by a relocation to another sub-domain. Each relocation is
followed by more fluctuations within the new closed volume. This
pattern describes a fluid at thermal equilibrium, defined by a given
temperature. Within such a fluid, there exists no preferential direction
of flow (as in transport phenomena). More specifically, the fluid's
overall linear and angular momenta remain null over time. The kinetic
energies of the molecular Brownian motions, together with those of
molecular rotations and vibrations, sum up to the caloric component of
a fluid's internal energy (the equipartition theorem).

Foraging is searching for wild food resources. It affects an animal's
fitness because it plays an important role in an animal's ability to
survive and reproduce.[1] Foraging theory is a branch of
behavioral ecology that studies the foraging behavior of animals in
response to the environment where the animal lives.
Gambling (also known as betting or gaming) is the wagering of
something of value ("the stakes") on a random event with the intent of
winning something else of value, where instances of strategy are
discounted. Gambling thus requires three elements to be present:
consideration (an amount wagered), risk (chance), and a prize.[1] The
outcome of the wager is often immediate, such as a single roll of dice, a
spin of a roulette wheel, or a horse crossing the finish line, but longer
time frames are also common, allowing wagers on the outcome of a
future sports contest or even an entire sports season.

The random walk hypothesis in stocks refers to the idea that stock prices move
randomly and that past movements or trends cannot be used to predict future
movement. This concept is integral to the Efficient Market Hypothesis (EMH),
which suggests that at any given time, stock prices fully reflect all available
information.
Types of random walk
1) simple random walk can be described as follows:
Starting Point: The process begins at a defined origin, typically zero (0).
Steps: At each time step, the position changes by a fixed amount. For a
one-dimensional simple random walk, this change is typically +1 or -1, each
with probability 1/2. This represents a step to the right or a step to the left.
Independence: Each step is independent of previous steps. The direction of
the movement (right or left in one dimension) does not depend on the
position or movement at any previous time step.
Discrete Time and Space: The steps occur at regular time intervals, and the
positions are typically integer values.

Properties
Symmetry: If the steps are symmetric (i.e., the probability of moving in any
direction is the same), the random walk is considered unbiased.

Recurrence: In one and two dimensions, a simple random walk is recurrent,
meaning that it will eventually return to the starting point with probability 1.
In three or more dimensions, it is transient, meaning there is a non-zero
probability that it never returns to the starting point.

Distribution of Position: As the number of steps increases, the distribution of
the position of a simple random walk becomes increasingly well-approximated
by a normal distribution, due to the Central Limit Theorem.
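
As an illustration of this normal approximation, the following sketch (an assumption-level example, not from the slides) draws many 400-step walks and checks that the spread of positions behaves as a normal distribution predicts:

import math
import random

def position_after(num_steps):
    # Simple random walk: sum of independent +1/-1 steps.
    return sum(random.choice((1, -1)) for _ in range(num_steps))

num_steps, trials = 400, 20_000
samples = [position_after(num_steps) for _ in range(trials)]
mean = sum(samples) / trials
std = (sum((x - mean) ** 2 for x in samples) / trials) ** 0.5
within_one_std = sum(1 for x in samples if abs(x - mean) <= std) / trials
print(mean)                       # near 0
print(std, math.sqrt(num_steps))  # near sqrt(400) = 20
print(within_one_std)             # roughly 0.68, as a normal distribution predicts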
2) self-avoiding random walk is a path that a random walker takes such that
it never visits the same point more than once. This concept arises primarily in
the context of mathematical studies related to lattice networks and is also
significant in statistical mechanics, particularly in the study of polymer chains.
Here's how a self-avoiding walk (SAW) is generally structured (a short code
sketch follows below):
Starting Point: The walk begins at a specific starting point on a lattice, such as
a point in a 2-dimensional grid.
Steps: At each step, the walker randomly selects a direction and moves to the
next lattice point.
Constraint: The chosen direction must lead to a lattice point that has not been
previously visited by the walker. If there are no unvisited neighboring points
available, the walk either stops or is discarded, depending on the context of the
study or the specific rules of the model.
Length: The length of the walk can either be fixed in advance, or the walk can
continue until no further steps are possible.
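
A minimal Python sketch of this procedure (illustrative code, using the "stop when trapped" rule described above) might look like:

import random

def self_avoiding_walk(max_steps):
    # Grow a walk on the square lattice, stopping early if every neighbor is visited.
    path = [(0, 0)]
    visited = {(0, 0)}
    for _ in range(max_steps):
        x, y = path[-1]
        unvisited = [p for p in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                     if p not in visited]
        if not unvisited:        # trapped: no unvisited neighboring point
            break
        step = random.choice(unvisited)
        visited.add(step)
        path.append(step)
    return path

walk = self_avoiding_walk(50)
print(len(walk) - 1, "steps taken")   # may be fewer than 50 if the walk traps itself
print(walk[:5])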

Properties
Exclusion Principle: Each site on the lattice can only be visited once, which
differentiates SAWs from simple random walks and introduces significant
complexity, especially in calculating probabilities of various paths.

Growth of Walks: The number of possible self-avoiding walks grows
exponentially with the number of steps. The growth rate, often
represented by the connective constant, is lattice-dependent.

Scaling and Universality: The properties of long SAWs exhibit scaling behaviors
that are believed to be universal, meaning that they do not depend on the
details of the lattice but only on the dimensionality of the space.

High-Dimensional Behavior: In dimensions four and higher, the properties of
SAWs approach those of ordinary random walks, due to the decreased
likelihood of self-intersection.
3) biased random walk is a stochastic process in which there is a non-equal
probability of taking steps in different directions. Unlike a symmetric or simple
random walk where each step has an equal likelihood of going in either
direction, a biased random walk includes a preference or bias towards one
direction.

Properties
Directional Bias: In a one-dimensional biased random walk, at each step, the
probability of moving right (+1) might be p and the probability of moving left
(-1) might be q, with p + q = 1 and typically p ≠ q. The bias (p > 0.5 for
rightward, p < 0.5 for leftward) determines the dominant direction of the walk.

Expectation and Variance: The expected position after n steps is given by
E[X_n] = n(2p - 1), assuming the walker starts at 0 and moves right with
probability p and left with probability 1-p. The variance of the position after
n steps is Var(X_n) = 4np(1-p). This formula shows that even with a bias the
position distribution spreads out, but the spread rate depends on how strong
the bias is. (Both formulas are checked numerically in the sketch after this list.)
Long-term Behavior and Recurrence:
• A biased random walk in one dimension is transient, meaning that unlike a
symmetric random walk (p = q = 0.5), it has a nonzero probability of never
returning to the starting point. The likelihood of ever returning to the origin
decreases as the bias away from the origin increases.
• In more than one dimension, a biased random walk with a nonzero drift is
likewise transient.
Asymptotic Behavior: The position of the walker asymptotically tends to infinity
(either positive or negative) almost surely, and the direction is determined by
the bias (p versus q).
Hitting Probabilities: In a biased random walk, the probability of eventually
hitting a specific state or set of states can be computed, and unlike the
symmetric case, these probabilities are not always 1 (certainty). The
calculations for these probabilities depend significantly on the values of p
and q.
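
To make the expectation and variance formulas concrete, here is a short simulation (illustrative only, not part of the original slides) that estimates both quantities and compares them with n(2p - 1) and 4np(1-p):

import random

def biased_position(num_steps, p):
    # +1 with probability p, -1 with probability 1 - p.
    return sum(1 if random.random() < p else -1 for _ in range(num_steps))

num_steps, p, trials = 1000, 0.6, 20_000
samples = [biased_position(num_steps, p) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials
print(mean, num_steps * (2 * p - 1))      # empirical mean vs. E[X_n] = 200
print(var, 4 * num_steps * p * (1 - p))   # empirical variance vs. Var(X_n) = 960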

Self-avoiding random walk

In mathematics, a self-avoiding walk (SAW) is a sequence of
moves on a lattice (a lattice path) that does not visit the same
point more than once. This is a special case of the graph-theoretical
notion of a path. A self-avoiding polygon (SAP) is a closed
self-avoiding walk on a lattice. Very little is known rigorously
about the self-avoiding walk from a mathematical perspective,
although physicists have provided numerous conjectures that are
believed to be true and are strongly supported by numerical
simulations.
1. What Is a Self-Avoiding Walk (SAW)?
1. A SAW is a sequence of moves on a lattice (like a grid)
where each step takes you from one lattice point to
another.
2. Crucially, a SAW never revisits the same point during
its journey.
3. Think of it as a path that avoids intersecting itself, akin
to a traveler navigating a maze without retracing their
steps.
4. SAWs have applications in modeling thread- and loop-
like molecules, such as proteins.

Properties of the self-avoiding random walk

1. Non-intersection is inherent to the nature of these walks.
By definition, a self-avoiding random walk is a path that does not
intersect itself. This means that once a point is visited, it
cannot be revisited during the same walk. Thus, every point in a
self-avoiding random walk is a non-intersecting point.

2. Lattice-Based Movement: in a lattice-based self-avoiding
random walk, the walker starts at a specific point and moves
through the lattice without revisiting or crossing any previously
visited points. Here's how lattice-based movement works:
•Directional Choices: The walker has a limited set of
possible moves at each step, constrained by the lattice
structure. In a square lattice, the options are typically up,
down, left, or right. In a cubic lattice, the choices include
upward, downward, leftward, rightward, forward, and
backward.

•Self-Avoidance: The key constraint is that the walker
cannot revisit any point it has already visited. This constraint
affects the direction of movement, as some directions may
be blocked by prior steps.

•Boundaries and Limitations: In some cases, the lattice has
boundaries or other obstacles that limit the walker's
movement. This adds an additional constraint to the walk,
further complicating the path.

3. Number of Configurations: the number of configurations in a
self-avoiding random walk grows exponentially with the number of
steps. This exponential growth reflects the combinatorial nature of
SAWs, where each additional step introduces new constraints and
reduces the number of valid paths.
•Critical Exponents and Scaling
Researchers often study critical exponents to understand the
scaling behavior of self-avoiding random walks. The number of
configurations typically grows like a constant (the connective
constant) raised to the power of the number of steps, multiplied
by a power-law correction whose critical exponent indicates the
precise form of the growth.

On the square lattice, for example, the number of N-step walks grows
roughly like c^N, where c is the growth constant (the connective
constant) and N is the number of steps. Determining this constant is a
key challenge in the study of SAWs.
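
One way to get a feel for this growth constant is to count all short self-avoiding walks by brute force. The sketch below (illustrative only, and practical only for small N) enumerates walks on the square lattice and prints the crude estimate c_N^(1/N):

def count_saws(n, current=(0, 0), visited=None):
    # Exhaustively count self-avoiding walks of length n from the origin
    # on the square lattice (feasible only for small n).
    if visited is None:
        visited = {current}
    if n == 0:
        return 1
    x, y = current
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:
            visited.add(nxt)
            total += count_saws(n - 1, nxt, visited)
            visited.remove(nxt)
    return total

for n in range(1, 9):
    c_n = count_saws(n)
    print(n, c_n, round(c_n ** (1 / n), 3))   # the last column slowly approaches the growth constant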
