Random Signals
Basic Probability Theory
Probability theory begins with three basic components:
1. The set of all possible outcomes (elementary events, sample space), denoted Ω.
2. The set S of events, i.e. subsets Sk of outcomes.
3. A probability measure P.
Specification of the triple (Ω, S, P) defines the probability space which models a real-world measurement or experimental process.
Example: how many events occur as a result of one roll of a die?
Ω = {all results (outcomes) of the die roll} = {1, 2, 3, 4, 5, 6},
S = {all possible sets Sk of outcomes} = {∅, {1}, …, {6}, {1, 2}, …, {5, 6}, …, {1, 2, 3, 4, 5, 6}},
P = probability of each set/event Sk.
Answer: 32 events occur (every set Sk that contains the observed outcome), out of all 64 possible sets.
n = number of elementary events = 6,  k = size of a possible subset Sk.
Number of all possible subsets Sk (events) of Ω, from ∅ up to Ω:
Σ_{k=0}^{n} C(n, k) = Σ_{k=0}^{6} C(6, k) = 1 + 6 + 15 + 20 + 15 + 6 + 1 = 64,
where C(n, k) = n! / (k! (n − k)!).
The binomial coefficients C(n, k) form Pascal's triangle (Blaise Pascal, 1623-1662). Pascal created the concept of expected value.
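As a quick numerical check of the count above, here is a short Python sketch (not part of the original slides) that sums the binomial coefficients and counts the events that occur after one roll:

```python
from math import comb

# Number of events (subsets) on a sample space of n = 6 outcomes:
n = 6
total = sum(comb(n, k) for k in range(n + 1))   # 1 + 6 + 15 + 20 + 15 + 6 + 1
print(total)                                    # 64, i.e. 2**n

# Events that occur after one roll: all subsets containing the observed
# outcome, i.e. 2**(n - 1) of them.
print(2 ** (n - 1))                             # 32
```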
An event is a result of some experiment.
From the Kolmogorov axioms one can deduce further useful rules (theorems, not additional axioms) for calculating probabilities.
Random Process and Random Signal
A random process is a collection of time functions, or signals, corresponding to the various outcomes (results ξ) of a random experiment. For each outcome ξ there exists a sample function, so the process X(t, ξ) is a set of sample functions, or realizations, x1(t), x2(t), …, xN(t), each of which we will consider as a deterministic function of time.
A random signal x(t) can be any signal from the set of signals X(t, ξ) = {x1(t), x2(t), …, xξ(t), …}, which is the sample space for the random process X(t, ξ).
A random process is not just one signal but an ensemble of signals, as shown schematically in the figure beside, for which the outcome of the probabilistic experiment could be any of the four waveforms indicated. In our model each waveform is deterministic, but the process is probabilistic or random because it is not known a priori which waveform will be generated by the probabilistic experiment.
Consequently, before obtaining the result of a probabilistic experiment (a priori), there is uncertainty about what signal will be produced. After the experiment (a posteriori), the result is completely determined.
They will be synonyms for us:
• random process or function set X(t, ξ),
• random signal x(t).
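The a priori / a posteriori distinction can be sketched in Python (a hypothetical toy ensemble, not from the slides): each realization is a deterministic function fixed by its seed, and the experiment merely selects one of them.

```python
import random

random.seed(0)

def realization(seed, length=100):
    """One deterministic sample function x_k(t), fixed by its seed."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(length)]

# The ensemble X(t, xi): here, four possible waveforms.
ensemble = [realization(k) for k in range(4)]

# A priori we do not know which waveform the experiment produces;
# a posteriori the result is one completely determined waveform.
x = random.choice(ensemble)
print(len(x))   # 100 samples of the selected deterministic realization
```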
Cumulative Distribution Function
The cumulative distribution function (CDF) of a random signal x(t) is
Fx(x) = Pr{x(t) ≤ x}.
The CDF has the following properties:
0 ≤ Fx(x) ≤ 1; Fx(x) is non-decreasing; Fx(−∞) = 0 and Fx(+∞) = 1;
Pr{a < x(t) ≤ b} = Fx(b) − Fx(a).
For quantized signals, when xi ∈ DX = {x1, x2, x3, …, xk, …, xK}, the corresponding probabilities are defined as
pk = Pr{x(t) = xk},
and then the CDF is the staircase function
Fx(x) = Σ_{k: xk ≤ x} pk.
Probability Density Function - PDF
The PDF is defined as the derivative of the cumulative distribution function:
px(x) = dFx(x)/dx.
By (2.6d), for continuous distributions (and continuous random signals x(t)) the probability at a single point is always zero, being an integral over a single point.
Expected Value – Mean Value
In probability theory, the expected value of a random variable is intuitively the long-run average value of repetitions of the experiment it represents. For example, the expected value of a six-sided die roll is 3.5, because the average of an extremely large number of die rolls is practically always nearly equal to 3.5.
More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values.
The general definition for a continuous random signal x(t), or a random variable defined on a probability space (Ω, S, P), is
μx = E[x(t)] = ∫_{−∞}^{∞} x px(x) dx,   (2.7)
and for a quantized signal
μx = E[x(t)] = Σ_{k=1}^{K} xk pk.   (2.8)
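The die example above can be checked with a short Python sketch (not part of the original slides), evaluating eq. (2.8) exactly and then approximating it by a long-run average:

```python
import random
from fractions import Fraction

# Probability-weighted average for a fair six-sided die (cf. eq. 2.8):
values = [1, 2, 3, 4, 5, 6]
mu = sum(x * Fraction(1, 6) for x in values)
print(mu)   # 7/2, i.e. 3.5

# Long-run average of many rolls approaches the expected value:
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(round(sum(rolls) / len(rolls), 1))   # ≈ 3.5
```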
Quantized Random Signals - Probability Mass Function (PMF - pk)
[Figure: the PMF pk versus the values xk, and the corresponding staircase CDF.]
μx = Σi xi pi = 1.0·0.2 + 1.5·0.3 + 2.0·0.3 + 3.7·0.05 + 6.0·0.15 = 2.335.
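The same weighted sum can be reproduced in a few lines of Python (a sketch using the PMF values from the slide):

```python
# PMF of the quantized signal from the slide (values x_k, probabilities p_k):
xs = [1.0, 1.5, 2.0, 3.7, 6.0]
ps = [0.2, 0.3, 0.3, 0.05, 0.15]

assert abs(sum(ps) - 1.0) < 1e-12          # a valid PMF sums to 1

mu = sum(x * p for x, p in zip(xs, ps))
print(round(mu, 3))                        # 2.335
```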
Expected Value Properties
For statistically independent signals, when the joint (cross) probability density function factors as pxy(x, y) = px(x) py(y), then
E[x(t) y(t)] = E[x(t)] E[y(t)].
Variance
In probability theory, variance measures how far a set of numbers is spread out.
A variance of zero indicates that all the values are identical.
Variance is always non-negative: a small variance indicates that the data points tend
to be very close to the mean (expected value) and hence to each other, while a high
variance indicates that the data points are very spread out around the mean and from
each other.
The variance of a set of samples represented by a random variable X is its second central moment, the expected value of the squared deviation from the mean μ = E[X]:
Var[X] = σ² = E[(X − μ)²]   (2.9a)
= E[X²] − μ².   (2.9b)
Properties:
Var[X + Y] = Var[X] + Var[Y]   (2.12) for independent signals.
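Forms (2.9a) and (2.9b) can be verified numerically for the fair-die example (a Python sketch, not from the slides):

```python
# Variance of a fair die via both forms (2.9a) and (2.9b):
xs = [1, 2, 3, 4, 5, 6]
p = 1 / 6

mu = sum(x * p for x in xs)                          # 3.5
var_a = sum((x - mu) ** 2 * p for x in xs)           # E[(X - mu)^2]
var_b = sum(x ** 2 * p for x in xs) - mu ** 2        # E[X^2] - mu^2

print(round(var_a, 4), round(var_b, 4))              # both 35/12 ≈ 2.9167
```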
For a random signal (process) representing an electrical signal - for example a voltage u(t) - we can identify some common terms as follows:
• the mean value ū is the DC component of u(t), u_DC;
• the square of the mean ū² is the DC power of u(t), u_DC²;
• the mean-squared value E[u²] is the total average power P of u(t);
• the variance σu² is the AC power (the power in the time-varying part u_AC(t));
• the standard deviation σu is the RMS value of the time-varying part u_AC(t), but it is not the RMS value of the whole signal, u_RMS = √P.
Be sure not to make the common mistake of confusing "the square of the mean" with "the mean-squared value", which means "the mean of the square". In general the mean square is greater than or equal to the square of the mean (they are equal only when the signal has no time-varying part).
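These identifications can be illustrated with a hypothetical test signal in Python: a DC level plus a sine, sampled over whole periods, so that mean, total power, and AC power come out as the formulas predict.

```python
import math

# u(t) = DC component + sinusoidal AC part, sampled over whole periods.
N = 1000
u_dc, amp = 2.0, 3.0
u = [u_dc + amp * math.sin(2 * math.pi * k / N) for k in range(N)]

mean = sum(u) / N                          # ≈ u_dc  (DC component)
total_power = sum(v * v for v in u) / N    # E[u^2]  (total average power)
ac_power = total_power - mean ** 2         # variance = AC power
print(round(mean, 6), round(total_power, 6), round(ac_power, 6))
# mean ≈ 2.0, total power ≈ u_dc^2 + amp^2/2 = 8.5, AC power ≈ 4.5
```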
For dependent (correlated) random signals:
Var[x(t)] = V[x] = Σk (xk − μx)² pk.
Uniform Probability Density Function (continuous case of p(x))
Expected value: μx = (a + b)/2;
variance: σx² = (b − a)²/12.
When a = −A, b = +A, then μx = 0 and σx² = A²/3.
For a given mean μx and variance σx², the parameters of the uniform PDF follow by inverting these formulas.
Uniform Probability Density Function - discrete case
Probability Mass Function (PMF or pmf)
A simple example of the discrete uniform distribution is throwing a fair regular die. The possible values are 1, 2, 3, 4, 5, 6, and each time the die is thrown the probability of a given score is 1/6.
If two ideal dice are thrown and their values added, the resulting distribution is no longer uniform, since not all sums have equal probability!
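The two-dice distribution can be computed exactly in Python (a sketch convolving the two uniform PMFs):

```python
from collections import Counter
from fractions import Fraction

# Distribution of the sum of two fair dice: convolve two uniform PMFs.
p = Fraction(1, 6)
pmf = Counter()
for a in range(1, 7):
    for b in range(1, 7):
        pmf[a + b] += p * p

print(pmf[7])    # 1/6, the most probable sum
print(pmf[2])    # 1/36 — clearly not uniform
```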
Normal Probability Density Function (continuous p(x))
A random signal (or process) x(t) is said to be normally (Gaussian) distributed with mean µ and variance σ² if its probability density function is:
f(x) = (1 / (σ√(2π))) e^{−(x − µ)² / (2σ²)}.   (2.16)
The normalization constant follows from the Gaussian integral
∫_{−∞}^{∞} e^{−(ax)²} dx = √π / a,   where a = 1 / (σ√2).
The normal distribution (Carl Friedrich Gauss, 1777-1855) is sometimes informally called the bell curve.
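That the density (2.16) integrates to 1 can be checked numerically (a Python sketch using simple trapezoidal integration over ±8σ):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Gaussian density, eq. (2.16)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Trapezoidal integration over ±8 sigma: the density integrates to 1.
a, b, n = -8.0, 8.0, 100_000
h = (b - a) / n
area = sum(normal_pdf(a + i * h) for i in range(1, n)) * h
area += (normal_pdf(a) + normal_pdf(b)) * h / 2
print(round(area, 6))   # 1.0
```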
[Figure: the normal distribution N(µ, σ).]
Probability density of the sum of the signals
If u(t) = x(t) + y(t) and the signals are statistically independent, that is pxy(x, y) = px(x) py(y), then the CDF of u is Fu(u) = Pr{x(t) + y(t) ≤ u}, and the probability density function of the sum of independent signals is the linear convolution of the marginal density functions:
pu(u) = ∫_{−∞}^{∞} px(x) py(u − x) dx = (px * py)(u).   (2.17)
The Central Limit Theorem
The central limit theorem is one of the great results of mathematics. It explains the omnipresent occurrence of the normal distribution in nature.
The theorem states that the average of many independent and identically distributed (i.i.d.) random variables with finite variance tends towards a normal distribution, irrespective of the distribution followed by the original random variables.
The Central Limit Theorem describes the relationship between the distribution of sample means and the population that the samples are taken from. It tells us that the distribution of sample means has the same center as the population, but is not as spread out.
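A minimal Monte-Carlo sketch of the theorem (in Python, with illustrative parameter choices): averaging n uniform variables keeps the population mean 0.5 but shrinks the spread to σ/√n.

```python
import random
import statistics

# Average of n i.i.d. uniform(0, 1) variables, repeated many times:
random.seed(42)
n, trials = 30, 20_000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

# Same center as the population, but less spread out:
print(round(statistics.fmean(means), 2))    # ≈ 0.5 (population mean)
print(round(statistics.stdev(means), 3))    # ≈ sqrt(1/12)/sqrt(30) ≈ 0.053
```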
Correlations
A random signal x(t) can be any signal from a set of signals {x1(t), x2(t), …, xk(t), …}, which is the sample space for the random signal x(t). The probability that x(t) will equal xk(t) is Pr{x(t) = xk(t)} = px(t)[xk(t)].
b) the auto- and cross-correlation functions are functions of the time difference only, named the lag τ.
When the ensemble average is equal to the corresponding time averages, the signal is called ergodic. For ergodic signals, for every index k,
μx = E[x(t)] = ⟨xk(t)⟩ (the time average of any single realization),
and
Rxx(τ, k) = lim_{T→∞} (1/T) ∫₀ᵀ xk(t) xk(t + τ) dt = Rxx(τ).
Ergodicity implies that each signal in the sample space is representative of the whole set. For a process to be ergodic it must be stationary. The converse is not true.
All signals studied in this course will be ergodic.
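Ergodicity in the mean can be illustrated with a hypothetical toy process in Python: a sinusoid with random phase, for which the time average of one long realization and the ensemble average at a fixed instant agree (both ≈ 0).

```python
import math
import random

random.seed(7)
N = 10_000

def realization(phase, n=N):
    """One sample function of a random-phase sinusoid (period 100 samples)."""
    return [math.sin(2 * math.pi * k / 100 + phase) for k in range(n)]

# Time average of a single realization:
x = realization(random.uniform(0, 2 * math.pi))
time_avg = sum(x) / len(x)

# Ensemble average at a fixed time t0, over many realizations:
t0 = 17
ens_avg = sum(math.sin(2 * math.pi * t0 / 100 + random.uniform(0, 2 * math.pi))
              for _ in range(20_000)) / 20_000

print(round(time_avg, 2), round(ens_avg, 2))   # both ≈ 0
```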
A random process is called strict-sense (or strong-sense) stationary (SSS) if its probability structure (e.g. PDF, CDF) is invariant with time. Note that for a Gaussian or a uniform process WSS implies SSS, because those processes are entirely determined by their first and second moments (see pp. 29, 33-35).
Finally, for ergodic processes (power signals) the correlation functions are defined as
Rxx(τ) = lim_{T→∞} (1/T) ∫₀ᵀ x(t) x(t + τ) dt,   (2.22) an even function,
Rxy(τ) = lim_{T→∞} (1/T) ∫₀ᵀ x(t) y(t + τ) dt,   (2.23a)
Ryx(τ) = lim_{T→∞} (1/T) ∫₀ᵀ y(t) x(t + τ) dt,   (2.23b)
where the cross-correlations are in general not even; they satisfy Rxy(τ) = Ryx(−τ).
In automatic control it is customary to compute only the correlation of the output (usually denoted y) with the input (denoted x or u). Then the order of the indices does not indicate the order of the signals; it is merely alphabetical.
[Figure captions: "(x: result, after-effect; y: reason, cause)" and "(y: result, after-effect; x: reason, cause)".]
Remember
Rxx(τ) = Rxx(−τ),
Rxx(0) = E[x(t)²] = σx² + μx² = Px,
Rxx(0) ≥ Rxx(τ),
lim_{τ→∞} Rxx(τ) = μx².
If the covariance Cov_xy(τ) = 0, we say that the signals x(t) and y(t) are uncorrelated. Note again that the term "uncorrelated" in its common usage means that the processes have zero covariance rather than zero correlation, because
Cov_xy(τ) = Rxy(τ) − μx μy.
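The properties above can be checked numerically on a hypothetical test signal (a DC level plus a sine; a Python sketch using a circular discrete-time estimate of Rxx(τ)):

```python
import math

# Test signal: u_dc + sine over exactly one period of N samples.
N = 1000
u_dc, amp = 1.0, 2.0
x = [u_dc + amp * math.sin(2 * math.pi * k / N) for k in range(N)]

def Rxx(tau):
    """Circular time-average estimate of the autocorrelation at lag tau."""
    return sum(x[k] * x[(k + tau) % N] for k in range(N)) / N

print(round(Rxx(0), 6))                     # 3.0 = mu^2 + sigma^2 = total power
print(abs(Rxx(100) - Rxx(-100)) < 1e-9)     # True: even function
print(Rxx(0) >= Rxx(250))                   # True: maximum at tau = 0
```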