Matrix Review


STA 312 SPRING 2023

Matrix Definition
 A matrix is a rectangular array of elements arranged in R rows and C
columns.
 Matrices are denoted by capital, bold-face letters. Any given element of a
matrix is denoted by the corresponding lower case letter using subscripts to
denote the row and column labels.
 Example: x_ij is the element from the ith row and jth column of the matrix X.

Example: a matrix X with I rows and J columns,

X = [ x_11 x_12 … x_1J ]
    [ x_21 x_22 … x_2J ]
    [  ⋮    ⋮   ⋱   ⋮  ]
    [ x_I1 x_I2 … x_IJ ]

The dimension of the above matrix is I × J.

 A matrix is a square matrix if the number of rows equals the number of columns.
 The identity matrix has 1’s on the diagonal and zeros everywhere else.
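These definitions are easy to illustrate numerically; the following is a minimal sketch using Python's numpy library (numpy is our illustration choice, not part of the notes):

```python
import numpy as np

# A matrix with R = 2 rows and C = 3 columns
X = np.array([[1, 2, 3],
              [4, 5, 6]])

# The dimension of X is (rows, columns)
dim = X.shape                # (2, 3)

# Element x_ij sits in row i, column j (numpy counts from 0,
# so x_12 -- 1st row, 2nd column -- is X[0, 1])
x_12 = X[0, 1]

# The 3 x 3 identity matrix: 1's on the diagonal, zeros elsewhere
I = np.eye(3)
```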

Vectors
 A column vector is a matrix having only one column.
 A row vector is a matrix having only one row.

Transposition
 The transpose of a matrix X, denoted X′, is simply the matrix that is
obtained by interchanging the rows and the columns.
 In particular, row vectors are denoted by the transpose of the corresponding
column vector.
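A quick numerical sketch of transposition (again in numpy, an illustration choice):

```python
import numpy as np

X = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3

# Transposing interchanges rows and columns: X' is 3 x 2
Xt = X.T

# A row vector is the transpose of the corresponding column vector
col = np.array([[1], [2], [3]])  # 3 x 1 column vector
row = col.T                      # 1 x 3 row vector
```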

Dr. Re-Mi Hage

Matrix Addition
 Sizes of matrices must be identical to add (or subtract) them
 Simply add (or subtract) the corresponding elements
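For instance (a numpy sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# A and B have identical sizes, so add (or subtract) element by element
S = A + B            # [[ 6,  8], [10, 12]]
D = A - B            # [[-4, -4], [-4, -4]]
```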

Scalar Multiplication
 To multiply a matrix by a constant, simply multiply each element in the
matrix by that constant.
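For instance (a numpy sketch):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# Multiply each element of A by the constant 3
B = 3 * A            # [[3, 6], [9, 12]]
```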


Multiplication of Matrices
 Element ij is the sum-product of the ith row of the first matrix and the jth
column of the second matrix.
 Requires the number of columns of the first matrix match the number of
rows of the second matrix.
 In general, A × B ≠ B × A.
 The dimensions of a product follow the rule A(m×n) B(n×p) = (AB)(m×p): an m × n matrix times an n × p matrix gives an m × p matrix.
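The dimension rule and non-commutativity are easy to see numerically (a numpy sketch):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])       # 2 x 3
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])          # 3 x 2

# Columns of A (3) match rows of B (3), so AB exists and is 2 x 2.
# Element ij of AB is the sum-product of row i of A and column j of B,
# e.g. (AB)_11 = 1*1 + 2*0 + 3*1 = 4.
AB = A @ B

# BA is also defined here, but it is 3 x 3 -- so AB != BA in general
BA = B @ A
```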

Special Definitions
 A matrix is said to be symmetric if it is equal to its transpose.
 A matrix is called a diagonal matrix if all off-diagonal elements are zero.
 The identity matrix is a diagonal matrix consisting of a diagonal of ones.

Inverses
 The inverse of a matrix A is the matrix A⁻¹ such that AA⁻¹ = A⁻¹A = I.
 An inverse exists only if the original matrix is square and of full rank.
 Note: The rank of a matrix is the maximum number of linearly independent
columns.
 Suppose that A is a diagonal matrix of the form A = diag(a_11, a_22, …, a_nn), with every diagonal entry nonzero. Then its inverse is the diagonal matrix A⁻¹ = diag(1/a_11, 1/a_22, …, 1/a_nn).
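A sketch of the diagonal case in numpy (the diagonal entries below are arbitrary illustration values):

```python
import numpy as np

# A diagonal matrix with nonzero diagonal entries
A = np.diag([2.0, 4.0, 5.0])

# Its inverse is diagonal, with the reciprocals on the diagonal
A_inv = np.diag([1 / 2.0, 1 / 4.0, 1 / 5.0])

# Check: A A_inv should be the 3 x 3 identity
check = A @ A_inv
```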


In the case of a matrix of dimension 2 × 2, the inversion procedure can be
accomplished by hand easily even when the matrix is not diagonal. In the 2 × 2
case, if

A = [ a b ]        then        A⁻¹ = 1/(ad − bc) [  d  −b ]
    [ c d ]                                      [ −c   a ]

provided that ad − bc ≠ 0.
Example:

A = [ 2 2 ]        then        A⁻¹ = 1/(2(4) − 2(3)) [  4 −2 ]  =  [  2   −1 ]
    [ 3 4 ]                                          [ −3  2 ]     [ −3/2  1 ]

As we check, we have:

AA⁻¹ = [ 2 2 ][  2   −1 ] = [ 2(2) − 2(3/2)   2(−1) + 2(1) ] = [ 1 0 ] = I
       [ 3 4 ][ −3/2  1 ]   [ 3(2) − 4(3/2)   3(−1) + 4(1) ]   [ 0 1 ]
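The hand computation above can be verified numerically (a numpy sketch):

```python
import numpy as np

A = np.array([[2.0, 2.0],
              [3.0, 4.0]])

# 2 x 2 rule: A_inv = 1/(ad - bc) * [[d, -b], [-c, a]]
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # ad - bc = 2
A_inv = (1 / det) * np.array([[ A[1, 1], -A[0, 1]],
                              [-A[1, 0],  A[0, 0]]])

# A A_inv should equal the identity
check = A @ A_inv
```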

Random Matrices
Consider a matrix of random variables
U = [ u_11 u_12 … u_1c ]
    [ u_21 u_22 … u_2c ]
    [  ⋮    ⋮   ⋱   ⋮  ]
    [ u_n1 u_n2 … u_nc ]

When we write the expectation of a matrix, this is shorthand for the matrix of
expectations. Specifically, suppose that the joint probability function of
u_11, u_12, …, u_1c, …, u_n1, …, u_nc is available to define the expectation
operator. Then we define


      [ Eu_11 Eu_12 … Eu_1c ]
E U = [ Eu_21 Eu_22 … Eu_2c ]
      [   ⋮     ⋮   ⋱    ⋮  ]
      [ Eu_n1 Eu_n2 … Eu_nc ]

Consider the joint probability function for the random variables y1,...,yn and the
corresponding expectations operator. Then
        [ y_1 ]   [ Ey_1 ]
E y = E [  ⋮  ] = [   ⋮  ]
        [ y_n ]   [ Ey_n ]

By the linearity of expectations, for a nonrandom matrix A and a nonrandom vector B, we have

E(Ay + B) = A E y + B

The variance of a vector of random variables is called the variance-covariance
matrix. It is defined by

Var y = E[(y − E y)(y − E y)′]

          [ y_1 − Ey_1 ]
Var y = E [     ⋮      ] ( y_1 − Ey_1  …  y_n − Ey_n )
          [ y_n − Ey_n ]

      [ Var y_1        Cov(y_1, y_2)  …  Cov(y_1, y_n) ]
    = [ Cov(y_2, y_1)  Var y_2        …  Cov(y_2, y_n) ]
      [      ⋮              ⋮         ⋱        ⋮       ]
      [ Cov(y_n, y_1)  Cov(y_n, y_2)  …  Var y_n       ]

This follows since E[(y_i − Ey_i)(y_j − Ey_j)] = Cov(y_i, y_j) for i ≠ j, and Cov(y_i, y_i) = Var y_i.
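The structure of this matrix can be checked numerically: numpy's np.cov estimates a variance-covariance matrix from data (a sketch with simulated values):

```python
import numpy as np

# Simulate n = 3 random variables with 10,000 joint observations
rng = np.random.default_rng(seed=0)
y = rng.normal(size=(3, 10_000))

# Estimated variance-covariance matrix: variances on the diagonal,
# covariances off the diagonal
V = np.cov(y)

# The matrix is symmetric, since Cov(y_i, y_j) = Cov(y_j, y_i)
is_symmetric = np.allclose(V, V.T)
```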

In the case that y_1, …, y_n are mutually uncorrelated, we have Cov(y_i, y_j) = 0 for i ≠ j, and then

        [ Var y_1     0       …     0    ]
Var y = [    0     Var y_2    …     0    ]
        [    ⋮        ⋮       ⋱     ⋮    ]
        [    0        0       …  Var y_n ]

Further, if the variances are identical so that Var y_i = σ² for every i, then we can write

Var y = σ²I,

where I is the n × n identity matrix.

It can be shown that Var (Ay + B) = Var (Ay) = A (Var y) A′


In particular, for a nonrandom vector a = (a_1, …, a_n)′,

Var(a′y) = a′ (Var y) a = ( a_1 … a_n ) (Var y) ( a_1, …, a_n )′

         = Σ_{i=1}^{n} a_i² Var y_i + 2 Σ_{i=2}^{n} Σ_{j=1}^{i−1} a_i a_j Cov(y_i, y_j)
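The expansion can be checked on a small example (a numpy sketch; the matrix V below is an arbitrary assumed variance-covariance matrix):

```python
import numpy as np

# An assumed 3 x 3 variance-covariance matrix (symmetric)
V = np.array([[4.0, 1.0, 0.0],
              [1.0, 9.0, 2.0],
              [0.0, 2.0, 16.0]])
a = np.array([1.0, 2.0, 3.0])

# Quadratic form a'(Var y)a
quad = a @ V @ a

# Same quantity via the expansion:
# sum_i a_i^2 Var y_i + 2 sum_{i=2}^n sum_{j=1}^{i-1} a_i a_j Cov(y_i, y_j)
n = len(a)
expanded = sum(a[i] ** 2 * V[i, i] for i in range(n))
expanded += 2 * sum(a[i] * a[j] * V[i, j]
                    for i in range(1, n)
                    for j in range(i))
```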

