Draft 1

On the implications of redundant structural information on the effectiveness of the Structural Vibration Filter (SVF).

Brendan Vidmar July 8, 2011


This short write-up intends to show, physically and mathematically, the reasons for the reduced accuracy of the SVF when co-linear inputs are used to obtain structural information. This reduction in accuracy has been observed experimentally in the proof-of-concept SVF tests.

Background

First, some mathematical preliminaries will be introduced, the most important being how to solve a linear system of the form Ax = b when the matrix A is rectangular (n × m). If A is n × m, then x is m × 1 and b is n × 1. In the case of the SVF, we are solving Hp = f, where f is a vector of the force outputs from a modal impact test, H is the Frequency Response Function (FRF) matrix, and p is the input, all in the frequency domain. The goal is to solve for the input using the output as well as the FRF, which is built from structural information about the system. Initially, one would try p = H^{-1} f to solve for p. But how do we do this if H is rectangular? Commonly, when solving this equation on a digital computer, the QR decomposition method is used. The idea behind QR decomposition is essentially to factor our rectangular matrix H into two matrices:

    H = QR

where Q is n × m with orthonormal columns (linearly independent, each of length 1) and R is an m × m upper triangular matrix with a non-zero determinant (i.e., invertible). Now we will dive into the details behind this method.

First, a review of how to compute a set of orthonormal vectors from H in order to build Q. Suppose we have a set of vectors {v_1, v_2, ..., v_n} and wish to convert them to an orthogonal basis {u_1, u_2, ..., u_n}. First, we set u_1 = v_1 and construct the rest of the vectors orthogonal to it. Define u_p = \operatorname{proj}_{u_1} v_2 (u_p is the portion of v_2 along u_1). Then u_2 = v_2 - u_p, and u_2 is orthogonal to u_1. To find the rest of the u_n's, a similar process is followed, i.e.,

    u_3 = v_3 - \operatorname{proj}_{u_1} v_3 - \operatorname{proj}_{u_2} v_3

and so on. To convert the new orthogonal basis to an orthonormal one, just divide each new vector by its length.

Next, we will need a theorem from linear algebra. Let S = {a_1, a_2, ..., a_n} be an orthogonal basis for an inner product space and let b be any vector from that space. Then

    b = \frac{\langle b, a_1 \rangle}{\|a_1\|^2} a_1 + \frac{\langle b, a_2 \rangle}{\|a_2\|^2} a_2 + \cdots + \frac{\langle b, a_n \rangle}{\|a_n\|^2} a_n    (1)

and if S is orthonormal, then

    b = \langle b, a_1 \rangle a_1 + \langle b, a_2 \rangle a_2 + \cdots + \langle b, a_n \rangle a_n

A short, non-rigorous proof of this is as follows. Essentially, Eqn. 1 says that b is a vector composed of the a_i's, each multiplied by some scalar k_i:

    b = k_1 a_1 + k_2 a_2 + \cdots + k_n a_n

In order to find what these k_i's are, take the inner product of both sides of the above equation with a_i:

    \langle b, a_i \rangle = \langle k_1 a_1 + k_2 a_2 + \cdots + k_n a_n, a_i \rangle = k_1 \langle a_1, a_i \rangle + k_2 \langle a_2, a_i \rangle + \cdots + k_n \langle a_n, a_i \rangle

Recalling that S is an orthogonal basis, we have \langle a_i, a_j \rangle = 0 for i \neq j. Therefore we have

    \langle b, a_i \rangle = k_i \langle a_i, a_i \rangle

Also, because a_i has to be non-zero, \langle a_i, a_i \rangle > 0, and this gives

    k_i = \frac{\langle b, a_i \rangle}{\langle a_i, a_i \rangle}

The inner product of a_i with itself can be written as \langle a_i, a_i \rangle = \|a_i\|^2, and so

    k_i = \frac{\langle b, a_i \rangle}{\|a_i\|^2}
and we are done. To continue, let's suppose that H is indeed rectangular (n × m), and define its columns as c_1, c_2, ..., c_m. We define the new n × m matrix Q with columns composed of the orthonormal vectors computed from H, i.e.,

    H = [c_1, c_2, ..., c_m]   and   Q = [u_1, u_2, ..., u_m]

Because we calculated the u_i's to be orthogonal from the c_i's earlier, the c_i's must be in the span of the u_i's. Using this and the above theorem, we can write each column c_i of H as

    c_1 = \langle c_1, u_1 \rangle u_1 + \langle c_1, u_2 \rangle u_2 + \cdots + \langle c_1, u_m \rangle u_m
    c_2 = \langle c_2, u_1 \rangle u_1 + \langle c_2, u_2 \rangle u_2 + \cdots + \langle c_2, u_m \rangle u_m
    \vdots
    c_m = \langle c_m, u_1 \rangle u_1 + \langle c_m, u_2 \rangle u_2 + \cdots + \langle c_m, u_m \rangle u_m

Hopefully it is easy to see where we are going with this. Next, define R as

    R = \begin{bmatrix}
        \langle c_1, u_1 \rangle & \langle c_2, u_1 \rangle & \cdots & \langle c_m, u_1 \rangle \\
        \langle c_1, u_2 \rangle & \langle c_2, u_2 \rangle & \cdots & \langle c_m, u_2 \rangle \\
        \vdots                   & \vdots                   & \ddots & \vdots                   \\
        \langle c_1, u_m \rangle & \langle c_2, u_m \rangle & \cdots & \langle c_m, u_m \rangle
        \end{bmatrix}

Recalling that when the orthogonal matrix Q was derived, each u_k was constructed orthogonal to c_1, c_2, ..., c_{k-1} (u_3 is orthogonal to c_1 and c_2, etc.), all the inner products in R below the diagonal must be zero (\langle c_i, u_j \rangle = 0 for i < j), and we have an upper triangular matrix.
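This construction of Q and R can be checked with a small numerical sketch. The example below assumes NumPy and uses a random matrix as a stand-in for H; it is illustrative only, not part of the original write-up. Note that R[j, k] = \langle c_k, u_j \rangle is exactly Q^T H.

```python
import numpy as np

# Build Q by Gram-Schmidt on the columns of H, and R from the inner
# products R[j, k] = <c_k, u_j>; check that this reproduces H.
rng = np.random.default_rng(2)
H = rng.standard_normal((5, 3))            # rectangular, n x m

m = H.shape[1]
Q = np.zeros_like(H)
for k in range(m):                         # Gram-Schmidt on columns c_k
    u = H[:, k].copy()
    for j in range(k):
        u -= (H[:, k] @ Q[:, j]) * Q[:, j]
    Q[:, k] = u / np.linalg.norm(u)

R = Q.T @ H                                # R[j, k] = <c_k, u_j>
print(np.allclose(H, Q @ R))               # True
print(np.allclose(np.tril(R, -1), 0))      # below-diagonal entries vanish: True
```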

Remembering that Q = [u_1, u_2, ..., u_m], performing the product QR recovers H. To solve our linear system Hp = f, start from the normal equations and substitute H = QR:

    H^T H p = H^T f
    R^T Q^T Q R p = R^T Q^T f
    R^T R p = R^T Q^T f
    R p = Q^T f

(The third line follows because the columns of Q are orthonormal, so Q^T Q = I.) It can be shown that R is always non-singular, and we can now easily solve our system by back substitution.
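The solution procedure above can be sketched numerically. The following example assumes NumPy; H and f are random stand-ins for the FRF matrix and the measured force vector, not data from the SVF tests.

```python
import numpy as np

# Solve Hp = f by QR, following the derivation above.
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4))        # n x m, more outputs than inputs
f = rng.standard_normal(8)

Q, R = np.linalg.qr(H)                 # reduced QR: Q is 8x4, R is 4x4
p = np.linalg.solve(R, Q.T @ f)        # R p = Q^T f, R upper triangular

# Agrees with the least-squares solution of the normal equations:
p_ls, *_ = np.linalg.lstsq(H, f, rcond=None)
print(np.allclose(p, p_ls))            # True
```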

Implementation

In order to implement the QR decomposition, an algorithm must be programmed on a digital computer. From the theory introduced above, it is easy to see that the orthogonal vectors can be computed straightforwardly as

    u_k = v_k - \operatorname{proj}_{u_1} v_k - \operatorname{proj}_{u_2} v_k - \cdots - \operatorname{proj}_{u_{k-1}} v_k

and then normalized. When this process is performed on a computer, the vectors u_k are commonly not exactly orthogonal because of numerical roundoff errors. This roundoff error can cause the method to become numerically unstable. In order to reduce the errors from the finite precision of the computer, the orthogonal vectors can be computed in an iterative way, such as:

    u_k^{(1)} = v_k - \operatorname{proj}_{u_1} v_k
    u_k^{(2)} = u_k^{(1)} - \operatorname{proj}_{u_2} u_k^{(1)}
    \vdots
    u_k^{(k-2)} = u_k^{(k-3)} - \operatorname{proj}_{u_{k-2}} u_k^{(k-3)}
    u_k^{(k-1)} = u_k^{(k-2)} - \operatorname{proj}_{u_{k-1}} u_k^{(k-2)}
    u_k = u_k^{(k-1)}
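The two variants above are the classical and modified Gram-Schmidt iterations, and the stability difference is easy to demonstrate. The sketch below assumes NumPy and uses a deliberately ill-conditioned test matrix (a Läuchli matrix, not anything from the SVF work); the classical variant loses orthogonality by many orders of magnitude more than the modified one.

```python
import numpy as np

def cgs(A):
    """Classical Gram-Schmidt: each projection uses the ORIGINAL column."""
    n, m = A.shape
    Q = np.zeros((n, m))
    for k in range(m):
        u = A[:, k].copy()
        for j in range(k):
            u -= (A[:, k] @ Q[:, j]) * Q[:, j]
        Q[:, k] = u / np.linalg.norm(u)
    return Q

def mgs(A):
    """Modified Gram-Schmidt: each projection uses the PARTIALLY
    orthogonalized vector, as in the iteration above."""
    n, m = A.shape
    Q = np.zeros((n, m))
    for k in range(m):
        u = A[:, k].copy()
        for j in range(k):
            u -= (u @ Q[:, j]) * Q[:, j]
        Q[:, k] = u / np.linalg.norm(u)
    return Q

# Ill-conditioned test matrix: columns nearly parallel when eps is tiny.
eps = 1e-8
A = np.array([[1.0, 1.0, 1.0],
              [eps, 0.0, 0.0],
              [0.0, eps, 0.0],
              [0.0, 0.0, eps]])

for name, gs in [("classical", cgs), ("modified", mgs)]:
    Q = gs(A)
    print(name, np.linalg.norm(Q.T @ Q - np.eye(3)))  # orthogonality loss
```

On this matrix the classical loop's loss of orthogonality is of order one, while the modified loop stays near roundoff level, which is the instability the text describes.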

Another, even more computationally accurate way of finding the orthogonal vectors is the so-called Householder reflection. This method essentially takes a vector and reflects it about some plane through the use of the Householder reflection matrix:

    H = I - \frac{2 v v^T}{v^T v}

In linear algebra terms, the matrix

    P = \frac{v v^T}{v^T v}

is defined as an orthogonal projector with the range of P equal to the span of the vector v. Also,

    P = I - \frac{v v^T}{v^T v}

is an orthogonal projector with the range of P equal to the orthogonal complement of v. This is shown in Figure 1.

Figure 1.
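The reflector and the two projectors can be verified numerically. A minimal sketch assuming NumPy, with v and x chosen arbitrarily for illustration:

```python
import numpy as np

# Householder reflector H = I - 2 v v^T / (v^T v) and the projectors
# onto span(v) and its orthogonal complement, as defined above.
v = np.array([3.0, 4.0])
P_span = np.outer(v, v) / (v @ v)        # projects onto span(v)
P_perp = np.eye(2) - P_span              # projects onto the complement of v
H = np.eye(2) - 2 * np.outer(v, v) / (v @ v)

x = np.array([1.0, 0.0])
# x splits into its components along v and perpendicular to v:
print(np.allclose(P_span @ x + P_perp @ x, x))       # True
# The reflector flips the component along v:
print(np.allclose(H @ x, P_perp @ x - P_span @ x))   # True
# And H is orthogonal (reflections preserve length):
print(np.allclose(H.T @ H, np.eye(2)))               # True
```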
