Morphotronics and Immuno System
Zenon Chaczko
University of Technology Sydney, NSW, Australia
zenon@eng.uts.edu.au
Abstract
We assume a familiarity with the basic terminology of linear algebra and refer the reader to [Reference of SVD] for a more complete coverage. We restrict our attention to matrices of real numbers and refer the reader to [Reference] for a discussion of the SVD using complex numbers.
Using the superscript T to denote the transpose of a vector or matrix, we say two vectors x and y are orthogonal if

x^T y = 0.

In two- or three-dimensional space this simply means that the vectors are perpendicular. Let A be a square matrix such that its columns are mutually orthogonal vectors of length 1, i.e.

x^T x = 1

for each column x; such a matrix is called orthogonal.
Furthermore, it can be shown that there exist (non-unique) orthogonal matrices U and V such that

A = U Σ V^T,

where Σ is a diagonal matrix whose entries s_1 ≥ s_2 ≥ … ≥ 0 are arranged in non-increasing order. Henceforth we will assume the SVD has this property. The quantities s_i are called the singular values of A, and the columns of U and V are called the left and right singular vectors, respectively; a simple calculation of dot products will show them to be mutually orthogonal.
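As a minimal illustration (not from the original paper, the matrix A below is an arbitrary example), the decomposition and the orthogonality of the singular vectors can be checked numerically with numpy:

```python
import numpy as np

A = np.array([[3., 1.],
              [1., 3.],
              [0., 2.]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(s)                                    # singular values, in descending order
print(np.allclose(U.T @ U, np.eye(2)))      # columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(2)))    # columns of V are orthonormal
print(np.allclose(U @ np.diag(s) @ Vt, A))  # A = U Sigma V^T
```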
From the components of the SVD we can determine many properties of the original matrix. For example, the null space of A is spanned by the right singular vectors associated with zero singular values:

A x = 0

if and only if x is a linear combination of the columns of V whose singular values are zero.
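A short numpy sketch (our illustration, with an arbitrary rank-deficient matrix) shows how a null-space vector is read off from the right singular vectors:

```python
import numpy as np

# The third column is the sum of the first two, so A has rank 2.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])

U, s, Vt = np.linalg.svd(A)
print(s)          # the last singular value is (numerically) zero

x = Vt[2]         # right singular vector for the zero singular value
print(A @ x)      # approximately the zero vector: A x = 0
```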
Fundamental to linear algebra is the notion of rank. Numerous theorems begin with the condition "if matrix A is of full rank, then the following property holds". However, if the matrix is rank deficient (or nearly so), then small perturbations of the matrix values from round-off errors or fuzzy data will yield a matrix which is of full rank. Hence determining the rank of a matrix is non-trivial. The SVD lends us a practical definition of rank, namely the number of non-zero singular values, and also allows us to quantify the notion of near rank deficiency by counting only the singular values above a chosen tolerance.
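As a sketch of this practical definition, the helper below (our own, not from the paper) counts singular values above a tolerance; the default tolerance is the convention used by numpy-style rank routines:

```python
import numpy as np

def numerical_rank(A, tol=None):
    """Number of singular values of A above the tolerance tol."""
    s = np.linalg.svd(A, compute_uv=False)
    if tol is None:
        # Common default: machine epsilon scaled by the matrix size.
        tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
    return int(np.sum(s > tol))

A = np.array([[1., 2.],
              [2., 4.000001]])        # nearly rank deficient
print(numerical_rank(A))              # 2: exactly, A has full rank
print(numerical_rank(A, tol=1e-4))    # 1: near rank deficiency quantified
```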
Another use of the SVD provides a measure called the condition number, which is related to the measure of linear independence between the column vectors of the matrix. The condition number with respect to the Euclidean norm of a matrix A is

κ(A) = s_max / s_min,

where s_max and s_min are the largest and smallest singular values of A. Using the condition number we can quantify the independence of the columns of A: a large condition number indicates nearly dependent columns.
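Continuing the same illustrative example, the condition number follows directly from the singular values (np.linalg.cond computes the same quantity):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.000001]])
s = np.linalg.svd(A, compute_uv=False)

kappa = s.max() / s.min()     # condition number in the Euclidean norm
print(kappa)                  # very large: columns nearly dependent
print(np.isclose(kappa, np.linalg.cond(A, 2)))
```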
Numerous practical problems can be expressed in the language of linear algebra. A linear system involves a set of equations in N variables. Such a problem can be expressed in terms of a coefficient matrix A, a vector x of variables, and a vector b, such that a solution to the linear system

A x = b (1)

is an assignment of values to the vector x. Using the SVD of A we can determine if a solution exists, as well as the general form of the possible solutions x.
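As an illustrative sketch (the paper's worked example is not recoverable, so the small system below is our own), the SVD solves a non-singular system by inverting the singular values:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])
b = np.array([3., 5.])

# x = V Sigma^-1 U^T b, the SVD form of the solution of A x = b.
U, s, Vt = np.linalg.svd(A)
x = Vt.T @ ((U.T @ b) / s)

print(x)                      # the solution vector
print(np.allclose(A @ x, b))  # A x reproduces b
```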
Problems in signal processing often use linear models for signals. In ideal noise-free conditions the measurement data can be arranged in a matrix which is known to be rank deficient. By this we mean that the signal is assumed to lie in a proper subspace of Euclidean space. However, the presence of noise, either from rounding error or instrument error, results in a measurement matrix that is often of full rank. Usually the models assume that the error can be separated from the data, in that the noise component is that which lies in a subspace orthogonal to the signal subspace. For this reason the SVD is used to approximate the matrix, decomposing the data into an optimal estimate of the signal and the noise components.
Suppose A is the measurement matrix whose columns each consist of a signal component x and a noise component n:

A = [c_1 c_2 … c_m], where each c_i = x_i + n_i.
The vector x representing the signal is known to lie in a rank-k subspace, though the precise subspace is not known. Therefore let x = H c for a coefficient vector c and a matrix H whose columns are the basis vectors of some rank-k subspace. The least-squared error between A and the model is minimized by choosing H from the optimal rank-k approximation A_k to A: the k columns of U corresponding to the k largest singular values span the rank-k subspace H. The resulting error is

‖A − A_k‖_F^2 = s_{k+1}^2 + … + s_r^2,

the sum of the squares of the discarded singular values.
Using the SVD as above, we see that the original data matrix A is decomposed into the orthogonal components

A = A_k + (A − A_k),

where A_k = s_1 u_1 v_1^T + … + s_k u_k v_k^T estimates the signal, and A − A_k = s_{k+1} u_{k+1} v_{k+1}^T + … + s_r u_r v_r^T corresponds to the orthogonal subspace defining the noise components.
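A minimal numpy sketch of this signal/noise separation, assuming a synthetic rank-2 signal and small additive noise of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 signal matrix; the added noise makes A full rank.
signal = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
A = signal + 0.01 * rng.standard_normal((8, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # optimal rank-k approximation

print(np.linalg.norm(A - A_k))       # the discarded (noise) component is small
print(np.linalg.norm(A_k - signal))  # A_k is close to the underlying signal
```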
The SVD analysis of a matrix A gives an easy and uniform way of computing its inverse A^{-1}, whenever it exists, or its pseudoinverse A^+, otherwise. Namely, if

A = U Σ V^T,

then

A^+ = V Σ^+ U^T.

The singular values s_1, …, s_r are located on the diagonal of the m × n matrix Σ, and the reciprocals of the singular values, 1/s_1, …, 1/s_r, are on the diagonal of the n × m matrix Σ^+. We know that for the pseudoinverse we have

(A^+)^+ = A,
A^+ = A^{-1} whenever A is invertible,
A A^+ = U U^T,
A A^+ A = A, A^+ A A^+ = A^+.
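These identities are easy to verify numerically; the sketch below (an illustration, with an arbitrary full-column-rank A) builds A^+ from the SVD and checks the properties listed above:

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                       # 3 x 2, full column rank

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_plus = Vt.T @ np.diag(1.0 / s) @ U.T         # A+ = V Sigma+ U^T

print(np.allclose(A @ A_plus @ A, A))            # A A+ A = A
print(np.allclose(A_plus @ A @ A_plus, A_plus))  # A+ A A+ = A+
print(np.allclose(A @ A_plus, U @ U.T))          # A A+ = U U^T
print(np.allclose(A_plus, np.linalg.pinv(A)))    # agrees with numpy's pinv
```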
2. Morphotronic system
We define the morphotronic structure as a network of input and output q × p matrices, where q ≥ p. In figure 1 we show one morphotronic network.
Figure 1: A morphotronic network in which the input A is connected to the output B through the modules Z1, Z2, Z3, Z4, Z5.
The graph representation of one module of the morphotronic system is shown in figure 2.

Figure 2: A single module Z with input A and output B.
Formally, the input-output relation between the matrices in figure 2 is given by the expression

B = Z A (2)

where A and B are q × p matrices and Z is a q × q matrix.
Proposition 1:
When q > p, for the same data A and B we have many different values of Z for which B = Z A.
Proof:
For Z′ = Z + L^T we have

B = (Z + L^T) A = Z A + L^T A.

Since q > p, the columns of A span at most a p-dimensional subspace of the q-dimensional space, so there exist non-zero matrices L for which

A^T L = 0, L^T A = 0.

So the solutions are Z′ = Z + L^T for any such L, and for all of them

B = (Z + L^T) A = Z A.
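A numerical check of Proposition 1 (our own sketch, with arbitrary dimensions q = 4, p = 2): any L whose columns lie in the orthogonal complement of the column space of A yields a different Z with the same product B:

```python
import numpy as np

rng = np.random.default_rng(1)
q, p = 4, 2                            # q > p, as in Proposition 1
A = rng.standard_normal((q, p))
Z = rng.standard_normal((q, q))
B = Z @ A

# Columns of L come from the orthogonal complement of the columns of A,
# i.e. the last q - p left singular vectors, so that A^T L = 0.
U, s, Vt = np.linalg.svd(A)
L = U[:, p:] @ rng.standard_normal((q - p, q))

Z_alt = Z + L.T
print(np.allclose(A.T @ L, 0))         # A^T L = 0
print(np.allclose(Z_alt @ A, B))       # a different Z gives the same B
```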
Figure 3: A morphotronic loop between A and B, with forward operator Z1 and backward operator Z2.
Z1 = B (A^T A)^{-1} A^T,
Z2 = A (B^T B)^{-1} B^T,

and in the loop we have the projection operator

Q1 = Z1 Z2 = B (B^T B)^{-1} B^T, with Q1 B = B.

In the same loop we have another projection operator Q2, defined in this way:

Q2 = Z2 Z1 = A (A^T A)^{-1} A^T,

with Q2 A = A, and Q2 Q2 = Q2.
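The collapse of the loop into these projection operators can be checked numerically; this sketch (our own, with arbitrary data) verifies that Z2 Z1 equals the orthogonal projector onto the column space of A:

```python
import numpy as np

rng = np.random.default_rng(2)
q, p = 5, 2
A = rng.standard_normal((q, p))
B = rng.standard_normal((q, q)) @ A            # q x p output data

Z1 = B @ np.linalg.inv(A.T @ A) @ A.T          # Z1 A = B
Z2 = A @ np.linalg.inv(B.T @ B) @ B.T          # Z2 B = A

Q2 = Z2 @ Z1
P_A = A @ np.linalg.inv(A.T @ A) @ A.T         # projector onto span(A)
print(np.allclose(Q2, P_A))                    # Q2 = A (A^T A)^-1 A^T
print(np.allclose(Q2 @ A, A))                  # Q2 A = A
print(np.allclose(Q2 @ Q2, Q2))                # Q2 is idempotent
```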
Now we show, in a simple case, the geometric image of the projection operator Q2. For a three-dimensional space and a matrix A with the two columns A1 and A2, we have the picture below.

Figure: A point X = (x1, x2, x3) in the space with axes p1, p2, p3 is projected by Q2 onto the plane spanned by the columns A1 and A2 of A, giving Q2 X.
For

B = Z A

we also have the solution

Z = B (B^T A)^{-1} B^T,

since Z A = B (B^T A)^{-1} B^T A = B. In fact, for the loop in figure 3 we can also take

Z1 = B (B^T A)^{-1} B^T,
Z2 = A (A^T B)^{-1} A^T.

We then have the projection operator

Q = Z2 Z1 = A (B^T A)^{-1} B^T,

where Q A = A, and the projection operator

Q′ = Z1 Z2 = B (A^T B)^{-1} A^T,

where Q′ B = B.
In conclusion, for the same diagram in figure 3 we have four different types of projection operators:

A (A^T A)^{-1} A^T, B (B^T B)^{-1} B^T, A (B^T A)^{-1} B^T, B (A^T B)^{-1} A^T.

The first can be written as A A^+, where A^+ = (A^T A)^{-1} A^T is the pseudoinverse of A. Now, given a non-singular square matrix M, we have

A M ((A M)^T (A M))^{-1} (A M)^T = A (A^T A)^{-1} A^T,

so the projection operator is invariant under the change of A into A M. Now, when M = V Σ^{-1}, where V is the matrix of the eigenvectors of A^T A and Σ^{-1} is the diagonal matrix of the inverses of the square roots of the eigenvalues of A^T A, then by the well-known properties of the eigenvectors and eigenvalues we have

(A M)^T (A M) = M^T (A^T A) M = I.
For example, given a full column-rank matrix A, the construction above can be checked directly.
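Since the paper's original worked example is not recoverable, here is a stand-in numpy sketch with an arbitrary 5 × 3 matrix: M is built from the eigen-system of A^T A, and A M turns out to be the orthonormal factor U of the SVD:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))

# Eigen-decomposition of A^T A: eigenvalues are squared singular values.
evals, V = np.linalg.eigh(A.T @ A)
order = np.argsort(evals)[::-1]           # descending order
evals, V = evals[order], V[:, order]
Sigma = np.diag(np.sqrt(evals))

M = V @ np.linalg.inv(Sigma)              # M = V Sigma^-1
U = A @ M                                 # U = A M

print(np.allclose(U.T @ U, np.eye(3)))    # (A M)^T (A M) = I
print(np.allclose(U @ Sigma @ V.T, A))    # A = U Sigma V^T recovered
```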
In conclusion, with the projection operator and the transformation M, which we can find from the eigenvalues and eigenvectors of A^T A, we obtain the SVD of A. For a more general projection operator such as

Q = A (B^T A)^{-1} B^T, with Q A = A,

and A = R B, we have

Q = R B (B^T R B)^{-1} B^T.

With the change of B into B M the projection operator Q is invariant. Now we must choose M in such a way that

(B M)^T R (B M) = M^T (B^T R B) M = I.
Given the eigenvector matrix V of the matrix B^T R B and the diagonal matrix Σ of the square roots of its eigenvalues, we put M = V Σ^{-1}, so that

M^T (B^T R B) M = I,
M (B M)^T = M M^T B^T = (B^T R B)^{-1} B^T = B^+.

When we put B M = U, we have the traditional decomposition, or SVD, of the pseudoinverse of B in this way:

V Σ^{-1} U^T = M (B M)^T = M M^T B^T = (B^T R B)^{-1} B^T = B^+,
B^+ B = V Σ^{-1} U^T B = I.

We obtain

B = U Σ V^T,

that is, the SVD of B obtained by the morphotronic definition of the projection operator.
In conclusion, from V, Σ, U we can define the pseudoinverse of B and hence the projection operator; conversely, from the eigenvalues and eigenvectors of B^T R B we can obtain the V, Σ, U of the SVD. We have shown that the SVD can be written as a particular case of the morphotronic transformation and of the loop, or projection, operator.
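The general construction with a weighting matrix R can be sketched in the same way (our illustration; R is an arbitrary symmetric positive-definite matrix, which keeps B^T R B symmetric):

```python
import numpy as np

rng = np.random.default_rng(4)
q, p = 5, 2
B = rng.standard_normal((q, p))
R = rng.standard_normal((q, q))
R = R @ R.T + q * np.eye(q)                    # symmetric positive definite
A = R @ B

Q = A @ np.linalg.inv(B.T @ A) @ B.T           # Q = R B (B^T R B)^-1 B^T
print(np.allclose(Q @ A, A))                   # Q A = A

# Invariance of Q under the change B -> B M, M non-singular.
Mr = rng.standard_normal((p, p))
BM = B @ Mr
Q_m = (R @ BM) @ np.linalg.inv(BM.T @ R @ BM) @ BM.T
print(np.allclose(Q, Q_m))

# M from the eigen-system of B^T R B gives (B M)^T R (B M) = I
# and recovers B = U Sigma V^T.
evals, V = np.linalg.eigh(B.T @ R @ B)
M = V @ np.diag(1.0 / np.sqrt(evals))
U = B @ M
print(np.allclose(U.T @ R @ U, np.eye(p)))
print(np.allclose(U @ np.diag(np.sqrt(evals)) @ V.T, B))
```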
Definition 1.
The norm of a cell p = (p_1, …, p_q) in the q-dimensional space is

‖p‖ = max{|p_1|, …, |p_q|}.
Definition 2
The distance between cells u and v is the max-norm distance

d(u, v) = ‖u − v‖ = max{|u_1 − v_1|, …, |u_q − v_q|}.
Definition 3
Definition 4
The formal immune network (FIN) is a set of cells W ⊇ W0, where W0 is a set of cells that form the "innate immunity" controlled by the cytokine.
When a new cell is the most affine to a cell of W but lies beyond the recognition threshold, in Immuno Computing we add the new cell to W.
Definition 5
We define as "epitope" any point P of the q-dimensional space of the cells. In this space we have the immune cells W, but we also have all the possible cells, including the "antigens".
Given the matrix W of the immune cells (the FIN, or formal immune network), whose columns are the cells of W, and given the matrix X of the training vectors (the training antigens), we can project the training vectors into the space of the immune cells, or FIN, by the projection operator

Q = W (W^T W)^{-1} W^T, with Q W = W,

so that the projected training set is Q X.
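A minimal sketch of this projection (our own toy data; W holds hypothetical immune cells as columns, X hypothetical training antigens):

```python
import numpy as np

rng = np.random.default_rng(5)
q, k, n = 3, 2, 4
W = rng.standard_normal((q, k))          # immune cells (FIN) as columns
X = rng.standard_normal((q, n))          # training antigens as columns

Q = W @ np.linalg.inv(W.T @ W) @ W.T     # projector onto the FIN subspace
X_proj = Q @ X                           # projected training vectors

print(np.allclose(Q @ W, W))             # Q W = W
print(np.allclose(Q @ X_proj, X_proj))   # projections stay in the subspace
```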
For three dimensions of the q-space we show the points of the projections below.

Figure: In the space with axes p1, p2, p3, the training vectors X1, X2, X3 and a new cell Z are projected onto the plane spanned by the immune cells V1 and V2.
Given the training matrix X of samples, we project the samples into the space of the immune cells, or FIN, and in this way we obtain the learning process for the immune system. When we finish the learning process we have the mature immune system. Now, given a new cell Z, we project this cell into the immune system and we check which training projection, or immune cell, is the most affine to Z. The immune cell that is affine to the projection of Z activates the whole immune system to destroy the antigen Z.
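The whole recognition loop can be sketched end to end (again a toy illustration with synthetic cells; the max-norm affinity follows Definitions 1 and 2 as reconstructed above):

```python
import numpy as np

rng = np.random.default_rng(6)
q, k, n = 3, 2, 4
W = rng.standard_normal((q, k))                  # mature immune cells
X = rng.standard_normal((q, n))                  # training antigens
Q = W @ np.linalg.inv(W.T @ W) @ W.T             # FIN projection operator

X_proj = Q @ X                                   # learned projections

z = X[:, 1] + 0.05 * rng.standard_normal(q)      # new cell near antigen 1
z_proj = Q @ z

# Most affine training projection under the max-norm distance.
dists = np.max(np.abs(X_proj - z_proj[:, None]), axis=0)
print(np.argmin(dists))                          # 1: antigen 1 is recognized
```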
6. Conclusion
In this paper we have shown that Immuno Computing (IC), created by Tarakanov et al. (2003), is a special application of a more general theory, denoted morphotronic, that includes all the properties of the SVD.