

UNIT 9
TENSORS
Structure

9.1 Introduction
    Expected Learning Outcomes
9.2 Tensors in Physics
9.3 Tensor Algebra
    Tensors of Rank 1
    Tensors of Rank 2
    Inner Product and Metric Tensor
    Higher Rank Tensors
    Special Tensors δ_ij and ε_ijk
    Contractions
9.4 Tensor Product
9.5 Vector and Tensor Fields
9.6 Summary
9.7 Solutions and Answers

9.1 INTRODUCTION
So far, you have studied vectors and vector spaces. In this last unit of the
block, we introduce tensors. We begin the discussion with a few examples of
tensors in physics. Then we discuss tensor algebra, the tensor product, and
vector and tensor fields.

Expected Learning Outcomes


After studying this unit, you should be able to:

- define a tensor, and contravariant and covariant tensors;
- give examples of tensors in physics;
- determine whether a physical quantity transforms like a tensor;
- define the metric tensor and state symmetry properties of a tensor;
- perform contraction of tensors; and
- define and determine tensor products.

9.2 TENSORS IN PHYSICS


Tensors appear whenever we deal with a vector space. Sometimes we do not even
realize that we are actually using a tensor. One of the simplest examples of
tensors is the following:
Block 2 Vector Spaces, Matrices and Tensors
Example 9.1

Suppose x^1, x^2, x^3 are the components of the position vector r of a
particle. These components are three numbers which depend on our choice of
the three Cartesian coordinate axes. If we were to choose some other
orientation of the three axes (but with the same fixed origin), these numbers
would change to some others, say x′^1, x′^2, x′^3.

Now, the square of the distance of the particle from the origin is given by:

d^2 = (x^1)^2 + (x^2)^2 + (x^3)^2

This distance does not change by merely changing the orientation of the axes.
Therefore,

d^2 = (x^1)^2 + (x^2)^2 + (x^3)^2 = (x′^1)^2 + (x′^2)^2 + (x′^3)^2

This equation is also written as:

d^2 = Σ_ij δ_ij x^i x^j = Σ_ij δ_ij x′^i x′^j

where, as you know from UG physics, δ_ij is the Kronecker delta, which is
equal to 1 for i = j and zero otherwise. The Kronecker delta is an example of
a second rank tensor with two indices. The invariant distance is a bilinear
function of the components x^i, as we shall explain in the next section.
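The invariance in Example 9.1 can be checked numerically. A minimal Python sketch (the components and the rotation angle are arbitrary choices, not from the text):

```python
import math

def kronecker(i, j):
    # delta_ij: 1 for i = j, 0 otherwise
    return 1.0 if i == j else 0.0

def distance_sq(x):
    # d^2 = sum_ij delta_ij x^i x^j
    return sum(kronecker(i, j) * x[i] * x[j]
               for i in range(3) for j in range(3))

def rotate_z(x, angle):
    """Components of the same vector in axes turned by `angle` about the 3-axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [c * x[0] + s * x[1], -s * x[0] + c * x[1], x[2]]

x = [1.0, 2.0, 3.0]
xp = rotate_z(x, 0.7)        # components in the rotated basis
assert abs(distance_sq(x) - distance_sq(xp)) < 1e-12
```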

Example 9.2
Another familiar example of tensors in physics is the cross product of vectors.
The angular momentum of a particle can be written as:
J = r × p,

or

J^1 = x^2 p^3 − x^3 p^2
J^2 = x^3 p^1 − x^1 p^3
J^3 = x^1 p^2 − x^2 p^1

We can write the above equations as:

J^i = Σ_{j,k=1}^{3} ε_ijk x^j p^k

Here ε_ijk is a three-indexed quantity which is completely antisymmetric in
the three indices. Remember that an indexed quantity is called antisymmetric
in two indices if its sign changes when the two indices are interchanged. If
the two indices are equal, then the component is zero.

ε_ijk is defined so that ε_123 = 1. For any permutation of the indices
1, 2, 3, ε_ijk = +1 or −1 depending on whether the permutation is even or odd
(see Unit 7). Thus,

ε_123 = ε_231 = ε_312 = 1,  ε_132 = ε_321 = ε_213 = −1,

and all other components, which contain a repeated index, are zero.
ε_ijk is an example of a tensor of rank 3. Here again, the components of the
vector J depend bilinearly on the vectors r and p, as seen from
J^i = Σ_{j,k} ε_ijk x^j p^k.
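The cross product built from ε_ijk can be sketched in a few lines of Python; the function names are my own, and the sign-counting formula for ε is one standard trick:

```python
def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 on a repeated index
    return (i - j) * (j - k) * (k - i) // 2

def angular_momentum(r, p):
    # J^i = sum_{j,k} epsilon_ijk r^j p^k  (indices 0, 1, 2 here)
    return [sum(levi_civita(i, j, k) * r[j] * p[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

r = [1.0, 0.0, 0.0]
p = [0.0, 2.0, 0.0]
assert angular_momentum(r, p) == [0.0, 0.0, 2.0]   # r x p along the 3-axis
```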

Example 9.3
A rigid body is made of many point particles of masses m_a, a = 1, 2, ..., N,
such that their distances with respect to each other do not change. For such
a body,

a rotation about an axis is specified by a common angular velocity ω with
components (ω^1, ω^2, ω^3). As you know from UG physics, the direction of ω
is the direction of the axis of rotation, such that the rotation is in the
right-handed screw sense, and the magnitude |ω| is equal to the angle by
which the body rotates per unit time.
All particles move in circles in planes perpendicular to the axis of
rotation. The velocity of the particle a at position r_a is given by:

v_a = ω × r_a

and the angular momentum of the body is:

J = Σ_a r_a × p_a = Σ_a m_a r_a × (ω × r_a)

We can write the components of J as:

J^i = Σ_j I_ij ω^j

where

I_ij = Σ_a m_a Σ_{k,l} [δ_ij δ_kl − δ_ik δ_jl] x_a^k x_a^l

The I_ij are called the components of the moment of inertia tensor, or the
moment of inertia matrix. The components, written out fully, are as follows:
I_11 = Σ_a m_a [(x_a^2)^2 + (x_a^3)^2]

I_22 = Σ_a m_a [(x_a^3)^2 + (x_a^1)^2]

I_33 = Σ_a m_a [(x_a^1)^2 + (x_a^2)^2]

I_23 = −Σ_a m_a x_a^2 x_a^3 = I_32

I_31 = −Σ_a m_a x_a^3 x_a^1 = I_13

I_12 = −Σ_a m_a x_a^1 x_a^2 = I_21

Notice that the diagonal components are positive, but the off-diagonal
components can have either sign. Also, (x_a^2)^2 + (x_a^3)^2 is the square of
the distance of the particle a from the 1-axis. Thus, I_11 is the sum of the
masses of the particles multiplied by the squares of their distances from the
1-axis; similarly for I_22 and I_33.
If the body is mirror-symmetric about the 1-axis then, if there is a mass at
(x^1, x^2, x^3), there is an exactly equal mass at (−x^1, x^2, x^3). When
summed over all particles, these terms cancel, and I_12 = I_21 = 0 and
I_13 = I_31 = 0. If the body is mirror symmetric about one of the other axes
as well, then all the off-diagonal components vanish.
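A minimal numerical sketch of the moment of inertia tensor, using invented particle data that are mirror-symmetric about the 1-axis, so that the off-diagonal components involving index 1 cancel as argued above:

```python
def inertia_tensor(masses, positions):
    # I_ij = sum_a m_a [ delta_ij |r_a|^2 - x_a^i x_a^j ]
    I = [[0.0] * 3 for _ in range(3)]
    for m, x in zip(masses, positions):
        r2 = sum(c * c for c in x)
        for i in range(3):
            for j in range(3):
                I[i][j] += m * ((r2 if i == j else 0.0) - x[i] * x[j])
    return I

# two equal masses mirror-symmetric about the 1-axis (invented data)
masses = [1.0, 1.0]
positions = [[2.0, 1.0, 0.5], [-2.0, 1.0, 0.5]]
I = inertia_tensor(masses, positions)

assert I[0][1] == 0.0 and I[0][2] == 0.0   # killed by the mirror symmetry
assert I[1][2] == I[2][1]                  # the tensor is symmetric
```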

Example 9.4
If x is a single variable, then a relation like

y = ax + b

is called a linear relation. If b = 0, it is called a linear homogeneous
relation.


If there are, say, two variables x^1, x^2, then a set of linear homogeneous
relations is:

y^1 = a^1_1 x^1 + a^1_2 x^2

y^2 = a^2_1 x^1 + a^2_2 x^2

which can be written briefly as follows:

y^i = Σ_j a^i_j x^j

Here a^i_j, i, j = 1, 2 are coefficients which define the relation. We have
purposely written the coefficients with one index up and the other down, in
order to conform with the standard notation.
But if x^1, x^2 and y^1, y^2 are components of vectors x and y, respectively,
in a two-dimensional space, then we expect the relation between the vectors
x and y to hold irrespective of the basis used.
Therefore, if we change the basis as follows (see Unit 6):

e′_i = Σ_j S_i^j e_j

the components of x change from x^1, x^2 to x′^1, x′^2 (and similarly y^1,
y^2 to y′^1, y′^2) as:

x′^i = Σ_j (S^{-1T})^i_j x^j,   y′^i = Σ_j (S^{-1T})^i_j y^j

Then,

y′^i = Σ_j (S^{-1T})^i_j y^j = Σ_{j,k} (S^{-1T})^i_j a^j_k x^k
     = Σ_{j,k,l} (S^{-1T})^i_j a^j_k S_l^k x′^l

and this should be equal to the new coefficients connecting the y′s to the
x′s:

y′^i = Σ_l a′^i_l x′^l

This shows us how the coefficients a^i_j (which relate the vectors x and y)
must themselves transform:

a′^i_l = Σ_{j,k} (S^{-1T})^i_j a^j_k (S^T)^k_l = Σ_{j,k} (S^{-1T})^i_j S_l^k a^j_k
Here the two-index quantities a^j_k are called components of a mixed tensor
of rank 2. Note that the tensor components in the two bases themselves obey a
linear homogeneous relation, one index transforming with S^{-1T} while the
other transforms with S. In the language of tensors, we say that if the basis
e_i of the vector space is changed with a matrix S, then the index i in
a^i_j, which transforms with the inverse-transpose of S, is said to transform
contravariantly (meaning in the opposite direction), while the index j, which
transforms by S, does so covariantly, that is, in the same direction.
Tensors are quantities which connect components of vectors in linear,
bilinear or multilinear fashion, independently of change of basis in the
vector space to which these vectors belong.
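The transformation law of Example 9.4 is easy to verify numerically. In matrix form it reads a′ = S^{-1T} a S^T; the 2×2 matrices below are arbitrary illustrative choices:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inverse2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

S = [[2.0, 1.0], [0.0, 3.0]]      # invented change-of-basis matrix
a = [[1.0, 4.0], [-2.0, 0.5]]     # invented mixed tensor a^i_j
x = [1.0, 2.0]

S_invT = transpose(inverse2(S))
y = matvec(a, x)                  # y = a x in the old basis

xp = matvec(S_invT, x)            # components transform contravariantly
yp = matvec(S_invT, y)
ap = matmul(matmul(S_invT, a), transpose(S))   # mixed-tensor law

# the relation y' = a' x' holds in the new basis
assert all(abs(yp[i] - matvec(ap, xp)[i]) < 1e-12 for i in range(2))
```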

Having learnt a few examples of tensors, you can now begin the study of
tensors systematically.
There is always a vector space to start with. In the simple cases of
interest in physics only one vector space is involved.

9.3 TENSOR ALGEBRA


Let us begin by defining contravariant and covariant vectors which are tensors
of rank 1. A scalar is a tensor of rank zero.

9.3.1 Tensors of Rank 1


We now define contravariant and covariant vectors, which are tensors of
rank 1.
Contravariant Vectors
In the given vector space, once a basis is chosen, every vector v is
equivalent to a set of numbers v^i, which are its components in that basis:

v = Σ_i v^i e_i

You have learnt in Unit 6 that when a different basis is chosen, the
components of the same vector in the other basis are related to the
components in the first basis by the inverse-transpose matrix. Thus the
transformation of the components goes in the opposite direction to that of
the basis. This is why the components of a vector are said to transform
contravariantly.
Covariant Vectors
Next we consider linear functions on the vector space V. A linear function φ
defined on a vector space V assigns to each vector v ∈ V a real number φ(v).

As an example, in three dimensions we can define, for any vector r with
components (x, y, z), the following functions on it:

φ(r) = |r|

ψ(r) = x^2

χ(r) = z
If the mapping is such that, for every v, u in V and any real number a,

φ(v + u) = φ(v) + φ(u)  and  φ(av) = aφ(v)

then we call φ a linear function.

In the examples above, only the function χ is linear.

The advantage of a linear function on V is that it is not essential to give
the value of φ on all vectors of V; it is sufficient to give its values on
just the basis vectors. Let us define (the use of a lower index is part of
the standard notation):

a_i = φ(e_i)

Then, provided v = Σ_i v^i e_i, the value of φ on v, using the linear nature
of φ, is

φ(v) = φ(Σ_i v^i e_i) = Σ_i v^i φ(e_i) = Σ_i a_i v^i

Under a change of basis,

e′_i = Σ_j S_i^j e_j,

the quantities analogous to the a_i in the new basis are:

a′_i = φ(e′_i) = φ(Σ_j S_i^j e_j) = Σ_j S_i^j φ(e_j) = Σ_j S_i^j a_j

which shows that they transform covariantly. These numbers are called the
components of a covariant vector or components of a covariant tensor
of rank 1.
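A small Python sketch of this covariant transformation law, using the linear function χ(r) = z from the examples above and an invented change-of-basis matrix S:

```python
def phi(v):
    # the linear function chi(r) = z from the text
    return v[2]

e = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # old basis
S = [[1.0, 2.0, 0.0], [0.0, 1.0, 0.0], [3.0, 0.0, 1.0]]   # invented S

# new basis vectors e'_i = sum_j S_i^j e_j
e_new = [[sum(S[i][j] * e[j][k] for j in range(3)) for k in range(3)]
         for i in range(3)]

a = [phi(ei) for ei in e]            # covariant components, old basis
a_new = [phi(ei) for ei in e_new]    # components in the new basis

# the same numbers from the covariant law a'_i = sum_j S_i^j a_j
a_transformed = [sum(S[i][j] * a[j] for j in range(3)) for i in range(3)]
assert a_new == a_transformed
```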
The set of all linear functions on the space V forms a vector space by
itself, as follows. Define the addition of φ and ψ by the new function φ + ψ
acting on any vector v ∈ V such that

(φ + ψ)(v) = φ(v) + ψ(v)

Similarly, define multiplication by a real number a as:

(aφ)(v) = aφ(v)
With these definitions, the set V* of all linear functions on V becomes a
vector space whose zero vector is the linear function which assigns the
number zero to each vector of V. The space V* is called the vector space dual
to V. Since each function φ is specified by n numbers a_i, the dimension of
this space, like that of V, is n.

It is important to remember that a vector itself does not change; only the
components of a fixed vector change when we calculate them with respect to
another basis.
Notation:
Notice the use of superscript, or upper index, for components of vectors in V
and subscript, or lower index, for components of those in V*. This is the
standard convention of classical tensor algebra and analysis adopted by
physicists.
9.3.2 Tensors of Rank 2

We now define covariant and contravariant tensors of rank 2. But first, some
preliminaries. Just as the set of linear functions on a vector space V forms
the vector space V*, the set of all bilinear functions which map a pair of
vectors of V into real numbers forms an n^2-dimensional vector space.
A bilinear function t is a mapping which assigns to each pair of vectors
v, w ∈ V a real number t(v, w) with the following properties:

t(u + v, w) = t(u, w) + t(v, w),  t(av, w) = a t(v, w)

t(u, v + w) = t(u, v) + t(u, w),  t(v, aw) = a t(v, w)

where a is a real number.


Covariant Tensors of Rank 2

Again, as in the case of linear functions, the bilinear nature of t allows us
to find its value on any pair of vectors v, w ∈ V, provided we know its value
on all pairs of basis vectors. Let

t_ij = t(e_i, e_j)

Then

t(v, w) = t(Σ_i v^i e_i, Σ_j w^j e_j) = Σ_ij v^i w^j t(e_i, e_j) = Σ_ij t_ij v^i w^j

You can see how the t_ij transform under a change of basis:

t′_ij = t(e′_i, e′_j) = t(Σ_k S_i^k e_k, Σ_l S_j^l e_l)
      = Σ_{k,l} S_i^k S_j^l t(e_k, e_l) = Σ_{k,l} S_i^k S_j^l t_kl
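A quick numerical sketch: evaluating a bilinear function through its components and transforming them covariantly (the component values are invented):

```python
def bilinear(t, v, w):
    # t(v, w) = sum_ij t_ij v^i w^j
    n = len(t)
    return sum(t[i][j] * v[i] * w[j] for i in range(n) for j in range(n))

def transform_covariant2(t, S):
    # t'_ij = sum_kl S_i^k S_j^l t_kl
    n = len(t)
    return [[sum(S[i][k] * S[j][l] * t[k][l]
                 for k in range(n) for l in range(n))
             for j in range(n)] for i in range(n)]

t = [[1.0, 2.0], [0.0, -1.0]]     # invented components t_ij
S = [[1.0, 1.0], [0.0, 2.0]]      # invented change-of-basis matrix

tp = transform_covariant2(t, S)
# t'(e'_i, e'_j) equals t evaluated on the rows of S, by construction
assert tp[0][0] == bilinear(t, S[0], S[0])
```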

All bilinear functions like t form a vector space, because if t and s are
both bilinear functions, then by defining the sum t + s and multiplication by
a real number a as:

(t + s)(v, w) = t(v, w) + s(v, w),  (at)(v, w) = a[t(v, w)]

we get a bilinear function again. This space is n^2-dimensional because the
number of components of a general t_ij is n^2.

Contravariant Tensors of Rank 2

Just as second rank covariant tensors are bilinear functions of a pair of
vectors v and w, second rank contravariant tensors are bilinear functions of
a pair of covariant vectors, say φ and ψ. Let φ have components a_i and ψ
have components b_i; then a general bilinear map or function T can be written
as

T(φ, ψ) = Σ_ij T^ij a_i b_j

We have written the coefficients of the bilinear relation with upper indices.
We will see below that these indices correspond to the contravariant
components of a tensor.
Under a change of basis, T (, ) should not change. Therefore, we require
that
T ( ,  )  T  ab  T  S
i, j
ij
i j
i, j
ij
i
k l
S j ak bl  T
k ,l
kl
ak bl

Therefore,

T  S
i, j
ij
i
k l
S j  T kl

Applying the inverse matrices (S^{-1})_k^m and (S^{-1})_l^n on both sides and
summing over k, l, we get

Σ_{i,j,k,l} T′^ij S_i^k (S^{-1})_k^m S_j^l (S^{-1})_l^n
    = Σ_{k,l} T^kl (S^{-1})_k^m (S^{-1})_l^n

The left-hand side simplifies (after cancelling S with S^{-1}) to T′^mn:

T′^mn = Σ_{k,l} T^kl (S^{-1})_k^m (S^{-1})_l^n
      = Σ_{k,l} (S^{-1T})^m_k (S^{-1T})^n_l T^kl

This shows that the two indices in the components T^kl do indeed transform
contravariantly.
We now discuss inner product and metric tensor.
9.3.3 Inner Product and Metric Tensor
We have defined an inner product in a vector space V (Unit 6) as a bilinear
function ⟨v, w⟩, v, w ∈ V. Therefore, every space with an inner product is
equipped with a second rank covariant tensor, called the metric tensor, with
components:

g_ij = ⟨e_i, e_j⟩

where e i  is a basis in V. Note that by definition the inner product is


symmetric v, w  w, v . Therefore,

g ij  g ji

In physics, we deal with spaces in which the metric is non-degenerate, which
means that the matrix G = [g_ij] has non-zero determinant. For such a metric,
the inverse matrix is written with the same symbol g but with upper indices,
as follows:

G^{-1} = [g^ij] = the inverse of the matrix G = [g_ij]

The inverse of a symmetric matrix is also symmetric. Therefore,

g^ij = g^ji
We see below that the g^ij are components of a contravariant tensor of
rank 2. We start with the fact that in all bases, the definition of the
inverse matrix is the same:

G^{-1}G = 1 = (G′)^{-1}G′,  or  Σ_j g^ij g_jk = δ^i_k = Σ_j g′^ij g′_jk

Under a change of basis,

g′_jk = Σ_{m,n} S_j^m S_k^n g_mn

Then, the transformed relation

Σ_j g′^ij g′_jk = Σ_{j,m,n} g′^ij S_j^m S_k^n g_mn = δ^i_k

as a matrix equation is:

(G′)^{-1} S G S^T = 1.

Applying on the right the matrices with inverses needed to isolate (G′)^{-1}
on the left, we have:

(G′)^{-1} = S^{-1T} G^{-1} S^{-1}

or, in terms of matrix elements:

g′^im = Σ_{k,l} (S^{-1T})^i_k g^kl (S^{-1})_l^m = Σ_{k,l} (S^{-1T})^i_k (S^{-1T})^m_l g^kl

This shows that the inverse metric is indeed a contravariant tensor of
rank 2.
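This chain of matrix identities can be checked numerically. A sketch with an invented symmetric 2×2 metric G and change-of-basis matrix S:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

G = [[2.0, 1.0], [1.0, 3.0]]      # symmetric, det = 5, non-degenerate
S = [[1.0, 2.0], [0.0, 1.0]]      # invented change of basis

# metric transforms covariantly: G' = S G S^T
Gp = matmul(matmul(S, G), transpose(S))

# its inverse transforms contravariantly: (G')^{-1} = S^{-1T} G^{-1} S^{-1}
lhs = inv2(Gp)
rhs = matmul(matmul(transpose(inv2(S)), inv2(G)), inv2(S))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```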
9.3.4 Higher Rank Tensors

Higher rank tensor components transform similarly, with a matrix S for each
covariant index and S^{-1T} for each contravariant index. For example, for a
sixth rank tensor with three contravariant and three covariant indices:

T′^ijk_lmn = Σ_{p,q,r,s,t,u} (S^{-1T})^i_p (S^{-1T})^j_q (S^{-1T})^k_r S_l^s S_m^t S_n^u T^pqr_stu
Symmetry properties

Because of the tensor transformation formulas given above, components which
are symmetric or antisymmetric in two indices remain symmetric or
antisymmetric in the indices in the same position. For example, if

T^ijk_lmn = −T^jik_lmn

then

T′^jik_lmn = Σ_{p,q,r,s,t,u} (S^{-1T})^j_p (S^{-1T})^i_q (S^{-1T})^k_r S_l^s S_m^t S_n^u T^pqr_stu
           = Σ_{p,q,r,s,t,u} (S^{-1T})^i_q (S^{-1T})^j_p (S^{-1T})^k_r S_l^s S_m^t S_n^u T^pqr_stu

If we interchange the names of the indices p and q, the sum does not change,
because both indices are summed over the full range of their values. Then,
using T^qpr_stu = −T^pqr_stu, the right-hand side becomes:

T′^jik_lmn = Σ_{p,q,r,s,t,u} (S^{-1T})^i_p (S^{-1T})^j_q (S^{-1T})^k_r S_l^s S_m^t S_n^u T^qpr_stu
           = −Σ_{p,q,r,s,t,u} (S^{-1T})^i_p (S^{-1T})^j_q (S^{-1T})^k_r S_l^s S_m^t S_n^u T^pqr_stu
           = −T′^ijk_lmn

9.3.5 Special Tensors δ_ij and ε_ijk

In a vector space we can define infinitely many inner products. All we need
to do is to choose a basis {e_i}, find a symmetric matrix g_ij whose
determinant is not zero, and simply define

g_ij = ⟨e_i, e_j⟩

The components of the metric tensor in all other bases will then be
automatically fixed by the transformation formula. This is the most general
situation. However, sometimes we wish to restrict ourselves not to all
possible bases, but to a subset of them. As we have seen in Unit 6, for an
orthonormal basis {n_i} in a vector space,

⟨n_i, n_j⟩ = 0  if i ≠ j,
⟨n_i, n_i⟩ = η_i = +1 or −1

When all η_i = +1, we can write the metric components as the Kronecker delta:

⟨n_i, n_j⟩ = δ_ij

The Kronecker delta is an invariant tensor, in the sense that the values of
its components do not change when we transform from one orthonormal basis
{n_i} to another, say {n′_i}.
Of course, the matrix connecting two orthonormal bases will have to satisfy
some conditions. If

n′_i = Σ_k S_i^k n_k,

then

δ_ij = ⟨n′_i, n′_j⟩ = Σ_{k,l} S_i^k S_j^l ⟨n_k, n_l⟩ = Σ_{k,l} S_i^k S_j^l δ_kl = (S S^T)_ij

This shows that the matrices connecting two orthonormal bases must be
orthogonal matrices:

S S^T = 1

As we have seen in Unit 8, det S = ±1. If we are using orthonormal bases with
the same orientation, that is, all bases are either right-handed or all
left-handed, then det S = 1.
The completely antisymmetric tensor of third rank, with components ε_ijk in a
three-dimensional orthonormal basis, is also invariant. Antisymmetry makes
any component with two equal indices zero, so in three dimensions the only
non-zero components are those in which i, j, k take the values 1, 2, 3 in
some permutation. In one orthonormal basis we define ε_123 = 1; then, by
antisymmetry, the other components are:

ε_231 = ε_312 = 1,  ε_132 = ε_321 = ε_213 = −1

The important point is that if we change to another orthonormal basis with
the same orientation, and S is the matrix connecting the two bases, then

ε′_123 = Σ_{l,m,n} S_1^l S_2^m S_3^n ε_lmn = det S = 1

This shows that ε is an invariant tensor which has the same values in all
orthonormal bases of the same orientation.
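The identity Σ S_1^l S_2^m S_3^n ε_lmn = det S behind this invariance can be checked directly; the rotation below is an arbitrary illustrative choice:

```python
import math

def levi_civita(i, j, k):
    # +1 for even permutations of (0, 1, 2), -1 for odd, 0 on repeats
    return (i - j) * (j - k) * (k - i) // 2

def eps_contraction(S):
    # sum_{l,m,n} S_1^l S_2^m S_3^n eps_lmn -- the Leibniz formula for det S
    return sum(S[0][l] * S[1][m] * S[2][n] * levi_civita(l, m, n)
               for l in range(3) for m in range(3) for n in range(3))

# a proper rotation about the 3-axis, so det S = 1
a = 0.4
c, s = math.cos(a), math.sin(a)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
assert abs(eps_contraction(R) - 1.0) < 1e-12
```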
Raising and lowering indices

If a vector space has an inner product, then the metric tensor g_ij and its
counterpart g^ij can be used for converting a contravariant vector into a
covariant one, and vice versa. Suppose v^i are the components of a vector;
then

a_i = Σ_j g_ij v^j

transform covariantly. Similarly, if a_i are the components of a covariant
vector, then

v^i = Σ_j g^ij a_j

transform contravariantly.

SAQ 1

i) Prove that a_i = Σ_j g_ij v^j transform covariantly, where g_ij are the
components of the metric tensor of rank 2 and v^i are the components of a
contravariant vector.

ii) Prove that v^i = Σ_j g^ij a_j transform contravariantly.

This process is called lowering or raising the indices by g_ij or g^ij,
respectively. It can also be used, in a similar manner, for higher rank
tensors with several indices.
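A minimal sketch of lowering and then raising an index with an invented non-degenerate symmetric metric:

```python
def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

g = [[2.0, 1.0], [1.0, 3.0]]       # g_ij (invented, det = 5)
g_inv = inv2(g)                    # g^ij

v = [1.0, -2.0]
# lower the index: a_i = sum_j g_ij v^j
a = [sum(g[i][j] * v[j] for j in range(2)) for i in range(2)]
# raise it back: v^i = sum_j g^ij a_j
v_back = [sum(g_inv[i][j] * a[j] for j in range(2)) for i in range(2)]

assert all(abs(v[i] - v_back[i]) < 1e-12 for i in range(2))
```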
9.3.6 Contraction

In a mixed tensor, that is, a tensor whose components have both contravariant
and covariant indices, if one contravariant index is set equal to one
covariant index and the pair is summed over the whole range, we obtain a
tensor with two indices fewer. This process is called contraction. Let, for
example,

T^ijk_lmn

be a mixed tensor of rank 6. Suppose we contract the index i with the
index m; then we obtain a tensor of rank 4 with components:

Q^jk_ln = Σ_i T^ijk_lin
SAQ 2
Prove that Q jkln transform as components of a mixed tensor of rank 4.
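A lower-rank sketch of contraction: for a mixed tensor T^i_jk of type (1, 2), contracting the upper index with the first lower index gives Q_k = Σ_i T^i_ik. The component values are invented:

```python
n = 2
# an invented mixed tensor T[i][j][k] of type (1, 2)
T = [[[float(i + 2 * j - k) for k in range(n)] for j in range(n)]
     for i in range(n)]

# contraction over the upper index and the first lower index
Q = [sum(T[i][i][k] for i in range(n)) for k in range(n)]

assert Q == [T[0][0][0] + T[1][1][0], T[0][0][1] + T[1][1][1]]
```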

9.4 TENSOR PRODUCT

We have seen that the linear functions on a vector space give rise to
covariant vectors, and the bilinear functions give rise to second rank
covariant tensors, which form a vector space of dimension n^2.

Let φ and ψ be two covariant vectors in V*. We define a bilinear function,
denoted by φ ⊗ ψ, as

(φ ⊗ ψ)(v, w) = φ(v) ψ(w)

Thus we have constructed a second rank covariant tensor φ ⊗ ψ out of two
covariant vectors φ and ψ.
Remark: Tensors like φ ⊗ ψ form a subset of all covariant second rank
tensors; not every bilinear function on a pair of vectors can be so written.

If the components of the covariant vectors φ and ψ are a_i, i = 1, ..., n and
b_i, i = 1, ..., n, then the components of φ ⊗ ψ are the n^2 numbers

(φ ⊗ ψ)(e_i, e_j) = a_i b_j

We say that φ ⊗ ψ is the tensor product of the vectors φ and ψ. This product
satisfies simple linearity properties; here φ, ψ and χ are three covariant
vectors and a is a real number:

(φ + ψ) ⊗ χ = φ ⊗ χ + ψ ⊗ χ
φ ⊗ (ψ + χ) = φ ⊗ ψ + φ ⊗ χ
(aφ) ⊗ ψ = φ ⊗ (aψ) = a(φ ⊗ ψ)

The tensor product v ⊗ w of contravariant vectors can be defined similarly.
They are bilinear functions of covariant vectors. In particular,

(v ⊗ w)(φ, ψ) = φ(v) ψ(w)
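A short sketch of the tensor product through components, (φ ⊗ ψ)_ij = a_i b_j, checking that evaluation factorizes as φ(v)ψ(w). All numbers are invented:

```python
def outer(a, b):
    # components of the tensor product: (phi ⊗ psi)_ij = a_i b_j
    return [[ai * bj for bj in b] for ai in a]

def evaluate(t, v, w):
    # t(v, w) = sum_ij t_ij v^i w^j
    return sum(t[i][j] * v[i] * w[j]
               for i in range(len(v)) for j in range(len(w)))

a, b = [1.0, 2.0], [3.0, -1.0]     # components of phi and psi
v, w = [2.0, 1.0], [1.0, 4.0]

t = outer(a, b)
phi_v = sum(ai * vi for ai, vi in zip(a, v))   # phi(v)
psi_w = sum(bj * wj for bj, wj in zip(b, w))   # psi(w)
assert abs(evaluate(t, v, w) - phi_v * psi_w) < 1e-12
```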

Dual basis

We have mentioned that the set of all linear functions on a vector space V
themselves form a vector space, called the dual space V*. If we choose a
basis {e_i} in V, then a special basis {φ^i} exists in V*, whose n covariant
basis vectors are defined as:

φ^i(e_j) = δ^i_j

These are the covariant vectors which, acting on a vector v, give its i-th
component as the result:

φ^i(v) = v^i

This basis in V* allows us to define a basis in the vector space of second
rank covariant tensors through the n^2 tensors

φ^i ⊗ φ^j,  i, j = 1, ..., n

You can check that if

t = Σ_{i,j} t_ij φ^i ⊗ φ^j

then

t(v, w) = Σ_{i,j} t_ij φ^i(v) φ^j(w) = Σ_{i,j} t_ij v^i w^j

9.5 VECTOR AND TENSOR FIELDS


So far we have dealt with one vector space and its tensor products.
A vector field or a tensor field is concerned with an infinite number of vector
spaces (or tensor product of vector spaces). These spaces are situated at
each point in space or in space and time.
From UG physics, you know of many examples of fields.

The gravitational field of non-relativistic physics is an example. At each point


there is a vector equal to the acceleration due to gravity. The electric and
magnetic fields are also vector fields in non-relativistic physics, although we
learn in relativity that both electric and magnetic fields are different
components of a second rank antisymmetric tensor field.
Another area is fluid mechanics. The velocity of a fluid is a vector at each
point, and the flow and dynamics of the fluid are determined by a second rank
tensor called the stress tensor.
Quantum field theory also requires the introduction of various tensor and
spinor fields which take their values as linear operators in a Hilbert space.
Notation: Einstein summation convention
We shall use the tacit assumption of a sum over the full range of a repeated
index without showing the sign of summation. This is called the Einstein
summation convention. Any exception to the convention will be explicitly
pointed out.

Example 9.5: Tangent or Velocity Vectors are Contravariant

When we work with space or space-time, we use coordinates to specify a point.
These coordinates can be chosen in a variety of ways. The most familiar
examples are the choice of Cartesian coordinates (x, y, z), or the polar
coordinates (r, θ, φ). To get familiar with tensor notation, let us write
these coordinates as indexed variables:

x = (x^1, x^2, x^3) = (x, y, z)

and

x′ = (x′^1, x′^2, x′^3) = (r, θ, φ)

Notation: In tensor analysis, it is customary to denote the space or time
coordinate indices by an upper index or superscript. This makes sure that (as
we shall see) the velocity vector follows the transformation law of the
components of a contravariant vector, which always carries an upper index.
If we want to describe the trajectory of a particle, we have to provide the
three functions:

x(t) = (x^1(t), x^2(t), x^3(t))

or

x′(t) = (x′^1(t), x′^2(t), x′^3(t)) = (r(t), θ(t), φ(t))

where t represents time.


We can think of these functions as a curve in three-dimensional space. The
velocity components associated with the position x(t_0) of the particle at a
fixed time t_0 are determined by the three derivatives:

v^i(x) = (v^1, v^2, v^3) = (dx^1/dt, dx^2/dt, dx^3/dt)|_{t_0}

These are related to the polar coordinate derivatives

(v′^1, v′^2, v′^3) = (dr/dt, dθ/dt, dφ/dt)|_{t_0}

by the equations

v′^i(x′) = dx′^i/dt = Σ_j (∂x′^i/∂x^j)(dx^j/dt) = Σ_j (∂x′^i/∂x^j) v^j(x)
We can see v^i or v′^i as components of a vector. But what is the basis with
respect to which these components are calculated? To answer that, we define

v = Σ_i v^i ∂/∂x^i = Σ_i v^i e_i

This is a linear differential operator in the coordinates x and, as we see,
the ∂/∂x^i act as basis vectors. But since the coordinates x′ are functions
of x, and vice versa, the same vector can be written as:

v = Σ_i v^i ∂/∂x^i = Σ_{i,j} v^i (∂x′^j/∂x^i) ∂/∂x′^j = Σ_j v′^j ∂/∂x′^j

which shows that the same vector v which has components v^i in the basis
∂/∂x^i has the components v′^i in the basis e′_i = ∂/∂x′^i.

The law of transformation of the components of v is contravariant because the
law of transformation of the basis vectors is

e′_i = ∂/∂x′^i = Σ_j (∂x^j/∂x′^i) ∂/∂x^j = Σ_j (∂x^j/∂x′^i) e_j

If we define the matrix

t_i^j = ∂x^j/∂x′^i

then

(t^{-1})_i^j = ∂x′^j/∂x^i

because

Σ_j (t^{-1})_i^j t_j^k = Σ_j (∂x′^j/∂x^i)(∂x^k/∂x′^j) = ∂x^k/∂x^i = δ^k_i

The law of transformation of basis and components shown above is what is
expected for the components of a contravariant vector:

e′_i = Σ_j t_i^j e_j,   v′^i = Σ_j (t^{-1T})^i_j v^j

Thus, velocity is a contravariant tensor of rank 1.
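A two-dimensional numerical sketch of this contravariant transformation, taking velocity components from Cartesian (x, y) to polar (r, θ) with the Jacobian ∂x′^i/∂x^j:

```python
import math

def to_polar_velocity(x, y, vx, vy):
    # v'^i = sum_j (dx'^i/dx^j) v^j with x' = (r, theta)
    r = math.hypot(x, y)
    # dr/dx = x/r, dr/dy = y/r ; dtheta/dx = -y/r^2, dtheta/dy = x/r^2
    vr = (x * vx + y * vy) / r
    vtheta = (-y * vx + x * vy) / (r * r)
    return vr, vtheta

# a particle on the unit circle moving tangentially with unit speed
x, y = 1.0, 0.0
vx, vy = 0.0, 1.0
vr, vtheta = to_polar_velocity(x, y, vx, vy)
assert abs(vr) < 1e-12 and abs(vtheta - 1.0) < 1e-12
```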

Example 9.6: Scalar Fields and Gradient Fields

Let φ(x) be a function of the space coordinates x. What this means is that if
P is a point in space whose coordinates are x^i, then that point is assigned
a number φ(x), which may be some physical quantity. But the same point P is
also assigned coordinates x′ by the other coordinate system. The function
φ′(x′) that describes the variation of the assigned number in the x′
coordinates is therefore equal to

φ′(x′) = φ(x)

This is the law of transformation of a scalar field from the coordinates x
to x′.

Gradients of scalar fields are covariant vector fields. Given a scalar field
φ(x) in coordinates x, let dφ denote the infinitesimal variation in the
physical quantity φ when the coordinates are changed from a point P with
coordinates x^i to a point Q with coordinates x^i + dx^i:

dφ = Σ_i (∂φ/∂x^i) dx^i = Σ_i a_i(x) dx^i

The quantities a_i = ∂φ/∂x^i are called the components of the gradient vector
in the x coordinates.
The same change in φ from P to Q in the x′ coordinates is:

dφ = Σ_i (∂φ′/∂x′^i) dx′^i = Σ_i a′_i(x′) dx′^i

Equating the two, we get the relationship between the components of the
gradient:

Σ_i a′_i(x′) dx′^i = Σ_i a_i(x) dx^i = Σ_{i,j} a_i(x) (∂x^i/∂x′^j) dx′^j

Comparing the coefficients of dx′^j in the first and last expressions, we
have

a′_j(x′) = Σ_i (∂x^i/∂x′^j) a_i(x) = Σ_i t_j^i a_i(x)

So, we can deduce that the gradient vector field is covariant.

This discussion also shows that the differential dφ is a linear function on
the tangent vectors. If there is a curve which in time dt moves from point P
to Q, then the tangent vector v has components:

v^i = dx^i/dt,

and the rate of change of φ along the curve is:

dφ/dt = Σ_i (∂φ/∂x^i)(dx^i/dt) = Σ_i a_i(x) v^i(x) = dφ(v)

Therefore, as we have seen, the covariant vectors belong to the space dual to
the tangent space.

Not every covariant vector field is a gradient. Any vector field a_i(x) which
transforms as above under a change of coordinates is a covariant vector
field.
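A numerical sketch of the covariant transformation of a gradient, for the invented scalar field φ(x, y) = x² + y under the polar substitution x = r cos θ, y = r sin θ:

```python
import math

def grad_cartesian(x, y):
    # (dphi/dx, dphi/dy) for phi = x^2 + y
    return [2.0 * x, 1.0]

def grad_polar(r, t):
    # direct derivatives of phi'(r, t) = (r cos t)^2 + r sin t
    dphi_dr = 2.0 * r * math.cos(t) ** 2 + math.sin(t)
    dphi_dt = -2.0 * r * r * math.cos(t) * math.sin(t) + r * math.cos(t)
    return [dphi_dr, dphi_dt]

r, t = 2.0, 0.6
x, y = r * math.cos(t), r * math.sin(t)
a = grad_cartesian(x, y)

# Jacobian dx^i/dx'^j : rows i = (x, y), columns j = (r, theta)
J = [[math.cos(t), -r * math.sin(t)],
     [math.sin(t),  r * math.cos(t)]]
# covariant law: a'_j = sum_i (dx^i/dx'^j) a_i
a_prime = [J[0][j] * a[0] + J[1][j] * a[1] for j in range(2)]

direct = grad_polar(r, t)
assert all(abs(a_prime[j] - direct[j]) < 1e-9 for j in range(2))
```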

Example 9.7 : The Metric Tensor Field


On a space with coordinates, it is not always possible to choose Cartesian
coordinates for which the metric at each point takes the simple form
g_ij = δ_ij.
A space on which Cartesian coordinates can be chosen is a specially simple
case and it is called ‘flat’, ‘zero curvature’ or simply Euclidean space.
Even if the space is Euclidean, it is useful to use non-cartesian coordinates
like polar coordinates in the usual three-dimensional flat space.
The metric tensor is defined by the inner product in the space of tangent
vectors. The space of tangent vectors is called the tangent space. There is a
tangent space at each point of space and the inner product in each tangent
space depends on the location of that point.
Since the e_i = ∂/∂x^i form a basis for tangent vectors, the components of
the metric tensor are:

g_ij(x) = ⟨e_i, e_j⟩ = ⟨∂/∂x^i, ∂/∂x^j⟩,

and the tensor itself can be written in the dual basis dx^i as:

g = Σ_ij g_ij(x) dx^i ⊗ dx^j

As a trivial example, we take Euclidean space with Cartesian coordinates
(x^1, x^2, x^3), where g_ij = δ_ij:

g = Σ_ij δ_ij dx^i ⊗ dx^j = dx^1 ⊗ dx^1 + dx^2 ⊗ dx^2 + dx^3 ⊗ dx^3.

Remark: In physics literature, out of respect for the older physicists and
mathematicians, this equation is written as:

ds^2 = (dx^1)^2 + (dx^2)^2 + (dx^3)^2

Example 9.8

Calculate the components of the same Euclidean metric in polar coordinates.
The relations are

x^1 = r sin θ cos φ,  x^2 = r sin θ sin φ,  x^3 = r cos θ.

Therefore,

dx^1 = (sin θ cos φ) dr + r (cos θ cos φ) dθ − r (sin θ sin φ) dφ

so that

dx^1 ⊗ dx^1 = (sin θ cos φ)^2 dr ⊗ dr
  + r (sin θ cos φ)(cos θ cos φ)(dr ⊗ dθ + dθ ⊗ dr)
  − r (sin θ cos φ)(sin θ sin φ)(dr ⊗ dφ + dφ ⊗ dr)
  + r^2 (cos θ cos φ)^2 dθ ⊗ dθ
  − r^2 (cos θ cos φ)(sin θ sin φ)(dθ ⊗ dφ + dφ ⊗ dθ)
  + r^2 (sin θ sin φ)^2 dφ ⊗ dφ

with similar formulas for dx^2 ⊗ dx^2 and dx^3 ⊗ dx^3.

SAQ 3

Show that for the above metric

g = dx^1 ⊗ dx^1 + dx^2 ⊗ dx^2 + dx^3 ⊗ dx^3
  = dr ⊗ dr + r^2 dθ ⊗ dθ + r^2 sin^2 θ dφ ⊗ dφ

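The result of SAQ 3 can be checked numerically: pulling the Euclidean metric back to polar coordinates via the Jacobian should give diag(1, r², r² sin²θ). The evaluation point is an arbitrary choice:

```python
import math

def jacobian(r, th, ph):
    # rows: x^1, x^2, x^3 ; columns: d/dr, d/dtheta, d/dphi
    st, ct = math.sin(th), math.cos(th)
    sp, cp = math.sin(ph), math.cos(ph)
    return [[st * cp, r * ct * cp, -r * st * sp],
            [st * sp, r * ct * sp,  r * st * cp],
            [ct,     -r * st,       0.0]]

r, th, ph = 2.0, 0.8, 1.1
J = jacobian(r, th, ph)
# g'_ij = sum_k (dx^k/du^i)(dx^k/du^j), with u = (r, theta, phi)
g = [[sum(J[k][i] * J[k][j] for k in range(3)) for j in range(3)]
     for i in range(3)]

expected = [1.0, r * r, (r * math.sin(th)) ** 2]
for i in range(3):
    for j in range(3):
        target = expected[i] if i == j else 0.0
        assert abs(g[i][j] - target) < 1e-9
```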
In a more general setting, the metric tensor is defined in Einstein's general
theory of relativity where, apart from space, the time parameter is also
included among the coordinates. Thus there are four coordinate variables of
four-dimensional space-time, written as x^μ = (x^0 = ct, x^1, x^2, x^3).
Here c is the velocity of light.

There are ten independent elements of the symmetric matrix g_μν,
μ, ν = 0, 1, 2, 3. They represent the gravitational field. As an example, the
metric which represents the gravitational field of a massive body of mass M
in a spherically symmetric situation is given by the Schwarzschild metric:

g = −(1 − 2GM/rc^2) c^2 (dt ⊗ dt) + (1 − 2GM/rc^2)^{-1} (dr ⊗ dr)
    + r^2 (dθ ⊗ dθ) + r^2 sin^2 θ (dφ ⊗ dφ)

where, because of the spherical symmetry, only four components are non-zero.
In this formula t represents time and r, θ, φ the polar coordinates in space,
with the mass M situated at the origin r = 0. The Newtonian gravitational
constant G appears naturally, and because the theory is relativistic, the
constant c, the speed of light, is also present. In the non-relativistic
limit, the tensor element g_00 (the coefficient of c^2 dt ⊗ dt) gets related
to the Newtonian gravitational potential Φ = −GM/r as:

g_00 = −(1 + 2Φ/c^2)
Example 9.9 : The Stress Tensor
Newtonian mechanics is about point masses or rigid bodies. In those cases,
the internal forces do not play a central role because the third law
guarantees that they cancel each other. When we apply Newton's laws to
continuous media like fluids, we have to deal with internal forces, like
pressure, viscosity, etc. For a portion of fluid, the forces exerted by the
surrounding fluid are "area forces". If the fluid on one side pushes
perpendicular to a small area, then the force per unit area is called
pressure. If the force is along the plane of the surface, it is a shear
force.
If the force is along the plane of the surface, it is a shear force.
There are two vectors involved in this situation: the force, with its direction
and magnitude, is one vector, and the unit normal vector to the small element
of area is another (see Fig. 9.1a).

Consider a small area da centered at a point x at time t, with unit normal
vector n.

We denote by F the force acting on the matter situated immediately in front of
this area element due to the matter immediately behind it. If n is the unit
normal vector of the area, then the matter on the −n side of the area pushes
the matter on the +n side.
In a perfect fluid which on the whole is at rest, the force is in the same
direction as n and proportional to the area, so F = p(x, t) n da. The constant of
proportionality p(x, t) is called the pressure at the point x at time t.
But, in general, tangential or ‘shear’ forces are present due to the tendency of
one layer of fluid moving with greater velocity to drag adjacent matter which is
moving with lower velocity.
Fig. 9.1: a) Force F and unit normal vector n; b) stress tensor components t^12, t^21.

To a good approximation, the relationship is linear; that is, each component of
the force is a linear combination of the components of the normal vector:

    F^i(x, t) = Σ_k t^{ik}(x, t) n^k da

where (F^1, F^2, F^3) are the components of the force F, (n^1, n^2, n^3) are the
components of n, and da is the area.
The coefficients t^{ik} are called the components of the stress tensor.

Thus t^{ik} is the force (acting near x) in the i-direction, on the matter in front of
a unit area element chosen perpendicular to the k-direction, exerted by the
matter behind that element (Fig. 9.1b).
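A minimal numerical illustration of this formula, with an assumed stress tensor (isotropic pressure on the diagonal, as in a perfect fluid, plus one hypothetical shear component); the values are chosen only for the example:

```python
import numpy as np

# A hypothetical stress tensor at a point: isotropic pressure p on the diagonal
# (t^ik = p * delta_ik for a perfect fluid) plus a small shear component t12 = t21.
p = 1.0e5                        # pressure, Pa (assumed value)
shear = 2.0e3                    # shear stress, Pa (assumed value)
t = p * np.eye(3)
t[0, 1] = t[1, 0] = shear

n = np.array([1.0, 0.0, 0.0])    # unit normal of the area element (x1-direction)
da = 1.0e-4                      # area of the element, m^2

# F^i = sum_k t^ik n^k da
F = t @ n * da
print(F)    # [10.0, 0.2, 0.0]: pressure force along n, shear force along x2
```

The diagonal entry produces the normal (pressure) force along n, while the off-diagonal entry produces the tangential (shear) force.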
Let us now sum up the contents of this unit.

9.6 SUMMARY
In this unit, you have studied about tensors. You have learnt about

 tensors in physics;

 contravariant and covariant tensors;

 inner product and metric tensor;

 symmetry properties, contraction of tensors;

 tensor product and dual basis; and

 vector and tensor fields.

9.7 SOLUTIONS AND ANSWERS


Self-Assessment Questions
1. i)  (a′)_i = Σ_j (g′)_{ij} (v′)^j = Σ_{j,k,l,m} S_i^k S_j^l g_{kl} (S^{−1T})_j^m v^m

         = Σ_{j,k,l,m} g_{kl} S_i^k (S^{−1})_m^j S_j^l v^m = Σ_{k,l} S_i^k g_{kl} v^l

         = Σ_k S_i^k a_k

   ii) (v′)^i = Σ_j (g′)^{ij} (a′)_j = Σ_{j,k,l,m} (S^{−1T})_i^k (S^{−1T})_j^l g^{kl} S_j^m a_m

         = Σ_{k,l,j,m} (S^{−1T})_i^k (S^{−1})_l^j S_j^m g^{kl} a_m = Σ_{k,m} (S^{−1T})_i^k g^{km} a_m

         = Σ_k (S^{−1T})_i^k v^k
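The chain of index manipulations in part (i) can be checked numerically: in matrix form it says a′ = (S g Sᵀ)(S⁻¹ᵀ v) = S a. A sketch using numpy with random matrices (assumed invertible, which holds almost surely for random entries):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(3, 3))                 # change-of-basis matrix (assumed invertible)
g = rng.normal(size=(3, 3)); g = g + g.T    # a symmetric metric
v = rng.normal(size=3)                      # contravariant components of a vector

a = g @ v                                   # a_i = sum_j g_ij v^j

g_new = S @ g @ S.T                         # g'_ij = sum_{k,l} S_i^k S_j^l g_kl
v_new = np.linalg.inv(S).T @ v              # v'^j = sum_m (S^-1T)_j^m v^m
a_new = g_new @ v_new                       # a'_i = sum_j g'_ij v'^j

print(np.allclose(a_new, S @ a))            # True: a'_i = sum_k S_i^k a_k
```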

2. (Q) jk ln  (T )ijk lin


i

    (S 1T )i r (S 1T ) j s (S 1T )k t Sl pSi qSn mT rst pqm


i r , s, t p, q, m

But  (S 1T )i r Si q  (S 1) ri Si q  r q


i i

Therefore

(Q) jk ln    (S 1T ) j s(S 1T )k t Sl pSn m T rst pqm


s, t p, m r

   (S 1T ) j s(S 1T )k t Sl pSn m Qst pm


s, t p, m

This shows that Qst pm are components of a tensor of rank 4 with two
contravariant and two covariant indices. 113
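This transformation law can be verified numerically with numpy’s einsum, using a random rank-6 tensor and a random change of basis (a sketch; upper indices transform with (S⁻¹)ᵀ and lower indices with S, as in the solution above):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))
A = np.linalg.inv(S).T     # contravariant indices transform with (S^-1)^T

# A random rank-6 tensor T^{ijk}_{lmn}: axes ordered (i, j, k, l, m, n)
T = rng.normal(size=(3,) * 6)

# Full tensor transformation: each upper index with A, each lower index with S
T_new = np.einsum('ai,bj,ck,dl,em,fn,ijklmn->abcdef', A, A, A, S, S, S, T)

# Contract the first upper index with the second lower index, before and after
Q = np.einsum('ijklin->jkln', T)
Q_new = np.einsum('ijklin->jkln', T_new)

# Q should transform as a rank-4 tensor (two upper, two lower indices)
Q_check = np.einsum('bj,ck,dl,fn,jkln->bcdf', A, A, S, S, Q)
print(np.allclose(Q_new, Q_check))   # True
```

The repeated index i in the einsum string 'ijklin->jkln' performs exactly the contraction of the SAQ.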
Block 2 Vector Spaces, Matrices and Tensors
3.  dx^1 = (sin θ cos φ) dr + r (cos θ cos φ) dθ − r (sin θ sin φ) dφ

    dx^2 = (sin θ sin φ) dr + r (cos θ sin φ) dθ + r (sin θ cos φ) dφ

    dx^3 = (cos θ) dr − r (sin θ) dθ

    so that

    dx^1 ⊗ dx^1 = (sin θ cos φ)^2 dr ⊗ dr
      + r (sin θ cos φ)(cos θ cos φ) (dr ⊗ dθ + dθ ⊗ dr)
      − r (sin θ cos φ)(sin θ sin φ) (dr ⊗ dφ + dφ ⊗ dr)
      + r^2 (cos θ cos φ)^2 dθ ⊗ dθ
      − r^2 (cos θ cos φ)(sin θ sin φ) (dθ ⊗ dφ + dφ ⊗ dθ)
      + r^2 (sin θ sin φ)^2 dφ ⊗ dφ

    dx^2 ⊗ dx^2 = (sin θ sin φ)^2 dr ⊗ dr
      + r (sin θ sin φ)(cos θ sin φ) (dr ⊗ dθ + dθ ⊗ dr)
      + r (sin θ sin φ)(sin θ cos φ) (dr ⊗ dφ + dφ ⊗ dr)
      + r^2 (cos θ sin φ)^2 dθ ⊗ dθ
      + r^2 (cos θ sin φ)(sin θ cos φ) (dθ ⊗ dφ + dφ ⊗ dθ)
      + r^2 (sin θ cos φ)^2 dφ ⊗ dφ

    dx^3 ⊗ dx^3 = (cos θ)^2 dr ⊗ dr − r (cos θ)(sin θ) (dr ⊗ dθ + dθ ⊗ dr)
      + r^2 sin^2 θ (dθ ⊗ dθ)

    Add and check that all cross terms like
    (dr ⊗ dθ + dθ ⊗ dr), (dr ⊗ dφ + dφ ⊗ dr), (dθ ⊗ dφ + dφ ⊗ dθ)
    cancel, and that the coefficients of dr ⊗ dr, dθ ⊗ dθ, dφ ⊗ dφ are 1,
    r^2 and r^2 sin^2 θ respectively.
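The whole computation can also be delegated to a computer algebra system. A sketch using sympy, which builds the same differentials through the Jacobian and confirms symbolically that the cross terms cancel:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Cartesian coordinates in terms of spherical ones
x = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

# J_ia = dx^i/dq^a for q = (r, theta, phi)
J = x.jacobian(sp.Matrix([r, th, ph]))

# g'_ab = sum_i (dx^i/dq^a)(dx^i/dq^b); off-diagonal terms simplify to zero
g = sp.simplify(J.T * J)
print(g)   # diag(1, r**2, r**2*sin(theta)**2)
```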
