Lectures on the Theory of Group Properties of Differential Equations

Edited by N. H. Ibragimov
Higher Education Press
NONLINEAR PHYSICAL SCIENCE
Nonlinear Physical Science focuses on recent advances of fundamental theories and principles, analytical and symbolic approaches, as well as computational techniques in nonlinear physical science and nonlinear mathematics with engineering applications.
Topics of interest in Nonlinear Physical Science include but are not limited to:
- New findings and discoveries in nonlinear physics and mathematics
- Nonlinearity, complexity and mathematical structures in nonlinear physics
- Nonlinear phenomena and observations in nature and engineering
- Computational methods and theories in complex systems
- Lie group analysis, new theories and principles in mathematical modeling
- Stability, bifurcation, chaos and fractals in physical science and engineering
- Nonlinear chemical and biological physics
- Discontinuity, synchronization and natural complexity in the physical sciences
SERIES EDITORS
Albert C.J. Luo
Department of Mechanical and Industrial Engineering
Southern Illinois University Edwardsville
Edwardsville, IL 62026-1805, USA
Email: aluo@siue.edu

Nail H. Ibragimov
Department of Mathematics and Science
Blekinge Institute of Technology
S-371 79 Karlskrona, Sweden
Email: nib@bth.se
Author: L.V. Ovsyannikov
Editor: Nail H. Ibragimov

© 2013 Higher Education Press Limited Company, 4 Dewai Dajie, 100120, Beijing, P. R. China
When I studied at Novosibirsk State University (Russia) I was lucky to have such brilliant teachers in mathematics as M.A. Lavrentyev, S.L. Sobolev, A.I. Mal'tsev, Yu.G. Reshetnyak and others. But it was L.V. Ovsyannikov's lectures in Ordinary differential equations, Partial differential equations, Gas dynamics and Group properties of differential equations that were of the most benefit for me. I attended his course "Group properties of differential equations" when I was a third-year student.
His lectures provided a clear introduction to Lie group methods for determining symmetries of differential equations and a variety of their applications in gas dynamics and other nonlinear models, as well as Ovsyannikov's remarkable contribution to this classical subject. His lectures were spectacular not only due to the brilliant presentation of the material but also due to discoveries that were absolutely new to us. I remember one of our most emotional students' repeated exclamations, "Wonderful! ... Incredible!", every time Ovsyannikov revealed the most unusual properties of symmetries or unexpected methods.
His lecture notes of this course were published in 1966 in a print run of only 300 copies. Since then the Notes have been neither reprinted nor translated into English, though they contain material that is useful for students and teachers but cannot be found in modern texts. For example, the theory of partially invariant solutions developed by Ovsyannikov and presented in §3.5 and §3.6 is useful for investigating mathematical models described by systems of nonlinear differential equations. It is important to make this classical text available to everyone interested in modern group analysis.

In order to adapt the text for modern students I made several minor changes in the English translation. In particular, sections have been divided into subsections, and a few misprints have been corrected. Some of the problems formulated in §3.7 have been completely or partially solved since 1966, but we did not make any comments on this matter in the present translation.
The theory of differential equations has two aspects of investigation, namely local
and global, no matter whether the equations arise from applied problems of physics
and mechanics or from abstract speculations (which is rather frequent in modern
mathematics). The local aspect is characterized by dealing with the inner structure
of a family of solutions and its investigation in a neighborhood of a certain point.
The global approach deals with solutions defined in some domain and having a given
behavior on its boundary.
It would certainly be erroneous to oppose these directions to each other. How-
ever, it is no good to ignore the differences in approaches either. While the global
approach necessitates the functional analytic apparatus, the local viewpoint allows
one to get along with algebraic means only. A brilliant example of a profound local consideration is the famous Cauchy-Kovalevskaya theorem which is, in fact, an algebraic statement. Moreover, it is an easy matter to notice that the theory of boundary value problems also makes essential use of various algebraic properties of the whole family of solutions. Therefore, the local aspect of the algebraic theory of differential equations is quite vital.
The theory of group properties of differential equations described in the present lecture notes is a typical example of a local theory. It is especially valuable in investigating nonlinear differential equations, for its algorithms act here as reliably as in linear cases.
In spite of the fact that the fundamentals of the theory of group properties were elaborated in the works of the Norwegian mathematician Sophus Lie more than a hundred years ago, its development is desirable nowadays as well.
A methodological peculiarity of the present text is that its first chapter uses only the
simplest algebraic apparatus of one-parameter groups, which is especially advisable
for researchers engaged in applied fields. This allows one to solve the problem of
finding a group admitted by a given system of differential equations completely. The
second chapter is tailored to provide a deeper insight into the subject resulting from
solving determining equations. The group structure of the family of solutions itself
is discussed in the third chapter, which also suggests some new elements for the
theory. The latter are related to the notions of a partially invariant manifold of the
group, its defect of invariance, and the problem of reduction of partially invariant
solutions. The final section x3.7 suggests a qualitative formulation of several prob-
lems demonstrating possibilities for further development of the theory with no claim
to be complete.
The present lecture notes were written hot on the heels of a special course given by the author at Novosibirsk State University during the 1965/1966 academic year. Such a prompt decision was made in order to have the lecture notes published by the spring examinations. Therefore, the lectures may appear to be "raw" to a large extent, and the author is ready to take complete responsibility for that.
The quick release of the lecture notes would have been impossible without the support of the administration of the university. A major part of the technical work in preparing the manuscript was done by the students V.G. Firsov, E.Z. Borovskaya, T.E. Kuzmina, N.I. Naumenko, M.L. Kochubievskaya and others. The author is sincerely grateful to all these people.
Editor's preface ... v
Preface ... vii
1.1.1 Definition
We will consider a family of transformations {T_a} with the above properties depending on a real parameter a that varies within an interval Δ.

The family {T_a} is said to be locally closed with respect to the product if there exists a subinterval Δ′ ⊂ Δ such that

T_b T_a ∈ {T_a}

for any a, b ∈ Δ′. This leads to a function c = ϕ(a, b) which determines the multiplication law for transformations of {T_a} according to the formula

T_b T_a = T_c.
Definition 1.1. The family {T_a} is called a local one-parameter continuous transformation group if it is locally closed with respect to the product and if the interval Δ′ can be chosen so that the following conditions hold.

1. There exists a unique value a_0 ∈ Δ′ such that T_{a_0} is the identity transformation.

2. The function ϕ(a, b) is thrice continuously differentiable and the equation ϕ(a, b) = a_0 has a unique solution b = a⁻¹ for any a ∈ Δ′.

Condition 2 means that the operation of inversion of transformations, (T_a)⁻¹ = T_{a⁻¹}, is possible in {T_a}.

Hereafter the symbol a⁻¹ indicates only a definite value of the parameter and not the inverse of the number a, so that a⁻¹ ≠ 1/a.
The choice of the interval Δ′ is not unique, generally speaking. Once such an interval is selected, one can take any smaller interval instead of Δ′. It means that we are interested only in some sufficiently small neighborhood of a_0. The operations of multiplication and inversion of transformations T_a are feasible only for values of the parameter a from the above neighborhood. Therefore, the object introduced by Definition 1.1 is termed a local group. In what follows, sufficient closeness of all considered values of the parameters a, b, ... to the value a_0 is assumed.

Further on, the term "group G_1" will be used to indicate a local one-parameter continuous transformation group.
1.1 One-parameter continuous transformation group
Generally, introduction of a new parameter ā = ā(a), where ā(a) is a thrice continuously differentiable monotonic function, changes ϕ, Δ and Δ′.
In what follows, we assume that a_0 = 0 without loss of generality. Note that in this case the definition leads to the following properties of the function ϕ(a, b):

ϕ(a, 0) = a,   ϕ(0, b) = b.   (1.1.4)
Setting

ϕ(a, b + Δb) = c + Δc,

i.e.

T_{b+Δb} T_a = T_{c+Δc},

one obtains

T_{c+Δc} (T_c)⁻¹ = T_{b+Δb} (T_b)⁻¹

due to the associative multiplication law. In terms of ϕ this equality has the form

ϕ(c⁻¹, c + Δc) = ϕ(b⁻¹, b + Δb).   (1.1.6)
Applying the above equation to Eq. (1.1.6) and invoking that |Δc| = O(|Δb|), one obtains

V(c) Δc = V(b) Δb + O(|Δb|²).   (1.1.7)

Dividing both sides of Eq. (1.1.7) by Δb and taking the limit Δb → 0, one arrives at the differential equation

V(c) dc/db = V(b)   (1.1.8)

with the initial condition

c|_{b=0} = a.

Furthermore, equations (1.1.4) show that V(0) = 1.

Let us introduce the function

ā(a) = ∫₀^a V(s) ds.
In the canonical parameter ā the multiplication law becomes additive:

T_b T_a = T_{a+b} = T_{b+a} = T_a T_b.
1.1.3 Examples
Example 1.1. The group of translations along the x-axis:

x′ = x + a.

Here

ϕ(a, b) = a + b.

Translations in an N-dimensional space in the direction of the vector λ = (λ^1, ..., λ^N) are given by

x′^i = x^i + λ^i a   (i = 1, ..., N).
Example 1.2. The group of dilations along the x-axis:

x′ = ax   (Δ = (0, ∞)).

Here

ϕ(a, b) = ab.

Setting a = e^ā, one arrives at the canonical parameter ā.

The group G_1 of dilations in an N-dimensional space has the form

x′^i = a^{ν^i} x^i   (ν^i = const).
Example 1.3. The group of rotations in the (x, y) plane. It is clear from the geometric meaning of these transformations that the transition to the canonical parameter is given by the formula a = sin ā. In this parameter the rotation transformations take the standard form

x′ = x cos ā + y sin ā,   y′ = y cos ā − x sin ā,

where ā ∈ (−π, π).
Example 1.4. The transformations

x′ = x/(1 − ax),   y′ = y/(1 − ax)

form the group of projective transformations on the (x, y) plane. Here a is a canonical parameter.
∂f^i/∂a = ξ^i(f)   (i = 1, ..., N)   (1.1.11)

with the initial conditions

f^i|_{a=0} = x^i   (i = 1, ..., N).   (1.1.12)
Proof. Let us give a small increment Δa to the parameter a and write the equation T_{a+Δa} = T_{Δa} T_a in terms of the functions f^i. The Taylor expansion of both sides of the above equation with respect to Δa has the form

f^i(x, a + Δa) = f^i(x, a) + (∂f^i/∂a) Δa + O(|Δa|²),

f^i(f(x, a), Δa) = f^i(x, a) + (∂f^i(f, Δa)/∂Δa)|_{Δa=0} Δa + O(|Δa|²).

Equating the right-hand sides, dividing by Δa and letting Δa → 0, one obtains Eqs. (1.1.11). Equations (1.1.12) hold due to the fact that T_0 is the identity transformation. Thus, the direct statement is proved.
Conversely, let ξ^i(x) be a given set of continuously differentiable functions. Equations (1.1.11) provide a system of ordinary differential equations with the independent variable a. The assumptions of the theorem guarantee that this system has a unique solution in a neighborhood of a = 0. The solution provides a one-parameter family of transformations. Let us demonstrate that it is a group G_1. It is manifest from Eqs. (1.1.12) that we have the identity transformation when a = 0. Let us prove that the equation T_b T_a = T_{a+b} holds in a certain neighborhood of the value a = 0. In terms of the function f we have to show that f^i(f(x, a), b) = f^i(x, a + b). Let

y^i(b) = f^i(f(x, a), b),   z^i(b) = f^i(x, a + b).

Calculating the derivatives of these functions one obtains

∂y^i/∂b = ∂f^i(f, b)/∂b = ξ^i(y),

since the functions f^i satisfy Eqs. (1.1.11). Moreover, by virtue of Eqs. (1.1.12) one has f^i(f(x, a), 0) = f^i(x, a), i.e.
y^i(0) = f^i(x, a). Further,

dz^i/db = ∂f^i(x, a + b)/∂b = ξ^i(z),   z^i(0) = f^i(x, a).

Thus, the functions y^i(b) and z^i(b) satisfy one and the same system of differential equations with the same initial data. The theorem on uniqueness of the solution of a system of differential equations guarantees that

y^i(b) = z^i(b).
The existence of the inverse transformation follows from the fact that letting a⁻¹ = −a one obtains

T_a (T_a)⁻¹ = T_a T_{−a} = T_0.

The latter is the identity transformation by virtue of Eqs. (1.1.12).

Theorem 1.2 can be used for constructing local one-parameter transformation groups.
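As a small illustration of this construction (an addition to the original notes; Python with sympy is assumed to be available), one can integrate the Lie equation (1.1.11) for the single coordinate ξ(x) = x² and verify both the finite transformation and the group property T_b T_a = T_{a+b}:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')
f = sp.Function('f')

# Lie equation (1.1.11) for N = 1 with the coordinate xi(x) = x**2:
#   df/da = f**2,  f|_{a=0} = x
sol = sp.dsolve(sp.Eq(f(a).diff(a), f(a)**2), f(a), ics={f(0): x})
fa = sol.rhs

# integration recovers the projective transformation of Example 1.4
assert sp.simplify(fa - x/(1 - a*x)) == 0

# group property: f(f(x, a), b) = f(x, a + b), so a is a canonical parameter
composed = fa.subs({x: fa, a: b})
assert sp.simplify(composed - fa.subs(a, a + b)) == 0
```

The check confirms that solving the Lie equations with the identity as initial data indeed produces a local one-parameter group, as Theorem 1.2 asserts.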
We will use the usual convention of summation with respect to repeated indices. Namely, if the same index appears once as an upper and once as a lower index, the sign ∑ is omitted for the sake of brevity. For instance, instead of the expression

∑_{i=1}^N ∑_{j=1}^N A_{ij} a^i b^j,

we write A_{ij} a^i b^j.
Definition 1.2. An infinitesimal operator of the group G_1 is the linear differential operator

X = ξ^i(x) ∂/∂x^i,   (1.2.1)

where the ξ^i(x) are determined in (1.1.10). The functions ξ^i(x) are called the coordinates of the operator X.
Let us write out the operators of the groups G_1 for Examples 1.1-1.4 from §1.1.
1 One-parameter continuous transformation groups admitted by differential equations

Example 1.5. The operator of the translation group x′ = x + a along the x-axis is

X = ∂/∂x.

The general translation operator in E^N along the vector λ has the form

X = λ^i ∂/∂x^i,   λ^i = const.
Example 1.6. The operator of the dilation group x′ = ax in the direction of the x-axis is

X = x ∂/∂x.

The general dilation operator in E^N has the form

X = ∑_{i=1}^N ν^i x^i ∂/∂x^i,   ν^i = const.
Example 1.7. The operator of the rotation group in the plane (x, y) is

X = y ∂/∂x − x ∂/∂y.
Example 1.8. The operator

X = x² ∂/∂x + xy ∂/∂y

corresponds to the projective transformations of Example 1.4.
Example 1.9. Let us find the finite transformations of the group G_1 with the operator

X = y ∂/∂x

with two variables x^1 = x, x^2 = y. Here ξ^1 = y, ξ^2 = 0, and the system (1.1.11) with the initial conditions has the form

∂x′/∂a = y′,   ∂y′/∂a = 0;   x′|_{a=0} = x,   y′|_{a=0} = y.

It follows that y′ is independent of a, and the initial condition yields y′ = y. Now the first equation provides x′ = x + ay. Thus, the finite transformations of the group G_1 with the operator

X = y ∂/∂x

are

x′ = x + ay,   y′ = y.
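The finite transformations obtained above are easy to verify symbolically. The following sketch (not part of the original text; sympy assumed) checks that x′ = x + ay, y′ = y satisfies the system (1.1.11), the initial conditions (1.1.12), and the additive group property:

```python
import sympy as sp

a, b, x, y = sp.symbols('a b x y')

# claimed finite transformations of the group with operator X = y d/dx
xp = x + a*y
yp = y

# system (1.1.11): dx'/da = y',  dy'/da = 0
assert sp.diff(xp, a) == yp
assert sp.diff(yp, a) == 0

# initial conditions (1.1.12): the identity transformation at a = 0
assert xp.subs(a, 0) == x and yp.subs(a, 0) == y

# group property: composing parameters a and b gives a + b
assert sp.expand(xp.subs({x: xp, a: b}) - xp.subs(a, a + b)) == 0
```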
If T_a ∈ G_1, then T_a F(x) is a function of the point x and the parameter a. Let us calculate its derivative with respect to a at a = 0. Invoking Eqs. (1.1.10) and (1.2.1) one has

(∂/∂a) T_a F(x)|_{a=0} = (∂F(x′)/∂x′^i)(∂x′^i/∂a)|_{a=0} = ξ^i(x) ∂F(x)/∂x^i = XF(x),

or finally

(∂/∂a) T_a F(x)|_{a=0} = XF(x).   (1.2.2)

Hence, the principal linear part of the variation of F(x) under the transformations T_a ∈ G_1 at a = 0 is given by the equation

T_a F(x) − F(x) ≈ (XF) a.

This is the reason why the operator X is called an infinitesimal operator (i.e. the operator of the infinitesimal transformation).
For finite transformations T_a ∈ G_1 the following lemma holds.

Lemma 1.1. If x′ = T_a x, then

∂F(x′)/∂a = X′F(x′),   (1.2.3)

where

X′ = ξ^i(x′) ∂/∂x′^i.

Proof. One has

∂F(x′)/∂a = (∂F/∂x′^i)(∂x′^i/∂a) = ξ^i(x′) ∂F/∂x′^i = X′F(x′),

where the second equality is obtained by using Eqs. (1.1.11).
The system of coordinates in the space E^N was not supposed to be Cartesian in the above reasoning. Therefore, it is important to consider the behavior of the groups G_1 under a change of coordinates. Let

T_a: x′^i = f^i(x, a)

be transformations of the group G_1 and let

T: y^i = y^i(x)   (i = 1, ..., N)

be a change of coordinates, so that in the new coordinates the transformations are written y′^i = g^i(y, a) by definition. Hence

y^i(f(x, a)) = g^i(y(x), a).   (1.2.4)

Equations (1.2.4) hold identically in the variables x, a.

Considering the transformations y′^i = g^i(y, a) as transformations in the system of coordinates (x), one obtains new transformations T̄_a. Equation (1.2.4) is written in this notation in the form T T_a = T̄_a T, whence

T̄_a = T T_a T⁻¹.   (1.2.5)
Let us introduce the functions

η^i = (∂g^i/∂a)|_{a=0}.
Differentiating the identities (1.2.4) with respect to a and setting a = 0 we obtain

(∂y^i/∂x^j) ξ^j(x) = (∂g^i/∂a)|_{a=0} = η^i(y).

Hence the operator X is written in the new coordinates as

X = ξ^i ∂/∂x^i = η^i ∂/∂y^i,

where

η^i = (∂y^i/∂x^j) ξ^j = X(y^i(x)).

Thus, the transformation formula for the operator X under a change of coordinates y^i = y^i(x) has the form

X = ξ^i ∂/∂x^i = X(y^i) ∂/∂y^i.   (1.2.6)
Theorem 1.3. Any one-parameter transformation group is similar to a group of translations with respect to one of the coordinates.

Proof. It is known that for any contravariant vector {ξ^1, ..., ξ^N} there exists a system of coordinates where this vector is written in the form {1, 0, ..., 0}. Therefore, the statement follows from Lemma 1.2.
Example 1.10. Let us rewrite the operator

X = x ∂/∂y − y ∂/∂x

in the polar system of coordinates

r = √(x² + y²),   ϕ = arctan(y/x).

In this example, Equation (1.2.6) is written

X = X(r) ∂/∂r + X(ϕ) ∂/∂ϕ.

We have

X(r) = x ∂r/∂y − y ∂r/∂x = 0

and

X(ϕ) = x ∂ϕ/∂y − y ∂ϕ/∂x = x·(x/r²) + y·(y/r²) = 1.

Thus, in polar coordinates one has

X = ∂/∂ϕ.

This is the operator of translations along the coordinate ϕ.
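This change of coordinates can be repeated symbolically. The following sketch (an addition to the text; sympy assumed) verifies the coordinates X(r) = 0 and X(ϕ) = 1 computed above:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)
phi = sp.atan(y/x)

def X(F):
    # rotation operator of Example 1.10: X = x d/dy - y d/dx
    return x*sp.diff(F, y) - y*sp.diff(F, x)

# coordinates of X in the new variables, cf. formula (1.2.6)
assert sp.simplify(X(r)) == 0     # r is unchanged by rotations
assert sp.simplify(X(phi)) == 1   # translation along the angle phi
```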
1.3.1 Invariants

A function F(x) is an invariant of the group G_1 if and only if

XF(x) = 0.   (1.3.1)

Indeed, if XF = 0 then, by Lemma 1.1,

(d/da) F(x′) = X′F(x′) = 0.

It follows that the equation F(x′) = F(x) is satisfied identically in x and a. This completes the proof.
Equation (1.3.1) is a linear partial differential equation of the first order and can be written in more detail as follows:

ξ^i(x) ∂F/∂x^i = 0.   (1.3.2)

It is known that such an equation has N − 1 functionally independent solutions F = I^τ(x) (τ = 1, ..., N − 1), and the general solution has the form

F(x) = Φ(I^1(x), ..., I^{N−1}(x)),   (1.3.3)

where Φ is an arbitrary function of N − 1 variables.
1.3 Invariants and invariant manifolds
Example 1.11. Consider the group of transformations

x′ = x + a,   y′ = y + 2a.

Equation (1.3.2) for this group is written

∂F/∂x + 2 ∂F/∂y = 0.

The associated (characteristic) system of ordinary differential equations contains in this case one equation:

dx/1 = dy/2.

Its first integral obviously is 2x − y = C.
Example 1.12. Consider a group G_1 given by the dilation operator

X = x ∂/∂x + 3y ∂/∂y − 2z ∂/∂z.

The characteristic system is

dx/x = dy/3y = dz/(−2z),

whence one obtains two independent integrals

y/x³ = C_1,   x²z = C_2.

Here N = 3; the two independent invariants are

I^1 = y/x³,   I^2 = x²z,
and the general form of the invariant of the considered group G_1 is given by

F = Φ(y/x³, x²z)

with an arbitrary function of two variables Φ(I^1, I^2).
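A symbolic check of Example 1.12 (not in the original text; sympy assumed) confirms that I^1, I^2, and an arbitrary function Φ(I^1, I^2) are all annihilated by X:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def X(F):
    # dilation operator of Example 1.12: X = x d/dx + 3y d/dy - 2z d/dz
    return x*sp.diff(F, x) + 3*y*sp.diff(F, y) - 2*z*sp.diff(F, z)

I1 = y/x**3
I2 = x**2*z
assert sp.simplify(X(I1)) == 0
assert sp.simplify(X(I2)) == 0

# any function of I1 and I2 is again an invariant, cf. (1.3.3)
Phi = sp.Function('Phi')
assert sp.simplify(X(Phi(I1, I2))) == 0
```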
Example 1.13. The invariant of the group of rotations with the operator

X = y ∂/∂x − x ∂/∂y

has the form I = x² + y². It can also be readily derived from the geometrical meaning of transformations of the considered G_1.
In particular, invariant manifolds of the group G_1 can be given by equations in the invariants:

F_α(I^1(x), ..., I^{N−1}(x)) = 0   (α = 1, ..., A).
For instance, the operator

X = x ∂/∂x + 3y ∂/∂y − 2z ∂/∂z

has the invariants

I^1 = y/x³,   I^2 = x²z,

and one obtains equations of its two-dimensional invariant manifolds in E³(x, y, z) in the form

F(y/x³, x²z) = 0,
or equations of one-dimensional invariant manifolds in the form
F_1(y/x³, x²z) = 0,   F_2(y/x³, x²z) = 0.
It is obvious that one can write these equations in the explicit form, namely

y = x³ ϕ(x²z)

or

y = C_1 x³,   z = C_2/x²   (C_1, C_2 = const),

respectively.
Now let us investigate the invariance criterion for a given manifold. Consider a manifold M given by a system of equations

ψ^σ(x) = 0   (σ = 1, ..., s).   (1.3.4)

In order to formulate conditions on the functions ψ^σ(x), the notion of the general rank of a functional matrix is necessary.

Let M = M(x) be a matrix whose entries are given functions of a point x ∈ E^N. The rank of the matrix M(x) at the point x is denoted by

R = R(M) = R(M(x)).

The rank R itself is also a function of the point x: R = R(x). The matrix M is said to have the general rank at the point x_0 ∈ E^N if

R(x) = R(x_0)

for all x in some neighborhood of x_0. The manifold M is said to be regularly defined by Eqs. (1.3.4) if the Jacobian matrix J = ‖∂ψ^σ/∂x^i‖ has the general rank equal to s (the number of Eqs. (1.3.4)) at every point of M.
Example 1.14. Let ψ^1 = x², ψ^2 = y^5. Then

J = | 2x    0   |
    | 0    5y^4 |.

The matrix J has the general rank at any point (x_0, y_0) that does not belong to the coordinate axes, and has no general rank at any point of the coordinate axes in the (x, y) plane. Accordingly, the manifold given by the equations

x² = 1,   y^5 = 2

is regularly defined, whereas the manifold

x² = 1,   y^5 = 0

is not regularly defined. In the latter case the same manifold can be given by the equations x² = 1, y = 0, and then it becomes regularly defined.
Theorem 1.5. Let the manifold M be regularly defined by Eqs. (1.3.4). The necessary and sufficient condition for the invariance of the manifold M with respect to the group G_1 with the operator X is the validity of the equations

Xψ^σ|_M = 0   (σ = 1, ..., s).   (1.3.5)

Proof. Let us introduce new coordinates whose first s components are

y^σ = ψ^σ(x)   (σ = 1, ..., s),

so that M is given by the equations y^σ = 0. If

X = ξ^i(x) ∂/∂x^i,

then equations (1.3.5) take the form

ξ^σ|_M = 0   (σ = 1, ..., s)   (1.3.7)

for our M. Considering Eqs. (1.1.11) at points of M, one can rewrite them in the form of two subsystems of equations
∂f^σ/∂a = ξ^σ(f^1, ..., f^s, f^{s+1}, ..., f^N)   (σ = 1, ..., s),

∂f^{s+α}/∂a = ξ^{s+α}(f^1, ..., f^N)   (α = 1, ..., N − s).

The initial values for the first subsystem vanish at any point x ∈ M:

f^σ|_{a=0} = 0.
The trivial functions

x′^σ = f^σ(x, a) ≡ 0   (σ = 1, ..., s)

with x ∈ M satisfy the initial conditions and, by virtue of Eqs. (1.3.7), all equations of the first subsystem. Hence,

x′^σ = f^σ(x, a) = 0   (σ = 1, ..., s)

for all x ∈ M and every value of the parameter a, due to the uniqueness of the solution of the system (1.1.11). According to Definition 1.4, this means that M is invariant with respect to G_1 with the operator X.
Let us introduce the new quantities p^k_i equal to the values of the derivatives of the functions (1.4.2):

p^k_i = ∂u^k/∂x^i   (i = 1, ..., n;  k = 1, ..., m),

and call them derivatives on the manifold Φ.
The derivatives p′^k_i on the manifold Φ′ are obtained by differentiating Eqs. (1.4.3) with respect to every x^i. This provides the equations

∂g^k/∂x^i + (∂g^k/∂u^l) p^l_i = p′^k_j (∂f^j/∂x^i + (∂f^j/∂u^l) p^l_i),   (1.4.4)

which are to be solved with respect to the quantities p′^k_j. The latter operation provides a single-valued result when the values of |a| are sufficiently small, due to the fact that the parentheses in the right-hand sides of Eqs. (1.4.4) are equal to δ^j_i (the Kronecker symbol) when a = 0. Hence, the determinant of the system does not vanish and, due to the continuity of the functions contained in Eqs. (1.4.4), its sign does not change in a certain vicinity of a = 0. Therefore, the quantities p′^k_i can be represented as functions of x, u, p, a:

p′^k_i = h^k_i(x, u, p, a)   (k = 1, ..., m;  i = 1, ..., n).   (1.4.5)

Note that the representation (1.4.5) has no connection with the specific form of the manifold Φ at all.
Let us introduce the space Ẽ^Ñ of dimension Ñ = N + mn, where the quantities x, u, p serve as point coordinates. This space will be called the prolongation of the space E^N.

The union of Eqs. (1.4.1) and (1.4.5) determines a family of transformations T̃_a in Ẽ^Ñ depending on the parameter a. The transformations T̃_a are said to be the prolongations of the transformations T_a. They form a group G̃_1 with the same multiplication law; in particular,

T̃_{a⁻¹} = (T̃_a)⁻¹.
Let us find the operator of the group G̃_1. Since G_1 is given by Eqs. (1.4.1), its operator has the form (see Definition 1.2)

X = ξ^i ∂/∂x^i + η^k ∂/∂u^k,   (1.4.6)

where

ξ^i = ξ^i(x, u) = (∂f^i/∂a)|_{a=0},   η^k = η^k(x, u) = (∂g^k/∂a)|_{a=0}.
Invoking that the group G̃_1 transforms the variables x, u precisely as the group G_1 does, we conclude that Definition 1.2 of the operator X̃ for the group G̃_1 yields
X̃ = X + ζ^k_i ∂/∂p^k_i,   (1.4.7)

where

ζ^k_i = ζ^k_i(x, u, p) = (∂h^k_i/∂a)|_{a=0}.   (1.4.8)

The operator X̃ determined by Eqs. (1.4.7) and (1.4.8) is called the prolonged operator obtained by prolonging the operator X of Eq. (1.4.6) with respect to the functions u(x).
In what follows, we use the operator of total differentiation

D_i = ∂/∂x^i + p^k_i ∂/∂u^k.   (1.4.9)

Then Eqs. (1.4.8) take the compact form

ζ^k_i = D_i(η^k) − p^k_j D_i(ξ^j).   (1.4.10)
Note that the ζ^k_i are linear homogeneous forms with respect to the first derivatives of the coordinates of the operator X of Eq. (1.4.6).
As a general example, consider the group G_1 generated by the infinitesimal operator

X = ξ(x, y) ∂/∂x + η(x, y) ∂/∂y

in the space E²(x, y), and prolong this operator with respect to the function y(x). If

p = y′ = dy/dx,

then
X̃ = X + ζ ∂/∂p.

The operator of total differentiation has the form

D = ∂/∂x + p ∂/∂y.

The formula (1.4.10) yields

ζ = D(η) − p D(ξ) = ∂η/∂x + p(∂η/∂y − ∂ξ/∂x) − p² ∂ξ/∂y.   (1.4.11)
Let us calculate the prolonged operators for some specific groups acting in E²(x, y).

Example 1.15. The group of translations with respect to the x-axis. Its infinitesimal operator has the form

X = ∂/∂x.

In this case ξ = 1, η = 0. Substituting them into Eq. (1.4.11), one obtains ζ = 0. Therefore, X̃ = X. In such cases we say that the operator "does not prolong", which means that the operator does not change after the prolongation.
Example 1.16. For the dilation group with the operator

X = x ∂/∂x + 2y ∂/∂y

one has ξ = x, η = 2y. Substituting into Eq. (1.4.11), one obtains the dilation operator again, but in an extended space. Namely, ζ = 2p − p = p and hence

X̃ = X + p ∂/∂p.
Problem 1.1. Prove that if X is the dilation operator in E^N (see Example 1.12 of §1.2), then the prolonged operator X̃ is also a dilation operator in the extended space.
Example 1.17. For the rotation operator

X = y ∂/∂x − x ∂/∂y,

one has ξ = y, η = −x, and Eq. (1.4.11) gives ζ = −(1 + p²), so that

X̃ = X − (1 + p²) ∂/∂p.
Consider now an operator

X = ξ(x, y, z) ∂/∂x + η(x, y, z) ∂/∂y + ζ(x, y, z) ∂/∂z.

Let us calculate the prolonged operator with respect to the function z(x, y). Here one has two operators of total differentiation:

D_x = ∂/∂x + p ∂/∂z,   D_y = ∂/∂y + q ∂/∂z,

where

p = ∂z/∂x,   q = ∂z/∂y.

The prolonged operator has the form

X̃ = X + σ ∂/∂p + τ ∂/∂q,

where

σ = D_x(ζ) − p D_x(ξ) − q D_x(η),
τ = D_y(ζ) − p D_y(ξ) − q D_y(η)   (1.4.12)

according to (1.4.10).
The above prolongation of the group G_1 and its operator X is also called the "first prolongation" (i.e. prolongation to the first-order derivatives of the variables u with respect to the variables x). Likewise, one can define the second- and higher-order prolongations (to the second- and higher-order derivatives of u with respect to x). The additional coordinates of the prolonged operator will still be given by the formulae (1.4.10), but taking into account that now the operator X̃ is prolonged with respect to the functions p(x), which results in the corresponding change of the operators of total differentiation D_i as well.
In particular, if one wants to carry out the second prolongation, one uses the operators of total differentiation

D̃_i = ∂/∂x^i + p^k_i ∂/∂u^k + r^k_{ij} ∂/∂p^k_j,   (1.4.13)

where

r^k_{ij} = ∂p^k_i/∂x^j.
Let us write the second prolongation of the operator X (1.4.6) in the form
X̃̃ = X̃ + σ^k_{ij} ∂/∂r^k_{ij},   (1.4.14)

where X̃ is the first prolongation of the operator X given by Eqs. (1.4.7) and (1.4.10). The additional coordinates σ^k_{ij} are given by the following formulae:

σ^k_{ij} = D̃_i(ζ^k_j) − r^k_{tj} D̃_i(ξ^t)   (k = 1, ..., m;  i, j = 1, ..., n),   (1.4.15)

in accordance with Eqs. (1.4.10). The last term in Eqs. (1.4.15) obviously involves summation over t = 1, ..., n.
For example, the additional coordinate of the first prolongation for the operator

X = ξ ∂/∂x + η ∂/∂y

is given by the formula (1.4.11). Denoting

r = y″ = d²y/dx²,

we write the second prolongation in the form (1.4.14):

X̃̃ = X̃ + σ ∂/∂r.   (1.4.16)

In our case the operator of total differentiation (1.4.13) is written

D̃ = ∂/∂x + p ∂/∂y + r ∂/∂p,

whereas formula (1.4.15) gives the following coordinate σ of the operator (1.4.16):

σ = D̃(ζ) − r D̃(ξ).
Consider a group G_1 with the operator

X = ξ^i ∂/∂x^i + η^k ∂/∂u^k.

The following equation is satisfied for any function F = F(x, u):

X̃(D_i F) − D_i(XF) = −D_i(ξ^j) D_j F.   (1.4.18)
Indeed, a direct computation gives

X̃(D_i F) − D_i(XF) = ζ^k_i ∂F/∂u^k − D_i(ξ^j) ∂F/∂x^j − D_i(η^k) ∂F/∂u^k

= −p^k_j D_i(ξ^j) ∂F/∂u^k − D_i(ξ^j) ∂F/∂x^j

= −D_i(ξ^j) (∂F/∂x^j + p^k_j ∂F/∂u^k)

= −D_i(ξ^j) D_j F,

which proves Eq. (1.4.18).
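The identity (1.4.18) can be confirmed symbolically in the case of one independent and one dependent variable (a verification added here; sympy assumed):

```python
import sympy as sp

x, u, p = sp.symbols('x u p')
xi = sp.Function('xi')(x, u)
eta = sp.Function('eta')(x, u)
F = sp.Function('F')(x, u)

def D(G):
    # total differentiation D = d/dx + p d/du
    return sp.diff(G, x) + p*sp.diff(G, u)

def Xop(G):
    # X = xi d/dx + eta d/du acting on functions of (x, u)
    return xi*sp.diff(G, x) + eta*sp.diff(G, u)

def Xprolonged(G):
    # first prolongation: X + zeta d/dp with zeta = D(eta) - p*D(xi)
    zeta = D(eta) - p*D(xi)
    return xi*sp.diff(G, x) + eta*sp.diff(G, u) + zeta*sp.diff(G, p)

# identity (1.4.18) with arbitrary xi, eta, F
lhs = Xprolonged(D(F)) - D(Xop(F))
rhs = -D(xi)*D(F)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```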
Under the change of variables y^i = y^i(x, u), v^k = v^k(x, u), the derivatives are denoted by

q^k_i = ∂v^k/∂y^i,

and by Eq. (1.2.6) the operator X is written in the new variables as

X′ = X(y^i) ∂/∂y^i + X(v^k) ∂/∂v^k,
so that

X̃′ = X(y^i) ∂/∂y^i + X(v^k) ∂/∂v^k + ζ′^k_i ∂/∂q^k_i.
On the other hand,

X̃ = X + ζ^k_i ∂/∂p^k_i,

and using Eq. (1.2.6) again, one has

(X̃)′ = X̃(y^i) ∂/∂y^i + X̃(v^k) ∂/∂v^k + X̃(q^k_i) ∂/∂q^k_i.

Note that y^i and v^k are independent of p^k_i; therefore X̃ affects them in the same way as the operator X. Hence,
(X̃)′ = X(y^i) ∂/∂y^i + X(v^k) ∂/∂v^k + X̃(q^k_i) ∂/∂q^k_i.
On the manifold

Φ: u^k = ϕ^k(x)   (k = 1, ..., m)

we have

v^k(x, u) = v^k(y(x, u)).

Differentiating this equation with respect to x^i, we obtain

∂v^k/∂x^i + (∂v^k/∂u^l) p^l_i = (∂v^k/∂y^j)(∂y^j/∂x^i + (∂y^j/∂u^l) p^l_i),

or

q^k_j D_i(y^j) = D_i(v^k).
The equation of the manifold Φ has been written in the variables x. However, it can also be written in the variables y, which results in the similar formulae

p^k_j D′_i(x^j) = D′_i(u^k),

where

D′_i = ∂/∂y^i + q^l_i ∂/∂v^l.
Let us rewrite the differential operator D′_i in another form:

D′_i = ∂/∂y^i + q^l_i ∂/∂v^l

= (∂x^j/∂y^i + q^l_i ∂x^j/∂v^l) ∂/∂x^j + (∂u^k/∂y^i + q^l_i ∂u^k/∂v^l) ∂/∂u^k

= D′_i(x^j) ∂/∂x^j + D′_i(u^k) ∂/∂u^k

= D′_i(x^j) D_j.
Applying the operator X̃ to the equation

q^k_j D_i(y^j) = D_i(v^k),

one obtains

X̃(q^k_j) D_i(y^j) + q^k_j X̃D_i(y^j) = X̃D_i(v^k).   (c)

Let us turn the equation

ζ′^k_j = X̃(q^k_j)

into an equivalent one by multiplying it by D_i(y^j). Taking into account Eqs. (a), (b), (c), we conclude that it is sufficient to prove the equation

X̃D_i(v^k) − q^k_j X̃D_i(y^j) = D_i X(v^k) − q^k_j D_i X(y^j),

which follows from Eq. (1.4.18) and the relation

q^k_j D_i(y^j) = D_i(v^k).
Consider a system of differential equations (S) with respect to the unknown functions u^k (k = 1, ..., m) of the independent variables x^i (i = 1, ..., n). Let π be the highest order of derivatives involved in (S). Equations (S) are considered as equations of a manifold in a π times prolonged space Ẽ(x, u). This manifold will be denoted by S (without brackets).

Definition 1.6. The system (S) is said to admit a group G_1 if the corresponding manifold S is a differential invariant manifold of the group G_1.

In other words, (S) admits G_1 if the equations (S) remain unaltered under the action of any properly prolonged transformation T_a ∈ G_1.

Since every group G_1 is characterized by its operator X, Definition 1.6 can obviously be reformulated in terms of the operator X.

It is convenient to formulate the main property of solutions of the system (S) admitting the group G_1 by considering every solution as a manifold Φ ⊂ E^N:

Φ: u^k = ϕ^k(x)   (k = 1, ..., m).

Then the property of the manifold Φ to be a solution of the system (S) can be formulated as follows: the π times prolonged manifold Φ̃ lies on the manifold S. This property will be expressed by the formula

Φ̃ ⊂ S.
Theorem 1.7. If (S) admits the group G_1, then for every T_a ∈ G_1 the manifold T_a Φ is a solution together with Φ.

Proof. Since Φ̃ ⊂ S, one has T̃_a Φ̃ ⊂ T̃_a S. The invariance of S implies that

T̃_a S = S.

Hence the π times prolonged manifold of T_a Φ, which equals T̃_a Φ̃, lies in S, i.e. T_a Φ is a solution of (S).
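A minimal illustration of Theorem 1.7 (added here; sympy assumed): the equation y′ = y admits the translations x′ = x + a, y′ = y, and translating the solution y = eˣ again yields a solution for every a:

```python
import sympy as sp

x, a = sp.symbols('x a')

# y' = y admits the group of translations along the x-axis;
# the solution y = exp(x) is mapped by T_a to y = exp(x - a)
phi = sp.exp(x)
phi_a = phi.subs(x, x - a)
assert sp.simplify(sp.diff(phi_a, x) - phi_a) == 0   # still a solution
```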
We assume that the equations (S) are written in the form ensuring regularity of the manifold S (in the sense of §1.3).

In order to solve the formulated problem, note that finding a group G_1 admitted by the system (S) is equivalent to determining the operator X of the group. It is convenient to use the operator X, for it is easy to write the invariance condition of the manifold (see §1.3) precisely by means of X.
Thus, let us assume that a system of differential equations is given. To be specific, we assume that it is of the first order. We look for an operator
$$X = \xi^i(x,u)\,\frac{\partial}{\partial x^i} + \eta^k(x,u)\,\frac{\partial}{\partial u^k} \tag{1.5.1}$$
admitted by the system (S). According to Definition 1.6, one has to derive X from the
conditions of invariance of the manifold S; which is regularly given in the extended
space by equations (S) with respect to the prolonged operator Xe: Writing Xe by the
formulae (1.4.7) and (1.4.8) and applying Theorem 1.5, one can see that (S) admits
$G_1$ with the operator (1.5.1) if and only if the equations
$$\tilde X F_\alpha(x,u,p)\Big|_S = 0 \quad (\alpha = 1,\dots,A) \tag{1.5.2}$$
are satisfied.
The space L can be of any dimension from zero to infinity. If it equals $r$, then we write $L_r$ instead of L. When $r = 0$ or $r = \infty$ we also write $L_0$, $L_\infty$.
In the general case one has $L = L_0$, i.e. the system (S) does not admit any nonzero operator X and hence no group $G_1$. However, in many important cases $r > 0$, as one can see below.
If $L(S) = L_r$, the system (S) is said to admit the space of operators $L_r$.
Let us find the space L admitted by the ordinary differential equation $y' = f(x,y)$. Here the system (S) consists of one equation
$$S:\ p = f(x,y), \tag{1.5.3}$$
where
$$p = y' = \frac{dy}{dx}\,.$$
We seek the operator admitted by Eq. (1.5.3) in the form
$$X = \xi\,\frac{\partial}{\partial x} + \eta\,\frac{\partial}{\partial y}\,,$$
whose first prolongation is
$$\tilde X = \xi\,\frac{\partial}{\partial x} + \eta\,\frac{\partial}{\partial y} + \bigl[\eta_x + p(\eta_y - \xi_x) - p^2\xi_y\bigr]\frac{\partial}{\partial p}\,.$$
The determining equations are obtained by acting by the operator Xe on Eq. (1.5.3)
and then by replacing the variable p by its value f (x; y) (transition to the manifold
S). This provides the determining equation
$$\eta_x + f(\eta_y - \xi_x) - f^2\xi_y = \xi f_x + \eta f_y. \tag{1.5.4}$$
Since equation (1.5.4) contains two unknown functions, one of them can be chosen
to be arbitrary. Hence, L(S) = L∞ :
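As an illustration (ours, not part of the original text), the determining equation (1.5.4) can be verified with SymPy: acting with the prolonged operator on $p - f(x,y)$ and passing to the manifold $p = f$ reproduces Eq. (1.5.4) identically.

```python
# Sketch (assumed notation from the text): verify that the first prolongation
# of X = xi*d/dx + eta*d/dy applied to p - f(x, y), restricted to p = f,
# coincides with the determining equation (1.5.4).
import sympy as sp

x, y, p = sp.symbols('x y p')
xi = sp.Function('xi')(x, y)
eta = sp.Function('eta')(x, y)
f = sp.Function('f')(x, y)

# Coordinate of the prolonged operator in d/dp:
zeta = (sp.diff(eta, x) + p*(sp.diff(eta, y) - sp.diff(xi, x))
        - p**2*sp.diff(xi, y))

# Action of the prolonged X on p - f(x, y), then transition to p = f:
det_eq = (zeta - xi*sp.diff(f, x) - eta*sp.diff(f, y)).subs(p, f)

# Eq. (1.5.4): eta_x + f*(eta_y - xi_x) - f**2*xi_y = xi*f_x + eta*f_y
lhs = (sp.diff(eta, x) + f*(sp.diff(eta, y) - sp.diff(xi, x))
       - f**2*sp.diff(xi, y))
rhs = xi*sp.diff(f, x) + eta*sp.diff(f, y)
assert sp.simplify(det_eq - (lhs - rhs)) == 0
```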
In order to construct the general solution of Eq. (1.5.4), we set
$$\eta = \xi f + \theta. \tag{1.5.5}$$
Then equation (1.5.4) reduces to the following equation for $\theta$:
$$\frac{\partial\theta}{\partial x} + f\,\frac{\partial\theta}{\partial y} = f_y\,\theta. \tag{1.5.6}$$
1.5 Groups admitted by differential equations 31
This equation is obviously satisfied when $\theta = 0$. Hence, equation (1.5.3) admits the operator
$$X_0 = \frac{\partial}{\partial x} + f(x,y)\,\frac{\partial}{\partial y}\,,$$
as well as the operators of the form
$$X = \xi X_0 = \xi\left(\frac{\partial}{\partial x} + f\,\frac{\partial}{\partial y}\right)$$
with an arbitrary function ξ = ξ (x; y): One can also see from here that the admitted
space L is infinite-dimensional.
Let us demonstrate that if $\theta = \theta(x,y)$ is a non-vanishing solution of Eq. (1.5.6), then the function
$$M = \frac{1}{\theta}$$
is an integrating factor for Eq. (1.5.3).
Indeed, writing the initial equation in the form
$$dy - f\,dx = 0$$
and multiplying it by M; one obtains the conditions of the integrating factor in the
form
$$\frac{\partial M}{\partial x} + \frac{\partial (Mf)}{\partial y} = 0.$$
∂x ∂y
Substituting here M = 1=θ one can see that the result coincides with Eq. (1.5.6).
Thus, we conclude that if equation (1.5.3) admits the operator
$$X = \xi\,\frac{\partial}{\partial x} + \eta\,\frac{\partial}{\partial y}\,,$$
then the function
$$M = \frac{1}{\eta - \xi f}$$
is an integrating factor.
Generally speaking, the problem of finding θ from Eq. (1.5.6) is not easier than
the problem of integrating the initial equation (1.5.3). However, the indicated prop-
erty provides an effective tool for integrating Eq. (1.5.3) if the admitted operator X is
known incidentally. One can easily verify that this property is the basis of the whole
“elementary” theory of integration of ordinary differential equations by quadrature.
For instance, consider the equation
$$y' = \frac{y}{x(y + \ln x)}\,.$$
This equation admits the operator
$$X = xy\,\frac{\partial}{\partial x}$$
and one finds that
$$\eta - f\xi = -\frac{y^2}{y + \ln x}\,.$$
Thus, the given equation is equivalent to the equation
$$\frac{y + \ln x}{y^2}\,dy - \frac{1}{xy}\,dx = 0,$$
with a total differential in the left-hand side.
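The exactness of the last form can be checked directly; the following sketch (ours, mirroring the example above) does so with SymPy.

```python
# Check (illustrative, assumed example from the text): the form
# ((y + ln x)/y**2) dy - (1/(x*y)) dx has a total differential, i.e. is exact.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
M = -1/(x*y)              # coefficient of dx
N = (y + sp.log(x))/y**2  # coefficient of dy

# Exactness condition: dM/dy == dN/dx
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0
```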
Let us find the space L admitted by the equation $y'' = f(x,y,y')$. Denoting $p = y'$ and $r = y''$, one has the system (S) of one equation
$$S:\ r = f(x,y,p). \tag{1.5.7}$$
We look for the admitted operator in the form
$$X = \xi(x,y)\,\frac{\partial}{\partial x} + \eta(x,y)\,\frac{\partial}{\partial y}\,. \tag{1.5.8}$$
In order to write the determining equation, one has to make the second prolongation $\tilde{\tilde X}$ of the operator X. It has been shown in §1.4 that the first prolongation $\tilde X$ of X is given by the formulae (1.4.11) and (1.4.17). According to Eqs. (1.5.2), the determining equation is written
$$\tilde{\tilde X}\bigl(r - f(x,y,p)\bigr)\Big|_{r=f} = 0. \tag{1.5.9}$$
Substituting this expansion into Eq. (1.5.9), we have to equate the terms with the
same powers of p: This is the operation of splitting which, generally speaking, leads
to an infinite system of equations for ξ ; η : The latter system is also called a system
of determining equations.
Carrying out the indicated operation on Eq. (1.5.9), one obtains the following system of determining equations.
Of course, there is no guarantee here that the solution of the system provides some $L_r$ with $r > 0$. On the other hand, the question of the maximum possible value of $r$ is of interest. The answer is given by the following theorem of S. Lie.
Theorem 1.9. Equation (1.5.7) cannot admit the space Lr of operators of the form
(1.5.8) with r > 8:
Proof. Let us rewrite the first four determining equations in a compact form, where every function $\varphi_\sigma$, $\omega_\sigma$ depends only on $\xi$, $\eta$ and on their first and second derivatives. The general solution of the system (1.5.11) depends on $6 + 6 = 12$ arbitrary constants and, due to the linearity of Eqs. (1.5.11), is itself a linear form in these constants. Moreover, equations (1.5.10) impose four independent relations on these constants. Therefore, no more than $12 - 4 = 8$ arbitrary constants remain, which means that the space of solutions has dimension $r \le 8$. The actual number of arbitrary constants remaining in the solution can be less than 8, since the complete system of the determining equations contains other equations as well.
Let us demonstrate that the dimension 8 is reached by the equation
34 1 One-parameter continuous transformation groups admitted by differential equations
y00 = 0:
In this case the complete system of the determining equations is reduced to Eqs.
(1.5.10) and (1.5.11), where ϕσ = φσ = ωσ = 0:
The general solution of Eqs. (1.5.11) in this case has the form
ξ = A1 + A2 x + A3y + A4xy + A5 x2 + A6 y2 ;
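That $y'' = 0$ indeed admits projective-type operators can be illustrated with a small symbolic check (ours, not from the book); here we take the particular operator $X = x^2\,\partial/\partial x + xy\,\partial/\partial y$, corresponding to the $A_5$-term above, and verify its invariance condition.

```python
# Sketch: the equation y'' = 0 admits X = x^2 d/dx + x*y d/dy.
# (The choice of this particular operator is ours, for illustration.)
import sympy as sp

x, y, p, r = sp.symbols('x y p r')  # p = y', r = y''
xi, eta = x**2, x*y

def Dx(F):
    """Total differentiation with respect to x, truncated at second order."""
    return sp.diff(F, x) + p*sp.diff(F, y) + r*sp.diff(F, p)

zeta1 = Dx(eta) - p*Dx(xi)    # first prolongation coordinate
zeta2 = Dx(zeta1) - r*Dx(xi)  # second prolongation coordinate

# Invariance of the manifold r = 0: zeta2 must vanish when r = 0.
assert sp.simplify(zeta2.subs(r, 0)) == 0
```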
Turning to partial differential equations, let us find operators admitted by the heat
equation
$$u_y = u_{xx}.$$
The following notation for the derivatives is introduced:
$$p = u_x, \quad q = u_y, \quad r = u_{xx}, \quad s = u_{xy}, \quad t = u_{yy}.$$
The equation of the manifold S in the extended space has the form
$$S:\ q = r. \tag{1.5.12}$$
We look for the admitted operator in the form
$$X = \xi(x,y,u)\,\frac{\partial}{\partial x} + \eta(x,y,u)\,\frac{\partial}{\partial y} + \zeta(x,y,u)\,\frac{\partial}{\partial u}\,. \tag{1.5.13}$$
Its first prolongation is
$$\tilde X = X + \alpha\,\frac{\partial}{\partial p} + \beta\,\frac{\partial}{\partial q}\,,$$
where
$$\alpha = D_x(\zeta) - p\,D_x(\xi) - q\,D_x(\eta), \qquad \beta = D_y(\zeta) - p\,D_y(\xi) - q\,D_y(\eta),$$
and the operators of total differentiation have the form
$$D_x = \frac{\partial}{\partial x} + p\,\frac{\partial}{\partial u}\,, \qquad D_y = \frac{\partial}{\partial y} + q\,\frac{\partial}{\partial u}\,.$$
The expanded form of the expressions $\alpha$, $\beta$ is
$$\alpha = \zeta_x + p\,\zeta_u - p(\xi_x + p\,\xi_u) - q(\eta_x + p\,\eta_u),$$
$$\beta = \zeta_y + q\,\zeta_u - p(\xi_y + q\,\xi_u) - q(\eta_y + q\,\eta_u).$$
The second prolongation is written
$$\tilde{\tilde X} = \tilde X + \rho\,\frac{\partial}{\partial r} + \sigma\,\frac{\partial}{\partial s} + \tau\,\frac{\partial}{\partial t} = X + \alpha\,\frac{\partial}{\partial p} + \beta\,\frac{\partial}{\partial q} + \rho\,\frac{\partial}{\partial r} + \sigma\,\frac{\partial}{\partial s} + \tau\,\frac{\partial}{\partial t}\,,$$
which provides
$$\tilde{\tilde X}(q - r) = \beta - \rho.$$
Therefore, the determining equation (1.5.2) is written
$$(\beta - \rho)\big|_{r=q} = 0. \tag{1.5.14}$$
Equation (1.5.14) shows that one has to calculate only the expression for the coordinate $\rho$ in the prolonged operator $\tilde{\tilde X}$. Using the operator of total differentiation
$$\tilde D_x = \frac{\partial}{\partial x} + p\,\frac{\partial}{\partial u} + r\,\frac{\partial}{\partial p} + s\,\frac{\partial}{\partial q}$$
and the formula (1.4.15) one obtains
$$\rho = \tilde D_x(\alpha) - r\,\tilde D_x(\xi) - s\,\tilde D_x(\eta).$$
sD
All preliminary formulae are ready now and one can write out the determining equa-
tion (1.5.14) in detail:
Since we have already passed to the manifold S of Eq. (1.5.12) by setting $r = q$, the determining equation (1.5.14′) should hold identically in the variables
$$x,\ y,\ u,\ p,\ q,\ s,\ t.$$
In particular, one obtains the equation
$$\eta_x + p\,\eta_u = 0,$$
which in its turn splits with respect to $p$ and gives two equations:
$$\eta_x = 0, \qquad \eta_u = 0.$$
Using these equations in (1.5.14′) and splitting with respect to $q$ one obtains
$$\xi_u = 0, \qquad \eta_y = 2\xi_x.$$
Thus the determining equations reduce to the system
$$\eta_x = 0, \quad \eta_u = 0, \quad \eta_y = 2\xi_x, \quad \xi_u = 0, \quad \xi_y = \xi_{xx} - 2\zeta_{xu}, \quad \zeta_{uu} = 0, \quad \zeta_y = \zeta_{xx}. \tag{1.5.15}$$
In order to find the space L(S); one has to construct the general solution of the
system (1.5.15). The equation ζuu = 0 shows that ζ is linear with respect to u at
most, i.e.
ζ = a(x; y)u + b(x; y):
Furthermore, the equations
ηx = 0; ηu = 0
show that η = η (y) and therefore the equation ηy = 2ξx yields
$$\xi = \frac{1}{2}\,\eta'(y)\,x + c(y).$$
Upon substituting the expression for $\zeta$, equation $\zeta_y = \zeta_{xx}$ takes the form
$$a_y\,u + b_y = a_{xx}\,u + b_{xx},$$
whence, splitting with respect to the variable $u$, one obtains two equations:
$$a_y = a_{xx}, \qquad b_y = b_{xx}.$$
The equation $\xi_y = \xi_{xx} - 2\zeta_{xu}$ now yields
$$2a_x = -\Bigl(\frac{1}{2}\,\eta''(y)\,x + c'(y)\Bigr).$$
Now the equation $a_y = a_{xx}$ becomes
$$a_y = -\frac{1}{4}\,\eta''.$$
Hence, all the determining equations (1.5.15) are satisfied if we find the functions
a(x; y); b(x; y); η (y); c(y) as solutions of the equations
$$a_x = -\frac{1}{4}\,\eta''(y)\,x - \frac{1}{2}\,c'(y), \qquad a_y = -\frac{1}{4}\,\eta''(y), \qquad b_y = b_{xx}. \tag{1.5.16}$$
The compatibility condition $a_{xy} = a_{yx}$ for the first two equations (1.5.16) is written
$$-\frac{1}{4}\,\eta'''(y)\,x - \frac{1}{2}\,c''(y) = 0,$$
whence
$$\eta'''(y) = 0, \qquad c''(y) = 0.$$
The latter equations have the general solution
Summing up the above results one obtains the following general solution of the
determining equations (1.5.15)
$$L_\infty = L_6 \oplus L_0. \tag{1.5.19}$$
Let us dwell on operators from L0 and find out to which groups G1 they cor-
respond. According to Theorem 1.2, in order to construct G1 with X 0 of the form
(1.5.18) one has to solve the system of equations
$$\frac{\partial x'}{\partial a} = 0, \qquad \frac{\partial y'}{\partial a} = 0, \qquad \frac{\partial u'}{\partial a} = b(x', y')$$
with the initial conditions
$$x'\big|_{a=0} = x, \qquad y'\big|_{a=0} = y, \qquad u'\big|_{a=0} = u.$$
One can readily solve the system and verify that the corresponding transformations
Ta 2 G1 have the form
Ta : x0 = x; y0 = y; u0 = u + ab(x; y):
Since the function $ab(x,y)$, together with $b(x,y)$, is a solution of Eq. (1.5.16), i.e. of the original heat equation, the above transformations $T_a$ consist merely in adding some solution of the equation $u_y = u_{xx}$ to the solutions $u$ of the same equation. The presence of such transformations $T_a$ admitted by the equation $u_y = u_{xx}$ simply reflects the linearity of this equation.
Let
$$\Phi:\ u^k = \varphi^k(x) \quad (k = 1,\dots,m)$$
be a solution of the system (S) admitting the group $G_1$. Then $T_a$ maps the manifold
Φ into the manifold
$$\Phi':\ u'^k = \varphi'^k(x'),$$
which is also a solution of (S). The equations of $\Phi'$ are written in the variables $(x, u)$ in the form
$$g^k(x,u,a) = \varphi'^k\bigl(f(x,u,a)\bigr).$$
Since $\varphi'^k(x)$ is a solution of (S), then, omitting the prime, one can formulate the following rule of transformation of solutions. If
$$u^k = \varphi^k(x) \quad (k = 1,\dots,m)$$
is a solution of the system (S), then the equations
$$u^k = \varphi'^k(x;\,a)$$
define a new solution of the system (S) for any value of the parameter $a$. Let us
apply this procedure to some operators (1.5.20).
$$x' = x + a, \qquad y' = y, \qquad u' = u.$$
The formula (1.5.21) yields the transformation of the solution $u = \varphi(x,y)$ into the solution $u = \varphi(x + a,\,y)$.
Example 1.19. The operator $X_3$ generates the group $G_1$:
$$x' = ax, \qquad y' = a^2 y, \qquad u' = u.$$
$$x' = x, \qquad y' = y, \qquad u' = u.$$
$$x' = \frac{x}{1-ay}\,, \qquad y' = \frac{y}{1-ay}\,, \qquad u' = \sqrt{1-ay}\;e^{-\frac{ax^2}{4(1-ay)}}\,u.$$
Equation (1.5.21) has the form
$$\sqrt{1-ay}\;e^{-\frac{ax^2}{4(1-ay)}}\,u = \varphi\!\left(\frac{x}{1-ay}\,,\ \frac{y}{1-ay}\right)$$
and provides the following formula for transformation of the solution $\varphi(x,y)$ into the solution
$$u = \frac{1}{\sqrt{1-ay}}\;e^{\frac{ax^2}{4(1-ay)}}\,\varphi\!\left(\frac{x}{1-ay}\,,\ \frac{y}{1-ay}\right).$$
$$u = \frac{1}{\sqrt{y}}\;e^{-\frac{x^2}{4y}}\,.$$
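Both the source-type solution above and the image of $\varphi \equiv 1$ under the transformation can be checked symbolically; the following sketch (ours) verifies that each satisfies $u_y = u_{xx}$.

```python
# Hedged check: u1 = y**(-1/2) * exp(-x**2/(4y)) and the image of phi = 1
# under the projective transformation both solve the heat equation u_y = u_xx.
import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)

def is_heat_solution(u):
    return sp.simplify(sp.diff(u, y) - sp.diff(u, x, 2)) == 0

u1 = sp.exp(-x**2/(4*y))/sp.sqrt(y)
u2 = sp.exp(a*x**2/(4*(1 - a*y)))/sp.sqrt(1 - a*y)  # phi = 1 in the formula above

assert is_heat_solution(u1)
assert is_heat_solution(u2)
```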
There exists another method for constructing new solutions from known ones.
It is not connected with constructing finite transformations Ta ; but applicable only
to linear homogeneous equations. Consider a solution depending on the parameter
a: Since the parameter a is not involved in (S), one can obtain a new solution by
differentiating the considered solution with respect to the parameter a: Let us apply
this observation to solutions provided by the formula (1.5.21).
If we denote the solution derived from the formula (1.5.21) by
$$u^k = \bar\varphi^k(x;\,a),$$
then differentiating with respect to the parameter $a$ one finds that the functions
$$u^k = \xi^i(x,\varphi)\,\frac{\partial\varphi^k}{\partial x^i} - \eta^k(x,\varphi) \quad (k = 1,\dots,m)$$
provide a solution together with $u^k = \varphi^k(x)$. It is easy to remember these equations, since the right-hand side is the result of applying the operator X to the difference $\varphi^k(x) - u^k$ taken on the initial solution. Finally, one arrives at the following conclusion for linear equations.
If
uk = ϕ k (x) (k = 1; : : : ; m)
is a solution of the linear (homogeneous) system (S) admitting the operator X, then the functions
$$u^k = X\bigl(\varphi^k(x) - u^k\bigr)\Big|_{u^k = \varphi^k(x)} \quad (k = 1,\dots,m) \tag{1.5.22}$$
The spatial coordinate x and the time t are independent variables; the velocity u, the pressure p and the density ρ are functions of x and t. The isentropic exponent $\gamma \ne 0$ is a constant.
We look for the infinitesimal operator admitted by the system (1.5.23) in the form
$$X = \xi\,\frac{\partial}{\partial t} + \eta\,\frac{\partial}{\partial x} + \omega\,\frac{\partial}{\partial u} + \tau\,\frac{\partial}{\partial \rho} + \sigma\,\frac{\partial}{\partial p}\,.$$
For the sake of simplicity, we will use the already known result and assume that the
coordinates of the operator X depend on the variables t; x; u; ρ ; p as follows:
$$\xi = \xi(t,x), \quad \eta = \eta(t,x), \quad \omega = \omega(t,x,u), \quad \tau = \tau(t,x,\rho), \quad \sigma = \sigma(t,x,p). \tag{1.5.24}$$
The prolonged operator is
$$\tilde X = X + \zeta_{u_t}\frac{\partial}{\partial u_t} + \zeta_{u_x}\frac{\partial}{\partial u_x} + \zeta_{\rho_t}\frac{\partial}{\partial \rho_t} + \zeta_{\rho_x}\frac{\partial}{\partial \rho_x} + \zeta_{p_t}\frac{\partial}{\partial p_t} + \zeta_{p_x}\frac{\partial}{\partial p_x}\,.$$
The operators of total differentiation have the form
$$D_t = \frac{\partial}{\partial t} + u_t\frac{\partial}{\partial u} + \rho_t\frac{\partial}{\partial \rho} + p_t\frac{\partial}{\partial p}\,, \qquad D_x = \frac{\partial}{\partial x} + u_x\frac{\partial}{\partial u} + \rho_x\frac{\partial}{\partial \rho} + p_x\frac{\partial}{\partial p}\,.$$
The additional coordinates of the prolonged operator are derived by the formulae (1.4.10). The resulting determining equations should hold identically in the variables
$$t,\ x,\ u,\ \rho,\ p,\ u_x,\ \rho_x,\ p_x.$$
We split Eqs. (I)–(III) with respect to the variables $u_x$, $\rho_x$, $p_x$, i.e. equate the coefficients of these variables to zero. Note that equation (II) contains only one term with $p_x$, namely $\xi_x p_x$. Therefore,
$$\xi_x = 0.$$
Taking this equation into account we carry out further splitting and obtain the following equations:
$$\begin{aligned}
u_x:&\quad -u(\omega_u - \xi_t) - \eta_t + u(\omega_u - \eta_x) + \omega = 0,\\
p_x:&\quad -\frac{1}{\rho}(\omega_u - \xi_t) + \frac{1}{\rho}(\sigma_p - \eta_x) - \frac{\tau}{\rho^2} = 0;
\end{aligned} \tag{I}$$
$$\begin{aligned}
u_x:&\quad -\rho(\tau_\rho - \xi_t) + \rho(\omega_u - \eta_x) + \tau = 0,\\
\rho_x:&\quad -u(\tau_\rho - \xi_t) - \eta_t + u(\tau_\rho - \eta_x) + \omega = 0;
\end{aligned} \tag{II}$$
$$\begin{aligned}
u_x:&\quad -p(\sigma_p - \xi_t) + p(\omega_u - \eta_x) + \sigma = 0,\\
p_x:&\quad -u(\sigma_p - \xi_t) - \eta_t + u(\sigma_p - \eta_x) + \omega = 0.
\end{aligned} \tag{III}$$
$$\tau - \rho\,\tau_\rho = 0, \qquad \sigma - p\,\sigma_p = 0.$$
where $a(t,x)$ and $b(t,x)$ are arbitrary functions so far. Substituting these expressions into Eq. (I) ($p_x$), one obtains
$$\tau = (a - 2\eta_x + 2\xi_t)\,\rho.$$
This completes the procedure of splitting of the determining equations with respect
to the variables ux ; ρx ; px and provides the equations
$$\xi_x = 0, \qquad \omega = (\eta_x - \xi_t)\,u + \eta_t, \qquad \sigma = a(t,x)\,p, \qquad \tau = (a - 2\eta_x + 2\xi_t)\,\rho. \tag{1.5.25}$$
$$a_t + \gamma\,\eta_{tx} = 0,$$
which is equivalent to
$$a_t + \frac{3}{2}\,\xi_{tt} = 0$$
due to the previous equations. Expressing $a_t$ via $\xi_{tt}$, one obtains two formulae:
$$a_t = -\frac{3}{2}\,\xi_{tt}\,, \qquad a_t = -\frac{\gamma}{2}\,\xi_{tt}\,,$$
whence
whence
(γ 3)ξtt = 0: (1.5.27)
Let $\gamma \ne 3$; then $\xi_{tt} = 0$ and we have the following system of determining equations:
$$\eta_{xx} = 0, \quad \eta_{xt} = 0, \quad \eta_{tt} = 0, \quad \xi_{tt} = 0, \quad a_x = 0, \quad a_t = 0.$$
Its general solution is
$$a = a_1, \qquad \xi = a_2 t + a_3, \qquad \eta = a_4 t + a_5 x + a_6,$$
$$\xi = a_0 t^2 + a_2 t + a_3, \qquad a = -3a_0 t + a_1, \qquad \eta = a_0 t x + a_4 t + a_5 x + a_6.$$
As compared to the above, the dimension of the space is increased by one, i.e. one
obtains L7 : Since L6 results from L7 when a0 = 0; the case γ = 3 is considered
further. Equations (1.5.25) provide
$$\omega = (-a_0 t + a_5 - a_2)\,u + a_0 x + a_4, \qquad \tau = (-a_0 t + a_1 - 2a_5 + 2a_2)\,\rho, \qquad \sigma = (-3a_0 t + a_1)\,p.$$
Let us introduce the constant $a_5' = a_5 - a_2$ instead of $a_5$ for the sake of convenience. Since $a_5'$ is an independent arbitrary constant, the prime will be omitted.
Finally, the general solution of the determining equations has the form
1.6 Lie algebra of operators 47
$$\begin{aligned}
\xi &= a_0 t^2 + a_2 t + a_3,\\
\eta &= a_0 t x + a_4 t + a_2 x + a_5 x + a_6,\\
\omega &= a_0 (x - tu) + a_5 u + a_4,\\
\tau &= (-a_0 t + a_1 - 2a_5)\,\rho,\\
\sigma &= (-3a_0 t + a_1)\,p.
\end{aligned} \tag{1.5.28}$$
The corresponding basis operators of $L_6$ are
$$X_1 = \frac{\partial}{\partial t} \quad \text{(time translation)},$$
$$X_2 = \frac{\partial}{\partial x} \quad \text{(space translation)},$$
$$X_3 = x\,\frac{\partial}{\partial x} + t\,\frac{\partial}{\partial t} \quad \text{(dilation)},$$
$$X_4 = t\,\frac{\partial}{\partial x} + \frac{\partial}{\partial u} \quad \text{(the Galilean translation)},$$
$$X_5 = x\,\frac{\partial}{\partial x} + u\,\frac{\partial}{\partial u} - 2\rho\,\frac{\partial}{\partial \rho} \quad \text{(dilation)},$$
$$X_6 = \rho\,\frac{\partial}{\partial \rho} + p\,\frac{\partial}{\partial p} \quad \text{(dilation)}.$$
In the case $L_7$ ($\gamma = 3$) one more basis operator is added:
$$X_7 = t^2\,\frac{\partial}{\partial t} + tx\,\frac{\partial}{\partial x} + (x - tu)\,\frac{\partial}{\partial u} - t\rho\,\frac{\partial}{\partial \rho} - 3tp\,\frac{\partial}{\partial p}\,.$$
We have already introduced in §1.5 the operations of addition of operators and of their multiplication by constants. We will now consider one more operation.
Let
$$X = \xi^i\,\frac{\partial}{\partial x^i} \qquad \text{and} \qquad Y = \eta^i\,\frac{\partial}{\partial x^i}$$
be two operators in $E^N$.
Definition 1.7. The commutator of the operators X and Y is a new operator $[X,Y]$ determined by the formula
$$[X,Y] = \bigl(X\eta^i - Y\xi^i\bigr)\frac{\partial}{\partial x^i}\,. \tag{1.6.1}$$
It can be obtained by the following principle. Consider the expression
$$X\bigl(YF(x)\bigr) - Y\bigl(XF(x)\bigr)$$
as a result of the action of some operator on the function $F(x)$. Expansion of this expression cancels out the derivatives of the second order and provides the result of the action of the commutator on F. Therefore, the formula
$$[X,Y] = \bigl(X\eta^i - Y\xi^i\bigr)\frac{\partial}{\partial x^i} = XY - YX \tag{1.6.2}$$
holds.
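Formula (1.6.2) is easy to implement for operators with polynomial coefficients. The following sketch (ours) does so for the gas-dynamics operators of §1.5; the particular identity $[X_1, X_7] = 2X_3 - X_5 - 3X_6$ is our own computation, given for illustration.

```python
# Commutator of operators via formula (1.6.2): [X,Y]^i = X(eta^i) - Y(xi^i).
# Operators are stored as tuples of coefficients in the variables (t,x,u,rho,p).
import sympy as sp

t, x, u, rho, p = sp.symbols('t x u rho p')
V = (t, x, u, rho, p)

def commutator(X, Y):
    apply_op = lambda Z, F: sum(c*sp.diff(F, v) for c, v in zip(Z, V))
    return tuple(sp.expand(apply_op(X, Y[i]) - apply_op(Y, X[i]))
                 for i in range(len(V)))

X1 = (1, 0, 0, 0, 0)
X3 = (t, x, 0, 0, 0)
X5 = (0, x, u, -2*rho, 0)
X6 = (0, 0, 0, rho, p)
X7 = (t**2, t*x, x - t*u, -t*rho, -3*t*p)

expected = tuple(2*a - b - 3*c for a, b, c in zip(X3, X5, X6))
assert commutator(X1, X7) == expected  # [X1, X7] = 2*X3 - X5 - 3*X6
```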
The operation defined by the formula (1.6.1) maps any two operators X and Y
into their commutator [X ;Y ]: This operation is referred to as the operation of com-
mutation. Let us formulate some properties of the operation of commutation.
1° The commutator is bilinear with respect to X and Y, i.e. for any constants $\alpha$ and $\beta$ the following identity holds:
Under a change of variables
$$y^i = y^i(x) \quad (i = 1,\dots,N)$$
the operators X and Y transform into
$$X' = X(y^i)\,\frac{\partial}{\partial y^i}\,, \qquad Y' = Y(y^i)\,\frac{\partial}{\partial y^i}\,,$$
and for their commutator one has
$$[X',Y'] = \bigl(X'Y(y^i) - Y'X(y^i)\bigr)\frac{\partial}{\partial y^i} = \bigl(XY(y^i) - YX(y^i)\bigr)\frac{\partial}{\partial y^i} = [X,Y](y^i)\,\frac{\partial}{\partial y^i} = [X,Y]'.$$
Thus the commutator is invariant with respect to changes of variables.
$$x^1 = 0,\ \dots,\ x^s = 0.$$
The necessary and sufficient conditions of invariance of such M with respect to the
operators
$$X = \xi^i\,\frac{\partial}{\partial x^i}\,, \qquad Y = \eta^i\,\frac{\partial}{\partial x^i}$$
are obtained in Theorem 1.5 in the form
$$\xi^\sigma\big|_M = \xi^\sigma(0,\dots,0,\,x^{s+1},\dots,x^n) = 0.$$
The coordinate of the commutator $[X,Y]$ with the index $\sigma$ is
$$\xi^\tau\,\frac{\partial \eta^\sigma}{\partial x^\tau} + \xi^{\tau'}\,\frac{\partial \eta^\sigma}{\partial x^{\tau'}} - \eta^\tau\,\frac{\partial \xi^\sigma}{\partial x^\tau} - \eta^{\tau'}\,\frac{\partial \xi^\sigma}{\partial x^{\tau'}}\,,$$
Proof. We use the invariance of the commutator and of the operation of prolongation with respect to the system of coordinates. Let us introduce a system of coordinates in $E^N$ so that the operator X becomes an operator of translation along one of the coordinates. This is always possible according to Theorem 1.3. As has been demonstrated above, a translation operator is unchanged by prolongation, so that $\tilde X = X$. We assume that the operator of translation X is
$$X = \frac{\partial}{\partial x^1}\,.$$
The alternative assumption
$$X = \frac{\partial}{\partial u^1}$$
is considered likewise. Further, one has
$$\tilde Y = Y + \zeta_i^k\,\frac{\partial}{\partial p_i^k} = \xi^i\,\frac{\partial}{\partial x^i} + \eta^k\,\frac{\partial}{\partial u^k} + \zeta_i^k\,\frac{\partial}{\partial p_i^k}\,, \qquad \zeta_i^k = D_i(\eta^k) - p_j^k\,D_i(\xi^j).$$
Let us compute the commutator $[\tilde X, \tilde Y]$:
$$[\tilde X, \tilde Y] = [X, \tilde Y] = [X,Y] + \Bigl[X,\ \zeta_i^k\,\frac{\partial}{\partial p_i^k}\Bigr] = [X,Y] + \frac{\partial \zeta_i^k}{\partial x^1}\,\frac{\partial}{\partial p_i^k}\,.$$
Since
$$[X,Y] = \frac{\partial \xi^i}{\partial x^1}\,\frac{\partial}{\partial x^i} + \frac{\partial \eta^k}{\partial x^1}\,\frac{\partial}{\partial u^k}\,,$$
one has
$$\widetilde{[X,Y]} = [X,Y] + \zeta_i'^{\,k}\,\frac{\partial}{\partial p_i^k}\,,$$
where
$$\zeta_i'^{\,k} = D_i\Bigl(\frac{\partial \eta^k}{\partial x^1}\Bigr) - p_j^k\,D_i\Bigl(\frac{\partial \xi^j}{\partial x^1}\Bigr) = \frac{\partial \zeta_i^k}{\partial x^1}\,.$$
Hence,
$$\widetilde{[X,Y]} = [X,Y] + \frac{\partial \zeta_i^k}{\partial x^1}\,\frac{\partial}{\partial p_i^k} = [\tilde X, \tilde Y],$$
and the theorem is proved.
Theorem 1.12. Given any system of differential equations (S), the linear space L(S)
of operators admitted by the system (S) is a Lie algebra.
Proof. If (S) admits X and Y; i.e. X;Y 2 L(S); then the manifold S is invariant with
respect to Xe and Ye in the prolonged space. According to Theorem 1.10, S is also
invariant with respect to $[\tilde X, \tilde Y]$, and according to Theorem 1.11,
$$[\tilde X, \tilde Y] = \widetilde{[X,Y]}.$$
Thus, S is invariant with respect to $\widetilde{[X,Y]}$, which means that (S) admits the commutator $[X,Y]$ by definition, so that $[X,Y] \in L(S)$ and Theorem 1.12 is proved.
Theorems 1.8 and 1.12 demonstrate that a commutator of any two operators from
L(S) is an operator in L(S): In particular, in case of a finite-dimensional Lr ; the
commutator of any two basis operators is a linear combination of basis operators.
It is convenient to write this circumstance in the table of commutators, where the
intersection of the k-th row and the l-th column gives the commutator [Xk ; Xl ]:
As an example, we provide the following table of commutators for basis opera-
tors of the Lie algebra L6 admitted by the heat equation uy = uxx :
          X1         X2              X3     X4     X5             X6
  X1      0          0               X1    -X6     (1/2)X4        0
  X2      0          0               2X2    2X1    X3 - (1/2)X6   0
  X3     -X1        -2X2             0      X4     2X5            0
  X4      X6        -2X1            -X4     0      0              0
  X5     -(1/2)X4   -X3 + (1/2)X6   -2X5    0      0              0
  X6      0          0               0      0      0              0
Chapter 2
Lie algebras and local Lie groups
Example 2.4. The Lie algebra $L_3$ from Example 2.2 is isomorphic to the Lie algebra of the skew-symmetric matrices
$$A = \begin{pmatrix} 0 & a_1 & a_2 \\ -a_1 & 0 & a_3 \\ -a_2 & -a_3 & 0 \end{pmatrix}$$
so that [u; v] 2 J:
The set Z of all vectors $u \in L$ such that $[u,v] = 0$ for any $v \in L$ is an ideal in L. This ideal is termed the center of the Lie algebra L. Here we will verify only the property of Z to be a subalgebra. Let $u_1, u_2 \in Z$ and $v \in L$. Using the Jacobi identity and noting that the second and the third terms vanish due to the assumptions $[u_2,v] = 0$ and $[u_1,v] = -[v,u_1] = 0$, we obtain $[[u_1,u_2],v] = 0$. Hence,
[u1 ; u2 ] 2 Z:
2.1 Lie algebra 55
uα (α = 1; : : : ; r):
and
$$[\bar u_\alpha, \bar u_\beta] = \bar C_{\alpha\beta}^{\varepsilon}\,\bar u_\varepsilon = \bar C_{\alpha\beta}^{\varepsilon}\,p_\varepsilon^{\gamma}\,u_\gamma,$$
one obtains the following rule for the change of structure constants:
$$\bar C_{\alpha\beta}^{\varepsilon}\,p_\varepsilon^{\gamma} = C_{\sigma\tau}^{\gamma}\,p_\alpha^{\sigma}\,p_\beta^{\tau}\,.$$
It follows that $C_{\alpha\beta}^{\gamma}$ is a tensor of the third order which is twice covariant and once contravariant.
The structure constants determine the Lie algebra $L_r$ completely, since they allow one to find the commutator of any two elements in the coordinate form. Namely, if
$$u = a^\alpha u_\alpha, \qquad v = b^\alpha u_\alpha,$$
then
$$[u,v] = a^\alpha b^\beta\,C_{\alpha\beta}^{\gamma}\,u_\gamma\,,$$
where $C_{\alpha\beta}^{\gamma}$ are the structure constants of $L_r$. The resulting equation demonstrates that the same $C_{\alpha\beta}^{\gamma}$ provide the structure constants of $L_r'$ in the basis $\{\psi(u_\alpha)\}$. Conversely, let $\{u_\alpha\}$ and $\{u_\alpha'\}$ be bases in $L_r$ and $L_r'$, respectively, defined so that
$$[u_\alpha, u_\beta] = C_{\alpha\beta}^{\gamma}\,u_\gamma$$
and
$$[u_\alpha', u_\beta'] = C_{\alpha\beta}^{\gamma}\,u_\gamma'.$$
Let us define an isomorphism $\psi$ by the relation
$$\psi(u_\alpha) = u_\alpha'.$$
If
$$u = a^\alpha u_\alpha,$$
2.2 Adjoint algebra 57
then
$$\psi(u) = a^\alpha u_\alpha'.$$
It is manifest that ψ is a one-to-one mapping. Preservation of the commutator fol-
lows from (2.1.2). Theorem 2.1 is proved.
It is important to point out that for every set of constants $C_{\alpha\beta}^{\gamma}$ satisfying the Jacobi relations (2.1.3) there exists a Lie algebra $L_r$ for which these $C_{\alpha\beta}^{\gamma}$ are its structure constants. In order to construct such an $L_r$, one has to take any r-dimensional vector space, choose some basis $\{u_\alpha\}$ in it and introduce the operation of commutation by the formula (2.1.2). Then equations (2.1.3) guarantee that the defined commutator satisfies all axioms of Definition 2.1.
Finally, let us introduce several other notions connected with a Lie algebra.
The Lie algebra L is said to be simple if it does not contain ideals other than zero
(consisting of one zero vector) or other than the algebra L itself.
One can readily verify that the linear span of all commutators $[u,v]$ of vectors of the Lie algebra L is an ideal in L. It is called the derived algebra of the Lie algebra L and is denoted by $L^{(1)}$. One can construct the sequence of derived algebras $L^{(k)}$ $(k = 1, 2, \dots)$ by determining $L^{(k)}$ as the derived algebra of $L^{(k-1)}$.
A Lie algebra L is said to be solvable if $L^{(k)} = \{0\}$ (the null algebra) for a certain $k < \infty$.
The Lie algebra L is said to be semi-simple if it does not contain solvable ideals other than zero.
The term “inner derivation” is justified by the fact that the mapping Da acts on
the commutator [u; v] according to the formula similar to the derivation of a product
of functions, namely
Da [u; v] = [Da u; v] + [u; Dav]: (2.2.2)
Equation (2.2.2) is easily proved by using the Jacobi relations.
When the vector a runs through the whole Lie algebra L, one obtains a set of inner derivations $\{D_a\}$, on which one can determine linear operations of summation and multiplication by numbers that turn it into a linear vector space $L_D$:
$$\alpha D_a + \beta D_b = D_{\alpha a + \beta b}. \tag{2.2.3}$$
Here $D_0$ is the zero mapping: $D_0 u = [u, 0] = 0$.
Definition 2.5. The Lie algebra LD constructed according to Eqs. (2.2.1), (2.2.3),
(2.2.4) is called the algebra of inner derivations or the adjoint algebra of the Lie
algebra L:
Theorem 2.3. The adjoint algebra LD is isomorphic to the quotient algebra L=Z of
the Lie algebra L with respect to its center Z:
Proof. There exists a natural homomorphism ψ of L on LD ; namely
ψ (a) = Da :
Equation (2.2.3) shows that ψ is linear, whereas equation (2.2.4) entails that it pre-
serves the commutator. To complete the proof one has to verify that the kernel J
of the homomorphism ψ coincides with Z: By definition, a 2 J if and only if
ψ (a) = D0 : If a 2 Z; then according to Eq. (2.2.1), Da u = 0; so that ψ (a) = D0 ; i.e.
$a \in J$. Conversely, if $a \in J$, then $D_a = D_0$ and
$$[a,b] = -[b,a] = -D_a b = -D_0 b = 0$$
$$E_\beta = C_{\alpha\beta}^{\gamma}\,x^\alpha\,\frac{\partial}{\partial x^\gamma} \quad (\beta = 1,\dots,r) \tag{2.2.6}$$
acting in the r-dimensional space of the points $x = (x^1,\dots,x^r)$, and consider their linear combinations
$$E = e^\beta E_\beta$$
with constant (i.e. independent of x) coefficients e1 ; : : : ; er : Let fEg be the set of all
such operators E:
Theorem 2.4. The set fEg is a Lie algebra of operators, isomorphic to the adjoint
algebra LD of the Lie algebra L:
Proof. The set $\{E\}$ is a linear space of operators by construction. The axioms 1°–3° of Definition 2.1 always hold (see §1.6) for operators with the usual definition of the commutator (see Definition 1.7). Therefore, in order to prove that $\{E\}$ is a Lie algebra, one has only to verify that $[E_\beta, E_\theta] \in \{E\}$ for any $\beta, \theta = 1,\dots,r$. This is provided by straightforward calculations invoking the properties of the structure constants (2.1.3). One has
$$\begin{aligned}
[E_\beta, E_\theta] &= C_{\alpha\beta}^{\gamma}\,x^\alpha\,\frac{\partial}{\partial x^\gamma}\bigl(C_{\sigma\theta}^{\varepsilon}\,x^\sigma\bigr)\frac{\partial}{\partial x^\varepsilon} - C_{\alpha\theta}^{\gamma}\,x^\alpha\,\frac{\partial}{\partial x^\gamma}\bigl(C_{\sigma\beta}^{\varepsilon}\,x^\sigma\bigr)\frac{\partial}{\partial x^\varepsilon}\\
&= \bigl(C_{\alpha\beta}^{\gamma}\,C_{\gamma\theta}^{\varepsilon} - C_{\alpha\theta}^{\gamma}\,C_{\gamma\beta}^{\varepsilon}\bigr)\,x^\alpha\,\frac{\partial}{\partial x^\varepsilon} = \bigl(C_{\alpha\beta}^{\gamma}\,C_{\gamma\theta}^{\varepsilon} + C_{\theta\alpha}^{\gamma}\,C_{\gamma\beta}^{\varepsilon}\bigr)\,x^\alpha\,\frac{\partial}{\partial x^\varepsilon}\\
&= -C_{\beta\theta}^{\gamma}\,C_{\gamma\alpha}^{\varepsilon}\,x^\alpha\,\frac{\partial}{\partial x^\varepsilon} = C_{\beta\theta}^{\gamma}\,C_{\alpha\gamma}^{\varepsilon}\,x^\alpha\,\frac{\partial}{\partial x^\varepsilon} = C_{\beta\theta}^{\gamma}\,E_\gamma\,.
\end{aligned}$$
Further, using the fact that equation (2.2.4) entails the equality
$$[D_\beta, D_\theta] = C_{\beta\theta}^{\gamma}\,D_\gamma$$
and comparing Eqs. (2.2.5) and (2.2.6) one concludes that the mapping ψ (Eα ) = Dα
is an isomorphism.
The Lie algebra of the operators fEg is said to be a representation of the adjoint
algebra LD :
$$E = e^\beta E_\beta = e^\beta\,C_{\alpha\beta}^{\gamma}\,x^\alpha\,\frac{\partial}{\partial x^\gamma}\,,$$
then, according to Theorem 1.2, the transformations composing the corresponding
G1 can be obtained by integrating the following system of ordinary differential equa-
tions with the initial conditions:
$$\frac{dx'^\gamma}{dt} = e^\beta\,C_{\alpha\beta}^{\gamma}\,x'^\alpha, \qquad x'^\gamma(0) = x^\gamma \quad (\gamma = 1,\dots,r). \tag{2.2.7}$$
Since (2.2.7) is a system of linear homogeneous equations with constant coeffi-
cients, the solution of (2.2.7) is a linear form of the initial data x1 ; : : : ; xr and can be
written in the form
$$x'^\gamma = f_\sigma^{\gamma}(t)\,x^\sigma \quad (\gamma = 1,\dots,r). \tag{2.2.8}$$
Equations (2.2.8) determine the desired transformations in E r composing the
group G1 : These transformations are linear. We will interpret them as transforma-
tions of coordinates of the vector u 2 Lr ; i.e. as transformations of the vectors u 2 Lr
given in the basis fuα g: These transformations are denoted by the symbol At so that
A0 is an identical transformation.
Theorem 2.5. The transformations At are automorphisms of the Lie algebra Lr :
Proof. It is manifest that the mapping $u' = A_t u$ is one-to-one (since $f_\sigma^{\gamma}(0) = \delta_\sigma^{\gamma}$)
and linear. It remains to verify that this mapping preserves the commutator. It is
sufficient to prove this property for the basis vectors only, i.e. to show that
At [uα ; uβ ] = [At uα ; At uβ ]:
One has
$$A_t[u_\alpha, u_\beta] = C_{\alpha\beta}^{\gamma}\,A_t u_\gamma = C_{\alpha\beta}^{\gamma}\,f_\gamma^{\sigma}(t)\,u_\sigma,$$
$$[A_t u_\alpha, A_t u_\beta] = \bigl[f_\alpha^{\gamma}(t)\,u_\gamma,\ f_\beta^{\theta}(t)\,u_\theta\bigr] = C_{\gamma\theta}^{\sigma}\,f_\alpha^{\gamma}(t)\,f_\beta^{\theta}(t)\,u_\sigma.$$
Setting
$$q_{\alpha\beta}^{\sigma} = C_{\alpha\beta}^{\gamma}\,f_\gamma^{\sigma}(t) - C_{\gamma\theta}^{\sigma}\,f_\alpha^{\gamma}(t)\,f_\beta^{\theta}(t),$$
one obtains
2.3 Local Lie group 61
$$\frac{dq_{\alpha\beta}^{\sigma}}{dt} = e^\varepsilon\,C_{\tau\varepsilon}^{\sigma}\,q_{\alpha\beta}^{\tau}, \qquad q_{\alpha\beta}^{\sigma}(0) = 0$$
upon simple but tedious calculations based on Eqs. (2.2.7) and (2.1.3). The uniqueness of the solution of the above system of equations provides
$$q_{\alpha\beta}^{\sigma}(t) \equiv 0,$$
which proves the theorem.
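For the so(3) example used earlier, Theorem 2.5 can be checked numerically; the sketch below (ours) uses the fact that in that basis the commutator is the vector cross product, and that integrating (2.2.7) with $e = (0,0,1)$ gives a rotation about the third axis (up to the orientation of the parameter).

```python
# Numerical sketch of Theorem 2.5 for so(3): the transformation A_t obtained
# from (2.2.7) is a rotation, and A_t preserves the commutator (cross product).
import numpy as np

def A(t):
    # Rotation about the third axis, the flow of the adjoint operator
    # for e = (0, 0, 1) in the so(3) basis.
    ct, st = np.cos(t), np.sin(t)
    return np.array([[ct, -st, 0.0], [st, ct, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
t = 0.7

lhs = A(t) @ np.cross(u, v)       # A_t [u, v]
rhs = np.cross(A(t) @ u, A(t) @ v)  # [A_t u, A_t v]
assert np.allclose(lhs, rhs)
```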
The equality
$$g_a g_b = g_c$$
implies
$$c = \varphi(a, b),$$
where $\varphi(a,b)$ is a function determined on $\Omega \times \Omega$. The function $\varphi(a,b)$ can be called a multiplication law of elements of the group G. Sometimes the multiplication law can be determined not for the whole group G, but only for some subset $G_r \subset G$, which leads to the notion of a local group.
Definition 2.6. A subset $G_r$ of the group G containing the unit element $g_0$ is called a local Lie group if the following conditions are satisfied:
(i) there is a one-to-one correspondence between the elements of $G_r$ and the points $a \in Q$ of an open sphere $Q \subset E^r$ with the center 0, so that $g_0 \leftrightarrow 0$;
(ii) there exists $\varepsilon > 0$ such that $g_a g_b \in G_r$ and $g_a^{-1} \in G_r$ for any points a, b with $|a| < \varepsilon$, $|b| < \varepsilon$;
(iii) the multiplication law $c = \varphi(a,b)$ is a thrice continuously differentiable function of the coordinates of the points a and b.
Remark 2.1. In general $G_r$ is not a group. Therefore the notion of a local Lie group can be defined without the supposition that $G_r$ is contained in a group G. Then it is a set with an associative operation of multiplication, containing a unit element and inverses of its elements. However, these operations are determined not for all elements, but only for those that are "sufficiently close" to the unit element in the sense of Definition 2.6.
$$g_a g_b = g_c:\quad c = \varphi(a,b),$$
where the functions $f^\alpha(\bar a)$ are thrice continuously differentiable and satisfy the condition
$$\det\left\|\frac{\partial f^\alpha}{\partial \bar a^\beta}\right\|_{\bar a = 0} \ne 0. \tag{2.3.2$'$}$$
Let us point out some simple properties of the multiplication law. Since the unit element $g_0$ corresponds to the point $a = 0$ ($a^\alpha = 0$, $\alpha = 1,\dots,r$), the equations
$$g_0 g_0 = g_0, \qquad g_a g_0 = g_a, \qquad g_0 g_b = g_b$$
yield
$$\varphi(0,0) = 0, \qquad \varphi(a,0) = a, \qquad \varphi(0,b) = b. \tag{2.3.3}$$
By virtue of Eqs. (2.3.3), the Taylor expansion of ϕ (a; b) yields
The free index $\alpha$ appearing in this formula runs through the values $1,\dots,r$ even though this is not written explicitly. Many formulae follow this rule in what follows.
2.3.2 Subgroups
Using these functions, one can write the expansion of ϕ (a; b) as follows:
$$g(s)\,g(t) = g(s + t)$$
$$\frac{da^\alpha(t+s)}{ds} = A_\beta^{\alpha}\bigl(a(t+s)\bigr)\,e^\beta, \qquad a^\alpha(t+s)\big|_{s=0} = a^\alpha(t)$$
by construction. Further, for the solution of the system (2.3.9) one has
$$a^\alpha(t) = e^\alpha t + O(|t|^2)$$
and
where the latter equality follows from Eq. (2.3.6). Therefore, we have the equation
Note that the value ϕ α (a(t); ϕ (a(s); a(u))) is a coordinate of the element
Thus, differentiating the above equation with respect to u; setting u = 0 and invoking
Eqs. (2.3.7), (2.3.5) and (2.3.3), one obtains
$$\frac{d\varphi^\alpha(a(t), a(s))}{ds} = A_\beta^{\alpha}\bigl(\varphi(a(t), a(s))\bigr)\,e^\beta, \qquad \varphi^\alpha(a(t), a(s))\big|_{s=0} = a^\alpha(t).$$
Hence, the left and the right-hand sides of Eq. (2.3.8) satisfy one and the same dif-
ferential equation and the same initial condition, and hence they coincide according
to the theorem on uniqueness of the solution of the Cauchy problem. Theorem 2.6
is proved.
Corollary 2.1. For any vector e there exists one and only one subgroup G1 ; having
this vector e as its directing vector.
$$g(t):\ a^\alpha = e^\alpha t,$$
the uniqueness of the solution entails that the function $f^\alpha$ has the property
$$\frac{\partial f^\alpha(0,\dots,1,\dots,0;\,t)}{\partial t}\bigg|_{t=0} = A_\beta^{\alpha}(0) = \delta_\beta^{\alpha}.$$
Finally we demonstrate that the coordinates $\bar a$ form a canonical system of coordinates of the first kind. Indeed, if $\bar a^\alpha = e^\alpha t$, then
$$a^\alpha = f^\alpha(et;\,1) = f^\alpha(e;\,t),$$
so that the curve g(t) with the coordinates aα = aα (t) is a subgroup G1 : Theorem 2.7
is proved.
Corollary 2.2. One can draw a one-parameter subgroup through every element of
a local Lie group Gr sufficiently close to the unit element g0 :
In what follows, the symbol $a^{-1}$ denotes the point of $E^r$ corresponding to the element $g_a^{-1}$, so that $g_{a^{-1}} = g_a^{-1}$. Let us introduce the auxiliary functions
$$V_\beta^{\alpha}(b) = \frac{\partial \varphi^\alpha(a,b)}{\partial b^\beta}\bigg|_{a=b^{-1}}, \qquad V_\beta^{\alpha}(0) = \delta_\beta^{\alpha}. \tag{2.3.10}$$
Then
$$V_\alpha^{\gamma}(\varphi)\,\frac{\partial \varphi^\alpha}{\partial b^\beta} = V_\beta^{\gamma}(b), \qquad \varphi^\alpha(a, 0) = a^\alpha. \tag{2.3.11}$$
Conversely, given twice continuously differentiable functions $V_\beta^{\alpha}(b)$ such that $V_\beta^{\alpha}(0) = \delta_\beta^{\alpha}$, for which the system (2.3.11) has a single solution $\varphi(a,b)$ for any values $a^\alpha$, there exists a local Lie group $G_r$ with the multiplication law $\varphi(a,b)$ and with the given auxiliary functions $V_\beta^{\alpha}(b)$.
Proof. Let us replace b by $b + \Delta b$ in the formula $g_c = g_a g_b$ with a unaltered. Then c is replaced by $c + \Delta c$. Multiplying the equation
$$g_{c+\Delta c} = g_a\,g_{b+\Delta b}$$
by $g_c^{-1} = g_b^{-1} g_a^{-1}$ on the left, one obtains in coordinates
$$\varphi^\gamma(c^{-1},\ c + \Delta c) = \varphi^\gamma(b^{-1},\ b + \Delta b).$$
Noting that
$$\Delta c^\alpha = \frac{\partial \varphi^\alpha}{\partial b^\beta}\,\Delta b^\beta + O(|\Delta b|^2)$$
and
$$\varphi^\alpha(b^{-1},\ b + \Delta b) = V_\beta^{\alpha}(b)\,\Delta b^\beta + O(|\Delta b|^2),$$
one transforms the above equality into
$$V_\alpha^{\gamma}(\varphi)\,\frac{\partial \varphi^\alpha}{\partial b^\beta}\,\Delta b^\beta = V_\beta^{\gamma}(b)\,\Delta b^\beta + O(|\Delta b|^2),$$
whence equations (2.3.11) follow.
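A toy one-dimensional multiplication law (our example, not from the book) makes relations (2.3.10) and (2.3.11) concrete: for $\varphi(a,b) = a + b + ab$ the auxiliary function is $V(b) = 1/(1+b)$, and (2.3.11) holds identically.

```python
# Sketch: the 1-D local group with multiplication law phi(a,b) = a + b + a*b.
import sympy as sp

a, b = sp.symbols('a b')
phi = a + b + a*b

# Inverse element: phi(a, a_inv) = 0  =>  a_inv = -a/(1 + a)
cvar = sp.Symbol('c')
a_inv = sp.solve(sp.Eq(phi.subs(b, cvar), 0), cvar)[0]
assert sp.simplify(a_inv + a/(1 + a)) == 0

# V(b) = dphi/db evaluated at a = b^{-1}   (Eq. (2.3.10))
V = sp.simplify(sp.diff(phi, b).subs(a, a_inv.subs(a, b)))
assert sp.simplify(V - 1/(1 + b)) == 0

# Relation (2.3.11): V(phi) * dphi/db = V(b)
assert sp.simplify(V.subs(b, phi)*sp.diff(phi, b) - V) == 0
```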
Let us prove now the converse statement. The assumptions about the functions
Vβα (b) guarantee that the solution ϕ α (a; b) of the system (2.3.11) is determined
and is thrice continuously differentiable with respect to the variables aα ; bβ from a
certain neighborhood ω of the origin of coordinates in the space E r of the points a:
Further considerations refer to this neighborhood ω without special notice.
We define the operation of multiplication $a \cdot b$ for points of $E^r$ by the formula $a \cdot b = \varphi(a,b)$ and prove that $E^r$ (more precisely, some sphere $Q \subset E^r$) is a local Lie group with this multiplication. Since $\varphi(a,b)$ is smooth, our operation of multiplication is determined in a sufficiently small vicinity of the origin of coordinates. Therefore, it remains to prove the validity of the group axioms only.
First, let us establish a relation between the functions $V_\beta^\alpha$ given by Eqs. (2.3.11) and the functions $A_\beta^\alpha$ determined by the solution $\varphi(a,b)$ of the system (2.3.11) according to the formulae (2.3.5). This relation is given by
$$V_\alpha^\gamma(a)\,A_\beta^\alpha(a) = \delta_\beta^\gamma, \qquad (2.3.12)$$
so that the matrix $(V_\beta^\alpha)$ is the inverse of the matrix $(A_\beta^\alpha)$. Equations (2.3.12) follow directly from Eqs. (2.3.11) upon setting $b=0$. By virtue of (2.3.12), equations (2.3.11) take the equivalent form
$$\frac{\partial\varphi^\alpha}{\partial b^\beta} = A_\gamma^\alpha(\varphi)\,V_\beta^\gamma(b), \qquad \varphi^\alpha(a,0) = a^\alpha. \qquad (2.3.13)$$
Let us prove that the introduced multiplication is associative. If we set
$$u = \varphi(a,b), \qquad w = \varphi(u,c), \qquad v = \varphi(b,c), \qquad \bar w = \varphi(a,v),$$
we have to prove the equality $w = \bar w$. Turning to coordinates and using Eqs. (2.3.13) and (2.3.12), one obtains
$$\frac{\partial w^\alpha}{\partial c^\beta} = A_\gamma^\alpha(w)\,V_\beta^\gamma(c), \qquad w^\alpha|_{c=0} = u^\alpha,$$
$$\frac{\partial\bar w^\alpha}{\partial c^\beta} = \frac{\partial\varphi^\alpha(a,v)}{\partial v^\sigma}\,\frac{\partial v^\sigma}{\partial c^\beta} = A_\tau^\alpha(\bar w)\,V_\sigma^\tau(v)\,A_\gamma^\sigma(v)\,V_\beta^\gamma(c) = A_\gamma^\alpha(\bar w)\,V_\beta^\gamma(c), \qquad \bar w^\alpha|_{c=0} = \varphi^\alpha(a,b) = u^\alpha.$$
One can see that $w^\alpha$ and $\bar w^\alpha$ satisfy one and the same system of differential equations of the form (2.3.13) with the same initial conditions. By virtue of the uniqueness of the solution it follows that $w^\alpha = \bar w^\alpha$.
Further, it follows from $\varphi(a,0) = a$ that the point $0$ is the right unit. Setting $a=0$ in Eqs. (2.3.13), one can see that the solution is $\varphi^\alpha = b^\alpha$, so that $\varphi(0,b) = b$, i.e. the point $0$ is the left unit as well.
Finally, the system of equations $\varphi^\alpha(a,b) = 0$ $(\alpha = 1,\dots,r)$ determines the functions $b^\alpha = b^\alpha(a)$ in a vicinity of the point $0$. Setting $(a^{-1})^\alpha = b^\alpha(a)$, one obtains the inverse $a^{-1}$ of the element $a$.
To complete the proof one has only to verify that the functions
$$\bar V_\beta^\alpha(b) = \left.\frac{\partial\varphi^\alpha(a,b)}{\partial b^\beta}\right|_{a=b^{-1}}$$
coincide with the given functions $V_\beta^\alpha(b)$. Setting $a = b^{-1}$ in Eqs. (2.3.11) and taking into account the equations $\varphi^\alpha(b^{-1}, b) = 0$ and $V_\alpha^\gamma(0) = \delta_\alpha^\gamma$, one obtains
$$V_\beta^\gamma(b) = \delta_\alpha^\gamma\left.\frac{\partial\varphi^\alpha}{\partial b^\beta}\right|_{a=b^{-1}} = \left.\frac{\partial\varphi^\gamma}{\partial b^\beta}\right|_{a=b^{-1}} = \bar V_\beta^\gamma(b).$$
Let us now consider overdetermined systems of partial differential equations of the form
$$\frac{\partial u^i}{\partial x^\alpha} = f_\alpha^i(x,u), \qquad u^i(x_0) = u_0^i \quad (i=1,\dots,m;\ \alpha=1,\dots,r). \qquad (2.3.14)$$
Definition 2.9. The system (2.3.14) is said to be totally integrable if it has a solution for any initial data $x_0, u_0$.
Lemma 2.1. In order for the system (2.3.14) to be totally integrable, it is necessary and sufficient that the equations
$$\frac{\partial f_\alpha^i}{\partial x^\beta} + \frac{\partial f_\alpha^i}{\partial u^j}\,f_\beta^j = \frac{\partial f_\beta^i}{\partial x^\alpha} + \frac{\partial f_\beta^i}{\partial u^j}\,f_\alpha^j \qquad (2.3.15)$$
hold identically with respect to the independent variables $x, u$.
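A small symbolic check of the integrability test (the system $u_x = u$, $u_y = yu$ is an assumed example, not taken from the text):

```python
import sympy as sp

x, y, u = sp.symbols('x y u')

# Right-hand sides of the assumed system du/dx = f1(x,y,u), du/dy = f2(x,y,u)
f1, f2 = u, y*u

# Total-integrability condition (2.3.15) for m = 1, r = 2:
#   df1/dy + df1/du * f2  ==  df2/dx + df2/du * f1
lhs = sp.diff(f1, y) + sp.diff(f1, u)*f2
rhs = sp.diff(f2, x) + sp.diff(f2, u)*f1
assert sp.simplify(lhs - rhs) == 0

# Hence a solution exists for any initial data; with u(0,0) = 1 it is
sol = sp.exp(x + y**2/2)
assert sp.simplify(sp.diff(sol, x) - sol) == 0
assert sp.simplify(sp.diff(sol, y) - y*sol) == 0
```

Dropping the factor $y$ from $f_2$ to, say, $f_2 = x u$ would violate (2.3.15), and the two mixed second derivatives of $u$ would disagree.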
Proof. Necessity. Calculating the derivative
$$\frac{\partial^2 u^i}{\partial x^\alpha\partial x^\beta}$$
in two ways, one arrives at Eqs. (2.3.15) on the solution and, in particular, at the initial point $x_0, u_0$. Since the point $x_0, u_0$ is arbitrary, equations (2.3.15) are identities with respect to the independent variables $x, u$.
Sufficiency. Let us determine the functions $v^i = v^i(t,e)$ as the solution of the system of ordinary differential equations ($e$ is a constant vector)
$$\frac{\partial v^i}{\partial t} = e^\alpha f_\alpha^i(te, v), \qquad v^i(0) = u_0^i. \qquad (2.3.16)$$
For the sake of simplicity, and without loss of generality, we prove the existence of a solution of the system (2.3.14) with the initial data at the point $x_0 = 0$. We set $u^i(x) = v^i(1,x)$ and prove that this is a solution of the problem (2.3.14). To this end we note that the following equation is satisfied:
$$v^i(t,e) = v^i(1, te).$$
It is derived in the same way as the similar property of the function $f^\alpha(e,t)$ in the proof of Theorem 2.7. We will verify Eqs. (2.3.14) by demonstrating that
$$R_\alpha^i(x) \equiv \frac{\partial u^i}{\partial x^\alpha} - f_\alpha^i(x,u) = 0.$$
To this end let us introduce the functions
$$S_\alpha^i(t) = \frac{\partial v^i(t,e)}{\partial e^\alpha} - t\,f_\alpha^i(te, v(t,e)).$$
Differentiating with respect to $t$, using Eqs. (2.3.16), the identities (2.3.15) and the definition of $S_\alpha^i$, one obtains
$$\frac{dS_\alpha^i}{dt} = e^\beta\,\frac{\partial f_\beta^i}{\partial u^j}\,S_\alpha^j, \qquad S_\alpha^i(0) = 0.$$
Thus, the $S_\alpha^i(t)$ satisfy a system of linear homogeneous ordinary differential equations and have zero initial values. Therefore $S_\alpha^i(t) \equiv 0$, and Lemma 2.1 is proved.
Note that the system (2.3.16) always has a unique solution for sufficiently smooth right-hand sides $f_\alpha^i(x,u)$, independently of the conditions (2.3.15).
The following theorem has the same relation to the system (2.3.13) as Lemma 2.1
to the system (2.3.14).
Theorem 2.9. The system (2.3.13) is completely integrable if and only if the functions $V_\beta^\alpha(b)$ satisfy the system of equations
$$\frac{\partial V_\beta^\alpha}{\partial b^\gamma} - \frac{\partial V_\gamma^\alpha}{\partial b^\beta} = C_{\sigma\tau}^\alpha V_\beta^\sigma V_\gamma^\tau, \qquad V_\beta^\alpha(0) = \delta_\beta^\alpha, \qquad (2.3.17)$$
where $C_{\beta\gamma}^\alpha$ $(\alpha,\beta,\gamma = 1,\dots,r)$ are some constants.
Proof. If the system (2.3.13) is completely integrable, then according to Lemma 2.1 the right-hand sides of Eqs. (2.3.13) should satisfy equations of the form (2.3.15). Due to the special form of these right-hand sides, one can reduce the corresponding Eqs. (2.3.15), upon simple transformations where only the relations (2.3.12) are used, to the form
$$A_\sigma^\beta(\varphi)A_\tau^\gamma(\varphi)\left[\frac{\partial V_\beta^\alpha(\varphi)}{\partial\varphi^\gamma} - \frac{\partial V_\gamma^\alpha(\varphi)}{\partial\varphi^\beta}\right] = A_\sigma^\beta(b)A_\tau^\gamma(b)\left[\frac{\partial V_\beta^\alpha(b)}{\partial b^\gamma} - \frac{\partial V_\gamma^\alpha(b)}{\partial b^\beta}\right].$$
Since the variables $b$ and $\varphi = \varphi(a,b)$ are independent, the resulting equality can be an identity with respect to $b, \varphi$ only if the common value of both expressions is a constant:
$$A_\sigma^\beta(b)A_\tau^\gamma(b)\left[\frac{\partial V_\beta^\alpha(b)}{\partial b^\gamma} - \frac{\partial V_\gamma^\alpha(b)}{\partial b^\beta}\right] = C_{\sigma\tau}^\alpha = \mathrm{const}.$$
Multiplying both sides by $V_\beta^\sigma V_\gamma^\tau$, one obtains Eqs. (2.3.17), which are thereby equivalent to the conditions of complete integrability of the system (2.3.13). Theorem 2.9 is proved.
Definition 2.10. Equations (2.3.17) are referred to as the Maurer-Cartan equations. The constants $C_{\beta\gamma}^\alpha$ in Eqs. (2.3.17) are called the structure constants of the local Lie group $G_r$.
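For a quick concrete check (an assumed example, not from the text: the two-dimensional solvable algebra with auxiliary functions $V(b) = \begin{pmatrix}1 & -b^1\\ 0 & 1\end{pmatrix}$ and the single nonzero structure constants $C^1_{12} = -C^1_{21} = 1$), the Maurer-Cartan equations (2.3.17) can be verified symbolically:

```python
import sympy as sp

b = sp.symbols('b1 b2')
# Assumed auxiliary functions V^alpha_beta(b) (0-based indices: alpha=row, beta=col)
V = sp.Matrix([[1, -b[0]], [0, 1]])

# Assumed structure constants: C^1_12 = -C^1_21 = 1, all others zero
Cdict = {(0, 0, 1): 1, (0, 1, 0): -1}
def C(al, s, t):
    return Cdict.get((al, s, t), 0)

# Check (2.3.17): dV^a_b/db^g - dV^a_g/db^b = C^a_st V^s_b V^t_g
for al in range(2):
    for be in range(2):
        for ga in range(2):
            lhs = sp.diff(V[al, be], b[ga]) - sp.diff(V[al, ga], b[be])
            rhs = sum(C(al, s, t)*V[s, be]*V[t, ga]
                      for s in range(2) for t in range(2))
            assert sp.simplify(lhs - rhs) == 0
print("Maurer-Cartan equations hold")
```

The same constants are produced by the commutator $[\partial_x,\, x\partial_x] = \partial_x$, which is how they would arise from a group of transformations.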
Note that the Maurer-Cartan equations can be transformed into equivalent equations for the auxiliary functions $A_\beta^\alpha(a)$. Namely, multiplying Eqs. (2.3.17) by appropriate products of the functions $A_\beta^\alpha$ and using the identities (2.3.12), one obtains
$$A_\beta^\sigma\,\frac{\partial A_\gamma^\alpha}{\partial a^\sigma} - A_\gamma^\sigma\,\frac{\partial A_\beta^\alpha}{\partial a^\sigma} = C_{\beta\gamma}^\sigma A_\sigma^\alpha. \qquad (2.3.18)$$
In what follows one can see that the local Lie group $G_r$ is completely determined by the set of its structure constants (the third Lie theorem). First, let us deduce two properties of a canonical coordinate system of the first kind.
Lemma 2.2. The necessary and sufficient condition for the system of coordinates $\Sigma_a$ to be canonical of the first kind is that the functions $V_\beta^\alpha$ satisfy the relations
$$V_\beta^\alpha(b)\,b^\beta = b^\alpha \quad (\alpha = 1,\dots,r). \qquad (2.3.19)$$
Proof. Necessity. Let $\Sigma_a$ be canonical of the first kind. Then the curve $a^\alpha = e^\alpha t$ $(\alpha = 1,\dots,r)$ is a subgroup $G_1$ for any vector $e = (e^1,\dots,e^r)$ and hence satisfies Eqs. (2.3.9). Substituting $a^\alpha = e^\alpha t$ there, one obtains
$$e^\alpha = A_\beta^\alpha(et)\,e^\beta,$$
whence, multiplying by $t$ and setting $b = et$,
$$b^\alpha = A_\beta^\alpha(b)\,b^\beta.$$
Multiplying by $V_\alpha^\gamma(b)$ and applying Eq. (2.3.12), we get Eqs. (2.3.19).
Sufficiency. Equations (2.3.19) entail that
$$a^\alpha = A_\beta^\alpha(a)\,a^\beta,$$
so that every curve $a^\alpha = e^\alpha t$ satisfies Eqs. (2.3.9) and hence is a subgroup $G_1$; thus $\Sigma_a$ is canonical of the first kind.
Lemma 2.3. In a canonical system of coordinates of the first kind, the functions $V_\beta^\alpha(b)$ are completely determined by the structure constants $C_{\beta\gamma}^\alpha$.
Proof. Let
$$W_\beta^\alpha(t) = t\,V_\beta^\alpha(te).$$
It is sufficient to prove that $W_\beta^\alpha(t)$ is determined uniquely by the $C_{\beta\gamma}^\alpha$, because $V_\beta^\alpha(b) = W_\beta^\alpha(1)$ with $e = b$. Differentiating the relations (2.3.19) with respect to $b^\gamma$, one obtains
$$\frac{\partial V_\beta^\alpha}{\partial b^\gamma}\,b^\beta = \delta_\gamma^\alpha - V_\gamma^\alpha.$$
Therefore, by virtue of Eqs. (2.3.17),
$$\frac{dW_\beta^\alpha}{dt} = \delta_\beta^\alpha + C_{\sigma\tau}^\alpha e^\tau W_\beta^\sigma, \qquad W_\beta^\alpha(0) = 0. \qquad (2.3.20)$$
Since the solution of the system (2.3.20) is unique, it is completely determined once the set of the structure constants $C_{\beta\gamma}^\alpha$ is given. Thus, Lemma 2.3 is proved.
Theorem 2.10. The structure constants $C_{\beta\gamma}^\alpha$ of the local Lie group $G_r$ satisfy the Jacobi relations
$$C_{\beta\gamma}^\alpha = -C_{\gamma\beta}^\alpha, \qquad C_{\alpha\beta}^\sigma C_{\sigma\gamma}^\tau + C_{\beta\gamma}^\sigma C_{\sigma\alpha}^\tau + C_{\gamma\alpha}^\sigma C_{\sigma\beta}^\tau = 0. \qquad (2.3.21)$$
Conversely, given any set of constants $C_{\beta\gamma}^\alpha$ satisfying the relations (2.3.21), there exists a local Lie group $G_r$ whose structure constants coincide with the given $C_{\beta\gamma}^\alpha$.
Proof. We set $b=0$ in Eqs. (2.3.17). Since $V_\beta^\alpha(0) = \delta_\beta^\alpha$, one obtains
$$C_{\beta\gamma}^\alpha = \left(\frac{\partial V_\beta^\alpha}{\partial b^\gamma} - \frac{\partial V_\gamma^\alpha}{\partial b^\beta}\right)_{b=0}, \qquad (2.3.22)$$
whence the first Jacobi relation (2.3.21) follows. Further, applying the operator $\partial/\partial b^\varepsilon$ to Eqs. (2.3.17) and using these equations once more, one obtains
$$\frac{\partial^2 V_\beta^\alpha}{\partial b^\gamma\partial b^\varepsilon} - \frac{\partial^2 V_\gamma^\alpha}{\partial b^\beta\partial b^\varepsilon} = C_{\sigma\tau}^\alpha\,\frac{\partial V_\beta^\sigma}{\partial b^\varepsilon}\,V_\gamma^\tau + C_{\sigma\tau}^\alpha V_\beta^\sigma\left(\frac{\partial V_\varepsilon^\tau}{\partial b^\gamma} + C_{\lambda\mu}^\tau V_\gamma^\lambda V_\varepsilon^\mu\right)$$
$$= C_{\sigma\tau}^\alpha V_\gamma^\tau\,\frac{\partial V_\beta^\sigma}{\partial b^\varepsilon} - C_{\sigma\tau}^\alpha V_\beta^\tau\,\frac{\partial V_\varepsilon^\sigma}{\partial b^\gamma} + C_{\sigma\tau}^\alpha C_{\lambda\mu}^\tau V_\beta^\sigma V_\gamma^\lambda V_\varepsilon^\mu.$$
Whence, setting $b=0$, one obtains the relation
$$C_{\varepsilon\gamma}^\sigma C_{\sigma\beta}^\alpha = \omega_{\varepsilon\gamma\beta}^\alpha - \omega_{\beta\varepsilon\gamma}^\alpha,$$
where
$$\omega_{\varepsilon\gamma\beta}^\alpha = \left(\frac{\partial^2 V_\beta^\alpha}{\partial b^\varepsilon\partial b^\gamma} + C_{\sigma\beta}^\alpha\,\frac{\partial V_\varepsilon^\sigma}{\partial b^\gamma}\right)_{b=0}.$$
Making the circular permutation of the indices $\varepsilon, \gamma, \beta$ twice in the above relation, one obtains two more similar relations, namely
$$C_{\gamma\beta}^\sigma C_{\sigma\varepsilon}^\alpha = \omega_{\gamma\beta\varepsilon}^\alpha - \omega_{\varepsilon\gamma\beta}^\alpha, \qquad C_{\beta\varepsilon}^\sigma C_{\sigma\gamma}^\alpha = \omega_{\beta\varepsilon\gamma}^\alpha - \omega_{\gamma\beta\varepsilon}^\alpha.$$
Adding all three relations, one obtains the second Jacobi relation (2.3.21).
Let us prove the converse statement. Given constants $C_{\beta\gamma}^\alpha$ satisfying the relations (2.3.21), let $W_\beta^\alpha(t,e)$ be the solution of the system (2.3.20). Let us set
$$V_\beta^\alpha(b) = W_\beta^\alpha(1,b)$$
and demonstrate that these $V_\beta^\alpha$ together with the given $C_{\beta\gamma}^\alpha$ satisfy the Maurer-Cartan equations (2.3.17). To this end, we introduce the functions
$$h_{\beta\gamma}^\alpha(t) = \frac{\partial W_\beta^\alpha}{\partial e^\gamma} - \frac{\partial W_\gamma^\alpha}{\partial e^\beta} - C_{\sigma\tau}^\alpha W_\beta^\sigma W_\gamma^\tau.$$
It is clear that $h_{\beta\gamma}^\alpha(0) = 0$. Differentiating the above functions with respect to $t$ and using Eqs. (2.3.20) and the relations (2.3.21), one obtains
$$\frac{dh_{\beta\gamma}^\alpha}{dt} = C_{\sigma\tau}^\alpha e^\tau h_{\beta\gamma}^\sigma.$$
Thus, the functions $h_{\beta\gamma}^\alpha$ satisfy a system of linear homogeneous differential equations with zero initial conditions. Therefore
$$h_{\beta\gamma}^\alpha(t) \equiv 0,$$
and, in particular,
$$h_{\beta\gamma}^\alpha(1) \equiv 0.$$
The latter are Eqs. (2.3.17) for the functions $V_\beta^\alpha(e)$. Letting $e=0$ in Eqs. (2.3.20), one obtains the equations
$$\frac{dW_\beta^\alpha}{dt} = \delta_\beta^\alpha,$$
whose solution is
$$W_\beta^\alpha(t,0) = \delta_\beta^\alpha\,t.$$
Hence
$$V_\beta^\alpha(0) = \delta_\beta^\alpha.$$
Using Theorem 2.9 we conclude that the system of Eqs. (2.3.13) is completely integrable with the obtained $V_\beta^\alpha(b)$. According to Theorem 2.8, there exists a local Lie group $G_r$ for which these $V_\beta^\alpha$ are the auxiliary functions. The given $C_{\beta\gamma}^\alpha$ are the structure constants of the constructed $G_r$ because they are expressed through the $V_\beta^\alpha$ by the formulae (2.3.22), which follow from Eqs. (2.3.17). This completes the proof of Theorem 2.10.
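Theorem 2.10 reduces the construction of a group to a set of constants obeying (2.3.21). These relations can be checked mechanically; the sketch below uses the rotation algebra so(3), whose structure constants are the Levi-Civita symbol (a standard example assumed here, not a statement of the text):

```python
# Levi-Civita symbol as structure constants of so(3): C^a_bg = eps(a, b, g)
def eps(i, j, k):
    # +1 / -1 for even / odd permutations of (0, 1, 2), else 0
    return (j - i) * (k - i) * (k - j) // 2

C = [[[eps(a, b, g) for g in range(3)] for b in range(3)] for a in range(3)]

# First Jacobi relation (antisymmetry): C^a_bg = -C^a_gb
for a in range(3):
    for b in range(3):
        for g in range(3):
            assert C[a][b][g] == -C[a][g][b]

# Second Jacobi relation: C^s_ab C^t_sg + C^s_bg C^t_sa + C^s_ga C^t_sb = 0
for a in range(3):
    for b in range(3):
        for g in range(3):
            for t in range(3):
                total = sum(C[s][a][b]*C[t][s][g] + C[s][b][g]*C[t][s][a]
                            + C[s][g][a]*C[t][s][b] for s in range(3))
                assert total == 0
```

By the converse part of Theorem 2.10, these constants determine a local Lie group, which in this case is the rotation group.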
One peculiarity of the constructed group is that the system of coordinates $\Sigma_a$ is canonical of the first kind. Indeed, setting $u^\alpha(t) = W_\beta^\alpha(t,e)\,e^\beta - te^\alpha$ and using Eqs. (2.3.20) together with the antisymmetry of the $C_{\sigma\tau}^\alpha$, one obtains
$$\frac{du^\alpha}{dt} = C_{\sigma\tau}^\alpha e^\tau u^\sigma, \qquad u^\alpha(0) = 0.$$
Uniqueness of the solution $u^\alpha(t)$ of this initial value problem gives $u^\alpha(t) \equiv 0$, which shows that (2.3.19) holds by construction of the functions $V_\beta^\alpha$.
The other peculiarity is that the resulting coordinates $\Sigma_a$ are analytic. In other words, the multiplication law $\varphi(a,b)$ is a holomorphic function of the variables $a^\alpha, b^\beta$ at the point $a=b=0$. Indeed, since the functions $W_\beta^\alpha(t,e)$ solve the (non-homogeneous) system of linear equations (2.3.20) with constant (with respect to $t$) coefficients, they are determined and analytic with respect to $t$ for $-\infty < t < +\infty$. Further, the right-hand sides in Eqs. (2.3.20) are analytic functions of the coordinates $e^1,\dots,e^r$ of the vector $e$. Therefore, the solution is holomorphic with respect to $e$, at least in the vicinity of the point $e=0$. Due to the equation $V_\beta^\alpha(b) = W_\beta^\alpha(1,b)$, the functions $V_\beta^\alpha(b)$, and hence $A_\beta^\alpha(b)$, are holomorphic. Finally, the solution of the completely integrable system (2.3.11) with holomorphic right-hand sides is holomorphic, which was to be proved. Thus, analytic coordinates exist in any local Lie group $G_r$.
In fact, the three fundamental theorems of Lie show that the investigation of groups $G_r$ reduces to the investigation of third-order tensors $C_{\beta\gamma}^\alpha$. Of course, this reduction holds up to local isomorphism of groups $G_r$. Two local Lie groups $G_r$ and $\bar G_r$ are said to be locally isomorphic if elements of some vicinities of the unit element in $G_r$ and $\bar G_r$ can be set into one-to-one correspondence $g_a \leftrightarrow \bar g_a$ so that
$$g_0 \leftrightarrow \bar g_0, \qquad g_ag_b \leftrightarrow \bar g_a\bar g_b, \qquad g_a^{-1} \leftrightarrow \bar g_a^{-1}.$$
Obviously, it is necessary and sufficient for the local isomorphism of $G_r$ and $\bar G_r$ that there exist systems of coordinates $\Sigma_a$ and $\Sigma_{\bar a}$ in $G_r$ and $\bar G_r$, respectively, in which the multiplication laws for elements of $G_r$ and $\bar G_r$ coincide.
Let us now assign to the group $G_r$ the operators
$$X_\alpha = A_\alpha^\sigma(a)\,\frac{\partial}{\partial a^\sigma} \quad (\alpha = 1,\dots,r). \qquad (2.3.23)$$
Computing the commutators of the operators (2.3.23) according to Definition 1.7,
$$[X_\alpha, X_\beta] = \left(A_\alpha^\tau\,\frac{\partial A_\beta^\sigma}{\partial a^\tau} - A_\beta^\tau\,\frac{\partial A_\alpha^\sigma}{\partial a^\tau}\right)\frac{\partial}{\partial a^\sigma},$$
and invoking the Maurer-Cartan equations in the form (2.3.18), one obtains
$$[X_\alpha, X_\beta] = C_{\alpha\beta}^\sigma X_\sigma,$$
where the $C_{\alpha\beta}^\sigma$ are the structure constants of the given $G_r$. The resulting relations
demonstrate that the linear span $\{X\}$ of the operators (2.3.23),
$$X = e^\alpha X_\alpha,$$
is a Lie algebra of operators, namely the algebra $L_r$ whose structure constants in the basis (2.3.23) are equal to the numbers $C_{\beta\gamma}^\alpha$. The linear independence of the operators (2.3.23) follows from the fact that they take the form
$$X_\alpha = \frac{\partial}{\partial a^\alpha}$$
at the point $a=0$. Thus, the Lie algebra of operators $L_r$ spanned by the operators (2.3.23) is the Lie algebra of the given $G_r$.
The operators (2.3.23) are sometimes termed the shift operators on the group $G_r$.
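The commutator computation used throughout can be scripted. A minimal sketch (the basis below, $X_1 = \partial/\partial x$, $X_2 = x\,\partial/\partial x + y\,\partial/\partial y$, is an illustrative choice matching Example 2.5):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = [x, y]

def commutator(xi, eta):
    """Coefficients of [X, Y] for X = xi^i d/dx^i, Y = eta^i d/dx^i."""
    return [sp.simplify(sum(xi[j]*sp.diff(eta[i], coords[j])
                            - eta[j]*sp.diff(xi[i], coords[j])
                            for j in range(2)))
            for i in range(2)]

X1 = [sp.Integer(1), sp.Integer(0)]   # d/dx
X2 = [x, y]                           # x d/dx + y d/dy

assert commutator(X1, X2) == [1, 0]   # [X1, X2] = X1, i.e. C^1_12 = 1
```

The commutator again lies in the span of the basis, so the two operators generate a two-dimensional Lie algebra of operators.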
2.4 Subgroup, normal subgroup and factor group
In this section we will discuss details of the correspondence between Lie algebras $L_r$ and local Lie groups $G_r$ established by Definition 2.11. Let us introduce in the space $E_r$ of vectors $e$ the operation of commutation defined by
$$[e_1, e_2]^\alpha = C_{\beta\gamma}^\alpha e_1^\beta e_2^\gamma \quad (\alpha = 1,\dots,r). \qquad (2.4.1)$$
One can readily verify that the introduced operation of commutation satisfies all the axioms of Definition 1.7 due to the properties (2.3.21) of the structure constants. Let us verify that the structure constants of the resulting $L_r$ are equal to the constants $C_{\beta\gamma}^\alpha$ in the basis $\{e_\alpha\}$ determined as follows: the vector $e_\alpha$ has the coordinates $e_\alpha^\beta = \delta_\alpha^\beta$. This follows from Eq. (2.2.2), written for $[e_\beta, e_\gamma]$, namely
$$[e_\beta, e_\gamma]^\alpha = C_{\sigma\tau}^\alpha e_\beta^\sigma e_\gamma^\tau = C_{\sigma\tau}^\alpha\delta_\beta^\sigma\delta_\gamma^\tau = C_{\beta\gamma}^\alpha = C_{\beta\gamma}^\sigma\delta_\sigma^\alpha = C_{\beta\gamma}^\sigma e_\sigma^\alpha,$$
whence
$$[e_\beta, e_\gamma] = C_{\beta\gamma}^\sigma e_\sigma,$$
which was to be proved.
Let us make another preliminary observation. The formulae (2.3.4) entail that
$$A_\gamma^\alpha(a) = \left.\frac{\partial\varphi^\alpha}{\partial b^\gamma}\right|_{b=0} = \delta_\gamma^\alpha + r_{\beta\gamma}^\alpha a^\beta + O(|a|^2),$$
whence
$$\left.\frac{\partial A_\gamma^\alpha}{\partial a^\beta}\right|_{a=0} = r_{\beta\gamma}^\alpha. \qquad (2.4.2)$$
Therefore, equations (2.3.18) taken at $a=0$ provide
$$C_{\beta\gamma}^\alpha = r_{\beta\gamma}^\alpha - r_{\gamma\beta}^\alpha. \qquad (2.4.3)$$
Consider in $G_r$ two subgroups $G_1$, $g_1(t)$ and $g_2(t)$, with the directing vectors $e_1$ and $e_2$, respectively. Let us construct a new curve in $G_r$:
$$\hat g(t) = g_1(\sqrt t)\,g_2(\sqrt t)\,g_1^{-1}(\sqrt t)\,g_2^{-1}(\sqrt t), \qquad (2.4.4)$$
where $t \geq 0$.
Lemma 2.4. The curve $\hat g(t)$ determined by Eq. (2.4.4) has the directing vector $e = [e_1, e_2]$, where the commutator is defined by Eqs. (2.4.1).
Proof. Equations (2.3.9) and (2.4.2) entail that
$$a^\alpha(t) = e^\alpha t + \tfrac12 r_{\beta\gamma}^\alpha e^\beta e^\gamma t^2 + O(t^3)$$
along the subgroup $G_1$ with the directing vector $e$. Therefore, if $b(t)$ are the coordinates of the curve $g_1(\sqrt t)\,g_2(\sqrt t)$, then
$$b^\alpha(t) = \varphi^\alpha(a_1(\sqrt t), a_2(\sqrt t)) = a_1^\alpha(\sqrt t) + a_2^\alpha(\sqrt t) + r_{\beta\gamma}^\alpha a_1^\beta(\sqrt t)\,a_2^\gamma(\sqrt t) + O(t^{3/2})$$
$$= (e_1^\alpha + e_2^\alpha)\sqrt t + \tfrac12 r_{\beta\gamma}^\alpha(e_1^\beta e_1^\gamma + e_2^\beta e_2^\gamma)\,t + r_{\beta\gamma}^\alpha e_1^\beta e_2^\gamma\,t + O(t^{3/2}).$$
Likewise,
$$c^\alpha(t) = -(e_1^\alpha + e_2^\alpha)\sqrt t + \tfrac12 r_{\beta\gamma}^\alpha(e_1^\beta e_1^\gamma + e_2^\beta e_2^\gamma)\,t + r_{\beta\gamma}^\alpha e_1^\beta e_2^\gamma\,t + O(t^{3/2})$$
for the coordinates $c(t)$ of the curve $g_1^{-1}(\sqrt t)\,g_2^{-1}(\sqrt t)$. Consequently, the coordinates $\hat a(t)$ of the curve $\hat g(t)$ from (2.4.4), which are equal to $\varphi(b(t), c(t))$, are given by
$$\hat a^\alpha(t) = (r_{\beta\gamma}^\alpha - r_{\gamma\beta}^\alpha)\,e_1^\beta e_2^\gamma\,t + O(t^{3/2}) = C_{\beta\gamma}^\alpha e_1^\beta e_2^\gamma\,t + O(t^{3/2}).$$
The relations (2.4.3) are used in the latter transition. Whence, differentiating with respect to $t$ and setting $t=0$, one obtains Eqs. (2.4.1) for the vector
$$e = \left.\frac{d\hat a}{dt}\right|_{t=0}.$$
2.4.2 Subgroup
Theorem 2.11. The set $L_s$ of the directing vectors of all one-parameter subgroups $G_1$ contained in a subgroup $G_s \subset G_r$ is a subalgebra of the Lie algebra $L_r$; conversely, to every subalgebra $L_s \subset L_r$ there corresponds a subgroup $G_s \subset G_r$.
Proof. Let $G_s$ be a subgroup of $G_r$, and let $e_1, e_2$ be directing vectors of subgroups $g_1(t), g_2(t) \subset G_s$. The curve $g_1(\alpha t)$ has the directing vector $\alpha e_1$; therefore $\{e\}$ is a linear space, namely a subspace $L_s \subset L_r$. Further, the curve $\hat g(t) \subset G_s$ constructed according to Eq. (2.4.4) has the directing vector $e = [e_1, e_2]$. Therefore, for any $e_1, e_2 \in L_s$ one also has $[e_1, e_2] \in L_s$, i.e. $L_s$ is a subalgebra in $L_r$ according to Definition 2.3.
In order to prove the converse, we consider $G_r$ in canonical coordinates of the first kind. Let us choose a basis in $L_r$ so that the basis vectors $e_{\alpha'}$ $(\alpha' = 1,\dots,s)$ provide a basis in the subalgebra $L_s$. Let us agree that the Greek indices with one prime $\alpha', \beta', \sigma', \dots$ and with two primes $\alpha'', \beta'', \sigma'', \dots$ run over the values $1,\dots,s$ and $s+1,\dots,r$, respectively. Then the coordinates of the vectors $e \in L_s$ are such that $e^{\alpha''} = 0$. Since $L_s$ is a subalgebra in $L_r$, the equations
$$C_{\beta\gamma}^\alpha = [e_\beta, e_\gamma]^\alpha$$
entail that
$$C_{\beta'\gamma'}^{\alpha''} = [e_{\beta'}, e_{\gamma'}]^{\alpha''} = 0. \qquad (2.4.6)$$
Let $G_s$ be the set $\{g(t)\}$ of elements of subgroups $G_1$ with the directing vectors from $L_s$. Let us introduce into $G_r$ a canonical system of coordinates of the first kind $\Sigma_a$ resulting from the construction of $G_r$ by means of its structure constants described in §2.3. Note that the elements of $G_s$ in the coordinate system $\Sigma_a$ are characterized by the equations $a^{\alpha''} = 0$. Indeed, since $\Sigma_a$ is canonical, the elements of $G_1$ have the coordinates $a^\alpha = e^\alpha t$, whence $a^{\alpha''} = e^{\alpha''}t = 0$ if $e \in L_s$.
Let us demonstrate that
$$V_{\beta'}^{\alpha''}(b') \equiv 0, \qquad b' = (b^1,\dots,b^s,0,\dots,0) \qquad (2.4.7)$$
in $\Sigma_a$. To this end, take a vector $e \in L_s$ and single out in the system of Eqs. (2.3.20) the subsystem with $\alpha = \alpha''$, $\beta = \beta'$, which has the form
$$\frac{dW_{\beta'}^{\alpha''}}{dt} = C_{\sigma''\tau'}^{\alpha''}\,e^{\tau'}W_{\beta'}^{\sigma''}, \qquad W_{\beta'}^{\alpha''}(0) = 0$$
(see Eq. (2.4.6)). Due to the homogeneity of these equations, the solution of the subsystem is $W_{\beta'}^{\alpha''}(t) = 0$, which entails Eqs. (2.4.7).
Setting $a = a'$, $b = b'$ in the system (2.3.11), one can now verify that $G_s$ is a local Lie group. Indeed, the previous equations in this case are satisfied identically by virtue of Eqs. (2.4.7), whereas the remaining equations become the completely integrable system
$$V_{\alpha'}^{\gamma'}(\varphi')\,\frac{\partial\varphi^{\alpha'}(a',b')}{\partial b^{\beta'}} = V_{\beta'}^{\gamma'}(b'), \qquad \varphi^{\alpha'}(a',0) = a^{\alpha'}.$$
The latter statement follows from the fact that, by virtue of Eq. (2.4.6), the $C_{\beta'\gamma'}^{\alpha'}$ provide a system of structure constants for $L_s$ and hence satisfy the Jacobi relations (2.3.21). Theorem 2.11 is proved.
Let now $G_s$ be a normal subgroup of $G_r$, so that
$$hgh^{-1}g^{-1} \in G_s$$
for any $g \in G_s$ and any $h \in G_r$. Therefore, Lemma 2.4 entails that if $\bar e \in L_s$ and $e \in L_r$, then one also has $[\bar e, e] \in L_s$. According to Definition 2.3 this means that $L_s$ is an ideal in $L_r$.
Conversely, let $L_s$ be an ideal in $L_r$. By virtue of Theorem 2.11 it corresponds to a subgroup $G_s \subset G_r$. Let us prove that
$$\hat g(t) = g_a\,g(t)\,g_a^{-1} \in G_s$$
for any one-parameter subgroup $g(t) \subset G_s$ and any $g_a \in G_r$. Since the curve $\hat g(t)$ is also a subgroup $G_1$, it is sufficient to prove that its directing vector $\hat e$ belongs to $L_s$. Introducing in $G_r$ a system of coordinates $\Sigma_a$ as we did in the proof of Theorem 2.11 and assuming that $g(t)$ has the directing vector $e$, one finds that the directing vector of $\hat g(t)$ is written in coordinates as follows:
$$\hat e^\alpha = V_\tau^\alpha(a)\,A_\sigma^\tau(-a)\,e^\sigma. \qquad (2.4.8)$$
Further, choosing the basis in $L_r$ in the same way as in the proof of Theorem 2.11 and invoking that $L_s$ is an ideal, one obtains
$$C_{\beta\gamma'}^{\alpha''} = [e_\beta, e_{\gamma'}]^{\alpha''} = 0$$
instead of Eq. (2.4.6). Proceeding as in the proof of Theorem 2.11, one obtains the equations
$$V_{\beta'}^{\alpha''}(b) = 0 \qquad (2.4.9)$$
for all $b$. Whence, writing Eqs. (2.3.11) with $\gamma = \gamma''$, $\beta = \beta'$ and using Eqs. (2.4.9), one obtains
$$\frac{\partial\varphi^{\alpha''}}{\partial b^{\beta'}} = 0,$$
which means that in the given case one also has
$$A_{\beta'}^{\alpha''}(a) = 0. \qquad (2.4.10)$$
By virtue of Eqs. (2.4.9) and (2.4.10), the formulae (2.4.8) written for $e \in L_s$ provide $\hat e^{\alpha''} = 0$, i.e. $\hat e \in L_s$, which proves the statement. Let us call the elements $g_c$ and $g_b$ equivalent, and write $g_c \sim g_b$, if $g_cg_b^{-1} \in G_s$.
Lemma 2.5. The equivalence $g_c \sim g_b$ holds if and only if
$$c'' = b''.$$
Proof. The equation
$$\frac{\partial\varphi^{\alpha''}(a,b)}{\partial b^{\beta'}} = 0$$
was established while proving Theorem 2.12. Since $\Sigma_a$ is a canonical system of the first kind, one has
$$\varphi^\alpha(a,b) = -\varphi^\alpha(-b,-a),$$
whence
$$\frac{\partial\varphi^{\alpha''}(a,b)}{\partial a^{\beta'}} = 0$$
as well. Therefore the functions $\varphi^{\alpha''}(a,b)$ depend only on the variables $a^{\gamma''}, b^{\gamma''}$. Hence, if $c'' = b''$, then the coordinates $d = \varphi(c, b^{-1})$ of the element $g_cg_b^{-1}$ satisfy $d^{\alpha''} = \varphi^{\alpha''}(b, b^{-1}) = 0$, whence
$$g_cg_b^{-1} \in G_s.$$
Conversely, let $g_c = g_ag_b$, where $g_a \in G_s$. Then $a'' = 0$ and $c^{\alpha''} = \varphi^{\alpha''}(a,b) = \varphi^{\alpha''}(0,b) = b^{\alpha''}$, so that $c'' = b''$. Lemma 2.5 is proved.
The normal subgroup $G_s$ generates a partition of $G_r$ into classes of equivalent elements; let $h(g)$ denote the class containing the element $g$. The multiplication of classes is defined by $h(g_1)h(g_2) = h(g_1g_2)$. One can easily verify that this operation does not depend on the choice of the "representatives" $g_1, g_2$ of the classes $h(g_1), h(g_2)$ and that it satisfies the group axioms.
Let us introduce coordinates into the set of classes by taking the numbers $a^{\alpha''}$ as the coordinates of the class $h(g_a)$. Lemma 2.5 demonstrates that the correspondence $h(g_a) \leftrightarrow a''$ is one-to-one. The multiplication law for classes in these coordinates is given by the functions $\varphi''(a'', b'')$, which satisfy the smoothness requirement for the multiplication law in a local Lie group. Hence, the set of classes $h(g_a)$ is a local Lie group.
Definition 2.14. The set of classes of equivalent elements of the group $G_r$ generated by its normal subgroup $G_s$ is called the factor group of the group $G_r$ by its normal subgroup $G_s$ and is denoted by the symbol $G_r/G_s$.
Definition 2.4, given for Lie algebras, is the analogue of Definition 2.14. If $G_s$ is a normal subgroup in $G_r$ and $L_s$ is the corresponding ideal in the Lie algebra $L_r$ of the group $G_r$, then one can construct both the factor group $G_r/G_s$ and the quotient algebra $L_r/L_s$.
Theorem 2.13. The quotient algebra $L_r/L_s$ is the Lie algebra of the factor group $G_r/G_s$.
Proof. The quotient algebra $L_r/L_s$ is the set of vectors representing the classes of equivalent vectors $e \in L_r$ with respect to the ideal $L_s$:
$$e \sim \bar e \quad \text{if} \quad e - \bar e \in L_s.$$
Choosing the basis in $L_r$ in the same way as in the proof of Theorem 2.11, one obtains the equivalence criterion in the coordinate form: $e'' = \bar e''$ or, to be more exact,
$$e^{\alpha''} = \bar e^{\alpha''}.$$
Hence, $L_r/L_s$ can be considered as the set of vectors of the form $e''$. By virtue of Lemma 2.5 and the construction of the factor group $G_r/G_s$, the directing vectors of one-parameter subgroups from $G_r/G_s$ also have the form $e''$. Therefore, the operation of commutation (2.4.1) in the Lie algebra of the group $G_r/G_s$ is given by
$$[e_1, e_2]^{\alpha''} = C_{\beta''\gamma''}^{\alpha''}e_1^{\beta''}e_2^{\gamma''}.$$
The proof is completed by the observation that the $C_{\beta''\gamma''}^{\alpha''}$ are the structure constants of the quotient algebra $L_r/L_s$.
2.5 Inner automorphisms of a group and of its Lie algebra
The transformation $\Gamma_a$ of the group $G_r$ defined by $\Gamma_a(g) = g_a\,g\,g_a^{-1}$ is called an inner automorphism of $G_r$. The set $G_A$ of all inner automorphisms is a group with respect to the usual multiplication of mappings:
$$\Gamma_{\varphi(b,a)}(g) = g_bg_a\,g\,g_a^{-1}g_b^{-1} = \Gamma_b(\Gamma_a(g)). \qquad (2.5.1)$$
It is manifest that the automorphisms $\Gamma_a$ and $\Gamma_b$ coincide if and only if the element $g_ag_b^{-1}$ belongs to the center $Z$ of the group $G_r$. Therefore, there is a one-to-one correspondence between inner automorphisms and elements of the factor group $G_r/Z$. Moreover, the multiplication law in $G_A$ will be the same as in $G_r/Z$ due to (2.5.1). It follows that the set $G_A$ is a local Lie group with the multiplication (2.5.1) and that the group $G_A$ is isomorphic to the factor group $G_r/Z$.
For every one-parameter subgroup $g(t) \subset G_r$ with the directing vector $e$, the curve $\Gamma_a(g(t)) = g_a\,g(t)\,g_a^{-1}$ is also a subgroup $G_1$ with the directing vector (2.4.8). The formula (2.4.8) determines a linear transformation $l_a$ in $L_r$ given by the matrix
$$l_\beta^\alpha(a) = V_\tau^\alpha(a)\,A_\beta^\tau(-a). \qquad (2.5.2)$$
It is evident that the automorphisms $\Gamma_a$ and $\Gamma_b$ are identical if and only if the transformations $l_a$ and $l_b$ are identical, and that the product $\Gamma_a\Gamma_b$ corresponds to the product $l_al_b = l_{\varphi(a,b)}$ given by the matrix $l_\beta^\alpha(\varphi(a,b))$.
It follows that the set L of matrices lβα (a) with the variable a is a local Lie group that
is isomorphic to the group of inner automorphisms GA : Thus, we have the following
statement.
Theorem 2.14. The Lie algebra of the group GA is isomorphic to the adjoint algebra
of the Lie algebra Lr :
Proof. Let us consider the automorphisms $\Gamma_a$ along a one-parameter subgroup $a = ut$ with the directing vector $u$. Then $\Gamma_{ut}$ is a subgroup $G_1$ in the group $G_A$, and hence the matrices $l_\beta^\alpha(ut)$ form a one-parameter subgroup of matrices in the group $L$. Therefore
$$l_\beta^\alpha(u(t+s)) = l_\sigma^\alpha(us)\,l_\beta^\sigma(ut).$$
Differentiating with respect to $s$ and letting $s=0$, one obtains
$$\frac{dl_\beta^\alpha(ut)}{dt} = u^\gamma\left.\frac{\partial l_\sigma^\alpha(a)}{\partial a^\gamma}\right|_{a=0}l_\beta^\sigma(ut).$$
The constant matrix in this equation is calculated by using (2.5.2) and the relation $V_\sigma^\alpha(a)A_\beta^\sigma(a) = \delta_\beta^\alpha$:
$$\left.\frac{\partial l_\sigma^\alpha(a)}{\partial a^\gamma}\right|_{a=0} = \left[\frac{\partial V_\tau^\alpha(a)}{\partial a^\gamma}\,A_\sigma^\tau(-a) + V_\tau^\alpha(a)\,\frac{\partial A_\sigma^\tau(-a)}{\partial a^\gamma}\right]_{a=0}$$
$$= \left[-V_\tau^\beta(a)V_\nu^\alpha(a)A_\sigma^\tau(-a)\,\frac{\partial A_\beta^\nu(a)}{\partial a^\gamma} + V_\tau^\alpha(a)\,\frac{\partial A_\sigma^\tau(-a)}{\partial a^\gamma}\right]_{a=0}$$
$$= \left[-\frac{\partial A_\sigma^\alpha(a)}{\partial a^\gamma} + \frac{\partial A_\sigma^\alpha(-a)}{\partial a^\gamma}\right]_{a=0} = -2\left.\frac{\partial A_\sigma^\alpha(a)}{\partial a^\gamma}\right|_{a=0} = -2r_{\gamma\sigma}^\alpha.$$
The latter equality follows from Eq. (2.4.2). The equations $a^{-1} = -a$ and
$$\varphi^\alpha(a,-a) = 0,$$
i.e. $r_{\beta\gamma}^\alpha a^\beta a^\gamma = 0$, yield that the constants $r_{\beta\gamma}^\alpha$ are skew-symmetric in the lower indices in a canonical system of coordinates. Therefore, equation (2.4.3) provides
$$-2r_{\gamma\sigma}^\alpha = C_{\sigma\gamma}^\alpha.$$
Thus, the matrix (2.5.2) satisfies the system of equations
$$\frac{dl_\beta^\alpha(ut)}{dt} = C_{\sigma\gamma}^\alpha u^\gamma l_\beta^\sigma(ut), \qquad l_\beta^\alpha(0) = \delta_\beta^\alpha \qquad (2.5.3)$$
along the subgroup $G_1$: $a = ut$.
We have already constructed the inner automorphisms of the Lie algebra $L_r$, given by Eqs. (2.2.8) with the matrices $f_\beta^\alpha(t)$, in §2.2. Substitution of the expressions (2.2.8) into the equations (2.2.7) of the subgroup $G_1$ yields the following system of equations for these matrices:
$$\frac{df_\beta^\alpha(t)}{dt} = C_{\sigma\gamma}^\alpha e^\gamma f_\beta^\sigma(t), \qquad f_\beta^\alpha(0) = \delta_\beta^\alpha,$$
coinciding with (2.5.3) when $u = e$. Hence $l_\beta^\alpha(et) = f_\beta^\alpha(t)$, and one obtains that the group $G_A$ is isomorphic to the group of inner automorphisms of the Lie algebra $L_r$. Recall that isomorphic groups have isomorphic Lie algebras. Furthermore, the Lie algebra of a group of automorphisms of the Lie algebra $L_r$ is the Lie algebra $\{E\}$ of operators $E = e^\beta E_\beta$ generated by the operators (2.2.6). The latter Lie algebra is isomorphic to the adjoint algebra $L_D$ of the Lie algebra $L_r$. This completes the proof of Theorem 2.14.
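The adjoint machinery behind Theorem 2.14 is easy to exercise numerically. In the sketch below (an assumed standard example: so(3), with structure constants given by the Levi-Civita symbol), the adjoint matrices satisfy $[\operatorname{ad}u, \operatorname{ad}v] = \operatorname{ad}[u,v]$, i.e. they realize the adjoint algebra $L_D$:

```python
import numpy as np

def eps(i, j, k):
    return (j - i) * (k - i) * (k - j) / 2   # Levi-Civita on {0, 1, 2}

def ad(e):
    """Matrix of ad(e): (ad(e)v)^a = C^a_{t s} e^t v^s for so(3)."""
    return np.array([[sum(eps(a, t, s)*e[t] for t in range(3))
                      for s in range(3)] for a in range(3)])

def bracket(u, v):
    """Commutator [u, v]^a = C^a_{b g} u^b v^g."""
    return np.array([sum(eps(a, b, g)*u[b]*v[g]
                         for b in range(3) for g in range(3))
                     for a in range(3)])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)

lhs = ad(u) @ ad(v) - ad(v) @ ad(u)
assert np.allclose(lhs, ad(bracket(u, v)))   # ad is a Lie algebra homomorphism
```

This homomorphism property is exactly the matrix form of the Jacobi relations (2.3.21).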
2.6 Local Lie group of transformations
2.6.1 Introduction
Consider a family $\{T_a\}$ of point transformations of the space $E^N$ depending on the parameter $a \in E_r$:
$$x'^i = f^i(x,a) \quad (i = 1,\dots,N), \qquad f^i(x,0) = x^i. \qquad (2.6.1)$$
Definition 2.16. The family $\{T_a\}$ is called a local Lie group $G_r^N$ of point transformations of the space $E^N$ if $\{T_a\}$ is a local Lie group $G_r$ with the usual multiplication of transformations and if the functions $f^i(x,a)$ in (2.6.1) are twice continuously differentiable with respect to the variables $x, a$.
If $\varphi(a,b)$ is the multiplication law of elements in $G_r$, then the multiplication of transformations in $G_r^N$ is carried out by the formulae
$$T_bT_a = T_{\varphi(a,b)}, \quad \text{i.e.} \quad f^i(f(x,a),b) = f^i(x,\varphi(a,b)) \quad (i = 1,\dots,N). \qquad (2.6.2)$$
Since the group $G_r^N$ is a group $G_r$ given by the multiplication rule $\varphi(a,b)$, all notions and facts concerning $G_r$ refer to $G_r^N$ as well. However, due to the special form of the multiplication law in $G_r^N$, some new notions and facts arise.
Let us introduce the auxiliary functions
$$\xi_\alpha^i(x) = \left.\frac{\partial f^i(x,a)}{\partial a^\alpha}\right|_{a=0} \quad (i = 1,\dots,N;\ \alpha = 1,\dots,r) \qquad (2.6.3)$$
and the operators
$$X_\alpha = \xi_\alpha^i(x)\,\frac{\partial}{\partial x^i} \quad (\alpha = 1,\dots,r). \qquad (2.6.4)$$
The operators $X_\alpha$ are referred to as the basis operators of the group $G_r^N$.
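The passage from transformations to basis operators via (2.6.3) is mechanical. A short sketch (the affine transformations $x' = e^{a^2}x + a^1$ are an assumed example, not taken from the text):

```python
import sympy as sp

x, A1, A2 = sp.symbols('x a1 a2')

# Assumed transformation x' = f(x, a): translation a1 combined with dilation exp(a2)
f = sp.exp(A2)*x + A1

# xi_alpha(x) = d f / d a^alpha at a = 0, the formula (2.6.3)
xi1 = sp.diff(f, A1).subs({A1: 0, A2: 0})
xi2 = sp.diff(f, A2).subs({A1: 0, A2: 0})

assert (xi1, xi2) == (1, x)   # basis operators X1 = d/dx, X2 = x d/dx
```

The resulting operators are the one-dimensional analogue of the basis used in Example 2.5 below.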
Theorem 2.15. The functions $x'^i = f^i(x,a)$ satisfy the system of equations
$$\frac{\partial x'^i}{\partial a^\alpha} = \xi_\sigma^i(x')\,V_\alpha^\sigma(a), \qquad x'^i|_{a=0} = x^i, \qquad (2.6.5)$$
where the $V_\beta^\alpha(a)$ are the auxiliary functions of the group $G_r$, and the operators $X_\alpha$ (2.6.4) are linearly independent. Conversely, suppose that a local Lie group $G_r$ with the auxiliary functions $V_\beta^\alpha(a)$ and linearly independent operators $X_\alpha$ (2.6.4) are given. If the system of equations (2.6.5) has a unique solution for any $x \in E^N$, then substitution of the solution of the system (2.6.5) into the formulae (2.6.1) determines a local Lie group of transformations $G_r^N$ isomorphic to the group $G_r$.
Proof. Let $\Delta a$ be a (small) shift of the point $a$. By virtue of Eqs. (2.6.1) and (2.6.2), the formula of multiplication of transformations yields
$$f^i(x, a+\Delta a) = f^i(x', \varphi(a^{-1}, a+\Delta a)).$$
Making the Taylor expansion of the right-hand and left-hand sides, using the definition (2.6.3), and comparing the principal parts as $\Delta a \to 0$, one obtains Eqs. (2.6.5). The initial conditions of (2.6.5) follow directly from Definition 2.16, since
the unit element of the group $G_r^N$ is the identical transformation of $E^N$. In order to prove that the operators (2.6.4) are linearly independent, let us assume that $\Sigma_a$ is a canonical system of the first kind and that $e_0^\alpha X_\alpha = 0$ for some vector $e_0$, i.e.
$$e_0^\sigma\,\xi_\sigma^i(x) \equiv 0 \quad (i = 1,\dots,N).$$
One has
$$x'^i = f^i(x, e_0t)$$
along the subgroup $G_1$ with the equations $a^\alpha = e_0^\alpha t$, and Equations (2.6.5), (2.3.19) yield
$$\frac{dx'^i}{dt} = e_0^\alpha\,\frac{\partial x'^i}{\partial a^\alpha} = \xi_\sigma^i(x')\,V_\alpha^\sigma(e_0t)\,e_0^\alpha = \xi_\sigma^i(x')\,e_0^\sigma = 0. \qquad (2.6.6)$$
Equations (2.6.6) show that $x'^i = x^i$ along the whole $G_1$, i.e. all the transformations of $G_1$ are identical. This is possible only for $e_0 = 0$, which was to be proved.
Let us prove the converse statement of Theorem 2.15. Let the functions obtained as the solution of the system (2.6.5) have the form (2.6.1), thus determining the family $\{T_a\}$ of transformations of $E^N$. Let us demonstrate that these $T_a$ satisfy Eqs. (2.6.2). To this end, set
$$x''^i = f^i(x', b), \qquad y^i = f^i(x, \varphi(a,b)).$$
Using Eqs. (2.6.5) and (2.3.13), as well as the property (2.3.12), one obtains
$$\frac{\partial x''^i}{\partial b^\alpha} = \xi_\sigma^i(x'')\,V_\alpha^\sigma(b), \qquad x''^i|_{b=0} = x'^i,$$
$$\frac{\partial y^i}{\partial b^\alpha} = \frac{\partial y^i}{\partial\varphi^\beta}\,\frac{\partial\varphi^\beta}{\partial b^\alpha} = \xi_\sigma^i(y)\,V_\beta^\sigma(\varphi)\,A_\tau^\beta(\varphi)\,V_\alpha^\tau(b) = \xi_\sigma^i(y)\,V_\alpha^\sigma(b), \qquad y^i|_{b=0} = x'^i.$$
One can see that $x''^i$ and $y^i$ satisfy, as functions of the point $b$, one and the same system of equations (2.6.5) with the same initial conditions. Uniqueness of the solution guarantees that $x''^i = y^i$ for any $x, a, b$, which is Eq. (2.6.2).
This proves the group property of the family $\{T_a\}$ and shows that the mapping $G_r \to \{T_a\}$ given by the formula $\psi(a) = T_a$ is at least a homomorphism. It remains to demonstrate that $\psi$ is an isomorphism. Indeed, along a subgroup $G_1$ from the kernel of the homomorphism $\psi$ with the directing vector $e_0$, one would have $x'^i = x^i$ for all $a = e_0t$, and hence $e_0^\sigma\xi_\sigma^i(x) \equiv 0$ according to (2.6.6), i.e.
$$e_0^\sigma X_\sigma = 0.$$
Since the operators (2.6.4) are linearly independent by assumption, the above equation yields $e_0 = 0$, so that the kernel of $\psi$ consists of the single point $a = 0$. This proves that $\psi$ is an isomorphism, and Theorem 2.15 is proved.
Corollary 2.3. The functions $x'^i = f^i(x, et)$ satisfy the equations
$$\frac{dx'^i}{dt} = e^\alpha\,\xi_\alpha^i(x'), \qquad x'^i|_{t=0} = x^i \qquad (2.6.7)$$
along the subgroup $G_1$ with the directing vector $e$. In fact, these equations have already been written out in Eqs. (2.6.6).
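Corollary 2.3 turns the reconstruction of a one-parameter group from its operator into an ordinary differential equation. A minimal numerical sketch (assuming the dilation operator $X = x\,d/dx$; the explicit Euler scheme and the step count are arbitrary choices of this illustration):

```python
import math

def flow(xi, x0, t, n=100000):
    """Integrate dx'/dt = xi(x'), x'(0) = x0, Eq. (2.6.7), by explicit Euler."""
    x, h = x0, t/n
    for _ in range(n):
        x += h*xi(x)
    return x

# Operator X = x d/dx: the exact flow of (2.6.7) is the dilation x' = exp(t)*x
x1 = flow(lambda x: x, 2.0, 1.0)
assert abs(x1 - 2.0*math.e) < 1e-3
```

Solving (2.6.7) exactly rather than numerically gives the closed-form transformations of the subgroup, which is how the groups in the examples below are obtained.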
The structure constants $C_{\beta\gamma}^\alpha$ of the group $G_r$ are also called the structure constants of the group of transformations $G_r^N$.
Theorem 2.16. The linear span of the operators $X_\alpha$ (2.6.4) is a Lie algebra of operators whose structure constants coincide with the structure constants $C_{\alpha\beta}^\gamma$ of the group $G_r$, so that
$$[X_\alpha, X_\beta] = C_{\alpha\beta}^\sigma X_\sigma. \qquad (2.6.8)$$
Conversely, if an $r$-dimensional Lie algebra of operators with the basis (2.6.4) is given in $E^N$, then there exists a local Lie group of transformations $G_r^N$ whose basis operators coincide with the given operators (2.6.4).
Proof. The solvability of the system (2.6.5) for arbitrary initial data means that it is completely integrable; note that the system has the form (2.3.14). Writing the test (2.3.15) for complete integrability and using the Maurer-Cartan equations (2.3.17), one can verify that the criterion for complete integrability of the system (2.6.5) is given precisely by Eqs. (2.6.8). This establishes the equivalence of the complete integrability of Eqs. (2.6.5) and the validity of Eqs. (2.6.8).
In particular, taking $N = r$ and letting the group $G_r$ act in the space $E^r(a)$ of its own parameters by the transformations
$$a \to c: \quad c^\alpha = \varphi^\alpha(a,b),$$
one obtains a local Lie group of transformations $G_r^r$. It is known as the first parametric group of the group $G_r$. The formulae (2.6.3) for this group of transformations coincide with the formulae (2.3.5), and the basis operators of its Lie algebra $L_r^r$ coincide with the operators (2.3.23). Likewise, one can define the second parametric group of the group $G_r$ as the group of transformations of the space $E^r(b)$, namely the transformations $b \to c$ given by the same formulae $\varphi(a,b) = c$.
As has already been mentioned, the proofs of the existence theorems for the groups $G_r$ and $G_r^N$ contain an algorithm for constructing the multiplication laws (in $G_r$) or the transformations (in $G_r^N$). The corresponding groups are obtained in canonical coordinates of the first kind. However, this algorithm is inconvenient in applications because it is very cumbersome, and it is hardly ever used in practice. The algorithm for constructing $G_r$ and $G_r^N$ based on finding a "basis" set of subgroups $G_1$ is more applicable in practice.
The algorithm consists in the following. Suppose that we know a set of $r$ subgroups $G_1$ of the group $G_r$ with the property that the directing vectors of these subgroups are linearly independent in a certain system of coordinates $\Sigma_a$. The subgroups can be written by the formulae $a = a_\alpha(\bar a^\alpha)$ or
$$g_\alpha(\bar a^\alpha) = g_{a_\alpha(\bar a^\alpha)}, \qquad (2.6.9)$$
where $\bar a^\alpha$ is the parameter of the subgroup with the number $\alpha$. Now let us compose a single product of all the $g_\alpha(\bar a^\alpha)$. It will be some element $g_a \in G_r$ in the coordinates $\Sigma_a$:
$$g_a = g_1(\bar a^1)\,g_2(\bar a^2)\cdots g_r(\bar a^r). \qquad (2.6.10)$$
We claim that every element $g_a \in G_r$ is uniquely represented in the form (2.6.10) when the parameters $\bar a^\alpha$ vary independently of each other. In other words, we state that the values of the parameters $\bar a^\alpha$ can be taken as new coordinates in $G_r$, thus providing a new system of coordinates $\Sigma_{\bar a}$. Indeed, equation (2.6.10) shows that equations (2.3.2) hold for $a$ and $\bar a$, and one has only to verify that the Jacobian
$$\left.\det\left\|\frac{\partial a^\alpha}{\partial\bar a^\beta}\right\|\right|_{\bar a=0}$$
does not vanish. Since all the $\bar a^\alpha$ are independent, one can assume, while differentiating with respect to $\bar a^\beta$, that all the $\bar a^\alpha$ are equal to zero except for $\bar a^\beta$. Then equation (2.6.10) takes the form
$$g_a = g_\beta(\bar a^\beta)$$
or, by virtue of Eq. (2.6.9), $g_a = g_{a_\beta(\bar a^\beta)}$, so that the derivative
$$\left.\frac{\partial a^\alpha}{\partial\bar a^\beta}\right|_{\bar a=0}$$
is equal to the coordinate $e_\beta^\alpha$ of the directing vector $e_\beta$ of the subgroup $G_1$ with the number $\beta$, according to the definition of the directing vector. Therefore, the considered Jacobian equals $\det\|e_\beta^\alpha\|$ and does not vanish due to the linear independence of the vectors $e_\beta$ $(\beta = 1,\dots,r)$.
The coordinates introduced in $G_r$ according to the formula (2.6.10) are called canonical coordinates of the second kind.
Likewise, one can introduce canonical coordinates of the second kind in $G_r^N$. In this case, one multiplies the transformations $T_{a_\alpha}$ of the subgroups $G_1$ with the parameters $\bar a^\alpha$. The result is written in the form
$$T_{\bar a} = T_1(\bar a^1)\,T_2(\bar a^2)\cdots T_r(\bar a^r)$$
instead of (2.6.10).
Example 2.5. Let us consider the Lie algebra of operators L₂² with the basis

X₁ = ∂/∂x,  X₂ = x ∂/∂x + y ∂/∂y

and construct the corresponding group G₂² in canonical coordinates of the second kind. The subgroup G₁ with the operator X₁ is the group of translations and, as we already know from Chapter 1, has the form

T_a: x′ = x + a,  y′ = y.

The subgroup G₁ with the operator X₂ is the group of dilations and has the form

T_b: x′ = bx,  y′ = by.

Multiplying these transformations, one obtains the group G₂² in canonical coordinates of the second kind:

T_(a,b) = T_a T_b: x′ = bx + a,  y′ = by.
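In this simple case the product of the two subgroups can be checked by direct composition of maps. A minimal sketch (assuming sympy is available; the names follow the example above):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

# One-parameter subgroups of Example 2.5:
# T_a (translations, operator X1 = d/dx) and T_b (dilations, X2 = x d/dx + y d/dy).
Ta = lambda X, Y: (X + a, Y)
Tb = lambda X, Y: (b * X, b * Y)

# Product in the fixed order T_a T_b: dilate first, then translate.
composed = Ta(*Tb(x, y))
assert composed == (b * x + a, b * y)
```

The opposite order gives (b(x + a), by), so the order of factors in (2.6.10) matters for non-commuting subgroups.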
Let G_r^N be a local Lie group of point transformations x′ = T_a x in the space E^N(x) and let

X_α = ξ_α^i(x) ∂/∂x^i  (α = 1, …, r)    (3.1.1)

be a basis of its Lie algebra L_r^N of operators.

Definition 3.1. A function I(x), which is not identically constant, is called an invariant of the group G_r^N if

I(T_a x) = I(x)

for all transformations T_a ∈ G_r^N.

Comparing this definition with Definition 1.3 and invoking the corollary of Theorem 2.7, one can see that the function I(x) is an invariant of the group G_r^N if and only if it is an invariant of every subgroup G₁ ⊂ G_r^N. Theorem 1.4 gives the necessary and sufficient condition for the group G₁ to have I(x) as its invariant. By virtue of this theorem, I(x) is an invariant of G_r^N if and only if

X_α I(x) = 0  (α = 1, …, r).    (3.1.2)
Thus, an invariant I(x) of the group G_r^N has to be a solution of the system of linear differential equations (3.1.2). The questions then arise about the existence and the number of solutions of the system (3.1.2). First of all, it is clear that if I¹(x), …, I^s(x) are some solutions of the system (3.1.2), then any function F(I¹(x), …, I^s(x)) is also a solution:

X_α F(I) = (∂F/∂I^σ) X_α I^σ = 0.

Thus, the questions of functional dependence and independence of functions arise. These questions belong to classical analysis. Let us recall some facts from this field.
Functions f¹(x), …, f^s(x) are said to be functionally dependent if there exists a not identically vanishing function F(z¹, …, z^s) such that the function F(f¹(x), …, f^s(x)) vanishes identically with respect to the independent variables x = (x¹, …, x^N) ∈ E^N:

F(f¹(x), …, f^s(x)) ≡ 0.

If, on the contrary, the identity

F(f¹(x), …, f^s(x)) ≡ 0

implies

F(z¹, …, z^s) ≡ 0

with respect to the variables z, then the functions f¹(x), …, f^s(x) are said to be functionally independent. In what follows, the functions f^σ(x) are supposed to be once continuously differentiable.
Lemma 3.1. The functions f^σ(x) (σ = 1, …, s) are functionally independent if and only if the general rank R(J) of the Jacobi matrix

J = (∂f^σ/∂x^i)

is equal to s:

R(J) = s.

If R = R(J) < s, then there exist s − R functionally independent functions F_µ(z¹, …, z^s) such that

F_µ(f¹(x), …, f^s(x)) ≡ 0  (µ = 1, …, s − R).
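Lemma 3.1 translates directly into a rank computation. A small sketch (assuming sympy; the functions f¹ = x + y, f² = xy, f³ = x² + y² are an illustrative choice, dependent because f³ = (f¹)² − 2f²):

```python
import sympy as sp

x, y = sp.symbols('x y')

# f3 = f1**2 - 2*f2, so the three functions are functionally dependent.
f = [x + y, x * y, x**2 + y**2]

# Jacobi matrix J = (df^sigma / dx^i) and its general (generic) rank.
J = sp.Matrix([[sp.diff(fs, v) for v in (x, y)] for fs in f])
assert J.rank() == 2          # rank < s = 3  ->  dependent (Lemma 3.1)

# Dropping f3 leaves a functionally independent pair.
J2 = sp.Matrix([[sp.diff(fs, v) for v in (x, y)] for fs in f[:2]])
assert J2.rank() == 2         # rank = s = 2  ->  independent
```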
3.1 Invariants of the group GNr 93
Let us turn to investigating a system of equations of the form (3.1.2). The operators X_α are of the form (3.1.1) and are not supposed to be basis operators of L_r^N so far.

Definition 3.2. Operators X_α (α = 1, …, r) are said to be linearly connected if there exist functions ϕ^α(x) (α = 1, …, r), not all identically zero, such that

ϕ^α X_α ≡ 0.

A system of equations

X_α f ≡ ξ_α^i(x) ∂f/∂x^i = 0  (α = 1, …, s)    (3.1.3)

is said to be complete (Jacobian) if the operators X_α compose a complete (Jacobian) system of operators.
Lemma 3.2. If the system (3.1.3) is complete, then s ≤ N. When s < N there exist exactly N − s functionally independent solutions of the system, and any of its solutions is a function of them.

Proof. Note that the property of the system (3.1.3) of being complete or Jacobian does not depend on the choice of a system of coordinates in E^N. This follows from Lemma 1.4, see §1.6. Further, if s > N, then there are functions ϕ^α(x) such that ϕ^α X_α ≡ 0, since in this case the matrix
94 3 Group invariant solutions of differential equations
(ξ_α^i(x))

has the number of columns, N, less than the number of rows, s, so that the rows are linearly dependent for every x ∈ E^N.
Let us introduce the notion of equivalent systems of operators (or of equations of the form (3.1.3)). A system of operators {X′_α} is said to be equivalent to a system of operators {X_α} if the X′_α are independent linear combinations (with variable coefficients) of the operators X_α, specifically, if there exist functions ω_α^β(x) such that

|ω_α^β(x)| ≠ 0

and

X′_α = ω_α^β(x) X_β.
It is evident that equivalent systems of equations (3.1.3) have the same solutions. Let us demonstrate that any complete system is equivalent to some Jacobian system. Indeed, the completeness of the system {X_α} entails that the general rank of the matrix

(ξ_α^i(x))

equals s. If we assume, without loss of generality, that a non-vanishing minor of order s of this matrix is composed of the first s columns, then one obtains an equivalent system {X′_α} with ω_α^β(x) equal to the elements of the matrix inverse to this minor. The matrix (ξ′_α^i) of the resulting equivalent system has the form

( 1 0 ⋯ 0  ξ′₁^{s+1} ⋯ ξ′₁^N )
( 0 1 ⋯ 0  ξ′₂^{s+1} ⋯ ξ′₂^N )
( ⋮ ⋮ ⋱ ⋮      ⋮          ⋮   )
( 0 0 ⋯ 1  ξ′_s^{s+1} ⋯ ξ′_s^N ).
We will carry out a procedure of s steps to reduce the system (3.1.3) to the simplest form. The first step consists in finding, by means of Theorem 1.3, a system of coordinates (y) in E^N, where y^i = y^i(x) (i = 1, …, N), in which the operator X₁ becomes the translation operator with respect to y¹:

X₁ = Y₁ = ∂/∂y¹.

In order to construct the system (y), one has to find solutions y^i(x) of the equations

X₁ y¹(x) = 1,  X₁ y^{i′}(x) = 0  (i′ = 2, …, N).

The operators X_α written in the variables (y) provide a system of operators {Y_α} with

Y₁ = ∂/∂y¹.
As has already been mentioned, the system {Y_α} is Jacobian again, and if

Y_{α′} = η_{α′}^i(y) ∂/∂y^i  (α′ = 2, …, s),

then

[Y₁, Y_{α′}] = 0

provides that

∂η_{α′}^i/∂y¹ = 0.

Thus, the coordinates η_{α′}^i are independent of the variable y¹. Since the differentiation with respect to y¹ is absent in the coordinates of the commutator [Y_{α′}, Y_{β′}] with the numbers 2, …, N, it follows from the above facts that the system of operators

Y′₁ = ∂/∂y¹,  Y′_{α′} = η_{α′}^{i′}(y′) ∂/∂y^{i′}  (α′ = 2, …, s; i′ = 2, …, N),

which is equivalent to the system {X_α}, is Jacobian again. Moreover, the operators {Y′_{α′}} (α′ = 2, …, s) act in the space E^{N−1}(y′) of the points y′ = (y², …, y^N) and compose a Jacobian system there. The construction of the system {Y′_{α′}} completes the first step.

Since {Y′_{α′}} has all the properties of {X_α}, one can make the second, …, s-th step of the procedure and as a result arrive at a system of coordinates (z) and a system of operators {Z_α} of the form

Z₁ = ∂/∂z¹, …, Z_s = ∂/∂z^s.

Every solution of the resulting system of equations is then a function
f = f(z^{s+1}, …, z^N).

Returning to the initial coordinates (x), one obtains solutions of the system (3.1.3),

f^τ = z^τ(x)  (τ = s + 1, …, N),

with the properties mentioned in Lemma 3.2. This proves the lemma.

Note in addition that when s = N the complete system has no functionally independent solutions at all, for it can be satisfied only by a constant.
Note that the above proof of Lemma 3.2 contains, in fact, an algorithm for constructing functionally independent solutions of the system (3.1.3). The basic element of the algorithm is the transition {X_α} → {Y′_α}, requiring only the integration of ordinary differential equations.
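The basic step of the algorithm — straightening one operator into a translation — amounts to solving X y¹ = 1 and X y^{i′} = 0, which is done by integrating the characteristic ordinary differential equations. A hedged sketch (assuming sympy; the operator X = x ∂/∂x + y ∂/∂y is an illustrative choice, not one from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def apply_X(f):
    # Illustrative operator X = x d/dx + y d/dy
    return x * sp.diff(f, x) + y * sp.diff(f, y)

# Integrating the characteristic equations dx/x = dy/y = dt gives:
y1 = sp.log(x)   # solves X(y1) = 1  (the "straightened" translation variable)
y2 = y / x       # solves X(y2) = 0  (an invariant)

assert sp.simplify(apply_X(y1)) == 1
assert sp.simplify(apply_X(y2)) == 0
```

In the coordinates (y¹, y²) this operator becomes ∂/∂y¹, which is exactly the form Y₁ used in the first step above.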
Let us turn back to the problem of invariants of the group G_r^N with the basis (3.1.1) of operators of its Lie algebra L_r^N. Let us introduce the matrix of coordinates of the operators X_α:

M = (ξ_α^i(x)),    (3.1.4)

where α is the number of the row and i is the number of the column, α = 1, …, r; i = 1, …, N. The general rank of the matrix M is denoted by R, so that R = R(M). The following result is formulated in this notation.

Theorem 3.1. The group G_r^N has invariants if and only if R < N. If this inequality is satisfied, then there exist t = N − R functionally independent invariants I^τ(x) (τ = 1, …, t) of the group G_r^N such that any of its invariants is a function of them.
Proof. Since the X_α span a Lie algebra L_r^N, the system of operators (3.1.1) is closed with respect to the operation of commutation, but the operators can be linearly connected. The maximum number of linearly unconnected operators X_α equals the general rank R of the matrix M (3.1.4). Consider the system (3.1.2) and eliminate from it the operators expressed via R unconnected operators. Then one obtains a complete system of R operators. If R = N, there are no invariants. If R < N, the statement follows from Lemma 3.2. Theorem 3.1 is proved.

Note that the algorithm described in the proof of Lemma 3.2 is efficient for finding invariants in practice. Let us consider an example of its realization.
Example 3.1. Consider the operators

X₁ = z ∂/∂y − y ∂/∂z,  X₂ = −z ∂/∂x + x ∂/∂z,  X₃ = y ∂/∂x − x ∂/∂y.

These operators are linearly connected, namely

x X₁ + y X₂ + z X₃ ≡ 0.

Since this is the only connection, we have R = 2 and, according to Theorem 3.1, there is one invariant (t = N − R = 3 − 2 = 1). Let us find it. First, let us determine invariants of the operator X₁, which are obviously x and ρ = √(y² + z²). Let us turn to the new variables x, ρ, z. According to the formula (2.3.7), one obtains

Y₁ = −y ∂/∂z,  Y₂ = −z ∂/∂x + (xz/ρ) ∂/∂ρ,  Y₃ = y ∂/∂x − (xy/ρ) ∂/∂ρ,

or, turning to equivalent operators,

Y′₁ = ∂/∂z,  Y′₂ = ρ ∂/∂x − x ∂/∂ρ,

and the operator Y₃ is eliminated as it is linearly connected with Y₂:

Y₃ = −(y/z) Y₂.

The invariant of the operator Y′₂ on the plane (x, ρ) is

I = x² + ρ².

This is the desired invariant of the corresponding group G₃³. Turning back to the initial coordinates, one finally obtains

I = x² + y² + z².
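The result of the example can be verified mechanically: each operator annihilates I = x² + y² + z², and the linear connection holds identically for any function. A sketch (assuming sympy, with the signs of the operators as reconstructed above):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
I = x**2 + y**2 + z**2

# The three operators of the example, applied to a function.
def X1(g): return z * sp.diff(g, y) - y * sp.diff(g, z)
def X2(g): return -z * sp.diff(g, x) + x * sp.diff(g, z)
def X3(g): return y * sp.diff(g, x) - x * sp.diff(g, y)

# I is annihilated by every operator, hence it is an invariant of the group.
assert X1(I) == 0 and X2(I) == 0 and X3(I) == 0

# The linear connection x*X1 + y*X2 + z*X3 = 0 holds on an arbitrary f(x, y, z).
f = sp.Function('f')(x, y, z)
assert sp.simplify(x * X1(f) + y * X2(f) + z * X3(f)) == 0
```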
Let us point out some other notions connected with the existence of invariants of G_r^N. A group G_r^N is said to be transitive if it has no invariants. A transitive group is characterized by the relations r ≥ R = N. If r = R the group is said to be simply transitive, and if r > R it is termed multiply transitive.

If the group G_r^N has invariants, it is said to be intransitive. By virtue of Theorem 3.1, a criterion of intransitivity is the inequality R < N.

These terms are connected with the property of a group to contain transformations T_a mapping every point x ∈ E^N to any other point x′.
The notion of a differential invariant of the group G_r^N is a direct extension of Definition 3.1 to the prolonged group G̃_r^Ñ. Therefore, we do not dwell on it here and only mention that, due to the unbounded increase of the dimension of the prolonged space Ẽ^Ñ under successive prolongations, while the ranks R̃ remain limited by the number r, any group G_r^N has differential invariants (their order can be higher than one).
Substantially new facts connected with G_r^N when r > 1 begin with the following definition, in which the matrix M (3.1.4) and its general rank R are meant.

Definition 3.5. A manifold 𝓜 ⊂ E^N is termed a nonsingular manifold of the group G_r^N if

R(M|_𝓜) = R.

Otherwise, i.e. when the rank of M decreases on 𝓜 as compared with its general rank, the manifold 𝓜 is said to be a singular manifold of the group G_r^N.

Let x̄ be the points of an invariant manifold 𝓜 of the group G_r^N. Then, by definition,

x̄′ = T_a x̄ ∈ 𝓜.
3.2 Invariant manifolds 99
which was to be proved. Thus, the operators X|_𝓜, induced by all operators X ∈ L_r^N, compose a Lie algebra denoted by L_r^N|_𝓜 and called the Lie algebra induced by the Lie algebra L_r^N on its invariant manifold 𝓜. The induced Lie algebra L_r^N|_𝓜 of the operators (3.2.4) is in fact the Lie algebra of the induced group G_r^N|_𝓜.

The above fact means that there is a homomorphism ψ of the algebra L_r^N onto the Lie algebra L_r^N|_𝓜 given by the formula

ψ(X) = X|_𝓜.
ξ^i(x̄) = 0  (i = 1, …, N)

for all x̄ ∈ 𝓜.

Let us consider the problem of constructing invariant manifolds of the group G_r^N. If G_r^N is intransitive and I^τ(x) (τ = 1, …, t) is the complete set of its invariants, then any manifold 𝓜 given by a system of equations of the form

Φ^σ(I¹(x), …, I^t(x)) = 0  (σ = 1, …, s)    (3.2.5)

is an invariant manifold. This follows directly from Definitions 3.4 and 3.1. Let us demonstrate that this procedure of generating invariant manifolds of the group G_r^N is, in a sense, the most general one.
Theorem 3.3. The group G_r^N has nonsingular invariant manifolds if and only if R < N. If this inequality is satisfied and if

I^τ(x)  (τ = 1, …, t = N − R)

is a complete set of functionally independent invariants of the group G_r^N, then any of its nonsingular invariant manifolds can be given by a system of equations of the form (3.2.5).

Proof. Let 𝓜 be regularly given by Eqs. (3.2.1). The invariance conditions (3.2.2) written for the basis operators X_α,

ξ_α^i ∂ψ^σ/∂x^i |_𝓜 = 0  (α = 1, …, r),

show that there are linear dependencies between the columns of the matrix

M|_𝓜,

so that its rank is less than N. Since 𝓜 is a nonsingular manifold of G_r^N, it follows that R < N by virtue of Definition 3.5.
Let us now assume that R < N. Consider a nonsingular invariant manifold 𝓜 of the group G_r^N given regularly by Eqs. (3.2.1). Note that the transformations of the induced group G_r^N|_𝓜 act in a space of N − s dimensions. Therefore, G_r^N|_𝓜 has a complete set of N − s − R functionally independent invariants. Now let us take the
invariants I^τ(x) (τ = 1, …, t) and consider them on 𝓜. These are functions I^τ(x̄) which are invariants of the induced group G_r^N|_𝓜, so that we have t = N − R of its invariants. Assume that there are N − R − s′ functionally independent invariants among them; this number cannot be higher than the number of functionally independent invariants of the group G_r^N|_𝓜, so that

N − R − s′ ≤ N − s − R,

whence

s ≤ s′.

Moreover, by virtue of Lemma 3.1, there exist s′ functionally independent functions Φ^σ(z¹, …, z^t) such that

Φ^σ(I¹(x̄), …, I^t(x̄)) ≡ 0  (σ = 1, …, s′)    (a)

for all x̄ ∈ 𝓜. Let 𝓜′ be the manifold given by the equations

Φ^σ(I¹(x), …, I^t(x)) = 0  (σ = 1, …, s′)    (b)
with these functions Φ^σ. The manifold 𝓜′ contains the given invariant manifold 𝓜, because Equations (b) are satisfied identically for the points x̄ ∈ 𝓜 by virtue of (a). Further, the dimension of 𝓜′ is N − s′, and therefore the inclusion 𝓜′ ⊃ 𝓜 provides the inequality

N − s′ ≥ N − s,

whence

s′ ≤ s.

Thus, s = s′. The equality of the dimensions of the manifolds 𝓜 and 𝓜′ and the inclusion 𝓜 ⊂ 𝓜′ entail that

𝓜′ = 𝓜.

Since Equations (b) of the manifold 𝓜′ have the required form (3.2.5), Theorem 3.3 is proved.
Theorem 3.3 can also be regarded as a theorem on the representation of nonsingular invariant manifolds of the group G_r^N. Singular invariant manifolds may have no representation of the form (3.2.5). In order to find them, one has to compose the manifold 𝓜 given by the system of equations obtained by setting to zero all minors of maximum order of the matrix M (3.1.4). This 𝓜 should then be checked for invariance by means of Theorem 3.2.
Let us introduce a numerical characteristic of invariant manifolds of the group G_r^N which will be of importance in what follows. The dimension of the manifold 𝓜 is denoted by dim 𝓜.

Definition 3.7. The number

ρ = t − (N − dim 𝓜) = dim 𝓜 − R,
where R = R(M), is called the rank of the invariant manifold 𝓜 of the group G_r^N.

In other words, the rank of the invariant manifold 𝓜 of the group G_r^N is the dimension of 𝓜 in the space of invariants of the group. Here the space of invariants is understood as the space E^t(I) whose points have as coordinates the values of the invariants I¹, …, I^t of the group G_r^N.
Differential invariant manifolds of the group G_r^N are defined likewise. Namely, they are manifolds invariant with respect to transformations of the prolonged group G̃_r^Ñ. They are manifolds in the prolonged space Ẽ^Ñ and will be denoted by 𝓜̃. Naturally, the notion of a nonsingular 𝓜̃ is connected with the general rank of the matrix M̃ whose entries are the coordinates of the prolonged operators X̃_α. All nonsingular 𝓜̃ can be represented by a system of equations of the form (3.2.5) but, generally speaking, the left-hand sides will contain differential invariants of the group G_r^N.
Let us describe one more procedure for obtaining differential invariant manifolds of the group G_r^N. Using the operators of total differentiation (see §1.4)

D_i = ∂/∂x^i + p_i^l ∂/∂u^l  (i = 1, …, n),

one can formulate the procedure in the following theorem.

Theorem 3.4. If I(x, u) is an invariant of the group G_r^N, then the manifold determined by the system of equations

D_i I(x, u) = 0  (i = 1, …, n)    (3.2.6)

is a differential invariant manifold. Indeed, the invariance conditions hold on this manifold; therefore, the manifold (3.2.6) is invariant by virtue of the criterion (3.2.2) of Theorem 3.2, which obviously holds for differential invariant manifolds as well.

All the above reasoning, notions and results can be extended directly to higher prolongations of a group, and hence to differential invariants and differential invariant manifolds of higher orders.
3.3 Invariant solutions of differential equations 103
Φ ⊂ E^N(x, u),

where α are the numbers of the rows and R = R(M) is the general rank of the matrix. Definition 3.8 and Theorem 3.3 provide a necessary condition for the existence of invariant H-solutions. Namely, the group H should be intransitive, i.e. it should have invariants. Hence, R < N. Let

I^τ = I^τ(x, u)  (τ = 1, …, t = N − R)    (3.3.3)

be a complete set of functionally independent invariants of the group H. Then a nonsingular invariant H-solution can be written in the form

Φ: Φ^k(I¹, …, I^t) = 0  (k = 1, …, m),    (3.3.4)

provided that these equations can be solved with respect to the functions u^k (k = 1, …, m).
Consider the equations
D_i Φ^k(x, u) = 0  (i = 1, …, n; k = 1, …, m).

Since Ĩ(x, u, p) is an invariant of the prolonged group H̃, the equation

X̃ Ĩ ≡ ξ^i ∂Ĩ/∂x^i + η^k ∂Ĩ/∂u^k + ζ_i^k ∂Ĩ/∂p_i^k = 0

holds. On the other hand,

X(∂Φ^k/∂x^i) + ψ_i^l X(∂Φ^k/∂u^l) + (X ψ_i^l) ∂Φ^k/∂u^l = 0.
It follows that

[ζ_i^l(x, u, ψ(x, u)) − X ψ_i^l(x, u)] ∂Φ^k/∂u^l = 0,

whence, by virtue of the assumed inequality

|∂Φ^k/∂u^l| ≠ 0,
Theorem 3.5. If the system (S) admits a group H satisfying the conditions (3.3.5), then there exists a system (S/H) which connects only the invariants I^τ (τ = 1, …, t), the functions Φ^k(I) (k = 1, …, m) and the derivatives of Φ^k with respect to I^τ, and which has the following property. The functions Φ^k(I) provide a solution of the system (S/H) for any invariant H-solution Φ written in the form (3.3.4). Conversely, any solution of the system (S/H) for which

R(∂Φ^k/∂I^τ) = m

provides an invariant H-solution Φ in the implicit form (3.3.4).
Proof. For the sake of simplicity, let us suppose that the system (S) is of the first order and give the algorithm for constructing the system (S/H). It is very simple: write Eqs. (1.2.2) with indefinite functions Φ^k(I), apply the operators of total differentiation D_i to them and obtain the system of equations

(∂Φ^k/∂I^τ)(∂I^τ/∂x^i) + (∂Φ^k/∂I^τ)(∂I^τ/∂u^l) p_i^l = 0,    (3.3.6)

from which all the p_i^k (i = 1, …, n; k = 1, …, m) are found. The latter operation can be carried out by virtue of (3.3.5). Substituting the resulting expressions for p_i^k into the equations (S), we obtain the system (S/H).

Let us demonstrate that the system (S/H) thus constructed does, in fact, connect only invariants of the group H. To this end, write the differential invariant manifold S (which, as agreed above, is supposed to be a nonsingular manifold for H̃) of the group H in an equivalent form via differential invariants of H, namely

S: Ω^µ(I, Ĩ) = 0.    (3.3.7)
The elimination of the variables p_i^k from the equations S, as described in the algorithm, consists in substituting the expressions p_i^k = ψ_i^k(x, u), obtained from Eqs. (3.3.6), into the differential invariants Ĩ = Ĩ(x, u, p). According to Lemma 3.3, the latter become invariants of the group H, i.e. they are functions of the invariants I^τ (τ = 1, …, t). The expressions for p_i^k derived from Eqs. (3.3.6) in fact contain the derivatives ∂Φ^k/∂I^τ. Hence the equations of the system (S/H) connect only the invariants I^τ and the derivatives ∂Φ^k/∂I^τ.
If equations (3.3.4) are the equations of a given invariant H-solution Φ, then the functions Φ^k(I) have a specific form. Upon the mentioned substitution of the expressions for p_i^k into (3.3.7), equations (3.3.7) are satisfied identically (by the definition of a solution) with respect to the variables x, u, and hence with respect to the variables I. Conversely, if

R(∂Φ^k/∂I^τ) = m

for some solution of the system (S/H), then equations (3.3.4) determine functions u^k = ϕ^k(x) by virtue of (3.3.5). The derivatives of these functions coincide with those derived from the system (3.3.6), and the latter turn the equations S into identities by construction. Hence, these functions furnish a solution of the system (S), which is obviously an invariant H-solution. Theorem 3.5 is proved for the case when the system (S) is of the first order. The alterations to be made in the proof for higher-order systems are evident.
At first sight the system (S/H) seems to be more complicated than the original system (S), because the unknown functions in (S/H) depend on t = N − R = n + m − R independent variables, and this number can be greater than the number n of independent variables in the system (S). However, the solutions of the system (S/H) are manifolds in the t-dimensional space of the point I, and we are interested only in such solutions as provide m independent equations (3.3.4), i.e. that have the dimension t − m in this t-dimensional space. Therefore, the system (S/H) actually contains only t − m independent variables. Since this number equals the rank of the invariant manifold (3.3.4) of the group H, one finally concludes that the system (S/H) is a system with

ρ = t − m = n − R    (3.3.8)

independent variables. In other words, the number of independent variables equals the rank of the invariant H-solution.

In this sense, the system (S/H) is simpler than the original system (S), since one can see from (3.3.8) that ρ < n always.
In practice, one often comes across the case when the variables x, u in the invariants of the group H “split” in the following sense. The invariants I^τ(x, u) can be chosen so that they divide into invariants depending on both x and u,

I^k(x, u)  (k = 1, …, m),

and invariants depending only on x,

I^{m+r}(x)  (r = 1, …, ρ).

If the condition (3.3.9) is satisfied, equations (3.3.4) can always be reduced to the form

I^k(x, u) = V^k(I^{m+1}(x), …, I^t(x)),    (3.3.10)

so that V^k (k = 1, …, m) will be the unknown functions. In this case, it is convenient to introduce the special notation y^r = I^{m+r}(x), v^k = V^k and to write invariant H-solutions in the form

I^k(x, u) = v^k(y¹, …, y^ρ).
Then the system (S/H) is simply a system for the functions v(y) of ρ independent variables y¹, …, y^ρ.
Theorem 3.5 says nothing about the existence of invariant H-solutions. It establishes only their “potential” existence, in the sense of the reducibility of the system (S) to the system (S/H). Indeed, the system (S/H) can have no solutions and can even be simply inconsistent. Let us consider a simple example.

Let the system (S) consist of one equation for z = z(x, y):

x ∂z/∂x + y ∂z/∂y = 1.

One can readily verify that this equation admits the group H₁ with the operator

X = x ∂/∂x + y ∂/∂y.

The invariants of H₁ are λ = y/x and z, so an invariant H₁-solution must have the form z = v(λ).
However,

x ∂z/∂x + y ∂z/∂y ≡ 0
for such z, with any function v(λ). Therefore, the system (S/H) has the form 0 = 1, i.e. it is inconsistent.
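The identity x z_x + y z_y ≡ 0 for z = v(y/x) can be confirmed symbolically. A sketch (assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')
v = sp.Function('v')

# Invariant of X = x d/dx + y d/dy is lam = y/x, so an invariant
# H1-solution would have the form z = v(y/x).
z = v(y / x)

# Left-hand side of the equation x z_x + y z_y = 1:
lhs = x * sp.diff(z, x) + y * sp.diff(z, y)
assert sp.simplify(lhs) == 0   # the equation demands 1, so (S/H) reads 0 = 1
```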
The Lie algebra admitted by the system (3.3.12) has been calculated in Chapter 1. We will consider here the case γ ≠ 3. In this case the system (3.3.12) admits the Lie algebra L₆⁵ spanned by the operators

X₁ = ∂/∂t,  X₂ = ∂/∂x,  X₃ = t ∂/∂x + ∂/∂u,  X₄ = t ∂/∂t + x ∂/∂x,
X₅ = x ∂/∂x + u ∂/∂u − 2ρ ∂/∂ρ,  X₆ = ρ ∂/∂ρ + p ∂/∂p.    (3.3.13)

Hence, the system (3.3.12) admits a local Lie group G₆⁵ and any subgroup of this group. We will denote the subgroup H whose Lie algebra has the basis X, Y, … by the symbol H⟨X, Y, …⟩.
The system (3.3.12) has n = 2 independent variables (t, x) and m = 3 unknown functions (u, ρ, p). In this case N = 2 + 3 = 5. The formula (3.3.8) for the rank of an invariant H-solution takes the form ρ̂ = 2 − R. Since R > 0, the possible invariant H-solutions are either of the rank ρ̂ = 0, R = 2 or of the rank ρ̂ = 1, R = 1 (the rank of the solution is denoted by ρ̂ so as not to confuse it with the unknown function ρ in the system (3.3.12)). Examining the operators (3.3.13) thoroughly, one can see that R = 1 only for subgroups with one operator, i.e. for one-parameter subgroups.
Example 3.2. The subgroup H⟨X₁⟩ has the invariants

I¹ = u,  I² = ρ,  I³ = p,  I⁴ = x.

The variables here are “separated,” so that the H-solutions have the form

u = U(x),  ρ = R(x),  p = P(x),

where U, R, P are unknown functions which, by virtue of Theorem 3.5, should satisfy the system (S/H). Substituting the above expressions for u, ρ, p and their derivatives
with respect to t, x into Eqs. (3.3.12), one obtains the system (S/H) in the form

U U′ + (1/R) P′ = 0,  U R′ + R U′ = 0,  U P′ + γ P U′ = 0.

This is a system of ordinary differential equations for U, R, P which is easily integrated. Any of its solutions provides an invariant H⟨X₁⟩-solution. Here and in what follows we do not solve the resulting systems (S/H) to the end, for this is not essential in the present lecture notes; besides, it is not always easy to do. The examples serve only to illustrate the process of forming invariant H-solutions and the abundance of new possibilities.
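The reduction of Example 3.2 can be reproduced symbolically. The system (3.3.12) itself is not reproduced in this excerpt; the sketch below assumes the 1-D gas dynamics equations u_t + u u_x + p_x/ρ = 0, ρ_t + u ρ_x + ρ u_x = 0, p_t + u p_x + γ p u_x = 0, which are consistent with the reduced system above (assuming sympy):

```python
import sympy as sp

t, x, gamma = sp.symbols('t x gamma')
U, R, P = (sp.Function(n) for n in ('U', 'R', 'P'))

# Invariant H<X1>-solution: no dependence on t.
u, rho, p = U(x), R(x), P(x)

# Assumed form of system (3.3.12) (not shown in the excerpt).
e1 = sp.diff(u, t) + u * sp.diff(u, x) + sp.diff(p, x) / rho
e2 = sp.diff(rho, t) + u * sp.diff(rho, x) + rho * sp.diff(u, x)
e3 = sp.diff(p, t) + u * sp.diff(p, x) + gamma * p * sp.diff(u, x)

# The system collapses to the ODEs of the text.
Ux, Rx, Px = U(x).diff(x), R(x).diff(x), P(x).diff(x)
assert sp.simplify(e1 - (U(x) * Ux + Px / R(x))) == 0
assert sp.simplify(e2 - (U(x) * Rx + R(x) * Ux)) == 0
assert sp.simplify(e3 - (U(x) * Px + gamma * P(x) * Ux)) == 0
```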
Example 3.3. The subgroup H⟨X₃⟩ has the invariants

I¹ = u − x/t,  I² = ρ,  I³ = p,  I⁴ = t.

The variables are “separated” and the invariant H-solution has the form

u = x/t + U(t),  ρ = R(t),  p = P(t).

The system (S/H) appears to be as follows:

t U′ + U = 0,  t R′ + R = 0,  t P′ + γ P = 0.
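This system (S/H) integrates in closed form, e.g. t U′ + U = 0 gives U = C/t. A sketch (assuming sympy; the equation for R coincides with the one for U):

```python
import sympy as sp

t, gamma = sp.symbols('t gamma', positive=True)
U, P = sp.Function('U'), sp.Function('P')

# Reduced system (S/H) of Example 3.3:  t U' + U = 0,  t P' + gamma P = 0.
solU = sp.dsolve(t * U(t).diff(t) + U(t), U(t))
solP = sp.dsolve(t * P(t).diff(t) + gamma * P(t), P(t))

# Verify the closed-form solutions by substitution back into the ODEs.
assert sp.simplify(t * sp.diff(solU.rhs, t) + solU.rhs) == 0
assert sp.simplify(t * sp.diff(solP.rhs, t) + gamma * solP.rhs) == 0
```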
I¹ = u,  I² = ρ e^{−t},  I³ = p e^{−t},  I⁴ = x.

The necessary condition (3.3.5) is already fulfilled here and the system (S/H) can be constructed.
The number of such examples of invariant H-solutions of the rank ρ̂ = 1 for the system (3.3.12) can be increased without limit. The general form of a one-parameter subgroup is H⟨e^α X_α⟩, so that its invariants are functions of the constants e^α (α = 1, …, 6). However, it is not so easy to find them in such a general form, for one has to make various assumptions about the constants e^α in the course of the calculation (the reader is invited to verify this as an exercise). Moreover, the same parameters will enter the system (S/H), which complicates its solution considerably. Most importantly, however, the main part of the work turns out to be useless due to predetermined connections between different H-solutions; this circumstance is discussed in §3.4.
Let us take a quick look at invariant H-solutions of the rank ρ̂ = 0. In this case t = m and equations (3.3.4) are equivalent to the equations I^τ = C^τ (τ = 1, …, m), where the C^τ are constants which are not determined beforehand. The system (S/H) in this case is just a system of finite equations for the unknown C^τ. The rank ρ̂ = 0 for the system (3.3.12) requires R = 2, by virtue of which such solutions are to be sought on subgroups H with two operators. For example, the subgroup H⟨X₁, X₂⟩ has the invariants u, ρ, p. Hence, the H-solution has the form

u = C¹,  ρ = C²,  p = C³,

where the C^α are constants. The system (S/H) is then satisfied identically for arbitrary C^k.
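That constants solve the system can be checked directly; as in the previous sketch, the precise form of (3.3.12) is an assumption, since the system is not reproduced in this excerpt (assuming sympy):

```python
import sympy as sp

t, x, gamma, C1, C2, C3 = sp.symbols('t x gamma C1 C2 C3')

# Rank-0 candidate solution on H<X1, X2>: u, rho, p constant.
u, rho, p = C1, C2, C3

# Assumed gas dynamics system (3.3.12); all derivatives of constants vanish.
e1 = sp.diff(u, t) + u * sp.diff(u, x) + sp.diff(p, x) / rho
e2 = sp.diff(rho, t) + u * sp.diff(rho, x) + rho * sp.diff(u, x)
e3 = sp.diff(p, t) + u * sp.diff(p, x) + gamma * p * sp.diff(u, x)
assert (e1, e2, e3) == (0, 0, 0)   # identically satisfied for any C1, C2, C3
```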
The matrix of this system is M₁ (3.3.15) and, since its rank equals r, the system (3.3.17) has N − r linearly independent vector solutions (θ, σ), namely the vectors (θ^τ, σ^τ) (τ = 1, …, N − r). According to the formula (3.3.16), these vectors provide the invariants I^τ. These invariants are functionally independent. Indeed, the general rank of the matrix

(∂I^τ/∂x^i, ∂I^τ/∂u^k) = ((θ_i^τ/x^i) I^τ, (σ_k^τ/u^k) I^τ),
which is to be calculated at the point x^i = 1, u^k = 1, coincides with the rank of the matrix (θ_i^τ, σ_k^τ), which is equal to N − r due to the linear independence of the vectors (θ^τ, σ^τ).
In order to meet the condition (3.3.5) it is necessary and sufficient that the rank of the matrix (σ_k^τ) equal m. Let us demonstrate that the latter is fulfilled if and only if the rank of the matrix (λ_α^i) equals r. If

R((λ_α^i)) < r,

then there exist constants ω^α, not all zero, such that ω^α λ_α^i = 0 (i = 1, …, n). Setting

χ_k = ω^α µ_α^k,

one obtains from (3.3.17) that the equation χ_k σ_k = 0 is satisfied for every solution of the system (3.3.17). Therefore,

R((σ_k^τ)) < m.

Conversely, if

R((λ_α^i)) = r,

then the system λ_α^i θ_i = 0 has exactly n − r linearly independent solutions. If we take them as a part of the complete set of N − r solutions, then the remaining N − r − (n − r) = m solutions of the system must be such that R((σ_k^τ)) = m. Thus, finally, the necessary and sufficient condition of the “potential” existence of self-similar H-solutions for the group H with the operators (3.3.14) is

R((λ_α^i)) = r ≤ n.    (3.3.18)
The notion of a group of dilations is closely connected with the so-called theory of dimensions of physical quantities. In order to reveal this connection, let us construct the finite transformations of the group H with the operators (3.3.14). Upon the construction, e.g. in canonical coordinates of the second kind, one obtains the transformations

x′^i = a₁^{λ₁^i} ⋯ a_r^{λ_r^i} x^i,  u′^k = a₁^{µ₁^k} ⋯ a_r^{µ_r^k} u^k,    (3.3.19)

depending on r independent parameters a₁, …, a_r. Let us term the aggregate a₁^{λ₁^i} ⋯ a_r^{λ_r^i} the “dimension” of the quantity x^i and the parameters a_α the “units of dimension.” Similar terms are used for the quantities u^k. The “dimensions” are denoted in the theory of dimensions as follows:

[x^i] = a₁^{λ₁^i} ⋯ a_r^{λ_r^i},  [u^k] = a₁^{µ₁^k} ⋯ a_r^{µ_r^k}.
Then the invariants I of the form (3.3.16) will be “dimensionless” quantities. This follows from Eqs. (3.3.17). The theory of dimensions provides the so-called Π-theorem, claiming that any dimensionless quantity is a function of invariants I of the form
(3.3.16). Here it is a particular case of Theorem 3.1 applied to the group of dilations H with the operators (3.3.14).
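The invariants (3.3.16) correspond to nullspace vectors of the matrix of dilation exponents, which is how the Π-theorem is applied in practice. A sketch (assuming sympy; the pendulum-style quantities τ, l, g, m are an illustrative choice, not from the text):

```python
import sympy as sp

# Rows: base "units of dimension" (mass, length, time);
# columns: quantities (tau, l, g, m) with
# [tau] = T, [l] = L, [g] = L T^-2, [m] = M.
A = sp.Matrix([
    [0, 0, 0, 1],    # mass exponents
    [0, 1, 1, 0],    # length exponents
    [1, 0, -2, 0],   # time exponents
])

# Solutions of the analogue of (3.3.17): exponent vectors of
# dimensionless products, i.e. the nullspace of A.
null = A.nullspace()
assert len(null) == A.cols - A.rank()     # N - r independent invariants
assert list(null[0]) == [2, -1, 1, 0]     # Pi = tau**2 * g / l is dimensionless
```

Here N − r = 4 − 3 = 1, so there is exactly one dimensionless invariant, in agreement with the count of solutions of (3.3.17).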
The notion of a self-similar solution formulated above in the narrow sense is not satisfactory, because it is not invariant with respect to the choice of the system of coordinates in E^N. Upon a change of the coordinate system, the dilation operator (3.3.14) is no longer a dilation operator in general. For instance, using the coordinates y^i = ln x^i, v^k = ln u^k, one obtains from (3.3.14) the translation operators

Y_α = λ_α^i ∂/∂y^i + µ_α^k ∂/∂v^k.    (3.3.21)
In this connection, the following definition, taking into account the main peculiarity of a dilation group, is suggested.

Definition 3.9. An invariant H-solution is said to be self-similar in the broad sense if the group H is Abelian.

Note that the dilation group has two peculiarities: it is Abelian and it does not contain linearly connected operators. The above definition uses only the first property. The second property appears to be of no importance from the viewpoint of invariants, due to the following statement.
Theorem 3.6. An Abelian group H such that R(M) = R contains a subgroup H_R which is similar to a group of dilations and has the same rank R.

Proof. The equation R(M) = R guarantees that the Lie algebra of the group H contains R linearly unconnected operators

X_α = ξ_α^i(x) ∂/∂x^i  (α = 1, …, R).    (3.3.22)

Since H is Abelian, one has [X_α, X_β] = 0 for all α, β. Therefore, there exists a subgroup H_R ⊂ H for which the operators (3.3.22) provide a basis of its Lie algebra. Since these operators are linearly unconnected, we have

R((ξ_α^i)) = R.
The theorem will be proved if we construct functions ϕ^i(x) (i = 1, …, N) which satisfy the equations

X_α ϕ^i = δ_α^i  (i = 1, …, N; α = 1, …, R)    (3.3.23)

and are functionally independent. If such functions exist, then, turning to the coordinates y^i = ϕ^i(x) in E^N, one has the transformed operators

X_α = Y_α = X_α(ϕ^i) ∂/∂y^i = ∂/∂y^α,
and the theorem is proved. In order to construct the functions ϕ^i(x) for i ≤ R, we search for each of them in the implicit form F(x, ϕ) = 0, where F(x, ϕ) is such that

∂F/∂ϕ ≠ 0.

The equations

∂F/∂x^j + (∂F/∂ϕ)(∂ϕ/∂x^j) = 0

show that it is sufficient to determine the function F(x, ϕ) as a solution of the system

Z_α F ≡ X_α F + δ_α^i ∂F/∂ϕ = 0  (α = 1, …, R)    (3.3.24)

satisfying the condition

∂F/∂ϕ ≠ 0.
Let us demonstrate that the system (3.3.24) is complete (Definition 3.3). Indeed,
firstly, we see that
[Zα ; Zβ ] = [Xα ; Xβ ] = 0;
and secondly, the operators Z_α (α = 1, …, R) are linearly unconnected, for any linear connection between them would be a linear connection of the operators X_α (3.3.22), which are linearly unconnected by assumption. Since R ≤ N, and the operators Z_α act in the space of the point (x^1, …, x^N, ϕ), which has the dimension N + 1, then, by
Lemma 3.2, the system (3.3.24) has at least one independent solution F(x, ϕ). Among the solutions of the system (3.3.24) there is certainly one for which
∂F/∂ϕ ≠ 0.
Otherwise all solutions of the system (3.3.24) would also be solutions of the system X_α F = 0 (α = 1, …, R). This is impossible because the latter system has only N − R functionally independent solutions, while (3.3.24) has N + 1 − R. Thus we have obtained the functions ϕ^i(x) for i ≤ R. If i > R, then equations (3.3.23) take the form
X_α ϕ^i = 0    (α = 1, …, R),
so that the ϕ^i are invariants of the group H_R. By virtue of Lemma 3.2, there exist N − R functionally independent invariants. We take them as the functions ϕ^{R+1}, …, ϕ^N. Thus, the system of functions ϕ^i(x) satisfying Eqs. (3.3.23) is constructed. Let us demonstrate that these ϕ^i(x) are functionally independent. Indeed, if
116 3 Group invariant solutions of differential equations
det(∂ϕ^i/∂x^j) ≡ 0,
then there exist multipliers µ_i, not all zero, such that
µ_i ∂ϕ^i/∂x^j = 0    (j = 1, …, N).
Contracting these equalities with ξ_α^j and using Eqs. (3.3.23), one obtains
µ_i X_α ϕ^i = µ_α = 0    (α = 1, …, R).
Hence the remaining relations read
µ_{R+1} ∂ϕ^{R+1}/∂x^j + … + µ_N ∂ϕ^N/∂x^j = 0    (j = 1, …, N)
and mean that the functions ϕ^{R+1}, …, ϕ^N are functionally dependent (if not all µ_{R+1}, …, µ_N are zero). Since this contradicts the choice of the functions ϕ^{R+1}, …, ϕ^N, it follows that the identity
det(∂ϕ^i/∂x^j) ≡ 0
is impossible. Theorem 3.6 is proved.
Corollary 3.1. For any self-similar H-solution in a broad sense there is a subgroup H_R ⊂ H and a system of coordinates in the space E^N in which this solution is a self-similar H_R-solution in a narrow sense.
Corollary 3.2. Any self-similar H-solution in a broad sense can be derived as a solution independent of some of the independent variables in a certain system of coordinates in E^N.
The latter follows from the possibility of reducing the group HR to a translation
group.
Note that since a one-parameter group H_1 is always Abelian, all invariant H_1-solutions of the rank ρ̂ = 1 for the equations of gas dynamics (3.3.12) are self-similar in a broad sense.
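The reduction behind Corollary 3.2 rests on the similarity of a dilation group to a translation group under a logarithmic change of coordinates. A minimal sympy check of this fact (the symbols and the test function x^k are our own illustration, not from the text):

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)

# Dilation operator X = x d/dx applied to a test function F(x) = x^k.
F = x**k
XF = x * sp.diff(F, x)        # k * x^k

# In the coordinate y = ln x the same function reads G(y) = e^{k y},
# and X acts on it as the translation operator d/dy.
G = F.subs(x, sp.exp(y))      # e^{k y}
dG = sp.diff(G, y)            # k * e^{k y}

# X(F), rewritten in the y coordinate, coincides with dG/dy:
assert sp.simplify(XF.subs(x, sp.exp(y)) - dG) == 0
```

In the new coordinate the dilation-invariant solution is simply independent of a translated variable, which is the content of Corollary 3.2.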
It has been demonstrated in the previous section that if the system (S) admits a group G, then one can search for particular solutions of the system (S), namely invariant H-solutions for any subgroup H ⊂ G. An important numerical characteristic of
3.4 Classification of invariant solutions 117
such a solution is its rank ρ = n − R.
H′ = T H T⁻¹,  Φ′ = T Φ
is an invariant H′-solution.
Proof. The property of the solution Φ to be an invariant H-solution can be expressed by the formula HΦ = Φ. Therefore, one has
H′Φ′ = T H T⁻¹ T Φ = T HΦ = T Φ = Φ′.
Hence, in solving the above group problem, it is sufficient to know the subgroup H ⊂ G up to similarity in the sense of Definition 3.10. Note that the relation of similarity of the groups H′ and H, expressed by the formula
H′ = T H T⁻¹,
X = e^α X_α.
In the notation of §2.5, the automorphism l_a with the matrix (l_α^β(a)) acts on the basis “vectors” X_α of the Lie algebra L_r^N by the formula
X_α′ = l_α^β(a) X_β    (α = 1, …, r),    (3.4.1)
which means that in a fixed basis X_α the same automorphism can be considered as a transformation of the vector e to e′ with the coordinates
e′^β = l_α^β(a) e^α    (β = 1, …, r).    (3.4.2)
resulting from Eqs. (2.5.3) of Chapter 2, the operators of the group of transformations (3.4.1), derived by the formulae (2.6.3) of Chapter 2, have the form
E_α = C_{αβ}^γ X_γ ∂/∂X_β,
or, by virtue of the relation [X_α, X_β] = C_{αβ}^γ X_γ, finally
E_α = [X_α, X_β] ∂/∂X_β    (α = 1, …, r).    (3.4.3)
According to Theorem 2.14, these operators span the adjoint algebra of the Lie algebra L_r^N. Application of the operators (3.4.3) for representing the adjoint algebra is particularly convenient because the coordinates of the operators E_α are taken directly from the table of commutators of the basis operators of the Lie algebra L_r^N. One can easily restore the finite automorphisms l_a from the operators E_α, e.g. in canonical coordinates of the second kind.
Every subalgebra L_s^N ⊂ L_r^N, as a linear subspace in L_r^N, can be given by the system of equations
l^α = ξ_σ^α t^σ    (α = 1, …, r; σ = 1, …, s),    (3.4.4)
where t^σ (σ = 1, …, s) are arbitrary parameters and ξ_σ^α are fixed constants characterizing the subalgebra. Under the action of an automorphism l_α^β, this subalgebra transforms into a similar subalgebra, the corresponding equations of which have the form
l^α = [l_β^α(a) ξ_σ^β] t^σ    (α = 1, …, r)    (3.4.5)
by virtue of (3.4.2). The formula (3.4.5) contains the whole class of subalgebras
similar to the subalgebra (3.4.4).
The formulae (3.4.4) and (3.4.5) look especially simple in the case of one-dimensional subalgebras. Since there is only one parameter t, then, considering l^α to be determined up to an arbitrary common factor, one can assume t = 1. Then (3.4.5) becomes (3.4.2).
Example 3.7. Consider the Lie algebra L_3^2 of operators with the basis
X_1 = ∂/∂x,  X_2 = ∂/∂y,  X_3 = x ∂/∂x + y ∂/∂y
and find all classes of similar one-dimensional subalgebras L_1. The table of commutators of the operators X_α has the form
        X_1    X_2    X_3
X_1      0      0     X_1
X_2      0      0     X_2
X_3    −X_1   −X_2     0
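These commutators are easy to verify mechanically; a small sympy check of ours, representing each operator by its action on a test function:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)

# Represent each operator by its action on a test function f(x, y).
X = {
    1: lambda g: sp.diff(g, x),                        # X_1 = d/dx
    2: lambda g: sp.diff(g, y),                        # X_2 = d/dy
    3: lambda g: x*sp.diff(g, x) + y*sp.diff(g, y),    # X_3 = x d/dx + y d/dy
}

def commutator(A, B, g):
    # [A, B] g = A(B(g)) - B(A(g))
    return sp.simplify(A(B(g)) - B(A(g)))

# Entries of the commutator table:
assert commutator(X[1], X[3], f) == X[1](f)    # [X_1, X_3] = X_1
assert commutator(X[2], X[3], f) == X[2](f)    # [X_2, X_3] = X_2
assert commutator(X[3], X[1], f) == -X[1](f)   # [X_3, X_1] = -X_1
assert commutator(X[1], X[2], f) == 0          # [X_1, X_2] = 0
```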
The formulae (3.4.3) give the following operators E_α of the group of inner automorphisms:
E_1 = X_1 ∂/∂X_3,  E_2 = X_2 ∂/∂X_3,  E_3 = −X_1 ∂/∂X_1 − X_2 ∂/∂X_2.
Let us find the one-parameter groups of automorphisms A_α(t) for every operator E_α by integrating Eqs. (2.6.7) of Chapter 2. For example, for E_1 these equations give
A_1(t) : X_1′ = X_1,  X_2′ = X_2,  X_3′ = tX_1 + X_3.
Likewise,
A_2(t) : X_1′ = X_1,  X_2′ = X_2,  X_3′ = tX_2 + X_3;
A_3(t) : X_1′ = e^{−t} X_1,  X_2′ = e^{−t} X_2,  X_3′ = X_3.
Let us construct the general inner automorphism in canonical coordinates of the second kind (a, b, c) as the composition
A = A_1(b) A_2(c) A_3(a).
Thus, the adjoint group of the Lie algebra L_3^2 is constructed. Now we take the subalgebra L_1 spanned by the operator X = e^α X_α and consider the following possibilities.
(a) e^3 ≠ 0: the automorphism A can be chosen so that e′^1 = e′^2 = 0. To this end, it is sufficient to let
b = −e^1/e^3,  c = −e^2/e^3.
Moreover, one can let e^3 = 1. This provides a class of subalgebras L_1 similar to the subalgebra ⟨X_3⟩.
(b) e^3 = 0: here, by means of the parameter a, one can achieve
(e′^1)² + (e′^2)² = 1,
i.e.
e′^1 = cos ϕ,  e′^2 = sin ϕ.
This provides a one-parameter family of classes of subalgebras similar to the subalgebras ⟨cos ϕ X_1 + sin ϕ X_2⟩ depending on the parameter ϕ.
The final result of the classification is as follows. Any subalgebra L_1 ⊂ L_3^2 is similar to one of the subalgebras
⟨X_3⟩,  ⟨cos ϕ X_1 + sin ϕ X_2⟩,
while these subalgebras are not similar to each other for any ϕ from the interval 0 ≤ ϕ < 2π.
If properly developed, this method of constructing classes of similar subalgebras
of a finite-dimensional Lie algebra can be used for subalgebras of the second, third
and higher orders.
Let G_r^N be a local Lie group of point transformations in the space E^N(x). This section is devoted to extending the notion of an invariant H-solution. This is done on the basis of the following definition.
Definition 3.11. A manifold N ⊂ E^N is said to be a partially invariant manifold of the group G_r^N if N lies in an invariant manifold M of the group G_r^N, i.e. N ⊂ M.
In what follows, M will be taken to be the smallest invariant manifold of the group G_r^N containing N. It can be determined as the intersection of all invariant manifolds containing N. Alternatively, consider the manifolds
N_a = T_a N.
The union ⋃_a (N_a) of all manifolds N_a obtained when T_a runs over the whole group G_r^N obviously contains the given N. It is invariant with respect to all transformations T_a ∈ G_r^N and is the smallest invariant manifold of the group G_r^N containing N. Hence, M can be determined by the formula
M = ⋃_a (T_a N),  T_a ∈ G_r^N.    (3.5.1)
Consider the matrix
M = (ξ_α^i(x)),
whose entries are the coordinates of the basis operators of the Lie algebra L_r^N of the group G_r^N. We transform a point x_0 ∈ N by an arbitrary T_a ∈ G_r^N and obtain the manifold of points x′ = T_a x_0 depending on the parameters a. The tangent element to this manifold is given by the infinitesimal transformation
and has dimension equal to the dimension of the space Λ of vectors λ with the coordinates λ^i = ξ_α^i(x_0) a^α, resulting when the vector a = (a^1, …, a^r) runs over an r-dimensional space. It is known from linear algebra that the dimension of Λ equals the rank of the matrix (ξ_α^i(x_0)). The latter equals R due to the nonsingularity of N. Thus, the dimension of the manifold of the points
x′ = T_a x_0
with all T_a ∈ G_r^N equals R, due to which the dimension of M determined by the formula (3.5.1) exceeds the dimension of N by at most R.
The invariant manifold M, being nonsingular for G_r^N, has the definite rank ρ (Definition 3.7) given by the formula
ρ = dim M − R.    (3.5.3)
3.5 Partially invariant solutions 123
Definition 3.12. The rank of the partially invariant manifold N is the rank of the smallest invariant manifold M containing it. The defect of invariance of a partially invariant manifold N is the number
δ = dim M − dim N.    (3.5.4)
From (3.5.3) it then follows that
ρ = δ + dim N − R.    (3.5.5)
Let the manifold N be regularly given by the system of s equations
ψ^σ(x) = 0    (σ = 1, …, s).    (3.5.6)
Then
dim N = N − s.
Let us find some inequalities for the invariance defect δ. We introduce the number t = N − R, equal to the number of invariants in a complete set of invariants of the group G_r^N. In this notation one has
dim N − R = t − s.
Since the rank satisfies ρ ≤ t − 1, it follows that
δ ≤ t − 1 + R − N + s = s − 1.
By means of the invariance defect one can formulate the following necessary condition for partial invariance of a manifold N. Let X_α (α = 1, …, r) be a basis of the Lie algebra L_r^N of the group G_r^N.
Theorem 3.7. If the partially invariant manifold N is regularly given by Eqs. (3.5.6) and has the invariance defect δ, then the general rank of the matrix
∆ = (X_α ψ^σ(x)),
computed on N, equals δ.
Proof. There are functions ψ̄^σ(x, a) such that the manifold N_a = T_a N is given by the equations
ψ̄^σ(x, a) = 0    (σ = 1, …, s).    (3.5.8)
Moreover, these functions can be chosen so that ψ̄^σ(x, 0) = ψ^σ(x).
On the other hand, the manifold N_a is the locus of the points x′ = f(x, a) when the point x runs through the manifold N. Therefore, the equations
ψ̄^σ(f(x, a), a) = 0    (σ = 1, …, s)
hold identically; differentiating them with respect to the parameters a^α yields
(∂ψ̄^σ/∂x′^i) ξ_β^i(x′) V_α^β(a) + ∂ψ̄^σ/∂a^α = 0.
Letting a = 0 here and invoking the choice of the functions ψ̄^σ(x, a), we see that the following equations hold on N:
X_α ψ^σ(x) = −∂ψ̄^σ(x, a)/∂a^α |_{a=0}.    (3.5.9)
dim M = N − (s − ν) = N − s + ν = dim N + ν,
where ν denotes the general rank of the matrix ∆ on N. Since δ = dim M − dim N, this yields
R(∆) = δ.
can guarantee that the defect of N given by Eqs. (3.5.6) equals δ, but we cannot prove it. Note only that when δ = 0, the manifold N is invariant, and then the criterion of Theorem 3.7 turns into the criterion of Theorem 3.2, and the latter is necessary and sufficient.
Consider a system of differential equations (S) again in the space E^N(x, u) and discuss its solutions considered as manifolds Φ in E^N.
Definition 3.13. A solution Φ of the system (S) admitting a group H is said to be a partially invariant H-solution with the rank ρ and the invariance defect δ if Φ is a partially invariant manifold of the group H and has the rank ρ and the invariance defect δ.
It is clear that a partially invariant H-solution with the invariance defect δ = 0 is
just an invariant H-solution.
Since the system (S) has n independent variables x^i and m unknown functions u^k (n + m = N), the dimension of the manifold Φ is fixed and equals
dim Φ = n.
For a solution one must also have ρ < n. Thus the following relations hold:
t = n + m − R,  ρ = δ + n − R,  µ = m − δ,
0 ≤ ρ < n,  max{R − n, 0} ≤ δ ≤ min{R − 1, m − 1}.    (3.5.11)
Let us turn to the problem of an algorithm for finding partially invariant H-solutions. Unfortunately, we do not have a complete representation of a partially invariant manifold, and therefore we can determine only the invariant part of the solution, i.e. the manifold M.
Let H be a group with a given number R. One can write the inequalities (3.5.11) and select some value of δ, thus defining the numbers ρ and µ according to (3.5.11). Let us assume that a complete set of invariants I^τ (τ = 1, …, t) of the group H is known. We shall look for the manifolds M, on which partially invariant H-solutions of the rank ρ and of the invariance defect δ can lie, giving them by the system of µ equations of the form
M : Ψ^ν(I^1, …, I^t) = 0    (ν = 1, …, µ)    (3.5.12)
with unknown functions Ψ^ν(I). Unlike the case of invariant solutions, we cannot require that Equations (3.5.12) provide all the variables u^k as functions of x. Therefore, the variables u^k (k = 1, …, m) are to be divided into main ones, e.g. u^1, …, u^µ, and parametric ones, u^{µ+1}, …, u^m, so that equations (3.5.12) can be solved with respect to the main variables u. Let us denote the parametric variables u by ū, and their derivatives by p̄. Likewise, the equations resulting from application of the operators of total differentiation D_i to Eqs. (3.5.12)
can provide only the main derivatives p, expressing them via the parametric derivatives p̄.
For the sake of simplicity we assume that the system (S) is of the first order.
Substituting the expressions
into Eqs. (S), one can find expressions for some parametric derivatives via the remaining ones, e.g. in the form
Since the latter expressions are derived not by differentiation of the form (3.5.13) but from Eqs. (S), compatibility conditions of the form
D_i p̄_j^l = D_j p̄_i^l    (3.5.14)
should hold, where derivatives of the second order can appear due to differentiation of the derivatives p̄. If the second derivatives have independent expressions, one has
to write compatibility conditions for them again, etc. If one makes no additional
assumptions on the system (S), then it is rather difficult to trace the procedure up
to the end in detail. In the theory of differential equations it is known as the pro-
cess of reducing an “active” system (i.e. that can generate new equations according
to (3.5.14)) to a “passive” system (for which conditions of the form (3.5.14) provide no new equations independent of the available ones). One can only claim that
the resulting “passive” system consists of the proper passive system (P) imposed on the parametric functions ū, and of a system (S/H) expressing the conditions of passiveness of the system (P). The system (S/H) contains no parametric functions or their derivatives and connects only the invariants I, the functions Ψ(I) and their derivatives up to some order. This statement is proved like the corresponding statement of Theorem 3.5.
Thus, as a result of the above procedure, the system (S) is reduced to the system
(P) + (S/H)
having the following properties. The system (S/H) is a system with respect to the functions Ψ(I) from (3.5.12). Taking any solution of (S/H), one can find all the parametric functions ū(x) from the system (P) and then all the main functions u(x) from (3.5.12). The resulting ū(x) and u(x) provide a solution of the system (S), namely a partially invariant H-solution of the rank ρ and the invariance defect δ.
One can make the same remark about the system (S/H) as in the case of invariant H-solutions. Namely, the number of its independent variables equals the rank ρ of the considered partially invariant H-solution.
Let us consider the system of equations of gas dynamics (3.3.12) as an example.
Before searching for specific partially invariant solutions of this system, let us com-
pose a table of all possible types of such solutions based on the relations (3.5.11).
In the case of the system (3.3.12), one has n = 2, m = 3, so that the relations (3.5.11) take the form
t = 5 − R,  ρ = δ + 2 − R,  µ = 3 − δ,
No.   R   t   δ   ρ   µ   Form of M
 1    1   4   0   1   3   I^1, I^2, I^3 (functions of I^4)
 2    2   3   0   0   3   I^1 = C_1, I^2 = C_2, I^3 = C_3
 3    2   3   1   1   2   I^1, I^2 (functions of I^3)
 4    3   2   1   0   2   I^1 = C_1, I^2 = C_2
 5    3   2   2   1   1   I^1 (function of I^2)
 6    4   1   2   0   1   I^1 = C_1
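The rows of this table follow mechanically from the relations t = 5 − R, ρ = δ + 2 − R, µ = 3 − δ together with the bounds in (3.5.11); a short enumeration script of ours reproduces them:

```python
# Enumerate admissible (R, t, delta, rho, mu) for n = 2, m = 3,
# using the relations (3.5.11): t = n+m-R, rho = delta+n-R, mu = m-delta.
n, m = 2, 3

rows = []
for R in range(1, n + m):          # R < n + m = N
    t = n + m - R
    # bounds: max{R-n, 0} <= delta <= min{R-1, m-1}
    for delta in range(max(R - n, 0), min(R - 1, m - 1) + 1):
        rho = delta + n - R
        mu = m - delta
        if 0 <= rho < n:           # rank restriction for a solution
            rows.append((R, t, delta, rho, mu))

for row in rows:
    print(row)
```

The six tuples printed coincide, row by row, with the table above.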
Solutions of the forms 1 and 2 are invariant and have already been discussed in §3.3. Let us consider an example of type 3.
Let us take the subgroup H with the operators X_3 and X_6 from (3.3.13). One has R = 2, t = 3. One easily obtains the invariants
I^1 = tu − x,  I^2 = p/ρ,  I^3 = t.
One can find from this system both parametric derivatives if ψ ≠ 0. Moreover, one obtains one equation (from the second and the third ones) without ρ, namely
tψ′ + (γ − 1)ψ = 0.
ρ_x/ρ = (tϕ′ + ϕ)/(tψ),  ρ_t/ρ = −((x + ϕ)/t) · (tϕ′ + ϕ)/(tψ) − 1/t.    (3.5.16)
Now one has to write the condition (3.5.14) of compatibility for these equations, namely
∂/∂t (ρ_x/ρ) = ∂/∂x (ρ_t/ρ).
This yields the equation
t d/dt[(tϕ′ + ϕ)/(tψ)] + (tϕ′ + ϕ)/(tψ) = 0,
which together with the equation
tψ′ + (γ − 1)ψ = 0
composes the system (S/H). The passive system (P) is reduced to (3.5.16). Integrating the equations (S/H), one obtains the first integrals
3.6 Reduction of partially invariant solutions 129
t^{γ−1} ψ = C_1,  (tϕ′ + ϕ)/ψ = C_2.
Eliminating ψ from the second equation by using the first equation and then integrating, one obtains
tϕ = C̄_2 t^{2−γ} + C_3,
where
C̄_2 = C_1C_2/(2 − γ).
Finally we obtain the following solution of the system (S/H):
ψ = C_1 t^{1−γ},  ϕ = C̄_2 t^{1−γ} + C_3/t.
Substituting this solution into (3.5.16), one obtains the totally integrable system
ρ_x/ρ = C_2/t,  ρ_t/ρ = −C_2 x/t² − C_2(C_3 + C̄_2 t^{2−γ})/t³ − 1/t.
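The solution of (S/H) can be verified symbolically; in this sympy check (ours) the constant C̄_2 = C_1C_2/(2 − γ) is written as C2b:

```python
import sympy as sp

t, gamma, C1, C2b, C3 = sp.symbols('t gamma C1 C2b C3', positive=True)

# Solution of (S/H): psi = C1*t^(1-gamma), phi = C2b*t^(1-gamma) + C3/t.
psi = C1 * t**(1 - gamma)
phi = C2b * t**(1 - gamma) + C3 / t

# Second (S/H) equation: t*psi' + (gamma - 1)*psi = 0.
assert sp.simplify(t * sp.diff(psi, t) + (gamma - 1) * psi) == 0

# First (S/H) equation: t*F' + F = 0 with F = (t*phi' + phi)/(t*psi).
F = (t * sp.diff(phi, t) + phi) / (t * psi)
assert sp.simplify(t * sp.diff(F, t) + F) == 0
```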
This section investigates the situation connected with the fact that any invariant manifold of a group H is at the same time an invariant manifold of any subgroup H′ ⊂ H. This follows directly from Definition 3.4, since any transformation T_a ∈ H′ belongs to H.
Consequently, any partially invariant H-solution is also a partially invariant H′-solution for any subgroup H′ ⊂ H. However, this transition from H to H′ ⊂ H changes, generally speaking, the rank and the defect of invariance of the smallest invariant manifold M containing the solution Φ. Let us agree to mark all symbols relating to the subgroup H′ by a prime.
Lemma 3.5. The rank of a partially invariant solution Φ does not decrease and its defect of invariance does not increase:
ρ′ ≥ ρ,  δ′ ≤ δ,
ρ′ − ρ = (t′ − t) − (µ′ − µ),  t′ ≥ t,  µ′ ≥ µ.    (3.6.1)
δ = dim M − dim Φ,
where C_2 = const. Let us consider the subgroup H′ = ⟨X_3 − C_2X_6⟩ of the group H, whose operator is written in detail as
X_3 − C_2X_6 = t ∂/∂x + ∂/∂u − C_2ρ ∂/∂ρ − C_2 p ∂/∂p.
It is readily verified that the functions
I^1 = tu − x,  I^2 = p/ρ,  I^3 = ρ e^{C_2 x/t},  I^4 = t
ρ̂ = 1,  δ = 1,
Let us prove two lemmas, the first one being a particular case of a more general
theorem on the number of essential parameters of a system of functions.
Consider a system of functions
Proof. It can be assumed without loss of generality that the rank minor of the first matrix (3.6.4) is in the left upper corner, i.e. it corresponds to the values k = 1, …, δ; α = 1, …, δ. Let us split the values of the index α = 1, …, r into the values σ = 1, …, δ and τ = δ + 1, …, r. The conditions of the lemma provide that the last r − δ rows of the first matrix (3.6.4) are linear combinations of the first δ rows:
∂ϕ^k/∂a^τ = λ_τ^σ ∂ϕ^k/∂a^σ    (k = 1, …, m; τ = δ + 1, …, r),    (3.6.6)
where, generally speaking, λ_τ^σ = λ_τ^σ(x, a). Since the second matrix (3.6.4) has the same rank as the first one and contains the first matrix as a part, the relations
∂²ϕ^k/∂x^i∂a^τ = λ_τ^σ(x, a) ∂²ϕ^k/∂x^i∂a^σ
hold with the same λ_τ^σ. By virtue of these relations, differentiation of Eqs. (3.6.6) with respect to x^i provides the new relations
(∂λ_τ^σ/∂x^i) ∂ϕ^k/∂a^σ = 0    (k = 1, …, m; i = 1, …, n)
which, unless the derivatives ∂λ_τ^σ/∂x^i vanish, would express a linear dependence of the first δ rows of the first matrix (3.6.4). But the first δ rows are linearly independent by condition. Hence
∂λ_τ^σ/∂x^i = 0    (i = 1, …, n).
The latter equations mean that the coefficients λ_τ^σ in Eqs. (3.6.6) are independent of x and can be functions of the parameters a only: λ_τ^σ = λ_τ^σ(a).
Therefore, equations (3.6.6) can be considered as equations which the functions ϕ^k(x, a) satisfy as functions of the parameters a for any fixed x. One can readily see that these equations generate a complete system (see the definition (3.3.18)). Indeed,
the rank of the matrix composed of the coordinates of the operators
Y_τ = ∂/∂a^τ − λ_τ^σ(a) ∂/∂a^σ    (τ = δ + 1, …, r)
is obviously equal to r − δ, and the commutator [Y_{τ1}, Y_{τ2}] of any two of them should be expressed linearly through these operators; otherwise we would have some linear dependencies among the first δ rows of the first matrix (3.6.4), which contradicts the assumption. Note that the system (3.6.6) contains r independent variables and r − δ equations for a fixed k. Therefore, according to Lemma 3.2, it has r − (r − δ) = δ functionally independent solutions, which can be chosen to be functions of the variables a only. Let us denote these solutions by B^ε(a) (ε = 1, …, δ). Since every function ϕ^k(x, a) satisfies the same system as B(a), equalities of the form (3.6.5) should hold with the derived B(a) due to Lemma 3.2. Lemma 3.6 is proved.
The second lemma deals with a property of the prolongation of the group G_r^N. We assume as before that a basis of the Lie algebra L_r^N is given by the operators
X_α = ξ_α^i ∂/∂x^i + η_α^k ∂/∂u^k    (α = 1, …, r)    (3.6.7)
and that the prolongation of these operators with respect to the functions u(x) has the form
X̃_α = X_α + ζ_{αi}^k ∂/∂p_i^k.
Consider the matrices of the coordinates of the operators X_α and X̃_α:
M = (ξ_α^i, η_α^k),  M̃ = (ξ_α^i, η_α^k, ζ_{αi}^k).
Lemma 3.7. If the group G_r^N is intransitive and the general ranks of the matrices M and M̃ are equal to each other, then the operators (3.6.7) are linearly unconnected.
Proof. Contrary to the statement, let us assume that the operators (3.6.7) are linearly connected. We can assume, without loss of generality, that the first R operators X_σ (σ = 1, …, R) are linearly unconnected, and that the last r − R operators X_τ are expressed via the first ones by the formulae
X_τ = ω_τ^σ X_σ    (τ = R + 1, …, r),    (3.6.8)
where ω_τ^σ = ω_τ^σ(x, u). Let us lead this assumption to a contradiction by proving that the equality of the ranks of the matrices M and M̃ implies that all ω_τ^σ are constants. Then equations (3.6.8) would mean that the operators (3.6.7) are linearly dependent and hence do not provide a basis of L_r^N.
The equation
R(M) = R(M̃) = R
shows that the prolonged operators should also be linearly connected with the same coefficients ω_τ^σ:
X̃_τ = ω_τ^σ X̃_σ    (τ = R + 1, …, r).    (3.6.9)
Indeed, otherwise one could find a non-vanishing minor of the order R + 1 in the matrix M̃. Let us write Eqs. (3.6.8) for the coordinates of the operators (3.6.7):
ξ_τ^i = ω_τ^σ ξ_σ^i,  η_τ^k = ω_τ^σ η_σ^k.    (3.6.10)
Taking into account Eqs. (3.6.10), we write the relations (3.6.9) in the coordinate form as follows:
ζ_{τi}^k = ω_τ^σ ζ_{σi}^k
or, by virtue of the formulae (1.4.10) of Chapter 1,
(∂ω_τ^σ/∂x^i) η_σ^k + (∂ω_τ^σ/∂u^l) η_σ^k p_i^l − (∂ω_τ^σ/∂x^i) ξ_σ^j p_j^k − (∂ω_τ^σ/∂u^l) ξ_σ^j p_i^l p_j^k = 0,    (3.6.11)
where i = 1, …, n; k = 1, …, m; τ = R + 1, …, r. Since the functions ξ, η, ω are independent of the variables p, and (3.6.11) must be an identity with respect to the independent variables x, u, p, equations (3.6.11) are easily “split” with respect to the variables p, which leads to three series of equations
(∂ω_τ^σ/∂x^i) η_σ^k = 0,    (a)
(∂ω_τ^σ/∂u^l) η_σ^k δ_i^j − (∂ω_τ^σ/∂x^i) ξ_σ^j δ_l^k = 0,    (b)
(∂ω_τ^σ/∂u^l) ξ_σ^j = 0.    (c)
Let us consider the matrix M_R = (ξ_σ^i, η_σ^k), similar to the matrix M but composed of the coordinates of only the first R operators (3.6.7). Let us assume that σ numbers the columns of the matrix M_R, so that it has R columns and N rows. Since the group
G_r^N is intransitive, R < N and there is a row in the matrix M_R after eliminating which the remaining matrix M_R′ still has the rank R. Let it be, for example, the row with the number i_0 (the reasoning for a number k_0 is similar). If we let i = i_0 and j ≠ i_0 in Eqs. (a), (b) and introduce the quantities
∂ω_τ^σ/∂x^{i_0} = z_0^σ,
we obtain the system of linear homogeneous equations
η_σ^k z_0^σ = 0,  ξ_σ^j z_0^σ = 0  (j ≠ i_0).
The matrix of this system is exactly the matrix M_R′. Since there are exactly R “unknowns” z_0^σ, the equality R(M_R′) = R provides that
z_0^σ = 0    (σ = 1, …, R).
Similarly, for a fixed l we introduce the quantities
∂ω_τ^σ/∂u^l = v^σ.
Then, along with Eqs. (c), we obtain the following system of linear homogeneous equations with the matrix M_R:
η_σ^k v^σ = 0,  ξ_σ^j v^σ = 0,
whence
v^σ = 0    (σ = 1, …, R).
η_σ^k z^σ = 0,  ξ_σ^j z^σ = 0,
X_1 = ∂/∂x,  X_2 = ∂/∂u,  X_3 = x ∂/∂x + u ∂/∂u.
Prolonging these operators with respect to the function u(x), one obtains
X̃_α = X_α    (α = 1, 2, 3).
Therefore, R(M) = R(M̃) = 2. However, the statement of Lemma 3.7 is not satisfied, since the operators X_α are linearly connected:
X_3 = xX_1 + uX_2.
This happened due to the fact that the group G_3^2 under consideration is transitive.
Proof. Let Φ be the considered partially invariant H-solution and Φ_a the manifold derived from Φ under the action of the transformation T_a ∈ H. Let us write the equations for Φ_a in the form
u^k = ϕ^k(x, a),    (3.6.12)
so that
p_i^k = ∂ϕ^k(x, a)/∂x^i.
If one substitutes the known expressions a^σ = a^σ(x, u, ā) into the latter equations, then the result will contain no variables ā. Indeed, the resulting equations should be equivalent (on the solution Φ) to equations of the passive system (P) by virtue of Property 3.1. Furthermore, the system (P) is invariant with respect to the group H and does not contain the parameters a. It follows that the rank of the matrix
(∂ϕ^k(x, a)/∂a^α,  ∂²ϕ^k(x, a)/∂x^i∂a^α)
is equal to the rank of the matrix (3.6.13), i.e. δ. By virtue of Lemma 3.6, one can conclude that there are functions B^1(a), …, B^δ(a) such that the right-hand sides of (3.6.12) depend only on x and the variables B. Therefore, the part of Eqs. (3.6.12) which is not invariant can be reduced to the form
u^k = ϕ^k(x, B^1(a), …, B^δ(a))    (k = 1, …, δ).
Note that, according to the above assumption, this part consists of the first δ equations of (3.6.12).
Consider the possible transformations T_a ∈ H satisfying the system of equations
B^ε(a) = B^ε(0)    (ε = 1, …, δ).    (3.6.15)
Since Φ_0 = Φ, all such T_a have the property of leaving the manifold Φ invariant. Further, if Φ is subjected first to the transformation T_a and then to the transformation T_b from the set of transformations with the property (3.6.15), then one obtains the transformation T_b T_a = T_{ϕ(a,b)} having the property (3.6.15) again. Therefore, the set of all T_a ∈ H with the property (3.6.15) is a group, namely a subgroup H′ ⊂ H. Since all T_a ∈ H′ leave the manifold Φ invariant, we conclude that Φ is an invariant H′-solution. Note that (3.6.15) defines exactly an (r − δ)-dimensional manifold in the parametric space of the group H, for otherwise M would not be the smallest manifold containing Φ. Therefore, the order of the group H′ is equal to r′ = r − δ.
Let us prove the statement about the rank. Let R be the rank of the matrix M composed from the coordinates of the basis operators of the group H. Then the number of invariants is equal to t = N − R. Let us consider the prolonged group H̃. Since the invariant equations (P) allow one to find expressions for all the derivatives p_i^k due to Property 3.1, the increase of the number of invariants in the transition from H to H̃ cannot be smaller than the number of these variables p_i^k, i.e. mn. This follows from Theorem 3.3. But this increase cannot be greater than mn, for the dimension of the space
ρ′ = n − R′ = n − r′ = n − r + δ = n − R + δ = ρ.
(ρ² f_ρ² − g_ρ)ρ_x = ρf_t + ρ(f − ρf_ρ)f_x + g_x,
f_ρ(ρg_ρ − γg)ρ_x = g_t + fg_x − (ρg_ρ − γg)f_x,    (3.6.17)
ρ_t + (f + ρf_ρ)ρ_x + ρf_x = 0.
It is not possible to find both derivatives ρ_t, ρ_x from these equations only if (in the general case)
ρ² f_ρ² = g_ρ,  ρg_ρ = γg.
Let us limit our consideration to the case γ ≠ 0, 1, 3. Then the general solution of the latter equations is written in the form
g = (a²/γ) ρ^γ,  f = (2a/(γ − 1)) ρ^{(γ−1)/2} + b,
where a = a(t, x), b = b(t, x) are arbitrary functions. Substituting these expressions into the first two equations (3.6.17), one can see that they reduce to
a_t = a_x = b_t = b_x = 0,
3.7 Some problems 139
Solutions of Eqs. (3.3.12), where the variables are connected by the relations
(3.6.18), are known in gas dynamics as simple waves.
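That these expressions satisfy the degeneracy conditions ρ²f_ρ² = g_ρ and ρg_ρ = γg can be checked with sympy (our verification, with a and b treated as constants):

```python
import sympy as sp

rho, gamma, a = sp.symbols('rho gamma a', positive=True)
b = sp.Symbol('b')

# Simple-wave profiles: g = (a^2/gamma)*rho^gamma and
# f = (2a/(gamma-1))*rho^((gamma-1)/2) + b, with a, b constants.
g = a**2 / gamma * rho**gamma
f = 2*a / (gamma - 1) * rho**((gamma - 1) / 2) + b

# Degeneracy conditions under which rho_t, rho_x cannot both be found:
assert sp.simplify(rho**2 * sp.diff(f, rho)**2 - sp.diff(g, rho)) == 0
assert sp.simplify(rho * sp.diff(g, rho) - gamma * g) == 0
```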
In conclusion of the present lecture notes let us discuss some problems useful for
further development of the theory and applications of group properties of differential
equations.
Since a theory is enriched by accumulating examples of its application, we point out that it is desirable to have the admissible groups calculated for as wide classes of partial differential equations as possible. In particular, the following problems, unsolved so far, can be singled out.
Problem 3.1. Find the group admitted by an arbitrary linear differential equation with constant coefficients
P(D)u = 0,
where D is the vector
D = (∂/∂x^1, …, ∂/∂x^n)
and P(D) is a polynomial with constant coefficients.
Problem 3.2. Find the group admitted by a system of linear partial differential
equations of the first order with constant coefficients.
Problem 3.3. Make the group classification of equations of magnetohydrodynam-
ics in the three-dimensional case.
Problem 3.4. Find the group admitted by Einstein's equations of general relativity.
For some systems of equations the admissible group is already known, but there
is no complete classification of partially invariant solutions. In particular, this is the
case with equations of gas dynamics.
Problem 3.5. Classify partially invariant solutions of equations of gas dynamics in
two-dimensional and three-dimensional cases.
At present there are a lot of examples of nonlinear systems of equations admit-
ting an infinite Lie algebra of operators. However, the issue of using an infinite Lie
algebra for constructing classes of partial solutions of such systems of equations is
not sufficiently investigated.
Problem 3.6. Elaborate efficient algorithms of using an admissible infinite Lie al-
gebra for constructing classes of partial solutions of the corresponding equations.
In classifying partial solutions, e.g. invariant H-solutions, we face the necessity of constructing classes of similar subalgebras of a given Lie algebra. In some cases the problem has been investigated; however, computational difficulties arise in applications.
Problem 3.7. Elaborate efficient algorithms of constructing classes of similar sub-
algebras of a given Lie algebra.
Together with solutions derived on the basis of finite invariants of a group G_r^N, one can pose the question of finding solutions with the application of differential invariants. It is possible that this will enrich the stock of partial solutions provided by the group properties of a system (S). This issue has probably not been investigated at all.
Problem 3.8. Develop a theory of differential invariant and partially differential
invariant solutions of differential equations with a known admissible group.
When searching for invariant H-solutions of the system (S), one obtains a new system (S/H). A group admitted by the system (S/H) should be somehow connected with the properties of the system (S), in particular, with the group G admitted by the system (S). The only result that can be easily obtained is the following: if H is a normal subgroup of G, then the system (S/H) admits the factor group G/H. However, examples demonstrate that the most general group admitted by the system (S/H) can be considerably wider than the factor group.
Problem 3.9. Develop methods for finding the group admitted by the system (S/H) directly in terms of the system (S) and the subgroup H.
An important part in enumerating partially invariant solutions of a system (S)
is played, as we could already see, by properties of reduction of such solutions
to a smaller invariance defect. Only a particular case of such reduction, based on
Property 3.1, is mentioned in the lecture notes. The general situation with reduction
of partially invariant solutions is not clear.
Problem 3.10. Find theorems on reduction for systems (S) and groups H with more
general properties than Property 3.1.