
Lectures on the Theory of
Group Properties of
Differential Equations

微分方程群性质理论讲义

Edited by N. H. Ibragimov

HIGHER EDUCATION PRESS

NONLINEAR PHYSICAL SCIENCE
NONLINEAR PHYSICAL SCIENCE
Nonlinear Physical Science focuses on recent advances of fundamental theories and
principles, analytical and symbolic approaches, as well as computational techniques
in nonlinear physical science and nonlinear mathematics with engineering applications.
Topics of interest in Nonlinear Physical Science include but are not limited to:
- New findings and discoveries in nonlinear physics and mathematics
- Nonlinearity, complexity and mathematical structures in nonlinear physics
- Nonlinear phenomena and observations in nature and engineering
- Computational methods and theories in complex systems
- Lie group analysis, new theories and principles in mathematical modeling
- Stability, bifurcation, chaos and fractals in physical science and engineering
- Nonlinear chemical and biological physics
- Discontinuity, synchronization and natural complexity in the physical sciences

SERIES EDITORS
Albert C.J. Luo
Department of Mechanical and Industrial Engineering
Southern Illinois University Edwardsville
Edwardsville, IL 62026-1805, USA
Email: aluo@siue.edu

Nail H. Ibragimov
Department of Mathematics and Science
Blekinge Institute of Technology
S-371 79 Karlskrona, Sweden
Email: nib@bth.se

INTERNATIONAL ADVISORY BOARD


Ping Ao, University of Washington, USA; Email: aoping@u.washington.edu
Jan Awrejcewicz, The Technical University of Lodz, Poland; Email: awrejcew@p.lodz.pl
Eugene Benilov, University of Limerick, Ireland; Email: Eugene.Benilov@ul.ie
Eshel Ben-Jacob, Tel Aviv University, Israel; Email: eshel@tamar.tau.ac.il
Maurice Courbage, Université Paris 7, France; Email: maurice.courbage@univ-paris-diderot.fr
Marian Gidea, Northeastern Illinois University, USA; Email: mgidea@neiu.edu
James A. Glazier, Indiana University, USA; Email: glazier@indiana.edu
Shijun Liao, Shanghai Jiaotong University, China; Email: sjliao@sjtu.edu.cn
Jose Antonio Tenreiro Machado, ISEP-Institute of Engineering of Porto, Portugal; Email: jtm@dee.isep.ipp.pt
Nikolai A. Magnitskii, Russian Academy of Sciences, Russia; Email: nmag@isa.ru
Josep J. Masdemont, Universitat Politecnica de Catalunya (UPC), Spain; Email: josep@barquins.upc.edu
Dmitry E. Pelinovsky, McMaster University, Canada; Email: dmpeli@math.mcmaster.ca
Sergey Prants, V.I. Il’ichev Pacific Oceanological Institute of the Russian Academy of Sciences, Russia;
Email: prants@poi.dvo.ru
Victor I. Shrira, Keele University, UK; Email: v.i.shrira@keele.ac.uk
Jian Qiao Sun, University of California, USA; Email: jqsun@ucmerced.edu
Abdul-Majid Wazwaz, Saint Xavier University, USA; Email: wazwaz@sxu.edu
Pei Yu, The University of Western Ontario, Canada; Email: pyu@uwo.ca
L.V. Ovsyannikov

Lectures on the Theory of
Group Properties of
Differential Equations

微分方程群性质理论讲义
Weifen Fangcheng Qunxingzhi Lilun Jiangyi

Edited by N.H. Ibragimov

Translated by E. D. Avdonina, N.H. Ibragimov

Author:
L.V. Ovsyannikov
Councilor of the Russian Academy of Sciences
Lavrentyev Institute of Hydrodynamics
Lavrentyev pr. 15, Novosibirsk, 630090, Russia
E-mail: ovs@hydro.nsc.ru

Editor:
Nail H. Ibragimov
Department of Mathematics and Science
Blekinge Institute of Technology
371 79 Karlskrona, Sweden
E-mail: nib@bth.se

© 2013 Higher Education Press Limited Company, 4 Dewai Dajie, 100120, Beijing, P.R. China

ISBN 978-7-04-036944-1

All rights reserved.
Editor’s preface

When I studied at Novosibirsk State University (Russia) I was lucky to have such
brilliant teachers in mathematics as M.A. Lavrentyev, S.L. Sobolev, A.I. Mal’tsev,
Yu.G. Reshetnyak and others. But it was L.V. Ovsyannikov’s lectures in Ordinary
differential equations, Partial differential equations, Gas dynamics and Group prop-
erties of differential equations that were of the most benefit to me. I attended his
course “Group properties of differential equations” when I was a third-year student.
His lectures provided a clear introduction to Lie group methods for determining
symmetries of differential equations and a variety of their applications in gas dy-
namics and other nonlinear models, as well as to Ovsyannikov’s remarkable contri-
bution to this classical subject. His lectures were spectacular not only due to the
brilliant presentation of the material but also due to the discoveries that were abso-
lutely new to us. I remember the repeated exclamations of one of our most emotional
students, “Wonderful! . . . Incredible!”, every time Ovsyannikov revealed the most
unusual properties of symmetries or unexpected methods.
His lecture notes for this course were published in 1966 in a print run of only
300 copies. Since then the Notes have been neither reprinted nor translated into
English, though they contain material that is useful for students and teachers but
cannot be found in modern texts. For example, the theory of partially invariant
solutions developed by Ovsyannikov and presented in §3.5 and §3.6 is useful for
investigating mathematical models described by systems of nonlinear differential
equations. It is important to make this classical text available to everyone interested
in modern group analysis.
In order to adapt the text for modern students I made several minor changes
in the English translation. In particular, sections have been divided into subsections
and a few misprints have been corrected. Some of the problems formulated in §3.7
have been completely or partially solved since 1966, but we have not made any
comments on this matter in the present translation.

January 2013 Nail H. Ibragimov


Preface

The theory of differential equations has two aspects of investigation, namely local
and global, no matter whether the equations arise from applied problems of physics
and mechanics or from abstract speculations (which is rather frequent in modern
mathematics). The local aspect is characterized by dealing with the inner structure
of a family of solutions and its investigation in a neighborhood of a certain point.
The global approach deals with solutions defined in some domain and having a given
behavior on its boundary.
It would certainly be erroneous to oppose these directions to each other. How-
ever, it is no good to ignore the differences in approaches either. While the global
approach necessitates the functional analytic apparatus, the local viewpoint allows
one to get along with algebraic means only. A brilliant example of a profound lo-
cal consideration is the famous Cauchy-Kovalevskaya theorem which is, in fact,
an algebraic statement. Moreover, it is an easy matter to notice that the theory of
boundary value problems also makes an essential application of various algebraic
properties of the whole family of solutions. Therefore, the local aspect of the alge-
braic theory of differential equations is quite vital.
The theory of group properties of differential equations described in the present
lecture notes is a typical example of a local theory. It is especially valuable in inves-
tigating nonlinear differential equations, for its algorithms act here as reliably as in
the linear case.
In spite of the fact that the fundamentals of the theory of group properties were
elaborated in works of the Norwegian mathematician Sophus Lie more than a hun-
dred years ago, its development is desirable nowadays as well.
A methodological peculiarity of the present text is that its first chapter uses only the
simplest algebraic apparatus of one-parameter groups, which is especially advisable
for researchers engaged in applied fields. This allows one to solve completely the
problem of finding the group admitted by a given system of differential equations.
The second chapter is tailored to provide the deeper insight into the subject that
results from solving the determining equations. The group structure of the family of
solutions itself is discussed in the third chapter, which also suggests some new
elements for the theory. The latter are related to the notions of a partially invariant
manifold of the group, its defect of invariance, and the problem of reduction of
partially invariant solutions. The final section §3.7 suggests a qualitative formulation
of several problems demonstrating possibilities for further development of the theory,
with no claim to completeness.
The present lecture notes were written hot on the heels of a special course given
by the author at Novosibirsk State University during the 1965/1966 academic year.
Such a prompt decision was made in order to have the lecture notes published by
the spring examinations. Therefore, the lectures may appear “raw” in many respects,
and the author is ready to take complete responsibility for that.
The quick release of the lecture notes would have been impossible without the
support of the university administration. Major technical work in preparing the
manuscript was done by the students V.G. Firsov, E.Z. Borovskaya, T.E. Kuzmina,
N.I. Naumenko, M.L. Kochubievskaya and others. The author is sincerely grateful
to all these people.

Novosibirsk, Russia, May 1966 L.V. Ovsyannikov


Contents

Editor’s preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii

1 One-parameter continuous transformation groups admitted by
differential equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 One-parameter continuous transformation group . . . . . . . . . . . . . . . . . 1
1.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.2 Canonical parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.4 Auxiliary functions of groups . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Infinitesimal operator of the group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.1 Definition and examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.2 Transformation of functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.3 Change of coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Invariants and invariant manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.1 Invariants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3.2 Invariant manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.3 Invariance of regularly defined manifolds . . . . . . . . . . . . . . . . 15
1.4 Theory of prolongation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.1 Prolongation of the space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4.2 Prolonged group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.4.3 First prolongation of the group operator . . . . . . . . . . . . . . . . . 19
1.4.4 Second prolongation of the group operator . . . . . . . . . . . . . . . 22
1.4.5 Properties of prolongations of operators . . . . . . . . . . . . . . . . . 24
1.5 Groups admitted by differential equations . . . . . . . . . . . . . . . . . . . . . . . 28
1.5.1 Determining equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.5.2 First-order ordinary differential equations . . . . . . . . . . . . . . . . 30
1.5.3 Second-order ordinary differential equations . . . . . . . . . . . . . 32
1.5.4 Heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.5.5 Gasdynamic equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

1.6 Lie algebra of operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47


1.6.1 Commutator. Definition of a Lie algebra . . . . . . . . . . . . . . . . . 47
1.6.2 Properties of commutator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.6.3 Lie algebra of admitted operators . . . . . . . . . . . . . . . . . . . . . . . 51

2 Lie algebras and local Lie groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


2.1 Lie algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.1.1 Definition and examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.1.2 Subalgebra and ideal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.1.3 Structure of finite-dimensional Lie algebras . . . . . . . . . . . . . . 55
2.2 Adjoint algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.2.1 Inner derivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.2.2 Adjoint algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.2.3 Inner automorphisms of a Lie algebra . . . . . . . . . . . . . . . . . . . 60
2.3 Local Lie group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.3.1 Coordinates in a group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
2.3.2 Subgroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
2.3.3 Canonical coordinates of the first kind . . . . . . . . . . . . . . . . . . . 64
2.3.4 First fundamental theorem of Lie . . . . . . . . . . . . . . . . . . . . . . . 65
2.3.5 Second fundamental theorem of Lie . . . . . . . . . . . . . . . . . . . . . 69
2.3.6 Properties of canonical coordinate systems of
the first kind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
2.3.7 Third fundamental theorem of Lie . . . . . . . . . . . . . . . . . . . . . . 71
2.3.8 Lie algebra of a local Lie group . . . . . . . . . . . . . . . . . . . . . . . . 73
2.4 Subgroup, normal subgroup and factor group . . . . . . . . . . . . . . . . . . . 75
2.4.1 Lemma on commutator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.4.2 Subgroup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.4.3 Normal subgroup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
2.4.4 Factor group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.5 Inner automorphisms of a group and of its Lie algebra . . . . . . . . . . . . 82
2.5.1 Inner automorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.5.2 Lie algebra of G_A and adjoint algebra of L_r . . . . . . . . . . . . . . 83
2.6 Local Lie group of transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.6.2 Lie’s first theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.6.3 Lie’s second theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
2.6.4 Canonical coordinates of the second kind . . . . . . . . . . . . . . . . 88

3 Group invariant solutions of differential equations . . . . . . . . . . . . . . . . . 91


3.1 Invariants of the group G^N_r . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1.1 Invariance criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1.2 Functional independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.1.3 Linearly unconnected operators . . . . . . . . . . . . . . . . . . . . . . . . 93
3.1.4 Integration of Jacobian systems . . . . . . . . . . . . . . . . . . . . . . . . 95
3.1.5 Computation of invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

3.2 Invariant manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98


3.2.1 Invariant manifolds criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
3.2.2 Induced group and its Lie algebra . . . . . . . . . . . . . . . . . . . . . . . 98
3.2.3 Theorem on representation of nonsingular invariant
manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.2.4 Differential invariant manifolds . . . . . . . . . . . . . . . . . . . . . . . . 102
3.3 Invariant solutions of differential equations . . . . . . . . . . . . . . . . . . . . . 103
3.3.1 Definition of invariant solutions . . . . . . . . . . . . . . . . . . . . . . . . 103
3.3.2 The system (S/H) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
3.3.3 Examples from one-dimensional gas dynamics . . . . . . . . . . . 109
3.3.4 Self-similar solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
3.4 Classification of invariant solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
3.4.1 Invariant solutions for similar subalgebras . . . . . . . . . . . . . . . 116
3.4.2 Classes of similar subalgebras . . . . . . . . . . . . . . . . . . . . . . . . . 118
3.5 Partially invariant solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.5.1 Partially invariant manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
3.5.2 Defect of invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
3.5.3 Construction of partially invariant solutions . . . . . . . . . . . . . . 125
3.6 Reduction of partially invariant solutions . . . . . . . . . . . . . . . . . . . . . . . . 129
3.6.1 Statement of the reduction problem . . . . . . . . . . . . . . . . . . . . . 129
3.6.2 Two auxiliary lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3.6.3 Theorem on reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.7 Some problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Chapter 1
One-parameter continuous transformation
groups admitted by differential equations

1.1 One-parameter continuous transformation group

1.1.1 Definition

Let us consider transformations T of an N-dimensional Euclidean space E^N into
itself, so that one has

x′ = Tx ∈ E^N

for x ∈ E^N. If the point x has coordinates x^1, …, x^N, then this transformation can be
given by the system of equalities

x′^i = f^i(x) = f^i(x^1, …, x^N)  (i = 1, …, N).  (1.1.1)

The functions f^i are assumed to be thrice continuously differentiable and locally
invertible, which means that there exists an inverse transformation T⁻¹ for the trans-
formation T in some neighborhood of the point x′ = Tx, so that x = T⁻¹x′.

The product of transformations T₁T₂ is the transformation T consisting in the suc-
cessive transformations T₂ and then T₁. The identity transformation plays the role of
the unit with respect to the product.

The above product is written in terms of the functions f by the following equations:

f^i(x) = f₁^i(f₂^1(x), …, f₂^N(x))  (i = 1, …, N).

This operation of multiplication is associative, i.e.

T₁(T₂T₃) = (T₁T₂)T₃.

Note that the inversion of the product of transformations is given by

(T₁T₂)⁻¹ = T₂⁻¹T₁⁻¹.

We will consider a family of transformations {T_a} with the above properties de-
pending on a real parameter a that varies within an interval Δ.

The family {T_a} is said to be locally closed with respect to the product if there
exists a subinterval Δ′ ⊂ Δ such that

T_b T_a ∈ {T_a}

for any a, b ∈ Δ′. This leads to a function c = ϕ(a, b) which determines the multi-
plication law for transformations of {T_a} according to the formula

T_b T_a = T_c.

The transformation T_a is written in coordinates in the following form similar to
Eqs. (1.1.1):

T_a: x′^i = f^i(x, a)  (i = 1, …, N).  (1.1.2)

The product of transformations is written in terms of the functions (1.1.2) and the
multiplication law ϕ(a, b) as follows:

T_b T_a = T_{ϕ(a,b)}:  f^i(f(x, a), b) = f^i(x, ϕ(a, b))  (i = 1, …, N).  (1.1.3)

Definition 1.1. The family {T_a} is called a local one-parameter continuous trans-
formation group if it is locally closed with respect to the product and if the interval
Δ′ can be chosen so that the following conditions hold.

1. There exists a unique value a₀ ∈ Δ′ such that T_{a₀} is the identity transformation.
2. The function ϕ(a, b) is thrice continuously differentiable, and the equation
ϕ(a, b) = a₀ has a unique solution b = a⁻¹ for any a ∈ Δ′.

Condition 2 means that the operation of inversion of transformations, (T_a)⁻¹ =
T_{a⁻¹}, is possible in {T_a}.

Hereafter the symbol a⁻¹ indicates only a definite value of the parameter and not
the inverse of the number a, so that a⁻¹ ≠ 1/a.

The choice of the interval Δ′ is not unique, generally speaking. Once such an interval
is selected, one can take any smaller interval instead of Δ′. This means that we are
interested only in some sufficiently small neighborhood of a₀. The operations of
multiplication and inversion of the transformations T_a are feasible only for values of
the parameter a from this neighborhood. Therefore, the object introduced by
Definition 1.1 is termed a local group. In what follows, the sufficient closeness of
all considered values of the parameters a, b, … to the value a₀ is ensured.

Further on, the term “group G₁” will be used to indicate a local one-parameter
continuous transformation group.

1.1.2 Canonical parameter

Generally, introduction of a new parameter ā = ā(a), where ā(a) is a thrice con-
tinuously differentiable monotonic function, changes ϕ, Δ and Δ′.

In what follows, we assume that a₀ = 0 without loss of generality. Note that in
this case the definition leads to the following properties of the function ϕ(a, b):

ϕ(0, 0) = 0,  ϕ(a, 0) = a,  ϕ(0, b) = b,
ϕ(a, a⁻¹) = ϕ(a⁻¹, a) = 0.  (1.1.4)

The parameter a is said to be canonical if the multiplication law is given by

ϕ(a, b) = a + b.

Then a⁻¹ = −a and equations (1.1.3) take the form

f^i(f(x, a), b) = f^i(x, a + b)  (i = 1, …, N).  (1.1.5)

Theorem 1.1. A canonical parameter can be introduced in any one-parameter
group.

Proof. Let T_c = T_b T_a, so that c = ϕ(a, b). Let us give a small increment Δb to the
parameter b; then c receives a small increment Δc, so that

ϕ(a, b + Δb) = c + Δc.

For transformations this is written by the formula

T_{b+Δb} T_a = T_{c+Δc}.

Multiplying both sides on the right by

T_c⁻¹ = T_{a⁻¹} T_{b⁻¹},

one obtains

T_{c+Δc} T_c⁻¹ = T_{b+Δb} T_b⁻¹

due to the associativity of the multiplication law. This equality has the form

ϕ(c⁻¹, c + Δc) = ϕ(b⁻¹, b + Δb)  (1.1.6)

in terms of the function ϕ.

Let

V(b) = ∂ϕ(a, b)/∂b |_{a=b⁻¹}.

Taylor’s formula and the equation ϕ(b⁻¹, b) = 0 yield

ϕ(b⁻¹, b + Δb) = V(b)Δb + O(|Δb|²).

Applying this equation to Eq. (1.1.6) and invoking that |Δc| = O(|Δb|), one
obtains

V(c)Δc = V(b)Δb + O(|Δb|²).  (1.1.7)

Dividing both sides of Eq. (1.1.7) by Δb and taking the limit Δb → 0, one arrives
at the differential equation

V(c) dc/db = V(b)  (1.1.8)

with the initial condition

c|_{b=0} = a.

Furthermore, equations (1.1.4) show that V(0) = 1.

Let us introduce the function

ā(a) = ∫₀^a V(s) ds.

Then the function c = ϕ(a, b), determined by the relation

ā(c) = ā(a) + ā(b),  (1.1.9)

is a solution to Eq. (1.1.8).

The function ā(a) is obviously monotonic and thrice continuously differen-
tiable with respect to a. Taking it as a new parameter, one obtains that the new
parameter is canonical due to Eq. (1.1.9).
Corollary 1.1. Any one-parameter transformation group is Abelian. Indeed, if a is
a canonical parameter then, according to the definition, one has

T_b T_a = T_{a+b} = T_{b+a} = T_a T_b.

1.1.3 Examples

Example 1.1. Translations on a straight line:

x′ = x + a.

Here
ϕ(a, b) = a + b.

Translations in an N-dimensional space in the direction of the vector λ = (λ^1, …,
λ^N) are given by

x′^i = x^i + λ^i a  (i = 1, …, N).

Example 1.2. Dilations:

x′ = ax  (Δ = (0, ∞)).

Here
ϕ(a, b) = ab.

Setting a = e^ā, one arrives at the canonical parameter ā.

The group G₁ of dilations in an N-dimensional space has the form

x′^i = a^{ν^i} x^i,

where ν^i = const. (i = 1, …, N).
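As a concrete check of the construction in the proof of Theorem 1.1, the canonical parameter of the dilation group can be computed symbolically. The sketch below uses Python with sympy (an assumption, not part of the text); since the identity of the dilation group corresponds to a₀ = 1 rather than a₀ = 0, the integral defining ā is taken from 1, and V(a₀) = 1 still holds.

```python
import sympy as sp

a, b, s = sp.symbols('a b s', positive=True)

# Multiplication law of the dilation group x' = a*x; the identity is a0 = 1.
phi = a * b

# V(b) = d(phi)/db evaluated at a = b^{-1}, as in the proof of Theorem 1.1.
V = sp.diff(phi, b).subs(a, 1/b)                 # gives 1/b, so V(1) = 1

# Canonical parameter: abar(a) = integral of V(s) ds from the identity to a.
abar = sp.integrate(V.subs(b, s), (s, 1, a))     # gives log(a)

# Additivity check: abar(phi(a, b)) = abar(a) + abar(b).
assert sp.simplify(abar.subs(a, a*b) - abar - abar.subs(a, b)) == 0
```

This reproduces the substitution a = e^ā of Example 1.2: ā = log a turns the multiplication law ϕ(a, b) = ab into addition.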

Example 1.3. Group of rotations in the plane (x, y):

x′ = √(1 − a²) x + ay,  y′ = −ax + √(1 − a²) y.

It is clear from the geometric meaning of these transformations that the transition
to the canonical parameter is given by the formula a = sin ā. In this parameter the
rotation transformations take the standard form

x′ = x cos ā + y sin ā,  y′ = y cos ā − x sin ā,

where ā ∈ (−π, π).
Example 1.4. The transformations

x′ = x/(1 − ax),  y′ = y/(1 − ax)

form the group of projective transformations in the (x, y) plane. Here a is a canoni-
cal parameter.
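The claim that a is canonical here can be verified directly: composing two projective transformations with parameters a and b must give the single transformation with parameter a + b. A small symbolic check with Python's sympy (the library choice is an assumption):

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

def T(x, y, a):
    """Projective transformation of Example 1.4."""
    return x / (1 - a*x), y / (1 - a*x)

# Compose T_b after T_a and compare with the single transformation T_{a+b}.
x1, y1 = T(x, y, a)
x2, y2 = T(x1, y1, b)
xc, yc = T(x, y, a + b)

assert sp.simplify(x2 - xc) == 0
assert sp.simplify(y2 - yc) == 0
```

So ϕ(a, b) = a + b, i.e. the parameter is canonical, as stated.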

1.1.4 Auxiliary functions of groups

In what follows, we assume that the parameter a in the groups G₁ under consideration
is chosen to be canonical unless otherwise indicated.

Let us relate the auxiliary functions

ξ^i(x) = ∂f^i(x, a)/∂a |_{a=0}  (i = 1, …, N)  (1.1.10)

to the group G₁ given by Eqs. (1.1.2).
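For instance, for the projective group of Example 1.4 the functions (1.1.10) come out as ξ¹ = x², ξ² = xy. A sketch of this differentiation in Python's sympy (the library is an assumption):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

# Projective group of Example 1.4: x' = x/(1 - a x), y' = y/(1 - a x).
f1 = x / (1 - a*x)
f2 = y / (1 - a*x)

# Auxiliary functions (1.1.10): derivatives with respect to a at a = 0.
xi1 = sp.diff(f1, a).subs(a, 0)
xi2 = sp.diff(f2, a).subs(a, 0)

assert sp.simplify(xi1 - x**2) == 0
assert sp.simplify(xi2 - x*y) == 0
```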


Theorem 1.2. The functions f^i(x, a) defining a group of transformations satisfy the
system of differential equations

∂f^i/∂a = ξ^i(f)  (i = 1, …, N)  (1.1.11)

with the initial conditions

f^i|_{a=0} = x^i  (i = 1, …, N).  (1.1.12)

Conversely, given any system of sufficiently smooth (continuously differentiable)
functions ξ^i(x), the functions f^i(x, a) obtained by solving the problem (1.1.11)–
(1.1.12) determine the group G₁.

Proof. Let us give a small increment Δa to the parameter a and write the equation
T_{a+Δa} = T_{Δa} T_a in terms of the functions f^i:

f^i(x, a + Δa) = f^i(f(x, a), Δa).

The Taylor expansion of both sides of this equation with respect to Δa has the
form

f^i(x, a + Δa) = f^i(x, a) + (∂f^i/∂a) Δa + O(|Δa|²),

f^i(f(x, a), Δa) = f^i(x, a) + ∂f^i(f, Δa)/∂Δa |_{Δa=0} · Δa + O(|Δa|²)
= f^i(x, a) + ξ^i(f(x, a)) Δa + O(|Δa|²),

where the last step uses the definition (1.1.10). Equating the right-hand sides,
dividing by Δa and letting Δa → 0, one obtains Eqs. (1.1.11). Equations (1.1.12)
hold due to the fact that T₀ is the identity transformation. Thus, the direct statement
is proved.

Conversely, let ξ^i(x) be a given set of continuously differentiable functions.
Equations (1.1.11) provide a system of ordinary differential equations with the in-
dependent variable a. The assumptions of the theorem guarantee that this system
has a unique solution in a neighborhood of a = 0. The solution provides a one-
parameter family of transformations. Let us demonstrate that it is a group G₁. It is
manifest from Eqs. (1.1.12) that we have the identity transformation when a = 0.

Let us prove that the equation T_b T_a = T_{a+b} holds in a certain neighborhood of
the value a = 0. In terms of the functions f we have to show that

f^i(f(x, a), b) = f^i(x, a + b).

Let
y^i(b) = f^i(f(x, a), b),  z^i(b) = f^i(x, a + b).

Calculating the derivatives of these functions one obtains

dy^i/db = ∂f^i(f, b)/∂b = ξ^i(y)

since the functions f^i satisfy Eqs. (1.1.11). Moreover, by virtue of Eqs. (1.1.12) one
has f^i(f(x, a), 0) = f^i(x, a), i.e.

y^i(0) = f^i(x, a).

On the other hand, the same reasoning shows that

dz^i/db = ∂f^i(x, a + b)/∂b = ξ^i(z),  z^i(0) = f^i(x, a).

Thus, the functions y^i(b) and z^i(b) satisfy one and the same system of differential
equations with the same initial data. The theorem on uniqueness of the solution of a
system of differential equations guarantees that

y^i(b) = z^i(b).

The existence of the inverse transformation follows from the fact that, letting
a⁻¹ = −a, one obtains

T_a (T_a)⁻¹ = T_a T_{−a} = T₀.

The latter is the identity transformation by virtue of Eqs. (1.1.12).

Theorem 1.2 can be used for constructing local one-parameter transformation
groups.
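As an illustration of the converse statement, the Lie equations (1.1.11) with the rotation coordinates ξ = (y, −x) can be integrated symbolically, recovering the finite rotations of Example 1.3. A sketch using sympy's dsolve (the library, and its handling of linear systems with initial conditions, are assumptions):

```python
import sympy as sp

a = sp.symbols('a')
x0, y0 = sp.symbols('x0 y0')
X, Y = sp.Function('X'), sp.Function('Y')

# Lie equations (1.1.11) for xi = (y, -x):
#   dX/da = Y,  dY/da = -X,  with initial data X(0) = x0, Y(0) = y0.
sol = sp.dsolve([sp.Eq(X(a).diff(a), Y(a)),
                 sp.Eq(Y(a).diff(a), -X(a))],
                [X(a), Y(a)], ics={X(0): x0, Y(0): y0})
sols = {eq.lhs: eq.rhs for eq in sol}

# The solution is the rotation x' = x cos a + y sin a, y' = y cos a - x sin a.
assert sp.simplify(sols[X(a)] - (x0*sp.cos(a) + y0*sp.sin(a))) == 0
assert sp.simplify(sols[Y(a)] - (y0*sp.cos(a) - x0*sp.sin(a))) == 0
```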

1.2 Infinitesimal operator of the group

1.2.1 Definition and examples

We will use the usual convention of summation with respect to repeated indices.
Namely, if the same index appears once as an upper and once as a lower index in a
term, the sign ∑ is omitted for the sake of brevity. For instance, instead of the
expression

∑_{i=1}^{N} ∑_{j=1}^{N} A_{ij} a^i b^j,

we write A_{ij} a^i b^j.
Definition 1.2. An infinitesimal operator of the group G₁ is the linear differential
operator

X = ξ^i(x) ∂/∂x^i,  (1.2.1)

where the ξ^i(x) are determined by (1.1.10). The functions ξ^i(x) are called the coor-
dinates of the operator X.

Let us write out the operators of the groups G₁ for Examples 1.1–1.4 from §1.1.
Example 1.5. The operator of the translation group x′ = x + a along the x-axis is

X = ∂/∂x.

The general translation operator in E^N along the vector λ has the form

X = λ^i ∂/∂x^i,  λ^i = const.

Example 1.6. The operator of the dilation group x′ = ax in the direction of the
x-axis is

X = x ∂/∂x.

The general dilation operator in E^N has the form

X = ∑_{i=1}^{N} ν^i x^i ∂/∂x^i,  ν^i = const.

Example 1.7. The operator of the rotation group in the plane (x, y) is

X = y ∂/∂x − x ∂/∂y.

Example 1.8. The operator

X = x² ∂/∂x + xy ∂/∂y

corresponds to the projective transformations of Example 1.4.

Example 1.9. Consider now an example of constructing the finite transformations of
a group G₁ from its infinitesimal operator. Take the operator

X = y ∂/∂x

with two variables x^1 = x, x^2 = y. Here ξ^1 = y, ξ^2 = 0, and the system (1.1.11) with
the initial conditions has the form

∂x′/∂a = y′,  ∂y′/∂a = 0,  x′|_{a=0} = x,  y′|_{a=0} = y.

It follows that y′ is independent of a, and the initial condition yields y′ = y. Now the
first equation provides x′ = x + ay. Thus, the finite transformations of the group G₁
with the operator

X = y ∂/∂x

are

x′ = x + ay,  y′ = y.

This group G₁ is known as the group of shifts.

1.2.2 Transformation of functions

Let us turn to the problem of transforming a function F(x) by the transformation
x′ = Tx. We will determine the transformed function TF by taking the value of the
initial function F(x) at the transformed point as the value TF(x) of the transformed
function, i.e. by the equation

TF(x) = F(Tx) ≡ F(x′).

If T_a ∈ G₁, then T_a F(x) is a function of the point x and the parameter a. Let us
calculate its derivative with respect to a at a = 0. Invoking Eqs. (1.1.10) and (1.2.1)
one has

∂/∂a [T_a F(x)] |_{a=0} = ∂F(x′)/∂x′^i · ∂x′^i/∂a |_{a=0} = ξ^i(x) ∂F(x)/∂x^i = XF(x),

or finally

∂/∂a [T_a F(x)] |_{a=0} = XF(x).  (1.2.2)

Hence, the principal linear part of the variation of F(x) under the transformations
T_a ∈ G₁ near a = 0 is given by the equation

T_a F(x) − F(x) = XF(x) · a + o(a).

This is the reason why the operator X is called an infinitesimal operator (i.e. the
operator of the infinitesimal transformation).
For finite transformations Ta 2 G1 the following lemma holds.
Lemma 1.1. If $x' = T_a x$, then
$$\frac{\partial}{\partial a}\,F(x') = X' F(x'), \tag{1.2.3}$$
where
$$X' = \xi^i(x')\,\frac{\partial}{\partial x'^i}\,\cdot$$

Proof. One has
$$\frac{\partial}{\partial a}\,F(x') = \frac{\partial F}{\partial x'^i}\,\frac{\partial x'^i}{\partial a} = \xi^i(x')\,\frac{\partial F}{\partial x'^i} = X' F(x'),$$
where the second equality is obtained by using Eqs. (1.1.11).
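Equation (1.2.2) is easy to verify on a concrete group. The sketch below uses the rotation group of Example 1.7 (the test function $F = x^2 y$ is an arbitrary choice) and checks that $\partial_a\,T_a F\big|_{a=0}$ coincides with $XF$:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

# Rotation group with operator X = y d/dx - x d/dy (Example 1.7):
xp = x*sp.cos(a) + y*sp.sin(a)
yp = y*sp.cos(a) - x*sp.sin(a)

F = x**2 * y   # an arbitrary test function

dFda_at_0 = sp.diff(F.subs({x: xp, y: yp}, simultaneous=True), a).subs(a, 0)
XF = y*sp.diff(F, x) - x*sp.diff(F, y)

print(sp.expand(dFda_at_0 - XF))  # 0, confirming Eq. (1.2.2)
```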
1 One-parameter continuous transformation groups admitted by differential equations

1.2.3 Change of coordinates

The system of coordinates in the space $E^N$ was not assumed to be Cartesian in the above reasoning. It is therefore important to consider how the groups $G_1$ behave under a change of coordinates. Let
$$T_a:\quad x'^i = f^i(x, a)$$
be the transformations of the group $G_1$, and let
$$T:\quad y^i = y^i(x) \qquad (i = 1, \ldots, N)$$
be a smooth change of coordinates.
One can write the transformations $T_a$ in the new coordinates, but the functions will change as well. Let
$$T_a:\quad y'^i = g^i(y, a)$$
be the expression of the transformation $T_a$ in the coordinates $y$. Let us find the connection between the functions $f$, $g$ and $y$. We have
$$y'^i = y^i(x') = y^i(f(x, a))$$
by definition. Hence
$$y^i(f(x, a)) = g^i(y(x), a). \tag{1.2.4}$$
Equations (1.2.4) hold identically in the variables $x$, $a$.

Considering the transformations $T_a:\ y'^i = g^i(y, a)$ as transformations in the system of coordinates $(x)$, one obtains new transformations $\bar{T}_a$. Equation (1.2.4) is written in this notation in the form $T T_a = \bar{T}_a T$, whence
$$\bar{T}_a = T\,T_a\,T^{-1}. \tag{1.2.5}$$
It is manifest that the family of transformations $\{\bar{T}_a\}$ derived by the formula (1.2.5) generates a new group $\bar{G}_1$. Indeed,
$$(T T_a T^{-1})(T T_b T^{-1}) = T\,T_a T_b\,T^{-1}, \qquad (T T_a T^{-1})^{-1} = T\,T_a^{-1}\,T^{-1}.$$
The group $\bar{G}_1$ is referred to as a group similar to the group $G_1$, or as a group derived from $G_1$ by the similarity transformation $T:\ y^i = y^i(x)$.
Let us find out the form of the infinitesimal operator of the group in the new system of coordinates. Every infinitesimal operator is characterized by the set of quantities $\xi^i$ $(i = 1, \ldots, N)$ that can be considered as coordinates of a vector $\xi$.

Lemma 1.2. Under a change of coordinates, the quantities $\xi^i$ behave as coordinates of a contravariant vector.

Proof. Let $\eta^i$ be the coordinates of the vector $\xi$ in the variables $(y)$. According to the definition (1.1.10) we have
$$\eta^i = \frac{\partial g^i}{\partial a}\Big|_{a=0}\,\cdot$$
Differentiating the identities (1.2.4) with respect to $a$ and setting $a = 0$, we obtain
$$\frac{\partial y^i}{\partial x^j}\,\xi^j(x) = \frac{\partial g^i}{\partial a}\Big|_{a=0} = \eta^i(y),$$
thus completing the proof.
The infinitesimal operator (1.2.1) can be considered as a scalar product of the vectors $\xi^i$ and $\dfrac{\partial}{\partial x^i}\,$. One of these vectors is contravariant, the other is covariant. The scalar product of a covariant vector and a contravariant one gives a scalar, and therefore remains unaltered under a change of coordinates. Hence,
$$X = \xi^i\,\frac{\partial}{\partial x^i} = \eta^i\,\frac{\partial}{\partial y^i}\,,$$
where
$$\eta^i = \frac{\partial y^i}{\partial x^j}\,\xi^j = X(y^i(x)).$$
Thus, the transformation formula for the operator $X$ under a change of coordinates $y^i = y^i(x)$ has the form
$$X = \xi^i\,\frac{\partial}{\partial x^i} = X(y^i)\,\frac{\partial}{\partial y^i}\,\cdot \tag{1.2.6}$$
Theorem 1.3. Any one-parameter transformation group is similar to a group of translations along one of the coordinates.

Proof. It is known that for any contravariant vector $\{\xi^1, \ldots, \xi^N\}$ (not vanishing at the point considered) there exists a system of coordinates in which this vector takes the form $\{1, 0, \ldots, 0\}$. Therefore, the statement follows from Lemma 1.2.
Example 1.10. Let us rewrite the operator
$$X = x\,\frac{\partial}{\partial y} - y\,\frac{\partial}{\partial x}$$
in the polar system of coordinates
$$r = \sqrt{x^2 + y^2}\,, \qquad \varphi = \arctan\frac{y}{x}\,\cdot$$
In this example, Equation (1.2.6) is written
$$X = X(r)\,\frac{\partial}{\partial r} + X(\varphi)\,\frac{\partial}{\partial \varphi}\,\cdot$$
We have
$$X(r) = x\,\frac{\partial r}{\partial y} - y\,\frac{\partial r}{\partial x} = 0$$
and
$$X(\varphi) = x\,\frac{\partial \varphi}{\partial y} - y\,\frac{\partial \varphi}{\partial x} = x\,\frac{x}{r^2} - y\left(-\frac{y}{r^2}\right) = 1.$$
Thus, in polar coordinates one has
$$X = \frac{\partial}{\partial \varphi}\,\cdot$$
This is the operator of translations along the coordinate $\varphi$.
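The change of variables of Example 1.10 can be checked mechanically. In the sketch below, `atan2` replaces $\arctan(y/x)$ only to keep the derivatives well defined away from the origin (an implementation choice, not from the text):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def X(F):
    # Operator X = x d/dy - y d/dx of Example 1.10
    return x * sp.diff(F, y) - y * sp.diff(F, x)

r = sp.sqrt(x**2 + y**2)
phi = sp.atan2(y, x)

print(sp.simplify(X(r)))    # 0
print(sp.simplify(X(phi)))  # 1
```

The new components $X(r) = 0$ and $X(\varphi) = 1$ are exactly the coefficients of $\partial/\partial r$ and $\partial/\partial\varphi$ in formula (1.2.6).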

1.3 Invariants and invariant manifolds

1.3.1 Invariants

Definition 1.3. A function $F(x) \not\equiv \text{const.}$ is called an invariant of the group $G_1$ if it remains unaltered under any transformation $T_a \in G_1$, i.e. if the equation
$$T_a F(x) = F(x) \qquad \text{or} \qquad F(T_a x) = F(x)$$
holds identically in $x$ and $a$.
Theorem 1.4. The necessary and sufficient condition for a function $F(x)$ to be an invariant of the group $G_1$ with the operator $X$ is the validity of the equation
$$XF(x) = 0. \tag{1.3.1}$$

Proof. Necessity follows from Eq. (1.2.2).

Sufficiency. If $X(F(x)) = 0$, then $X'(F(x')) = 0$ as well. Equation (1.2.3) yields
$$\frac{d}{da}\,F(x') = 0.$$
It follows that the equation $F(x') = F(x)$ is satisfied identically in $x$ and $a$. This completes the proof.

Equation (1.3.1) is a linear partial differential equation of the first order and can be written in the more detailed form
$$\xi^i(x)\,\frac{\partial F}{\partial x^i} = 0. \tag{1.3.2}$$
It is known that such an equation has $N - 1$ functionally independent solutions $F = I^\tau(x)$ $(\tau = 1, \ldots, N-1)$, and that the general solution has the form
$$F(x) = \Phi(I^1(x), \ldots, I^{N-1}(x)), \tag{1.3.3}$$
where $\Phi(z^1, \ldots, z^{N-1})$ is an arbitrary differentiable function. Hence, every $G_1$ has a complete set of $N - 1$ functionally independent invariants $I^\tau(x)$ $(\tau = 1, \ldots, N-1)$, and any invariant of $G_1$ is given by the formula (1.3.3).
Example 1.11. Let us consider a group $G_1$ of translations in the $(x, y)$ plane:
$$x' = x + a, \qquad y' = y + 2a.$$
Here $N = 2$ and, according to Theorem 1.4, there exists $N - 1 = 1$ independent invariant. In the given case it is readily calculated without using the criterion of Theorem 1.4. Indeed, eliminating the parameter $a$ from the equations representing the group $G_1$, one obtains
$$2x' - y' = 2x - y$$
for any $a$. Hence, the desired invariant is $I = 2x - y$.

By means of Theorem 1.4, the invariant is obtained as follows. The operator of the given $G_1$ is
$$X = \frac{\partial}{\partial x} + 2\,\frac{\partial}{\partial y}$$
and equation (1.3.2) is written
$$\frac{\partial F}{\partial x} + 2\,\frac{\partial F}{\partial y} = 0.$$
The associated (characteristic) system of ordinary differential equations contains in this case one equation:
$$\frac{dx}{1} = \frac{dy}{2}\,\cdot$$
Its first integral obviously is $2x - y = C$.
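Both routes to the invariant of Example 1.11 are one-liners in sympy:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

I = 2*x - y   # candidate invariant of Example 1.11

# Route 1: I is unchanged under the finite transformations:
print(sp.expand(I.subs({x: x + a, y: y + 2*a})))  # 2*x - y

# Route 2: criterion (1.3.1) with X = d/dx + 2 d/dy:
print(sp.diff(I, x) + 2*sp.diff(I, y))  # 0
```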
Example 1.12. Consider a group $G_1$ given by the dilation operator
$$X = x\,\frac{\partial}{\partial x} + 3y\,\frac{\partial}{\partial y} - 2z\,\frac{\partial}{\partial z}\,\cdot$$
The characteristic system is
$$\frac{dx}{x} = \frac{dy}{3y} = \frac{dz}{-2z}\,,$$
whence one obtains two independent integrals
$$\frac{y}{x^3} = C_1, \qquad x^2 z = C_2.$$
Here $N = 3$; the two independent invariants are
$$I^1 = \frac{y}{x^3}\,, \qquad I^2 = x^2 z,$$
and the general form of the invariant of the considered group $G_1$ is given by
$$F = \Phi\Big(\frac{y}{x^3}\,,\; x^2 z\Big)$$
with an arbitrary function of two variables $\Phi(I^1, I^2)$.
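A quick symbolic check of Example 1.12, including the closure property (1.3.3); the particular $\Phi = \sin I^1 + (I^2)^2$ is only an arbitrary test case:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def X(F):
    # Dilation operator of Example 1.12
    return x*sp.diff(F, x) + 3*y*sp.diff(F, y) - 2*z*sp.diff(F, z)

I1 = y / x**3
I2 = x**2 * z

print(sp.simplify(X(I1)), sp.simplify(X(I2)))  # 0 0

# Formula (1.3.3): any smooth function of I1, I2 is again an invariant.
Phi = sp.sin(I1) + I2**2
print(sp.simplify(X(Phi)))  # 0
```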
Example 1.13. The invariant for the group of rotations with the operator
$$X = y\,\frac{\partial}{\partial x} - x\,\frac{\partial}{\partial y}$$
has the form $I = x^2 + y^2$. It can also be readily derived from the geometrical meaning of the transformations of the considered $G_1$.
1.3.2 Invariant manifolds

The notion of an invariant manifold of a group is as important as the notion of an invariant.
Let $M$ be a manifold in $E^N$ and $G_1$ be a group of transformations.

Definition 1.4. The manifold $M$ is called an invariant manifold of the group $G_1$ if $T_a x \in M$ for any point $x \in M$ and any $T_a \in G_1$. In other words, the transformations of the group $G_1$ map $M$ into itself.

It is manifest from Definition 1.3 that if a function $I(x)$ is an invariant of the group $G_1$, then the manifold given by the equation $I(x) = 0$ is an invariant manifold of $G_1$. In general, if the functions $I^\tau(x)$ $(\tau = 1, \ldots, N-1)$ provide a complete set of independent invariants of $G_1$, then a system of equations of the form
$$F_\alpha(I^1(x), \ldots, I^{N-1}(x)) = 0 \qquad (\alpha = 1, \ldots, A)$$
determines an invariant manifold of the group $G_1$.
For instance, the group $G_1$ with the operator
$$X = x\,\frac{\partial}{\partial x} + 3y\,\frac{\partial}{\partial y} - 2z\,\frac{\partial}{\partial z}$$
has the invariants
$$I^1 = \frac{y}{x^3}\,, \qquad I^2 = x^2 z,$$
and one obtains equations of its two-dimensional invariant manifolds in $E^3(x, y, z)$ in the form
$$F\Big(\frac{y}{x^3}\,,\; x^2 z\Big) = 0,$$
or equations of one-dimensional invariant manifolds in the form
$$F_1\Big(\frac{y}{x^3}\,,\; x^2 z\Big) = 0, \qquad F_2\Big(\frac{y}{x^3}\,,\; x^2 z\Big) = 0.$$
It is obvious that one can write these equations in the explicit form, namely
$$y = x^3\,\varphi(x^2 z)$$
or
$$y = C_1 x^3, \qquad z = \frac{C_2}{x^2} \qquad (C_1, C_2 = \text{const.}),$$
respectively.
1.3.3 Invariance of regularly defined manifolds

Now let us investigate the invariance criterion for a given manifold. Consider a manifold given by a system of equations
$$M:\quad \psi^\sigma(x) = 0 \qquad (\sigma = 1, \ldots, s). \tag{1.3.4}$$
In order to formulate conditions on the functions $\psi^\sigma(x)$, the notion of the general rank of a functional matrix is necessary.

Let $M = M(x)$ be a matrix whose entries are given functions of a point $x \in E^N$. The rank of the matrix $M(x)$ at the point $x$ is denoted by
$$R = R(M) = R(M(x)).$$
The rank $R$ itself is also a function of the point $x$: $R = R(x)$. The matrix $M$ is said to have general rank at the point $x_0 \in E^N$ if
$$R(x) = R(x_0)$$
for all $x$ in a certain neighborhood of the point $x_0 \in E^N$.
The manifold $M$ given by Eqs. (1.3.4) is said to be regularly defined if the functions $\psi^\sigma(x)$ are continuously differentiable and if the Jacobian matrix
$$J = \left(\frac{\partial \psi^\sigma}{\partial x^i}\right)$$
has general rank equal to $s$ (the number of Eqs. (1.3.4)) at every point of $M$.
Example 1.14. Let $\psi^1 = x^2$, $\psi^2 = y^5$. Then
$$J = \begin{pmatrix} 2x & 0 \\ 0 & 5y^4 \end{pmatrix}.$$
The matrix $J$ has general rank at any point $(x_0, y_0)$ that does not belong to the coordinate axes, and has no general rank at any point of the coordinate axes of the plane $E^2(x, y)$. The manifold
$$x^2 = 1, \qquad y^5 = 2$$
is regularly defined, whereas the manifold
$$x^2 = 1, \qquad y^5 = 0$$
is not regularly defined. In the latter case the same manifold can be given by the equations $x^2 = 1$, $y = 0$, and then it becomes regularly defined.
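The rank computations of Example 1.14 can be illustrated with sympy; the sample points below are arbitrary choices:

```python
import sympy as sp

x, y = sp.symbols('x y')
J = sp.Matrix([[2*x, 0], [0, 5*y**4]])   # Jacobian of psi = (x^2, y^5)

print(J.subs({x: 1, y: sp.Rational(1, 2)}).rank())  # 2: off the axes
print(J.subs({x: 1, y: 0}).rank())                  # 1: on the x-axis
```

The rank drop from 2 to 1 on the axes is exactly why the manifold $x^2 = 1,\ y^5 = 0$ fails to be regularly defined.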
Theorem 1.5. Let the manifold $M$ be regularly defined by Eqs. (1.3.4). The necessary and sufficient condition for the invariance of the manifold $M$ with respect to the group $G_1$ with the operator $X$ is the validity of the equations
$$X\psi^\sigma(x)\big|_M = 0 \qquad (\sigma = 1, \ldots, s). \tag{1.3.5}$$

Proof. Necessity. If $M$ is invariant with respect to $G_1$, then $\psi^\sigma(T_a x) = 0$ for all $x \in M$ and all $T_a \in G_1$. Therefore, one also has
$$\frac{\partial}{\partial a}\,\psi^\sigma(T_a x)\Big|_{a=0} = 0$$
for $x \in M$, whence (1.3.5) follows due to (1.2.2).
Sufficiency. Let Eqs. (1.3.5) hold. Without loss of generality we can assume that equations (1.3.4) have the form
$$x^\sigma = 0 \qquad (\sigma = 1, \ldots, s). \tag{1.3.6}$$
Indeed, this can be achieved by choosing new coordinates $(y)$ in $E^N$ defined by
$$y^\sigma = \psi^\sigma(x) \quad (\sigma = 1, \ldots, s), \qquad y^{s+\alpha} = x^{s+\alpha} \quad (\alpha = 1, \ldots, N-s).$$
Then the Jacobian matrix
$$J = \left(\frac{\partial y^i}{\partial x^j}\right)$$
has general rank $R(J) = N$ at the points of $M$, as one can easily verify invoking the fact that $M$ is regularly defined by Eqs. (1.3.4). Now the equations defining $M$ have the necessary form
$$y^\sigma = 0 \qquad (\sigma = 1, \ldots, s)$$
in the coordinates $(y)$. As for the conditions (1.3.5), they are independent of the choice of the system of coordinates in $E^N$ by virtue of the above-mentioned invariance of the expression $XF(x)$.
Thus, it is sufficient to prove our statement for $M$ given by Eqs. (1.3.6). If the operator $X$ is written in the form
$$X = \xi^i(x)\,\frac{\partial}{\partial x^i}\,,$$
then equations (1.3.5) take the form
$$\xi^\sigma(0, \ldots, 0, x^{s+1}, \ldots, x^N) = 0 \qquad (\sigma = 1, \ldots, s) \tag{1.3.7}$$
for our $M$. Considering Eqs. (1.1.11) at points of $M$, one can rewrite them in the form of two subsystems of equations
$$\frac{\partial f^\sigma}{\partial a} = \xi^\sigma(f^1, \ldots, f^s, f^{s+1}, \ldots, f^N) \qquad (\sigma = 1, \ldots, s),$$
$$\frac{\partial f^{s+\alpha}}{\partial a} = \xi^{s+\alpha}(f^1, \ldots, f^N) \qquad (\alpha = 1, \ldots, N-s).$$
The initial values for the first subsystem vanish at a point $x \in M$:
$$f^\sigma\big|_{a=0} = 0.$$
Therefore, the values
$$x'^\sigma = f^\sigma(x, a) = 0 \qquad (\sigma = 1, \ldots, s)$$
with $x \in M$ satisfy the initial conditions and, by virtue of Eqs. (1.3.7), all equations of the first subsystem. Hence,
$$x'^\sigma = f^\sigma(x, a) = 0 \qquad (\sigma = 1, \ldots, s)$$
for all $x \in M$ and every value of the parameter $a$, due to the uniqueness of the solution of the system (1.1.11). According to Definition 1.4, this means that $M$ is invariant with respect to the group $G_1$ with the operator $X$.
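The criterion (1.3.5) of Theorem 1.5 in action, for the rotation operator of Example 1.13; the circle and the line below are illustrative choices:

```python
import sympy as sp

x, y = sp.symbols('x y')

def X(F):
    # Rotation operator X = y d/dx - x d/dy of Example 1.13
    return y*sp.diff(F, x) - x*sp.diff(F, y)

psi = x**2 + y**2 - 1      # the circle, an invariant manifold
print(sp.expand(X(psi)))   # 0: criterion (1.3.5) holds identically

print(X(x - 1))            # y: nonzero on the line x = 1, so it is not invariant
```

For the circle, $X\psi$ vanishes identically, which is stronger than the restriction to $M$ required by (1.3.5); for the line $x = 1$, $X\psi = y$ does not vanish on the manifold.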

1.4 Theory of prolongation

1.4.1 Prolongation of the space

We divide the coordinates of points in $E^N$ into two kinds, namely $n$ independent variables $x^i$ $(i = 1, \ldots, n)$ and $m$ dependent variables $u^k$ $(k = 1, \ldots, m)$, so that $n + m = N$. The functions determining the transformations $T_a$ of a group $G_1$ are now written in the form
$$x'^i = f^i(x, u, a) \quad (i = 1, \ldots, n), \qquad u'^k = g^k(x, u, a) \quad (k = 1, \ldots, m). \tag{1.4.1}$$
Let us consider an $n$-dimensional manifold $\Phi \subset E^N$ given by equations
$$\Phi:\quad u^k = \varphi^k(x) \qquad (k = 1, \ldots, m). \tag{1.4.2}$$
The transformation $T_a$ maps this manifold into a manifold $\Phi'$ given by equations
$$u'^k = \varphi'^k(x') \qquad (k = 1, \ldots, m).$$
The manifold $\Phi$ can be represented in terms of the functions $\varphi'$ as follows:
$$g^k(x, u, a) = \varphi'^k(f(x, u, a)) \qquad (k = 1, \ldots, m). \tag{1.4.3}$$
Let us introduce the new quantities $p^k_i$ equal to the values of the derivatives of the functions (1.4.2),
$$p^k_i = \frac{\partial u^k}{\partial x^i} \qquad (i = 1, \ldots, n;\ k = 1, \ldots, m),$$
and call them derivatives on the manifold $\Phi$.
The derivatives $p'^k_i$ on the manifold $\Phi'$ are obtained by differentiating Eqs. (1.4.3) with respect to every $x^i$. This provides the equations
$$\frac{\partial g^k}{\partial x^i} + \frac{\partial g^k}{\partial u^l}\,p^l_i = p'^k_j\left(\frac{\partial f^j}{\partial x^i} + \frac{\partial f^j}{\partial u^l}\,p^l_i\right), \tag{1.4.4}$$
which are to be solved with respect to the quantities $p'^k_j$. The latter operation provides a single-valued result when the values of $|a|$ are sufficiently small, due to the fact that the parentheses in the right-hand sides of Eqs. (1.4.4) are equal to $\delta^j_i$ (the Kronecker symbol) when $a = 0$. Hence, the determinant of the system does not vanish and, due to the continuity of the functions contained in Eqs. (1.4.4), its sign does not change in a certain vicinity of $a = 0$. Therefore, the quantities $p'^k_i$ can be represented as functions of $x, u, p, a$:
$$p'^k_i = h^k_i(x, u, p, a) \qquad (k = 1, \ldots, m;\ i = 1, \ldots, n). \tag{1.4.5}$$
Note that the representation (1.4.5) has no connection with the specific form of the manifold $\Phi$ at all.

Let us introduce the space $\widetilde{E}^{\widetilde{N}}$ of dimension $\widetilde{N} = N + mn$, where the quantities $x, u, p$ serve as point coordinates. This space will be called the prolongation of the space $E^N$.
1.4.2 Prolonged group

The union of Eqs. (1.4.1) and (1.4.5) determines a family of transformations $\widetilde{T}_a$ in $\widetilde{E}^{\widetilde{N}}$ depending on the parameter $a$. The transformations $\widetilde{T}_a$ are said to be the prolonged transformations obtained by prolonging the transformations $T_a$ with respect to the functions $u(x)$.

Let us demonstrate that the family of prolonged transformations $\{\widetilde{T}_a\}$ is a local group of transformations $\widetilde{G}_1$ in the space $\widetilde{E}^{\widetilde{N}}$.

To this end, let us take two arbitrary transformations $\widetilde{T}_a$, $\widetilde{T}_b$ and consider their product $\widetilde{T}_b \widetilde{T}_a$. The manifold $\Phi$ under the action of $T_a$ transforms into $\Phi'$, and $\widetilde{T}_a$ provides the values of the derivatives $p'$ on $\Phi'$. Under the action of $T_b$, the manifold $\Phi'$ is transformed into $\Phi''$, and $\widetilde{T}_b$ provides the derivatives $p''$ on $\Phi''$. On the other hand, one can obtain $\Phi''$ by applying the transformation $T_b T_a = T_{a+b}$ to $\Phi$, and then the derivatives $p''$ are obtained by means of $\widetilde{T}_{a+b}$. Hence, the transformations $\widetilde{T}_b \widetilde{T}_a$ and $\widetilde{T}_{a+b}$ provide the same values of the derivatives $p''$ on the manifold $\Phi$. Moreover, these transformations obviously yield the same values of the coordinates $x''$, $u''$.

The expressions of $x''$, $u''$, $p''$ via $x, u, p, a, b$ are independent of the particular choice of $\Phi$, and due to its arbitrariness the quantities $x, u, p$ can take any values. Thus,
$$\widetilde{T}_b \widetilde{T}_a = \widetilde{T}_{a+b}$$
everywhere on $\widetilde{E}^{\widetilde{N}}$.

Furthermore, it is clear that $\widetilde{T}_0$ is the identity transformation and that
$$\widetilde{T}_{-a} = \widetilde{T}_a^{-1}.$$
Thus, $\widetilde{G}_1$ is a one-parameter local group of transformations in $\widetilde{E}^{\widetilde{N}}$.

Definition 1.5. The group $\widetilde{G}_1$ of transformations given by Eqs. (1.4.1) and (1.4.5) is called the prolonged group obtained by prolonging the group $G_1$ with respect to the functions $u(x)$.
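For the rotation group, the transformation law for $p$ can be computed by hand as $p' = (p\cos a - \sin a)/(\cos a + p\sin a)$ (a standard computation, not written out in the text), and the group property $\widetilde{T}_b\widetilde{T}_a = \widetilde{T}_{a+b}$ can then be confirmed on the $p$-coordinate:

```python
import sympy as sp

p, a, b = sp.symbols('p a b')

# Transformation of the derivative p under the rotation
# x' = x cos a + y sin a, y' = y cos a - x sin a (hand computation,
# assumed here rather than taken from the text):
def h(p_, t):
    return (p_ * sp.cos(t) - sp.sin(t)) / (sp.cos(t) + p_ * sp.sin(t))

# Group property on the p-coordinate: h(h(p, a), b) = h(p, a + b).
check = sp.cancel(sp.expand_trig(h(h(p, a), b) - h(p, a + b)))
print(check)  # 0
```

Differentiating $h$ with respect to $a$ at $a = 0$ gives $-(1+p^2)$, which matches the prolonged rotation operator computed in Example 1.17 below.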

1.4.3 First prolongation of the group operator

Let us find the operator of the group $\widetilde{G}_1$. Since $G_1$ is given by Eqs. (1.4.1), its operator has the form (see Definition 1.2)
$$X = \xi^i\,\frac{\partial}{\partial x^i} + \eta^k\,\frac{\partial}{\partial u^k}\,, \tag{1.4.6}$$
where
$$\xi^i = \xi^i(x, u) = \frac{\partial f^i}{\partial a}\Big|_{a=0}\,, \qquad \eta^k = \eta^k(x, u) = \frac{\partial g^k}{\partial a}\Big|_{a=0}\,.$$
Invoking that the group $\widetilde{G}_1$ transforms the variables $x, u$ precisely as the group $G_1$ does, we conclude that Definition 1.2 of the operator $\widetilde{X}$ for the group $\widetilde{G}_1$ yields
$$\widetilde{X} = X + \zeta^k_i\,\frac{\partial}{\partial p^k_i}\,, \tag{1.4.7}$$
where
$$\zeta^k_i = \zeta^k_i(x, u, p) = \frac{\partial h^k_i}{\partial a}\Big|_{a=0}$$
with the functions $h^k_i(x, u, p, a)$ from (1.4.5).
Now the problem of constructing the infinitesimal operator is reduced to finding the additional coordinates $\zeta^k_i$. To this end we differentiate Eq. (1.4.4) with respect to $a$ and then set $a = 0$. Note that one can change the order of differentiation in this expression due to the initial assumptions. Upon these operations, one obtains the equation
$$\frac{\partial \eta^k}{\partial x^i} + \frac{\partial \eta^k}{\partial u^l}\,p^l_i = \zeta^k_i + p^k_j\left(\frac{\partial \xi^j}{\partial x^i} + \frac{\partial \xi^j}{\partial u^l}\,p^l_i\right),$$
whence finally
$$\zeta^k_i = \frac{\partial \eta^k}{\partial x^i} + \frac{\partial \eta^k}{\partial u^l}\,p^l_i - p^k_j\left(\frac{\partial \xi^j}{\partial x^i} + \frac{\partial \xi^j}{\partial u^l}\,p^l_i\right) \qquad (k = 1, \ldots, m;\ i = 1, \ldots, n). \tag{1.4.8}$$
The operator $\widetilde{X}$ determined by Eqs. (1.4.7) and (1.4.8) is called the prolonged operator obtained by prolonging the operator $X$ of Eq. (1.4.6) with respect to the functions $u(x)$.
In what follows, we use the operator of total differentiation
$$D_i = \frac{\partial}{\partial x^i} + p^k_i\,\frac{\partial}{\partial u^k}\,\cdot \tag{1.4.9}$$
Then Eqs. (1.4.8) take the compact form
$$\zeta^k_i = D_i(\eta^k) - p^k_j\,D_i(\xi^j) \qquad (k = 1, \ldots, m;\ i = 1, \ldots, n). \tag{1.4.10}$$
Note that the $\zeta^k_i$ are linear homogeneous forms with respect to the first derivatives of the coordinates of the operator $X$ of Eq. (1.4.6).
As a general example, consider the group $G_1$ generated by the infinitesimal operator
$$X = \xi(x, y)\,\frac{\partial}{\partial x} + \eta(x, y)\,\frac{\partial}{\partial y}$$
in the space $E^2(x, y)$, and prolong this operator with respect to the function $y(x)$. If
$$p = y' = \frac{dy}{dx}\,,$$
then
$$\widetilde{X} = X + \zeta\,\frac{\partial}{\partial p}\,\cdot$$
The operator of total differentiation has the form
$$D = \frac{\partial}{\partial x} + p\,\frac{\partial}{\partial y}\,\cdot$$
The formula (1.4.10) yields
$$\zeta = D(\eta) - p\,D(\xi) = \frac{\partial \eta}{\partial x} + p\left(\frac{\partial \eta}{\partial y} - \frac{\partial \xi}{\partial x}\right) - p^2\,\frac{\partial \xi}{\partial y}\,\cdot \tag{1.4.11}$$
Let us calculate the prolonged operators for some specific groups acting in $E^2(x, y)$.

Example 1.15. The group of translations along the $x$-axis. Its infinitesimal operator has the form
$$X = \frac{\partial}{\partial x}\,\cdot$$
In this case $\xi = 1$, $\eta = 0$. Substituting them into Eq. (1.4.11), one obtains $\zeta = 0$. Therefore, $\widetilde{X} = X$. In such cases we say that the operator “does not prolong”, which means that the operator does not change after the prolongation.
Example 1.16. In the case of the dilation operator
$$X = x\,\frac{\partial}{\partial x} + 2y\,\frac{\partial}{\partial y}$$
one has $\xi = x$, $\eta = 2y$. Substituting into Eq. (1.4.11), one obtains the dilation operator again, but in the extended space. Namely, $\zeta = 2p - p = p$, and hence
$$\widetilde{X} = X + p\,\frac{\partial}{\partial p}\,\cdot$$

Problem 1.1. Prove that if $X$ is a dilation operator in $E^N$ (see Example 1.12 of §1.2), then the prolonged operator $\widetilde{X}$ is also a dilation operator in the extended space.
Example 1.17. For the rotation operator
$$X = y\,\frac{\partial}{\partial x} - x\,\frac{\partial}{\partial y}\,,$$
one has $\xi = y$, $\eta = -x$, and equation (1.4.11) provides $\zeta = -(1 + p^2)$, hence
$$\widetilde{X} = X - (1 + p^2)\,\frac{\partial}{\partial p}\,\cdot$$
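The prolongation formula (1.4.11) is mechanical enough to script; a small sympy sketch (helper names are illustrative) that reproduces Examples 1.16 and 1.17:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')

def D(F):
    # Total differentiation D = d/dx + p d/dy for F = F(x, y)
    return sp.diff(F, x) + p * sp.diff(F, y)

def zeta(xi, eta):
    # First-prolongation coordinate (1.4.11): zeta = D(eta) - p*D(xi)
    return sp.expand(D(eta) - p * D(xi))

print(zeta(y, -x))   # -p**2 - 1, as in Example 1.17
print(zeta(x, 2*y))  # p, as in Example 1.16
```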

Consider a group $G_1$ in the space $E^3(x, y, z)$ with the operator
$$X = \xi(x, y, z)\,\frac{\partial}{\partial x} + \eta(x, y, z)\,\frac{\partial}{\partial y} + \zeta(x, y, z)\,\frac{\partial}{\partial z}\,\cdot$$
Let us calculate the prolonged operator with respect to the function $z(x, y)$. Here one has two operators of total differentiation:
$$D_x = \frac{\partial}{\partial x} + p\,\frac{\partial}{\partial z}\,, \qquad D_y = \frac{\partial}{\partial y} + q\,\frac{\partial}{\partial z}\,,$$
where
$$p = \frac{\partial z}{\partial x}\,, \qquad q = \frac{\partial z}{\partial y}\,\cdot$$
The prolonged operator has the form
$$\widetilde{X} = X + \sigma\,\frac{\partial}{\partial p} + \tau\,\frac{\partial}{\partial q}\,,$$
where, according to (1.4.10),
$$\sigma = D_x(\zeta) - p\,D_x(\xi) - q\,D_x(\eta), \qquad \tau = D_y(\zeta) - p\,D_y(\xi) - q\,D_y(\eta). \tag{1.4.12}$$

1.4.4 Second prolongation of the group operator

The above prolongation of the group $G_1$ and its operator $X$ is also called the “first prolongation” (i.e. the prolongation to first-order derivatives of the variables $u$ with respect to the variables $x$). Likewise, one can define the second- and higher-order prolongations (to derivatives of $u$ with respect to $x$ of the second and higher orders). The additional coordinates of the prolonged operator are then still given by the formulae (1.4.10), but one takes into account that now the operator $\widetilde{X}$ is prolonged with respect to the functions $p(x)$, which changes the operators of total differentiation $D_i$ as well.

In particular, in order to carry out the second prolongation, one uses the operators of total differentiation
$$\widetilde{D}_i = \frac{\partial}{\partial x^i} + p^k_i\,\frac{\partial}{\partial u^k} + r^k_{ij}\,\frac{\partial}{\partial p^k_j}\,, \tag{1.4.13}$$
where
$$r^k_{ij} = \frac{\partial p^k_i}{\partial x^j}\,\cdot$$
Let us write the second prolongation of the operator $X$ of Eq. (1.4.6) in the form
$$\widetilde{\widetilde{X}} = \widetilde{X} + \sigma^k_{ij}\,\frac{\partial}{\partial r^k_{ij}}\,, \tag{1.4.14}$$
where $\widetilde{X}$ is the first prolongation of the operator $X$ given by Eqs. (1.4.7) and (1.4.10). The additional coordinates $\sigma^k_{ij}$ are given, in accordance with Eqs. (1.4.10), by the following formulae:
$$\sigma^k_{ij} = \widetilde{D}_i(\zeta^k_j) - r^k_{tj}\,\widetilde{D}_i(\xi^t) \qquad (k = 1, \ldots, m;\ i, j = 1, \ldots, n). \tag{1.4.15}$$
The last term in Eqs. (1.4.15) obviously involves summation over $t = 1, \ldots, n$.
For example, the additional coordinate of the first prolongation for the operator
$$X = \xi\,\frac{\partial}{\partial x} + \eta\,\frac{\partial}{\partial y}$$
is given by the formula (1.4.11). Denoting
$$r = y'' = \frac{d^2 y}{dx^2}\,,$$
we write the second prolongation in the form (1.4.14):
$$\widetilde{\widetilde{X}} = \widetilde{X} + \sigma\,\frac{\partial}{\partial r}\,\cdot \tag{1.4.16}$$
In our case the operator of total differentiation (1.4.13) is written
$$\widetilde{D} = \frac{\partial}{\partial x} + p\,\frac{\partial}{\partial y} + r\,\frac{\partial}{\partial p}\,,$$
whereas the formula (1.4.15) gives the following coordinate $\sigma$ of the operator (1.4.16):
$$\sigma = \widetilde{D}(\zeta) - r\,\widetilde{D}(\xi).$$
Substituting here $\zeta$ from Eq. (1.4.11), one obtains
$$\sigma = \frac{\partial^2 \eta}{\partial x^2} + p\left(2\,\frac{\partial^2 \eta}{\partial x \partial y} - \frac{\partial^2 \xi}{\partial x^2}\right) + p^2\left(\frac{\partial^2 \eta}{\partial y^2} - 2\,\frac{\partial^2 \xi}{\partial x \partial y}\right) - p^3\,\frac{\partial^2 \xi}{\partial y^2} + r\left(\frac{\partial \eta}{\partial y} - 2\,\frac{\partial \xi}{\partial x}\right) - 3pr\,\frac{\partial \xi}{\partial y}\,\cdot \tag{1.4.17}$$
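The expanded form (1.4.17) can be verified against the compact formula $\sigma = \widetilde{D}(\zeta) - r\,\widetilde{D}(\xi)$ for arbitrary $\xi(x, y)$, $\eta(x, y)$:

```python
import sympy as sp

x, y, p, r = sp.symbols('x y p r')
xi = sp.Function('xi')(x, y)
eta = sp.Function('eta')(x, y)

# First-prolongation coordinate (1.4.11):
zeta = sp.diff(eta, x) + p*(sp.diff(eta, y) - sp.diff(xi, x)) - p**2*sp.diff(xi, y)

def Dt(F):
    # Total differentiation (1.4.13) in the variables (x, y, p)
    return sp.diff(F, x) + p*sp.diff(F, y) + r*sp.diff(F, p)

sigma = sp.expand(Dt(zeta) - r*Dt(xi))

# Expanded form (1.4.17):
sigma_expected = (sp.diff(eta, x, 2)
                  + p*(2*sp.diff(eta, x, y) - sp.diff(xi, x, 2))
                  + p**2*(sp.diff(eta, y, 2) - 2*sp.diff(xi, x, y))
                  - p**3*sp.diff(xi, y, 2)
                  + r*(sp.diff(eta, y) - 2*sp.diff(xi, x))
                  - 3*p*r*sp.diff(xi, y))

print(sp.expand(sigma - sigma_expected))  # 0
```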

Lemma 1.3. Given the operator
$$X = \xi^i\,\frac{\partial}{\partial x^i} + \eta^k\,\frac{\partial}{\partial u^k}\,,$$
the following equation is satisfied for any function $F = F(x, u)$:
$$\widetilde{X}(D_i F) - D_i(XF) = -D_i(\xi^j)\,D_j F. \tag{1.4.18}$$
Proof. The detailed expression of the left-hand side of Eq. (1.4.18) is
$$\left(\xi^j\,\frac{\partial}{\partial x^j} + \eta^k\,\frac{\partial}{\partial u^k} + \zeta^k_j\,\frac{\partial}{\partial p^k_j}\right)\left(\frac{\partial F}{\partial x^i} + p^l_i\,\frac{\partial F}{\partial u^l}\right) - \left(\frac{\partial}{\partial x^i} + p^l_i\,\frac{\partial}{\partial u^l}\right)\left(\xi^j\,\frac{\partial F}{\partial x^j} + \eta^k\,\frac{\partial F}{\partial u^k}\right).$$
As a result of the operations, one obtains mixed second-order derivatives of the function $F$, but one can easily verify that they cancel out. Therefore, one has to calculate only the derivatives of the coefficients of the first derivatives of $F$. This yields
$$\zeta^k_i\,\frac{\partial F}{\partial u^k} - D_i(\xi^j)\,\frac{\partial F}{\partial x^j} - D_i(\eta^k)\,\frac{\partial F}{\partial u^k} = -p^k_j\,D_i(\xi^j)\,\frac{\partial F}{\partial u^k} - D_i(\xi^j)\,\frac{\partial F}{\partial x^j} = -D_i(\xi^j)\left(\frac{\partial F}{\partial x^j} + p^k_j\,\frac{\partial F}{\partial u^k}\right) = -D_i(\xi^j)\,D_j F,$$
which was to be proved.
1.4.5 Properties of prolongations of operators

In what follows, we investigate various properties of the transition from the operator $X$ to the prolonged operator $\widetilde{X}$, in other words, properties of the operation of prolongation of operators.

The simplest of these properties is the linearity (and homogeneity) of the operation of prolongation with respect to the coordinates of the initial operator. This property follows directly from Eqs. (1.4.10). It can be expressed by a formula provided that the following stipulation is made. Let $X_1$ and $X_2$ be two operators of the form (1.4.6); their coordinates will be specified by the subscripts 1 and 2 as well. A linear combination of the operators $X_1$ and $X_2$ with constant coefficients $e^1$ and $e^2$ is the operator
$$e^1 X_1 + e^2 X_2$$
with the coordinates
$$e^1 \xi^i_1 + e^2 \xi^i_2\,, \qquad e^1 \eta^k_1 + e^2 \eta^k_2\,.$$
Then the above linearity of the operation of prolongation is expressed by the formula
$$\widetilde{e^1 X_1 + e^2 X_2} = e^1 \widetilde{X}_1 + e^2 \widetilde{X}_2\,. \tag{1.4.19}$$
The property of the operation of prolongation formulated in the following theorem is less obvious.

Theorem 1.6. The operation of prolongation is invariant with respect to the choice of a system of coordinates.

Proof. Let
$$X = \xi^i\,\frac{\partial}{\partial x^i} + \eta^k\,\frac{\partial}{\partial u^k}$$
be an operator given in the system of coordinates $(x, u)$. Let us consider a change of coordinates
$$y^i = y^i(x, u), \qquad v^k = v^k(x, u).$$
We denote the derivative of $v^k$ with respect to $y^i$ by $q^k_i$, i.e.
$$q^k_i = \frac{\partial v^k}{\partial y^i}\,\cdot$$
If we make the change of coordinates $(x, u, p) \to (y, v, q)$ in the prolongation $\widetilde{X}$ of the operator $X$, we obtain the operator $(\widetilde{X})'$. However, it can also be done in the reverse order: the coordinates are transformed first, yielding the operator $X'$, which is then prolonged with respect to the functions $v(y)$.

The theorem claims that both ways lead to one and the same result, and this property is written in the form
$$(\widetilde{X})' = \widetilde{X'}. \tag{1.4.20}$$
Let us prove Eq. (1.4.20). According to Eq. (1.2.6), one has
$$X' = X(y^i)\,\frac{\partial}{\partial y^i} + X(v^k)\,\frac{\partial}{\partial v^k}\,,$$
so that
$$\widetilde{X'} = X(y^i)\,\frac{\partial}{\partial y^i} + X(v^k)\,\frac{\partial}{\partial v^k} + \zeta'^k_i\,\frac{\partial}{\partial q^k_i}\,\cdot$$
On the other hand,
$$\widetilde{X} = X + \zeta^k_i\,\frac{\partial}{\partial p^k_i}\,,$$
and using Eq. (1.2.6) again, one has
$$(\widetilde{X})' = \widetilde{X}(y^i)\,\frac{\partial}{\partial y^i} + \widetilde{X}(v^k)\,\frac{\partial}{\partial v^k} + \widetilde{X}(q^k_i)\,\frac{\partial}{\partial q^k_i}\,\cdot$$
Note that $y^i$ and $v^k$ are independent of $p^k_i$; therefore $\widetilde{X}$ acts on them in the same way as the operator $X$. Hence,
$$(\widetilde{X})' = X(y^i)\,\frac{\partial}{\partial y^i} + X(v^k)\,\frac{\partial}{\partial v^k} + \widetilde{X}(q^k_i)\,\frac{\partial}{\partial q^k_i}\,\cdot$$
It remains only to verify that
$$\zeta'^k_i = \widetilde{X}(q^k_i).$$
Restricting the change of coordinates to the manifold
$$\Phi:\quad u^k = \varphi^k(x) \qquad (k = 1, \ldots, m),$$
we have
$$v^k(x, u) = v^k(y(x, u)).$$
Differentiating this equation with respect to $x^i$, we obtain
$$\frac{\partial v^k}{\partial x^i} + p^l_i\,\frac{\partial v^k}{\partial u^l} = \frac{\partial v^k}{\partial y^j}\left(\frac{\partial y^j}{\partial x^i} + p^l_i\,\frac{\partial y^j}{\partial u^l}\right),$$
or
$$q^k_j\,D_i(y^j) = D_i(v^k).$$
The equation of the manifold $\Phi$ has been written in the variables $x$. However, it can also be written in the variables $y$, which results in the similar formulae
$$p^k_j\,D'_i(x^j) = D'_i(u^k),$$
where
$$D'_i = \frac{\partial}{\partial y^i} + q^l_i\,\frac{\partial}{\partial v^l}\,\cdot$$
Let us rewrite the differential operator $D'_i$ in another form:
$$D'_i = \frac{\partial}{\partial y^i} + q^l_i\,\frac{\partial}{\partial v^l} = \left(\frac{\partial x^j}{\partial y^i} + q^l_i\,\frac{\partial x^j}{\partial v^l}\right)\frac{\partial}{\partial x^j} + \left(\frac{\partial u^k}{\partial y^i} + q^l_i\,\frac{\partial u^k}{\partial v^l}\right)\frac{\partial}{\partial u^k} = D'_i(x^j)\,\frac{\partial}{\partial x^j} + D'_i(u^k)\,\frac{\partial}{\partial u^k} = D'_i(x^j)\,D_j\,.$$
Thus, one obtains
$$D'_i = D'_i(x^j)\,D_j\,. \tag{a}$$
Acting by the operator (a) on $y^t$, one arrives at
$$D'_i(x^j)\,D_j(y^t) = \delta^t_i\,.$$
Likewise, one can prove the formula
$$D'_i(x^j)\,D_t(y^i) = \delta^j_t\,. \tag{b}$$
Now equation (1.4.10) yields
$$\zeta'^k_i = D'_i(X(v^k)) - q^k_t\,D'_i(X(y^t)).$$
Applying Eq. (a), one obtains
$$\zeta'^k_i = D'_i(x^j)\big(D_j X(v^k) - q^k_t\,D_j X(y^t)\big).$$
Further, applying the operator $\widetilde{X}$ to the equation
$$q^k_j\,D_i(y^j) = D_i(v^k),$$
one obtains
$$\widetilde{X}(q^k_j)\,D_i(y^j) + q^k_j\,\widetilde{X} D_i(y^j) = \widetilde{X} D_i(v^k). \tag{c}$$
Let us turn the equation
$$\zeta'^k_j = \widetilde{X}(q^k_j)$$
into an equivalent one by multiplying it by $D_i(y^j)$. Taking into account Eqs. (a), (b), (c), we conclude that it is sufficient to prove the equation
$$\widetilde{X} D_i(v^k) - q^k_j\,\widetilde{X} D_i(y^j) = D_i X(v^k) - q^k_j\,D_i X(y^j),$$
or, in another form,
$$\widetilde{X} D_i(v^k) - D_i X(v^k) = q^k_j\,\big[\widetilde{X} D_i(y^j) - D_i X(y^j)\big].$$
Using Lemma 1.3, one obtains the equivalent equation
$$D_i(\xi^j)\,D_j(v^k) = q^k_j\,D_i(\xi^t)\,D_t(y^j),$$
which holds identically by virtue of the formula
$$q^k_j\,D_i(y^j) = D_i(v^k).$$
This completes the proof.
Let us introduce another term connected with a prolonged group $\widetilde{G}_1$. Invariants of the group $\widetilde{G}_1$ that are not invariants of the initial group $G_1$ are called differential invariants of the group $G_1$. Likewise, invariant manifolds of the group $\widetilde{G}_1$ are referred to as differential invariant manifolds of the group $G_1$ if they are not invariant manifolds of the group $G_1$. This terminology is used not only for the first prolongation of a group, but for higher-order prolongations as well.
1.5 Groups admitted by differential equations

1.5.1 Determining equations

Consider a system of differential equations (S) with respect to the unknown functions $u^k$ $(k = 1, \ldots, m)$ of independent variables $x^i$ $(i = 1, \ldots, n)$. Let $\pi$ be the highest order of the derivatives involved in (S). The equations (S) are considered as equations of a manifold in the $\pi$ times prolonged space $\widetilde{E}(x, u)$. This manifold will be denoted by $S$ (without brackets).

Definition 1.6. The system (S) is said to admit a group $G_1$ if the corresponding manifold $S$ is a differential invariant manifold of the group $G_1$.

In other words, (S) admits $G_1$ if the equations (S) remain unaltered under the action of any properly prolonged transformation $T_a \in G_1$.
Since every group $G_1$ is characterized by its operator $X$, Definition 1.6 can obviously be reformulated in terms of the operator $X$.

It is convenient to formulate the main property of solutions of a system (S) admitting the group $G_1$ by considering every solution as a manifold $\Phi \subset E^N$:
$$\Phi:\quad u^k = \varphi^k(x) \qquad (k = 1, \ldots, m).$$
Then the property of the manifold $\Phi$ to be a solution of the system (S) can be formulated as follows: the $\pi$ times prolonged manifold $\Phi$ lies on the manifold $S$. This property will be expressed by the formula
$$\widetilde{\Phi} \subset S.$$

Theorem 1.7. If (S) admits the group $G_1$, then together with $\Phi$, the manifold $T_a\Phi$ is a solution for every $T_a \in G_1$.

Proof. Since $\widetilde{\Phi} \subset S$, one has $\widetilde{T}_a \widetilde{\Phi} \subset \widetilde{T}_a S$. The invariance of $S$ implies that
$$\widetilde{T}_a S = S.$$
It has been demonstrated in §1.4 that $\widetilde{T_a \Phi} = \widetilde{T}_a \widetilde{\Phi}$. Therefore,
$$\widetilde{T_a \Phi} = \widetilde{T}_a \widetilde{\Phi} \subset S,$$
which was to be proved.
The following problem arises due to Definition 1.6: find all groups $G_1$ admitted by a given system (S). Here we will not assume that the system (S) under consideration has solutions. Furthermore, it is not important whether the system (S) is determined, under-determined or over-determined. Nevertheless, we provide an algorithm reducing this problem to the integration of an auxiliary system of differential equations. It will be necessary, however, that the equations (S) be written in a form ensuring the regularity of the manifold $S$ (in the sense of §1.3).

In order to solve the formulated problem, note that finding a group $G_1$ admitted by the system (S) is equivalent to determining the operator $X$ of the group. It is convenient to use the operator $X$ because the invariance condition of a manifold (see §1.3) is easily written precisely by means of $X$.
Thus, let us assume that a system of differential equations is given. To be specific, we assume that it is of the first order:
$$F_\alpha(x, u, p) = 0 \qquad (\alpha = 1, \ldots, A). \tag{S}$$
We look for an operator
$$X = \xi^i(x, u)\,\frac{\partial}{\partial x^i} + \eta^k(x, u)\,\frac{\partial}{\partial u^k} \tag{1.5.1}$$
admitted by the system (S). According to Definition 1.6, one has to derive $X$ from the conditions of invariance of the manifold $S$, which is regularly defined in the extended space by the equations (S), with respect to the prolonged operator $\widetilde{X}$. Writing $\widetilde{X}$ by the formulae (1.4.7) and (1.4.8) and applying Theorem 1.5, one can see that (S) admits $G_1$ with the operator (1.5.1) if and only if the equations
$$\widetilde{X} F_\alpha(x, u, p)\Big|_S = 0 \qquad (\alpha = 1, \ldots, A) \tag{1.5.2}$$
are satisfied at every point $(x, u, p) \in S$.

Note that equations (1.5.2) are differential equations with respect to the unknown coordinates $\xi, \eta$ of the operator (1.5.1). Equations (1.5.2) are referred to as the determining equations for the operators $X$ admitted by the system (S).
Theorem 1.8. The set of operators admitted by a given system (S) forms a linear vector space.

In other words, if (S) admits the operators $X_1$ and $X_2$, then (S) also admits any linear combination
$$X = e^1 X_1 + e^2 X_2$$
with constant coefficients $e^1, e^2$.

Proof. The determining equations (1.5.2) are linear and homogeneous with respect to the coordinates $\xi, \eta$ of the operator $X$, due to the linearity of the operation of prolongation mentioned in §1.4. Since the set of solutions of the determining equations can be identified with the set of operators $X$, the theorem is proved.

In particular, it follows from Theorem 1.8 that the system of determining equations (1.5.2) is always compatible, in spite of the fact that the system (S) may have no solutions at all!

In what follows, the linear space of operators $X$ admitted by the system (S) is denoted by $L(S)$ or, if there is no risk of ambiguity, just by $L$.
The space $L$ can be of any dimension from zero to infinity. If the dimension equals $r$, we write $L_r$ instead of $L$; when $r = 0$ or $r = \infty$ we also write $L_0$ or $L_\infty$. In the general case one has $L = L_0$, i.e. the system (S) does not admit any nonzero operator $X$ and hence any group $G_1$. However, in many important cases $r > 0$, as one can see below.

If $L(S) = L_r$, the system (S) is said to admit the space of operators $L_r$.

1.5.2 First-order ordinary differential equations

Let us find the space L admitted by the ordinary differential equation y′ = f(x, y). Here the system (S) consists of one equation

   S:  p = f(x, y),                                         (1.5.3)

where

   p = y′ = dy/dx.

We seek the operator admitted by Eq. (1.5.3) in the form

   X = ξ ∂/∂x + η ∂/∂y,

where the coordinates ξ and η depend on x and y.
The prolonged operator has the form

   X̃ = ξ ∂/∂x + η ∂/∂y + [η_x + p(η_y - ξ_x) - p²ξ_y] ∂/∂p.

The determining equation is obtained by acting with the operator X̃ on Eq. (1.5.3) and then replacing the variable p by its value f(x, y) (transition to the manifold S). This provides the determining equation

   η_x + f(η_y - ξ_x) - f²ξ_y = ξf_x + ηf_y.                (1.5.4)

Since equation (1.5.4) contains two unknown functions, one of them can be chosen arbitrarily. Hence, L(S) = L∞.
In order to construct the general solution of Eq. (1.5.4), we set

   η = ξf + θ,                                              (1.5.5)

where θ is a new unknown function of x, y. Substituting this expression into the determining equation, one can see that it takes the form

   ∂θ/∂x + f ∂θ/∂y = f_y θ.                                 (1.5.6)
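The reduction of (1.5.4) to (1.5.6) under the substitution (1.5.5) can be checked symbolically. Below is a minimal sketch, assuming the sympy library is available (the function names are generic placeholders):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)
xi = sp.Function('xi')(x, y)
theta = sp.Function('theta')(x, y)

# Substitution (1.5.5): eta = xi*f + theta.
eta = xi*f + theta

# Left-hand side minus right-hand side of the determining equation (1.5.4).
det = (sp.diff(eta, x) + f*(sp.diff(eta, y) - sp.diff(xi, x))
       - f**2*sp.diff(xi, y) - xi*sp.diff(f, x) - eta*sp.diff(f, y))

# The difference between (1.5.4) and (1.5.6) must vanish identically.
residual = sp.expand(det - (sp.diff(theta, x) + f*sp.diff(theta, y)
                            - sp.diff(f, y)*theta))
print(residual)  # 0
```

All terms containing ξ cancel on expansion, leaving exactly equation (1.5.6) for θ.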
1.5 Groups admitted by differential equations 31

This equation is obviously satisfied when θ = 0. Hence, equation (1.5.3) admits the operator

   X0 = ∂/∂x + f(x, y) ∂/∂y,

as well as the operators of the form

   X = ξX0 = ξ (∂/∂x + f ∂/∂y)

with an arbitrary function ξ = ξ(x, y). One can also see from here that the admitted space L is infinite-dimensional.
Let us demonstrate that if θ = θ(x, y) is a non-vanishing solution of Eq. (1.5.6), then the function

   M = 1/θ

is an integrating factor for Eq. (1.5.3).
Indeed, writing the initial equation in the form

   dy - f dx = 0

and multiplying it by M, one obtains the condition for the integrating factor in the form

   ∂M/∂x + ∂(Mf)/∂y = 0.

Substituting here M = 1/θ, one can see that the result coincides with Eq. (1.5.6).
Thus, we conclude that if equation (1.5.3) admits the operator

   X = ξ ∂/∂x + η ∂/∂y

for which η - fξ ≢ 0, then the function

   M = 1/(η - fξ)

is an integrating factor.
Generally speaking, the problem of finding θ from Eq. (1.5.6) is not easier than the problem of integrating the initial equation (1.5.3). However, the indicated property provides an effective tool for integrating Eq. (1.5.3) if the admitted operator X happens to be known. One can easily verify that this property is the basis of the whole "elementary" theory of integration of ordinary differential equations by quadrature.
For instance, consider the equation

   y′ = y / (x(y + ln x)).

It is readily verified that it admits the operator

   X = -xy ∂/∂x

and that

   η - fξ = y²/(y + ln x).

Thus, the given equation is equivalent to the equation

   ((y + ln x)/y²) dy - (1/(xy)) dx = 0,

with a total differential on the left-hand side.
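The exactness of the resulting equation is easy to confirm with a short symbolic computation; a sketch assuming sympy:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Right-hand side of the ODE y' = f(x, y) from the example above.
f = y / (x * (y + sp.log(x)))

# Admitted operator X = -x*y d/dx, i.e. xi = -x*y, eta = 0.
xi, eta = -x*y, sp.Integer(0)

# Integrating factor M = 1/(eta - f*xi) = (y + ln x)/y**2.
M = sp.simplify(1 / (eta - f*xi))

# The form M*dy - M*f*dx = 0 is exact iff dM/dx + d(M*f)/dy = 0.
exactness = sp.simplify(sp.diff(M, x) + sp.diff(M*f, y))
print(exactness)  # 0
```

The vanishing of the last expression is precisely the integrating-factor condition derived above.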

1.5.3 Second-order ordinary differential equations

Let us find the space L admitted by the equation y″ = f(x, y, y′). Denoting p = y′ and r = y″, one has the system (S) of one equation

   S:  r = f(x, y, p).                                      (1.5.7)

We are looking for the admitted operator in the form

   X = ξ(x, y) ∂/∂x + η(x, y) ∂/∂y.                         (1.5.8)
In order to write the determining equation, one has to construct the second prolongation X̃̃ of the operator X. It has been shown in §1.4 that the prolongations of X are given by the formulae (1.4.11) and (1.4.17). According to Eqs. (1.5.2), the determining equation is written as

   X̃̃(r - f(x, y, p))|_{r=f} = 0.

It takes the following form in detail:

   η_xx + p(2η_xy - ξ_xx) + p²(η_yy - 2ξ_xy) - p³ξ_yy + (η_y - 2ξ_x - 3pξ_y)f
      = ξf_x + ηf_y + [η_x + p(η_y - ξ_x) - p²ξ_y]f_p.      (1.5.9)

An important peculiarity of Eq. (1.5.9) is that it holds identically with respect to the variables x, y, p, while the unknown functions ξ, η depend on the variables x, y only. Therefore, equation (1.5.9) splits into a system of equations that no longer contain p. The equation readily splits if the function f(x, y, p) is holomorphic with respect to the variable p and can be expanded into the Taylor series

   f(x, y, p) = Σ_{n≥0} f_n(x, y) pⁿ.

Substituting this expansion into Eq. (1.5.9), we have to equate the terms with the same powers of p. This is the operation of splitting which, generally speaking, leads to an infinite system of equations for ξ, η. The latter system is also called a system of determining equations.
Carrying out the indicated operation on Eq. (1.5.9), one obtains the following system of determining equations:

   p⁰:  η_xx = ξf_0x + ηf_0y + f_1η_x - f_0η_y + 2f_0ξ_x,
   p¹:  2η_xy - ξ_xx = ξf_1x + ηf_1y + 2f_2η_x + f_1ξ_x + 3f_0ξ_y,
   p²:  η_yy - 2ξ_xy = ξf_2x + ηf_2y + 3f_3η_x + f_2η_y + 2f_1ξ_y,
   p³:  -ξ_yy = ξf_3x + ηf_3y + 4f_4η_x + 2f_3η_y - f_3ξ_x + f_2ξ_y,
   .............

Of course, there is no guarantee here that the solution of the system provides some Lr with r > 0. On the other hand, the question of the maximum possible value of r is of interest. The answer is given by the following theorem of S. Lie.
Theorem 1.9. Equation (1.5.7) cannot admit a space Lr of operators of the form (1.5.8) with r > 8.
Proof. Let us rewrite the first four determining equations in the compact form

   η_xx = ϕ1,  2η_xy - ξ_xx = ϕ2,  η_yy - 2ξ_xy = ϕ3,  ξ_yy = ϕ4,   (1.5.10)

where ϕ_σ (σ = 1, …, 4) are some linear functions of ξ, η and the first derivatives of ξ, η with respect to x, y. Equations (1.5.10) provide definite expressions for the third derivatives:

   ξ_xxx = φ1,  ξ_xxy = φ2,  ξ_xyy = φ3,  ξ_yyy = φ4,
   η_xxx = ω1,  η_xxy = ω2,  η_xyy = ω3,  η_yyy = ω4,               (1.5.11)

where every function φ_σ, ω_σ depends only on ξ, η as well as on their first and second derivatives. The general solution of the system (1.5.11) depends on 6 + 6 = 12 arbitrary constants and, due to the linearity of Eqs. (1.5.11), is itself a linear form of these constants. Moreover, equations (1.5.10) impose four independent relations on these constants. Therefore, no more than 12 - 4 = 8 arbitrary constants remain, which means that the space of solutions has the dimension r ≤ 8. The actual number of arbitrary constants remaining in the solution can be less than 8, since the complete system of the determining equations contains other equations as well.
Let us demonstrate that the dimension 8 is reached by the equation

   y″ = 0.

In this case the complete system of the determining equations reduces to Eqs. (1.5.10) and (1.5.11), where ϕ_σ = φ_σ = ω_σ = 0.
The general solution of Eqs. (1.5.11) in this case has the form

   ξ = A1 + A2x + A3y + A4xy + A5x² + A6y²,
   η = B1 + B2x + B3y + B4xy + B5x² + B6y²

with the arbitrary constants Ai, Bi. Equations (1.5.10) impose on these constants the relations

   B5 = 0,  B4 - A5 = 0,  B6 - A4 = 0,  A6 = 0,

by virtue of which A5, A6, B5, B6 are expressed in terms of the other constants. Finally, one obtains the general solution of the determining equations in the form

   ξ = A1 + A2x + A3y + A4xy + B4x²,
   η = B1 + B2x + B3y + A4y² + B4xy,

where Aj, Bj are already independent. The family of operators X of the form (1.5.8) with the above expressions for ξ and η generates a space L8.
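As a check, this eight-parameter family can be substituted back into the determining equation (1.5.9) with f = 0; a sketch assuming sympy:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')
A1, A2, A3, A4 = sp.symbols('A1:5')
B1, B2, B3, B4 = sp.symbols('B1:5')

# General solution of the determining equations for y'' = 0.
xi = A1 + A2*x + A3*y + A4*x*y + B4*x**2
eta = B1 + B2*x + B3*y + A4*y**2 + B4*x*y

# Determining equation (1.5.9) with f = 0.
det = (sp.diff(eta, x, 2) + p*(2*sp.diff(eta, x, y) - sp.diff(xi, x, 2))
       + p**2*(sp.diff(eta, y, 2) - 2*sp.diff(xi, x, y))
       - p**3*sp.diff(xi, y, 2))
print(sp.expand(det))  # 0
```

The determining equation vanishes identically in x, y, p for all eight constants, confirming the dimension count.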

1.5.4 Heat equation

Turning to partial differential equations, let us find operators admitted by the heat equation

   u_y = u_xx.

The following notation for derivatives is introduced:

   u_x = p,  u_y = q,  u_xx = r,  u_xy = s,  u_yy = t.

The equation of the manifold S in the extended space has the form

   S:  q = r.                                               (1.5.12)

Let us look for the admitted operator in the form

   X = ξ(x, y, u) ∂/∂x + η(x, y, u) ∂/∂y + ζ(x, y, u) ∂/∂u.    (1.5.13)

Prolonging this operator with respect to u(x, y), one obtains

   X̃ = X + α ∂/∂p + β ∂/∂q,

where according to Eqs. (1.4.8)

   α = Dx(ζ) - pDx(ξ) - qDx(η),
   β = Dy(ζ) - pDy(ξ) - qDy(η),

and the operators of total differentiation have the form

   Dx = ∂/∂x + p ∂/∂u,
   Dy = ∂/∂y + q ∂/∂u.

The expanded form of the expressions α, β is

   α = ζ_x + pζ_u - p(ξ_x + pξ_u) - q(η_x + pη_u),
   β = ζ_y + qζ_u - p(ξ_y + qξ_u) - q(η_y + qη_u).

In order to write the determining equation, one has to find the second prolongation

   X̃̃ = X̃ + ρ ∂/∂r + σ ∂/∂s + τ ∂/∂t = X + α ∂/∂p + β ∂/∂q + ρ ∂/∂r + σ ∂/∂s + τ ∂/∂t,

which provides

   X̃̃(q - r) = β - ρ.

Therefore, the determining equation (1.5.2) is written

   (β - ρ)|_{r=q} = 0.                                      (1.5.14)

Equation (1.5.14) shows that one has to calculate only the expression for the coordinate ρ in the prolonged operator X̃̃. Using the operator of total differentiation

   D̃x = ∂/∂x + p ∂/∂u + r ∂/∂p + s ∂/∂q

and the formula (1.4.15), one obtains

   ρ = D̃x(α) - rD̃x(ξ) - sD̃x(η).

All preliminary formulae are now ready and one can write out the determining equation (1.5.14) in detail:
   ζ_y + qζ_u - p(ξ_y + qξ_u) - q(η_y + qη_u)
      = ζ_xx + pζ_xu - p(ξ_xx + pξ_xu) - q(η_xx + pη_xu)        (1.5.14′)
      + p[ζ_xu + pζ_uu - p(ξ_xu + pξ_uu) - q(η_xu + pη_uu)]
      + q(ζ_u - ξ_x - 2pξ_u - qη_u) - s(η_x + pη_u)
      - q(ξ_x + pξ_u) - s(η_x + pη_u).

Since we have already passed to the manifold S of Eq. (1.5.12) by setting r = q, the determining equation (1.5.14′) should hold identically in the variables

   x, y, u, p, q, s, t.

However, according to the assumption (1.5.13), the coordinates ξ, η, ζ of the unknown operator X depend only on x, y, u. Therefore, we split the determining equation with respect to the four variables p, q, s and t. Note that the variable t is not involved in Eq. (1.5.14′). It is manifest from Eq. (1.5.14′) that splitting with respect to s yields the equation

   η_x + pη_u = 0,

which in its turn splits with respect to p and gives two equations:

   η_x = 0,  η_u = 0.

Using these equations in (1.5.14′) and splitting with respect to q, one obtains

   ζ_u - pξ_u - η_y = ζ_u - 2ξ_x - 3pξ_u,

whence, splitting with respect to p, one obtains

   ξ_u = 0,  η_y = 2ξ_x.

In view of this information, equation (1.5.14′) takes the form

   ζ_y - pξ_y = ζ_xx + pζ_xu - pξ_xx + p(ζ_xu + pζ_uu).

Finally, splitting the latter equation with respect to p, one obtains

   ζ_uu = 0,  ζ_y = ζ_xx,  ξ_y = ξ_xx - 2ζ_xu.

Thus, equation (1.5.14′) generates the following system of determining equations:

   η_x = 0,  η_u = 0,  η_y = 2ξ_x,  ξ_u = 0,
   ξ_y = ξ_xx - 2ζ_xu,  ζ_uu = 0,  ζ_y = ζ_xx.              (1.5.15)

In order to find the space L(S), one has to construct the general solution of the system (1.5.15). The equation ζ_uu = 0 shows that ζ is at most linear with respect to u, i.e.

   ζ = a(x, y)u + b(x, y).

Furthermore, the equations

   η_x = 0,   η_u = 0

show that η = η(y), and therefore the equation η_y = 2ξ_x yields

   ξ = (1/2)η′(y)x + c(y).

Upon substituting the expression for ζ, the equation ζ_y = ζ_xx takes the form

   a_y u + b_y = a_xx u + b_xx,

whence, splitting with respect to the variable u, one obtains two equations:

   a_y = a_xx,   b_y = b_xx.

A similar substitution brings the equation ξ_y = ξ_xx - 2ζ_xu (in which now ξ_xx = 0) to the form

   -2a_x = (1/2)η″(y)x + c′(y).

Now the equation a_y = a_xx becomes

   a_y = -(1/4)η″.

Hence, all the determining equations (1.5.15) are satisfied if we find the functions a(x, y), b(x, y), η(y), c(y) as solutions of the equations

   a_x = -(1/4)η″(y)x - (1/2)c′(y),   a_y = -(1/4)η″(y),   b_y = b_xx.   (1.5.16)

The compatibility condition for the first two equations (1.5.16) is written as

   η‴(y)x + 2c″(y) = 0,

whence

   η‴(y) = 0,   c″(y) = 0.

The latter equations have the general solution

   η = A1 + 2A2 y + 4A3 y²,   c = B1 + 2B2 y.

Substituting these functions into the equations for a_x and a_y, one obtains

   a(x, y) = B3 - B2 x - A3(x² + 2y).

Summing up the above results, one obtains the following general solution of the determining equations (1.5.15):

   ξ = B1 + 2B2 y + A2 x + 4A3 xy,
   η = A1 + 2A2 y + 4A3 y²,                                 (1.5.17)
   ζ = [B3 - B2 x - A3(x² + 2y)]u + b(x, y),

where A1, A2, A3, B1, B2, B3 are arbitrary constants and b(x, y) is any solution of Eq. (1.5.16). It follows that the space L of operators admitted by the heat equation is infinite-dimensional. Indeed, one can find an infinite set of linearly independent functions satisfying Eq. (1.5.16). Let us investigate the structure of this space L∞ in more detail.
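One can confirm that (1.5.17) satisfies all seven determining equations (1.5.15). The sketch below (sympy assumed) takes the particular choice b(x, y) = e^(x+y), which solves b_y = b_xx:

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
A1, A2, A3, B1, B2, B3 = sp.symbols('A1 A2 A3 B1 B2 B3')

b = sp.exp(x + y)            # any solution of b_y = b_xx will do

xi = B1 + 2*B2*y + A2*x + 4*A3*x*y
eta = A1 + 2*A2*y + 4*A3*y**2
zeta = (B3 - B2*x - A3*(x**2 + 2*y))*u + b

# The seven determining equations (1.5.15), written as residuals.
residuals = [sp.diff(eta, x), sp.diff(eta, u),
             sp.diff(eta, y) - 2*sp.diff(xi, x), sp.diff(xi, u),
             sp.diff(xi, y) - sp.diff(xi, x, 2) + 2*sp.diff(zeta, x, u),
             sp.diff(zeta, u, 2),
             sp.diff(zeta, y) - sp.diff(zeta, x, 2)]
print([sp.simplify(r) for r in residuals])  # seven zeros
```

All residuals vanish identically in x, y, u for arbitrary values of the six constants.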
Note that the operators with the coefficients (1.5.17) contain the operators of the form

   X′ = b(x, y) ∂/∂u.                                       (1.5.18)

The set of the operators (1.5.18) with all b(x, y) satisfying Eq. (1.5.16) is a linear space L′. This property is guaranteed by the linearity and homogeneity of Eq. (1.5.16). On the other hand, the set of all operators obtained when b ≡ 0 is a six-dimensional space L6. Since the operators from L′ and L6 are linearly independent, one obtains the decomposition of L∞ into the direct sum of subspaces:

   L∞ = L6 ⊕ L′.                                            (1.5.19)

Let us dwell on operators from L′ and find out to which groups G1 they correspond. According to Theorem 1.2, in order to construct G1 with X′ of the form (1.5.18) one has to solve the system of equations

   ∂x′/∂a = 0,   ∂y′/∂a = 0,   ∂u′/∂a = b(x′, y′)

with the initial conditions

   x′|_{a=0} = x,   y′|_{a=0} = y,   u′|_{a=0} = u.

One can readily solve the system and verify that the corresponding transformations Ta ∈ G1 have the form

   Ta:  x′ = x,   y′ = y,   u′ = u + ab(x, y).

Since the function ab(x, y), together with b(x, y), is a solution of Eq. (1.5.16), i.e. of the original heat equation, the above transformations Ta consist merely in adding some solution of the equation u_y = u_xx to solutions u of the same equation. It is clear that the presence of such transformations Ta admitted by the equation u_y = u_xx is a trivial consequence of the linearity of this equation. This property obviously holds for all linear equations (and systems of equations).
Therefore, the valuable part of the derived L(S) is encapsulated in the subspace L6. As any finite-dimensional linear vector space, L6 has a basis. In the given case, the latter can be obtained by fixing a nonzero value of one of the arbitrary constants A, B and setting the remaining ones equal to zero in Eqs. (1.5.17). Thus, one obtains the following basis of the space L6 of operators admitted by the equation u_y = u_xx:

   X1 = ∂/∂x,   X2 = ∂/∂y,   X3 = x ∂/∂x + 2y ∂/∂y,
   X4 = 2y ∂/∂x - xu ∂/∂u,                                  (1.5.20)
   X5 = xy ∂/∂x + y² ∂/∂y - (1/4)(x² + 2y)u ∂/∂u,   X6 = u ∂/∂u.

Every operator (1.5.20), and any linear combination of these operators (with constant coefficients), generates a one-parameter group G1 admitted by the equation u_y = u_xx. According to Theorem 1.7, one can map any known solution into new solutions by applying the transformations Ta ∈ G1.
Let us find the general formula for the transformation of solutions. Let a transformation Ta ∈ G1 be given by Eqs. (1.1.2), and let

   Φ:  u^k = φ^k(x)   (k = 1, …, m)

be a solution of the system (S) admitting the group G1. Then Ta maps the manifold Φ into the manifold

   Φ′:  u′^k = φ′^k(x′),

which is also a solution of (S). The equations of Φ′ are written in the variables (x, u) in the form

   g^k(x, u, a) = φ′^k(f(x, u, a)).

Since φ′^k(x) is a solution of (S), then, omitting the prime, one can formulate the following rule of transformation of solutions. If

   u^k = φ^k(x)   (k = 1, …, m)

is a solution of the system (S), then the functions

   u^k = φ̄^k(x, a),

determined implicitly from the system of equations

   g^k(x, u, a) = φ^k(f(x, u, a))   (k = 1, …, m),          (1.5.21)

define a new solution of the system (S) for any value of the parameter a. Let us apply this procedure to some of the operators (1.5.20).
Example 1.18. The operator X1 generates the group G1:

   x′ = x + a,   y′ = y,   u′ = u.

The formula (1.5.21) yields the transformation of the solution u = φ(x, y) into the solution u = φ(x + a, y).

Example 1.19. The operator X3 generates the group G1:

   x′ = ax,   y′ = a²y,   u′ = u,

and equation (1.5.21) provides u = φ(ax, a²y).

Example 1.20. The operator X4 generates the group G1:

   x′ = x + 2ay,   y′ = y,   u′ = e^(-ax-a²y) u.

Here, equation (1.5.21) takes the form

   e^(-ax-a²y) u = φ(x + 2ay, y)

and provides the following formula for the transformation of solutions:

   u = e^(ax+a²y) φ(x + 2ay, y).
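This formula can be tested on a concrete solution; with φ = e^(x+y), which satisfies φ_y = φ_xx, the transformed function must again solve the heat equation (a sketch assuming sympy):

```python
import sympy as sp

x, y, a = sp.symbols('x y a')

phi = sp.exp(x + y)   # a particular solution of u_y = u_xx
assert sp.simplify(sp.diff(phi, y) - sp.diff(phi, x, 2)) == 0

# Image of phi under the group generated by X4 (Example 1.20).
u = sp.exp(a*x + a**2*y) * phi.subs(x, x + 2*a*y)

residual = sp.simplify(sp.diff(u, y) - sp.diff(u, x, 2))
print(residual)  # 0
```

The residual vanishes for every value of the group parameter a, as Theorem 1.7 guarantees.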

Example 1.21. Consider the operator X5. In order to construct the transformations Ta ∈ G1, one has to integrate the system of ordinary differential equations (Theorem 1.2)

   dx′/da = x′y′,   dy′/da = y′²,   du′/da = -(1/4)(x′² + 2y′)u′

with the initial conditions at a = 0:

   x′ = x,   y′ = y,   u′ = u.

The solution is given by

   x′ = x/(1 - ay),   y′ = y/(1 - ay),   u′ = √(1 - ay) e^(-ax²/(4(1-ay))) u.

Equation (1.5.21) takes the form

   √(1 - ay) e^(-ax²/(4(1-ay))) u = φ(x/(1 - ay), y/(1 - ay))

and provides the following formula for the transformation of the solution φ(x, y) into the solution

   u = (1 - ay)^(-1/2) e^(ax²/(4(1-ay))) φ(x/(1 - ay), y/(1 - ay)).

The formula is far from being


p as evident as the previous ones. In particular, assuming
that ϕ = 1; multiplying by a and passing over to the limit as a ! ∞; one obtains
the well-known fundamental solution of the heat equation:

1 x2
u= p e 4y 
y
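That the limiting function is indeed a solution is easy to confirm directly (a sketch assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Fundamental solution obtained in the limit above.
u = sp.exp(-x**2 / (4*y)) / sp.sqrt(y)

residual = sp.simplify(sp.diff(u, y) - sp.diff(u, x, 2))
print(residual)  # 0
```

Up to a constant factor this is the classical heat kernel, here with y playing the role of time.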

There exists another method for constructing new solutions from known ones. It is not connected with constructing the finite transformations Ta, but is applicable only to linear homogeneous equations. Consider a solution depending on the parameter a. Since the parameter a is not involved in (S), one can obtain a new solution by differentiating the considered solution with respect to the parameter a. Let us apply this observation to the solutions provided by the formula (1.5.21).
If we denote the solution derived from the formula (1.5.21) by

   u^k = φ̄^k(x, a),

then we have the identities

   g^k(x, φ̄(x, a), a) ≡ φ^k(f(x, φ̄(x, a), a)).

Differentiating them with respect to a, letting a = 0, invoking the definition (1.1.10), and taking into account that

   φ̄^k(x, 0) = φ^k(x),

we obtain

   ∂φ̄^k/∂a|_{a=0} + η^k(x, φ) = ξ^i(x, φ) ∂φ^k/∂x^i.

Since (S) is linear, the functions

   ∂φ̄^k/∂a|_{a=0}

should also provide a solution of the system (S). Hence, the equations

   u^k = ξ^i(x, φ) ∂φ^k/∂x^i - η^k(x, φ)   (k = 1, …, m)

provide a solution together with u^k = φ^k(x). It is easy to remember these equations, since the right-hand side is the result of applying the operator X to the difference φ^k(x) - u^k, evaluated on the initial solution. Finally, one arrives at the following conclusion for linear equations.
If

   u^k = φ^k(x)   (k = 1, …, m)

is a solution of the linear (homogeneous) system (S) admitting the operator X, then the functions

   u^k = X(φ^k(x) - u^k)|_{u^k = φ^k(x)}   (k = 1, …, m)    (1.5.22)

generate a solution of the system (S) as well.
For instance, applying this method to X4 from Eqs. (1.5.20), one obtains the following solution:

   u = 2yφ_x + xφ.

Likewise, applying the procedure to the operator X5 and any solution φ(x, y), one obtains the following solution:

   u = xyφ_x + y²φ_y + (1/4)(x² + 2y)φ.
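Both formulas can be verified with any particular solution; for φ = e^(x+y) the generated functions again solve the heat equation (a sketch assuming sympy):

```python
import sympy as sp

x, y = sp.symbols('x y')

phi = sp.exp(x + y)                 # a known solution of u_y = u_xx
v = 2*y*sp.diff(phi, x) + x*phi     # new solution produced via (1.5.22) from X4
w = (x*y*sp.diff(phi, x) + y**2*sp.diff(phi, y)
     + sp.Rational(1, 4)*(x**2 + 2*y)*phi)   # new solution from X5

print(sp.simplify(sp.diff(v, y) - sp.diff(v, x, 2)),
      sp.simplify(sp.diff(w, y) - sp.diff(w, x, 2)))  # 0 0
```

The same check works for any solution φ, since the construction only uses linearity of the heat equation.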

1.5.5 Gasdynamic equations

Let us consider the space of operators admitted by the equations of gas dynamics.
The system of equations describing one-dimensional polytropic gas flow (with plane waves) has the form

   u_t + uu_x + (1/ρ)p_x = 0,
   ρ_t + uρ_x + ρu_x = 0,                                   (1.5.23)
   p_t + up_x + γpu_x = 0.

The spatial coordinate x and the time t are independent variables; the velocity u, the pressure p and the density ρ are functions of x and t. The isentropic exponent γ ≠ 0 is a constant.
We look for the infinitesimal operator admitted by the system (1.5.23) in the form

   X = ξ ∂/∂t + η ∂/∂x + ω ∂/∂u + τ ∂/∂ρ + σ ∂/∂p.

For the sake of simplicity, we will use the already known result and assume that the coordinates of the operator X depend on the variables t, x, u, ρ, p as follows:

   ξ = ξ(t, x),
   η = η(t, x),
   ω = ω(t, x, u),                                          (1.5.24)
   τ = τ(t, x, ρ),
   σ = σ(t, x, p).

We will write the first prolongation of the operator X in the form

   X̃ = X + ζ_ut ∂/∂u_t + ζ_ux ∂/∂u_x + ζ_ρt ∂/∂ρ_t + ζ_ρx ∂/∂ρ_x + ζ_pt ∂/∂p_t + ζ_px ∂/∂p_x.

The operators of total differentiation have the form

   D_t = ∂/∂t + u_t ∂/∂u + ρ_t ∂/∂ρ + p_t ∂/∂p,
   D_x = ∂/∂x + u_x ∂/∂u + ρ_x ∂/∂ρ + p_x ∂/∂p.
The additional coordinates of the prolonged operator are derived by the formulae (1.4.10):

   ζ_ut = D_t(ω) - u_t D_t(ξ) - u_x D_t(η) = ω_t + (ω_u - ξ_t)u_t - η_t u_x,
   ζ_ux = D_x(ω) - u_t D_x(ξ) - u_x D_x(η) = ω_x - ξ_x u_t + (ω_u - η_x)u_x,
   ζ_ρt = D_t(τ) - ρ_t D_t(ξ) - ρ_x D_t(η) = τ_t + (τ_ρ - ξ_t)ρ_t - η_t ρ_x,
   ζ_ρx = D_x(τ) - ρ_t D_x(ξ) - ρ_x D_x(η) = τ_x - ξ_x ρ_t + (τ_ρ - η_x)ρ_x,
   ζ_pt = D_t(σ) - p_t D_t(ξ) - p_x D_t(η) = σ_t + (σ_p - ξ_t)p_t - η_t p_x,
   ζ_px = D_x(σ) - p_t D_x(ξ) - p_x D_x(η) = σ_x - ξ_x p_t + (σ_p - η_x)p_x.
Acting with the operator X̃ on every equation of the system (1.5.23), we obtain the equations

   ζ_ut + uζ_ux + (1/ρ)ζ_px + ωu_x - (τ/ρ²)p_x = 0,
   ζ_ρt + uζ_ρx + ρζ_ux + ωρ_x + τu_x = 0,
   ζ_pt + uζ_px + γpζ_ux + ωp_x + γσu_x = 0.
In order to write these equations on the manifold S given by Eqs. (1.5.23), it suffices to eliminate the variables u_t, ρ_t, p_t by means of Eqs. (1.5.23). This yields three determining equations:

   ω_t - (ω_u - ξ_t)(uu_x + (1/ρ)p_x) - η_t u_x + u[ω_x + ξ_x(uu_x + (1/ρ)p_x) + (ω_u - η_x)u_x]
      + (1/ρ)[σ_x + ξ_x(up_x + γpu_x) + (σ_p - η_x)p_x] + ωu_x - (τ/ρ²)p_x = 0,    (I)
   τ_t - (τ_ρ - ξ_t)(uρ_x + ρu_x) - η_t ρ_x + u[τ_x + ξ_x(uρ_x + ρu_x) + (τ_ρ - η_x)ρ_x]
      + ρ[ω_x + ξ_x(uu_x + (1/ρ)p_x) + (ω_u - η_x)u_x] + ωρ_x + τu_x = 0,          (II)

   σ_t - (σ_p - ξ_t)(up_x + γpu_x) - η_t p_x + u[σ_x + ξ_x(up_x + γpu_x) + (σ_p - η_x)p_x]
      + γp[ω_x + ξ_x(uu_x + (1/ρ)p_x) + (ω_u - η_x)u_x] + ωp_x + γσu_x = 0.        (III)

Equations (I)-(III) should hold identically with respect to the variables

   t, x, u, ρ, p, u_x, ρ_x, p_x.

We split Eqs. (I)-(III) with respect to the variables u_x, ρ_x, p_x, i.e. equate the coefficients of these variables to zero. Note that equation (II) contains only one term with p_x, namely ξ_x p_x. Therefore,

   ξ_x = 0.

Taking this equation into account, we carry out the further splitting and obtain the following equations:

   u_x:  -u(ω_u - ξ_t) - η_t + u(ω_u - η_x) + ω = 0,
   p_x:  -(1/ρ)(ω_u - ξ_t) + (1/ρ)(σ_p - η_x) - τ/ρ² = 0,       (I)

   u_x:  -ρ(τ_ρ - ξ_t) + ρ(ω_u - η_x) + τ = 0,
   ρ_x:  -u(τ_ρ - ξ_t) - η_t + u(τ_ρ - η_x) + ω = 0,            (II)

   u_x:  -p(σ_p - ξ_t) + p(ω_u - η_x) + σ = 0,
   p_x:  -u(σ_p - ξ_t) - η_t + u(σ_p - η_x) + ω = 0.            (III)

Equation (I) (u_x) yields

   ω = (η_x - ξ_t)u + η_t.

Due to this expression for ω, equations (II) (ρ_x) and (III) (p_x) are satisfied identically. Furthermore, substituting the expression for ω into Eq. (II) (u_x), one obtains

   τ - ρτ_ρ = 0
and substituting it into Eq. (III) (u_x), one obtains

   σ - pσ_p = 0.

Integration of these equations provides

   σ = a(t, x)p,   τ = b(t, x)ρ,

where a(t, x) and b(t, x) are so far arbitrary functions. Substituting these expressions into Eq. (I) (p_x), one obtains

   b = a - 2η_x + 2ξ_t.

Thus,

   τ = (a - 2η_x + 2ξ_t)ρ.
This completes the procedure of splitting the determining equations with respect to the variables u_x, ρ_x, p_x and provides the equations

   ξ_x = 0,
   ω = (η_x - ξ_t)u + η_t,
   σ = a(t, x)p,                                            (1.5.25)
   τ = (a - 2η_x + 2ξ_t)ρ,

and the remaining parts of equations (I)-(III):

   ω_t + uω_x + (1/ρ)σ_x = 0,                               (I*)
   τ_t + uτ_x + ρω_x = 0,                                   (II*)
   σ_t + uσ_x + γpω_x = 0.                                  (III*)

Equations (I*)-(III*) admit further splitting with respect to the variables u, ρ, p. Equation (I*) yields a_x = 0, whence σ_x = 0. Upon substitution of the expression for ω from Eqs. (1.5.25), equation (I*) takes the form

   u(η_xt - ξ_tt) + η_tt + u[u(η_xx - ξ_tx) + η_tx] = 0.

Then the splitting with respect to u provides

   u²: η_xx = 0;   u¹: 2η_tx = ξ_tt;   u⁰: η_tt = 0.        (1.5.26)

In view of the above results, equation (III*) takes the form

   a_t + γη_tx = 0.

Likewise, equation (II*) yields

   (a_t - 2η_xt + 2ξ_tt) + η_tx = 0,

which is equivalent to

   a_t + (3/2)ξ_tt = 0

due to the previous equations. Expressing a_t via ξ_tt from the last two equations, one obtains two formulae:

   a_t = -(3/2)ξ_tt,   a_t = -(γ/2)ξ_tt,

whence

   (γ - 3)ξ_tt = 0.                                         (1.5.27)
Let γ ≠ 3; then ξ_tt = 0 and we have the following system of determining equations:

   η_xx = 0,  η_xt = 0,  η_tt = 0,  ξ_tt = 0,  a_x = 0,  a_t = 0.

Its general solution is

   a = a1,   ξ = a2 t + a3,   η = a4 t + a5 x + a6,

where ai (i = 1, …, 6) are arbitrary constants. Therefore, if γ ≠ 3, the system (1.5.23) admits a six-dimensional space of operators L6.
If γ = 3, then a_t = -(3/2)ξ_tt. Equations (1.5.26) demonstrate that ξ_ttt = 0. Hence, in this case the general solution has the form

   ξ = a0 t² + a2 t + a3,
   a = -3a0 t + a1,
   η = a0 tx + a4 t + a5 x + a6.

As compared to the above, the dimension of the space is increased by one, i.e. one obtains L7. Since L6 results from L7 when a0 = 0, the case γ = 3 is considered further. Equations (1.5.25) provide

   ω = (-a0 t + a5 - a2)u + a0 x + a4,
   τ = (-a0 t + a1 - 2a5 + 2a2)ρ,
   σ = (-3a0 t + a1)p.

Let us introduce the constant a5′ = a5 - a2 instead of a5 for the sake of convenience. Since a5′ is an independent arbitrary constant, the prime will be omitted.
Finally, the general solution of the determining equations has the form

   ξ = a0 t² + a2 t + a3,
   η = a0 tx + a4 t + (a2 + a5)x + a6,
   ω = a0(x - tu) + a5 u + a4,                              (1.5.28)
   τ = (-a0 t + a1 - 2a5)ρ,
   σ = (-3a0 t + a1)p.

The following basis of the space of operators L6 (γ ≠ 3) corresponds to this solution:

   X1 = ∂/∂t,                          (time translation)
   X2 = ∂/∂x,                          (space translation)
   X3 = x ∂/∂x + t ∂/∂t,               (dilation)
   X4 = t ∂/∂x + ∂/∂u,                 (the Galilean translation)
   X5 = x ∂/∂x + u ∂/∂u - 2ρ ∂/∂ρ,     (dilation)
   X6 = ρ ∂/∂ρ + p ∂/∂p.               (dilation)
In the case L7 (γ = 3) one more basis operator is added:

   X7 = t² ∂/∂t + tx ∂/∂x + (x - tu) ∂/∂u - tρ ∂/∂ρ - 3tp ∂/∂p,

which is the operator of a projective transformation of the space E⁵(t, x, u, ρ, p).
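The solution (1.5.28) can be substituted back into the determining relations; the sketch below (sympy assumed) checks the structural formulas of (1.5.25) and the residual equations (I*)-(III*), the last of which forces γ = 3 whenever a0 ≠ 0:

```python
import sympy as sp

t, x, u, rho, p, gamma = sp.symbols('t x u rho p gamma')
a0, a1, a2, a3, a4, a5, a6 = sp.symbols('a0:7')

# General solution (1.5.28) of the determining equations.
xi = a0*t**2 + a2*t + a3
eta = a0*t*x + a4*t + (a2 + a5)*x + a6
omega = a0*(x - t*u) + a5*u + a4
a = -3*a0*t + a1
sigma = a*p
tau = (-a0*t + a1 - 2*a5)*rho

# Structural relations from (1.5.25).
assert sp.simplify(omega - ((sp.diff(eta, x) - sp.diff(xi, t))*u
                            + sp.diff(eta, t))) == 0
assert sp.simplify(tau - (a - 2*sp.diff(eta, x) + 2*sp.diff(xi, t))*rho) == 0

# Remaining equations (I*)-(III*).
eqI = sp.diff(omega, t) + u*sp.diff(omega, x) + sp.diff(sigma, x)/rho
eqII = sp.diff(tau, t) + u*sp.diff(tau, x) + rho*sp.diff(omega, x)
eqIII = sp.diff(sigma, t) + u*sp.diff(sigma, x) + gamma*p*sp.diff(omega, x)

print(sp.simplify(eqI), sp.simplify(eqII), sp.factor(eqIII))
# eqIII reduces to a0*p*(gamma - 3), which vanishes for gamma = 3.
```

Setting a0 = 0 recovers the six-parameter solution valid for arbitrary γ ≠ 0.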

1.6 Lie algebra of operators

1.6.1 Commutator. Definition of a Lie algebra

We have already introduced in §1.5 the operations of addition of operators and of their multiplication by constants. We will now consider one more operation.
Let

   X = ξ^i ∂/∂x^i   and   Y = η^i ∂/∂x^i

be two operators in E^N.
Definition 1.7. The commutator of the operators X and Y is the new operator [X, Y] determined by the formula

   [X, Y] = (Xη^i - Yξ^i) ∂/∂x^i.                           (1.6.1)

It can be obtained by the following principle. Consider the expression

   X(Y F(x)) - Y(X F(x))

as a result of the action of some operator on the function F(x). Expansion of this expression cancels out the second-order derivatives and provides the result of the action of the commutator on F. Therefore, the formula

   [X, Y] = (Xη^i - Yξ^i) ∂/∂x^i = XY - YX                  (1.6.2)

holds.
The operation defined by the formula (1.6.1) maps any two operators X and Y into their commutator [X, Y]. This operation is referred to as the operation of commutation. Let us formulate some properties of the operation of commutation.
1° The commutator is bilinear with respect to X and Y, i.e. for any constants α and β the following identity holds:

   [(αX + βY), Z] = α[X, Z] + β[Y, Z].

2° The commutator is skew-symmetric:

   [X, Y] = -[Y, X].

3° The Jacobi identity

   [[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0

holds for any three operators X, Y and Z.
These properties can be easily proved by means of the representation (1.6.2) of the commutator.
Definition 1.8. A linear space L of operators is referred to as a Lie algebra of operators if for any X and Y belonging to L, their commutator belongs to L as well.
Some examples of spaces L of operators admitted by differential equations have been considered in §1.5. One can easily verify that all of them are Lie algebras of operators. To this end, it is sufficient to calculate the commutators of the basis operators. In what follows, it will be demonstrated that this circumstance is not incidental. For this purpose, we have to study some additional properties of the operation of commutation.
1.6.2 Properties of commutator

Lemma 1.4. The commutator is invariant with respect to changes of coordinate


systems in E N :
Proof. Consider a new system of coordinates given by the equations

yi = yi (x) (i = 1; : : : ; N):

According to the transformation formula (1.2.6) for operators, we have

∂ ∂
X = X 0 = X(yi ) ; Y = Y 0 = Y (yi ) 
∂ yi ∂ yi

Calculating the commutator in the variables (y); one has

∂ ∂
[X 0 ;Y 0 ] = [X 0Y (yi ) Y 0 X(yi )] = [XY (yi ) Y X(yi )]
∂ yi ∂ yi

= [X ;Y ](yi ) = [X ;Y ]0 :
∂ yi

Hence, [X 0 ;Y 0 ] = [X ;Y ]0 and Lemma 1.4 is proved.


Theorem 1.10. If a manifold M ⊂ E^N is invariant with respect to the operators X and Y, then it is also invariant with respect to their commutator [X, Y].
Proof. Let us make the same assumption as in the proof of Theorem 1.5, namely that M is given by the equations

   x¹ = 0, …, x^s = 0.

The necessary and sufficient conditions of invariance of such an M with respect to the operators

   X = ξ^i ∂/∂x^i,   Y = η^i ∂/∂x^i

are obtained in Theorem 1.5 in the form

   ξ^σ|_M = ξ^σ(0, …, 0, x^{s+1}, …, x^N) = 0,
   η^σ|_M = η^σ(0, …, 0, x^{s+1}, …, x^N) = 0   (σ = 1, …, s).

Note that all the coordinates

   [X, Y]^σ = Xη^σ - Yξ^σ = ξ^i ∂η^σ/∂x^i - η^i ∂ξ^σ/∂x^i   (1 ≤ σ ≤ s)

of the commutator are identically equal to zero on the manifold M. Indeed, [X, Y]^σ can be written in the form

   ξ^τ ∂η^σ/∂x^τ + ξ^{τ′} ∂η^σ/∂x^{τ′} - η^τ ∂ξ^σ/∂x^τ - η^{τ′} ∂ξ^σ/∂x^{τ′},

where τ = 1, …, s and τ′ = s + 1, …, N. The terms with the indices τ vanish on M because ξ^τ and η^τ are equal to zero there according to the conditions of invariance. The terms with τ′ are also equal to zero, because the operations of differentiation with respect to x^{τ′} and of transition to the manifold M are permutable.
Theorem 1.11. The operation of prolongation is permutable with the operation of commutation:

   [X̃, Ỹ] = [X, Y]~.

Proof. We use the invariance of the commutator and of the operation of prolongation with respect to changes of the system of coordinates. Let us introduce a system of coordinates in E^N such that the operator X becomes an operator of translation along one of the coordinates; this is always possible according to Theorem 1.3. As has been demonstrated above, the operator of translation "does not prolong", so the prolongation of the operator X has the form X̃ = X. We assume that the operator of translation is

   X = ∂/∂x¹.

The alternative assumption

   X = ∂/∂u¹

is treated likewise. Further, one has

   Ỹ = Y + ζ_i^k ∂/∂p_i^k = ξ^i ∂/∂x^i + η^k ∂/∂u^k + ζ_i^k ∂/∂p_i^k,   ζ_i^k = D_i(η^k) - p_j^k D_i(ξ^j).

Let us compute the commutator [X̃, Ỹ]:

   [X̃, Ỹ] = [X, Ỹ] = [X, Y] + [X, ζ_i^k ∂/∂p_i^k] = [X, Y] + (∂ζ_i^k/∂x¹) ∂/∂p_i^k.

On the other hand, since

   [X, Y] = (∂ξ^i/∂x¹) ∂/∂x^i + (∂η^k/∂x¹) ∂/∂u^k,

one has

   [X, Y]~ = [X, Y] + ζ′_i^k ∂/∂p_i^k,

where

   ζ′_i^k = D_i(∂η^k/∂x¹) - p_j^k D_i(∂ξ^j/∂x¹) = ∂ζ_i^k/∂x¹.

Hence,

   [X, Y]~ = [X, Y] + (∂ζ_i^k/∂x¹) ∂/∂p_i^k = [X̃, Ỹ],

and the theorem is proved.

1.6.3 Lie algebra of admitted operators

Theorem 1.12. Given any system of differential equations (S), the linear space L(S) of operators admitted by the system (S) is a Lie algebra.
Proof. If (S) admits X and Y, i.e. X, Y ∈ L(S), then the manifold S is invariant with respect to X̃ and Ỹ in the prolonged space. According to Theorem 1.10, S is also invariant with respect to [X̃, Ỹ], and according to Theorem 1.11,

   [X̃, Ỹ] = [X, Y]~.

Thus, S is invariant with respect to [X, Y]~, which means that (S) admits the commutator [X, Y] by definition, so that [X, Y] ∈ L(S), and Theorem 1.12 is proved.
Theorems 1.8 and 1.12 demonstrate that the commutator of any two operators from L(S) is again an operator in L(S). In particular, in the case of a finite-dimensional Lr, the commutator of any two basis operators is a linear combination of the basis operators. It is convenient to record this circumstance in a table of commutators, where the intersection of the k-th row and the l-th column gives the commutator [Xk, Xl].
As an example, we provide the following table of commutators for the basis operators of the Lie algebra L6 admitted by the heat equation u_y = u_xx:

$$\begin{array}{c|cccccc}
 & X_1 & X_2 & X_3 & X_4 & X_5 & X_6 \\ \hline
X_1 & 0 & 0 & X_1 & -X_6 & \tfrac{1}{2}X_4 & 0 \\
X_2 & 0 & 0 & 2X_2 & 2X_1 & X_3 - \tfrac{1}{2}X_6 & 0 \\
X_3 & -X_1 & -2X_2 & 0 & X_4 & 2X_5 & 0 \\
X_4 & X_6 & -2X_1 & -X_4 & 0 & 0 & 0 \\
X_5 & -\tfrac{1}{2}X_4 & -X_3 + \tfrac{1}{2}X_6 & -2X_5 & 0 & 0 & 0 \\
X_6 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}$$
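Such a table can be verified mechanically. The basis below is one common normalization of the six-dimensional symmetry algebra of the heat equation $u_y = u_{xx}$ ($X_1 = \partial/\partial x$, $X_2 = \partial/\partial y$, etc.); the book's basis may differ by signs or constant factors, so the sketch only illustrates how such a table is computed and checked.

```python
# Checking entries of a heat-equation commutator table with sympy.
import sympy as sp

x, y, u = sp.symbols('x y u')
VARS = (x, y, u)

# Operators as coefficient tuples (coeff of d/dx, d/dy, d/du).
X1 = (sp.Integer(1), sp.Integer(0), sp.Integer(0))
X2 = (sp.Integer(0), sp.Integer(1), sp.Integer(0))
X3 = (x, 2*y, sp.Integer(0))
X4 = (2*y, sp.Integer(0), -x*u)
X5 = (x*y, y**2, -(x**2/4 + y/2)*u)
X6 = (sp.Integer(0), sp.Integer(0), u)

def op(c, f):
    return sum(ci*sp.diff(f, v) for ci, v in zip(c, VARS))

def comm(a, b):
    return tuple(sp.expand(op(a, bi) - op(b, ai)) for ai, bi in zip(a, b))

def eq(a, b):
    return all(sp.simplify(ai - bi) == 0 for ai, bi in zip(a, b))

half = sp.Rational(1, 2)
scale = lambda k, c: tuple(sp.expand(k*ci) for ci in c)
add = lambda a, b: tuple(sp.expand(ai + bi) for ai, bi in zip(a, b))

# Upper triangle of the table for this normalization of the basis.
assert eq(comm(X1, X3), X1)
assert eq(comm(X1, X4), scale(-1, X6))
assert eq(comm(X1, X5), scale(half, X4))
assert eq(comm(X2, X3), scale(2, X2))
assert eq(comm(X2, X4), scale(2, X1))
assert eq(comm(X2, X5), add(X3, scale(-half, X6)))
assert eq(comm(X3, X4), X4)
assert eq(comm(X3, X5), scale(2, X5))
assert eq(comm(X4, X5), (0, 0, 0))
print("table verified")
```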
Chapter 2
Lie algebras and local Lie groups

2.1 Lie algebra

2.1.1 Definition and examples

We consider a linear vector space $L$ of elements (vectors) $u, v, \ldots$ over the field of real (or complex) numbers.
Definition 2.1. A linear space $L$ is said to be a Lie algebra if a binary operation of multiplication $[u, v]$ satisfying the following properties is defined:
1°. Linearity: $[\alpha u + \beta v, \omega] = \alpha[u, \omega] + \beta[v, \omega]$;
2°. Antisymmetry: $[u, v] = -[v, u]$;
3°. Jacobi identity: $[[u, v], \omega] + [[v, \omega], u] + [[\omega, u], v] = 0.$
The multiplication is also termed an operation of commutation, and the product $[u, v]$ is called the commutator of the vectors $u, v.$
Example 2.1. The set of operators admitted by a system of differential equations. The operation of commutation is defined in §1.6.

Example 2.2. The set of vectors in the three-dimensional Euclidean space, where the operation of commutation is the vector product: $[a, b] = a \times b.$

Example 2.3. The set of linear transformations $A, B, \ldots$ of a linear space. The operation of commutation is defined by $[A, B] = AB - BA.$
Depending on the dimension of the linear space $L$, one can single out infinite Lie algebras ($L$ is infinite-dimensional) and finite Lie algebras ($L$ is finite-dimensional). In the latter case, the Lie algebra is denoted by $L_r$ if the dimension of $L$ is $r.$ Thus, one has the Lie algebra $L_3$ in Example 2.2.
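The three axioms of Definition 2.1 are easy to confirm numerically for Example 2.2. The sketch below uses plain Python with exact integer arithmetic; the sample vectors are arbitrary.

```python
# The vector (cross) product on R^3 satisfies the axioms of Definition 2.1.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def add(u, v):    return tuple(x + y for x, y in zip(u, v))
def scale(k, u):  return tuple(k*x for x in u)

u, v, w = (1, 2, 3), (-4, 0, 5), (2, -1, 7)

# 1. Linearity in the first argument.
assert cross(add(scale(2, u), scale(3, v)), w) == \
       add(scale(2, cross(u, w)), scale(3, cross(v, w)))
# 2. Antisymmetry.
assert cross(u, v) == scale(-1, cross(v, u))
# 3. Jacobi identity.
assert add(add(cross(cross(u, v), w), cross(cross(v, w), u)),
           cross(cross(w, u), v)) == (0, 0, 0)
print("cross product is a Lie bracket on these samples")
```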
Definition 2.2. A linear mapping $\psi$ of a Lie algebra $L$ to a Lie algebra $L'$ is called an isomorphism if:
(a) it is a one-to-one mapping,
(b) it preserves the commutator of any two vectors.

If only the condition (b) is satisfied, the mapping $\psi$ is called a homomorphism. The set $J$ of vectors from $L$ mapped into the zero of the Lie algebra $L'$ is called the kernel of the homomorphism $\psi.$ An isomorphism of $L$ onto itself is called an automorphism.
In this definition, preservation of the commutator means that
$$\psi([u, v]) = [\psi(u), \psi(v)].$$
Example 2.4. The Lie algebra $L_3$ from Example 2.2 is isomorphic to the Lie algebra of the matrices
$$A = \begin{pmatrix} 0 & a_1 & a_2 \\ -a_1 & 0 & -a_3 \\ -a_2 & a_3 & 0 \end{pmatrix}$$
if the operation of commutation for $A$ is defined as in Example 2.3. The isomorphism $\psi$ is determined as follows: if a vector $a$ has the coordinates $(a_1, a_2, a_3)$, then $\psi(a) = A.$
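The isomorphism of Example 2.4 can be checked directly: $\psi(a \times b)$ must equal the matrix commutator $[\psi(a), \psi(b)].$ The matrix in `psi` below follows the displayed $A$ up to the placement of minus signs, which is chosen here so that the mapping preserves the commutator.

```python
# psi(a x b) = psi(a)psi(b) - psi(b)psi(a) on sample integer vectors.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def psi(a):
    a1, a2, a3 = a
    return ((0,   a1,  a2),
            (-a1,  0, -a3),
            (-a2, a3,   0))

def matmul(A, B):
    return tuple(tuple(sum(A[i][k]*B[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

def matsub(A, B):
    return tuple(tuple(A[i][j] - B[i][j] for j in range(3)) for i in range(3))

for a, b in [((1, 0, 0), (0, 1, 0)),
             ((1, 2, 3), (-4, 0, 5)),
             ((2, -7, 1), (3, 3, -2))]:
    A, B = psi(a), psi(b)
    assert matsub(matmul(A, B), matmul(B, A)) == psi(cross(a, b))
print("psi preserves the commutator")
```

Since both sides are bilinear in $(a, b)$, checking the identity on the three basis pairs already proves it for all vectors.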
2.1.2 Subalgebra and ideal

Definition 2.3. A linear subspace $N \subset L$ is called a subalgebra of the Lie algebra $L$ if $[u, v] \in N$ for any $u, v \in N.$ The subalgebra $J \subset L$ is termed an ideal of $L$ if $[u, v] \in J$ for any $u \in J$ and $v \in L.$
One can easily verify that a homomorphism (and an isomorphism in particular) maps a subalgebra $N \subset L$ into a subalgebra $N' \subset L'$, and an ideal $J$ into an ideal $J'.$ Further, the kernel $J$ of a homomorphism of the Lie algebra $L$ into the Lie algebra $L'$ is an ideal in $L.$ Indeed, if $u \in J$ and $v \in L$, then $\psi(u) = 0$ implies that
$$\psi([u, v]) = [\psi(u), \psi(v)] = [0, \psi(v)] = 0,$$
so that $[u, v] \in J.$
The set $Z$ of all vectors $u \in L$ such that $[u, v] = 0$ for any $v \in L$ is an ideal in $L.$ This ideal is termed the center of the Lie algebra $L.$ Here we will verify only the property of $Z$ to be a subalgebra. Let $u_1, u_2 \in Z$ and $v \in L.$ Using the Jacobi identity
$$[[u_1, u_2], v] + [[u_2, v], u_1] + [[v, u_1], u_2] = 0$$
and noting that the second and the third terms vanish due to the assumptions that $[u_2, v] = 0$ and $[u_1, v] = -[v, u_1] = 0$, we obtain $[[u_1, u_2], v] = 0.$ Hence,
$$[u_1, u_2] \in Z.$$
The remaining properties are verified trivially.
If $J$ is an ideal in $L$, one can introduce an equivalence relation. Namely, two vectors $u$ and $v$ from $L$ are equivalent, $u \sim v$, if $u - v \in J.$ One can easily verify that the introduced equivalence relation is reflexive, symmetric and transitive. The equivalence relation splits $L$ into classes $U, V, \ldots$ of equivalent vectors. The set of these classes is a Lie algebra if one introduces the basic operations by the following rule: if $u \in U$ and $v \in V$, then $\alpha U + \beta V$ and $[U, V]$ are the classes containing the elements $\alpha u + \beta v$ and $[u, v]$, respectively.

Definition 2.4. The Lie algebra of classes of equivalent vectors introduced above is called the quotient algebra of the Lie algebra $L$ with respect to its ideal $J$ and is denoted by $L/J.$
There exists a “natural” homomorphism $\psi$ of the Lie algebra $L$ to the quotient algebra $L/J.$ It is given by the formula $\psi(u) = U$ if $u \in U.$ The ideal $J$ is the kernel of the homomorphism $\psi.$
2.1.3 Structure of finite-dimensional Lie algebras

Let us consider the case of a finite-dimensional Lie algebra $L_r.$ Since $L_r$ is a linear space, it has a basis $\{u_\alpha\}$ of $r$ linearly independent vectors
$$u_\alpha \quad (\alpha = 1, \ldots, r).$$
Any vector of $L_r$ is represented uniquely as a linear combination of the basis vectors. In particular,
$$[u_\alpha, u_\beta] = C_{\alpha\beta}^{\gamma}\, u_\gamma \quad (\alpha, \beta = 1, \ldots, r), \tag{2.1.1}$$
where summation over $\gamma = 1, \ldots, r$ is assumed in the right-hand side. The numbers $C_{\alpha\beta}^{\gamma}$ are called the structure constants of $L_r$ in the basis $\{u_\alpha\}.$
Structure constants change together with the basis. Let $\{\bar u_\alpha\}$ be a new basis in $L_r$ defined by
$$\bar u_\alpha = p_\alpha^\beta u_\beta.$$
Comparing two expressions for the commutator,
$$[\bar u_\alpha, \bar u_\beta] = [p_\alpha^\sigma u_\sigma, p_\beta^\tau u_\tau] = p_\alpha^\sigma p_\beta^\tau [u_\sigma, u_\tau] = p_\alpha^\sigma p_\beta^\tau C_{\sigma\tau}^{\gamma} u_\gamma$$
and
$$[\bar u_\alpha, \bar u_\beta] = \bar C_{\alpha\beta}^{\varepsilon} \bar u_\varepsilon = \bar C_{\alpha\beta}^{\varepsilon} p_\varepsilon^\gamma u_\gamma,$$
one obtains the following rule for the change of structure constants:
$$\bar C_{\alpha\beta}^{\varepsilon} p_\varepsilon^\gamma = C_{\sigma\tau}^{\gamma} p_\alpha^\sigma p_\beta^\tau.$$
It follows that $C_{\alpha\beta}^{\gamma}$ is a tensor of the third order, twice covariant and once contravariant.
The structure constants determine the Lie algebra $L_r$ completely, since they allow one to find the commutator of any two elements in coordinate form. Namely, if
$$u = a^\alpha u_\alpha, \qquad v = b^\alpha u_\alpha,$$
then, by virtue of Eqs. (2.1.1),
$$[u, v] = a^\alpha b^\beta [u_\alpha, u_\beta] = C_{\alpha\beta}^{\gamma} a^\alpha b^\beta u_\gamma. \tag{2.1.2}$$
The properties 2° and 3° of the operation of commutation (see Definition 2.1) can be expressed in terms of the structure constants of $L_r.$ One can readily verify that $C_{\alpha\beta}^{\gamma}$ satisfies the following Jacobi relations:
$$C_{\alpha\beta}^{\gamma} = -C_{\beta\alpha}^{\gamma}, \qquad C_{\alpha\beta}^{\sigma} C_{\sigma\gamma}^{\varepsilon} + C_{\beta\gamma}^{\sigma} C_{\sigma\alpha}^{\varepsilon} + C_{\gamma\alpha}^{\sigma} C_{\sigma\beta}^{\varepsilon} = 0 \tag{2.1.3}$$
for all $\alpha, \beta, \gamma, \varepsilon = 1, \ldots, r$ (summation over $\sigma = 1, \ldots, r$).
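For the algebra $L_3$ of Example 2.2 with the standard basis $e_1, e_2, e_3$, the commutators of basis vectors recover the structure constants (the Levi-Civita symbol), and the Jacobi relations (2.1.3) can be checked by brute force:

```python
# Structure constants of (R^3, cross product) and the relations (2.1.3).
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
r = 3

# C[alpha][beta][gamma] = C^gamma_{alpha beta}: the gamma-component of
# [e_alpha, e_beta], because the basis is the standard one.
C = [[list(cross(basis[a], basis[b])) for b in range(r)] for a in range(r)]

# Antisymmetry.
assert all(C[a][b][g] == -C[b][a][g]
           for a in range(r) for b in range(r) for g in range(r))

# Jacobi relations:
# sum_sigma (C^s_ab C^e_sg + C^s_bg C^e_sa + C^s_ga C^e_sb) = 0.
for a in range(r):
    for b in range(r):
        for g in range(r):
            for e in range(r):
                total = sum(C[a][b][s]*C[s][g][e] + C[b][g][s]*C[s][a][e]
                            + C[g][a][s]*C[s][b][e] for s in range(r))
                assert total == 0
print("Jacobi relations (2.1.3) hold")
```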
The property of isomorphism of Lie algebras $L_r$ is also expressed in terms of structure constants.

Theorem 2.1. Algebras $L_r$ and $L'_r$ are isomorphic if and only if they have the same structure constants. Namely, if $L_r$ and $L'_r$ have the same $C_{\alpha\beta}^{\gamma}$, they are isomorphic; if $L_r$ and $L'_r$ are isomorphic, then one can find bases in them such that the $C_{\alpha\beta}^{\gamma}$ of both Lie algebras coincide in these bases.
Proof. Let $L_r$ and $L'_r$ be isomorphic and let $\{u_\alpha\}$ be a basis of $L_r.$ If $\psi$ is an isomorphism of $L_r$ onto $L'_r$, then $\{\psi(u_\alpha)\}$ is a basis of $L'_r.$ One has
$$[\psi(u_\alpha), \psi(u_\beta)] = \psi([u_\alpha, u_\beta]) = \psi(C_{\alpha\beta}^{\gamma} u_\gamma) = C_{\alpha\beta}^{\gamma} \psi(u_\gamma),$$
where $C_{\alpha\beta}^{\gamma}$ are the structure constants of $L_r.$ The resulting equation demonstrates that the same $C_{\alpha\beta}^{\gamma}$ provide the structure constants of $L'_r$ in the basis $\{\psi(u_\alpha)\}.$ Conversely, let $\{u_\alpha\}$ and $\{u'_\alpha\}$ be bases in $L_r$ and $L'_r$, respectively, defined so that
$$[u_\alpha, u_\beta] = C_{\alpha\beta}^{\gamma} u_\gamma$$
and
$$[u'_\alpha, u'_\beta] = C_{\alpha\beta}^{\gamma} u'_\gamma.$$
Let us define an isomorphism $\psi$ by the relation
$$\psi(u_\alpha) = u'_\alpha$$
on the basis vectors and extend it linearly to the whole $L_r$: if
$$u = a^\alpha u_\alpha,$$
then
$$\psi(u) = a^\alpha u'_\alpha.$$
It is manifest that $\psi$ is a one-to-one mapping. Preservation of the commutator follows from (2.1.2). Theorem 2.1 is proved.
It is important to point out that for every set of constants $C_{\alpha\beta}^{\gamma}$ satisfying the Jacobi relations (2.1.3) there exists a Lie algebra $L_r$ for which these $C_{\alpha\beta}^{\gamma}$ are its structure constants. In order to construct such an $L_r$ one has to take any $r$-dimensional vector space, choose some basis $\{u_\alpha\}$ in it and introduce the operation of commutation by the formula (2.1.2). Then equations (2.1.3) guarantee that the defined commutator satisfies all axioms of Definition 2.1.

Finally, let us introduce several other notions connected with a Lie algebra.

The Lie algebra $L$ is said to be simple if it does not contain ideals other than the zero ideal (consisting of the zero vector alone) and the algebra $L$ itself.

One can readily verify that the linear span of all commutators $[u, v]$ of vectors of the Lie algebra $L$ is an ideal in $L.$ It is called the derived algebra of the Lie algebra $L$ and is denoted by $L^{(1)}.$ One can construct the sequence of derived algebras $L^{(k)}$ $(k = 1, 2, \ldots)$ by determining $L^{(k)}$ as the derived algebra of the Lie algebra $L^{(k-1)}.$

A Lie algebra $L$ is said to be solvable if $L^{(k)} = \{0\}$ (the null algebra) for a certain $k < \infty.$

The Lie algebra $L$ is said to be semi-simple if it does not contain nonzero solvable ideals.
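The derived series is easy to compute in a concrete case. The example below (not taken from the text) uses the algebra of upper triangular $2\times 2$ matrices with basis $E_{11}, E_{12}, E_{22}$ under $[A, B] = AB - BA$: its derived algebra is spanned by $E_{12}$ alone and the second derived algebra is the null algebra, so the algebra is solvable.

```python
# Derived series of the upper triangular 2x2 matrices; exact arithmetic.
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))]
            for i in range(len(A))]

def rank(vectors):
    """Row rank by Gaussian elimination over the rationals."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    rk, col = 0, 0
    while rk < len(rows) and col < len(rows[0]):
        piv = next((i for i in range(rk, len(rows)) if rows[i][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for i in range(len(rows)):
            if i != rk and rows[i][col]:
                f = rows[i][col] / rows[rk][col]
                rows[i] = [a - f*b for a, b in zip(rows[i], rows[rk])]
        rk += 1
        col += 1
    return rk

flatten = lambda M: [x for row in M for x in row]

def derived(span_set):
    """Spanning set of the derived algebra of span(span_set)."""
    return [bracket(A, B) for A in span_set for B in span_set]

E11 = [[1, 0], [0, 0]]
E12 = [[0, 1], [0, 0]]
E22 = [[0, 0], [0, 1]]

L0 = [E11, E12, E22]
L1 = derived(L0)            # spanned by E12
L2 = derived(L1)            # the null algebra

print(rank(map(flatten, L0)), rank(map(flatten, L1)),
      rank(map(flatten, L2)))  # prints: 3 1 0
```

By bilinearity of the commutator, brackets of a spanning set span the derived algebra, which is why the helper `derived` works with spanning sets rather than bases.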
2.2 Adjoint algebra

2.2.1 Inner derivation

Let $L$ be a Lie algebra and $a \in L.$ According to property 1° of Definition 2.1, the formula
$$v = [u, a], \qquad u \in L,$$
defines a linear mapping of $L$ into itself. This mapping is referred to as the inner derivation of the Lie algebra $L$ and is further denoted by $D_a.$ Thus
$$D_a u = [u, a]. \tag{2.2.1}$$
The term “inner derivation” is justified by the fact that the mapping $D_a$ acts on the commutator $[u, v]$ according to a formula similar to the differentiation of a product of functions, namely
$$D_a[u, v] = [D_a u, v] + [u, D_a v]. \tag{2.2.2}$$
Equation (2.2.2) is easily proved by using the Jacobi relations.

When the vector $a$ runs through the whole Lie algebra $L$, one obtains a set of inner derivations $\{D_a\}$, on which one can determine the linear operations of summation and multiplication by numbers that turn it into a linear vector space $L_D$:
$$\alpha D_a + \beta D_b = D_{\alpha a + \beta b}. \tag{2.2.3}$$
The null element of the space $L_D$ is $D_0$, defined by
$$D_0 u = [u, 0];$$
it maps every vector $u \in L$ into $0.$ We introduce in $L_D$ the operation of commutation defined by the formula
$$[D_a, D_b] = D_{[a, b]}. \tag{2.2.4}$$

Theorem 2.2. The linear space $L_D$ with the operation of commutation (2.2.4) is a Lie algebra.
Proof. Since the validity of axioms 1° and 2° of Definition 2.1 is evident, one has only to verify axiom 3°. According to the definition (2.2.4), one has
$$[[D_a, D_b], D_c] = [D_{[a, b]}, D_c] = D_{[[a, b], c]}.$$
Therefore, for any $u \in L$,
$$\{[[D_a, D_b], D_c] + [[D_b, D_c], D_a] + [[D_c, D_a], D_b]\}\, u = [u,\ [[a, b], c] + [[b, c], a] + [[c, a], b]] = [u, 0] = D_0 u,$$
which was to be proved.

2.2.2 Adjoint algebra

Definition 2.5. The Lie algebra LD constructed according to Eqs. (2.2.1), (2.2.3),
(2.2.4) is called the algebra of inner derivations or the adjoint algebra of the Lie
algebra L:

Theorem 2.3. The adjoint algebra LD is isomorphic to the quotient algebra L=Z of
the Lie algebra L with respect to its center Z:
Proof. There exists a natural homomorphism $\psi$ of $L$ onto $L_D$, namely
$$\psi(a) = D_a.$$
Equation (2.2.3) shows that $\psi$ is linear, whereas equation (2.2.4) entails that it preserves the commutator. To complete the proof one has to verify that the kernel $J$ of the homomorphism $\psi$ coincides with $Z.$ By definition, $a \in J$ if and only if $\psi(a) = D_0.$ If $a \in Z$, then according to Eq. (2.2.1), $D_a u = 0$, so that $\psi(a) = D_0$, i.e. $a \in J.$ Conversely, if $a \in J$, then $D_a = D_0$ and
$$[a, b] = -[b, a] = -D_a b = -D_0 b = 0$$
for any vector $b \in L$, so that $a \in Z.$ Thus, $J = Z.$

Let us consider the case of a finite-dimensional Lr with a basis fuα g: Any vector
u 2 Lr is written in the form u = xα uα ; where xα (α = 1; : : : ; r) are coordinates
of the vector u in the basis fuα g: Let us introduce “basis” inner derivations by the
following formula:
Dα = Du α (α = 1; : : : ; r):
Then,
γ
Dβ u = [u; uβ ] = xα [uα ; uβ ] = Cαβ xα uγ
and one arrives at the formula
γ
Dβ (xα uα ) = Cαβ xα uγ : (2.2.5)

Let us introduce the operators
$$E_\beta = C_{\alpha\beta}^{\gamma} x^\alpha \frac{\partial}{\partial x^\gamma} \quad (\beta = 1, \ldots, r) \tag{2.2.6}$$
acting in the $r$-dimensional space of the points $x = (x^1, \ldots, x^r)$ and consider their linear combinations
$$E = e^\beta E_\beta$$
with constant (i.e. independent of $x$) coefficients $e^1, \ldots, e^r.$ Let $\{E\}$ be the set of all such operators $E.$
Theorem 2.4. The set $\{E\}$ is a Lie algebra of operators, isomorphic to the adjoint algebra $L_D$ of the Lie algebra $L.$

Proof. The set $\{E\}$ is a linear space of operators by construction. The axioms 1°–3° of Definition 2.1 always hold (see §1.6) for operators with the usual definition of the commutator (see Definition 1.7). Therefore, in order to prove that $\{E\}$ is a Lie algebra, one has only to verify that $[E_\beta, E_\theta] \in \{E\}$ for any $\beta, \theta = 1, \ldots, r.$ This is provided by straightforward calculations invoking the properties (2.1.3) of the structure constants. One has
$$[E_\beta, E_\theta] = C_{\alpha\beta}^{\gamma} x^\alpha \frac{\partial}{\partial x^\gamma}\big(C_{\sigma\theta}^{\varepsilon} x^\sigma\big) \frac{\partial}{\partial x^\varepsilon} - C_{\alpha\theta}^{\gamma} x^\alpha \frac{\partial}{\partial x^\gamma}\big(C_{\sigma\beta}^{\varepsilon} x^\sigma\big) \frac{\partial}{\partial x^\varepsilon}$$
$$= \big(C_{\alpha\beta}^{\gamma} C_{\gamma\theta}^{\varepsilon} - C_{\alpha\theta}^{\gamma} C_{\gamma\beta}^{\varepsilon}\big) x^\alpha \frac{\partial}{\partial x^\varepsilon} = \big(C_{\alpha\beta}^{\gamma} C_{\gamma\theta}^{\varepsilon} + C_{\theta\alpha}^{\gamma} C_{\gamma\beta}^{\varepsilon}\big) x^\alpha \frac{\partial}{\partial x^\varepsilon}$$
$$= -C_{\beta\theta}^{\gamma} C_{\gamma\alpha}^{\varepsilon} x^\alpha \frac{\partial}{\partial x^\varepsilon} = C_{\beta\theta}^{\gamma} C_{\alpha\gamma}^{\varepsilon} x^\alpha \frac{\partial}{\partial x^\varepsilon} = C_{\beta\theta}^{\gamma} E_\gamma.$$
Further, using the fact that equation (2.2.4) entails the equality
$$[D_\beta, D_\theta] = C_{\beta\theta}^{\gamma} D_\gamma$$
and comparing Eqs. (2.2.5) and (2.2.6), one concludes that the mapping $\psi(E_\alpha) = D_\alpha$ is an isomorphism.

The Lie algebra of the operators $\{E\}$ is said to be a representation of the adjoint algebra $L_D.$
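Formula (2.2.6) is concrete enough to program. The sketch below takes the structure constants of the algebra $L_3$ of Example 2.2 (the Levi-Civita symbol), builds the operators $E_\beta$, and confirms that they reproduce the same structure constants, as Theorem 2.4 asserts.

```python
# Operators E_beta = C^gamma_{alpha beta} x^alpha d/dx^gamma for so-called
# rotation constants eps_{alpha beta gamma}, and the check
# [E_beta, E_theta] = C^gamma_{beta theta} E_gamma.
import sympy as sp

r = 3
xs = sp.symbols('x1 x2 x3')

def eps(i, j, k):
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

C = [[[eps(a, b, g) for g in range(r)] for b in range(r)] for a in range(r)]

# E[beta] as a coefficient vector: (E_beta)^gamma = C^gamma_{alpha beta} x^alpha.
E = [[sum(C[a][b][g]*xs[a] for a in range(r)) for g in range(r)]
     for b in range(r)]

def op(coeffs, f):
    return sum(c*sp.diff(f, v) for c, v in zip(coeffs, xs))

def comm(c1, c2):
    return [sp.expand(op(c1, c2[g]) - op(c2, c1[g])) for g in range(r)]

for b in range(r):
    for t in range(r):
        lhs = comm(E[b], E[t])
        rhs = [sp.expand(sum(C[b][t][g]*E[g][k] for g in range(r)))
               for k in range(r)]
        assert all(sp.simplify(l - rr) == 0 for l, rr in zip(lhs, rhs))
print("[E_beta, E_theta] = C^gamma_{beta theta} E_gamma")
```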
2.2.3 Inner automorphisms of a Lie algebra

Every operator from $\{E\}$ generates a one-parameter group $G_1$ of transformations of the $r$-dimensional space $E^r$ of points $(x).$ If
$$E = e^\beta E_\beta = e^\beta C_{\alpha\beta}^{\gamma} x^\alpha \frac{\partial}{\partial x^\gamma}\,,$$
then, according to Theorem 1.2, the transformations composing the corresponding $G_1$ can be obtained by integrating the following system of ordinary differential equations with initial conditions:
$$\frac{dx'^\gamma}{dt} = e^\beta C_{\alpha\beta}^{\gamma} x'^\alpha, \qquad x'^\gamma(0) = x^\gamma \quad (\gamma = 1, \ldots, r). \tag{2.2.7}$$
Since (2.2.7) is a system of linear homogeneous equations with constant coefficients, the solution of (2.2.7) is a linear form in the initial data $x^1, \ldots, x^r$ and can be written in the form
$$x'^\gamma = f_\sigma^{\gamma}(t)\, x^\sigma \quad (\gamma = 1, \ldots, r). \tag{2.2.8}$$
Equations (2.2.8) determine the desired transformations in $E^r$ composing the group $G_1.$ These transformations are linear. We will interpret them as transformations of the coordinates of a vector $u \in L_r$, i.e. as transformations of the vectors $u \in L_r$ given in the basis $\{u_\alpha\}.$ These transformations are denoted by the symbol $A_t$, so that $A_0$ is the identical transformation.

Theorem 2.5. The transformations $A_t$ are automorphisms of the Lie algebra $L_r.$
Proof. It is manifest that the mapping $u' = A_t u$ is one-to-one (since $f_\sigma^{\gamma}(0) = \delta_\sigma^{\gamma}$) and linear. It remains to verify that this mapping preserves the commutator. It is sufficient to prove this property for the basis vectors only, i.e. to show that
$$A_t[u_\alpha, u_\beta] = [A_t u_\alpha, A_t u_\beta].$$
One has
$$A_t[u_\alpha, u_\beta] = C_{\alpha\beta}^{\gamma} A_t u_\gamma = C_{\alpha\beta}^{\gamma} f_\gamma^{\sigma}(t)\, u_\sigma,$$
$$[A_t u_\alpha, A_t u_\beta] = [f_\alpha^{\gamma}(t) u_\gamma,\ f_\beta^{\theta}(t) u_\theta] = C_{\gamma\theta}^{\sigma} f_\alpha^{\gamma}(t) f_\beta^{\theta}(t)\, u_\sigma.$$
Setting
$$q_{\alpha\beta}^{\sigma} = C_{\alpha\beta}^{\gamma} f_\gamma^{\sigma}(t) - C_{\gamma\theta}^{\sigma} f_\alpha^{\gamma}(t) f_\beta^{\theta}(t),$$
one obtains
$$\frac{dq_{\alpha\beta}^{\sigma}}{dt} = e^\varepsilon C_{\tau\varepsilon}^{\sigma} q_{\alpha\beta}^{\tau}, \qquad q_{\alpha\beta}^{\sigma}(0) = 0,$$
upon simple but tedious calculations based on Eqs. (2.2.7) and (2.1.3). The uniqueness of the solution of this system of equations provides
$$q_{\alpha\beta}^{\sigma}(t) \equiv 0,$$
which proves the theorem.
The above automorphisms (2.2.8) are also referred to as inner automorphisms of the Lie algebra $L_r.$
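Inner automorphisms can be computed explicitly for the algebra $L_3$ of Example 2.2. Integrating (2.2.7) with $C_{\alpha\beta}^{\gamma}$ the Levi-Civita symbol gives $A_t = \exp(tM)$ with a constant matrix $M$; the sketch below checks numerically (in floating point) that $A_t$ preserves the commutator: $A_t(a \times b) = (A_t a) \times (A_t b).$

```python
# A_t = exp(tM) for M^gamma_alpha = e^beta C^gamma_{alpha beta} is an
# automorphism of (R^3, cross product); checked with floats.
import math

def eps(i, j, k):
    return {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
            (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.get((i, j, k), 0)

e = (0.3, -0.5, 0.2)                       # an arbitrary choice of e^beta
M = [[sum(eps(a, b, g)*e[b] for b in range(3)) for a in range(3)]
     for g in range(3)]                    # M[gamma][alpha]

def matexp(M, t, terms=30):
    """exp(tM) by its Taylor series (fine for small 3x3 matrices)."""
    A = [[float(i == j) for j in range(3)] for i in range(3)]  # identity
    P = [row[:] for row in A]
    for n in range(1, terms):
        P = [[sum(P[i][k]*M[k][j] for k in range(3)) for j in range(3)]
             for i in range(3)]
        A = [[A[i][j] + (t**n/math.factorial(n))*P[i][j] for j in range(3)]
             for i in range(3)]
    return A

def apply(A, v):
    return tuple(sum(A[i][j]*v[j] for j in range(3)) for i in range(3))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

At = matexp(M, 0.7)
a, b = (1.0, 2.0, -1.0), (0.5, -1.0, 3.0)
lhs = apply(At, cross(a, b))
rhs = cross(apply(At, a), apply(At, b))
assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
print("A_t is an automorphism (up to round-off)")
```

Here $M$ is antisymmetric, so $A_t$ is a rotation about the axis $e$, which is exactly why the cross product is preserved.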
2.3 Local Lie group

2.3.1 Coordinates in a group

One says that it is possible to introduce coordinates in the group $G$ if the elements $g \in G$ can be put into one-to-one correspondence with the points $a$ of a set $\Omega \subset E^r$, where $E^r$ is an $r$-dimensional Euclidean space. If the element of $G$ corresponding to the point $a \in \Omega$ is denoted by $g_a$, then the formula
$$g_a g_b = g_c$$
implies
$$c = \varphi(a, b),$$
where $\varphi(a, b)$ is a function defined on $\Omega \times \Omega.$ The function $\varphi(a, b)$ can be called the multiplication law of elements of the group $G.$ Sometimes the multiplication law can be determined not for the whole group $G$ but only for some subset $G_r \subset G$, which leads to the notion of a local group.
Definition 2.6. A subset $G_r$ of the group $G$ containing the unit element $g_0$ is called a local Lie group if the following conditions are satisfied:
(i) there is a one-to-one correspondence between the elements of $G_r$ and the points $a \in Q$ of an open sphere $Q \subset E^r$ with center $0$, such that $g_0 \leftrightarrow 0$;
(ii) there exists $\varepsilon > 0$ such that $g_a g_b \in G_r$ and $g_a^{-1} \in G_r$ for any points $a, b$ with $|a| < \varepsilon$, $|b| < \varepsilon$;
(iii) the multiplication law $c = \varphi(a, b)$ is a thrice continuously differentiable function of the coordinates of the points $a$ and $b.$

Remark 2.1. In general $G_r$ is not a group. Therefore the notion of a local Lie group can be defined without the supposition that $G_r$ is embedded in a group $G.$ It is then a set with an associative operation of multiplication, containing a unit element and an inverse for each element. However, these operations are defined not for all elements, but only for those that are “sufficiently close” to the unit element in the sense of Definition 2.6.
The properties of multiplication of elements lying in a sufficiently small neighborhood of the unit are what is investigated in local Lie groups. Therefore local Lie groups $G_r$ and $G'_r$ such that $G'_r \subset G_r$ are indistinguishable in this theory if they differ only by the size of the sphere $Q$ and by the value of $\varepsilon.$

Let the points $a \in Q$ have coordinates $a^\alpha$ $(\alpha = 1, \ldots, r).$ Then the multiplication law $\varphi(a, b)$ in the group multiplication
$$g_a g_b = g_c, \qquad c = \varphi(a, b),$$
is written in coordinates as follows:
$$c^\alpha = \varphi^\alpha(a, b) = \varphi^\alpha(a^1, \ldots, a^r;\ b^1, \ldots, b^r) \quad (\alpha = 1, \ldots, r). \tag{2.3.1}$$
The investigation of local Lie groups reduces to the investigation of properties of the functions $\varphi(a, b)$ in a neighborhood of the origin of coordinates. Of course, since the system of coordinates in $E^r$ can be chosen arbitrarily, we are interested only in those properties of $\varphi(a, b)$ that are independent of this choice. In what follows, the system of coordinates in $E^r$ is called a system of coordinates in $G_r$ and is denoted by the symbol $\Sigma_a.$ The transition from the system of coordinates $\Sigma_a$ to a system $\Sigma_{\bar a}$ is given by the following equations:
$$a^\alpha = f^\alpha(\bar a) = f^\alpha(\bar a^1, \ldots, \bar a^r) \quad (\alpha = 1, \ldots, r), \tag{2.3.2}$$
where the functions $f^\alpha(\bar a)$ are thrice continuously differentiable and satisfy the condition
$$\det\left(\frac{\partial f^\alpha}{\partial \bar a^\beta}\right)_{\bar a = 0} \neq 0. \tag{2.3.2'}$$
Let us point out some simple properties of the multiplication law. Since the unit element $g_0$ corresponds to the point $a = 0$ $(a^\alpha = 0,\ \alpha = 1, \ldots, r)$, the equations
$$g_0 g_0 = g_0, \qquad g_a g_0 = g_a, \qquad g_0 g_b = g_b$$
yield
$$\varphi(0, 0) = 0, \qquad \varphi(a, 0) = a, \qquad \varphi(0, b) = b. \tag{2.3.3}$$
By virtue of Eqs. (2.3.3), the Taylor expansion of $\varphi(a, b)$ yields
$$\varphi^\alpha(a, b) = a^\alpha + b^\alpha + r_{\beta\gamma}^{\alpha} a^\beta b^\gamma + o(|a|^2 + |b|^2). \tag{2.3.4}$$
The free index $\alpha$ appearing in this formula runs through the values $1, \ldots, r$ even though this is not written explicitly. Many formulae follow this rule in what follows.
2.3.2 Subgroups

Let us introduce the auxiliary functions
$$A_\beta^{\alpha}(a) = \left.\frac{\partial \varphi^\alpha(a, b)}{\partial b^\beta}\right|_{b=0}. \tag{2.3.5}$$
According to Eqs. (2.3.3), we have
$$A_\beta^{\alpha}(0) = \delta_\beta^{\alpha}.$$
Using these functions, one can write the expansion of $\varphi(a, b)$ as follows:
$$\varphi^\alpha(a, b) = a^\alpha + A_\beta^{\alpha}(a)\, b^\beta + O(|b|^2). \tag{2.3.6}$$
A family of elements $g(t) \in G_r$ depending on a real parameter $t$ is called a curve if the coordinates $a^\alpha(t)$ of these elements are continuously differentiable functions of $t$ for $|t| < \varepsilon.$ The vector $e$ with the coordinates
$$e^\alpha = \left.\frac{da^\alpha(t)}{dt}\right|_{t=0} \quad (\alpha = 1, \ldots, r) \tag{2.3.7}$$
is called the directing vector of the curve $g(t).$
Definition 2.7. The curve $g(t)$ is said to be a one-parameter subgroup (or, for the sake of brevity, a subgroup $G_1$) if
$$g(s)\, g(t) = g(s + t)$$
for all admissible values of $s, t.$

The property of the curve $g(t)$ to be a subgroup $G_1$ is written in coordinates by the equation
$$a^\alpha(t + s) = \varphi^\alpha(a(t), a(s)). \tag{2.3.8}$$
Theorem 2.6. A subgroup $G_1$ with the directing vector $e$ satisfies the system of equations
$$\frac{da^\alpha}{dt} = A_\beta^{\alpha}(a)\, e^\beta, \qquad a^\alpha(0) = 0. \tag{2.3.9}$$
Conversely, whatever the vector $e$ is, the solution of the system (2.3.9) determines a subgroup $G_1$ with the directing vector $e.$
Proof. Differentiating Eq. (2.3.8) with respect to $s$, setting $s = 0$ and invoking Eqs. (2.3.5) and (2.3.7), one obtains Eqs. (2.3.9). In order to prove the converse it suffices to verify that (2.3.8) holds, since equations (2.3.7) follow from Eqs. (2.3.9) and (2.3.3). One has
$$\frac{da^\alpha(t + s)}{ds} = A_\beta^{\alpha}(a(t + s))\, e^\beta, \qquad a^\alpha(t + s)\big|_{s=0} = a^\alpha(t)$$
by construction. Further, for the solution of the system (2.3.9) one has
$$a^\alpha(t) = e^\alpha t + O(|t|^2)$$
and
$$a^\alpha(s + u) = a^\alpha(s) + e^\beta A_\beta^{\alpha}(a(s))\, u + O(|u|^2) = a^\alpha(s) + A_\beta^{\alpha}(a(s))\, a^\beta(u) + O(|u|^2) = \varphi^\alpha(a(s), a(u)) + O(|u|^2),$$
where the latter equality follows from Eq. (2.3.6). Therefore, we have the equation
$$\varphi^\alpha(a(t), a(s + u)) = \varphi^\alpha(a(t), \varphi(a(s), a(u))) + O(|u|^2).$$
Note that the value $\varphi^\alpha(a(t), \varphi(a(s), a(u)))$ is a coordinate of the element
$$g(t)[g(s)g(u)] = [g(t)g(s)]g(u)$$
(since the multiplication is associative) and therefore it equals
$$\varphi^\alpha(\varphi(a(t), a(s)), a(u)).$$
Thus, differentiating the above equation with respect to $u$, setting $u = 0$ and invoking Eqs. (2.3.7), (2.3.5) and (2.3.3), one obtains
$$\frac{d\varphi^\alpha(a(t), a(s))}{ds} = A_\beta^{\alpha}(\varphi(a(t), a(s)))\, e^\beta, \qquad \varphi^\alpha(a(t), a(s))\big|_{s=0} = a^\alpha(t).$$
Hence, the left- and right-hand sides of Eq. (2.3.8) satisfy one and the same differential equation with the same initial condition, and therefore they coincide according to the theorem on uniqueness of the solution of the Cauchy problem. Theorem 2.6 is proved.
Corollary 2.1. For any vector $e$ there exists one and only one subgroup $G_1$ having this vector $e$ as its directing vector.
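Theorem 2.6 can be tried out on a concrete two-parameter local group. The group below is an illustration, not taken from the text: the transformations $x \mapsto e^{a^1} x + a^2$ of the line, whose composition gives the multiplication law $\varphi(a, b) = (a^1 + b^1,\ a^2 + e^{a^1} b^2)$ and hence $A(a) = \mathrm{diag}(1, e^{a^1}).$ The sketch integrates (2.3.9) numerically and checks the subgroup property $a(t+s) = \varphi(a(t), a(s)).$

```python
# One-parameter subgroups of the affine group of the line via (2.3.9).
import math

def phi(a, b):
    return (a[0] + b[0], a[1] + math.exp(a[0])*b[1])

def rhs(a, e):
    # da/dt = A(a) e with A(a) = [[1, 0], [0, exp(a1)]]
    return (e[0], math.exp(a[0])*e[1])

def subgroup(e, t, n=2000):
    """Solve (2.3.9) by the classical Runge-Kutta method from a(0) = 0."""
    a, h = (0.0, 0.0), t/n
    for _ in range(n):
        k1 = rhs(a, e)
        k2 = rhs(tuple(x + h/2*k for x, k in zip(a, k1)), e)
        k3 = rhs(tuple(x + h/2*k for x, k in zip(a, k2)), e)
        k4 = rhs(tuple(x + h*k for x, k in zip(a, k3)), e)
        a = tuple(x + h/6*(p + 2*q + 2*rr + s)
                  for x, p, q, rr, s in zip(a, k1, k2, k3, k4))
    return a

e = (0.8, -1.3)
t, s = 0.4, 0.9
left = subgroup(e, t + s)
right = phi(subgroup(e, t), subgroup(e, s))
assert all(abs(l - r) < 1e-8 for l, r in zip(left, right))
print("a(t+s) = phi(a(t), a(s)) holds for this group")
```

For this group (2.3.9) can also be solved in closed form, $a^1 = e^1 t$, $a^2 = e^2(e^{e^1 t} - 1)/e^1$, which agrees with the numerical curve.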
2.3.3 Canonical coordinates of the first kind

Definition 2.8. The system of coordinates $\Sigma_a$ in a local Lie group $G_r$ is referred to as a canonical system of coordinates of the first kind if the curve
$$g(t): \quad a^\alpha = e^\alpha t$$
is a subgroup $G_1$ for any vector $e.$

Theorem 2.7. One can introduce a canonical coordinate system $\Sigma_a$ of the first kind in any local Lie group $G_r.$
Proof. Let $a^\alpha = f^\alpha(e, t)$ be the solution of the system (2.3.9) depending on the vector $e.$ Since multiplication of the vector $e$ by $k$ is equivalent to multiplication of $t$ by $k$, the uniqueness of the solution entails that the function $f^\alpha$ has the property
$$f^\alpha(ke, t) = f^\alpha(e, kt).$$
Setting here $t = 1$ and replacing $k$ by $t$, one obtains
$$f^\alpha(e, t) \equiv f^\alpha(te, 1).$$
Then we introduce a new system of coordinates $\Sigma_{\bar a}$ in $G_r$ defined by the equation $a^\alpha = f^\alpha(\bar a, 1).$ First we verify the condition (2.3.2'):
$$\left.\frac{\partial a^\alpha}{\partial \bar a^\beta}\right|_{\bar a = 0} = \left.\frac{\partial f^\alpha(0, \ldots, \bar a^\beta, \ldots, 0;\ 1)}{\partial \bar a^\beta}\right|_{\bar a^\beta = 0} = \left.\frac{\partial f^\alpha(0, \ldots, t, \ldots, 0;\ 1)}{\partial t}\right|_{t=0}$$
$$= \left.\frac{\partial f^\alpha(0, \ldots, 1, \ldots, 0;\ t)}{\partial t}\right|_{t=0} = A_\beta^{\alpha}(0) = \delta_\beta^{\alpha}.$$
Finally we demonstrate that $\Sigma_{\bar a}$ is a canonical system of coordinates of the first kind. Indeed, if $\bar a^\alpha = e^\alpha t$, then
$$a^\alpha = f^\alpha(et, 1) = f^\alpha(e, t),$$
so that the curve $g(t)$ with the coordinates $a^\alpha = a^\alpha(t)$ is a subgroup $G_1.$ Theorem 2.7 is proved.
Corollary 2.2. One can draw a one-parameter subgroup through every element of a local Lie group $G_r$ sufficiently close to the unit element $g_0.$

In what follows, the symbol $a^{-1}$ denotes the point of $E^r$ corresponding to the element $g_a^{-1}$, so that $g_{a^{-1}} = g_a^{-1}.$ Let us introduce the auxiliary functions
$$V_\beta^{\alpha}(b) = \left.\frac{\partial \varphi^\alpha(a, b)}{\partial b^\beta}\right|_{a = b^{-1}}, \qquad V_\beta^{\alpha}(0) = \delta_\beta^{\alpha}, \tag{2.3.10}$$
where the second equation holds due to Eqs. (2.3.3).

Now let us formulate and prove the so-called fundamental theorems of Lie.
2.3.4 First fundamental theorem of Lie

Theorem 2.8. The functions $\varphi^\alpha(a, b)$ satisfy the system of equations
$$V_\alpha^{\gamma}(\varphi)\, \frac{\partial \varphi^\alpha}{\partial b^\beta} = V_\beta^{\gamma}(b), \qquad \varphi^\alpha(a, 0) = a^\alpha. \tag{2.3.11}$$
Conversely, given twice continuously differentiable functions $V_\beta^{\alpha}(b)$ with $V_\beta^{\alpha}(0) = \delta_\beta^{\alpha}$, for which the system (2.3.11) has a unique solution $\varphi(a, b)$ for any values $a^\alpha$, there exists a local Lie group $G_r$ with the multiplication law $\varphi(a, b)$ and with the given auxiliary functions $V_\beta^{\alpha}(b).$
Proof. Let us replace $b$ by $b + \Delta b$ in the formula $g_c = g_a g_b$ with $a$ unaltered. Then $c$ is replaced by $c + \Delta c.$ Multiplying the equation
$$g_{c + \Delta c} = g_a g_{b + \Delta b}$$
from the left by
$$g_c^{-1} = g_b^{-1} g_a^{-1},$$
one obtains
$$g_c^{-1} g_{c + \Delta c} = g_b^{-1} g_{b + \Delta b}.$$
The latter equality is written in terms of the functions $\varphi$ as
$$\varphi^\gamma(c^{-1}, c + \Delta c) = \varphi^\gamma(b^{-1}, b + \Delta b).$$
Noting that
$$\Delta c^\alpha = \frac{\partial \varphi^\alpha}{\partial b^\beta}\, \Delta b^\beta + O(|\Delta b|^2)$$
and
$$\varphi^\alpha(b^{-1}, b + \Delta b) = V_\beta^{\alpha}(b)\, \Delta b^\beta + O(|\Delta b|^2),$$
one transforms the above equality into
$$V_\alpha^{\gamma}(\varphi)\, \frac{\partial \varphi^\alpha}{\partial b^\beta}\, \Delta b^\beta = V_\beta^{\gamma}(b)\, \Delta b^\beta + O(|\Delta b|^2),$$
whence equations (2.3.11) follow.
Let us now prove the converse statement. The assumptions about the functions $V_\beta^{\alpha}(b)$ guarantee that the solution $\varphi^\alpha(a, b)$ of the system (2.3.11) is determined and is thrice continuously differentiable with respect to the variables $a^\alpha, b^\beta$ from a certain neighborhood $\omega$ of the origin of coordinates in the space $E^r$ of the points $a.$ Further considerations refer to this neighborhood $\omega$ without special notice.

We define the operation of multiplication $a \cdot b$ for points of $E^r$ by the formula $a \cdot b = \varphi(a, b)$ and prove that $E^r$ (specifically, some sphere $Q \subset E^r$) is a local Lie group with this multiplication. Since $\varphi(a, b)$ is smooth, our operation of multiplication is determined in a sufficiently small vicinity of the origin of coordinates. Therefore it remains only to prove the validity of the group axioms.

First, let us establish a relation between the functions $V_\beta^{\alpha}$ given in Eqs. (2.3.11) and the functions $A_\beta^{\alpha}$ determined by the solution $\varphi(a, b)$ of the system (2.3.11) according to the formulae (2.3.5). This relation is given by
$$V_\alpha^{\gamma}(a)\, A_\beta^{\alpha}(a) = \delta_\beta^{\gamma}, \tag{2.3.12}$$
so that the matrix $(V_\beta^{\alpha})$ is the inverse of the matrix $(A_\beta^{\alpha}).$ Equations (2.3.12) follow directly from Eqs. (2.3.11) upon setting $b = 0.$ By virtue of (2.3.12), equations (2.3.11) can be rewritten in the equivalent form
$$\frac{\partial \varphi^\alpha}{\partial b^\beta} = A_\gamma^{\alpha}(\varphi)\, V_\beta^{\gamma}(b), \qquad \varphi^\alpha(a, 0) = a^\alpha. \tag{2.3.13}$$
Let us prove that the introduced multiplication is associative. If we set
$$u = \varphi(a, b), \qquad v = \varphi(b, c), \qquad w = \varphi(u, c), \qquad \bar w = \varphi(a, v),$$
we have to prove the equality $w = \bar w.$ Turning to coordinates and using Eqs. (2.3.13) and (2.3.12), one obtains
$$\frac{\partial w^\alpha}{\partial c^\beta} = A_\gamma^{\alpha}(w)\, V_\beta^{\gamma}(c), \qquad w^\alpha\big|_{c=0} = u^\alpha,$$
$$\frac{\partial \bar w^\alpha}{\partial c^\beta} = \frac{\partial \varphi^\alpha(a, v)}{\partial v^\sigma}\, \frac{\partial v^\sigma}{\partial c^\beta} = A_\tau^{\alpha}(\bar w)\, V_\sigma^{\tau}(v)\, A_\gamma^{\sigma}(v)\, V_\beta^{\gamma}(c) = A_\gamma^{\alpha}(\bar w)\, V_\beta^{\gamma}(c), \qquad \bar w^\alpha\big|_{c=0} = \varphi^\alpha(a, b) = u^\alpha.$$
One can see that $w^\alpha$ and $\bar w^\alpha$ satisfy one and the same system of differential equations of the form (2.3.13) with the same initial conditions. By virtue of the uniqueness of the solution it follows that $w^\alpha = \bar w^\alpha.$
Further, it follows from $\varphi(a, 0) = a$ that the point $0$ is a right unit. Letting $a = 0$ in Eqs. (2.3.13), one can see that the solution is $\varphi^\alpha = b^\alpha$, so that $\varphi(0, b) = b$, i.e. the point $0$ is a left unit as well.

Finally, the system of equations $\varphi^\alpha(a, b) = 0$ $(\alpha = 1, \ldots, r)$ determines functions $b^\alpha = b^\alpha(a)$ in a vicinity of the point $0.$ Setting $(a^{-1})^\alpha = b^\alpha(a)$, one obtains the inverse $a^{-1}$ of the element $a.$
To complete the proof one has only to verify that the functions
$$\bar V_\beta^{\alpha}(b) = \left.\frac{\partial \varphi^\alpha(a, b)}{\partial b^\beta}\right|_{a = b^{-1}}$$
coincide with the given functions $V_\beta^{\alpha}(b).$ Setting $a = b^{-1}$ in Eqs. (2.3.11) and taking into account the equations
$$\varphi(b^{-1}, b) = 0, \qquad V_\beta^{\alpha}(0) = \delta_\beta^{\alpha},$$
one obtains
$$V_\beta^{\gamma}(b) = \delta_\alpha^{\gamma} \left.\frac{\partial \varphi^\alpha}{\partial b^\beta}\right|_{a = b^{-1}} = \left.\frac{\partial \varphi^\gamma}{\partial b^\beta}\right|_{a = b^{-1}} = \bar V_\beta^{\gamma}(b).$$
Theorem 2.8 is proved.
In order to proceed we have to tackle the problem of solvability of systems of the form (2.3.13). In general, let us discuss the problem of determining functions $u^i = u^i(x) = u^i(x^1, \ldots, x^r)$ $(i = 1, \ldots, m)$ by solving a system of the form
$$\frac{\partial u^i}{\partial x^\alpha} = f_\alpha^{i}(x, u), \qquad u^i(x_0) = u_0^i \quad (i = 1, \ldots, m;\ \alpha = 1, \ldots, r). \tag{2.3.14}$$

Definition 2.9. The system (2.3.14) is said to be totally integrable if it has a solution for any initial data $x_0, u_0.$
Lemma 2.1. In order for the system (2.3.14) to be totally integrable, it is necessary and sufficient that the equations
$$\frac{\partial f_\alpha^{i}}{\partial x^\beta} + \frac{\partial f_\alpha^{i}}{\partial u^j}\, f_\beta^{j} = \frac{\partial f_\beta^{i}}{\partial x^\alpha} + \frac{\partial f_\beta^{i}}{\partial u^j}\, f_\alpha^{j} \tag{2.3.15}$$
hold identically with respect to the independent variables $x, u.$

Proof. Necessity. Calculating the derivative
$$\frac{\partial^2 u^i}{\partial x^\alpha \partial x^\beta}$$
in two ways, one arrives at Eqs. (2.3.15) on the solution and, in particular, at the initial point $x_0, u_0.$ Since the point $x_0, u_0$ is arbitrary, equations (2.3.15) are identities with respect to the independent variables $x, u.$
Sufficiency. Let us determine the functions $v^i = v^i(t, e)$ as the solution of the system of ordinary differential equations ($e$ is a constant vector)
$$\frac{dv^i}{dt} = e^\alpha f_\alpha^{i}(te, v), \qquad v^i(0) = u_0^i. \tag{2.3.16}$$
For the sake of simplicity, and without loss of generality, we prove that there exists a solution of the system (2.3.14) with the initial data at the point $x_0 = 0.$ We set $u^i(x) = v^i(1, x)$ and prove that this is a solution of the problem (2.3.14). To this end we note that the following equation is satisfied:
$$v^i(t, e) = v^i(1, te).$$
It is derived in the same way as the similar property of the function $f^\alpha(e, t)$ in the proof of Theorem 2.7. We verify Eqs. (2.3.14) by demonstrating that
$$R_\alpha^{i}(x) = \frac{\partial u^i}{\partial x^\alpha} - f_\alpha^{i}(x, u) \equiv 0.$$
To this end let us introduce the functions
$$S_\alpha^{i}(t) = t\, R_\alpha^{i}(et).$$
Differentiating with respect to $t$, using Eqs. (2.3.16), the identities (2.3.15) and the definition of $S_\alpha^{i}$, one obtains
$$\frac{dS_\alpha^{i}}{dt} = e^\beta\, \frac{\partial f_\beta^{i}}{\partial u^j}\, S_\alpha^{j}, \qquad S_\alpha^{i}(0) = 0.$$
Thus, the $S_\alpha^{i}(t)$ satisfy a system of linear homogeneous ordinary differential equations with zero initial values. Therefore $S_\alpha^{i}(t) \equiv 0$ and Lemma 2.1 is proved.
Note that the system (2.3.14) has at most one solution for sufficiently smooth right-hand sides $f_\alpha^{i}(x, u)$, independently of the conditions (2.3.15), which govern existence.
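The compatibility conditions (2.3.15) are mechanical to check. A minimal sketch for $m = 1$, $r = 2$ (the example systems are illustrative, not from the text): $u_x = u$, $u_y = u$ is totally integrable, with solution $u = u_0 e^{x+y}$, while $u_x = u$, $u_y = x$ fails the test and so has no solution in general.

```python
# Checking the total-integrability conditions (2.3.15) with sympy.
import sympy as sp

x, y, u = sp.symbols('x y u')

def compatible(f1, f2):
    """Check (2.3.15) for the pair u_x = f1(x, y, u), u_y = f2(x, y, u)."""
    lhs = sp.diff(f1, y) + sp.diff(f1, u)*f2
    rhs = sp.diff(f2, x) + sp.diff(f2, u)*f1
    return sp.simplify(lhs - rhs) == 0

print(compatible(u, u), compatible(u, x))  # True False
```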
2.3.5 Second fundamental theorem of Lie

The following theorem has the same relation to the system (2.3.13) as Lemma 2.1 has to the system (2.3.14).

Theorem 2.9. The system (2.3.13) is completely integrable if and only if the functions $V_\beta^{\alpha}(b)$ satisfy the system of equations
$$\frac{\partial V_\beta^{\alpha}}{\partial b^\gamma} - \frac{\partial V_\gamma^{\alpha}}{\partial b^\beta} = C_{\sigma\tau}^{\alpha}\, V_\beta^{\sigma} V_\gamma^{\tau}, \qquad V_\beta^{\alpha}(0) = \delta_\beta^{\alpha}, \tag{2.3.17}$$
where the $C_{\beta\gamma}^{\alpha}$ $(\alpha, \beta, \gamma = 1, \ldots, r)$ are some constants.
Proof. If the system (2.3.13) is completely integrable, then according to Lemma 2.1 the right-hand sides of Eqs. (2.3.13) must satisfy equations of the form (2.3.15). Due to the special form of these right-hand sides, one can reduce the corresponding Eqs. (2.3.15), upon simple transformations where only the relations (2.3.12) are used, to the form
$$A_\sigma^{\beta}(\varphi) A_\tau^{\gamma}(\varphi) \left[\frac{\partial V_\beta^{\alpha}(\varphi)}{\partial \varphi^\gamma} - \frac{\partial V_\gamma^{\alpha}(\varphi)}{\partial \varphi^\beta}\right] = A_\sigma^{\beta}(b) A_\tau^{\gamma}(b) \left[\frac{\partial V_\beta^{\alpha}(b)}{\partial b^\gamma} - \frac{\partial V_\gamma^{\alpha}(b)}{\partial b^\beta}\right].$$
Since the variables $b$ and $\varphi = \varphi(a, b)$ are independent, the resulting equality can be an identity with respect to $b, \varphi$ only if the common value of both expressions is a constant:
$$A_\sigma^{\beta}(b) A_\tau^{\gamma}(b) \left[\frac{\partial V_\beta^{\alpha}(b)}{\partial b^\gamma} - \frac{\partial V_\gamma^{\alpha}(b)}{\partial b^\beta}\right] = C_{\sigma\tau}^{\alpha} = \mathrm{const}.$$
Multiplying both sides by $V_\beta^{\sigma} V_\gamma^{\tau}$, one obtains Eqs. (2.3.17), which are thereby equivalent to the conditions of complete integrability of the system (2.3.13). Theorem 2.9 is proved.
Definition 2.10. Equations (2.3.17) are referred to as the Maurer-Cartan equations. The constants $C_{\beta\gamma}^{\alpha}$ in Eqs. (2.3.17) are called the structure constants of the local Lie group $G_r.$

Note that the Maurer-Cartan equations can be transformed into equivalent equations for the auxiliary functions $A_\beta^{\alpha}(a).$ Namely, multiplying Eqs. (2.3.17) by $A_\alpha^{\nu} A_\pi^{\beta} A_\rho^{\gamma}$ and using the identities (2.3.12), one obtains
$$A_\beta^{\sigma}\, \frac{\partial A_\gamma^{\alpha}}{\partial a^\sigma} - A_\gamma^{\sigma}\, \frac{\partial A_\beta^{\alpha}}{\partial a^\sigma} = C_{\beta\gamma}^{\sigma} A_\sigma^{\alpha}. \tag{2.3.18}$$
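The Maurer-Cartan equations can be verified symbolically for an illustrative two-parameter group (not taken from the text): the transformations $x \mapsto e^{a^1} x + a^2$ of the line, with multiplication law $\varphi(a, b) = (a^1 + b^1,\ a^2 + e^{a^1} b^2).$ Here $V(b) = \mathrm{diag}(1, e^{-b^1})$, and (2.3.17) forces $C_{12}^{2} = 1 = -C_{21}^{2}$, all other structure constants being zero.

```python
# Maurer-Cartan equations (2.3.17) for the affine group of the line.
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')
phi = (a1 + b1, a2 + sp.exp(a1)*b2)

# b^{-1}: solve phi(a, b) = 0 for a.
binv = (-b1, -sp.exp(-b1)*b2)

# V^alpha_beta(b) = d phi^alpha / d b^beta at a = b^{-1}, formula (2.3.10).
b = (b1, b2)
V = [[sp.simplify(sp.diff(phi[al], b[be]).subs({a1: binv[0], a2: binv[1]}))
      for be in range(2)] for al in range(2)]
assert V == [[1, 0], [0, sp.exp(-b1)]]

# Structure constants: the only nonzero ones are C^2_{12} = 1, C^2_{21} = -1.
C = [[[0, 0], [0, 0]], [[0, 1], [-1, 0]]]   # C[alpha][sigma][tau]

for al in range(2):
    for be in range(2):
        for ga in range(2):
            lhs = sp.diff(V[al][be], b[ga]) - sp.diff(V[al][ga], b[be])
            rhs = sum(C[al][s][t]*V[s][be]*V[t][ga]
                      for s in range(2) for t in range(2))
            assert sp.simplify(lhs - rhs) == 0
print("Maurer-Cartan equations verified; C^2_{12} = 1")
```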

2.3.6 Properties of canonical coordinate systems of the first kind

In what follows one can see that the local Lie group Gr is completely determined
by the set of its structure constants (the third Lie theorem). First, let us deduce two
properties of a canonical coordinate system of the first kind.
Lemma 2.2. The necessary and sufficient condition for the system of coordinates ∑a to be canonical of the first kind is that the functions $V^\alpha_\beta$ satisfy the relations
$$b^\beta\, V^\alpha_\beta(b) = b^\alpha \qquad (\alpha = 1, \ldots, r). \quad (2.3.19)$$

Proof. Necessity. Let ∑a be canonical of the first kind. Then the curve $a^\alpha = e^\alpha t$ $(\alpha = 1, \ldots, r)$ is a subgroup $G_1$ for any vector $e = (e^1, \ldots, e^r)$ and hence satisfies Eqs. (2.3.9). Substituting $a^\alpha = e^\alpha t$ there, one obtains
$$e^\alpha = A^\alpha_\beta(et)\, e^\beta.$$

When t = 1 and e = b the above equation yields
$$b^\alpha = A^\alpha_\beta(b)\, b^\beta.$$
Whence, multiplying by $V^\gamma_\alpha(b)$ and applying Eq. (2.3.12), we get Eqs. (2.3.19).
Sufficiency. Equations (2.3.19) entail that
$$a^\alpha = A^\alpha_\beta(a)\, a^\beta.$$
Setting $a^\alpha = e^\alpha t$ one obtains
$$a^\alpha = A^\alpha_\beta(a)\, e^\beta t,$$
or
$$\frac{da^\alpha}{dt} = A^\alpha_\beta(a)\, e^\beta.$$
By virtue of Theorem 2.6, it follows that the curve aα = eα t is a subgroup G1 :
Therefore, according to Definition 2.8, the system of coordinates ∑a is canonical of
the first kind. The lemma is proved.
Lemma 2.3. In a canonical coordinate system of the first kind, the functions Vβα (b)
are determined uniquely by the structure constants.
2.3 Local Lie group 71

Proof. Let
Wβα (t) = tVβα (te):
It is sufficient to prove that Wβα (t) is determined uniquely by Cβαγ ; because

Vβα (e) = Wβα (1):

Note that equations (2.3.19) imply the relations
$$b^\beta\, \frac{\partial V^\alpha_\beta}{\partial b^\gamma} = \delta^\alpha_\gamma - V^\alpha_\gamma.$$
Therefore, by virtue of Eqs. (2.3.17),
$$\frac{dW^\alpha_\beta}{dt} = V^\alpha_\beta + t e^\gamma\, \frac{\partial V^\alpha_\beta(te)}{\partial b^\gamma} = V^\alpha_\beta + t e^\gamma\, \frac{\partial V^\alpha_\gamma}{\partial b^\beta} + t e^\gamma\, C^\alpha_{\sigma\tau}\, V^\sigma_\beta V^\tau_\gamma$$
$$= V^\alpha_\beta + \delta^\alpha_\beta - V^\alpha_\beta + C^\alpha_{\sigma\tau}\, t e^\tau V^\sigma_\beta = \delta^\alpha_\beta + C^\alpha_{\sigma\tau}\, e^\tau W^\sigma_\beta.$$

Thus, the functions $W^\alpha_\beta$ satisfy the system of equations
$$\frac{dW^\alpha_\beta}{dt} = \delta^\alpha_\beta + C^\alpha_{\sigma\tau}\, e^\tau W^\sigma_\beta, \qquad W^\alpha_\beta(0) = 0. \quad (2.3.20)$$
Since the solution of the system (2.3.20) is unique, it is completely determined if
the set of the structure constants Cβαγ is given. Thus, Lemma 2.3 is proved.
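The system (2.3.20) is linear with constant coefficients and can be checked numerically. The sketch below is illustrative only: the structure constants of so(3) and the vector e are arbitrary choices, and the closed-form series $W(t) = \sum_{k\ge 0} M^k t^{k+1}/(k+1)!$ with $M^\alpha_\sigma = C^\alpha_{\sigma\tau} e^\tau$ is a derived formula, not from the text. It integrates (2.3.20) by Runge-Kutta and confirms that the solution is built from the structure constants alone:

```python
import math
import numpy as np

# Structure constants of so(3), C^a_{st} = epsilon_{ast} (an arbitrary
# illustrative choice), stored as C[a, s, t].
C = np.zeros((3, 3, 3))
for a, s, t in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    C[a, s, t], C[a, t, s] = 1.0, -1.0

e = np.array([0.3, -0.5, 0.2])           # a fixed vector e
M = np.einsum('ast,t->as', C, e)         # M^a_s = C^a_{st} e^t

# Classical RK4 for dW/dt = I + M W, W(0) = 0, integrated up to t = 1.
def rhs(W):
    return np.eye(3) + M @ W

W, h = np.zeros((3, 3)), 1e-3
for _ in range(1000):
    k1 = rhs(W)
    k2 = rhs(W + 0.5 * h * k1)
    k3 = rhs(W + 0.5 * h * k2)
    k4 = rhs(W + h * k3)
    W = W + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Series solution W(1) = sum_k M^k / (k+1)!, built from C and e alone.
S, term = np.zeros((3, 3)), np.eye(3)
for k in range(30):
    S = S + term / math.factorial(k + 1)
    term = term @ M
print(np.max(np.abs(W - S)))             # agreement to integration accuracy
```

Both computations use only $C^\alpha_{\beta\gamma}$ and e, in line with the uniqueness argument of Lemma 2.3.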

2.3.7 Third fundamental theorem of Lie

Theorem 2.10. The structure constants $C^\alpha_{\beta\gamma}$ of the local Lie group $G_r$ satisfy the Jacobi relations
$$C^\alpha_{\beta\gamma} = -C^\alpha_{\gamma\beta}, \qquad C^\sigma_{\alpha\beta} C^\tau_{\sigma\gamma} + C^\sigma_{\beta\gamma} C^\tau_{\sigma\alpha} + C^\sigma_{\gamma\alpha} C^\tau_{\sigma\beta} = 0. \quad (2.3.21)$$
Conversely, given any set of constants $C^\alpha_{\beta\gamma}$ satisfying the relations (2.3.21), there exists a local Lie group $G_r$ whose structure constants coincide with the given $C^\alpha_{\beta\gamma}$.

Proof. We set b = 0 in Eqs. (2.3.17). Since $V^\alpha_\beta(0) = \delta^\alpha_\beta$, one obtains
$$C^\alpha_{\beta\gamma} = \left( \frac{\partial V^\alpha_\beta}{\partial b^\gamma} - \frac{\partial V^\alpha_\gamma}{\partial b^\beta} \right)_{b=0}, \quad (2.3.22)$$

whence the first Jacobi relation (2.3.21) follows. Further, applying the operator $\partial/\partial b^\varepsilon$ to Eqs. (2.3.17) and using these equations once more, one obtains
$$\frac{\partial^2 V^\alpha_\beta}{\partial b^\gamma \partial b^\varepsilon} - \frac{\partial^2 V^\alpha_\gamma}{\partial b^\beta \partial b^\varepsilon} = C^\alpha_{\sigma\tau}\, \frac{\partial V^\sigma_\beta}{\partial b^\varepsilon}\, V^\tau_\gamma + C^\alpha_{\sigma\tau}\, V^\sigma_\beta \left( \frac{\partial V^\tau_\varepsilon}{\partial b^\gamma} + C^\tau_{\lambda\mu}\, V^\lambda_\gamma V^\mu_\varepsilon \right)$$
$$= C^\alpha_{\sigma\tau}\, V^\tau_\gamma\, \frac{\partial V^\sigma_\beta}{\partial b^\varepsilon} - C^\alpha_{\sigma\tau}\, V^\tau_\beta\, \frac{\partial V^\sigma_\varepsilon}{\partial b^\gamma} + C^\alpha_{\sigma\tau} C^\tau_{\lambda\mu}\, V^\sigma_\beta V^\lambda_\gamma V^\mu_\varepsilon.$$
Whence, setting b = 0 one obtains the relation
$$C^\sigma_{\varepsilon\gamma}\, C^\alpha_{\sigma\beta} = \omega^\alpha_{\varepsilon\gamma\beta} - \omega^\alpha_{\beta\varepsilon\gamma},$$
where
$$\omega^\alpha_{\varepsilon\gamma\beta} = \left( \frac{\partial^2 V^\alpha_\beta}{\partial b^\varepsilon \partial b^\gamma} + C^\alpha_{\sigma\beta}\, \frac{\partial V^\sigma_\varepsilon}{\partial b^\gamma} \right)_{b=0}.$$
Making the circular permutation of the indices $\varepsilon, \gamma, \beta$ twice in the above relation, one obtains two more similar relations, namely
$$C^\sigma_{\beta\varepsilon}\, C^\alpha_{\sigma\gamma} = \omega^\alpha_{\beta\varepsilon\gamma} - \omega^\alpha_{\gamma\beta\varepsilon}, \qquad C^\sigma_{\gamma\beta}\, C^\alpha_{\sigma\varepsilon} = \omega^\alpha_{\gamma\beta\varepsilon} - \omega^\alpha_{\varepsilon\gamma\beta}.$$
Summing up the three obtained relations, one arrives at the second Jacobi relation
(2.3.21).
Conversely, consider given $C^\alpha_{\beta\gamma}$ satisfying Eqs. (2.3.21). Then one can construct the system of Eqs. (2.3.20), whose solution is obviously unique and depends on the choice of the vector e. Let the solution be
$$W^\alpha_\beta = W^\alpha_\beta(t, e).$$

Let us set
$$V^\alpha_\beta(b) = W^\alpha_\beta(1, b)$$
and demonstrate that these $V^\alpha_\beta$ together with the given $C^\alpha_{\beta\gamma}$ satisfy the Maurer-Cartan equations (2.3.17). To this end, we introduce the functions
$$h^\alpha_{\beta\gamma}(t) = \frac{\partial W^\alpha_\beta}{\partial e^\gamma} - \frac{\partial W^\alpha_\gamma}{\partial e^\beta} - C^\alpha_{\sigma\tau}\, W^\sigma_\beta W^\tau_\gamma.$$
It is clear that $h^\alpha_{\beta\gamma}(0) = 0$. Differentiating the above functions with respect to t and using Eqs. (2.3.20) and the relations (2.3.21), one obtains
$$\frac{dh^\alpha_{\beta\gamma}}{dt} = C^\alpha_{\sigma\tau}\, e^\tau h^\sigma_{\beta\gamma}.$$
Thus, the functions $h^\alpha_{\beta\gamma}$ satisfy a system of linear homogeneous differential equations with zero initial conditions. Therefore,
$$h^\alpha_{\beta\gamma}(t) \equiv 0,$$

and, in particular,
$$h^\alpha_{\beta\gamma}(1) \equiv 0.$$
The latter are Eqs. (2.3.17) for the functions $V^\alpha_\beta(e)$. Letting e = 0 in Eqs. (2.3.20) one obtains the equations
$$\frac{dW^\alpha_\beta}{dt} = \delta^\alpha_\beta,$$
whose solution is
$$W^\alpha_\beta(t, 0) = \delta^\alpha_\beta\, t.$$
Hence
$$V^\alpha_\beta(0) = \delta^\alpha_\beta.$$
Using Theorem 2.9 we conclude that the system of Eqs. (2.3.13) is completely
integrable with the obtained Vβα (b): According to Theorem 2.8, there exists a lo-
cal Lie group Gr where these Vβα are auxiliary functions. The given Cβαγ are the
structure constants for the constructed Gr because they are expressed through Vβα
by the formulae (2.3.22) following from Eqs. (2.3.17). This completes the proof of
Theorem 2.10.
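The relations (2.3.21) are straightforward to verify mechanically for any proposed array of structure constants. The sketch below is illustrative: the helper name `jacobi_defect` and both test algebras are choices made here, not taken from the text.

```python
import numpy as np
from itertools import product

def jacobi_defect(C):
    """Largest violation of the relations (2.3.21): antisymmetry in the lower
    indices, and the cyclic sum
    C^s_{ab} C^t_{sc} + C^s_{bc} C^t_{sa} + C^s_{ca} C^t_{sb} = 0."""
    r = C.shape[0]
    defect = np.max(np.abs(C + C.transpose(0, 2, 1)))
    for t, a, b, c in product(range(r), repeat=4):
        cyc = (C[:, a, b] @ C[t, :, c] + C[:, b, c] @ C[t, :, a]
               + C[:, c, a] @ C[t, :, b])
        defect = max(defect, abs(cyc))
    return defect

# so(3): C^a_{bc} = epsilon_{abc} -- a genuine Lie algebra.
C_so3 = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    C_so3[a, b, c], C_so3[a, c, b] = 1.0, -1.0

# An antisymmetric array that is NOT a set of structure constants:
# [e1,e2] = e3, [e1,e3] = e2, [e2,e3] = e2 violates the Jacobi identity.
C_bad = np.zeros((3, 3, 3))
for a, b, c in [(2, 0, 1), (1, 0, 2), (1, 1, 2)]:
    C_bad[a, b, c], C_bad[a, c, b] = 1.0, -1.0

print(jacobi_defect(C_so3), jacobi_defect(C_bad))   # 0.0 and a positive value
```

By Theorem 2.10, the first array determines a local Lie group while the second cannot.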

2.3.8 Lie algebra of a local Lie group

The proof of Theorems 2.8-2.10 suggests also an algorithm for constructing a group $G_r$ from its structure constants. It consists in integration of the system (2.3.20), construction of the functions $V^\alpha_\beta$, and solving the system (2.3.11). The algorithm shows that in order to restore $G_r$ from its structure constants one has to integrate ordinary differential equations at most.
The algorithm has two more important peculiarities. First, it furnishes the group
Gr in a definite system of coordinates ∑a : This ∑a appears to be a canonical system
of the first kind. Lemma 2.3 shows that in order to prove this fact it suffices to verify
that the resulting $V^\alpha_\beta(b)$ satisfy the relations (2.3.19). Indeed, if we set
$$u^\alpha(t) = e^\beta\, W^\alpha_\beta(t, e) - t e^\alpha,$$
then using Eqs. (2.3.20) we readily obtain the equations
$$\frac{du^\alpha}{dt} = C^\alpha_{\sigma\tau}\, e^\tau u^\sigma, \qquad u^\alpha(0) = 0.$$
Uniqueness of the solution $u^\alpha(t)$ of the above initial value problem gives $u^\alpha(t) \equiv 0$, which provides that (2.3.19) holds by construction of the functions $V^\alpha_\beta$.
The other peculiarity is that the resulting coordinates ∑a are analytic. In other
words, the multiplication law ϕ (a; b) is a holomorphic function of the variables

$a^\alpha, b^\beta$ at the point a = b = 0. Indeed, since the functions $W^\alpha_\beta(t, e)$ solve the (nonhomogeneous) system of linear equations (2.3.20) with constant (with respect to t) coefficients, they are determined and analytic with respect to t for $-\infty < t < +\infty$. Further, the right-hand sides in Eqs. (2.3.20) are analytic functions of the coordinates $e^1, \ldots, e^r$ of the vector e. Therefore, the solution is holomorphic with respect to e, at least in the vicinity of the point e = 0. Due to the equation

Vβα (b) = Wβα (1; b)

the functions Vβα (b); and hence Aβα (b) are holomorphic. Finally, the solution of
the completely integrable system (2.3.11) with the holomorphic right-hand sides
is holomorphic, which was to be proved. Thus, analytic coordinates exist in any
local Lie group Gr :
In fact, the three fundamental theorems of Lie show that investigation of groups $G_r$ is reduced to investigation of third-order tensors $C^\alpha_{\beta\gamma}$. Of course, this reduction takes place up to local isomorphism of groups $G_r$. Two local Lie groups $G_r$ and $\bar G_r$ are said to be locally isomorphic if elements of some vicinities of the unit element in $G_r$ and $\bar G_r$ can be set into one-to-one correspondence $g_a \leftrightarrow \bar g_a$ so that
$$g_0 \leftrightarrow \bar g_0, \qquad g_a g_b \leftrightarrow \bar g_a \bar g_b, \qquad g_a^{-1} \leftrightarrow \bar g_a^{-1}.$$
Obviously, it is necessary and sufficient for the local isomorphism of $G_r$ and $\bar G_r$ that there exist systems of coordinates ∑a and ∑̄a in $G_r$ and $\bar G_r$, respectively, in which the multiplication laws for elements of $G_r$ and $\bar G_r$ coincide:
$$\varphi(a, b) = \bar\varphi(a, b).$$
The criterion of local isomorphism can be also formulated in terms of the structure constants $C^\alpha_{\beta\gamma}$ and $\bar C^\alpha_{\beta\gamma}$. Namely, if $\bar C^\alpha_{\beta\gamma} = C^\alpha_{\beta\gamma}$, then $G_r$ and $\bar G_r$ are locally isomorphic. Conversely, in locally isomorphic $G_r$ and $\bar G_r$ one can choose systems of coordinates so that the structure constants calculated in them are equal to each other, $C^\alpha_{\beta\gamma} = \bar C^\alpha_{\beta\gamma}$.
Comparing the property of a local Lie group Gr to be determined by the set of its
structure constants with the corresponding property of Lie algebras (Theorem 2.1),
as well as the properties of these constants, one arrives at the following important
notion.
Definition 2.11. A Lie algebra $L_r$ is called the Lie algebra of a local Lie group $G_r$ if there exist a basis in $L_r$ and a system of coordinates in $G_r$ such that the structure constants $C^\alpha_{\beta\gamma}$ of the algebra $L_r$ and $\bar C^\alpha_{\beta\gamma}$ of the group $G_r$ coincide, i.e., $C^\alpha_{\beta\gamma} = \bar C^\alpha_{\beta\gamma}$.
Since any set of constants Cβαγ satisfying Eqs. (2.3.21) determines a Lie algebra
Lr and a local Lie group Gr ; Definition 2.11 establishes a one-to-one correspondence
(up to isomorphism) between Lie algebras Lr and local Lie groups Gr :
Let us discuss how to realize the Lie algebra Lr of a given local Lie group Gr as a
Lie algebra of operators. Consider a space E r of points a with determined auxiliary

functions $A^\alpha_\beta(a)$ of the given $G_r$ and construct the operators
$$X_\alpha = A^\sigma_\alpha(a)\, \frac{\partial}{\partial a^\sigma} \qquad (\alpha = 1, \ldots, r). \quad (2.3.23)$$
Computing the commutators of the operators (2.3.23) according to Definition 1.7:
$$[X_\alpha, X_\beta] = \left( A^\tau_\alpha\, \frac{\partial A^\sigma_\beta}{\partial a^\tau} - A^\tau_\beta\, \frac{\partial A^\sigma_\alpha}{\partial a^\tau} \right) \frac{\partial}{\partial a^\sigma}$$
and invoking the Maurer-Cartan equations in the form (2.3.18), one obtains
$$[X_\alpha, X_\beta] = C^\sigma_{\alpha\beta}\, X_\sigma,$$
where $C^\sigma_{\alpha\beta}$ are the structure constants of the given $G_r$. The resulting relations demonstrate that the linear span $\{X\}$ of the operators (2.3.23),
$$X = e^\alpha X_\alpha,$$

is a Lie algebra of operators, namely the algebra Lr ; whose structure constants are
equal to the numbers Cβαγ in the basis (2.3.23). The linear independence of the oper-
ators (2.3.23) follows from the fact that they take the form


Xα =
∂ aα
at the point a = 0: Thus, the Lie algebra of operators Lr spanned by the operators
(2.3.23) is the Lie algebra of the given Gr :
The operators (2.3.23) are sometimes termed the shift operators on the group Gr :
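As a hedged illustration of the shift operators (2.3.23), consider the two-parameter group of affine transformations of the line, taken here with the (assumed) multiplication law $\varphi(a,b) = (e^{b^2}a^1 + b^1,\ a^2 + b^2)$, so that a = 0 is the unit element. The script computes $A^\alpha_\beta(a)$, builds $X_\alpha = A^\sigma_\alpha\, \partial/\partial a^\sigma$, and verifies the commutation relation $[X_1, X_2] = X_1$:

```python
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')
a = (a1, a2)
phi = (sp.exp(b2) * a1 + b1, a2 + b2)    # assumed group law; unit at a = 0

# Auxiliary functions A^alpha_beta(a) = d phi^alpha / d b^beta at b = 0
A = sp.Matrix(2, 2, lambda al, be: sp.diff(phi[al], (b1, b2)[be])
              .subs({b1: 0, b2: 0}))

# Shift operator X_beta = A^sigma_beta d/da^sigma acting on a test function
def X(beta, G):
    return sum(A[sig, beta] * sp.diff(G, a[sig]) for sig in range(2))

F = sp.Function('F')(a1, a2)
commutator = sp.simplify(X(0, X(1, F)) - X(1, X(0, F)))
print(commutator)    # equals X_1 F = dF/da1, i.e. [X1, X2] = X1
```

The single nonzero structure constant here is $C^1_{12} = 1$, consistent with Eq. (2.3.18).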

2.4 Subgroup, normal subgroup and factor group

In this section we will discuss details of the correspondence between Lie algebras
Lr and local Lie groups Gr established by Definition 2.11.

2.4.1 Lemma on commutator

It is convenient to use another realization of the Lie algebra $L_r$ of the group $G_r$. Namely, we consider the Lie algebra $L_r$ of the directing vectors e of subgroups $G_1$ of the group $G_r$. This algebra $L_r$ is the linear space of vectors $e = (e^1, \ldots, e^r)$, where the operation of commutation is determined by means of the structure constants $C^\alpha_{\beta\gamma}$ of the group $G_r$ by the formulae

$$[e_1, e_2]^\alpha = C^\alpha_{\beta\gamma}\, e_1^\beta e_2^\gamma \qquad (\alpha = 1, \ldots, r). \quad (2.4.1)$$

One can readily verify that the introduced operation of commutation satisfies all the axioms of Definition 1.7 due to the properties (2.3.21) of the structure constants. Let us verify that the structure constants of the derived $L_r$ are equal to the constants $C^\alpha_{\beta\gamma}$ in the basis $\{e_\alpha\}$ determined as follows: the vector $e_\alpha$ has the coordinates $e_\alpha^\beta = \delta_\alpha^\beta$. This follows from Eq. (2.2.2), written for $[e_\beta, e_\gamma]$, namely
$$[e_\beta, e_\gamma]^\alpha = C^\alpha_{\sigma\tau}\, e_\beta^\sigma e_\gamma^\tau = C^\alpha_{\sigma\tau}\, \delta_\beta^\sigma \delta_\gamma^\tau = C^\alpha_{\beta\gamma} = C^\sigma_{\beta\gamma}\, \delta_\sigma^\alpha = C^\sigma_{\beta\gamma}\, e_\sigma^\alpha,$$
whence
$$[e_\beta, e_\gamma] = C^\sigma_{\beta\gamma}\, e_\sigma,$$
which was to be proved.
Let us make another preliminary observation. The formulae (2.3.4) entail that
$$A^\alpha_\gamma(a) = \left. \frac{\partial \varphi^\alpha}{\partial b^\gamma} \right|_{b=0} = \delta^\alpha_\gamma + r^\alpha_{\beta\gamma}\, a^\beta + O(|a|^2),$$
whence
$$\left. \frac{\partial A^\alpha_\gamma}{\partial a^\beta} \right|_{a=0} = r^\alpha_{\beta\gamma}. \quad (2.4.2)$$
Therefore, equations (2.3.18) taken at a = 0 provide
$$C^\alpha_{\beta\gamma} = r^\alpha_{\beta\gamma} - r^\alpha_{\gamma\beta}. \quad (2.4.3)$$
Consider in $G_r$ two subgroups $G_1$, $g_1(t)$ and $g_2(t)$, with the directing vectors $e_1$ and $e_2$, respectively. Let us construct a new curve in $G_r$:
$$\hat g(t) = g_1(\sqrt t\,)\, g_2(\sqrt t\,)\, g_1^{-1}(\sqrt t\,)\, g_2^{-1}(\sqrt t\,), \quad (2.4.4)$$
where $t \geq 0$.
Lemma 2.4. The curve $\hat g(t)$ determined by Eq. (2.4.4) has the directing vector $e = [e_1, e_2]$, where the commutator is defined by Eqs. (2.4.1).
Proof. Equations (2.3.9) and (2.4.2) entail that
$$a^\alpha(t) = e^\alpha t + \frac{1}{2}\, r^\alpha_{\beta\gamma}\, e^\beta e^\gamma t^2 + O(t^3)$$
along the subgroup $G_1$ with the directing vector e. Therefore, if $b(t)$ are the coordinates of the curve $g_1(\sqrt t\,)\, g_2(\sqrt t\,)$, then
$$b^\alpha(t) = \varphi^\alpha(a_1(\sqrt t\,), a_2(\sqrt t\,)) = a_1^\alpha(\sqrt t\,) + a_2^\alpha(\sqrt t\,) + r^\alpha_{\beta\gamma}\, a_1^\beta(\sqrt t\,)\, a_2^\gamma(\sqrt t\,) + O(t^{3/2})$$
$$= (e_1^\alpha + e_2^\alpha)\sqrt t + \frac{1}{2}\, r^\alpha_{\beta\gamma}\,(e_1^\beta e_1^\gamma + e_2^\beta e_2^\gamma)\, t + r^\alpha_{\beta\gamma}\, e_1^\beta e_2^\gamma\, t + O(t^{3/2}).$$

Likewise,
$$c^\alpha(t) = -(e_1^\alpha + e_2^\alpha)\sqrt t + \frac{1}{2}\, r^\alpha_{\beta\gamma}\,(e_1^\beta e_1^\gamma + e_2^\beta e_2^\gamma)\, t + r^\alpha_{\beta\gamma}\, e_1^\beta e_2^\gamma\, t + O(t^{3/2})$$
for the coordinates $c(t)$ of the curve $g_1^{-1}(\sqrt t\,)\, g_2^{-1}(\sqrt t\,)$. Consequently, the coordinates $\hat a(t)$ of the curve $\hat g(t)$ from (2.4.4), equal to $\varphi(b(t), c(t))$, are given by
$$\hat a^\alpha(t) = \varphi^\alpha(b(t), c(t)) = b^\alpha + c^\alpha + r^\alpha_{\beta\gamma}\, b^\beta c^\gamma + o(|b|^2 + |c|^2)$$
$$= \left[ r^\alpha_{\beta\gamma}\,(e_1^\beta e_1^\gamma + e_2^\beta e_2^\gamma) + 2\, r^\alpha_{\beta\gamma}\, e_1^\beta e_2^\gamma - r^\alpha_{\beta\gamma}\,(e_1^\beta + e_2^\beta)(e_1^\gamma + e_2^\gamma) \right] t + O(t^{3/2})$$
$$= (r^\alpha_{\beta\gamma} - r^\alpha_{\gamma\beta})\, e_1^\beta e_2^\gamma\, t + O(t^{3/2}) = C^\alpha_{\beta\gamma}\, e_1^\beta e_2^\gamma\, t + O(t^{3/2}).$$

The relations (2.4.3) are used in the latter transition. Whence, differentiating with respect to t and setting t = 0, one obtains Eqs. (2.4.1) for the vector
$$e = \left. \frac{d\hat a}{dt} \right|_{t=0}.$$

The lemma is proved.


Lemma 2.4 is termed a lemma on commutator.
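The construction (2.4.4) can be tested numerically on a matrix group. In the sketch below (the choice of SO(3) generators and of the step t is arbitrary and purely illustrative), $\hat g(t)$ is formed from rotation matrices and $(\hat g(t) - I)/t$ is compared with the matrix commutator $[K_1, K_2]$, which plays the role of the directing vector:

```python
import numpy as np

def expm(M, terms=25):
    """Matrix exponential by truncated series (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Generators of rotations about the x- and y-axes
K1 = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
K2 = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])

t = 1e-4
s = np.sqrt(t)
g1, g2 = expm(s * K1), expm(s * K2)
# The curve (2.4.4): g1(sqrt t) g2(sqrt t) g1^{-1}(sqrt t) g2^{-1}(sqrt t)
ghat = g1 @ g2 @ expm(-s * K1) @ expm(-s * K2)

directing = (ghat - np.eye(3)) / t    # approximates the directing vector
bracket = K1 @ K2 - K2 @ K1           # the commutator [K1, K2]
print(np.max(np.abs(directing - bracket)))   # O(sqrt t), hence small
```

The residual decays like $\sqrt t$, in agreement with the $O(t^{3/2})$ remainder in the proof above.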

2.4.2 Subgroup

Definition 2.12. A subset $G_s \subset G_r$ $(0 \leq s \leq r)$ is called a subgroup of the local Lie group $G_r$ if $G_s$ is closed with respect to the multiplication in $G_r$ and if the coordinates of elements $g_a \in G_s$ are given by means of thrice continuously differentiable functions
$$a^\alpha = f^\alpha(\bar a^1, \ldots, \bar a^s) \qquad (\alpha = 1, \ldots, r) \quad (2.4.5)$$
for which $f^\alpha|_{\bar a = 0} = 0$ and the rank of the matrix
$$\left( \frac{\partial f^\alpha}{\partial \bar a^\beta} \right)_{\bar a = 0}$$
equals s.
Let $G_r$ be a local Lie group and let $L_r$ be its Lie algebra realized as a Lie algebra of the directing vectors of subgroups $G_1$ according to Eqs. (2.4.1).
Theorem 2.11. If $G_s$ is a subgroup of $G_r$, then the set $\{e\}$ of the directing vectors e of the various curves lying in $G_s$ generates a subalgebra $L_s \subset L_r$. Conversely, if $L_s$ is a subalgebra in $L_r$, then the set $\{g(t)\}$ of elements of the various subgroups $G_1$ with directing vectors from $L_s$ generates a subgroup $G_s \subset G_r$.
Proof. Let $e_1$ and $e_2$ be the directing vectors of the curves $g_1(t) \subset G_s$ and $g_2(t) \subset G_s$, respectively. Then the curve $g_1(t)\, g_2(t)$ has the directing vector $e_1 + e_2$, and the curve $g_1(\alpha t)$ has the vector $\alpha e_1$. Therefore, $\{e\}$ is a linear space, namely a subspace $L_s \subset L_r$. Further, the curve $\hat g(t) \subset G_s$ constructed according to Eq. (2.4.4) has the directing vector $e = [e_1, e_2]$. Therefore, for any $e_1, e_2 \in L_s$ one also has $[e_1, e_2] \in L_s$, i.e., $L_s$ is a subalgebra in $L_r$ according to Definition 2.3.
In order to prove the converse, we consider $G_r$ in canonical coordinates of the first kind. Let us choose a basis in $L_r$ so that the basis vectors $e_{\alpha'}$ $(\alpha' = 1, \ldots, s)$ provide a basis in the subalgebra $L_s$. Let us agree that the Greek indices with one prime $\alpha', \beta', \sigma', \ldots$ and with two primes $\alpha'', \beta'', \sigma'', \ldots$ run through the values $1, \ldots, s$ and $s+1, \ldots, r$, respectively. Then the coordinates of the vectors $e \in L_s$ are such that $e^{\alpha''} = 0$. Since $L_s$ is a subalgebra in $L_r$, the equations
$$C^\alpha_{\beta\gamma} = [e_\beta, e_\gamma]^\alpha$$
entail that
$$C^{\alpha''}_{\beta'\gamma'} = [e_{\beta'}, e_{\gamma'}]^{\alpha''} = 0. \quad (2.4.6)$$

Let $G_s$ be the set $\{g(t)\}$ of elements of subgroups $G_1$ with the directing vectors from $L_s$. Let us introduce into $G_r$ a canonical system of coordinates of the first kind ∑a resulting from the construction of $G_r$ by means of structure constants described in §2.3. Note that the elements of $G_s$ in the coordinate system ∑a are characterized by the equations $a^{\alpha''} = 0$. Indeed, since ∑a is canonical, the elements of $G_1$ have the coordinates $a^\alpha = e^\alpha t$, whence $a^{\alpha''} = e^{\alpha''} t = 0$ if $e \in L_s$.
Now let us demonstrate that by virtue of Eq. (2.4.6) one has
$$V^{\alpha''}_{\beta'}(b') \equiv 0, \qquad b' = (b^1, \ldots, b^s, 0, \ldots, 0) \quad (2.4.7)$$

in ∑a. To this end, take a vector $e \in L_s$ and single out in the system of Eqs. (2.3.20) the subsystem with $\alpha = \alpha''$, $\beta = \beta'$, which has the form
$$\frac{dW^{\alpha''}_{\beta'}}{dt} = C^{\alpha''}_{\sigma''\tau'}\, e^{\tau'} W^{\sigma''}_{\beta'}, \qquad W^{\alpha''}_{\beta'}(0) = 0$$
(see Eq. (2.4.6)). Due to homogeneity of the equations, the solution of the above subsystem is $W^{\alpha''}_{\beta'}(t) = 0$, which entails Eqs. (2.4.7).

In order to prove that $G_s$ is closed with respect to multiplication in $G_r$, it is sufficient to verify that $\varphi^{\alpha''}(a', b') = 0$. This follows from the fact that, by virtue of Eqs. (2.4.7), the Lie equations (2.3.11) for $\gamma = \gamma''$, $\beta = \beta'$ and $a = a'$, $b = b'$ have the form
$$V^{\gamma''}_\alpha(\varphi)\, \frac{\partial \varphi^\alpha(a', b')}{\partial b^{\beta'}} = 0, \qquad \varphi^{\alpha''}(a', 0) = 0.$$
Therefore, we satisfy the complete system (2.3.11) by setting $\varphi^{\alpha''}(a', b') = 0$. Indeed, the previous equations in this case are satisfied identically by virtue of Eqs. (2.4.7), whereas the remaining equations become a completely integrable system
$$V^{\gamma'}_{\alpha'}(\varphi')\, \frac{\partial \varphi^{\alpha'}(a', b')}{\partial b^{\beta'}} = V^{\gamma'}_{\beta'}(b'), \qquad \varphi^{\alpha'}(a', 0) = a^{\alpha'}.$$

The latter statement follows from the fact that, by virtue of Eq. (2.4.6), the $C^{\alpha'}_{\beta'\gamma'}$ provide a system of structure constants for $L_s$ and hence satisfy the Jacobi relations (2.3.21). Theorem 2.11 is proved.

2.4.3 Normal subgroup

Definition 2.13. A subgroup $G_s \subset G_r$ is called a normal subgroup of the group $G_r$ if
$$g\, G_s\, g^{-1} = G_s \quad \text{for any } g \in G_r.$$
Theorem 2.12. If Gs is a normal subgroup of Gr ; then the corresponding subalgebra
Ls is an ideal in Lr and vice versa.

Proof. If $G_s$ is a normal subgroup of $G_r$, then
$$h g h^{-1} g^{-1} \in G_s$$
for any $g \in G_s$ and any $h \in G_r$. Therefore, Lemma 2.4 entails that if $\bar e \in L_s$ and $e \in L_r$, then one also has $[\bar e, e] \in L_s$. According to Definition 2.3 this means that $L_s$ is an ideal in $L_r$.
Conversely, let $L_s$ be an ideal in $L_r$. By virtue of Theorem 2.11 it corresponds to a subgroup $G_s \subset G_r$. Let us prove that
$$\hat g(t) = g_a\, g(t)\, g_a^{-1} \in G_s$$
for any one-parameter subgroup $g(t) \subset G_s$ and any $g_a \in G_r$. Since the curve $\hat g(t)$ is also a subgroup $G_1$, it is sufficient to prove that its directing vector $\hat e$ belongs to $L_s$. Introducing in $G_r$ a system of coordinates ∑a as we did in the proof of Theorem 2.11 and assuming that $g(t)$ has a directing vector e, one finds that $\hat g(t)$ is written in the coordinate form as follows:
$$\hat e^\alpha t = \varphi^\alpha(\varphi(a, et), -a).$$
Whence, differentiating with respect to t and letting t = 0, one obtains
$$\hat e^\alpha = V^\alpha_\sigma(-a)\, A^\sigma_\gamma(a)\, e^\gamma. \quad (2.4.8)$$

Further, choosing the basis in $L_r$ in the same way as in the proof of Theorem 2.11 and invoking that $L_s$ is an ideal, one obtains
$$C^{\alpha''}_{\beta\gamma'} = [e_\beta, e_{\gamma'}]^{\alpha''} = 0$$
instead of Eq. (2.4.6). Proceeding as in the proof of Theorem 2.11 one obtains the equations
$$V^{\alpha''}_{\beta'}(b) = 0 \quad (2.4.9)$$
instead of Eqs. (2.4.7). Then, the Lie equations (2.3.11) provide


!
∂ϕα
00
γ 00
Vα 00 (ϕ ) =0
∂ bβ
0

00
when γ = γ ; β = β 0 : Whence
∂ϕα
00

0 = 0;
∂ bβ
which means that in the given case one also has

Aβα 0 (a) = 0:
00
(2.4.10)

By virtue of Eqs. (2.4.9) and (2.4.10), the formula (2.4.8) written for $e \in L_s$ provides
$$\hat e^{\alpha''} = V^{\alpha''}_\sigma(-a)\, A^\sigma_{\gamma'}(a)\, e^{\gamma'} = V^{\alpha''}_{\sigma''}(-a)\, A^{\sigma''}_{\gamma'}(a)\, e^{\gamma'} = 0,$$
i.e., $\hat e \in L_s$, which was to be proved.


The center of the group $G_r$ is the set of the elements of $G_r$ that are permutable with every element of $G_r$. The center of the group $G_r$ corresponds to the center Z of the Lie algebra $L_r$ of the group $G_r$. We leave the proof of this statement as an exercise for the reader.

2.4.4 Factor group

Let $G_s$ be a normal subgroup in $G_r$. One can introduce an equivalence relation in $G_r$ by the following rule:
$$g_c \sim g_b \quad \text{if} \quad g_c\, g_b^{-1} \in G_s.$$

Accordingly, $G_r$ splits into classes of equivalent elements. Let a system of coordinates ∑a in $G_r$ be chosen as in the proof of Theorem 2.11.
Lemma 2.5. The equivalence relation $g_c \sim g_b$ is identical with the equations
$$c'' = b''.$$

Proof. We have already shown that
$$\frac{\partial \varphi^{\alpha''}(a, b)}{\partial b^{\beta'}} = 0$$
while proving Theorem 2.12. Since ∑a is a canonical system of the first kind, one has
$$\varphi^\alpha(a, b) = -\varphi^\alpha(-b, -a),$$
whence
$$\frac{\partial \varphi^{\alpha''}(a, b)}{\partial a^{\beta'}} = 0$$
as well. Hence, the equations
$$\varphi^{\alpha''}(a, b) = \varphi^{\alpha''}(a'', b'')$$
hold. If $c'' = b''$ then
$$\varphi^{\alpha''}(c, -b) = \varphi^{\alpha''}(b'', -b'') = 0,$$
whence
$$g_c\, g_b^{-1} \in G_s.$$
Conversely, let $g_c = g_a g_b$, where $g_a \in G_s$. Then $a'' = 0$ and
$$c^{\alpha''} = \varphi^{\alpha''}(a', b) = \varphi^{\alpha''}(0, b'') = b^{\alpha''},$$
i.e., $c'' = b''$, which was to be proved.


Let $h(g)$ be the class of equivalent elements containing g. One can determine an operation of multiplication of classes by the rule
$$h(g_1)\, h(g_2) = h(g_1 g_2).$$
One can easily verify that this operation does not depend on the choice of "representatives" $g_1, g_2$ of the classes $h(g_1), h(g_2)$ and that it satisfies the group axioms. Let us introduce coordinates into the set of classes by taking the numbers $a^{\alpha''}$ as coordinates of the class $h(g_a)$. Lemma 2.5 demonstrates that the correspondence $h(g_a) \leftrightarrow a''$ is one-to-one. The multiplication law for classes in these coordinates is given by the functions $\varphi''(a'', b'')$ satisfying the smoothness requirement for the multiplication law in a local Lie group. Hence, the set of classes $h(g_a)$ is a local Lie group.
Definition 2.14. The set of classes of equivalent elements of the group $G_r$ generated by its normal subgroup $G_s$ is called the factor group of the group $G_r$ by its normal subgroup $G_s$ and is denoted by the symbol $G_r/G_s$.
Definition 2.4 given for Lie algebras is the analogue of Definition 2.14. If $G_s$ is a normal subgroup in $G_r$ and if $L_s$ is the corresponding ideal in the Lie algebra $L_r$ of the group $G_r$, then one can construct the factor group $G_r/G_s$ and the quotient algebra $L_r/L_s$.
Theorem 2.13. The quotient algebra $L_r/L_s$ is the Lie algebra of the factor group $G_r/G_s$.
Proof. The quotient algebra $L_r/L_s$ is the set of vectors representing the classes of equivalent vectors $e \in L_r$ with respect to the ideal $L_s$:
$$e \sim \bar e \quad \text{if} \quad e - \bar e \in L_s.$$
Choosing the basis in $L_r$ in the same way as in the proof of Theorem 2.11, one obtains the equivalence criterion in the coordinate form: $e'' = \bar e''$ or, to be more exact,
$$e^{\alpha''} = \bar e^{\alpha''}.$$
Hence, $L_r/L_s$ can be considered as the set of vectors of the form $e''$. By virtue of Lemma 2.5 and the construction of the factor group $G_r/G_s$, the directing vectors of one-parameter subgroups from $G_r/G_s$ also have the form $e''$. Therefore, the operation of commutation (2.4.1) in the Lie algebra of the group $G_r/G_s$ is given by
$$[e_1, e_2]^{\alpha''} = C^{\alpha''}_{\beta''\gamma''}\, e_1^{\beta''} e_2^{\gamma''}.$$
The proof is completed by the observation that the $C^{\alpha''}_{\beta''\gamma''}$ are the structure constants of the quotient algebra $L_r/L_s$.
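A minimal numerical illustration of Theorem 2.13, assuming the Heisenberg algebra with its one-dimensional center taken as the ideal $L_s$ (so the primed index runs over {1} and the double-primed over {2, 3}; both the algebra and the index ordering are choices made here for illustration):

```python
import numpy as np

# Heisenberg algebra with the ideal (center) listed first: [e2, e3] = e1.
r, s = 3, 1                      # dimension of the algebra and of the ideal
C = np.zeros((r, r, r))
C[0, 1, 2], C[0, 2, 1] = 1.0, -1.0

# Ideal test: the double-primed components of [e_b, v] must vanish
# whenever v lies in L_s = span(e1).
for b in range(r):
    assert np.allclose(C[s:, b, :s], 0.0) and np.allclose(C[s:, :s, b], 0.0)

# Structure constants of the quotient algebra L_r / L_s: the C^{a''}_{b'' c''}.
C_quot = C[s:, s:, s:]
print(C_quot)                    # all zeros: the quotient algebra is abelian
```

The vanishing of all $C^{\alpha''}_{\beta''\gamma''}$ shows that the corresponding factor group is (locally) abelian.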

2.5 Inner automorphisms of a group and of its Lie algebra

2.5.1 Inner automorphism

Definition 2.15. An inner automorphism of a group $G_r$ is a mapping of $G_r$ onto itself given by the formula
$$g \to h^{-1} g h,$$
where $h \in G_r$. If $h = g_a$, then the corresponding inner automorphism is denoted by the symbol $\Gamma_a$, so that
$$\Gamma_a(g) = g_a^{-1}\, g\, g_a.$$
The operation of multiplication in the set GA of inner automorphisms of the group $G_r$ is defined by
$$\Gamma_a \Gamma_b = \Gamma_{\varphi(a,b)}. \quad (2.5.1)$$
It means that the automorphism $\Gamma_a \Gamma_b$ acts upon elements $g \in G_r$ as follows:
$$\Gamma_a \Gamma_b(g) = (g_a g_b)^{-1}\, g\, (g_a g_b) = g_b^{-1}(g_a^{-1}\, g\, g_a)\, g_b = \Gamma_b(\Gamma_a(g)).$$
It is manifest that the automorphisms $\Gamma_a$ and $\Gamma_b$ coincide if and only if the element $g_a g_b^{-1}$ belongs to the center Z of the group $G_r$. Therefore, there is a one-to-one correspondence between inner automorphisms and elements of the factor group $G_r/Z$. Moreover, the multiplication law in GA is the same as in $G_r/Z$ due to (2.5.1). It follows that the set GA is a local Lie group with the multiplication (2.5.1) and that the group GA is isomorphic to the factor group $G_r/Z$.

Let us demonstrate that the group GA can be represented as a group of linear transformations in the set $\{e\}$ of the directing vectors of subgroups $G_1$ of the group $G_r$. With this purpose in mind, let us introduce a canonical system of coordinates ∑a of the first kind in $G_r$ and consider a subgroup $G_1$ with the directing vector e: $g(t) = g_{et}$. As it was mentioned in the proof of Theorem 2.12, the curve
$$\hat g(t) = \Gamma_a\, g(t) = g_a^{-1}\, g(t)\, g_a$$
is also a subgroup $G_1$, with the directing vector determined by a formula of the form (2.4.8). This formula defines a linear transformation $l_a$ in $L_r$ given by the matrix
$$l_a:\quad l^\alpha_\beta(a) = V^\alpha_\sigma(a)\, A^\sigma_\beta(-a). \quad (2.5.2)$$

2.5.2 Lie algebra of GA and adjoint algebra of Lr

It is evident that the automorphisms Γa and Γb are identical if and only if the transfor-
mations la and lb are identical and that the product ΓaΓb corresponds to the product
la lb = lϕ (a;b) ; given by the matrix

lβα (ϕ (a; b)) = lσα (b)lβσ (a):

It follows that the set L of matrices lβα (a) with the variable a is a local Lie group that
is isomorphic to the group of inner automorphisms GA : Thus, we have the following
statement.
Theorem 2.14. The Lie algebra of the group GA is isomorphic to the adjoint algebra
of the Lie algebra Lr :
Proof. Let us consider an automorphism $\Gamma_a$ along a one-parameter subgroup a = ut with the directing vector u. Then $\Gamma_{ut}$ is also a subgroup $G_1$ in the group GA, and hence the matrix $l^\alpha_\beta(ut)$ is a one-parameter subgroup of matrices in the group L. Therefore,
$$l^\alpha_\beta(u(t+s)) = l^\alpha_\sigma(us)\, l^\sigma_\beta(ut).$$
Differentiating with respect to s and letting s = 0 one obtains
$$\frac{dl^\alpha_\beta(ut)}{dt} = \left. \frac{\partial l^\alpha_\sigma(a)}{\partial a^\gamma} \right|_{a=0} u^\gamma\, l^\sigma_\beta(ut).$$

The constant coefficient here is calculated by using (2.5.2) and the equation $V^\alpha_\sigma(a)\, A^\sigma_\beta(a) = \delta^\alpha_\beta$:
$$\left. \frac{\partial l^\alpha_\sigma(a)}{\partial a^\gamma} \right|_{a=0} = \left[ \frac{\partial V^\alpha_\tau(a)}{\partial a^\gamma}\, A^\tau_\sigma(-a) + V^\alpha_\tau(a)\, \frac{\partial A^\tau_\sigma(-a)}{\partial a^\gamma} \right]_{a=0}$$
$$= \left[ -V^\beta_\tau(a)\, V^\alpha_\nu(a)\, \frac{\partial A^\nu_\beta(a)}{\partial a^\gamma}\, A^\tau_\sigma(-a) + V^\alpha_\tau(a)\, \frac{\partial A^\tau_\sigma(-a)}{\partial a^\gamma} \right]_{a=0}$$
$$= \left[ -\frac{\partial A^\alpha_\sigma(a)}{\partial a^\gamma} + \frac{\partial A^\alpha_\sigma(-a)}{\partial a^\gamma} \right]_{a=0} = -2 \left. \frac{\partial A^\alpha_\sigma(a)}{\partial a^\gamma} \right|_{a=0} = -2\, r^\alpha_{\gamma\sigma}.$$

The latter equality follows from Eq. (2.4.2). The equations $a^{-1} = -a$ and
$$\varphi^\alpha(a, -a) = -r^\alpha_{\beta\gamma}\, a^\beta a^\gamma + o(|a|^2) = 0,$$
i.e., $r^\alpha_{\beta\gamma}\, a^\beta a^\gamma = 0$, yield that the constants $r^\alpha_{\beta\gamma}$ are skew-symmetric in the lower indices in the canonical system of coordinates. Therefore, equation (2.4.3) provides
$$-2\, r^\alpha_{\gamma\sigma} = C^\alpha_{\sigma\gamma}.$$
Thus, the matrix (2.5.2) satisfies the system of equations
$$\frac{dl^\alpha_\beta(ut)}{dt} = C^\alpha_{\sigma\gamma}\, u^\gamma\, l^\sigma_\beta(ut), \qquad l^\alpha_\beta(0) = \delta^\alpha_\beta \quad (2.5.3)$$
along the subgroup $G_1$: a = ut.
We have already constructed inner automorphisms of the Lie algebra $L_r$ given by Eqs. (2.2.8) with the matrices $f^\alpha_\beta(t)$ in §2.2. Substitution of the expressions (2.2.8) into the equations (2.2.7) of the subgroup $G_1$ yields the following system of equations for these matrices:
$$\frac{df^\alpha_\beta(t)}{dt} = C^\alpha_{\sigma\gamma}\, e^\gamma f^\sigma_\beta(t), \qquad f^\alpha_\beta(0) = \delta^\alpha_\beta,$$
coinciding with (2.5.3) when u = e. Hence,
$$l^\alpha_\beta(ut) = f^\alpha_\beta(t)$$
when u = e, and one obtains that the group L is isomorphic to the group of inner automorphisms of the Lie algebra $L_r$. Recall that isomorphic groups have isomorphic Lie algebras. Furthermore, the Lie algebra of the group of automorphisms of the Lie algebra $L_r$ is the Lie algebra $\{E\}$ of operators $E = e^\beta E_\beta$ generated by the operators (2.2.6). The latter Lie algebra is isomorphic to the adjoint algebra LD of the Lie algebra $L_r$. This completes the proof of Theorem 2.14.
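Since (2.5.3) is linear with constant coefficients, the inner automorphisms act on directing vectors through a matrix exponential. For rotation groups this action can be checked directly: conjugating a generator K(e) by a rotation $R_a$ again gives a generator, and the induced map $e \mapsto \hat e$ is the linear transformation of the type (2.5.2), here realized by a rotation matrix. A sketch (the vectors chosen are arbitrary):

```python
import numpy as np

def K(v):
    """so(3) generator with directing vector v: K(v) w = v x w."""
    x, y, z = v
    return np.array([[0., -z, y], [z, 0., -x], [-y, x, 0.]])

def expm(M, terms=30):
    """Matrix exponential by truncated series (fine for small ||M||)."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

a_vec = np.array([0.4, -0.2, 0.7])     # group element g_a = exp(K(a_vec))
e = np.array([1.0, 2.0, -1.0])         # directing vector of a subgroup G_1
Ra = expm(K(a_vec))

# Inner automorphism Gamma_a applied to the generator of g(t) = exp(t K(e)):
conj = np.linalg.inv(Ra) @ K(e) @ Ra
# conj is again a generator K(e_hat); read off its directing vector.
e_hat = np.array([conj[2, 1], conj[0, 2], conj[1, 0]])
print(np.max(np.abs(e_hat - np.linalg.inv(Ra) @ e)))   # essentially zero
```

The map $e \mapsto \hat e$ is linear and invertible, matching the matrix-group picture of GA.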

2.6 Local Lie group of transformations

2.6.1 Introduction

We return now to transformations of the space $E^N(x)$ discussed in Chapter 1 and assume that a family of such transformations $\{T_a\}$ is given, depending on r parameters $a = (a^1, \ldots, a^r)$:
$$T_a:\quad x'^i = f^i(x, a) = f^i(x^1, \ldots, x^N, a^1, \ldots, a^r). \quad (2.6.1)$$

Definition 2.16. The family $\{T_a\}$ is called a local Lie group $G^N_r$ of point transformations of the space $E^N$ if $\{T_a\}$ is a local Lie group $G_r$ with the usual multiplication of transformations and if the functions $f^i(x, a)$ in (2.6.1) are twice continuously differentiable with respect to the variables x, a.
If $\varphi(a, b)$ is the multiplication law of elements in $G_r$, then multiplication of transformations in $G^N_r$ is carried out by the formulae
$$T_b T_a = T_{\varphi(a,b)}:\quad f^i(f(x, a), b) = f^i(x, \varphi(a, b)). \quad (2.6.2)$$

Since the group GNr is a group Gr given by the multiplication rule ϕ (a; b); all
notions and facts concerning Gr refer to GNr as well. However, due to the special
form of the multiplication law in GNr some new notions and facts arise.
Let us introduce the auxiliary functions
$$\xi^i_\alpha(x) = \left. \frac{\partial f^i(x, a)}{\partial a^\alpha} \right|_{a=0} \qquad (i = 1, \ldots, N;\ \alpha = 1, \ldots, r) \quad (2.6.3)$$
and construct the following operators with these functions:
$$X_\alpha = \xi^i_\alpha(x)\, \frac{\partial}{\partial x^i} \qquad (\alpha = 1, \ldots, r). \quad (2.6.4)$$
The operators Xα are referred to as basis operators of the group GNr :

2.6.2 Lie’s first theorem

Theorem 2.15. The functions $x'^i = f^i(x, a)$ satisfy the system of equations
$$\frac{\partial x'^i}{\partial a^\alpha} = \xi^i_\sigma(x')\, V^\sigma_\alpha(a), \qquad x'^i|_{a=0} = x^i, \quad (2.6.5)$$
where $V^\alpha_\beta(a)$ are the auxiliary functions of the group $G_r$, and the operators $X_\alpha$ (2.6.4) are linearly independent. Conversely, let us suppose that a local Lie group

$G_r$ with auxiliary functions $V^\alpha_\beta(a)$ and linearly independent operators $X_\alpha$ (2.6.4) is given. If the system of equations (2.6.5) has a unique solution for any $x \in E^N$, then substitution of the solution of the system (2.6.5) into the formulae (2.6.1) determines a local Lie group of transformations $G^N_r$ isomorphic to the group $G_r$.

Proof. Let $\Delta a$ be a (small) shift of the point a. By virtue of Eqs. (2.6.1) and (2.6.2), the formula of multiplication of transformations
$$T_{a+\Delta a} = (T_{a+\Delta a}\, T_a^{-1})\, T_a$$
is written in the coordinate form as follows:
$$f^i(x, a + \Delta a) = f^i(x', \varphi(a^{-1}, a + \Delta a)).$$
Making the Taylor expansion of the right-hand and left-hand sides, using the definition (2.6.3), and comparing the principal parts as $\Delta a \to 0$, one obtains Eqs. (2.6.5). The initial conditions of (2.6.5) follow directly from Definition 2.16, since
a unit element of the group $G^N_r$ is the identity transformation of $E^N$. In order to prove that the operators (2.6.4) are linearly independent, let us assume that ∑a is a canonical system of the first kind and that $e^\alpha_0 X_\alpha = 0$ for some vector $e_0$, i.e.,
$$e^\alpha_0\, \xi^i_\alpha \equiv 0 \qquad (i = 1, \ldots, N).$$
One has
$$x'^i = f^i(x, e_0 t)$$
along the subgroup $G_1$ with the equations $a^\alpha = e^\alpha_0 t$, and Equations (2.6.5), (2.3.19) yield
$$\frac{dx'^i}{dt} = e^\alpha_0\, \frac{\partial x'^i}{\partial a^\alpha} = \xi^i_\sigma(x')\, V^\sigma_\alpha(e_0 t)\, e^\alpha_0 = \xi^i_\sigma(x')\, e^\sigma_0 = 0. \quad (2.6.6)$$
Equations (2.6.6) show that $x'^i = x^i$ along the whole $G_1$, i.e., all transformations of this $G_1$ are identical. It is possible only for $e_0 = 0$, which was to be proved.
Let us prove the converse statement of Theorem 2.15. Let the functions obtained
as a solution of the system (2.6.5) have the form (2.6.1), thus determining the family
fTa g of transformations of E N : Let us demonstrate that these Ta satisfy Eqs. (2.6.2).
To this end assume that
$$x' = f(x, a), \qquad x'' = f(x', b) \qquad \text{and} \qquad y = f(x, \varphi(a, b)).$$
Using Eqs. (2.6.5) and (2.3.13), as well as the property (2.3.12), one obtains that
$$\frac{\partial x''^i}{\partial b^\alpha} = \xi^i_\sigma(x'')\, V^\sigma_\alpha(b), \qquad x''^i|_{b=0} = x'^i,$$
$$\frac{\partial y^i}{\partial b^\alpha} = \frac{\partial y^i}{\partial \varphi^\beta}\, \frac{\partial \varphi^\beta}{\partial b^\alpha} = \xi^i_\sigma(y)\, V^\sigma_\beta(\varphi)\, A^\beta_\tau(\varphi)\, V^\tau_\alpha(b) = \xi^i_\sigma(y)\, V^\sigma_\alpha(b), \qquad y^i|_{b=0} = x'^i.$$
One can see that $x''^i$ and $y^i$ satisfy one and the same system of equations (2.6.5) as functions of the point b, with the same initial conditions. Uniqueness of the solution guarantees that $x''^i = y^i$ for any x, a, b, which is Eq. (2.6.2).
This proves the group property of the family $\{T_a\}$ and shows that the mapping $G_r \to \{T_a\}$ given by the formula $\psi(a) = T_a$ is at least a homomorphism. One has only to demonstrate that $\psi$ is an isomorphism. Indeed, one would have $x'^i = x^i$ for all $a = e_0 t$ along a subgroup $G_1$ from the kernel of the homomorphism $\psi$ with the directing vector $e_0$, and hence $e^\sigma_0\, \xi^i_\sigma(x) \equiv 0$ according to (2.6.6), i.e.,
$$e^\sigma_0 X_\sigma = 0.$$
Since the operators (2.6.4) are linearly independent by assumption, the above equation yields $e_0 = 0$, so that the kernel of $\psi$ consists of the one point a = 0. This proves that $\psi$ is an isomorphism, and Theorem 2.15 is proved.
Corollary 2.3. The functions $x'^i = f^i(x, et)$ satisfy the equations
$$\frac{dx'^i}{dt} = e^\alpha\, \xi^i_\alpha(x'), \qquad x'^i|_{t=0} = x^i \quad (2.6.7)$$
along the subgroup $G_1$ with the directing vector e. In fact, these equations have already been written out in Eqs. (2.6.6).
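Equations (2.6.7) can be solved numerically for a concrete one-parameter group. The sketch below is illustrative only: the rotation group of the plane with $\xi(x, y) = (-y, x)$ is an assumed example, integrated by Runge-Kutta and compared with the exact rotation it generates:

```python
import numpy as np

# Rotation group of the plane: xi(x, y) = (-y, x). With e = (1,), the
# solution of (2.6.7) should be x'(t) = R(t) x, a rotation through angle t.
def xi(p):
    return np.array([-p[1], p[0]])

p0 = np.array([1.0, 0.5])
p, h, n = p0.copy(), 1e-3, 1000          # integrate up to t = 1
for _ in range(n):                        # classical RK4 for (2.6.7)
    k1 = xi(p)
    k2 = xi(p + 0.5 * h * k1)
    k3 = xi(p + 0.5 * h * k2)
    k4 = xi(p + h * k3)
    p = p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t = n * h
c, s = np.cos(t), np.sin(t)
exact = np.array([c * p0[0] - s * p0[1], s * p0[0] + c * p0[1]])
print(np.max(np.abs(p - exact)))          # close to machine precision
```

The auxiliary function $\xi$ here is exactly the coefficient of the basis operator $X = -y\,\partial_x + x\,\partial_y$.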
The structure constants Cβαγ of the group Gr are also called the structure constants
of the group of transformations GNr :

2.6.3 Lie’s second theorem

Theorem 2.16. The linear span of the operators $X_\alpha$ (2.6.4) is a Lie algebra of operators, the structure constants of which coincide with the structure constants $C^\gamma_{\alpha\beta}$ of the group $G_r$, so that
$$[X_\alpha, X_\beta] = C^\sigma_{\alpha\beta}\, X_\sigma. \quad (2.6.8)$$
Conversely, if an r-dimensional Lie algebra of operators with the basis (2.6.4) is given in $E^N$, then there exists a local Lie group of transformations $G^N_r$ whose basis operators coincide with the given operators (2.6.4).

Proof. The solvability of the system (2.6.5) with arbitrary initial conditions guarantees that it is completely integrable. Note that the system has the form (2.3.14). Writing the test (2.3.15) for complete integrability and using the Maurer-Cartan equations (2.3.17), one can verify that the criterion of complete integrability of the system (2.6.5) is given by Eqs. (2.6.8). Note that this establishes the equivalence of complete integrability of Eqs. (2.6.5) with the validity of Eqs. (2.6.8).
88 2 Lie algebras and local Lie groups

Conversely, suppose we are given a Lie algebra of operators spanned by linearly independent operators (2.6.4) for which equations (2.6.8) hold. Let us construct a local Lie group $G_r$ from the structure constants $C^\gamma_{\alpha\beta}$ taken from (2.6.8), which certainly satisfy the Jacobi relations (2.3.21). This is possible due to Theorem 2.11. Let us write the system (2.6.5) with the auxiliary functions $V^\alpha_\beta(b)$ of this $G_r$. The system is completely integrable by virtue of (2.6.8). Application of Theorem 2.15 yields a group $G^N_r$ isomorphic to the group $G_r$ constructed earlier. The transformations of $G^N_r$ satisfy Eqs. (2.6.5), which entails that the operators (2.6.4) coincide with the basis operators of the resulting group $G^N_r$. This completes the proof of Theorem 2.16.
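The bracket relation (2.6.8) can be spot-checked numerically for a concrete algebra. The sketch below is a hypothetical helper, not the book's construction: it computes the coefficients of the commutator $[X, Y] = (\xi^j \partial_j \eta^i - \eta^j \partial_j \xi^i)\partial_i$ by central differences for $X_1 = \partial/\partial x$ and $X_2 = x\,\partial/\partial x + y\,\partial/\partial y$, for which $[X_1, X_2] = X_1$, i.e. the only nonvanishing structure constant is $C^1_{12} = 1$.

```python
# Numerical commutator of two vector fields on the plane.  Derivatives of
# the coefficient functions are taken by central differences, so for
# fields with polynomial coefficients of degree <= 2 the result is exact
# up to rounding.

def commutator(xi, eta, point, h=1e-5):
    """Coefficients of [X, Y] at `point` for X = xi^i d_i, Y = eta^i d_i."""
    n = len(point)

    def d(f, j, p):  # central difference of the vector function f in x^j
        pp, pm = list(p), list(p)
        pp[j] += h
        pm[j] -= h
        fp, fm = f(pp), f(pm)
        return [(a - b) / (2 * h) for a, b in zip(fp, fm)]

    xi_p, eta_p = xi(point), eta(point)
    res = [0.0] * n
    for j in range(n):
        deta = d(eta, j, point)
        dxi = d(xi, j, point)
        for i in range(n):
            res[i] += xi_p[j] * deta[i] - eta_p[j] * dxi[i]
    return res

X1 = lambda p: [1.0, 0.0]          # coordinates of X1 = d/dx
X2 = lambda p: [p[0], p[1]]        # coordinates of X2 = x d/dx + y d/dy

bracket = commutator(X1, X2, [0.3, -1.2])
print(bracket)   # close to [1.0, 0.0], i.e. [X1, X2] = X1
```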
In what follows, the Lie algebra of operators spanned by the basis operators (2.6.4) of the group $G^N_r$ is denoted by $L^N_r$. It is clear that $L^N_r$ is a Lie algebra of the group $G^N_r$ in the sense of Definition 2.11.
Let us point out one application of the above results to group transformations admitted by a system of differential equations (S). The system (S) is said to admit the group $G^N_r$ if it admits every subgroup $G_1 \subset G^N_r$ in the sense of Definition 1.6 (Chapter 1). Then the following corollary of Theorem 2.16 can be formulated: if (S) admits a Lie algebra of operators $L^N_r$, then (S) admits the local Lie group $G^N_r$ corresponding to this $L^N_r$ in the sense of (2.3.17). The proof is trivial.
Examples of local Lie groups of transformations are provided by the so-called parametric groups of the local Lie group $G_r$. Let us consider the multiplication law $c = \varphi(a, b)$ of elements of $G_r$ as a transformation of the point $a$ into the point $c$ of the space $E^r(a)$ depending on the parameters $b$. The associativity of the multiplication of elements of $G_r$,
$$(g_c g_a) g_b = g_c (g_a g_b),$$
written in coordinate form provides the equations
$$\varphi^i(\varphi(c, a), b) = \varphi^i(c, \varphi(a, b))$$
similar to Eqs. (2.6.2). Hence, the transformations
$$a \to c : \quad c^\alpha = \varphi^\alpha(a, b)$$
form a local Lie group of transformations $G^r_r$. It is known as the first parametric group of the group $G_r$. The formulae (2.6.3) for this group of transformations coincide with the formulae (2.3.5), and the basis operators of its Lie algebra $L^r_r$ coincide with the operators (2.3.23). Likewise, one can define the second parametric group of the group $G_r$ as the group of transformations of the space $E^r(b)$, namely the transformations $b \to c$ given by the same formulae $\varphi(a, b) = c$.

2.6.4 Canonical coordinates of the second kind

As has already been mentioned, the proofs of the theorems on the existence of the groups $G_r$ or $G^N_r$ also contain an algorithm for constructing multiplication laws (in $G_r$) or transformations (in $G^N_r$). The corresponding groups are obtained in canonical coordinates of the first kind. However, this algorithm is so cumbersome that it is inconvenient in applications and is hardly ever used in practice. The algorithm for constructing $G_r$ and $G^N_r$ based on finding a "basis" set of subgroups $G_1$ is more useful in practice.
The algorithm consists in the following. Suppose that we know a set of $r$ subgroups $G_1$ of the group $G_r$ with the property that the directing vectors of these subgroups are linearly independent in a certain system of coordinates $\Sigma_a$. The subgroups can be written by the formulae $a = a_\alpha(\bar a^\alpha)$ or
$$g_\alpha(\bar a^\alpha) = g_{a_\alpha(\bar a^\alpha)} \quad (\alpha = 1, \ldots, r), \qquad (2.6.9)$$
where $\bar a^\alpha$ is the parameter of the subgroup with the number $\alpha$. Now let us compose a single product of all the $g_\alpha(\bar a^\alpha)$. It will be some element $g_a \in G_r$ in the coordinates $\Sigma_a$:
$$g_a = g_1(\bar a^1)\, g_2(\bar a^2) \cdots g_r(\bar a^r). \qquad (2.6.10)$$
We claim that every element $g_a \in G_r$ is uniquely represented in the form (2.6.10) when the parameters $\bar a^\alpha$ vary independently of each other. In other words, we state that the values of the parameters $\bar a^\alpha$ can be taken as new coordinates in $G_r$, thus providing a new system of coordinates $\Sigma_{\bar a}$. Indeed, equation (2.6.10) shows that equations (2.3.2) hold for $a$ and $\bar a$, and one has only to verify that the Jacobian
$$\left.\det\left(\frac{\partial a^\alpha}{\partial \bar a^\beta}\right)\right|_{\bar a = 0}$$
does not vanish. Since all the $\bar a^\alpha$ are independent, one can assume while differentiating with respect to $\bar a^\beta$ that all $\bar a^\alpha$ are equal to zero except for $\bar a^\beta$. Then equation (2.6.10) takes the form
$$g_a = g_\beta(\bar a^\beta)$$
or, by virtue of Eq. (2.6.9), $g_a = g_{a_\beta(\bar a^\beta)}$, so that the derivative
$$\left.\frac{\partial a^\alpha}{\partial \bar a^\beta}\right|_{\bar a = 0}$$
equals the coordinate $e^\alpha_\beta$ of the directing vector $e_\beta$ of the subgroup $G_1$ with the number $\beta$, according to the definition of the directing vector. Therefore, the Jacobian under consideration equals $\det|e^\alpha_\beta|$ and does not vanish due to the linear independence of the vectors $e_\beta$ ($\beta = 1, \ldots, r$).
The coordinates introduced in $G_r$ according to the formula (2.6.10) are called canonical coordinates of the second kind.
Likewise, one can introduce canonical coordinates of the second kind in $G^N_r$. In this case, one multiplies the transformations $T_{a_\alpha}$ of the subgroups $G_1$ with the parameters $a^\alpha$. The result is written in the form
$$T = T_{a_1} T_{a_2} \cdots T_{a_r} \qquad (2.6.11)$$
instead of (2.6.10).
Example 2.5. Let us consider the Lie algebra of operators $L^2_2$ with the basis
$$X_1 = \frac{\partial}{\partial x}\,, \qquad X_2 = x\frac{\partial}{\partial x} + y\frac{\partial}{\partial y}$$
and construct the corresponding group $G^2_2$ in canonical coordinates of the second kind. The subgroup $G_1$ with the operator $X_1$ is the group of translations and, as we already know from Chapter 1, has the form
$$T_a : \quad x' = x + a, \quad y' = y.$$
The subgroup $G_1$ with the operator $X_2$ is the group of dilations and has the form
$$T_b : \quad x' = bx, \quad y' = by.$$
The general form of the transformation $T_{(a,b)} \in G^2_2$ is provided by the multiplication
$$T_{(a,b)} = T_a T_b : \quad x' = bx + a, \quad y' = by.$$
However, one can multiply in the reverse order as well:
$$T_{(a,b)} = T_b T_a : \quad x' = b(x + a), \quad y' = by,$$
which obviously leads to another system of coordinates in $G^2_2$.
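The two coordinate systems of Example 2.5 can be reproduced by directly composing the maps; the helper names below are illustrative only.

```python
# Composing the translation T_a and the dilation T_b of Example 2.5 in
# the two possible orders gives two different parametrizations of G^2_2.

def T_a(a):
    return lambda x, y: (x + a, y)        # translation group

def T_b(b):
    return lambda x, y: (b * x, b * y)    # dilation group

def compose(f, g):                        # (f o g)(p) = f(g(p))
    return lambda x, y: f(*g(x, y))

a, b, x, y = 2.0, 3.0, 1.0, 5.0
TaTb = compose(T_a(a), T_b(b))(x, y)   # x' = b*x + a,   y' = b*y
TbTa = compose(T_b(b), T_a(a))(x, y)   # x' = b*(x + a), y' = b*y
print(TaTb, TbTa)                      # (5.0, 15.0) and (9.0, 15.0)
```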


In conclusion, let us mention the notion of the prolonged group $\widetilde G^{\widetilde N}_r$, which is an evident generalization of Definition 1.5. The Lie algebra of the group $\widetilde G^{\widetilde N}_r$ is the Lie algebra of the prolonged operators $\widetilde X$, where $X$ are the operators of the Lie algebra of the group $G^N_r$.
Chapter 3
Group invariant solutions of differential equations

3.1 Invariants of the group $G^N_r$

3.1.1 Invariance criterion

Let $G^N_r$ be a local Lie group of point transformations $x' = T_a x$ in the space $E^N(x)$ and let
$$X_\alpha = \xi^i_\alpha(x)\frac{\partial}{\partial x^i} \quad (\alpha = 1, \ldots, r) \qquad (3.1.1)$$
be a basis of its Lie algebra $L^N_r$ of operators.
Definition 3.1. A function $I(x)$, which is not identically constant, is called an invariant of the group $G^N_r$ if
$$I(T_a x) = I(x)$$
for all transformations $T_a \in G^N_r$.
Comparing this definition with Definition 1.3 and invoking the corollary of Theorem 2.7, one can see that the function $I(x)$ is an invariant of the group $G^N_r$ if and only if it is an invariant of every subgroup $G_1 \subset G^N_r$. Theorem 1.4 gives the necessary and sufficient condition for the group $G_1$ to have $I(x)$ as its invariant. By virtue of this theorem, $I(x)$ is an invariant of $G^N_r$ if and only if
$$X I(x) = 0$$
for every operator $X \in L^N_r$. Since
$$X = e^\alpha X_\alpha,$$
one finally obtains the criterion for the invariance of the function $I(x)$ with respect to the group $G^N_r$ in the form
$$X_\alpha I(x) = 0 \quad (\alpha = 1, \ldots, r), \qquad (3.1.2)$$
where $X_\alpha$ are the basis operators (3.1.1) of the Lie algebra $L^N_r$.



Thus, an invariant $I(x)$ of the group $G^N_r$ has to be a solution of the system of linear differential equations (3.1.2). Questions then arise about the existence and the multitude of solutions of the system (3.1.2). First of all, it is clear that if $I^1(x), \ldots, I^s(x)$ are solutions of the system (3.1.2), then any function $F(I^1(x), \ldots, I^s(x))$ is also a solution:
$$X_\alpha F(I) = \frac{\partial F}{\partial I^\sigma}\, X_\alpha I^\sigma = 0.$$
Thus, questions of functional dependence and independence of functions arise. These questions belong to classical analysis. Let us recall some facts from this field.
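The criterion (3.1.2), together with the remark that any $F(I)$ is again a solution, can be illustrated numerically. The sketch below is illustrative (the helper names are hypothetical): it approximates $XI$ by a central-difference directional derivative for the rotation operator $X = y\,\partial/\partial x - x\,\partial/\partial y$ and $I = x^2 + y^2$.

```python
import math

# Apply the operator X = xi^i d/dx^i to a function f at a point by
# approximating each partial derivative with a central difference.

def X_of(f, point, xi, h=1e-6):
    """Directional derivative xi(point) . grad f(point)."""
    v = xi(point)
    total = 0.0
    for i in range(len(point)):
        pp, pm = list(point), list(point)
        pp[i] += h
        pm[i] -= h
        total += v[i] * (f(pp) - f(pm)) / (2 * h)
    return total

xi = lambda p: [p[1], -p[0]]            # X = y d/dx - x d/dy
I = lambda p: p[0] ** 2 + p[1] ** 2     # candidate invariant
F = lambda p: math.sin(I(p))            # any function of I is again invariant

p = [0.8, -0.3]
print(X_of(I, p, xi), X_of(F, p, xi))   # both are ~ 0
```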

3.1.2 Functional independence

Functions $f^1(x), \ldots, f^s(x)$ are said to be functionally dependent if there exists a not identically vanishing function $F(z^1, \ldots, z^s)$ such that the function $F(f^1(x), \ldots, f^s(x))$ vanishes identically with respect to the independent variables $x = (x^1, \ldots, x^N) \in E^N$:
$$F(f^1(x), \ldots, f^s(x)) \equiv 0.$$
If no such function $F(z^1, \ldots, z^s)$ exists, i.e. if the identity
$$F(f^1(x), \ldots, f^s(x)) \equiv 0$$
with respect to $x$ implies the identity
$$F(z^1, \ldots, z^s) \equiv 0$$
with respect to the variables $z$, then the functions $f^1(x), \ldots, f^s(x)$ are said to be functionally independent. In what follows, the functions $f^\sigma(x)$ are supposed to be once continuously differentiable.
Lemma 3.1. The functions $f^\sigma(x)$ ($\sigma = 1, \ldots, s$) are functionally independent if and only if the general rank $R(J)$ of the Jacobi matrix
$$J = \left(\frac{\partial f^\sigma}{\partial x^i}\right)$$
is equal to $s$:
$$R(J) = s.$$
If $R = R(J) < s$, then there exist $s - R$ functionally independent functions $F_\mu(z^1, \ldots, z^s)$ such that
$$F_\mu(f^1(x), \ldots, f^s(x)) \equiv 0 \quad (\mu = 1, \ldots, s - R).$$

This lemma is a well-known result of classical analysis and is not proved here. Let us consider a few examples. If the number of functions $f^\sigma(x)$ is such that $s > N$, then the functions are always functionally dependent. Further, if at least one of the functions $f^\sigma(x)$ is identically constant, then the functions are functionally dependent as well. If the $f^\sigma(x)$ are functionally independent, then $E^N$ contains a system of coordinates $(y)$ in which the functions $f^\sigma(x)$ are reduced to $y^1, \ldots, y^s$.
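The rank test of Lemma 3.1 is easy to probe numerically. In the sketch below (all names hypothetical), the three functions $f^1 = x + y$, $f^2 = x - y$, $f^3 = x^2 - y^2 = f^1 f^2$ are functionally dependent, and the Jacobi matrix correspondingly has rank $2 < s = 3$ at a generic point.

```python
# Numerical Jacobi matrix (central differences) and its rank via Gaussian
# elimination with partial pivoting; the "generic" point is arbitrary.

def _shift(p, i, d):
    q = list(p)
    q[i] += d
    return q

def jacobian(fs, p, h=1e-6):
    return [[(f(_shift(p, i, h)) - f(_shift(p, i, -h))) / (2 * h)
             for i in range(len(p))] for f in fs]

def rank(m, tol=1e-8):
    m = [row[:] for row in m]
    r = 0
    for c in range(len(m[0])):
        piv = max(range(r, len(m)), key=lambda k: abs(m[k][c]), default=None)
        if piv is None or abs(m[piv][c]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for k in range(len(m)):
            if k != r:
                factor = m[k][c] / m[r][c]
                m[k] = [a - factor * b for a, b in zip(m[k], m[r])]
        r += 1
    return r

fs = [lambda p: p[0] + p[1],
      lambda p: p[0] - p[1],
      lambda p: p[0] ** 2 - p[1] ** 2]
R = rank(jacobian(fs, [0.7, 0.2]))
print(R)   # 2, hence the three functions are functionally dependent
```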

3.1.3 Linearly unconnected operators

Let us turn to investigating a system of equations of the form (3.1.2). The operators $X_\alpha$ are of the form (3.1.1) and are not supposed to be the basis operators of $L^N_r$ so far.
Definition 3.2. Operators $X_\alpha$ ($\alpha = 1, \ldots, r$) are said to be linearly connected if there exist functions $\varphi^\alpha(x)$ ($\alpha = 1, \ldots, r$), not all identically zero, such that
$$\varphi^\alpha X_\alpha \equiv 0,$$
and linearly unconnected otherwise.

Note that if the $X_\alpha$ are linearly unconnected, then $r \le N$.
In what follows, we will use the notion of the commutator of two operators introduced in Definition 1.7.
Definition 3.3. Operators $X_\alpha$ ($\alpha = 1, \ldots, s$) are said to compose a complete system if they are linearly unconnected and if their commutators $[X_\alpha, X_\beta]$ satisfy the equations
$$[X_\alpha, X_\beta] = \varphi^\sigma_{\alpha\beta}(x) X_\sigma$$
with some functions $\varphi^\sigma_{\alpha\beta}(x)$. A complete system of operators is said to be Jacobian if all $\varphi^\sigma_{\alpha\beta}(x) \equiv 0$.
A system of equations
$$X_\alpha f \equiv \xi^i_\alpha(x)\frac{\partial f}{\partial x^i} = 0 \quad (\alpha = 1, \ldots, s) \qquad (3.1.3)$$
is said to be complete (Jacobian) if the operators $X_\alpha$ compose a complete (Jacobian) system of operators.
Lemma 3.2. If the system (3.1.3) is complete, then $s \le N$. When $s < N$ there exist exactly $N - s$ functionally independent solutions of the system, and any of its solutions is a function of them.
Proof. Note that the property of the system (3.1.3) of being complete or Jacobian does not depend on the choice of a system of coordinates in $E^N$. This follows from Lemma 1.4, see §1.6. Further, if $s > N$, then there are functions $\varphi^\alpha(x)$ such that $\varphi^\alpha X_\alpha \equiv 0$, since in this case the matrix
$$\left(\xi^i_\alpha(x)\right)$$
has the number of columns $N$ less than the number of rows $s$, so the rows must be linearly dependent for every $x \in E^N$.
Let us introduce the notion of equivalent systems of operators (or of equations of the form (3.1.3)). A system of operators $\{X'_\alpha\}$ is said to be equivalent to a system of operators $\{X_\alpha\}$ if the $X'_\alpha$ are independent linear combinations (with variable coefficients) of the operators $X_\alpha$, specifically, if there exist functions $\omega^\beta_\alpha(x)$ such that
$$\det\left|\omega^\beta_\alpha(x)\right| \ne 0$$
and
$$X'_\alpha = \omega^\beta_\alpha(x) X_\beta.$$
It is evident that equivalent systems of equations (3.1.3) have the same solutions. Let us demonstrate that any complete system is equivalent to some Jacobian system.
Indeed, the completeness of the system $\{X_\alpha\}$ entails that the general rank of the matrix
$$\left(\xi^i_\alpha(x)\right)$$
equals $s$. If we assume, without loss of generality, that a non-vanishing minor of order $s$ of the matrix is composed of the first $s$ columns, then one obtains an equivalent system $\{X'_\alpha\}$ with $\omega^\beta_\alpha(x)$ equal to the elements of the matrix inverse to this minor. The matrix $\left(\xi'^i_\alpha\right)$ of the resulting equivalent system has the form
$$\begin{pmatrix}
1 & 0 & \cdots & 0 & \xi'^{\,s+1}_1 & \cdots & \xi'^{\,N}_1 \\
0 & 1 & \cdots & 0 & \xi'^{\,s+1}_2 & \cdots & \xi'^{\,N}_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & 0 & \cdots & 1 & \xi'^{\,s+1}_s & \cdots & \xi'^{\,N}_s
\end{pmatrix}.$$
The system $\{X'_\alpha\}$ is Jacobian. Indeed, the equation
$$[X'_\alpha, X'_\beta] = \varphi^\sigma_{\alpha\beta} X'_\sigma$$
and the definition of the commutator entail the identities
$$\varphi^\sigma_{\alpha\beta} = X'_\alpha \xi'^\sigma_\beta - X'_\beta \xi'^\sigma_\alpha = X'_\alpha \delta^\sigma_\beta - X'_\beta \delta^\sigma_\alpha = 0$$
for $\sigma = 1, \ldots, s$. Hence, the system (3.1.3) can be considered to be Jacobian from the very beginning without loss of generality.

3.1.4 Integration of Jacobian systems

We will carry out a procedure of $s$ steps to reduce the system (3.1.3) to its simplest form. The first step consists in finding, by means of Theorem 1.3, a system of coordinates $(y)$ in $E^N$, $y^i = y^i(x)$ ($i = 1, \ldots, N$), in which the operator $X_1$ becomes the operator of translation with respect to $y^1$:
$$X_1 = Y_1 = \frac{\partial}{\partial y^1}\,.$$
In order to construct the system $(y)$, one has to find solutions $y^i(x)$ of the equations
$$X_1 y^1(x) = 1, \qquad X_1 y^{i'}(x) = 0 \quad (i' = 2, \ldots, N).$$
The operators $X_\alpha$ written in the variables $(y)$ provide a system of operators $\{Y_\alpha\}$ with
$$Y_1 = \frac{\partial}{\partial y^1}\,.$$
As has already been mentioned, the system $\{Y_\alpha\}$ is again Jacobian, and if
$$Y_{\alpha'} = \eta^i_{\alpha'}(y)\frac{\partial}{\partial y^i} \quad (\alpha' = 2, \ldots, s),$$
then
$$[Y_1, Y_{\alpha'}] = 0$$
provides that
$$\frac{\partial \eta^i_{\alpha'}}{\partial y^1} = 0.$$
Thus, the coordinates $\eta^i_{\alpha'}$ are independent of the variable $y^1$. Since differentiation with respect to $y^1$ is absent in the coordinates of the commutator $[Y_{\alpha'}, Y_{\beta'}]$ with the numbers $2, \ldots, N$, it follows from the above facts that the system of operators
$$Y'_1 = \frac{\partial}{\partial y^1}\,, \qquad Y'_{\alpha'} = \eta^{i'}_{\alpha'}(y')\frac{\partial}{\partial y^{i'}} \quad (\alpha' = 2, \ldots, s;\; i' = 2, \ldots, N),$$
which is equivalent to the system $\{X_\alpha\}$, is again Jacobian. Moreover, the operators $\{Y'_{\alpha'}\}$ ($\alpha' = 2, \ldots, s$) act in the space $E^{N-1}(y')$ of the points $y' = (y^2, \ldots, y^N)$ and compose a Jacobian system. The construction of the system $\{Y'_{\alpha'}\}$ completes the first step.
Since $\{Y'_{\alpha'}\}$ has all the properties of $\{X_\alpha\}$, one can make the second, ..., $s$-th step of the procedure and as a result arrive at a system of coordinates $(z)$ and a system of operators $\{Z_\alpha\}$ of the form
$$Z_1 = \frac{\partial}{\partial z^1}\,, \quad \ldots, \quad Z_s = \frac{\partial}{\partial z^s}\,.$$
The system becomes
$$\frac{\partial f}{\partial z^\sigma} = 0 \quad (\sigma = 1, \ldots, s).$$
If $s < N$, solutions of the latter system are the functions $z^{s+1}, \ldots, z^N$, which are obviously functionally independent, and any solution has the form
$$f = f(z^{s+1}, \ldots, z^N).$$
Returning to the initial coordinates $(x)$, one obtains solutions of the system (3.1.3),
$$f^\tau = z^\tau(x) \quad (\tau = s + 1, \ldots, N),$$
with the properties mentioned in Lemma 3.2. This proves the lemma.
In addition, note that when $s = N$ the complete system has no functionally independent solutions at all, for it can be satisfied only by a constant.
Note also that the above proof of Lemma 3.2 contains, in fact, an algorithm for constructing functionally independent solutions of the system (3.1.3). The basic element of the algorithm is the transition $\{X_\alpha\} \to \{Y'_{\alpha'}\}$, requiring only the integration of ordinary differential equations.
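The first step of this procedure, straightening a single operator to a translation, can be illustrated for $X = x\,\partial/\partial x$ on $x > 0$, assuming the solution $y^1 = \ln x$ of $X y^1 = 1$; the numerical check below is a sketch with hypothetical helper names.

```python
import math

# In the coordinate y1 = ln x the operator X = x d/dx becomes d/dy1,
# which is confirmed by checking X y1 = 1 at several points.

def X(f, x, h=1e-6):
    """Apply X = x d/dx to f at the point x (central difference)."""
    return x * (f(x + h) - f(x - h)) / (2 * h)

y1 = math.log                 # straightening coordinate: y1(x) = ln x
vals = [X(y1, x) for x in (0.5, 1.0, 3.7)]
print(vals)                   # each value is ~ 1.0, i.e. X = d/dy1
```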

3.1.5 Computation of invariants

Let us turn back to the problem of invariants of the group $G^N_r$ with the basis set of operators (3.1.1) of its Lie algebra $L^N_r$. Let us introduce the matrix of the coordinates of the operators $X_\alpha$:
$$M = \left(\xi^i_\alpha(x)\right), \qquad (3.1.4)$$
where $\alpha$ is the number of the row, $i$ is the number of the column, and $\alpha = 1, \ldots, r$; $i = 1, \ldots, N$. The general rank of the matrix $M$ is denoted by $R$, so that $R = R(M)$. The following result is formulated in this notation.
Theorem 3.1. The group $G^N_r$ has invariants if and only if $R < N$. If this inequality is satisfied, then there exist $t = N - R$ functionally independent invariants $I^\tau(x)$ ($\tau = 1, \ldots, t$) of the group $G^N_r$ such that any of its invariants is a function of them.
Proof. Since the $X_\alpha$ span a Lie algebra $L^N_r$, the system of operators (3.1.1) is closed with respect to the operation of commutation, but the operators can be linearly connected. The maximum number of linearly unconnected operators $X_\alpha$ equals the general rank $R$ of the matrix $M$ (3.1.4). Consider the system (3.1.2) and eliminate from it the operators expressed via $R$ unconnected operators. Then one obtains a complete system of $R$ operators. If $R = N$, there are no invariants. If $R < N$, the statement follows from Lemma 3.2. Theorem 3.1 is proved.
Note that the algorithm described in the proof of Lemma 3.2 is efficient for finding invariants in practice. Let us consider an example of its realization.
Example 3.1. Consider the Lie algebra $L^3_3$ with the basis
$$X_1 = z\frac{\partial}{\partial y} - y\frac{\partial}{\partial z}\,, \qquad X_2 = -z\frac{\partial}{\partial x} + x\frac{\partial}{\partial z}\,, \qquad X_3 = y\frac{\partial}{\partial x} - x\frac{\partial}{\partial y}\,.$$
These operators are linearly connected, namely
$$x X_1 + y X_2 + z X_3 = 0.$$
Since this is the only connection, we have $R = 2$ and, according to Theorem 3.1, there is one invariant ($t = N - R = 3 - 2 = 1$). Let us find it. First, let us determine the invariants of the operator $X_1$, which are obviously $x$ and $\rho = \sqrt{y^2 + z^2}$. Let us turn to the new variables $x, \rho, z$. According to the formula (2.3.7) from §1.2, one obtains
$$Y_1 = -y\frac{\partial}{\partial z}\,, \qquad Y_2 = -z\frac{\partial}{\partial x} + \frac{xz}{\rho}\frac{\partial}{\partial \rho}\,, \qquad Y_3 = y\frac{\partial}{\partial x} - \frac{xy}{\rho}\frac{\partial}{\partial \rho}\,,$$
or, turning to equivalent operators,
$$Y'_1 = \frac{\partial}{\partial z}\,, \qquad Y'_2 = \rho\frac{\partial}{\partial x} - x\frac{\partial}{\partial \rho}\,,$$
and the operator $Y_3$ is eliminated as it is linearly connected with $Y_2$:
$$Y_3 = -\frac{y}{z}\,Y_2.$$
The invariant of the operator $Y'_2$ on the plane $(x, \rho)$ is
$$I = x^2 + \rho^2.$$
This is the desired invariant of the corresponding group $G^3_3$. Turning back to the initial coordinates, one finally obtains
$$I = x^2 + y^2 + z^2.$$
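The result of Example 3.1 is easy to verify numerically: each rotation operator $X_\alpha$ annihilates $I = x^2 + y^2 + z^2$. The sketch below (helper names hypothetical) evaluates $X_\alpha I$ by central differences at an arbitrary point.

```python
# Each operator of the rotation algebra L^3_3 applied to I = x^2+y^2+z^2
# gives zero, confirming X_alpha I = 0 at the chosen (arbitrary) point.

def apply_op(xi, f, p, h=1e-6):
    """(xi^i dI/dx^i)(p) by central differences."""
    total = 0.0
    v = xi(p)
    for i in range(len(p)):
        pp, pm = list(p), list(p)
        pp[i] += h
        pm[i] -= h
        total += v[i] * (f(pp) - f(pm)) / (2 * h)
    return total

I = lambda p: p[0] ** 2 + p[1] ** 2 + p[2] ** 2
X1 = lambda p: [0.0, p[2], -p[1]]    # z d/dy - y d/dz
X2 = lambda p: [-p[2], 0.0, p[0]]    # -z d/dx + x d/dz
X3 = lambda p: [p[1], -p[0], 0.0]    # y d/dx - x d/dy

p = [1.5, -0.4, 2.0]
derivs = [apply_op(X, I, p) for X in (X1, X2, X3)]
print(derivs)    # all three are ~ 0
```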

Let us point out some other notions connected with the existence of invariants of $G^N_r$. A group $G^N_r$ is said to be transitive if it has no invariants. A transitive group is characterized by the relations $r \ge R = N$. If $r = R$ the group is said to be simply transitive, and if $r > R$ it is termed multiply transitive.
If the group $G^N_r$ has invariants, it is said to be intransitive. By virtue of Theorem 3.1, a criterion of intransitivity is the inequality $R < N$.
These terms are connected with the property of a group to contain transformations $T_a$ mapping every point $x \in E^N$ to any other point $x'$.
The notion of a differential invariant of the group $G^N_r$ is a direct extension of Definition 3.1 to the prolonged group $\widetilde G^{\widetilde N}_r$. Therefore, we do not dwell on it here and only mention that, since the dimension of the space $E^{\widetilde N}$ increases without bound under successive prolongations while the ranks $\widetilde R$ remain bounded by the number $r$, any group $G^N_r$ has differential invariants (possibly of order higher than one).

3.2 Invariant manifolds

3.2.1 Invariant manifolds criterion

Definition 3.4. A manifold $\mathcal{M} \subset E^N$ is called an invariant manifold of a group $G^N_r$ if
$$T_a x \in \mathcal{M}$$
for every point $x \in \mathcal{M}$ and for all transformations $T_a \in G^N_r$.
As in §1.3, we consider manifolds $\mathcal{M}$ given regularly by the equations
$$\mathcal{M}: \quad \psi^\sigma(x) = 0 \quad (\sigma = 1, \ldots, s), \qquad R\left.\left(\frac{\partial \psi^\sigma}{\partial x^i}\right)\right|_{\mathcal{M}} = s. \qquad (3.2.1)$$
Theorem 3.2. The manifold $\mathcal{M}$ regularly given by Eqs. (3.2.1) is an invariant manifold of the group $G^N_r$ if and only if the equations
$$X\psi^\sigma(x)\big|_{\mathcal{M}} = 0 \quad (\sigma = 1, \ldots, s) \qquad (3.2.2)$$
hold for all operators $X \in L^N_r$.

Proof. It is evident that $\mathcal{M}$ is an invariant manifold of the group $G^N_r$ if and only if $\mathcal{M}$ is an invariant manifold of every subgroup $G_1 \subset G^N_r$. Therefore, Theorem 3.2 is a corollary of Theorem 1.5.
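The criterion (3.2.2) requires $X\psi$ to vanish only on $\mathcal{M}$, not identically. A minimal sketch, with a hand-picked example not taken from the text: for the dilation operator $X = x\,\partial/\partial x + y\,\partial/\partial y$ and $\psi = x^2 - y^2$ one has $X\psi = 2\psi$, so the manifold $\psi = 0$ is invariant although $X\psi \ne 0$ off it.

```python
# X psi vanishes at points of M (psi = 0) but not at generic points,
# which is exactly the situation covered by the criterion (3.2.2).

def X_psi(p, h=1e-6):
    """Apply X = x d/dx + y d/dy to psi at p by central differences."""
    x, y = p
    dpsi_dx = (psi((x + h, y)) - psi((x - h, y))) / (2 * h)
    dpsi_dy = (psi((x, y + h)) - psi((x, y - h))) / (2 * h)
    return x * dpsi_dx + y * dpsi_dy

psi = lambda p: p[0] ** 2 - p[1] ** 2

on_M = (2.0, 2.0)        # a point of M (psi = 0)
off_M = (2.0, 1.0)       # a point outside M
print(X_psi(on_M), X_psi(off_M))   # ~ 0 on M, nonzero (= 6) off M
```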

3.2.2 Induced group and its Lie algebra

Substantially new facts connected with $G^N_r$ for $r > 1$ begin with the following definition, in which the matrix $M$ (3.1.4) and its general rank $R$ are meant.
Definition 3.5. A manifold $\mathcal{M} \subset E^N$ is termed a nonsingular manifold of the group $G^N_r$ if
$$R(M|_{\mathcal{M}}) = R.$$
Otherwise, i.e. when the rank of $M$ decreases on $\mathcal{M}$ as compared to its general rank, the manifold $\mathcal{M}$ is said to be a singular manifold of the group $G^N_r$.
Let $\bar x$ be the points of an invariant manifold $\mathcal{M}$ of the group $G^N_r$. Then, by definition,
$$\bar x' = T_a \bar x \in \mathcal{M}.$$
Hence, a family $\{T_a\}$ of point transformations of $\mathcal{M}$ into itself is determined on $\mathcal{M}$. Manifestly, this family generates a local Lie group denoted further by $G^N_r|_{\mathcal{M}}$.
Definition 3.6. The group $G^N_r|_{\mathcal{M}}$ is called the group induced by the group $G^N_r$ on its invariant manifold $\mathcal{M}$.
Let us construct the Lie algebra of the induced group $G^N_r|_{\mathcal{M}}$. To this end, we use the fact that $\mathcal{M}$ is given regularly by Eqs. (3.2.1) and introduce the system of coordinates $(y)$ in $E^N$ by the formulae
$$y^\sigma = \psi^\sigma(x) \quad (\sigma = 1, \ldots, s), \qquad y^\tau = x^\tau \quad (\tau = s + 1, \ldots, N). \qquad (3.2.3)$$
Then $y^{s+1}, \ldots, y^N$ can be taken as coordinates of the point $\bar y = \bar x$ on $\mathcal{M}$. Turning to the variables $y$ in the operator
$$X = \xi^i(x)\frac{\partial}{\partial x^i}$$
and taking the result on $\mathcal{M}$, one obtains the operators of the induced group $G^N_r|_{\mathcal{M}}$ in the form
$$X|_{\mathcal{M}} = \bar\xi^\tau(\bar y)\frac{\partial}{\partial y^\tau}\,, \qquad \bar\xi^\tau(\bar y) = \xi^\tau|_{\mathcal{M}} \quad (\tau = s + 1, \ldots, N) \qquad (3.2.4)$$
due to the criterion (3.2.2) of invariance of $\mathcal{M}$.
The operator (3.2.4) is called the operator induced by the operator $X$ on the invariant manifold $\mathcal{M}$. Manifestly, the operation of transition from $X$ to $X|_{\mathcal{M}}$ is linear. Let us demonstrate that it also preserves the commutator. To this end, invoke Theorem 1.10, according to which a manifold $\mathcal{M}$ that is invariant with respect to $X_\alpha, X_\beta$ is invariant with respect to the commutator $[X_\alpha, X_\beta]$ as well. Therefore, using (3.2.4), one obtains
$$[X_\alpha, X_\beta]\big|_{\mathcal{M}} = [X_\alpha, X_\beta](y^i)\big|_{\mathcal{M}}\frac{\partial}{\partial y^i} = [X_\alpha, X_\beta](y^\tau)\big|_{\mathcal{M}}\frac{\partial}{\partial y^\tau}$$
$$= \left[X_\alpha(X_\beta y^\tau) - X_\beta(X_\alpha y^\tau)\right]\big|_{\mathcal{M}}\frac{\partial}{\partial y^\tau} = \left(\bar\xi^\theta_\alpha \frac{\partial \bar\xi^\tau_\beta}{\partial y^\theta} - \bar\xi^\theta_\beta \frac{\partial \bar\xi^\tau_\alpha}{\partial y^\theta}\right)\frac{\partial}{\partial y^\tau} = \left[X_\alpha|_{\mathcal{M}}, X_\beta|_{\mathcal{M}}\right],$$
which was to be proved. Thus, the operators $X|_{\mathcal{M}}$ induced by all operators $X \in L^N_r$ compose a Lie algebra denoted by $L^N_r|_{\mathcal{M}}$ and called the Lie algebra induced by the Lie algebra $L^N_r$ on its invariant manifold $\mathcal{M}$. The induced Lie algebra $L^N_r|_{\mathcal{M}}$ of the operators (3.2.4) is precisely the Lie algebra of the induced group $G^N_r|_{\mathcal{M}}$.
The above fact means that there is a homomorphism $\psi$ of the algebra $L^N_r$ onto the Lie algebra $L^N_r|_{\mathcal{M}}$ given by the formula
$$\psi(X) = X|_{\mathcal{M}}.$$
The kernel of this homomorphism consists of the operators $X$ corresponding to the subgroups $G_1 \subset G^N_r$ whose transformations leave the points of the manifold $\mathcal{M}$ unaltered. These are the operators whose coordinates satisfy the system of equations
$$\xi^i(\bar x) = 0 \quad (i = 1, \ldots, N)$$
for all $\bar x \in \mathcal{M}$.

3.2.3 Theorem on representation of nonsingular invariant manifolds

Let us consider the problem of constructing invariant manifolds of the group $G^N_r$. If $G^N_r$ is intransitive and $I^\tau(x)$ ($\tau = 1, \ldots, t$) is a complete set of its invariants, then any manifold $\mathcal{M}$ given by a system of equations of the form
$$\Phi^\sigma(I^1(x), \ldots, I^t(x)) = 0 \quad (\sigma = 1, \ldots, s) \qquad (3.2.5)$$
is an invariant manifold. This follows directly from Definitions 3.4 and 3.1. Let us demonstrate that this procedure of generating invariant manifolds of the group $G^N_r$ is, in a sense, the most general one.
Theorem 3.3. The group $G^N_r$ has nonsingular invariant manifolds if and only if $R < N$. If this inequality is satisfied and if
$$I^\tau(x) \quad (\tau = 1, \ldots, t = N - R)$$
is a complete set of functionally independent invariants of the group $G^N_r$, then any of its nonsingular invariant manifolds can be given by a system of equations of the form (3.2.5).
Proof. Let $\mathcal{M}$ be regularly given by Eqs. (3.2.1). The invariance conditions (3.2.2) written for the basis operators $X_\alpha$,
$$\left.\xi^i_\alpha \frac{\partial \psi^\sigma}{\partial x^i}\right|_{\mathcal{M}} = 0 \quad (\alpha = 1, \ldots, r),$$
show that there are linear dependencies between the columns of the matrix
$$M|_{\mathcal{M}},$$
so that its rank is less than $N$. Since $\mathcal{M}$ is a nonsingular manifold of $G^N_r$, it follows that $R < N$ by virtue of Definition 3.5.
Conversely, let us assume that $R < N$. Consider a nonsingular invariant manifold $\mathcal{M}$ of the group $G^N_r$ given regularly by Eqs. (3.2.1). Note that the transformations of the induced group $G^N_r|_{\mathcal{M}}$ act in a space of $N - s$ dimensions. Therefore, $G^N_r|_{\mathcal{M}}$ has a complete set of $N - s - R$ functionally independent invariants. Now let us take the invariants $I^\tau(x)$ ($\tau = 1, \ldots, t$) and consider them on $\mathcal{M}$. These are functions $I^\tau(\bar x)$ which are invariants of the induced group $G^N_r|_{\mathcal{M}}$, so that we have $t = N - R$ of its invariants. Assume that there are $N - R - s'$ functionally independent invariants among them; their number cannot be higher than the number of functionally independent invariants of the group $G^N_r|_{\mathcal{M}}$, so that
$$N - R - s' \le N - s - R,$$
whence
$$s \le s'.$$
Moreover, by virtue of Lemma 3.1, there exist $s'$ functionally independent functions $\Phi^\sigma(z^1, \ldots, z^t)$ such that
$$\Phi^\sigma(I^1(\bar x), \ldots, I^t(\bar x)) \equiv 0 \quad (\sigma = 1, \ldots, s'). \qquad (a)$$
Let us define a manifold $\mathcal{M}' \subset E^N$ given by the equations
$$\mathcal{M}': \quad \Phi^\sigma(I^1(x), \ldots, I^t(x)) = 0 \quad (\sigma = 1, \ldots, s') \qquad (b)$$
with these functions $\Phi^\sigma$. The manifold $\mathcal{M}'$ contains the given invariant manifold $\mathcal{M}$ because Equations (b) are satisfied identically at the points $\bar x \in \mathcal{M}$ by virtue of (a). Further, the dimension of $\mathcal{M}'$ is $N - s'$, and therefore the inclusion $\mathcal{M} \subset \mathcal{M}'$ provides the inequality
$$N - s' \ge N - s,$$
whence
$$s' \le s.$$
Thus, $s = s'$. The equality of the dimensions of the manifolds $\mathcal{M}$ and $\mathcal{M}'$ and the inclusion $\mathcal{M} \subset \mathcal{M}'$ entail that
$$\mathcal{M}' = \mathcal{M}.$$
Since Equations (b) of the manifold $\mathcal{M}'$ have the required form (3.2.5), Theorem 3.3 is proved.
Theorem 3.3 can also be termed a theorem on the representation of nonsingular invariant manifolds of the group $G^N_r$. Singular invariant manifolds may have no representation of the form (3.2.5). In order to find them, one has to compose a manifold $\mathcal{M}$ given by the system of equations obtained by equating to zero all minors of the maximal order of the matrix $M$ (3.1.4). This $\mathcal{M}$ should then be checked for invariance by means of Theorem 3.2.
Let us introduce a numerical characteristic of an invariant manifold of the group $G^N_r$ which will be of importance in what follows. The dimension of the manifold $\mathcal{M}$ is denoted by $\dim \mathcal{M}$.
Definition 3.7. The number
$$\rho = t - (N - \dim \mathcal{M}) = \dim \mathcal{M} - R,$$
where $R = R(M)$, is called the rank of the invariant manifold $\mathcal{M}$ of the group $G^N_r$.
In other words, the rank of the invariant manifold $\mathcal{M}$ of the group $G^N_r$ is the dimension of $\mathcal{M}$ in the space of invariants of the group. Here the space of invariants is understood as the space $E^t(I)$ whose points have as coordinates the values of the invariants $I^1, \ldots, I^t$ of the group $G^N_r$.

3.2.4 Differential invariant manifolds

Differential invariant manifolds of the group $G^N_r$ are determined likewise. Namely, they are manifolds invariant with respect to transformations of the prolonged group $\widetilde G^{\widetilde N}_r$. They are manifolds in the prolonged space $E^{\widetilde N}$ and will be denoted by $\widetilde{\mathcal{M}}$. Naturally, the notion of a nonsingular $\widetilde{\mathcal{M}}$ is connected with the general rank of the matrix $\widetilde M$ whose entries are the coordinates of the prolonged operators $\widetilde X_\alpha$. Every nonsingular $\widetilde{\mathcal{M}}$ can be represented by a system of equations of the form (3.2.5) but, generally speaking, the left-hand sides will contain differential invariants of the group $G^N_r$.
Let us describe one more procedure for obtaining differential invariant manifolds of the group $G^N_r$. Using the operators of total differentiation (see §1.4)
$$D_i = \frac{\partial}{\partial x^i} + p^l_i\frac{\partial}{\partial u^l} \quad (i = 1, \ldots, n),$$
one can formulate the procedure in the following theorem.
Theorem 3.4. If $I(x, u)$ is an invariant of the group $G^N_r$ and if the manifold determined by the system of equations
$$D_i I(x, u) = 0 \quad (i = 1, \ldots, n) \qquad (3.2.6)$$
is given regularly in $E^{\widetilde N}$, then it is a differential invariant manifold of the group $G^N_r$.
Proof. Let $X$ be an operator arbitrarily taken from $L^N_r$, and let $\widetilde X$ be its prolongation. Applying Lemma 1.3 to the function $I(x, u)$ and noting that $XI(x, u) \equiv 0$, one obtains the identities
$$\widetilde X D_i I(x, u) = -D_i(\xi^j)\, D_j I(x, u).$$
Their right-hand sides vanish on the manifold (3.2.6), and hence
$$\widetilde X D_i I(x, u) = 0$$
on this manifold. Therefore, the manifold (3.2.6) is invariant by virtue of the criterion (3.2.2) of Theorem 3.2, which obviously holds for differential invariant manifolds as well.
All the above reasoning, notions and results can be extended directly to higher prolongations of a group, and hence to differential invariants and differential invariant manifolds of higher orders.
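The mechanism of Theorem 3.4 can be traced in the smallest case $n = m = 1$. Assuming the rotation operator $X = u\,\partial/\partial x - x\,\partial/\partial u$ with invariant $I = x^2 + u^2$ and first-prolongation coordinate $\zeta = -(1 + p^2)$ (the standard prolongation formula, not quoted from the text), the manifold $D_x I = 2x + 2up = 0$ is annihilated by the prolonged operator; the arithmetic below is a sketch with hypothetical names.

```python
# On the manifold D_x I = 0 (here p = -x/u) the prolonged rotation
# operator X~ = u d/dx - x d/du - (1 + p^2) d/dp kills D_x I, so the
# manifold defined by (3.2.6) is a differential invariant manifold.

def DxI(x, u, p):
    return 2 * x + 2 * u * p          # total derivative of I = x^2 + u^2

def X_tilde_on_DxI(x, u, p):
    # coordinates of the prolonged operator: (xi, eta, zeta) = (u, -x, -(1+p^2)),
    # applied to F = 2x + 2up, whose gradient in (x, u, p) is (2, 2p, 2u)
    return u * 2 + (-x) * (2 * p) + (-(1 + p ** 2)) * (2 * u)

x, u = 0.6, 1.5
p = -x / u                            # a point of the manifold D_x I = 0
print(DxI(x, u, p), X_tilde_on_DxI(x, u, p))   # both vanish on the manifold
```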

3.3 Invariant solutions of differential equations

3.3.1 Definition of invariant solutions

Consider a system of differential equations (S) in the notation accepted in §1.5. We assume that (S) admits a local Lie group $H$ of transformations of the space $E^N(x, u)$. A solution
$$u^k = \varphi^k(x) \quad (k = 1, \ldots, m)$$
of the system (S) is considered as a manifold
$$\Phi \subset E^N(x, u).$$
This manifold is also called a solution of the system (S).


Definition 3.8. A solution $\Phi$ is called an invariant $H$-solution of the system (S) if $\Phi$ is an invariant manifold of the group $H$ admitted by the system (S).
In what follows we consider only those solutions $\Phi$ that are nonsingular manifolds of the group $H$. Moreover, it will be assumed that the manifold (S) is a nonsingular differential invariant manifold of the group $H$.
If an invariant $H$-solution $\Phi$ is a nonsingular invariant manifold of the group $H$, then this manifold has a definite rank $\rho$, which is also termed the rank of the invariant $H$-solution $\Phi$. Let us find necessary conditions for the existence of invariant $H$-solutions. Let
$$X_\alpha = \xi^i_\alpha(x, u)\frac{\partial}{\partial x^i} + \eta^k_\alpha(x, u)\frac{\partial}{\partial u^k} \quad (\alpha = 1, \ldots, r) \qquad (3.3.1)$$
be a basis of the Lie algebra of the group $H$. Consider the matrix
$$M = \left(\xi^i_\alpha, \eta^k_\alpha\right), \qquad (3.3.2)$$
where $\alpha$ numbers the rows, and let $R = R(M)$ be the general rank of the matrix. Definition 3.8 and Theorem 3.3 provide a necessary condition for the existence of invariant $H$-solutions: the group $H$ should be intransitive, i.e. it should have invariants. Hence, $R < N$. Let
$$I^\tau = I^\tau(x, u) \quad (\tau = 1, \ldots, t = N - R) \qquad (3.3.3)$$
be a complete set of functionally independent invariants of the group $H$.


According to Theorem 3.3, the equations of a nonsingular invariant $H$-solution $\Phi$ can be written in the form
$$\Phi: \quad \Phi^k(I^1, \ldots, I^t) = 0 \quad (k = 1, \ldots, m), \qquad (3.3.4)$$
where the functions $\Phi^k(I)$ are functionally independent, so that
$$R\left(\frac{\partial \Phi^k}{\partial I^\tau}\right) = m.$$
Moreover, by the definition of solutions of the system (S), all the variables $u^k$ ($k = 1, \ldots, m$) should be determined as functions of $x$ from Eqs. (3.3.4). Therefore, the Jacobian should not vanish:
$$\det\left|\frac{\partial \Phi^k}{\partial u^l}\right| \ne 0,$$
i.e.
$$R\left(\frac{\partial \Phi^k}{\partial u^l}\right) = m.$$
Note that the equation
$$\left(\frac{\partial \Phi^k}{\partial u^l}\right) = \left(\frac{\partial \Phi^k}{\partial I^\tau}\right)\left(\frac{\partial I^\tau}{\partial u^l}\right)$$
and the known fact that the rank of a product of matrices is not higher than the smallest of the ranks of the factors guarantee the following equation:
$$R\left(\frac{\partial I^\tau}{\partial u^l}\right) = m.$$
Note that the general rank $R$ of the matrix
$$\left(\frac{\partial I^\tau}{\partial u^l}\right)$$
is independent of the specific form of the invariants (3.3.3), provided that they are functionally independent.
Thus, we have the following necessary conditions for the existence of invariant $H$-solutions:
$$R(M) < N, \qquad R\left(\frac{\partial I^\tau(x, u)}{\partial u^k}\right) = m. \qquad (3.3.5)$$
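For a single rotation operator $X = u\,\partial/\partial x - x\,\partial/\partial u$ ($N = 2$, $m = 1$, $R = 1$) the conditions (3.3.5) are easy to inspect: $t = 1$, the invariant is $I = x^2 + u^2$, and $\partial I/\partial u = 2u$ has rank $1 = m$ wherever $u \ne 0$. A tiny sketch with hypothetical names:

```python
# Checking the second condition in (3.3.5) for the rotation group in the
# (x, u) plane: the single invariant is I = x^2 + u^2 and the 1x1 matrix
# (dI/du) = (2u) has rank m = 1 at any point with u != 0.

def dI_du(x, u, h=1e-6):
    I = lambda x_, u_: x_ ** 2 + u_ ** 2
    return (I(x, u + h) - I(x, u - h)) / (2 * h)

x, u = 0.3, 1.1
entry = dI_du(x, u)
rank = 1 if abs(entry) > 1e-9 else 0
print(entry, rank)   # entry ~ 2*u, so the rank condition holds here
```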
It will be proved below that when the conditions (3.3.5) hold, invariant $H$-solutions exist, at least "potentially". We will need the following result.
Lemma 3.3. Let $\tilde I(x, u, p)$ be a differential invariant of the group $H$ and let $\Phi^k(x, u)$ be invariants of the group $H$ satisfying the condition
$$\det\left|\frac{\partial \Phi^k}{\partial u^l}\right| \ne 0.$$
If the equations
$$D_i \Phi^k(x, u) = 0 \quad (i = 1, \ldots, n;\; k = 1, \ldots, m)$$
are satisfied identically upon the substitution
$$p^k_i = \psi^k_i(x, u),$$
then the function
$$I^*(x, u) = \tilde I(x, u, \psi(x, u))$$
is an invariant of the group $H$.
Proof. Acting by any operator X from the Lie algebra L of the group H on I*(x, u),
one obtains

    X I* = [ξ^i ∂Ĩ/∂x^i + η^k ∂Ĩ/∂u^k + (∂Ĩ/∂p_i^k) X ψ_i^k]|_{p=ψ(x,u)}.

Since Ĩ(x, u, p) is an invariant of the prolonged group H̃, the equation

    X̃ Ĩ = ξ^i ∂Ĩ/∂x^i + η^k ∂Ĩ/∂u^k + ζ_i^k ∂Ĩ/∂p_i^k = 0

is satisfied identically with respect to the variables x, u, p. This equation becomes
an identity with respect to the variables x, u upon the substitution p_i^k = ψ_i^k(x, u).
Hence, in order to prove the equation X I* = 0 (which is the statement of the lemma
by virtue of the criterion (3.5.2)) it is sufficient to prove the identity

    ζ_i^k(x, u, ψ(x, u)) = X ψ_i^k(x, u).                           (c)

To this end, we let

    F = Φ^k(x, u)

in the identity (1.4.18) in Lemma 1.3 of Chapter 1. Then, by virtue of the conditions
of Lemma 3.3, one obtains the identity

    X̃ D_i Φ^k |_{p=ψ(x,u)} = 0,

which is written in detail as

    X(∂Φ^k/∂x^i) + ψ_i^l X(∂Φ^k/∂u^l) + ζ_i^l(x, u, ψ(x, u)) ∂Φ^k/∂u^l = 0.

On the other hand,

    ∂Φ^k/∂x^i + ψ_i^l ∂Φ^k/∂u^l ≡ 0

by the condition of our lemma. Applying the operator X to this identity, one obtains

    X(∂Φ^k/∂x^i) + ψ_i^l X(∂Φ^k/∂u^l) + (X ψ_i^l) ∂Φ^k/∂u^l = 0.

Comparing these two identities, one obtains

    [ζ_i^l(x, u, ψ(x, u)) − X ψ_i^l(x, u)] ∂Φ^k/∂u^l = 0,

whence, by virtue of the assumed inequality

    det(∂Φ^k/∂u^l) ≠ 0,

one obtains (c), and Lemma 3.3 is proved.

3.3.2 The system (S/H)

Theorem 3.5. If the system (S) admits a group H satisfying the conditions (3.3.5),
then there exists a system (S/H) which connects only the invariants I^τ (τ = 1, …, t),
the functions Φ^k(I) (k = 1, …, m) and the derivatives of Φ^k with respect to I^τ, and
which has the following property: the functions Φ^k(I) provide a solution of the system
(S/H) for any invariant H-solution Φ written in the form (3.3.4). Conversely, any
solution of the system (S/H) for which

    R(∂Φ^k/∂I^τ) = m

provides an invariant H-solution Φ in the implicit form (3.3.4).
Proof. For the sake of simplicity, let us suppose that the system (S) is of the first
order and give the algorithm for constructing the system (S/H). It is very simple:
write Eqs. (3.3.4) with indefinite functions Φ^k(I), apply the operators of total dif-
ferentiation D_i to them and obtain the system of equations

    (∂Φ^k/∂I^τ)(∂I^τ/∂x^i) + (∂Φ^k/∂I^τ)(∂I^τ/∂u^l) p_i^l = 0,     (3.3.6)

whence all the p_i^k (i = 1, …, n; k = 1, …, m) are obtained. The latter operation can be
carried out by virtue of (3.3.5). Substituting the resulting expressions for p_i^k into
the equations (S), we obtain the system (S/H).
Let us demonstrate that the system (S/H) thus constructed in fact connects
only invariants of the group H. To this end, write the differential invariant manifold
S of the group H (which, as agreed above, is supposed to be a nonsingular manifold
for the prolonged group H̃) in an equivalent form via differential invariants of H, namely

    S:  Ω^μ(I, Ĩ) = 0.                                              (3.3.7)

The elimination of the variables p_i^k from the equations S, as described in the
algorithm, consists of substituting the expressions p_i^k = ψ_i^k(x, u), obtained from
Eqs. (3.3.6), into the differential invariants Ĩ = Ĩ(x, u, p). According to Lemma 3.3, the
latter become invariants of the group H, i.e. they are functions of the invariants
I^τ (τ = 1, …, t). The expressions for p_i^k derived from Eqs. (3.3.6) in fact contain the
derivatives ∂Φ^k/∂I^τ. Hence the equations of the system (S/H) connect only the in-
variants I^τ and the derivatives ∂Φ^k/∂I^τ.
If equations (3.3.4) are the equations of a given invariant H-solution Φ, then the
functions Φ^k(I) have a specific form. Upon the above substitution of the expres-
sions for p_i^k into (3.3.7), equations (3.3.7) are satisfied identically (by definition of
a solution) with respect to the variables x, u, and hence with respect to the variables
I. Conversely, if

    R(∂Φ^k/∂I^τ) = m

for some solution of the system (S/H), then equations (3.3.4) provide functions u^k =
ϕ^k(x) by virtue of (3.3.5). The derivatives of these functions coincide with those
derived from the system (3.3.6), and the latter turn the equations S into identities
by construction. Hence, these functions furnish a solution of the system (S), which
is obviously an invariant H-solution. Theorem 3.5 is proved for the case when the
system (S) is of the first order. The alterations to be made in the proof for higher-order
systems are evident.
At first sight the system (S/H) seems more complicated than the original
system (S), because the unknown functions in (S/H) depend on t = N − R = n +
m − R independent variables, and this number can be greater than the number n of
independent variables in the system (S). However, the solutions of the system (S/H)
are manifolds in the t-dimensional space of the point I, and we are interested only
in such solutions as provide m independent equations (3.3.4), i.e. have the
dimension t − m in this t-dimensional space. Therefore, the system (S/H) actually
contains only t − m independent variables. Since this number equals the rank of
the invariant manifold (3.3.4) of the group H, one finally obtains that the system
(S/H) is a system with

    ρ = t − m = n − R                                               (3.3.8)

independent variables. In other words, the number of independent variables equals
the rank of the invariant H-solution.
In this sense the system (S/H) is simpler than the original system (S), since
one can see from (3.3.8) that ρ < n always.
In practice one often comes across the case when the variables x, u in the invariants
of the group H "split" in the following sense. The invariants I^τ(x, u) can be chosen so
that they divide into invariants depending on x and u:

    I^k(x, u)   (k = 1, …, m)

and invariants depending only on x:

    I^{m+r}(x)   (r = 1, …, ρ).

The condition (3.3.5) is written in this case as follows:

    det(∂I^k/∂u^l) ≠ 0.                                             (3.3.9)

If the condition (3.3.9) is satisfied, equations (3.3.4) can always be reduced to the
form

    I^k(x, u) = V^k(I^{m+1}(x), …, I^t(x)),                         (3.3.10)

so that the V^k (k = 1, …, m) are the unknown functions. In this case it is convenient
to introduce special notation for the latter ρ invariants and to write invariant H-
solutions in the form

    I^k(x, u) = v^k(y^1, …, y^ρ),   y^r = I^{m+r}(x)   (k = 1, …, m; r = 1, …, ρ).   (3.3.11)

Then the system (S/H) is just a system for the functions v(y) of the ρ
independent variables y^1, …, y^ρ.
Theorem 3.5 says nothing about the existence of invariant H-solutions. It establishes
only their "potential" existence in the sense of reducibility of the
system (S) to the system (S/H). Indeed, the system (S/H) may have no solutions
and may even be simply inconsistent. Let us consider a simple example.
Let the system (S) consist of one equation for z = z(x, y):

    x ∂z/∂x + y ∂z/∂y = 1.

One can readily verify that this equation admits the group H1 with the operator

    X = x ∂/∂x + y ∂/∂y.

This group has the following invariants in the space E^3(x, y, z):

    I^1 = z,   I^2 = x/y.

One can see that the variables x, y and z are separated here, so that any invariant
H1-solution should have the form

    z = v(x/y).

However,

    x ∂z/∂x + y ∂z/∂y ≡ 0

for such z, with any function v(λ). Therefore, the system (S/H) has the form 0 = 1,
i.e. it is inconsistent.
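The final identity is a one-line computation (our verification):

```latex
z = v\!\left(\tfrac{x}{y}\right):\qquad
z_x = \frac{1}{y}\,v'\!\left(\tfrac{x}{y}\right),\quad
z_y = -\frac{x}{y^{2}}\,v'\!\left(\tfrac{x}{y}\right),
\qquad\text{so}\qquad
x z_x + y z_y = \left(\frac{x}{y}-\frac{x}{y}\right)v'\!\left(\tfrac{x}{y}\right) = 0 .
```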

3.3.3 Examples from one-dimensional gas dynamics

Consider several examples of invariant H-solutions for the system of equations of
one-dimensional gas dynamics (see Eqs. (1.5.23) in Chapter 1):

         u_t + u u_x + (1/ρ) p_x = 0,
    S:   ρ_t + u ρ_x + ρ u_x = 0,                                   (3.3.12)
         p_t + u p_x + γ p u_x = 0.

The Lie algebra admitted by the system (3.3.12) has been calculated in Chapter 1.
We consider here the case γ ≠ 3. In this case the system (3.3.12) admits the Lie
algebra L_6^5 spanned by the operators

    X1 = ∂/∂t,   X2 = ∂/∂x,   X3 = t ∂/∂x + ∂/∂u,   X4 = t ∂/∂t + x ∂/∂x,
                                                                    (3.3.13)
    X5 = x ∂/∂x + u ∂/∂u − 2ρ ∂/∂ρ,   X6 = ρ ∂/∂ρ + p ∂/∂p.

Hence, the system (3.3.12) admits a local Lie group G_6^5 and any subgroup of this
group. We will denote the subgroup H whose Lie algebra has the basis X, Y, …, by
the symbol H<X, Y, …>.
The system (3.3.12) has n = 2 independent variables (t, x) and m = 3 unknown func-
tions (u, ρ, p). In this case N = 2 + 3 = 5. The formula (3.3.8) for the rank of an
invariant H-solution takes the form ρ̂ = 2 − R. Since R > 0, possible
invariant H-solutions are either of rank ρ̂ = 0, R = 2, or of rank ρ̂ = 1, R = 1
(the rank of the solution is denoted by ρ̂ so as not to mix it with the unknown function
ρ in the system (3.3.12)). Examining the operators (3.3.13) thoroughly, one can see
that R = 1 only for subgroups with one operator, i.e. for one-parameter subgroups.
Example 3.2. The subgroup H<X1> has the invariants

    I^1 = u,   I^2 = ρ,   I^3 = p,   I^4 = x.

The variables here are "separated", so that the H-solutions have the form

    u = U(x),   ρ = R(x),   p = P(x),

where U, R, P are unknown functions which, by virtue of Theorem 3.5, should satisfy
the system (S/H). Substituting the above expressions for u, ρ, p and their derivatives

with respect to t, x into Eqs. (3.3.12), one obtains the system (S/H) in the form

    U U′ + (1/R) P′ = 0,   U R′ + R U′ = 0,   U P′ + γ P U′ = 0.

This is a system of ordinary differential equations for U, R, P which is easily inte-
grable. Any solution of this system provides an invariant H<X1>-solution. Here
and in what follows we do not solve the resulting systems (S/H) to the end, for
this is not important in the present lecture notes and, besides, is not always easy to do.
The examples serve only to illustrate the process of forming invariant H-solutions
and the abundance of new possibilities.
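For this particular system, though, the first integrals are immediate; the following sketch of the integration (our computation, assuming U ≠ 0) uses only the three ODEs above:

```latex
% Second equation: (RU)' = UR' + RU' = 0, hence
R\,U = C_1 .
% Third equation: P'/P = -\gamma\,U'/U, hence
P\,U^{\gamma} = C_2 .
% First equation with R = C_1/U:\quad
U U' + \frac{P'}{R} = U\Bigl(U' + \frac{P'}{C_1}\Bigr) = 0
\;\Longrightarrow\; C_1\,U + P = C_3 .
```

These are the familiar steady-flow integrals: constant mass flux, the adiabatic relation, and a momentum integral.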
Example 3.3. The subgroup H<X3> has the invariants

    I^1 = u − x/t,   I^2 = ρ,   I^3 = p,   I^4 = t.

The variables are "separated" and the invariant H-solution has the form

    u = x/t + U(t),   ρ = R(t),   p = P(t).

The system (S/H) appears as follows:

    t U′ + U = 0,   t R′ + R = 0,   t P′ + γ P = 0.
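The reduction in Example 3.3 can be checked mechanically. The following sketch (our computation; the symbol names U, R, P are ours) substitutes the invariant form of the solution into (3.3.12) and verifies both the reduced system and its elementary solutions:

```python
import sympy as sp

t, x, gamma, C1, C3 = sp.symbols('t x gamma C1 C3')
U, R, P = (sp.Function(n)(t) for n in ('U', 'R', 'P'))

# Invariant form of the H<X3>-solution from Example 3.3:
u, rho, p = x/t + U, R, P

# Substitute into the gas dynamics equations (3.3.12); each left-hand
# side must reduce to the corresponding ODE of (S/H), divided by t.
eq1 = u.diff(t) + u*u.diff(x) + p.diff(x)/rho
eq2 = rho.diff(t) + u*rho.diff(x) + rho*u.diff(x)
eq3 = p.diff(t) + u*p.diff(x) + gamma*p*u.diff(x)

assert sp.simplify(t*eq1 - (t*U.diff(t) + U)) == 0
assert sp.simplify(t*eq2 - (t*R.diff(t) + R)) == 0
assert sp.simplify(t*eq3 - (t*P.diff(t) + gamma*P)) == 0

# The ODEs integrate elementarily, e.g. U = C1/t and P = C3*t**(-gamma):
assert sp.simplify(t*sp.diff(C1/t, t) + C1/t) == 0
assert sp.simplify(t*sp.diff(C3*t**(-gamma), t) + gamma*C3*t**(-gamma)) == 0
```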

Example 3.4. The subgroup H<X4> has the invariants

    I^1 = u,   I^2 = ρ,   I^3 = p,   I^4 = x/t.

The solution has the form

    u = U(λ),   ρ = R(λ),   p = P(λ),   λ = x/t.

The system (S/H) has the form

    (U − λ) U′ + (1/R) P′ = 0,
    (U − λ) R′ + R U′ = 0,
    (U − λ) P′ + γ P U′ = 0.

The corresponding H-solutions are known as "self-similar" solutions.


Example 3.5. The subgroup H<X6> has the invariants

    I^1 = u,   I^2 = p/ρ,   I^3 = t,   I^4 = x.

Composing the matrix

    (∂I^τ/∂u^k)

for this case, one can see that its rank equals two. Therefore, the necessary condi-
tion (3.3.5) is not satisfied (for m = 3) and the system (S/H) cannot be constructed.
Invariant H-solutions do not exist.
Example 3.6. The subgroup H<X1 + X6> has the invariants

    I^1 = u,   I^2 = ρ e^{−t},   I^3 = p e^{−t},   I^4 = x.

Here the necessary condition (3.3.5) is fulfilled and the system (S/H) can
be constructed.
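The rank computation behind Examples 3.5 and 3.6 is easy to reproduce; here is a sketch using sympy (our code), with the invariants differentiated with respect to the unknowns (u, ρ, p):

```python
import sympy as sp

t, x, u, rho, p = sp.symbols('t x u rho p', positive=True)

# Example 3.5, H<X6>: invariants u, p/rho, t, x.
J5 = sp.Matrix([u, p/rho, t, x]).jacobian([u, rho, p])
assert J5.rank() == 2          # rank 2 < m = 3: condition (3.3.5) fails

# Example 3.6, H<X1 + X6>: invariants u, rho*exp(-t), p*exp(-t), x.
J6 = sp.Matrix([u, rho*sp.exp(-t), p*sp.exp(-t), x]).jacobian([u, rho, p])
assert J6.rank() == 3          # rank m = 3: condition (3.3.5) holds
```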
The number of such examples of invariant H-solutions of rank ρ̂ = 1 for the
system (3.3.12) can be increased indefinitely. The general form of a one-parameter
subgroup is H<e^α X_α>, so that its invariants are functions of the constants e^α
(α = 1, …, 6). However, it is not so easy to find them in such general form, for
one has to make various assumptions about the constants e^α in the course of the
calculation (the reader may verify this as an exercise). Moreover, the same parameters
enter the system (S/H), which complicates its solution considerably. Most importantly,
however, the main part of this work turns out to be unnecessary due to predetermined
connections between different H-solutions; this circumstance is discussed in §3.4.
Let us take a quick look at invariant H-solutions of rank ρ̂ = 0. In this case
t = m and equations (3.3.4) are equivalent to the equations I^τ = C^τ (τ = 1, …, m), where
the C^τ are constants which are not determined beforehand. The system (S/H) in
this case is just a system of finite equations for the unknowns C^τ. The rank
ρ̂ = 0 for the system (3.3.12) requires R = 2, by virtue of which such solutions are
to be sought on subgroups H with two operators. For example, the subgroup
H<X1, X2> has the invariants u, ρ, p. Hence, the H-solution has the form

    u = C^1,   ρ = C^2,   p = C^3,

where the C^α are constants. The system (S/H) is then satisfied identically for
arbitrary C^k.

3.3.4 Self-similar solutions

The notion of a self-similar solution is often used when investigating problems of
physics and mechanics. The term is usually applied when the solution is such that a "pro-
portional" change of some quantities leaves the other quantities under consideration
unaltered.
To make the formulation more specific, the notion of a dilation operator mentioned
in §1.2 is used. One says that H is a group of dilations if all operators of its Lie
algebra are dilation operators.

An invariant H-solution is termed a self-similar solution in a narrow sense if H
is a group of dilations.
We consider the case when the group H acts in the space E^N(x, u) and suppose that
the basis of its Lie algebra L_r^N consists of the following dilation operators:

    X_α = Σ_{i=1}^{n} λ_α^i x^i ∂/∂x^i + Σ_{k=1}^{m} μ_α^k u^k ∂/∂u^k   (α = 1, …, r),   (3.3.14)

where the λ_α^i, μ_α^k are constants.
One can readily verify that a group of dilations has the following two properties:
it is Abelian, and its Lie algebra L_r^N does not contain linearly connected operators.
Therefore, for a group of dilations one always has r = R. Considering the matrix M
(3.3.2) composed of the coordinates of the operators (3.3.14), one can easily see that
it has the general rank at any point where x^i ≠ 0, u^k ≠ 0 (i = 1, …, n; k = 1, …, m).
Therefore, instead of M, one can consider the matrix M1 obtained from M by setting
x^1 = ⋯ = x^n = u^1 = ⋯ = u^m = 1:

    M1 = ⎛ λ_1^1 … λ_1^n   μ_1^1 … μ_1^m ⎞
         ⎜   ⋮       ⋮       ⋮       ⋮   ⎟                          (3.3.15)
         ⎝ λ_r^1 … λ_r^n   μ_r^1 … μ_r^m ⎠

We have R(M1) = R(M) = r ≤ N = n + m.


Let us find the invariants of the group H with the operators (3.3.14) and find out
when the conditions (3.3.5) hold. First, one must have r < N, for otherwise H is transitive
and has no invariants. Further, let us demonstrate that a complete set of functionally
independent invariants of the group H can be composed of invariants of the form

    I = (x^1)^{θ_1} ⋯ (x^n)^{θ_n} (u^1)^{σ_1} ⋯ (u^m)^{σ_m}.       (3.3.16)

To this end, note that

    X_α I = (λ_α^i θ_i + μ_α^k σ_k) I,

by virtue of which the function I is an invariant of H if and only if the exponents
θ_i, σ_k satisfy the system of linear homogeneous equations

    λ_α^i θ_i + μ_α^k σ_k = 0   (α = 1, …, r).                      (3.3.17)

The matrix of this system is M1 (3.3.15), and since its rank equals r, the
system (3.3.17) has N − r linearly independent vector solutions (θ, σ), namely the
vectors (θ^τ, σ^τ) (τ = 1, …, N − r). According to the formula (3.3.16) these vectors
provide invariants I^τ. These invariants are functionally independent. Indeed, the
general rank of the matrix

    (∂I^τ/∂x^i, ∂I^τ/∂u^k) = ((θ_i^τ/x^i) I^τ, (σ_k^τ/u^k) I^τ),

which is to be calculated at the point x^i = 1, u^k = 1, coincides with the rank of the
matrix (θ_i^τ, σ_k^τ), which equals N − r due to the linear independence of the vectors
(θ^τ, σ^τ).
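The recipe (3.3.16)–(3.3.17) amounts to computing the null space of M1. As an illustration (the choice of example is ours), take the two dilation operators X4 and X5 of the gas-dynamics algebra (3.3.13), acting on (t, x, u, ρ, p):

```python
import sympy as sp

# Rows: exponents (lambda_t, lambda_x, mu_u, mu_rho, mu_p) of the dilations
# X4 = t d/dt + x d/dx  and  X5 = x d/dx + u d/du - 2 rho d/drho.
M1 = sp.Matrix([[1, 1, 0, 0, 0],
                [0, 1, 1, -2, 0]])

basis = M1.nullspace()              # solutions (theta, sigma) of (3.3.17)
assert len(basis) == 5 - M1.rank()  # N - r = 5 - 2 = 3 invariants
for v in basis:
    assert M1 * v == sp.zeros(2, 1)

# Each null vector (theta_1, theta_2, s_1, s_2, s_3) yields an invariant
# I = t**theta_1 * x**theta_2 * u**s_1 * rho**s_2 * p**s_3 as in (3.3.16).
```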
In order to meet the condition (3.3.5) it is necessary and sufficient that the rank
of the matrix (σ_k^τ) equal m. Let us demonstrate that the latter holds if and
only if the rank of the matrix (λ_α^i) equals r. If

    R((λ_α^i)) < r,

then there are numbers ω^α such that ω^α λ_α^i = 0. Then, denoting

    χ_k = ω^α μ_α^k,

one obtains from (3.3.17) that the equations χ_k σ_k = 0 are satisfied for any solution of
the system (3.3.17). Therefore,

    R((σ_k^τ)) < m.

Conversely, if

    R((λ_α^i)) = r,

then the system λ_α^i θ_i = 0 has exactly n − r linearly independent solutions. If we
take them as a part of the complete system of N − r solutions, then the remaining
N − r − (n − r) = m solutions of the system must be such that R((σ_k^τ)) = m. Thus,
finally, the necessary and sufficient condition of "potential" existence of self-similar
H-solutions for the group H with the operators (3.3.14) is

    R((λ_α^i)) = r ≤ n.                                             (3.3.18)

The notion of a group of dilations is closely connected with the so-called the-
ory of dimensions of physical quantities. In order to reveal this connection, let us
construct the finite transformations of the group H with the operators (3.3.14). Doing
so, e.g. in canonical coordinates of the second kind, one obtains the
transformations

    x′^i = a_1^{λ_1^i} ⋯ a_r^{λ_r^i} x^i,   u′^k = a_1^{μ_1^k} ⋯ a_r^{μ_r^k} u^k   (3.3.19)

depending on r independent parameters a_1, …, a_r. Let us term the aggregate a_1^{λ_1^i} ⋯ a_r^{λ_r^i}
the "dimension" of the quantity x^i, and the parameters a_α the "units of dimension".
Similar terms are used for the quantities u^k. In the theory of dimensions the
"dimensions" are denoted as follows:

    [x^i] = a_1^{λ_1^i} ⋯ a_r^{λ_r^i},   [u^k] = a_1^{μ_1^k} ⋯ a_r^{μ_r^k}.

Then the invariants I of the form (3.3.16) are "dimensionless" quantities; this fol-
lows from Eqs. (3.3.17). The theory of dimensions provides the so-called Π-theorem,
claiming that any dimensionless quantity is a function of invariants I of the form
(3.3.16). Here it is a particular case of Theorem 3.1 concerning the group of dila-
tions H with the operators (3.3.14).
The notion of a self-similar solution formulated above in a narrow sense is not
satisfactory, because it is not invariant with respect to the choice of the coordinate
system in E^N. Upon a change of the coordinate system, the dilation operator (3.3.14)
is, in general, no longer a dilation operator. For instance, using the coordinates

    y^i = log x^i,   v^k = log u^k,                                 (3.3.20)

one obtains from the X_α the translation operators

    Y_α = λ_α^i ∂/∂y^i + μ_α^k ∂/∂v^k.                              (3.3.21)
In this connection, the following definition, which takes into account the main
peculiarity of a dilation group, is suggested.
Definition 3.9. An invariant H-solution is said to be self-similar in a broad sense if
the group H is Abelian.
Note that the dilation group has two peculiarities: it is Abelian and it does not
contain linearly connected operators. In the above definition we use only the first
property. The second property appears to be of no importance from the viewpoint
of invariants due to the following statement.
Theorem 3.6. An Abelian group H such that R(M) = R contains a subgroup H_R
which is similar to a group of dilations and has the same value of R.

Proof. The equation R(M) = R guarantees that the Lie algebra of the group H con-
tains R linearly unconnected operators

    X_α = ξ_α^i(x) ∂/∂x^i   (α = 1, …, R).                          (3.3.22)

Since H is Abelian, one has [X_α, X_β] = 0 for all α, β. Therefore, there exists a
subgroup H_R ⊂ H for which the operators (3.3.22) provide a basis of its Lie algebra.
Since these operators are linearly unconnected, we have

    R((ξ_α^i)) = R.

In order to reduce the group H_R to a group of dilations, it suffices to reduce it to a
translation group. This follows from the transformation of (3.3.14) into (3.3.21) by means
of (3.3.20). Let us prove that one can determine functions ϕ^i(x) (i = 1, …, N) such
that they satisfy the system of equations

    X_α ϕ^i = δ_α^i   (i = 1, …, N; α = 1, …, R)                    (3.3.23)

and are functionally independent. If such functions exist, then turning to the coordinates
y^i = ϕ^i(x) in E^N, one has the transformed operators

    X_α ↦ Y_α = X_α(ϕ^i) ∂/∂y^i = ∂/∂y^α,

and the theorem is proved. In order to construct the function ϕ^i(x) for i ≤ R,
we search for it in the implicit form F(x, ϕ) = 0, where F(x, ϕ) is such that

    ∂F/∂ϕ ≠ 0.

The equations

    ∂F/∂x^j + (∂F/∂ϕ)(∂ϕ/∂x^j) = 0

show that it is sufficient to determine the function F(x, ϕ) as a solution of the system

    Z_α F ≡ X_α F + δ_α^i (∂F/∂ϕ) = 0   (α = 1, …, R)               (3.3.24)

satisfying the condition

    ∂F/∂ϕ ≠ 0.
Let us demonstrate that the system (3.3.24) is complete (Definition 3.3). Indeed,
firstly, we see that

    [Z_α, Z_β] = [X_α, X_β] = 0,

and secondly, the operators Z_α (α = 1, …, R) are linearly unconnected, for any linear
connection between them would yield a linear connection between the operators X_α (3.3.22),
which are linearly unconnected by assumption. Since R ≤ N and the operators Z_α
act in the space of the point (x^1, …, x^N, ϕ), which has the dimension N + 1, Lemma 3.2
shows that the system (3.3.24) has at least one independent solution F(x, ϕ). Among
the solutions of the system (3.3.24) there is certainly one for which

    ∂F/∂ϕ ≠ 0.

Otherwise all solutions of the system (3.3.24) would also be solutions of the system
X_α F = 0 (α = 1, …, R). This is impossible, because the latter system has only N − R
functionally independent solutions, while (3.3.24) has N + 1 − R. Thus we have
obtained the functions ϕ^i(x) for i ≤ R. If i > R, then equations (3.3.23) take the
form

    X_α ϕ^i = 0   (α = 1, …, R),

so that the ϕ^i are invariants of the group H_R. By virtue of Lemma 3.2, there exist N − R
functionally independent invariants; we take them as the functions ϕ^{R+1}, …, ϕ^N. Thus
the system of functions ϕ^i(x) satisfying Eqs. (3.3.23) is constructed. Let us demon-
strate that these ϕ^i(x) are functionally independent. Indeed, if
    det(∂ϕ^i/∂x^j) ≡ 0,

then there are functions μ_i = μ_i(x) such that

    μ_i ∂ϕ^i/∂x^j = 0   (j = 1, …, N).

Multiplying this equation by ξ_α^j and summing over j, one obtains

    μ_i X_α ϕ^i = μ_α = 0   (α = 1, …, R).

Hence, the previous equalities take the form

    μ_{R+1} ∂ϕ^{R+1}/∂x^j + ⋯ + μ_N ∂ϕ^N/∂x^j = 0   (j = 1, …, N)

and mean that the functions ϕ^{R+1}, …, ϕ^N are functionally dependent (since not all of
μ_{R+1}, …, μ_N are zero). As this contradicts the choice of the functions ϕ^{R+1}, …, ϕ^N,
the identity

    det(∂ϕ^i/∂x^j) ≡ 0

is impossible. Theorem 3.6 is proved.
Corollary 3.1. For any self-similar H-solution in a broad sense there is a sub-
group H_R ⊂ H and a system of coordinates in the space E^N in which this solution
is a self-similar H_R-solution in a narrow sense.

Corollary 3.2. Any self-similar H-solution in a broad sense can be obtained as a so-
lution independent of some of the independent variables in a certain system of coordinates
in E^N.

The latter follows from the possibility of reducing the group H_R to a translation
group.
Note that since a one-parameter group H1 is always Abelian, all invariant
H1-solutions of rank ρ̂ = 1 for the equations of gas dynamics (3.3.12) are self-
similar in a broad sense.

3.4 Classification of invariant solutions

3.4.1 Invariant solutions for similar subalgebras

It has been demonstrated in the previous section that if the system (S) admits a group
G, then one can search for particular solutions of the system (S), namely invariant
H-solutions, for any subgroup H ⊂ G. An important numerical characteristic of
invariant H-solutions is their rank ρ. It equals the number of independent variables
in the system (S/H).
Therefore, the main difference (classification) of invariant H-solutions is made
with respect to the ranks of these solutions. For the rank ρ one has the formula (3.3.8),

    ρ = n − R,

where R > 0. Accordingly, the rank can take the values ρ = 0, 1, …, n − 1.


When ρ is fixed, various H-solutions are obtained on different subgroups having
one and the same value of the rank R of the matrix M composed of the coordinates of
the operators of these subgroups. Thus, the problem arises of enumerating all subgroups
H ⊂ G with a given R. Note that the number R is connected with the fact that the elements
of the group G are transformations in E^N. If one takes some other group G′, isomorphic to
G, it may have no naturally determined number R, though G′ and G have, as a matter of
fact, the same subgroups. Therefore, from the group viewpoint it is more convenient
to operate with the number r, the order of the subgroup H, instead of R. Since the
difference r − R equals the number of independent linear connections between the basis
operators of the Lie algebra of the group H, one can easily sort the subgroups with
respect to the values of R.
Thus, one arrives at a purely group-theoretic problem of enumerating the subgroups H of a
given order of the local Lie group G. The set of such subgroups is infinite. However,
from the viewpoint of invariant H-solutions it is not necessary to know all such
subgroups, as follows from the statements below.
Definition 3.10. Subgroups H′ and H of the group G are said to be similar (in the
group G) if there exists an element (transformation) T ∈ G such that H′ = T H T^{-1}.

Lemma 3.4. Let two subgroups H′ and H be similar, with

    H′ = T H T^{-1},

and let Φ be any invariant H-solution. Then

    Φ′ = T Φ

is an invariant H′-solution.

Proof. The property of Φ of being an invariant H-solution can be expressed
by the formula HΦ = Φ. Therefore, one has

    H′Φ′ = T H T^{-1} T Φ = T H Φ = T Φ = Φ′,

which was to be proved.


If invariant H-solutions Φ are known, then one can consider the solutions Φ′ = TΦ
to be known as well. The latter solutions result from Φ upon applying a simple
transformation T (which is assumed to be known in its turn) without integration of
any system of differential equations.

Hence, in solving the above group problem it is sufficient to know the subgroups
H ⊂ G up to similarity in the sense of Definition 3.10. Note that the relation of
similarity of groups H′ and H, expressed by the formula

    H′ = T H T^{-1},

is an equivalence relation. Therefore, the whole set of subgroups of a given or-
der is partitioned into classes of similar subgroups. Accordingly, the group-theoretic
problem of enumerating the subgroups of a given order of the group G should be con-
sidered as the problem of enumerating the classes of similar subgroups.
The one-to-one correspondence between subgroups of a local Lie group G and
subalgebras of its Lie algebra L, established in §2.4, makes it sufficient to solve this
problem for Lie algebras. Note that the similarity transformation of subgroups ex-
pressed by the formula H′ = T H T^{-1} is nothing else but an inner automorphism of
the group G (see Definition 2.15). Hence, the class of subgroups similar to a given
subgroup H is obtained by subjecting it to the various inner automorphisms Γ_a ∈ G_A,
where G_A is the group of inner automorphisms of the group G. In the Lie algebra L
this corresponds to transformations of the subalgebra L(H) ⊂ L by means of the group
of inner automorphisms of the Lie algebra L.

3.4.2 Classes of similar subalgebras

In the finite-dimensional case, L = L_r, the group of inner automorphisms, denoted
here by A(L), is realized as a group of matrices (l_α^β(a)) by the formulae
(2.5.2) of Chapter 2.
Let X_α (α = 1, …, r) be a basis of the Lie algebra L_r^N of operators of the group
G_r. Let us write an arbitrary operator X ∈ L_r^N as an expansion with
respect to this basis with the coordinates e = (e^1, …, e^r):

    X = e^α X_α.

In the notation of §2.5, the automorphism l_a with the matrix (l_α^β(a)) acts on the basis
"vectors" X_α of the Lie algebra L_r^N by the formula

    X′_α = l_α^β(a) X_β   (α = 1, …, r).                            (3.4.1)

Hence, any operator X ∈ L_r^N is transformed by this automorphism according to the
formula

    X′ = e^α l_α^β(a) X_β,

which means that in a fixed basis X_α the same automorphism can be considered as
the transformation of the vector e into the vector e′ with the coordinates

    e′^β = l_α^β(a) e^α   (β = 1, …, r).                            (3.4.2)

By virtue of the formula

    ∂l_β^α(a)/∂a^γ |_{a=0} = C_{βγ}^α,

resulting from Eqs. (2.5.3) of Chapter 2, the operators of the group of transforma-
tions (3.4.1), derived by the formulae (2.6.3) of Chapter 2, have the form

    E_α = C_{αβ}^γ X_γ ∂/∂X_β,

or, by virtue of the relations [X_α, X_β] = C_{αβ}^γ X_γ, finally

    E_α = [X_α, X_β] ∂/∂X_β   (α = 1, …, r).                        (3.4.3)

According to Theorem 2.14, these operators span the adjoint algebra of the Lie
algebra L_r^N. The use of the operators (3.4.3) for representing the adjoint algebra
is particularly convenient because the coordinates of the operators E_α are taken
directly from the table of commutators of the basis operators of the Lie algebra L_r^N. One
can easily restore the finite automorphisms l_a from the operators E_α, e.g. in canonical
coordinates of the second kind.
Every subalgebra L_s^N ⊂ L_r^N, as a linear subspace of L_r^N, can be given by a system
of equations

    l^α = ξ_σ^α t^σ   (α = 1, …, r; σ = 1, …, s),                   (3.4.4)

where the t^σ (σ = 1, …, s) are arbitrary parameters and the ξ_σ^α are fixed constants
characterizing the subalgebra. Under the action of an automorphism (l_α^β), this subalgebra
transforms into a similar subalgebra, whose equations have the form

    l^α = [l_β^α(a) ξ_σ^β] t^σ   (α = 1, …, r)                      (3.4.5)

by virtue of (3.4.2). The formula (3.4.5) describes the whole class of subalgebras
similar to the subalgebra (3.4.4).
The formulae (3.4.4) and (3.4.5) look especially simple in the case of one-
dimensional subalgebras. Since there is only one parameter t, and the l^α are determined
up to an arbitrary common factor, one can set t = 1. Then
(3.4.5) becomes (3.4.2).
Example 3.7. Consider the Lie algebra L_3^2 of operators with the basis

    X1 = ∂/∂x,   X2 = ∂/∂y,   X3 = x ∂/∂x + y ∂/∂y

and find all classes of similar one-dimensional subalgebras L1. The table of com-
mutators of the operators X_α has the form

           X1     X2     X3
    X1     0      0      X1
    X2     0      0      X2
    X3    −X1    −X2     0

The formulae (3.4.3) give the following operators E_α of the group of inner automor-
phisms:

    E1 = X1 ∂/∂X3,   E2 = X2 ∂/∂X3,   E3 = −X1 ∂/∂X1 − X2 ∂/∂X2.

Let us find the one-parameter groups of automorphisms A_α(t) for every operator
E_α by integrating Eqs. (2.6.7) of Chapter 2. For example, for E1 these
equations are

    dX′1/dt = 0,   dX′2/dt = 0,   dX′3/dt = X′1,   X′_α|_{t=0} = X_α,

whence one obtains the automorphism

    A1(t):  X′1 = X1,   X′2 = X2,   X′3 = tX1 + X3.

Likewise,

    A2(t):  X′1 = X1,   X′2 = X2,   X′3 = tX2 + X3,
    A3(t):  X′1 = e^{−t} X1,   X′2 = e^{−t} X2,   X′3 = X3.
Let us construct the general inner automorphism in canonical coordinates of the
second kind (a, b, c) by the formula

    l_{a,b,c} = A3(−log a) A2(b) A1(c)

(with the identity at a = 1, b = c = 0), whence

    l_{a,b,c}:  X′1 = aX1,   X′2 = aX2,   X′3 = bX1 + cX2 + X3.

We write the matrix (l_α^β) of the automorphism, treating the vector X = (X1, X2, X3) as a
row vector, and obtain

    A = ⎛ a 0 b ⎞
        ⎜ 0 a c ⎟
        ⎝ 0 0 1 ⎠.

According to (3.4.2), the corresponding transformation of the vector e is obtained by
the formula e′ = Ae, where the vector e = (e^1, e^2, e^3) is to be considered as a column
vector. Therefore, the formulae (3.4.2) provide

    e′^1 = ae^1 + be^3,   e′^2 = ae^2 + ce^3,   e′^3 = e^3.


Thus, the adjoint group of the Lie algebra L_3^2 is constructed. Now we take the sub-
algebra L1 spanned by the operator X = e^α X_α and consider the following possi-
bilities.
(a) e^3 ≠ 0. The automorphism A can be chosen so that e′^1 = e′^2 = 0. To this end,
it is sufficient to set

    b = −a e^1/e^3,   c = −a e^2/e^3.

Moreover, one can set e^3 = 1. This provides the class of subalgebras L1 similar to the
subalgebra <X3>.
(b) e^3 = 0. Here, by means of the parameter a, one can achieve

    (e′^1)^2 + (e′^2)^2 = 1,

or

    e′^1 = cos ϕ,   e′^2 = sin ϕ.

This provides a one-parameter family of classes of subalgebras similar to the subal-
gebras <cos ϕ X1 + sin ϕ X2>, depending on the parameter ϕ.
The final result of the classification is as follows. Any subalgebra L1 ⊂ L_3^2 is
similar to one of the subalgebras

    <X3>,   <cos ϕ X1 + sin ϕ X2>,

while these subalgebras are not similar to each other for any ϕ from the interval
0 ≤ ϕ < 2π.
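The case analysis of Example 3.7 can be verified directly. The sketch below (our code) implements the action e′ = Ae of the inner automorphism l_{a,b,c} and checks that the choice of b, c in case (a) kills the first two coordinates:

```python
import sympy as sp

e1, e2, e3, a_, b_, c_ = sp.symbols('e1 e2 e3 a b c')

# Matrix of l_{a,b,c} acting on the coordinate vector e (column vector):
A = sp.Matrix([[a_, 0, b_],
               [0, a_, c_],
               [0, 0, 1]])
e = sp.Matrix([e1, e2, e3])

# Case (a): e3 != 0 (assumed); choose b, c so that e'1 = e'2 = 0.
choice = {b_: -a_*e1/e3, c_: -a_*e2/e3}
e_new = (A*e).subs(choice)
assert sp.simplify(e_new[0]) == 0 and sp.simplify(e_new[1]) == 0
assert e_new[2] == e3       # e'3 = e3, so one can normalize e3 = 1

# Case (b): e3 = 0; the parameter a merely rescales (e1, e2).
e_b = (A*e).subs(e3, 0)
assert e_b[0] == a_*e1 and e_b[1] == a_*e2
```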
If properly developed, this method of constructing classes of similar subalgebras
of a finite-dimensional Lie algebra can also be used for subalgebras of the second, third
and higher orders.

3.5 Partially invariant solutions

3.5.1 Partially invariant manifolds

Let $G_r^N$ be a local Lie group of point transformations in the space $E^N(x)$. This section is devoted to extending the notion of an invariant $H$-solution. This is done on the basis of the following definition.
Definition 3.11. A manifold $\mathcal{N} \subset E^N$ is said to be a partially invariant manifold of the group $G_r^N$ if $\mathcal{N}$ lies in an invariant manifold $\mathcal{M}$ of the group $G_r^N$, i.e.

$$\mathcal{N} \subset \mathcal{M}.$$

In what follows, $\mathcal{M}$ will be taken as the smallest invariant manifold of the group $G_r^N$ containing $\mathcal{N}$. It is determined as the intersection of all invariant manifolds
122 3 Group invariant solutions of differential equations

of the group containing $\mathcal{N}$. Moreover, we naturally assume that $\mathcal{M} \neq E^N$ (even locally), for otherwise the given definition is meaningless.
The smallest invariant manifold $\mathcal{M}$ of the group $G_r^N$ containing the given manifold $\mathcal{N}$ can be constructed as follows. Take any transformation $T_a \in G_r^N$ and transform $\mathcal{N}$ by means of $T_a$. This yields the manifold

$$\mathcal{N}_a = T_a \mathcal{N}.$$

The union $\bigcup_a \mathcal{N}_a$ of all the manifolds $\mathcal{N}_a$ obtained when $T_a$ runs over the whole group $G_r^N$ obviously contains the given $\mathcal{N}$. It is invariant with respect to all transformations $T_a \in G_r^N$ and is the smallest invariant manifold of the group $G_r^N$ containing $\mathcal{N}$. Hence, $\mathcal{M}$ can be determined by the formula

$$\mathcal{M} = \bigcup_a (T_a \mathcal{N}), \qquad T_a \in G_r^N. \qquad (3.5.1)$$

We make the additional assumption that only manifolds $\mathcal{N}$, $\mathcal{M}$ which are nonsingular for the group $G_r^N$ are considered. Let us find out how the dimensions of the manifolds $\mathcal{N}$ and $\mathcal{M}$ are connected with each other.
Let $R$ be the general rank of the matrix

$$M = \big(\xi_\alpha^i(x)\big)$$

whose entries are the coordinates of the basis operators of the Lie algebra $L_r^N$ of the group $G_r^N$. We transform a point $x_0 \in \mathcal{N}$ by an arbitrary $T_a \in G_r^N$ and obtain the manifold of points $x' = T_a x_0$ depending on the parameters $a$. The tangent element to this manifold is given by the infinitesimal transformation

$$x'^i = x_0^i + \xi_\alpha^i(x_0)\,a^\alpha \qquad (i = 1, \ldots, N)$$

and has the dimension equal to the dimension of the space $\Lambda$ of vectors $\lambda$ with the coordinates $\lambda^i = \xi_\alpha^i(x_0)a^\alpha$ resulting when the vector $a(a^1, \ldots, a^r)$ runs over an $r$-dimensional space. It is known from linear algebra that the dimension of $\Lambda$ is equal to the rank of the matrix $(\xi_\alpha^i(x_0))$. The latter equals $R$ due to the nonsingularity of $\mathcal{N}$. Thus, the dimension of the manifold of the points $x' = T_a x_0$ with any $T_a \in G_r^N$ equals $R$, due to which the dimension of $\mathcal{M}$ determined by the formula (3.5.1) exceeds the dimension of $\mathcal{N}$ by $R$ at most:

$$\dim \mathcal{M} \leq \dim \mathcal{N} + R. \qquad (3.5.2)$$

The invariant manifold $\mathcal{M}$, being nonsingular for $G_r^N$, has the definite rank $\rho$ (Definition 3.7) given by the formula

$$\rho = \dim \mathcal{M} - R. \qquad (3.5.3)$$
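As a concrete illustration of (3.5.1)–(3.5.3) — an example of ours, not from the text — take the group of rotations in the $(x, y)$-plane of $E^3(x, y, u)$ and let $\mathcal{N}$ be the line $\{x = 1,\ y = 0\}$. A short sympy sketch:

```python
import sympy as sp

a = sp.symbols('a')

# Group: rotations in the (x, y)-plane of E^3(x, y, u), so R = 1.
# N: the line {x = 1, y = 0}, dim N = 1.  Its image under the rotation T_a
# is the line {x = cos a, y = sin a}.  Eliminating the parameter a from
# these equations gives the union M = U_a (T_a N): the cylinder x^2 + y^2 = 1.
circle = sp.simplify(sp.cos(a)**2 + sp.sin(a)**2)
print(circle)   # 1, i.e. M: x^2 + y^2 - 1 = 0

# dim M = 2 = dim N + R, so the bound (3.5.2) is attained with equality here,
# and (3.5.3) gives the rank rho = dim M - R = 1.
```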

3.5.2 Defect of invariance

Definition 3.12. The rank of a partially invariant manifold $\mathcal{N}$ is the rank of the smallest invariant manifold $\mathcal{M}$ containing it. The defect of invariance of a partially invariant manifold $\mathcal{N}$ is the number

$$\delta = \dim \mathcal{M} - \dim \mathcal{N}. \qquad (3.5.4)$$

The defect of invariance is an important numerical characteristic of a partially invariant manifold, demonstrating "to what extent" it fails to be invariant. Equations (3.5.3) and (3.5.4) provide the following relation between the basic numerical characteristics of $\mathcal{N}$ and of the group $G_r^N$:

$$\rho = \delta + \dim \mathcal{N} - R. \qquad (3.5.5)$$

Let $\mathcal{N}$ be regularly given by the system of equations

$$\psi^\sigma(x) = 0 \qquad (\sigma = 1, \ldots, s). \qquad (3.5.6)$$

Then

$$\dim \mathcal{N} = N - s.$$

Let us find some inequalities for the invariance defect $\delta$. We introduce the number $t = N - R$ equal to the number of invariants in a complete set of invariants of the group $G_r^N$. In this notation one has

$$\dim \mathcal{N} - R = t - s.$$

Since $\rho \geq 0$, it follows from (3.5.5) that $\delta \geq s - t$. Further, equations (3.5.2) and (3.5.4) provide that $\delta \leq R$. At last, the inequality $\rho \leq t - 1$ and (3.5.5) provide

$$\delta \leq t - 1 + R - N + s = s - 1.$$

Thus, the invariance defect $\delta$ satisfies the inequalities

$$\max\{s - t,\ 0\} \leq \delta \leq \min\{R,\ s - 1\}. \qquad (3.5.7)$$

By means of the invariance defect one can formulate the following necessary condition for partial invariance of a manifold $\mathcal{N}$. Let $X_\alpha$ ($\alpha = 1, \ldots, r$) be a basis of the Lie algebra $L_r^N$ of the group $G_r^N$.
Theorem 3.7. If the partially invariant manifold $\mathcal{N}$ is regularly given by Eqs. (3.5.6) and has the invariance defect $\delta$, then the general rank of the matrix

$$\Delta = \big(X_\alpha \psi^\sigma(x)\big)$$

at the points of $\mathcal{N}$ is equal to $\delta$.



Proof. There are functions $\bar\psi^\sigma(x, a)$ such that the manifold $\mathcal{N}_a = T_a\mathcal{N}$ is given by the equations

$$\bar\psi^\sigma(x, a) = 0 \qquad (\sigma = 1, \ldots, s). \qquad (3.5.8)$$

Moreover, these functions can be chosen so that

$$\bar\psi^\sigma(x, 0) = \psi^\sigma(x) \qquad (\sigma = 1, \ldots, s).$$

On the other hand, the manifold $\mathcal{N}_a$ is the locus of the points $x' = f(x, a)$ when the point $x$ runs through the manifold $\mathcal{N}$. Therefore, the equations

$$\bar\psi^\sigma\big(f(x, a),\ a\big) = 0 \qquad (\sigma = 1, \ldots, s)$$

hold on $\mathcal{N}$ identically with respect to the parameters $a$. Differentiating with respect to $a^\alpha$ and using the Lie equations (2.6.5) of Chapter 2, one obtains

$$\frac{\partial\bar\psi^\sigma}{\partial x'^i}\,\xi_\beta^i(x')\,V_\alpha^\beta(a) + \frac{\partial\bar\psi^\sigma}{\partial a^\alpha} = 0.$$

Letting $a = 0$ here and invoking the choice of the functions $\bar\psi^\sigma(x, a)$, we see that the following equations hold on $\mathcal{N}$:

$$X_\alpha\psi^\sigma(x) = -\left.\frac{\partial\bar\psi^\sigma(x, a)}{\partial a^\alpha}\right|_{a=0}. \qquad (3.5.9)$$

Now let us consider the following procedure of constructing the manifold $\mathcal{M}$ (3.5.1) by means of Eqs. (3.5.8). Let us take the matrix

$$\left(\frac{\partial\bar\psi^\sigma}{\partial a^\alpha}\right) \qquad (3.5.10)$$

and denote its general rank by $\nu$. Then one can express $\nu$ parameters $a$ from Eqs. (3.5.8), e.g. $a^1, \ldots, a^\nu$, via the variables $x$ and the remaining parameters $\bar a(a^{\nu+1}, \ldots, a^r)$. To this end one has to use $\nu$ equations (3.5.8). Substituting the resulting expressions into the remaining $s - \nu$ equations, one arrives at equations containing no parameters $a$. Obviously, these are the equations of the smallest invariant manifold $\mathcal{M}$ containing $\mathcal{N}$. Whence,

$$\dim \mathcal{M} = N - (s - \nu) = N - s + \nu = \dim \mathcal{N} + \nu,$$

and by virtue of Definition 3.12,

$$\nu = \delta.$$

Thus, the general rank of the matrix (3.5.10) is equal to $\delta$, due to which the statement of the theorem follows from (3.5.9).
The condition of Theorem 3.7 may appear to be sufficient as well, i.e. the equation

$$R(\Delta) = \delta$$

may guarantee that the defect of $\mathcal{N}$ given by Eqs. (3.5.6) equals $\delta$, but we cannot prove it. Note only that when $\delta = 0$ the manifold $\mathcal{N}$ is invariant, and then the criterion of Theorem 3.7 turns into the criterion of Theorem 3.2, and the latter is necessary and sufficient.
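Theorem 3.7 can be illustrated on a small example of our own choosing: rotations in the $(x, y)$-plane of $E^3(x, y, u)$, with $\mathcal{N}$ the line $\{x = 1,\ y = 0\}$. The smallest invariant manifold $\mathcal{M}$ containing it is the cylinder $x^2 + y^2 = 1$, so $\delta = \dim\mathcal{M} - \dim\mathcal{N} = 1$, and the rank of $\Delta$ on $\mathcal{N}$ should equal $1$. A sympy sketch:

```python
import sympy as sp

x, y, u = sp.symbols('x y u')

# Group: rotations in the (x, y)-plane of E^3(x, y, u); a single operator
# X = -y d/dx + x d/dy, so the general rank is R = 1 and t = N - R = 2.
def X(f):
    return -y*sp.diff(f, x) + x*sp.diff(f, y)

# N: the line {x = 1, y = 0}, regularly given by s = 2 equations.
psi = [x - 1, y]

# Smallest invariant manifold M containing N: the cylinder x^2 + y^2 = 1,
# hence the invariance defect is delta = dim M - dim N = 2 - 1 = 1.
delta = 1

# Matrix Delta = (X_alpha psi^sigma), here a single row:
Delta = sp.Matrix([[X(f) for f in psi]])

# General rank of Delta at the points of N, e.g. at (1, 0, u):
rank_on_N = Delta.subs({x: 1, y: 0}).rank()
print(rank_on_N)   # 1, equal to delta, as Theorem 3.7 requires
```

The inequalities (3.5.7) are also satisfied here: $\max\{s - t, 0\} = 0 \leq \delta = 1 \leq \min\{R, s - 1\} = 1$.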

3.5.3 Construction of partially invariant solutions

Consider a system of differential equations (S), again in the space $E^N(x, u)$, and discuss its solutions considered as manifolds $\Phi$ in $E^N$.
Definition 3.13. A solution $\Phi$ of the system (S) admitting a group $H$ is said to be a partially invariant $H$-solution with the rank $\rho$ and the invariance defect $\delta$ if $\Phi$ is a partially invariant manifold of the group $H$ and has the rank $\rho$ and the invariance defect $\delta$.
It is clear that a partially invariant $H$-solution with the invariance defect $\delta = 0$ is just an invariant $H$-solution.
Since the system (S) has $n$ independent variables $x^i$ and $m$ unknown functions $u^k$ ($n + m = N$), the dimension of the manifold $\Phi$ is fixed and equals

$$\dim \Phi = n.$$

In this case it is reasonable to rewrite the relations (3.5.3)–(3.5.7) keeping in mind that now

$$s = m.$$

Moreover, we are interested only in partially invariant $H$-solutions with the rank

$$\rho < n,$$

and hence, by virtue of (3.5.5),

$$\delta < R.$$

Furthermore, let us introduce a notation for the number of equations of the invariant manifold $\mathcal{M}$, namely

$$\mu = n + m - \dim \mathcal{M}.$$

Thus, one has the relations

$$t = n + m - R, \qquad \rho = \delta + n - R, \qquad \mu = m - \delta, \qquad (3.5.11)$$
$$0 \leq \rho < n, \qquad \max\{R - n,\ 0\} \leq \delta \leq \min\{R - 1,\ m - 1\}$$

for the system (S) in $E^{n+m}$.


In particular, the inequalities (3.5.11) show that in the case m = 1; i.e. if there
is only one unknown function in (S) (e.g. one equation), only the value δ = 0 is
possible. Hence, there can be no partially invariant H-solutions other than invariant
H-solutions in the given case.

Let us turn to the problem of an algorithm for finding partially invariant $H$-solutions. Unfortunately, we do not have a complete representation of a partially invariant manifold, and therefore we can determine only the invariant part of the solution, i.e. the manifold $\mathcal{M}$.
Let $H$ be a group with a given number $R$. One can write the inequalities (3.5.11) and select some value of $\delta$, thus defining the numbers $\rho$ and $\mu$ according to (3.5.11). Let us assume that a complete set of invariants $I^\tau$ ($\tau = 1, \ldots, t$) of the group $H$ is known. We shall look for the manifolds $\mathcal{M}$ in which partially invariant $H$-solutions of the rank $\rho$ and of the invariance defect $\delta$ can lie, giving them by the system of $\mu$ equations of the form

$$\mathcal{M}:\quad \Psi^\nu(I^1, \ldots, I^t) = 0 \qquad (\nu = 1, \ldots, \mu) \qquad (3.5.12)$$

with unknown functions $\Psi^\nu(I)$. Unlike the case of invariant solutions, we cannot require that equations (3.5.12) provide all the variables $u^k$ as functions of $x$. Therefore, the variables $u^k$ ($k = 1, \ldots, m$) are to be divided into the main ones, e.g. $u^1, \ldots, u^\mu$, and the parametric ones, $u^{\mu+1}, \ldots, u^m$, so that equations (3.5.12) can be solved with respect to the main variables $u$. Let us denote the parametric variables $u$ by $\bar u$ and their derivatives by $\bar p$. Likewise, the equations resulting from the application of the operators of total differentiation $D_i$ to Eqs. (3.5.12),

$$D_i\Psi^\nu = 0 \qquad (i = 1, \ldots, n), \qquad (3.5.13)$$

can provide only the main derivatives $p$, expressing them via the parametric derivatives $\bar p$.
For the sake of simplicity we assume that the system (S) is of the first order. Substituting the expressions

$$u = u(x, \bar u), \qquad p = p(x, \bar u, \bar p)$$

into Eqs. (S), one can find expressions for some parametric derivatives via the remaining ones, e.g. in the form

$$\bar p = \bar p\,(x, \bar u, \bar{\bar p}\,).$$

Since the latter expressions are derived not by differentiation of the form (3.5.13) but from Eqs. (S), compatibility conditions of the form

$$D_i\,\bar p_j^{\,l} = D_j\,\bar p_i^{\,l} \qquad (3.5.14)$$

should hold, where derivatives of the second order can appear due to the differentiation of the derivatives $\bar{\bar p}$. If the second derivatives have independent expressions, one has to write compatibility conditions for them again, etc. If one makes no additional assumptions on the system (S), it is rather difficult to trace this procedure to the end in detail. In the theory of differential equations it is known as the process of reducing an "active" system (i.e. one that can generate new equations according to (3.5.14)) to a "passive" system (for which conditions of the form (3.5.14) provide no new equations independent of the available ones). One can only claim that the resulting "passive" system consists of the proper passive system (P) imposed on the parametric functions $\bar u$, and of a system (S/H) expressing the conditions of passiveness of the system (P). The system (S/H) contains no parametric functions or their derivatives and connects only the invariants $I$, the functions $\Psi(I)$ and their derivatives up to some order. This statement is proved like the corresponding statement of Theorem 3.5.
Thus, as a result of the above procedure, the system (S) is reduced to the system

$$(P) + (S/H)$$

having the following properties. The system (S/H) is a system with respect to the functions $\Psi(I)$ from (3.5.12). Taking any solution of (S/H), one can find all the parametric functions $\bar u(x)$ from the system (P) and then all the main functions $u(x)$ from (3.5.12). The resulting $\bar u(x)$ and $u(x)$ provide a solution of the system (S), namely a partially invariant $H$-solution of the rank $\rho$ and the invariance defect $\delta$.
One can make the same remark about the system (S/H) as in the case of invariant $H$-solutions: namely, the number of its independent variables equals the rank $\rho$ of the considered partially invariant $H$-solution.
Let us consider the system of equations of gas dynamics (3.3.12) as an example. Before searching for specific partially invariant solutions of this system, let us compose a table of all possible types of such solutions based on the relations (3.5.11). In the case of the system (3.3.12) one has $n = 2$, $m = 3$, so that (3.5.11) takes the form

$$t = 5 - R, \qquad \rho = \delta + 2 - R, \qquad \mu = 3 - \delta,$$
$$0 \leq \rho < 2, \qquad \max\{R - 2,\ 0\} \leq \delta \leq \min\{R - 1,\ 2\}.$$

This leads to the following table, in the last column of which we give a sketch of the invariant manifold $\mathcal{M}$ containing the corresponding partially invariant $H$-solutions.

No.   R   t   δ   ρ   µ   Form of M
1°    1   4   0   1   3   I^1, I^2, I^3 (I^4)
2°    2   3   0   0   3   I^1 = C_1, I^2 = C_2, I^3 = C_3
3°    2   3   1   1   2   I^1, I^2 (I^3)
4°    3   2   1   0   2   I^1 = C_1, I^2 = C_2
5°    3   2   2   1   1   I^1 (I^2)
6°    4   1   2   0   1   I^1 = C_1
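The table can be reproduced mechanically from the relations (3.5.11); the following short Python sketch enumerates the admissible combinations for $n = 2$, $m = 3$:

```python
# Enumerate the admissible types of partially invariant solutions for a
# system with n = 2, m = 3, directly from the relations (3.5.11).
n, m = 2, 3
rows = []
for R in range(1, n + m + 1):             # possible general ranks R
    t = n + m - R
    for delta in range(max(R - n, 0), min(R - 1, m - 1) + 1):
        rho = delta + n - R
        if 0 <= rho < n:
            rows.append((R, t, delta, rho, m - delta))

for row in rows:
    print(row)
# (1, 4, 0, 1, 3), (2, 3, 0, 0, 3), (2, 3, 1, 1, 2),
# (3, 2, 1, 0, 2), (3, 2, 2, 1, 1), (4, 1, 2, 0, 1)
```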

Solutions of the form 1° and 2° are invariant and have already been discussed in §3.3. Let us consider an example of the type 3°.
Let us take the subgroup $H$ with the operators $X_3$ and $X_6$ from (3.3.13). One has $R = 2$, $t = 3$. One easily obtains the invariants

$$I^1 = tu - x, \qquad I^2 = \frac{p}{\rho}\,, \qquad I^3 = t.$$

The manifold $\mathcal{M}$ for the type 3° is given by the equations

$$tu - x = \chi(t), \qquad \frac{p}{\rho} = \psi(t),$$

or, in the solved form,

$$\mathcal{M}:\quad u = \frac{x}{t} + \varphi(t), \qquad p = \psi(t)\,\rho. \qquad (3.5.15)$$
Here $u$, $p$ are the main variables, and $\rho$ is a parametric one. Substitution of Eqs. (3.5.15) into Eqs. (3.3.12) yields

$$\varphi' + \frac{\varphi}{t} + \psi\,\frac{\rho_x}{\rho} = 0,$$
$$\frac{\rho_t}{\rho} + \Big(\frac{x}{t} + \varphi\Big)\frac{\rho_x}{\rho} + \frac{1}{t} = 0,$$
$$\psi\,\frac{\rho_t}{\rho} + \psi\Big(\frac{x}{t} + \varphi\Big)\frac{\rho_x}{\rho} + \psi' + \gamma\,\frac{\psi}{t} = 0.$$
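This substitution can be verified symbolically. The sketch below assumes that (3.3.12) is the standard one-dimensional gas dynamics system $u_t + uu_x + p_x/\rho = 0$, $\rho_t + u\rho_x + \rho u_x = 0$, $p_t + up_x + \gamma p u_x = 0$; the precise form of (3.3.12) is not reproduced in this section, so this is an assumption (consistent with the three equations just displayed):

```python
import sympy as sp

t, x, gamma = sp.symbols('t x gamma')
rho = sp.Function('rho')(t, x)
phi = sp.Function('phi')(t)
psi = sp.Function('psi')(t)

# The representation (3.5.15): u = x/t + phi(t), p = psi(t)*rho.
u = x/t + phi
p = psi*rho

# Assumed form of the gas dynamics system (3.3.12):
eqs = [u.diff(t) + u*u.diff(x) + p.diff(x)/rho,        # momentum
       rho.diff(t) + u*rho.diff(x) + rho*u.diff(x),    # continuity
       p.diff(t) + u*p.diff(x) + gamma*p*u.diff(x)]    # adiabaticity

# The three equations displayed in the text (the last two multiplied by rho):
targets = [phi.diff(t) + phi/t + psi*rho.diff(x)/rho,
           rho*(rho.diff(t)/rho + (x/t + phi)*rho.diff(x)/rho + 1/t),
           rho*(psi*rho.diff(t)/rho + psi*(x/t + phi)*rho.diff(x)/rho
                + psi.diff(t) + gamma*psi/t)]

checks = [sp.simplify(e - tgt) == 0 for e, tgt in zip(eqs, targets)]
print(checks)   # [True, True, True]
```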

One can find both parametric derivatives from this system if $\psi \neq 0$. Moreover, one obtains one equation (from the second and the third ones) without $\rho$, namely

$$t\psi' + (\gamma - 1)\psi = 0.$$

For the parametric derivatives one obtains

$$\frac{\rho_x}{\rho} = -\,\frac{t\varphi' + \varphi}{t\psi}\,, \qquad \frac{\rho_t}{\rho} = \Big(\frac{x}{t} + \varphi\Big)\frac{t\varphi' + \varphi}{t\psi} - \frac{1}{t}\,. \qquad (3.5.16)$$

Now one has to write the compatibility condition (3.5.14) for these equations, namely

$$\frac{\partial}{\partial t}\Big(\frac{\rho_x}{\rho}\Big) = \frac{\partial}{\partial x}\Big(\frac{\rho_t}{\rho}\Big)\,.$$

This yields the equation

$$t\,\frac{\partial}{\partial t}\Big(\frac{t\varphi' + \varphi}{t\psi}\Big) + \frac{t\varphi' + \varphi}{t\psi} = 0,$$

which together with the previous one,

$$t\psi' + (\gamma - 1)\psi = 0,$$

composes the system (S/H). The passive system (P) is reduced to (3.5.16). Integrating the equations (S/H), one obtains the first integrals
3.6 Reduction of partially invariant solutions 129

$$t^{\gamma - 1}\psi = C_1, \qquad \frac{t\varphi' + \varphi}{\psi} = C_2.$$
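The passage from (3.5.16) to the first equation of (S/H) via the compatibility condition can be checked with sympy (a verification sketch of ours, not part of the original derivation):

```python
import sympy as sp

t, x = sp.symbols('t x')
phi = sp.Function('phi')(t)
psi = sp.Function('psi')(t)

# Right-hand sides of (3.5.16), with Q = (t*phi' + phi)/(t*psi):
Q = (t*phi.diff(t) + phi)/(t*psi)
F = -Q                       # rho_x / rho
G = (x/t + phi)*Q - 1/t      # rho_t / rho

# The compatibility condition (3.5.14), (rho_x/rho)_t = (rho_t/rho)_x,
# reduces (after multiplying by -t) to t*Q' + Q = 0:
residual = sp.simplify((F.diff(t) - G.diff(x)) + (t*Q.diff(t) + Q)/t)
print(residual)   # 0
```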
Eliminating $\psi$ from the second equation by using the first one and then integrating, one obtains

$$t\varphi = C_2 t^{2-\gamma} + C_3,$$

where the constant $C_2$ has been redefined as

$$C_2 = \frac{C_1 C_2}{2 - \gamma}\,.$$
Finally we obtain the following solution of the system (S/H):

$$\psi = C_1 t^{1-\gamma}, \qquad \varphi = C_2 t^{1-\gamma} + \frac{C_3}{t}\,.$$
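A quick symbolic check that these functions indeed solve both equations of (S/H):

```python
import sympy as sp

t, C1, C2, C3, gamma = sp.symbols('t C1 C2 C3 gamma', positive=True)

psi = C1*t**(1 - gamma)
phi = C2*t**(1 - gamma) + C3/t

# First equation of (S/H):
eq1 = sp.simplify(t*psi.diff(t) + (gamma - 1)*psi)

# Second equation of (S/H), with Q = (t*phi' + phi)/(t*psi):
Q = (t*phi.diff(t) + phi)/(t*psi)
eq2 = sp.simplify(t*Q.diff(t) + Q)

print(eq1, eq2)   # 0 0
```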
t
Substituting this solution into (3.5.16), one obtains the totally integrable system

$$\frac{\rho_x}{\rho} = -\frac{C_2}{t}\,, \qquad \frac{\rho_t}{\rho} = C_2\,\frac{x}{t^2} + \frac{C_2\big(C_3 + C_2 t^{2-\gamma}\big)}{t^2} - \frac{1}{t}\,.$$

Integration of the first equation of this system yields

$$\log\rho = -C_2\,\frac{x}{t} + \log\omega(t),$$

whence

$$\rho = \omega(t)\,e^{-C_2 x/t}, \qquad (3.5.17)$$

where $\omega(t)$ is a certain function which is defined from the second equation of the system.
It is clear that the number of such examples can be increased indefinitely by selecting subgroups of the second order of the group $G_6^5$ in different ways. Generally speaking, this does not provide new solutions of the system (3.3.12) other than invariant $H_1$-solutions. This issue is discussed in the following section.

3.6 Reduction of partially invariant solutions

3.6.1 Statement of the reduction problem

This section investigates the situation connected with the fact that any invariant manifold of a group $H$ is at the same time an invariant manifold of any subgroup $H' \subset H$. This follows directly from Definition 3.4, since any transformation $T_a \in H'$ belongs to $H$.
Consequently, any partially invariant $H$-solution is also a partially invariant $H'$-solution for any subgroup $H' \subset H$. However, this transition from $H$ to $H' \subset H$ changes, generally speaking, the rank and the defect of invariance of the smallest invariant manifold $\mathcal{M}$ containing the solution $\Phi$. Let us agree to mark all symbols relating to the subgroup $H'$ by a prime.
Lemma 3.5. When passing to a subgroup $H' \subset H$, the rank of a partially invariant solution $\Phi$ does not decrease and its defect of invariance does not increase:

$$\rho' \geq \rho, \qquad \delta' \leq \delta.$$


Proof. Let us consider a partially invariant solution $\Phi$ as an $H$-solution and an $H'$-solution simultaneously, where $H' \subset H$. Let $\mathcal{M}$ and $\mathcal{M}'$ be the corresponding smallest invariant manifolds containing $\Phi$. Manifestly $\mathcal{M}' \subset \mathcal{M}$, since $\mathcal{M}$ is an invariant manifold of $H'$ and $\Phi \subset \mathcal{M}$. Further, let $t$ be the number of invariants in the complete set of invariants for the group $H$ and let $\mu$ be the number of equations defining $\mathcal{M}$. Then $\rho = t - \mu$ and $\rho' = t' - \mu'$, respectively. Whence,

$$\rho' - \rho = t' - t - (\mu' - \mu). \qquad (3.6.1)$$

Manifestly, $t' \geq t$. Further, the increase $t' - t$ of the number of invariants of the complete set upon the transition from $H$ to $H' \subset H$ cannot be less than the increase $\mu' - \mu$ of the number of equations of the invariant manifolds. Indeed, otherwise $\mathcal{M}$ would not be the smallest invariant manifold of the group $H$ containing the solution $\Phi$. Thus,

$$t' - t \geq \mu' - \mu,$$

and (3.6.1) provides

$$\rho' \geq \rho.$$

The inequality $\delta' \leq \delta$ follows directly from the formula

$$\delta = \dim \mathcal{M} - \dim \Phi$$

and from the inclusion $\mathcal{M}' \subset \mathcal{M}$.


Let us assume now that we are searching for particular solutions of the system of differential equations (S), making progress step by step by increasing the ranks of the investigated partially invariant solutions. Since the rank equals the number of independent variables in the "resolving" system (S/H), an increase of the rank makes it more difficult to find solutions of the system (S/H), and therefore partially invariant $H$-solutions of the system (S). Therefore, if we do not want a partially invariant solution $\Phi$ to become more complicated when turning from the group $H$ to its subgroup $H'$, it should be required that the rank is not increased by the transition. Since by virtue of Lemma 3.5 it cannot decrease either, our requirement means that the rank stays unaltered: $\rho' = \rho$. Further, when $\rho' = \rho$, the formulae (3.5.11) provide that $\delta - \delta' = t' - t$, where $t$ and $t'$ are the numbers of invariants of a complete set for the groups $H$ and $H'$ respectively. If $\delta' = \delta$, then $t' = t$, i.e. the groups $H$ and $H' \subset H$ have the same invariants and the transition to the subgroup $H'$ provides nothing new.

According to the above reasoning we arrive at the following formulation of the problem of "reduction" of partially invariant $H$-solutions: find out whether there exists a subgroup $H' \subset H$ for which a partially invariant $H$-solution of the rank $\rho$ and of the invariance defect $\delta$ is at the same time a partially invariant $H'$-solution of the same rank $\rho' = \rho$ but of a smaller invariance defect $\delta' < \delta$.
Every time the answer is positive we say that a reduction of the partially invariant $H$-solution $\Phi$ occurs. Theorems that guarantee the existence of such a reduction are called theorems on reduction.
The importance of theorems on reduction is that in the above process of searching for partially invariant $H$-solutions they allow one to avoid an abundance of useless work, because partially invariant $H'$-solutions are obtained before $H$-solutions in this process when $H' \subset H$.
As a simple example of reduction of a partially invariant $H$-solution, consider the solution of the system (3.3.12) derived at the end of the previous section. The group $H$ was the group $H\langle X_3, X_6\rangle$, where the operators $X_\alpha$ were given in (3.3.13). The resulting solution was given by the formulae (3.5.15) and (3.5.17). These formulae provide the following equations:

$$tu - x = t\varphi(t), \qquad \frac{p}{\rho} = \psi(t), \qquad \rho\,e^{C_2 x/t} = \omega(t), \qquad (3.6.2)$$

where $C_2 = \mathrm{const}$. Let us consider the subgroup $H'\langle X_3 - C_2X_6\rangle$ of the group $H$, whose operator is written in detail as

$$X_3 - C_2X_6 = t\,\frac{\partial}{\partial x} + \frac{\partial}{\partial u} - C_2\rho\,\frac{\partial}{\partial \rho} - C_2 p\,\frac{\partial}{\partial p}\,.$$
It is readily verified that the functions

$$I^1 = tu - x, \qquad I^2 = \frac{p}{\rho}\,, \qquad I^3 = \rho\,e^{C_2 x/t}, \qquad I^4 = t$$

provide a complete set of independent invariants of the group $H'$ (i.e. of invariants of the operator $X_3 - C_2X_6$). Since equations (3.6.2) connect only these invariants, the considered partially invariant $H$-solution (3.6.2), for which

$$\hat\rho = 1, \qquad \delta = 1,$$

is an invariant $H'$-solution with the rank $\hat\rho\,' = 1$ and the invariance defect $\delta' = 0$.


Thus, any partially invariant $H$-solution (3.6.2) can be reduced to an invariant $H'$-solution with respect to a subgroup $H' \subset H$. Let us emphasize that the situation is non-trivial, because the subgroup $H'$ varies from solution to solution: it depends on the value of the constant $C_2$. Therefore, the reduction property of partially invariant $H$-solutions is not of a purely group character, but depends considerably on the structure of the initial system (S).
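The invariance of $I^1, \ldots, I^4$ is a one-line computation with sympy, using the operator $X_3 - C_2X_6$ exactly as written out above:

```python
import sympy as sp

t, x, u, rho, p, C2 = sp.symbols('t x u rho p C2')

# The operator X3 - C2*X6, written out as in the text:
def X(f):
    return (t*sp.diff(f, x) + sp.diff(f, u)
            - C2*rho*sp.diff(f, rho) - C2*p*sp.diff(f, p))

invariants = [t*u - x, p/rho, rho*sp.exp(C2*x/t), t]
vals = [sp.simplify(X(I)) for I in invariants]
print(vals)   # [0, 0, 0, 0]: each I^j is an invariant of H'
```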

3.6.2 Two auxiliary lemmas

Let us prove two lemmas, the first being a particular case of a more general theorem on the number of essential parameters of a system of functions.
Consider a system of functions

$$\varphi^k = \varphi^k(x, a) = \varphi^k(x, a^1, \ldots, a^r) \qquad (k = 1, \ldots, m) \qquad (3.6.3)$$

depending smoothly on the point $x \in E^n$ and on the parameters $a^1, \ldots, a^r$. Let us compose the two matrices

$$\left(\frac{\partial\varphi^k}{\partial a^\alpha}\right), \qquad \left(\frac{\partial\varphi^k}{\partial a^\alpha}\,,\ \frac{\partial^2\varphi^k}{\partial x^i\,\partial a^\alpha}\right), \qquad (3.6.4)$$

where $\alpha = 1, \ldots, r$ numbers the rows.
Lemma 3.6. If the matrices (3.6.4) have the same general rank equal to $\delta$, then there are functions $B^\varepsilon(a)$ ($\varepsilon = 1, \ldots, \delta$) and functions $\bar\varphi^k(x, B^1, \ldots, B^\delta)$ ($k = 1, \ldots, m$) such that the following equations hold identically in the variables $x$, $a$:

$$\varphi^k(x, a) = \bar\varphi^k\big(x, B(a)\big) \qquad (k = 1, \ldots, m). \qquad (3.6.5)$$

Proof. It can be assumed without loss of generality that the rank minor of the first matrix (3.6.4) is in the left upper corner, i.e. it corresponds to the values $k = 1, \ldots, \delta$; $\alpha = 1, \ldots, \delta$. Let us split the values of the index $\alpha = 1, \ldots, r$ into the values $\sigma = 1, \ldots, \delta$ and $\tau = \delta + 1, \ldots, r$. The conditions of the lemma provide that the last $r - \delta$ rows of the first matrix (3.6.4) are linear combinations of the first $\delta$ rows:

$$\frac{\partial\varphi^k}{\partial a^\tau} = \lambda_\tau^\sigma\,\frac{\partial\varphi^k}{\partial a^\sigma} \qquad (k = 1, \ldots, m;\ \tau = \delta + 1, \ldots, r), \qquad (3.6.6)$$

where, generally speaking, $\lambda_\tau^\sigma = \lambda_\tau^\sigma(x, a)$. Since the second matrix (3.6.4) has the same rank as the first one and contains the first matrix as a part, the relations

$$\frac{\partial^2\varphi^k}{\partial x^i\,\partial a^\tau} = \lambda_\tau^\sigma(x, a)\,\frac{\partial^2\varphi^k}{\partial x^i\,\partial a^\sigma}$$

hold with the same $\lambda_\tau^\sigma$. By virtue of these relations, differentiation of Eqs. (3.6.6) with respect to $x^i$ provides the new relations

$$\frac{\partial\lambda_\tau^\sigma}{\partial x^i}\,\frac{\partial\varphi^k}{\partial a^\sigma} = 0 \qquad (k = 1, \ldots, m;\ i = 1, \ldots, n),$$

expressing a linear dependence of the first $\delta$ rows of the first matrix (3.6.4). But the first $\delta$ rows are linearly independent by condition. Hence

$$\frac{\partial\lambda_\tau^\sigma}{\partial x^i} = 0 \qquad (i = 1, \ldots, n).$$

The latter equations mean that the coefficients λτσ in Eqs. (3.6.6) are independent of
x and can only be functions of parameters a at most: λτσ = λτσ (a):
Therefore, equations (3.6.6) can be considered as equations to which functions
ϕ k (x; a) satisfy as functions of parameters a with any fixed x: One can readily see
that these equations generate a complete system (see definition (3.3.18)). Indeed,
the rank of the matrix composed by coordinates of the operators

∂ ∂
Yτ = λτσ (a) (τ = δ + 1; : : :; r)
∂ aτ ∂ aσ
is obviously equal to r δ ; and the commutator [Yτ 1 ;Yτ 2 ] of any two of them should
be expressed linearly through these operators. Otherwise, we would have some
linear dependencies among the first δ rows of the first matrix (3.6.4) which con-
tradicts the assumption. Note that the system (3.6.6) contains r independent vari-
ables and r δ equations for a fixed k: Therefore, according to Lemma 3.2, it has
r (r δ ) = δ functionally independent solutions, that can be chosen to be func-
tions of variables a only. Let us denote these solutions by Bε (a) (ε = 1; : : : ; δ ): Since
every function ϕ k (x; a) satisfies the same system as B(a); then equalities of the form
(3.6.5) should hold with the derived B(a) due to Lemma 3.2. Lemma 3.6 is proved.
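A minimal illustration of Lemma 3.6 (an example of ours, not from the text): a system of two functions that depends on two parameters only through the single combination $B(a) = a^1 + a^2$, so that both matrices (3.6.4) have the same rank $\delta = 1$:

```python
import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')

# Two functions phi^k(x, a) that depend on the parameters a1, a2 only
# through the single combination B(a) = a1 + a2 (so delta should be 1):
phis = [x*(a1 + a2), sp.sin(a1 + a2)]
params = [a1, a2]

# First matrix of (3.6.4): rows alpha = 1, 2; columns k = 1, 2.
M1 = sp.Matrix([[sp.diff(f, a) for f in phis] for a in params])
# Second matrix: the same rows extended by d^2 phi^k / dx da_alpha.
M2 = sp.Matrix([[sp.diff(f, a) for f in phis]
                + [sp.diff(f, a, x) for f in phis] for a in params])

print(M1.rank(), M2.rank())   # 1 1: both general ranks equal delta = 1,
# and indeed phi^k(x, a) = phibar^k(x, B) with B(a) = a1 + a2.
```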
The second lemma deals with a property of the prolongation of the group $G_r^N$. We assume as before that a basis of the Lie algebra $L_r^N$ is given by the operators

$$X_\alpha = \xi_\alpha^i\,\frac{\partial}{\partial x^i} + \eta_\alpha^k\,\frac{\partial}{\partial u^k} \qquad (\alpha = 1, \ldots, r) \qquad (3.6.7)$$

and that the prolongation of these operators with respect to the functions $u(x)$ has the form

$$\widetilde X_\alpha = X_\alpha + \zeta_{\alpha i}^k\,\frac{\partial}{\partial p_i^k}\,.$$

Consider the matrices of the coordinates of the operators $X_\alpha$ and $\widetilde X_\alpha$:

$$M = \big(\xi_\alpha^i,\ \eta_\alpha^k\big), \qquad \widetilde M = \big(\xi_\alpha^i,\ \eta_\alpha^k,\ \zeta_{\alpha i}^k\big).$$

Lemma 3.7. If the group $G_r^N$ is intransitive and the general ranks of the matrices $M$ and $\widetilde M$ are equal to each other, then the operators (3.6.7) are linearly unconnected.
Proof. Contrary to the statement, let us assume that the operators (3.6.7) are linearly connected. We can assume, without loss of generality, that the first $R$ operators $X_\sigma$ ($\sigma = 1, \ldots, R$) are linearly unconnected, and that the last $r - R$ operators $X_\tau$ are expressed via the first ones by the formulae

$$X_\tau = \omega_\tau^\sigma X_\sigma \qquad (\tau = R + 1, \ldots, r;\ \sigma = 1, \ldots, R), \qquad (3.6.8)$$

where $\omega_\tau^\sigma = \omega_\tau^\sigma(x, u)$. Let us lead this assumption to a contradiction by proving that the equality of the ranks of the matrices $M$ and $\widetilde M$ implies that all $\omega_\tau^\sigma$ are constants. Then equations (3.6.8) would mean that the operators (3.6.7) are linearly dependent and hence do not provide a basis of $L_r^N$.

The equation

$$R(M) = R(\widetilde M) = R$$

shows that the prolonged operators should also be linearly connected with the same coefficients $\omega_\tau^\sigma$:

$$\widetilde X_\tau = \omega_\tau^\sigma \widetilde X_\sigma \qquad (\tau = R + 1, \ldots, r). \qquad (3.6.9)$$

Indeed, otherwise one could find a non-vanishing minor of the order $R + 1$ in the matrix $\widetilde M$. Let us write Eqs. (3.6.8) for the coordinates of the operators (3.6.7):

$$\xi_\tau^i = \omega_\tau^\sigma\,\xi_\sigma^i, \qquad \eta_\tau^k = \omega_\tau^\sigma\,\eta_\sigma^k. \qquad (3.6.10)$$

Taking into account Eqs. (3.6.10), we write the relations (3.6.9) in the coordinate form as follows:

$$\zeta_{\tau i}^k = \omega_\tau^\sigma\,\zeta_{\sigma i}^k$$

or, by virtue of the formulae (1.4.10) of Chapter 1,

$$D_i(\eta_\tau^k) - p_j^k\,D_i(\xi_\tau^j) = \omega_\tau^\sigma\big[D_i(\eta_\sigma^k) - p_j^k\,D_i(\xi_\sigma^j)\big].$$

However, equations (3.6.10) provide the equalities

$$D_i(\xi_\tau^j) = \omega_\tau^\sigma\,D_i(\xi_\sigma^j) + \xi_\sigma^j\,D_i(\omega_\tau^\sigma),$$
$$D_i(\eta_\tau^k) = \omega_\tau^\sigma\,D_i(\eta_\sigma^k) + \eta_\sigma^k\,D_i(\omega_\tau^\sigma),$$

by virtue of which the previous relation is written

$$\eta_\sigma^k\,D_i(\omega_\tau^\sigma) - p_j^k\,\xi_\sigma^j\,D_i(\omega_\tau^\sigma) = 0,$$

or

$$\eta_\sigma^k\,\frac{\partial\omega_\tau^\sigma}{\partial x^i} + \eta_\sigma^k\,\frac{\partial\omega_\tau^\sigma}{\partial u^l}\,p_i^l - p_j^k\,\xi_\sigma^j\,\frac{\partial\omega_\tau^\sigma}{\partial x^i} - p_j^k\,\xi_\sigma^j\,\frac{\partial\omega_\tau^\sigma}{\partial u^l}\,p_i^l = 0, \qquad (3.6.11)$$
where $i = 1, \ldots, n$; $k = 1, \ldots, m$; $\tau = R + 1, \ldots, r$. Since the functions $\xi$, $\eta$, $\omega$ are independent of the variables $p$, and (3.6.11) must be an identity with respect to the independent variables $x$, $u$, $p$, equations (3.6.11) are easily "split" with respect to the variables $p$, which leads to three series of equations:

$$\eta_\sigma^k\,\frac{\partial\omega_\tau^\sigma}{\partial x^i} = 0, \qquad \text{(a)}$$

$$\delta_i^j\,\eta_\sigma^k\,\frac{\partial\omega_\tau^\sigma}{\partial u^l} - \delta_l^k\,\xi_\sigma^j\,\frac{\partial\omega_\tau^\sigma}{\partial x^i} = 0, \qquad \text{(b)}$$

$$\xi_\sigma^j\,\frac{\partial\omega_\tau^\sigma}{\partial u^l} = 0. \qquad \text{(c)}$$
Let us consider the matrix $M_R = (\xi_\sigma^i, \eta_\sigma^k)$, similar to the matrix $M$ but composed of the coordinates of only the first $R$ operators (3.6.7). Let $\sigma$ number the columns of the matrix $M_R$, so that it has $R$ columns and $N$ rows. Since the group $G_r^N$ is intransitive, $R < N$, and there is a row in the matrix $M_R$ after eliminating which the remaining matrix $M_R'$ still has the rank $R$. Let it be, for example, the row with the number $i_0$ (the reasoning for a number $k_0$ is similar). If we let $i = i_0$ and $j \neq i_0$ in Eqs. (a), (b) and introduce the quantities

$$z_0^\sigma = \frac{\partial\omega_\tau^\sigma}{\partial x^{i_0}}\,,$$

we obtain the system of linear homogeneous equations

$$\eta_\sigma^k\,z_0^\sigma = 0, \qquad \xi_\sigma^j\,z_0^\sigma = 0 \quad (j \neq i_0).$$

The matrix of the latter system is exactly the matrix $M_R'$. Since there are exactly $R$ "unknowns" $z_0^\sigma$, the equality $R(M_R') = R$ provides that

$$z_0^\sigma = 0 \qquad (\sigma = 1, \ldots, R).$$

We let now $i = j = i_0$ in Eqs. (b) and denote

$$v^\sigma = \frac{\partial\omega_\tau^\sigma}{\partial u^l}\,.$$

Then, along with Eqs. (c), we obtain the following system of linear homogeneous equations with the matrix $M_R$:

$$\eta_\sigma^k\,v^\sigma = 0, \qquad \xi_\sigma^j\,v^\sigma = 0$$

for $v^\sigma$ ($\sigma = 1, \ldots, R$). Whence,

$$v^\sigma = 0 \qquad (\sigma = 1, \ldots, R)$$

as well. Finally, denoting

$$z^\sigma = \frac{\partial\omega_\tau^\sigma}{\partial x^i}$$

and taking $k = l$ in (b), one obtains the equations

$$\eta_\sigma^k\,z^\sigma = 0, \qquad \xi_\sigma^j\,z^\sigma = 0,$$

whence it follows that

$$z^\sigma = 0 \qquad (\sigma = 1, \ldots, R)$$

as above. This completes the proof of Lemma 3.7.
Note that the requirement of intransitivity of the group $G_r^N$ is essential. The following example demonstrates that Lemma 3.7 may fail for transitive groups. Consider $G_3^2$ with the operators

$$X_1 = \frac{\partial}{\partial x}\,, \qquad X_2 = \frac{\partial}{\partial u}\,, \qquad X_3 = x\,\frac{\partial}{\partial x} + u\,\frac{\partial}{\partial u}\,.$$

Prolonging these operators with respect to the function $u(x)$, one obtains

$$\widetilde X_\alpha = X_\alpha \qquad (\alpha = 1, 2, 3).$$

Therefore, $R(M) = R(\widetilde M) = 2$. However, the statement of Lemma 3.7 is not satisfied, since the operators $X_\alpha$ are linearly connected:

$$X_3 = xX_1 + uX_2.$$

This happens because the group $G_3^2$ under consideration is transitive.
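The counterexample is easy to verify symbolically: the first prolongation coefficients $\zeta = D(\eta) - p\,D(\xi)$ vanish for all three operators (a sympy sketch of ours):

```python
import sympy as sp

x, u, p = sp.symbols('x u p')   # p stands for the derivative u_x

# Operators of G_3^2 as pairs (xi, eta):
# X1 = d/dx, X2 = d/du, X3 = x d/dx + u d/du.
ops = [(sp.Integer(1), sp.Integer(0)),
       (sp.Integer(0), sp.Integer(1)),
       (x, u)]

def zeta(xi, eta):
    # First prolongation coefficient zeta = D(eta) - p*D(xi),
    # where D = d/dx + p d/du is the total derivative.
    D = lambda f: sp.diff(f, x) + p*sp.diff(f, u)
    return sp.expand(D(eta) - p*D(xi))

zetas = [zeta(xi, eta) for xi, eta in ops]
print(zetas)   # [0, 0, 0]: each prolonged operator coincides with the
# original one, so R(M) = R(M~) = 2 although X3 = x*X1 + u*X2.
```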

3.6.3 Theorem on reduction

Let us turn back to the problem of reduction of partially invariant $H$-solutions. It was shown that the system (S) splits into the system (P) and the system (S/H) for such solutions. We establish the reduction in the case when the following property of the system of equations holds.
Property 3.1. The system (S) is of the first order, and equations (P) make it possible to find expressions for all the derivatives of the first order $p_i^k$ ($i = 1, \ldots, n$; $k = 1, \ldots, m$).
Theorem 3.8. For any partially invariant $H$-solution of the rank $\rho$ for which Property 3.1 is satisfied, there is a subgroup $H' \subset H$ such that the solution is an invariant $H'$-solution of the rank $\rho' = \rho$.

Proof. Let $\Phi$ be the considered partially invariant $H$-solution and $\Phi_a$ the manifold derived from $\Phi$ under the action of the transformation $T_a \in H$. Let us write the equations for $\Phi_a$ in the form

$$\Phi_a:\quad u^k = \varphi^k(x, a) \qquad (k = 1, \ldots, m). \qquad (3.6.12)$$

Consider the Jacobi matrix

$$\left(\frac{\partial\varphi^k(x, a)}{\partial a^\alpha}\right) \qquad (3.6.13)$$

and assume that its general rank is equal to $\delta$. The number $\delta$ is the invariance defect of the $H$-solution $\Phi$. Indeed, if we assume, without loss of generality, that the rank minor of the matrix (3.6.13) is in the upper left corner, then the first $\delta$ equations of (3.6.12) allow one to express the parameters $a^\sigma$ ($\sigma = 1, \ldots, \delta$) via the variables $x$, $u$ and the remaining parameters, denoted here by $\bar a$. If one substitutes the resulting expressions into the remaining $m - \delta$ equations (3.6.12), then one obtains equations without the parameters $\bar a$ (otherwise, the rank of the matrix (3.6.13) would be higher than $\delta$). These are the equations of the smallest invariant manifold $\mathcal{M}$ containing the solution $\Phi$. The number of resulting equations is equal to $\mu = m - \delta$, and according to the definition (3.5.4) the invariance defect of the manifold $\Phi$ equals $N - (m - \delta) - n = \delta$. If $t$ is the number of functionally independent invariants of the complete set for the group $H$, then the rank of $\Phi$ equals $\rho = t - (m - \delta) = t + \delta - m$.
Differentiating Eqs. (3.6.12), one obtains expressions for all the derivatives

$$p_i^k = \frac{\partial\varphi^k(x, a)}{\partial x^i}\,.$$

If one substitutes the known expressions $a^\sigma = a^\sigma(x, u, \bar a)$ into the latter equations, then the result will contain no variables $\bar a$. Indeed, the resulting equations should be equivalent (on the solution $\Phi$) to the equations of the passive system (P) by virtue of Property 3.1. Furthermore, the system (P) is invariant with respect to the group $H$ and does not contain the parameters $a$. It follows that the rank of the matrix

$$\left(\frac{\partial\varphi^k(x, a)}{\partial a^\alpha}\,,\ \frac{\partial^2\varphi^k(x, a)}{\partial x^i\,\partial a^\alpha}\right)$$

is equal to the rank of the matrix (3.6.13), i.e. to $\delta$. By virtue of Lemma 3.6, one can conclude that there are functions $B^1(a), \ldots, B^\delta(a)$ such that the right-hand sides of (3.6.12) depend only on $x$ and the variables $B$. Therefore, the part of Eqs. (3.6.12) which is not invariant can be reduced to the form

$$\psi^\sigma(x, u) = B^\sigma(a) \qquad (\sigma = 1, \ldots, \delta). \qquad (3.6.14)$$

Note that according to the above assumption this part consists of the first $\delta$ equations of (3.6.12).
Consider the transformations $T_a \in H$ satisfying the system of equations

$$B^\sigma(a) = B^\sigma(0) \qquad (\sigma = 1, \ldots, \delta). \qquad (3.6.15)$$

Since $\Phi_0 = \Phi$, all such $T_a$ have the property of leaving the manifold $\Phi$ invariant. Further, if $\Phi$ is subjected first to the transformation $T_a$ and then to the transformation $T_b$ from the set of transformations with the property (3.6.15), then one obtains the transformation $T_bT_a = T_{\varphi(a,b)}$ having the property (3.6.15) again. Therefore, the set of all $T_a \in H$ with the property (3.6.15) is a group, namely a subgroup $H' \subset H$. Since all $T_a \in H'$ leave the manifold $\Phi$ invariant, we conclude that $\Phi$ is an invariant $H'$-solution. Note that (3.6.15) defines exactly an $(r - \delta)$-dimensional manifold in the parametric space of the group $H$, for otherwise $\mathcal{M}$ would not be the smallest manifold containing $\Phi$. Therefore, the order of the subgroup $H'$ is equal to $r' = r - \delta$.
Let us prove the statement about the rank. Let R be the rank of the matrix M
composed of the coordinates of the basis operators of the group H. Then the number
of invariants is t = N − R. Let us consider the prolonged group H̃. Since the
invariant equations (P) allow one to find expressions for all the derivatives p_i^k due to Property 3.1, the increase of the number of invariants in the transition from H to H̃ cannot
be smaller than the number of these variables p_i^k, i.e. mn. This follows from Theorem 3.3. But this increase cannot be greater than mn, for the dimension of the space
138 3 Group invariant solutions of differential equations

is increased by this number. Thus, the number of functionally independent invariants


of the prolonged group is equal to t˜ = t + mn: Whence, according to Theorem 3.1,
one obtains that the rank of the matrix M e from coordinates of the prolonged basis
e
operators of the group H equals to R = N + mn (t + mn) = R: Since the group H
is intransitive (δ < m); Lemma 3.7 shows that the basis operators of the group H
are linearly unconnected and hence, the order of the group is r = R: Basis operators
of the subgroup H 0  H are obviously linearly unconnected as well. Thus the order
of the subgroup H 0 is equal to r0 = R0 : Then, one obtains

ρ0 = n R0 = n r0 = n r+δ = n R+δ = ρ

for the rank ρ 0 of the invariant H 0 -solution of Φ ; which was to be proved.


As an example of application of Theorem 3.8, consider the equations of gas dynamics
(3.3.12) and find partially invariant solutions of the type 3° from the Table in
§ 3.5.3. Let us make an additional requirement: the desired H-solutions should be
irreducible to invariant H′-solutions with respect to a subgroup H′ ⊂ H. While it is
unknown on which groups H to search for such solutions, it is known that the smallest
invariant manifold M of the group H containing them must be given by two equations.
Consider the case when these equations can be solved with respect to the main
variables u, p:

u = f(t, x, ρ),   p = g(t, x, ρ),   (3.6.16)

so that the parametric variable is ρ. By virtue of Theorem 3.8, the above requirement
reduces to making it impossible to find both parametric derivatives ρ_t, ρ_x from
the equations resulting from the substitution of the expressions (3.6.16) into Eqs. (3.3.12).
Upon substituting (3.6.16) into (3.3.12), one obtains

(ρ² f_ρ² − g_ρ) ρ_x = ρ f_t + ρ( f − ρ f_ρ ) f_x + g_x ,
f_ρ (ρ g_ρ − γ g) ρ_x = g_t + f g_x − (ρ g_ρ − γ g) f_x ,   (3.6.17)
ρ_t + ( f + ρ f_ρ ) ρ_x + ρ f_x = 0.

It is impossible to find both derivatives ρ_t, ρ_x from these equations only if (in the
general case)

ρ² f_ρ² = g_ρ ,   ρ g_ρ = γ g.
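The elimination that produces (3.6.17) is routine to check with computer algebra. The sketch below is a minimal sympy verification; it assumes the one-dimensional polytropic gas dynamics system in the form ρ_t + uρ_x + ρu_x = 0, u_t + uu_x + p_x/ρ = 0, p_t + up_x + γpu_x = 0 (the exact form of (3.3.12) appears earlier in the book), and treats the partial derivatives of f and g as independent symbols.

```python
import sympy as sp

# rho and its derivatives, plus the partial derivatives of
# f(t, x, rho) and g(t, x, rho), treated as independent symbols.
r, rt, rx, gamma = sp.symbols('rho rho_t rho_x gamma')
f, ft, fx, fr = sp.symbols('f f_t f_x f_rho')
g, gt, gx, gr = sp.symbols('g g_t g_x g_rho')

# Chain rule for u = f(t, x, rho), p = g(t, x, rho):
ut, ux = ft + fr*rt, fx + fr*rx
pt, px = gt + gr*rt, gx + gr*rx

# Assumed form of the gas dynamics system (3.3.12):
cont = rt + f*rx + r*ux          # continuity
mom = ut + f*ux + px/r           # momentum
ener = pt + f*px + gamma*g*ux    # adiabaticity

# Eliminate rho_t via the continuity equation (the third equation
# of (3.6.17)) and inspect the coefficient of rho_x that remains:
rt_sol = sp.solve(cont, rt)[0]
mom2 = sp.expand(r*mom.subs(rt, rt_sol))
ener2 = sp.expand(ener.subs(rt, rt_sol))

# rho_x drops out of both equations exactly when
# rho**2*f_rho**2 == g_rho and rho*g_rho == gamma*g:
print(sp.expand(mom2.coeff(rx) - (gr - r**2*fr**2)))     # 0
print(sp.expand(ener2.coeff(rx) - fr*(gamma*g - r*gr)))  # 0
```

The two printed zeros confirm that the coefficients of ρ_x in the transformed momentum and adiabaticity equations are, up to sign, exactly ρ²f_ρ² − g_ρ and f_ρ(ρg_ρ − γg), as in (3.6.17).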
Let us limit our consideration to the case γ ≠ 0, 1, 3. Then the general solution of
the latter equations is written in the form

g = (a²/γ) ρ^γ ,   f = (2a/(γ−1)) ρ^{(γ−1)/2} + b,

where a = a(t, x) and b = b(t, x) are arbitrary functions. Substituting these expressions
into the first two equations (3.6.17), one can see that they reduce to

a_t = a_x = b_t = b_x = 0,
so that in fact a = const., b = const. Thus, the equations of invariant manifolds (3.6.16)
containing a partially invariant H-solution of the system (3.3.12) can only be of the
form

M :   u = (2a/(γ−1)) ρ^{(γ−1)/2} + b,   p = (a²/γ) ρ^γ,   (3.6.18)

provided that this solution is irreducible to an invariant one. Recall that G₆ is the
group generated by the operators (3.3.13). One can easily verify that subgroups
H ⊂ G₆ having (3.6.18) as an invariant manifold do indeed exist. The subgroup H of
the maximum order is H₄ and has the form

H₄ = H( X₁, X₂, X₄, X₅ − bX₃ + (2/(γ−1)) X₆ ).

Solutions of Eqs. (3.3.12) in which the variables are connected by the relations
(3.6.18) are known in gas dynamics as simple waves.
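Both defining conditions ρ²f_ρ² = g_ρ and ρg_ρ = γg can be confirmed by direct differentiation. One may also note a standard fact about polytropic gases (assumed here, not stated in the text): on the manifold (3.6.18) the quantity ρ ∂u/∂ρ coincides with the local sound speed c = √(γp/ρ), so that u − 2c/(γ−1) = b is constant, which is the classical simple-wave (Riemann invariant) relation. A minimal sympy check, with a and b treated as constants:

```python
import sympy as sp

rho, gamma, a, b = sp.symbols('rho gamma a b', positive=True)

# The manifold (3.6.18): u = f(rho), p = g(rho)
f = 2*a/(gamma - 1)*rho**((gamma - 1)/2) + b
g = a**2/gamma * rho**gamma

# Degeneracy conditions: rho^2 f_rho^2 = g_rho and rho g_rho = gamma*g
cond1 = sp.simplify(rho**2*sp.diff(f, rho)**2 - sp.diff(g, rho))
cond2 = sp.simplify(rho*sp.diff(g, rho) - gamma*g)

# Assumed polytropic sound speed c = sqrt(gamma*p/rho); then
# u - 2c/(gamma - 1) reduces to the constant b:
c = sp.sqrt(gamma*g/rho)
riemann = sp.simplify(f - 2*c/(gamma - 1) - b)

print(cond1, cond2, riemann)  # 0 0 0
```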

3.7 Some problems

To conclude the present lecture notes, let us discuss some problems useful for the
further development of the theory and applications of group properties of differential
equations.
Based on the fact that a theory is enriched by accumulating examples of its application,
we point out that it is desirable to have the admissible groups calculated for
as wide classes of partial differential equations as possible. In particular, the following
problems, unsolved so far, can be singled out.
Problem 3.1. Find the group admitted by an arbitrary linear differential equation
with constant coefficients

P(D)u = 0,

where D is the vector

D = ( ∂/∂x^1 , …, ∂/∂x^n ),

and P(D) is a polynomial with constant coefficients.
Problem 3.2. Find the group admitted by a system of linear partial differential
equations of the first order with constant coefficients.
Problem 3.3. Carry out the group classification of the equations of magnetohydrodynamics
in the three-dimensional case.
Problem 3.4. Find the group admitted by Einstein's equations of general relativity.

For some systems of equations the admissible group is already known, but there
is no complete classification of partially invariant solutions. In particular, this is the
case with equations of gas dynamics.
Problem 3.5. Classify partially invariant solutions of equations of gas dynamics in
two-dimensional and three-dimensional cases.
At present there are many examples of nonlinear systems of equations admitting
an infinite Lie algebra of operators. However, the issue of using an infinite Lie
algebra for constructing classes of partial solutions of such systems of equations has
not been sufficiently investigated.
Problem 3.6. Elaborate efficient algorithms for using an admissible infinite Lie algebra
to construct classes of partial solutions of the corresponding equations.
When classifying partial solutions, e.g. invariant H-solutions, we face the necessity of
constructing classes of similar subalgebras of a given Lie algebra. In some cases this
problem has been investigated; however, difficulties of calculation remain in applications.
Problem 3.7. Elaborate efficient algorithms for constructing classes of similar subalgebras
of a given Lie algebra.
Together with solutions derived on the basis of finite invariants of a group G_r^N,
one can pose the question of finding solutions with the help of differential invariants.
It is possible that this will enrich the stock of partial solutions provided by the
group properties of a system (S). This issue has probably not been investigated at all.
Problem 3.8. Develop a theory of differential invariant and partially differential
invariant solutions of differential equations with a known admissible group.
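As an illustration of the objects appearing in Problem 3.8 (a standard example, not taken from the text): for the rotation group with generator X = −y ∂/∂x + x ∂/∂y, the first prolongation extends X by the coefficient 1 + y′² along ∂/∂y′, and alongside the finite invariant x² + y² there is the first-order differential invariant (xy′ − y)/(x + yy′). Both are easily verified:

```python
import sympy as sp

x, y, y1 = sp.symbols('x y y1')   # y1 stands for y'

def X1(F):
    """First prolongation of the rotation generator X = -y d/dx + x d/dy;
    the standard prolongation formula yields the coefficient 1 + y1**2
    in front of d/dy1."""
    return -y*sp.diff(F, x) + x*sp.diff(F, y) + (1 + y1**2)*sp.diff(F, y1)

I0 = x**2 + y**2              # finite invariant
J = (x*y1 - y)/(x + y*y1)     # first-order differential invariant

print(sp.simplify(X1(I0)), sp.simplify(X1(J)))  # 0 0
```

Any function of I0 and J is then annihilated by the prolonged operator, which is exactly the raw material out of which differential invariant solutions are built.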
When searching for invariant H-solutions of the system (S), one obtains a new
system (S/H). A group admitted by the system (S/H) should be somehow connected
with the properties of the system (S), in particular, with the group G admitted
by the system (S). The only result that can be easily obtained is the following: if
H is a normal subgroup of G, then the system (S/H) admits the factor group G/H.
However, examples demonstrate that the most general group admitted by the system
(S/H) can be considerably wider than the factor group.
Problem 3.9. Develop methods for finding the group admitted by the system (S/H)
directly in terms of a system (S) and its admitted group H.
An important part in enumerating partially invariant solutions of a system (S)
is played, as we have already seen, by the properties of reduction of such solutions
to a smaller invariance defect. Only a particular case of such reduction, based on
Property 3.1, is mentioned in these lecture notes. The general situation with the reduction
of partially invariant solutions is not clear.
Problem 3.10. Find theorems on reduction for systems (S) and groups H with more
general properties than Property 3.1.
In applied theories connected with the solution of problems for differential equations,
a given system (S) is often modelled by a simpler system (S′). A thorough investigation
of particular cases of such modelling demonstrates that (S′) always appears
to admit a wider group than (S).
Problem 3.11. Find general principles of modelling a given system (S) by a simpler
system (S′) whose admitted group is wider than that of (S).
Practice shows that the calculation of operators admitted by specific systems (S) with
a large number of variables requires a great deal of almost mechanical work: writing out
and elementary transformations of a huge number of equations, in short, a lot of
elementary algebraic calculations. This part of the work can obviously be "delegated"
to computers.
Problem 3.12. Develop a computer algorithm of algebraic calculations for the maximum
simplification of systems of determining equations when calculating the operators
admitted by a given system (S).
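The kind of mechanical algebra meant here has since been automated in computer algebra systems. As a toy sketch (the ODE y′ = y/x and the candidate coefficients below are hypothetical illustrations, not from the text): for a first-order ODE y′ = F(x, y), Lie's determining equation for a point symmetry X = ξ ∂/∂x + η ∂/∂y reads η_x + (η_y − ξ_x)F − ξ_y F² − ξF_x − ηF_y = 0, and checking it for given ξ, η is exactly the sort of routine algebra a computer can take over:

```python
import sympy as sp

x, y = sp.symbols('x y')

def determining(F, xi, eta):
    """Lie's determining expression for y' = F(x, y); it vanishes
    identically iff X = xi d/dx + eta d/dy generates a point symmetry."""
    return sp.simplify(
        sp.diff(eta, x) + (sp.diff(eta, y) - sp.diff(xi, x))*F
        - sp.diff(xi, y)*F**2 - xi*sp.diff(F, x) - eta*sp.diff(F, y))

F = y/x                                     # toy ODE y' = y/x
print(determining(F, x, y))                 # scaling X = x d/dx + y d/dy: 0
print(determining(F, sp.Integer(0), x))     # X = x d/dy: 0
print(determining(F, sp.Integer(0), y**2))  # not a symmetry: nonzero
```

Solving, rather than merely checking, the determining equations is the harder step that Problem 3.12 addresses: one must split them with respect to the parametric derivatives and simplify the resulting overdetermined linear system.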