STUDENT MATHEMATICAL LIBRARY
Volume 91
An Introduction to Symmetric Functions and Their Combinatorics
Eric S. Egge
Editorial Board
Satyan L. Devadoss John Stillwell (Chair)
Rosa Orellana Serge Tabachnikov
Copying and reprinting. Individual readers of this publication, and nonprofit li-
braries acting for them, are permitted to make fair use of the material, such as to
copy select pages for use in teaching or research. Permission is granted to quote brief
passages from this publication in reviews, provided the customary acknowledgment of
the source is given.
Republication, systematic copying, or multiple reproduction of any material in this
publication is permitted only under license from the American Mathematical Society.
Requests for permission to reuse portions of AMS publication content are handled
by the Copyright Clearance Center. For more information, please visit www.ams.org/
publications/pubpermissions.
Send requests for translation rights and licensed reprints to reprint-permission
@ams.org.
© 2019 by the author. All rights reserved.
Printed in the United States of America.
∞ The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Visit the AMS home page at https://www.ams.org/
Contents
Preface
Bibliography
Index
Preface
cover the material more efficiently, but we hope this approach brings
the subject to life in a way a more concise treatment might not.
To get the most out of this book, we suggest reading actively, with
a pen and paper at hand. When we generate data to answer a ques-
tion, try to guess the answer yourself before reading ours. Generate
additional data of your own to support (or refute) your conjecture,
and to verify patterns you’ve observed. Similarly, we intend the ex-
amples to be practice problems. Trying to solve them yourself before
reading our solutions will strengthen your grasp of the core ideas, and
prepare you for the ideas to come.
Speaking of doing mathematics, we have also included a variety
of problems at the end of each chapter. Some of these problems are
designed to test and deepen the reader’s understanding of the ideas,
objects, and methods introduced in the chapter. Others give the
reader a chance to explore subjects related to those in the chapter
that we didn't have enough space to cover in detail. A few of the
problems of these types ask the reader to prove results which will be
used later in the book. Finally, some of the problems are there to
tell the reader about bigger results and ideas related to those in the
chapter. A creative and persistent reader will be able to solve many
of the problems, but those of this last type might require inventing
or reproducing entirely new methods and approaches.
This book has benefitted throughout its development from the
thoughtful and careful attention, ideas, and suggestions of a variety
of readers. First among these are the Carleton students who have
used versions of this book as part of a course or a senior capstone
project. The first of these students were Amy Becker ’11, Lilly Betke-
Brunswick ’11, Mary Bushman ’11, Gabe Davis ’11, Alex Evangelides
’11, Nate King ’11, Aaron Maurer ’11, Julie Michelman ’11, Sam
Tucker ’11, and Anna Zink ’11, who used this book as part of a senior
capstone project in 2010–11. Back then it wasn’t really a book; it
was just a skeletal set of lecture notes. Based on their feedback I
wrote an updated and more detailed version, which I used as a text
for a seminar in the combinatorics of symmetric functions in the fall
of 2013. The students in this seminar were Leo Betthauser ’14, Ben
Breen ’14, Cora Brown ’14, Greg Michel ’14, Dylan Peifer ’14, Kailee
Rubin ’14, Alissa Severson ’14, Aaron Suiter ’15, Jon Ver Steegh
Eric S. Egge
July 2019
Chapter 1
Symmetric Polynomials, the Monomial Symmetric Polynomials, and Symmetric Functions
Proof. (i) This follows from the fact that π permutes the subscripts
of the variables without changing any coefficients.
(ii) This follows from the fact that if f is a sum of monomials tj ,
then π(f ) is the sum of the monomials π(tj ).
(iii) Suppose first that f and g are each a single term, so that
f(X_n) = a x_1^{a_1} ⋯ x_n^{a_n} and g(X_n) = b x_1^{b_1} ⋯ x_n^{b_n} for constants a, b, a_1, …, a_n, b_1, …, b_n. Then we have
π(fg) = ab x_{π(1)}^{a_1+b_1} ⋯ x_{π(n)}^{a_n+b_n}
      = (a x_{π(1)}^{a_1} ⋯ x_{π(n)}^{a_n})(b x_{π(1)}^{b_1} ⋯ x_{π(n)}^{b_n})
      = π(f) π(g),
and the result holds in this case. To show the result holds in general, suppose f(X_n) = Y_1 + ⋯ + Y_k and g(X_n) = Z_1 + ⋯ + Z_m, where each Y_j and each Z_l is a monomial in x_1, …, x_n. Using (ii) and the fact that (iii) holds for monomials, we find
π(fg) = π( (Σ_{j=1}^{k} Y_j)(Σ_{l=1}^{m} Z_l) )
      = π( Σ_{j=1}^{k} Σ_{l=1}^{m} Y_j Z_l )
      = Σ_{j=1}^{k} Σ_{l=1}^{m} π(Y_j Z_l)
      = Σ_{j=1}^{k} Σ_{l=1}^{m} π(Y_j) π(Z_l)
      = ( Σ_{j=1}^{k} π(Y_j) )( Σ_{l=1}^{m} π(Z_l) )
      = π( Σ_{j=1}^{k} Y_j ) π( Σ_{l=1}^{m} Z_l )
      = π(f) π(g),
Solution. The monomial x_1^2 x_2 has only one image (other than itself) under the elements of S_2, namely x_1 x_2^2. Therefore,
m_{21}(X_2) = x_1^2 x_2 + x_1 x_2^2.
Notice that if we hold λ constant and increase the number of variables, then we get more images of x_1^2 x_2, and therefore a new monomial symmetric polynomial, namely
m_{21}(X_3) = x_1^2 x_2 + x_1^2 x_3 + x_1 x_2^2 + x_1 x_3^2 + x_2^2 x_3 + x_2 x_3^2.
The length of the partition (3^2, 1^2) is greater than three, so m_{3311}(X_3) is 0. The requirement that we take only distinct images of our monomial comes into play when we compute m_{3311}(X_4), which turns out to be
x_1^3 x_2^3 x_3 x_4 + x_1^3 x_2 x_3^3 x_4 + x_1^3 x_2 x_3 x_4^3 + x_1 x_2^3 x_3^3 x_4 + x_1 x_2^3 x_3 x_4^3 + x_1 x_2 x_3^3 x_4^3. □
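A reader who wants to experiment with monomial symmetric polynomials can generate them mechanically. The following sketch (the function name is ours, not notation from the text) builds m_λ(X_n) by taking the distinct rearrangements of the exponent vector, which is exactly the "distinct images" requirement above:

```python
from itertools import permutations

def monomial_sym(lam, n):
    """m_lambda(X_n) as a dict mapping exponent tuples to coefficients.

    Each key (a_1, ..., a_n) stands for x_1^{a_1} ... x_n^{a_n}; summing
    over *distinct* rearrangements makes every coefficient 1.
    """
    if len(lam) > n:                       # l(lambda) > n, so m_lambda(X_n) = 0
        return {}
    base = tuple(lam) + (0,) * (n - len(lam))
    return {p: 1 for p in set(permutations(base))}

print(sorted(monomial_sym((2, 1), 2)))     # [(1, 2), (2, 1)]: the two terms above
print(len(monomial_sym((2, 1), 3)))        # 6 terms, as in m_21(X_3)
print(len(monomial_sym((3, 3, 1, 1), 4)))  # 6 distinct images, as computed above
```

For m_{3311}(X_3) the function returns the empty sum, matching the observation that the length of (3^2, 1^2) exceeds three.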
1.2. The Monomial Symmetric Polynomials
Proof. When n < l(λ), we have mλ (Xn ) = 0, in which case the result
is clear, so we assume n ≥ l(λ).
In view of Proposition C.5, it is sufficient to show σ_j(m_λ(X_n)) = m_λ(X_n) for all j with 1 ≤ j ≤ n−1, where σ_j is the adjacent transposition (j, j+1). To do this, it is sufficient to show that every term x_1^{µ_1} ⋯ x_n^{µ_n} has the same coefficient in σ_j(m_λ(X_n)) as it has in m_λ(X_n). Note that the coefficient of x_1^{µ_1} ⋯ x_n^{µ_n} in m_λ(X_n) is 1 if µ_1, …, µ_n is a reordering of λ_1, …, λ_n and 0 otherwise. On the other hand, the coefficient of x_1^{µ_1} ⋯ x_n^{µ_n} in σ_j(m_λ(X_n)) is the coefficient of x_1^{µ_1} ⋯ x_j^{µ_{j+1}} x_{j+1}^{µ_j} ⋯ x_n^{µ_n} in m_λ(X_n). Furthermore, this coefficient is 1 if µ_1, …, µ_{j+1}, µ_j, …, µ_n is a reordering of λ_1, …, λ_n and 0 otherwise. But µ_1, …, µ_n is a reordering of λ_1, …, λ_n if and only if µ_1, …, µ_{j+1}, µ_j, …, µ_n is a reordering of λ_1, …, λ_n, and the result follows. □
{mλ (Xn ) ∣ λ ⊢ k}
When n = 3 we have
m_{11}(X_3) m_{21}(X_3) = x_1^3 x_2^2 + x_1^2 x_2^3 + x_1^3 x_3^2 + x_1^2 x_3^3 + x_2^3 x_3^2 + x_2^2 x_3^3
+ 2 x_1^3 x_2 x_3 + 2 x_1 x_2^3 x_3 + 2 x_1 x_2 x_3^3
+ 2 x_1^2 x_2^2 x_3 + 2 x_1^2 x_2 x_3^2 + 2 x_1 x_2^2 x_3^2.
for any j ≥ 1.
Corollary 1.14. For any partitions λ and µ, any ν ⊢ |λ| + |µ|, and any n ≥ 1, let the numbers a^ν_{λ,µ}(n) be defined by
Comparing this last line with equation (1.1) and using the fact that the monomial symmetric polynomials are a basis for Λ(X_n), we see a^ν_{λ,µ}(n+j) = a^ν_{λ,µ}(n), which is what we wanted to prove. □
as the next two examples suggest, we can also multiply formal power
series.
Before we combine like terms, the terms in f g are exactly the products
of one term from f and one term from g. As a result, the only terms
in f g with nonzero coefficients are those of the form xm xj xj+1 , where
m < j − 1 or m > j + 2, those of the form x2j xj+1 , those of the form
xj x2j+1 , and those of the form xj xj+1 xj+2 . We can only obtain terms
of the first form in one way: choose xj xj+1 from f and xm from g.
Therefore, each of these terms has coefficient 1. Similarly, we can
only obtain terms of the second and third forms in one way, so each
of these also has coefficient 1. But we can obtain a term of the form
xj xj+1 xj+2 in two ways: choose xj xj+1 from f and xj+2 from g, or
choose xj+1 xj+2 from f and xj from g. Therefore, each of these terms
has coefficient 2. Combining these observations, we can express f g as
f g = Σ_{j=1}^{∞} Σ_{m=1, m≠j−1, m≠j+2}^{∞} x_j x_{j+1} x_m + 2 Σ_{j=1}^{∞} x_j x_{j+1} x_{j+2}. □
so m21 = m2 m1 − m3 . □
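Relations like m_{21} = m_2 m_1 − m_3 are easy to confirm by direct computation if we store a symmetric polynomial as a dictionary from exponent vectors to coefficients, since multiplying two terms simply adds their exponent vectors. A minimal sketch (all names ours), checked in five variables:

```python
from itertools import permutations

def m(lam, n):
    """m_lambda in n variables, as {exponent tuple: coefficient}."""
    if len(lam) > n:
        return {}
    base = tuple(lam) + (0,) * (n - len(lam))
    return {p: 1 for p in set(permutations(base))}

def mul(f, g):
    """Multiply two polynomials stored as exponent-dicts."""
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            e = tuple(s + t for s, t in zip(a, b))
            h[e] = h.get(e, 0) + ca * cb
    return h

def sub(f, g):
    """Subtract g from f, dropping zero coefficients."""
    h = dict(f)
    for b, cb in g.items():
        h[b] = h.get(b, 0) - cb
        if h[b] == 0:
            del h[b]
    return h

n = 5  # enough variables that no monomials collapse
assert sub(mul(m((2,), n), m((1,), n)), m((3,), n)) == m((2, 1), n)
print("m21 = m2*m1 - m3 checked in", n, "variables")
```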
1.4. Problems
1.1. Find all permutations π ∈ S_4 for which π(f) = f, where f(X_4) = x_1 x_2^2 x_4^4 + x_2^2 x_3^4 x_4 + x_1^4 x_2^2 x_3.
1.2. Find all permutations π ∈ S_4 for which π(f) = f, where f(X_4) = x_1 x_2 x_3^2 x_4^2 + x_1 x_2^2 x_3 x_4^2 + x_1^2 x_2 x_3^2 x_4 + x_1^2 x_2^2 x_3 x_4.
1.3. Find a polynomial f in x_1, x_2, x_3 which has π(f) = sgn(π) f for all π ∈ S_3, and which has x_1^7 x_3^2 as one of its terms.
1.4. For any set P of polynomials in x1 , . . . , xn , let Sn (P ) be the set
of permutations π ∈ Sn such that π(f ) = f for all f ∈ P . Prove
that for every P , the following hold.
(a) Sn (P ) is nonempty.
(b) Sn (P ) is closed under multiplication of permutations. That
is, if π, σ ∈ Sn (P ), then πσ ∈ Sn (P ).
(c) Sn (P ) is closed under taking inverses. That is, if π ∈
Sn (P ), then π −1 ∈ Sn (P ).
1.5. Prove or disprove: if P is a set of polynomials in x1 , . . . , xn ,
then for any permutation π ∈ Sn and any σ ∈ Sn (P ), we have
πσπ −1 ∈ Sn (P ).
1.6. For any set T ⊆ Sn , let Fix(T ) be the set of polynomials f (Xn )
such that τ (f ) = f for all τ ∈ T . Prove that for every T , the
following hold.
(a) Fix(T ) is a subspace of the vector space of polynomials in
x1 , . . . , xn with coefficients in Q.
(b) Fix(T ) is closed under multiplication. That is, if f ∈
Fix(T ) and g ∈ Fix(T ), then f g ∈ Fix(T ).
1.7. Prove or disprove: if T ⊆ Sn , then for any polynomial f ∈ Fix(T )
and any polynomial g(Xn ), we have f g ∈ Fix(T ).
1.8. Prove, or disprove and salvage: for any n ≥ 1 and any set T ⊆ Sn ,
we have T = Sn (Fix(T )).
1.9. Prove, or disprove and salvage: for any n ≥ 1 and any set P of
polynomials in x1 , . . . , xn , we have P = Fix(Sn (P )).
1.10. For any n ≥ 1, any k ≥ 0, and any T ⊆ Sn , let Fixk (T ) be the
set of polynomials in Fix(T ) which are homogeneous of degree
Σ_{λ} m_λ
1.5. Notes
The background on formal power series we have developed here will
be enough for our work with symmetric functions. However, more
information is available in a variety of sources, including [Loe17, Ch.
7], [Niv69], and [Wil05, Ch. 2].
Chapter 2
The Elementary,
Complete Homogeneous,
and Power Sum
Symmetric Functions
f (z) = (z − x1 )(z − x2 ) ⋅ ⋅ ⋅ (z − xn ).
k   coefficient of z^k
0   x_1 x_2 x_3 x_4
1   −(x_1 x_2 x_3 + x_1 x_2 x_4 + x_1 x_3 x_4 + x_2 x_3 x_4)
2   x_1 x_2 + x_1 x_3 + x_1 x_4 + x_2 x_3 + x_2 x_4 + x_3 x_4
3   −(x_1 + x_2 + x_3 + x_4)
e_k = Σ_{J ⊆ P, |J| = k} Π_{j ∈ J} x_j.
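The description of e_k as a sum over k-element subsets translates directly into code; in finitely many variables there is one squarefree monomial per subset. A minimal sketch (names ours):

```python
from itertools import combinations
from math import comb

def e(k, n):
    """e_k(X_n): one squarefree monomial for each k-subset of [n],
    represented here by the sorted tuple of its variable subscripts."""
    return [tuple(J) for J in combinations(range(1, n + 1), k)]

assert len(e(2, 4)) == comb(4, 2) == 6     # x1x2, x1x3, x1x4, x2x3, x2x4, x3x4
assert e(5, 4) == []                       # e_k(X_n) = 0 when k > n
print(e(2, 3))                             # [(1, 2), (1, 3), (2, 3)]
```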
Example 2.5. Write e21 (X3 ), e21 (X4 ), and e21 (X5 ) as linear com-
binations of monomial symmetric polynomials, and then write e21 as
a linear combination of monomial symmetric functions.
Similarly, for any partitions λ and µ with |λ| = |µ|, let M_{λ,µ}(e, m) be the rational number defined by
(2.4) e_λ = Σ_{µ ⊢ |λ|} M_{λ,µ}(e, m) m_µ.
[Figure 2.3: the three ways to place the two 1's in the Ferrers diagram of (4, 2, 1).]
30 2. The Elementary, Complete, and Power Sum Bases
µ            coefficient of m_µ
(3, 2, 1^2)   1
(3, 1^4)      4
(2^3, 1)      3
(2^2, 1^3)    11
(2, 1^5)      35
(1^7)         105
2.1. The Elementary Symmetric Functions
Proof. Let A be the p(k) × p(k) matrix whose rows and columns
are indexed by the partitions of k, in lexicographic order from small-
est to largest, and whose entries are given by Aλµ = Mλ′ ,µ (e, m).
By Proposition 2.10, A is a lower triangular matrix whose diagonal
entries are all equal to 1, so det A = 1 and A is invertible. Since
eλ′ = ∑µ⊢k Aλµ mµ , each monomial symmetric function mµ is a lin-
ear combination of elementary symmetric functions, and {eλ ∣ λ ⊢ k}
spans Λk by Proposition 1.12. But dim Λk = p(k) = ∣{eλ ∣ λ ⊢ k}∣, so
{eλ ∣ λ ⊢ k} must also be linearly independent. Therefore {eλ ∣ λ ⊢ k}
is a basis, which is what we wanted to prove. □
      (1)
 (1) [ 1 ]

       (1^2) (2)
(1^2) [  2    1 ]
 (2)  [  1    0 ]
In Figure 2.5 we have the matrices Aλµ = Mλ′ ,µ (e, m) from the
proof of Corollary 2.11 for ∣λ∣ = ∣µ∣ ≤ 5.
Our solution to Example 2.7 also suggests a combinatorial in-
terpretation of Mλ,µ (e, m) involving fillings of the Ferrers diagram
of λ. As we show next, this interpretation holds in general, and we
can rephrase it in terms of matrices of 0’s and 1’s to get another
description of Mλ,µ (e, m).
Proof. (i) As above, we first note that M_{λ,µ}(e, m) is the coefficient of the term Π_{m=1}^{l(µ)} x_m^{µ_m} in e_λ = Π_{j=1}^{l(λ)} e_{λ_j}. With this in mind, suppose that for each j we have a term t_j from e_{λ_j} and Π_{j=1}^{l(λ)} t_j = Π_{m=1}^{l(µ)} x_m^{µ_m}. Then we can construct a filling of the Ferrers diagram of λ of the given type by placing, for 1 ≤ j ≤ l(λ), the subscripts of the variables which appear in t_j in increasing order across the jth row of the diagram. We can also invert this process: if we have a filling of the given type, then for each j with 1 ≤ j ≤ l(λ) we can reconstruct t_j as the product Π_k x_k, which is over all k which appear in the jth row of the filling. Therefore, we have a bijection between terms of the form Π_{m=1}^{l(µ)} x_m^{µ_m} in the product e_{λ_1} ⋯ e_{λ_{l(λ)}} and our fillings of the Ferrers diagram of λ. Now the result follows.
(ii) Given a filling of the Ferrers diagram of λ as in part (i), place
a 1 in the mth entry of the jth column of a k × k matrix A whenever
row j of the given filling contains an m, and let the remaining entries
of A all be 0. By construction the sum of the entries in the jth column
of A will be λj . And since m appears exactly µm times in our given
filling, the sum of the entries in the mth row of A will be µm . We
can also invert this construction: if we have a k × k matrix A of the
given type, then for each j with 1 ≤ j ≤ l(λ) the entries in the jth
row of the associated filling will be the numbers of the rows in which
the jth column of A contains a 1. Now we have a bijection between
fillings of the Ferrers diagram of λ of the type given in part (i) and
matrices of the type given in part (ii), and the result follows.
(iii) Given a filling of the Ferrers diagram of λ as in part (i),
place a ball of type m in urn j whenever m appears in row j. By
construction urn j will contain λj balls for each j. And since m
appears exactly µm times in our given filling, we will use exactly µm
balls of type m. We can also invert this construction: if we have an
urn filling of the given type, then for each j with 1 ≤ j ≤ l(λ) the
entries in the jth row of the associated filling will be the numbers on
the balls in urn j. Now we have a bijection between fillings of the
Ferrers diagram of λ of the type given in part (i) and urn fillings of
the type given in part (iii), and the result follows. □
[Figure 2.6: the filling of the Ferrers diagram of (3, 2, 2) whose bottom row is 1 2 4, whose middle row is 2 3, and whose top row is 1 5.]
Example 2.13. Let λ = (3, 2, 2), and let F be the filling of the Ferrers
diagram for λ given in Figure 2.6. Find the corresponding 7×7 matrix
and way of placing seven balls in three urns described in Proposition
2.12.
⎛1 0 1 0 0 0 0⎞
⎜1 1 0 0 0 0 0⎟
⎜0 1 0 0 0 0 0⎟
⎜1 0 0 0 0 0 0⎟
⎜0 0 1 0 0 0 0⎟
⎜0 0 0 0 0 0 0⎟
⎝0 0 0 0 0 0 0⎠.
[Figure 2.7: urn 1 contains balls of types 1, 2, and 4; urn 2 contains balls of types 2 and 3; urn 3 contains balls of types 1 and 5.]
Similarly, since the first row of F contains 1, 2, and 4, the first urn
in our placement of balls in urns will contain balls of types 1, 2, and
4. Continuing in the same way, we find the associated placement of
balls in urns is as in Figure 2.7. □
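The bijection of Proposition 2.12(ii) is simple enough to automate. The following sketch (function name ours) rebuilds the 0-1 matrix above from the filling of Figure 2.6 and confirms that its column sums give λ and its row sums give µ:

```python
def filling_to_matrix(rows, k):
    """Build the k x k 0-1 matrix of Proposition 2.12(ii):
    entry (m, j) is 1 exactly when row j of the filling contains m
    (rows and entries are 1-indexed, as in the text)."""
    A = [[0] * k for _ in range(k)]
    for j, row in enumerate(rows):
        for m in row:
            A[m - 1][j] = 1
    return A

# The filling of (3, 2, 2) from Figure 2.6: rows 1 2 4, then 2 3, then 1 5.
rows = [(1, 2, 4), (2, 3), (1, 5)]
A = filling_to_matrix(rows, 7)
assert [sum(col) for col in zip(*A)] == [3, 2, 2, 0, 0, 0, 0]  # column sums: lambda
assert [sum(r) for r in A] == [2, 2, 1, 1, 1, 0, 0]            # row sums: mu
print("matrix of Example 2.13 reconstructed")
```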
The matrices with entries Mλ,µ (e, m) in Figure 2.5 all have a nice
symmetry: if you reflect any of them over the diagonal from the lower
left corner to the upper right corner, then the matrix is unchanged.
We can use one of our interpretations of Mλ,µ (e, m) in Proposition
2.12 to give an easy proof that this holds in general.
Proof. For any partitions λ, µ ⊢ k, let Bλ,µ be the set of k×k matrices
in which every entry is 0 or 1, the sum of the entries in row m is µm
for all m, and the sum of the entries in column j is λj for all j. By
Proposition 2.12(ii) we have ∣Bλ,µ ∣ = Mλ,µ (e, m). The result follows
from the fact that the transpose map is a bijection between Bλ,µ and
Bµ,λ . □
There is much we could say about which sets are algebraically in-
dependent and which are not. For example, it is easy to construct a
set which is not algebraically independent by starting with a nonzero
symmetric function and then including some polynomial function of
that symmetric function. For instance, any set including both e_2 and e_2^3 − 6e_2 is not algebraically independent. But as our next example
shows, there are more complicated ways for a set to fail to be alge-
braically independent.
Example 2.16. Show that the symmetric functions
m1 , m11 + m2 , m21 + m3 , m31 + m4 , m41 + m5 , . . .
are not algebraically independent.
interesting. This suggests we should also consider a sum over all sub-
sets of [n] in which repeated elements are allowed. To distinguish this
sum from the corresponding sum over all subsets of [n], we will write
[[n]] to denote the multiset {1∞ , 2∞ , . . . , n∞ } with infinitely many
copies of each element of [n], and we will write J ⊆ [[n]] to mean J
is a submultiset of [[n]]. Similarly, we will write [P] to denote the
multiset consisting of infinitely many copies of each positive integer.
Solution. For every λ ⊢ k, one of the terms in h_k is x_1^{λ_1} ⋯ x_{l(λ)}^{λ_{l(λ)}}, and this term has coefficient 1. Therefore,
(2.9) h_k = Σ_{λ ⊢ k} m_λ. □
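In finitely many variables, h_k is a sum over size-k submultisets of [n], so there is one term per multiset, and (2.9) says these terms are exactly the monomials of the various m_λ combined. A minimal sketch (names ours):

```python
from itertools import combinations_with_replacement
from math import comb

def h(k, n):
    """h_k(X_n): one monomial for each size-k submultiset of [n],
    represented by the weakly increasing tuple of its variable subscripts."""
    return list(combinations_with_replacement(range(1, n + 1), k))

# Every degree-k monomial in n variables appears exactly once, so h_k(X_n)
# has C(n+k-1, k) terms in total.
assert len(h(3, 3)) == comb(5, 3) == 10
print(h(2, 3))   # [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```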
Similarly, for any partitions λ and µ with ∣λ∣ = ∣µ∣, let Mλ,µ (h, m) be
the rational numbers defined by
In Example 2.20 we found Mλ,µ (h, m) for all λ and µ with ∣λ∣ =
∣µ∣ = 3. In Figure 2.8 we summarize the results of Example 2.20
in a matrix, along with the corresponding matrices for ∣λ∣ = ∣µ∣ ≤
5. Problem 2.17 has combinatorial interpretations of these numbers
analogous to the interpretations of Mλ,µ (e, m) in Proposition 2.12.
As the matrices in Figure 2.8 suggest, the matrix we get when
we express the complete homogeneous symmetric functions in terms
of the monomial symmetric functions is not triangular in general.
(But you should take a look at its determinant! See Problem 2.23.)
As a result, it is not as easy to show the complete homogeneous
symmetric functions form a basis for Λk as it was to do this for the
elementary symmetric functions. But we can do it by using a simple relationship between the ordinary generating function for {e_n}_{n=0}^{∞} and the ordinary generating function for {h_n}_{n=0}^{∞}.
      (1)
 (1) [ 1 ]

       (1^2) (2)
(1^2) [  2    1 ]
 (2)  [  1    1 ]
where the sum is over all ordered pairs (J_1, J_2) in which J_1 is a subset
of P of size j and J2 is a submultiset of [P] of size n − j. If we set
sgn(J1 , J2 ) = sgn(J1 ) for each ordered pair (J1 , J2 ), then we just need
to give an involution I on these ordered pairs such that I has no fixed
points and sgn(I(J1 , J2 )) = − sgn(J1 , J2 ).
To construct I, suppose (J1 , J2 ) is given. Among all of the ele-
ments of J1 ∪ J2 , let k be the smallest. If k ∈ J1 , then set I(J1 , J2 ) ∶=
(J1 − {k}, J2 ∪ {k}), and if k ∈/ J1 , then set I(J1 , J2 ) ∶= (J1 ∪ {k},
J2 − {k}). In words, move one copy of k from J2 to J1 if you can, and
otherwise move k from J1 to J2 .
Moving the smallest element of J1 ∪ J2 between J1 and J2 does
not change the fact that it is the smallest element in J1 ∪ J2 , and our
construction guarantees J1 is always a set (rather than a multiset),
so we see I(I(J1 , J2 )) = (J1 , J2 ). Furthermore, since I changes the
size of J1 by one, it can have no fixed points, and sgn(I(J1 , J2 )) =
− sgn(J1 , J2 ), as desired. □
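The cancellation this involution performs can also be checked numerically: for every n ≥ 1 the signed sums Σ_{j=0}^{n} (−1)^j e_j h_{n−j} vanish, which is the identity the pairs (J_1, J_2) encode. A sketch (names and sample values ours) evaluating both families at an arbitrary rational point, so the check is exact at that point:

```python
from itertools import combinations, combinations_with_replacement
from fractions import Fraction
from math import prod

# Arbitrary rational test point; exact arithmetic, so no rounding issues.
x = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 7), Fraction(3, 5)]

def e(j):
    """e_j evaluated at x: sum over j-element subsets."""
    return sum((prod(c) for c in combinations(x, j)), Fraction(0))

def h(j):
    """h_j evaluated at x: sum over j-element submultisets."""
    return sum((prod(c) for c in combinations_with_replacement(x, j)), Fraction(0))

# sum_{j=0}^{n} (-1)^j e_j h_{n-j} = 0 for every n >= 1
for n in range(1, 7):
    assert sum((-1) ** j * e(j) * h(n - j) for j in range(n + 1)) == 0
print("alternating e-h identity verified for n = 1..6")
```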
and
p_k = Σ_{j=1}^{∞} x_j^k.
For any partition λ, the power sum symmetric polynomial p_λ(X_n) indexed by λ is
(2.13) p_λ(X_n) = Π_{j=1}^{l(λ)} p_{λ_j}(X_n),
Similarly, for any nonempty partitions λ and µ with ∣λ∣ = ∣µ∣, let
Mλ,µ (p, m) be the rational numbers defined by
(2.15) pλ = ∑ Mλ,µ (p, m)mµ .
µ⊢∣λ∣
Solution. Since p_k has only terms of the form x_j^k, and each term has coefficient 1, we see p_k = m_k. □
p_λ = p_{λ_1} ⋯ p_{λ_{l(λ)}},
and we note that M_{λ,µ}(p, m) is the coefficient of x_1^{µ_1} ⋯ x_{l(µ)}^{µ_{l(µ)}} in this product. Each term in the expansion of this product is the product of one term from each p_{λ_j}, and each such term has the form x_m^{λ_j} for some m. If all of these m are distinct, then λ = µ. If two of these m are equal, then µ is obtained from λ by merging parts and rearranging the resulting numbers into weakly decreasing order, in which case µ >_lex λ, and the result follows.
(ii) Note that in the expansion of the product
p_λ = p_{λ_1} ⋯ p_{λ_{l(λ)}},
2.3. The Power Sum Symmetric Functions
      (1)
 (1) [ 1 ]

       (1^2) (2)
(1^2) [  2    1 ]
 (2)  [  0    1 ]
we can obtain the term x_1^{λ_1} ⋯ x_{l(λ)}^{λ_{l(λ)}} by choosing, for each j, the term x_j^{λ_j} from the factor p_{λ_j}. Since the coefficients in p_{λ_j} are all positive for all j, the coefficient of x_1^{λ_1} ⋯ x_{l(λ)}^{λ_{l(λ)}} is at least 1, so M_{λ,λ}(p, m) > 0. □
(2.16) Σ_{n=1}^{∞} (p_n / n) t^n = log( Π_{j=1}^{∞} 1/(1 − x_j t) ).

P(t) = Σ_{n=1}^{∞} (p_n / n) t^n
     = Σ_{n=1}^{∞} Σ_{j=1}^{∞} (1/n) (x_j t)^n
     = Σ_{j=1}^{∞} log( 1/(1 − x_j t) )
     = log( Π_{j=1}^{∞} 1/(1 − x_j t) ),
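With finitely many variables and |x_j t| < 1, both sides of (2.16) are convergent numerical series, so we can test the identity at a sample point by truncating the left side. A sketch (the sample values are ours):

```python
import math

# Finitely many variables, |x_j t| < 1, so the left side converges quickly.
x = [0.1, 0.25, 0.4]
t = 0.5

def p(k):
    """p_k evaluated at x."""
    return sum(xj ** k for xj in x)

lhs = sum(p(n) / n * t ** n for n in range(1, 200))   # truncated series
rhs = sum(math.log(1 / (1 - xj * t)) for xj in x)
assert abs(lhs - rhs) < 1e-12
print("identity (2.16) verified at the sample point")
```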
2.4. Problems
2.1. Compute Mλ,µ (e, m) for λ = (22 , 14 ) and µ = (32 , 12 ).
2.2. Find and prove a formula for Mλ,1n (e, m), where λ ⊢ n.
2.3. Find and prove a formula for Mλ,n (e, m), where λ ⊢ n.
2.4. Find and prove a formula for Mλ,µ (e, m), where λ ⊢ n and
µ = (n − 1, 1).
2.5. Show ≥lex (which means “>lex or =”) is a partial ordering on the
set of partitions.
2.6. Show that if λ and µ are distinct partitions, then λ >lex µ or
µ >lex λ. That is, show the lexicographic ordering is a total
ordering on the set of partitions.
2.7. Prove or disprove: if λ and µ are partitions with ∣λ∣ = ∣µ∣ and
λ >lex µ, then µ′ >lex λ′ .
2.8. Show that the converse of Proposition 2.10(i) is false. In partic-
ular, find partitions λ and µ with ∣λ∣ = ∣µ∣ for which Mλ,µ (e, m) =
0 even though λ′ >lex µ.
2.9. Suppose λ and µ are partitions. We say λ is greater than or equal to µ in the dominance order, and we write λ ⊵ µ or µ ⊴ λ, whenever Σ_{j=1}^{n} λ_j ≥ Σ_{k=1}^{n} µ_k for all n ≥ 0. Show that M_{λ,µ}(e, m) ≠ 0 if and only if |λ| = |µ| and λ′ ⊵ µ.
2.10. Show ⊴ is a partial ordering on the set of partitions of n for all
n ≥ 0.
2.11. For each n ≥ 6, find partitions λ ⊢ n and µ ⊢ n for which neither
λ ⊴ µ nor µ ⊴ λ.
2.12. Prove or disprove: if λ and µ are partitions of n and λ ⊵ µ, then
λ ≥lex µ.
2.13. Prove or disprove: if λ and µ are partitions with ∣λ∣ = ∣µ∣ and
λ ⊵ µ, then λ′ ⊴ µ′ .
2.14. Suppose {f_n}_{n=1}^{∞} is an algebraically independent set of symmetric functions, each f_n is homogeneous, and deg f_n ≤ deg f_{n+1} for all n ≥ 1. Furthermore, suppose every symmetric function can be written as a polynomial in {f_n}_{n=1}^{∞}. Show deg f_n = n for all n ≥ 1.
Chapter 3
Interlude: Evaluations of Symmetric Functions
Proof. To prove equation (3.1), note that we have two kinds of terms
in ek (Xn ): those with xn as a factor and those without xn as a factor.
Those without xn as a factor form ek (Xn−1 ), and when we take out
the common factor xn from those terms with xn as a factor, we obtain
ek−1 (Xn−1 ).
The proof of equation (3.2) is Problem 3.1. □
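The recurrence (3.1) is easy to test with exact rational arithmetic; a minimal sketch (names and sample values ours):

```python
from itertools import combinations
from fractions import Fraction
from math import prod

x = [Fraction(i, i + 1) for i in range(1, 6)]   # sample values for x_1, ..., x_5

def e(k, n):
    """e_k evaluated at x_1, ..., x_n; e_0 = 1 and e_k = 0 for k > n."""
    return sum((prod(c) for c in combinations(x[:n], k)), Fraction(0))

# Equation (3.1): e_k(X_n) = e_k(X_{n-1}) + x_n e_{k-1}(X_{n-1})
for n in range(1, 6):
    for k in range(1, n + 1):
        assert e(k, n) == e(k, n - 1) + x[n - 1] * e(k - 1, n - 1)
print("recurrence (3.1) verified for n up to 5")
```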
for n ≥ 1. Our next two identities are similar to this, but they give
us expressions for pk which are similar to the sum on the left side of
equation (3.3).
and
(3.5) p_k = Σ_{j=1}^{k} (−1)^{k+j} j e_{k−j} h_j.
Proof. To prove equation (3.4), first note that when we expand the
products on the right in terms of x1 , x2 , . . ., the resulting terms corre-
spond to a choice of one term from ej , followed by a choice of one of
the variables xs in that term, followed by a choice of one term from
3.1. Symmetric Function Identities 55
[Figure: the marked, filled pair of tiles (1 3 4* 7 ∣ 2 2 4) and its image (3 4* 7 ∣ 1 2 2 4) under the involution f.]
correspond with terms in the right side of (3.4) which are negatives
of one another. In short, f is a sign-reversing involution.
Because f is a sign-reversing involution, only terms on the right
side of (3.4) corresponding with marked, filled pairs T of tiles with
f (T ) = T remain after cancellation. From our definition of f , we see
these are exactly the terms in pk .
The proof of equation (3.5) is Problems 3.3 and 3.4. □
and
(3.7) k e_k = Σ_{j=1}^{k} (−1)^{j−1} e_{k−j} p_j.
Proof. To prove equation (3.6), first note that the terms in khk are
indexed by fillings of a 1 × k tile with positive integers in weakly
increasing order, in which exactly one entry has been marked. For
example, the filling in Figure 3.3 corresponds to the fourth copy of
x_1 x_3^4 x_4 x_6^2 x_7 in h_9.
On the other hand, the terms on the right side of (3.6) are indexed
by pairs of tiles, one 1 × (k − j) and one 1 × j, in which the first tile is
filled with positive integers in weakly increasing order, and the second
tile is filled with j copies of one positive integer. For example, the
filling in Figure 3.4 corresponds to the product of x_1 x_3^2 x_4 x_6^2 x_7 and x_3^2
in the term with j = 2 on the right side of (3.6).
[Figure 3.3: the 1 × 9 tile filled 1 3 3 3* 3 4 6 6 7. Figure 3.4: the pair of tiles filled 1 3 3 4 6 6 7 and 3 3.]
We can now combine equations (3.8) and (3.9) with the identities
in the previous section to get a variety of binomial coefficient identi-
ties, essentially for free. As we will see, some of these identities will
be equivalent to familiar facts.
Proof. This is similar to the proof of equation (3.10), using (3.5) and the fact that p_k(1, …, 1) = n, where there are n ones. □
The identities we have seen so far may not look familiar, but our
next one is much more common; some people call it the “hockey stick
identity” because the entries of Pascal’s triangle that it involves are
arranged in the shape of a hockey stick.
Proposition 3.7. For all n, k ≥ 1, we have
(3.12) \binom{n+k−1}{k−1} = Σ_{j=0}^{k−1} \binom{n+j−1}{j}.
Proof. In (3.6) set x_r = 1 for r ≤ n and x_r = 0 for r > n and then use (3.9) and the fact that p_j(1, …, 1) = n (with n ones) to find
k \binom{n+k−1}{k} = Σ_{j=1}^{k} n \binom{n+k−j−1}{k−j}.
Our last identities are two forms of the binomial theorem, one of
the first deep facts we learn about the binomial coefficients.
Proposition 3.8. For all n ≥ 0, we have
(3.13) (1 + t)^n = Σ_{k=0}^{n} \binom{n}{k} t^k.
1
1 1
2 3 1
6 11 6 1
24 50 35 10 1
Figure 3.5. The top of the Stirling triangle of the first kind
1
1 1
1 3 1
1 7 6 1
1 15 25 10 1
Figure 3.6. The top of the Stirling triangle of the second kind
Proposition 3.14. For all n ≥ 1, we have {n \atop 1} = {n \atop n} = 1, and for all n ≥ 2 and all k with 2 ≤ k ≤ n−1, we have
(3.16) {n \atop k} = {n−1 \atop k−1} + k {n−1 \atop k}.
Proof. This is Problem 3.15. □
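The recurrence (3.16) generates the triangle in Figure 3.6, and we can cross-check it against the inclusion-exclusion count of partitions of [n] into k blocks (the inclusion-exclusion formula is standard but not derived in the text; function names are ours):

```python
from math import comb, factorial

def stirling2_rec(n, k):
    """Stirling number of the second kind via the recurrence (3.16)."""
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0 or k > n:
        return 0
    return stirling2_rec(n - 1, k - 1) + k * stirling2_rec(n - 1, k)

def stirling2_ie(n, k):
    """Independent check: inclusion-exclusion count of set partitions
    of [n] into exactly k blocks."""
    return sum((-1) ** i * comb(k, i) * (k - i) ** n for i in range(k + 1)) // factorial(k)

for n in range(1, 8):
    row = [stirling2_rec(n, k) for k in range(1, n + 1)]
    assert row == [stirling2_ie(n, k) for k in range(1, n + 1)]
print([stirling2_rec(5, k) for k in range(1, 6)])   # [1, 15, 25, 10, 1], as in Figure 3.6
```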
Now suppose n ≥ 2 and the result holds for n − 1 and all k with
0 ≤ k ≤ n. Then for any k with 1 ≤ k ≤ n, we can use (3.1), induction
on n, and (3.15) to find
Now reindex the sum on the left with k = n − j, replace t with 1/t,
and multiply both sides by tn to obtain (3.21). □
Solution. The inversions are the pairs of positions of the entries which
are in decreasing order. For π = 7192686 these pairs of positions are
(1, 2), (1, 4), (1, 5), (1, 7), (3, 4), (3, 5), (3, 6), (3, 7), and (6, 7), so
inv(π) = 9. □
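Counting inversions is a two-line computation; a minimal sketch (function name ours) confirming inv(π) = 9 for the word above:

```python
def inv(word):
    """Number of pairs of positions (i, j) with i < j whose entries decrease."""
    return sum(1 for i in range(len(word)) for j in range(i + 1, len(word))
               if word[i] > word[j])

assert inv([7, 1, 9, 2, 6, 8, 6]) == 9   # the nine pairs listed above
assert inv([1, 2, 3]) == 0               # an increasing word has no inversions
```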
1
1   1
1   1+q   1
1   1+q+q^2   1+q+q^2   1
1   1+q+q^2+q^3   1+q+2q^2+q^3+q^4   1+q+q^2+q^3   1
\binom{n}{k}_q − \binom{n−1}{k}_q = q^{n−k} \binom{n−1}{k−1}_q.
We can prove this by splitting Bn,k into those sequences which begin
with 0 and those which begin with 1.
1
0   1
0   q   1
0   q^2   q+q^2   1
0   q^3   q^2+q^3+q^4   q+q^2+q^3   1
1
q(1)   1
q^2(1)   q(1+q)   1
q^3(1)   q^2(1+q+q^2)   q(1+q+q^2)   1
Proof. Since B_{n,0} = {0^n} and B_{n,n} = {1^n}, and 0^n and 1^n have no inversions, we have \binom{n}{0}_q = 1 and \binom{n}{n}_q = 1.
Now suppose n ≥ 1 and 1 ≤ k ≤ n−1. Let B⁰_{n,k} be the set of π ∈ B_{n,k} whose leftmost entry is 0, and let B¹_{n,k} be the set of π ∈ B_{n,k} whose leftmost entry is 1. Since B_{n,k} is the disjoint union of B⁰_{n,k} and B¹_{n,k}, we now see that equation (3.23) holds. □
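The definition of the q-binomial coefficient as a sum of q^{inv(π)} over B_{n,k} can be implemented directly, and the resulting coefficient lists satisfy the q-Pascal relation in the form displayed above. A sketch (all names ours; polynomials in q are stored as coefficient lists):

```python
from itertools import combinations

def qbinom(n, k):
    """Coefficient list of the q-binomial coefficient, computed from B_{n,k}:
    sequences of k ones and n-k zeros, weighted by q^inv."""
    coeffs = [0] * (k * (n - k) + 1)
    for ones in combinations(range(n), k):
        seq = [1 if i in ones else 0 for i in range(n)]
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j])
        coeffs[inv] += 1
    return coeffs

def add(a, b):
    """Add two coefficient lists, padding the shorter one with zeros."""
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] += c
    for i, c in enumerate(b):
        out[i] += c
    return out

def shift(a, s):
    """Multiply a coefficient list by q^s."""
    return [0] * s + a

assert qbinom(4, 2) == [1, 1, 2, 1, 1]   # 1 + q + 2q^2 + q^3 + q^4, as in the triangle

# q-Pascal: C(n,k)_q = C(n-1,k)_q + q^{n-k} C(n-1,k-1)_q
for n in range(2, 7):
    for k in range(1, n):
        assert qbinom(n, k) == add(qbinom(n - 1, k), shift(qbinom(n - 1, k - 1), n - k))
print("q-Pascal relation verified for n up to 6")
```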
Now that we have the q-Pascal relation for the q-binomial coefficients, we can compare it with equations (3.1) and (3.2) to try to find a specialization of x_1, …, x_n which will give us the q-binomial coefficients. Equation (3.1) turns out to be more complicated than we would like, but if we make the guess (in analogy with (3.9)) that we will have h_k(X_n) = \binom{n+k−1}{k}_q, then (3.2) tells us
(3.27) k \binom{n+k−1}{k}_q = Σ_{j=1}^{k} \binom{n+k−j−1}{k−j}_q [nj]_q / [j]_q.
Proof. This is similar to the proof of (3.28), using (2.11) and (3.25).
□
3.5. Problems
3.1. Prove equation (3.2). That is, prove that for all k ≥ 0 and all
n ≥ 1, we have
3.2. Find the images of each of the pairs of tiles in Figure 3.10 under
the sign-reversing involution we used to prove (3.4).
[Figure 3.10: the marked pairs of tiles (2 3* 5 ∣ 1 1 3 4 6), (2 3* 5 ∣ 2 2 3 4 5), and (2* 3 5 ∣ 3 3 3 3 3).]
3.6. Notes
Some authors prefer to use s(n, k) to denote the signed Stirling num-
bers of the first kind, c(n, k) to denote ∣s(n, k)∣, and S(n, k) to denote
the Stirling number of the second kind. For our purposes, Knuth’s no-
tation [Knu92], which is also used by Benjamin and Quinn [BQ03],
Chapter 4
Schur Functions
and
(4.2) eλ = ∑ xT .
T ∈StrictRow(λ,P)
The terms in the expansion of the product on the right are the products Π_{j=1}^{l(λ)} t_j, where t_j is a term in e_{λ_j}(X_n). For each such product we
can construct a filling of the Ferrers diagram of λ of the given type by
placing, for 1 ≤ j ≤ l(λ), the subscripts of the variables which appear
in tj in increasing order in the jth row of the diagram. We can also
invert this construction: if we have a filling of the given type, then for
each j with 1 ≤ j ≤ l(λ) we can reconstruct tj as the product ∏k xk ,
which is over all k which appear in the jth row of the filling. There-
fore, we have a bijection between terms in eλ (Xn ) and our fillings of
the Ferrers diagram of λ. Moreover, by construction the image of a
filling T under this map is xT , and equation (4.1) follows.
The proof of (4.2) is similar to the proof of (4.1). □
4.1. Schur Functions and Semistandard Tableaux
[Figure: the two fillings of the Ferrers diagram of (3^2, 1) associated with x_1^2 x_2 x_4^3 x_6: each has top entry 4, and the rows 1 2 4 and 1 4 6 appear in either order.]
Example 4.2. Find all fillings of the Ferrers diagram of (3^2, 1) associated with the term x_1^2 x_2 x_4^3 x_6 in e_{331}.
and
(4.4) hλ = ∑ xT .
T ∈WeakRow(λ,P)
move one of these requirements from the rows to the columns, then we
can define a new type of filling of a Ferrers diagram which interpolates
between these two old types of fillings.
Definition 4.4. For any partition λ, a semistandard tableau of shape
λ is a filling of the Ferrers diagram of λ in which the entries in the
columns are strictly increasing from bottom to top and the entries
in the rows are weakly increasing from left to right. If T is a se-
mistandard tableau, then we write sh(T ) to denote the shape of T .
We write SST(λ; n) to denote the set of all semistandard tableaux of
shape λ with entries in [n], and we write SST(λ) to denote the set
of all semistandard tableaux of shape λ with entries in P.
When n and ∣λ∣ are small, we can list all of the semistandard
tableaux in SST(λ; n) by hand.
Example 4.5. Find all semistandard tableaux of shape (22 , 1) with
entries in [3].
[Figure: the three semistandard tableaux of shape (2^2, 1) with entries in [3]: each has top entry 3, with bottom and middle rows (1 1, 2 2), (1 1, 2 3), and (1 2, 2 3).]
(4.5) sλ (Xn ) = ∑ xT .
T ∈SST(λ;n)
sλ = ∑ xT .
T ∈SST(λ)
Solution. The polynomial s1k (Xn ) is a sum over all fillings of a single
column with distinct integers in [n], in which the entries are strictly
increasing from bottom to top. If n < k, then there are no such
fillings, so our sum has no terms, and s1k (Xn ) = 0 in this case. If
n ≥ k, then there is exactly one such filling for each subset of [n]
of size k, and the monomial corresponding to a subset J is ∏j∈J xj .
Therefore, s1k (Xn ) = ek (Xn ) in this case. Similarly, s1k = ek . □
[Figure: the two semistandard tableaux of shape (2, 1) with entries in [2], followed by the eight semistandard tableaux of shape (2, 1) with entries in [3].]
of any such tableau must contain three distinct integers, since its
entries are strictly increasing from bottom to top. But we have only
two integers available, so there are no semistandard tableaux of the
sort we want. This means our sum has no terms, so by convention
s221 (X2 ) = 0.
To compute s221(X3), we notice it is a sum over the set of semistandard tableaux in Example 4.5. Therefore, we have $s_{221}(X_3) = x_1^2 x_2^2 x_3 + x_1^2 x_2 x_3^2 + x_1 x_2^2 x_3^2$, which is $m_{221}(X_3)$.
To compute s221(X4), we write down (in Figure 4.6) all 20 semistandard tableaux of shape (2², 1) with entries in [4]. Collecting
terms, we find
s221 (X4 ) = m221 (X4 ) + 2m2111 (X4 ).
[Figure 4.6: the 20 semistandard tableaux of shape (2², 1) with entries in [4]]
Our proof that the Schur polynomials and Schur functions are
symmetric uses a collection of combinatorial maps showing these poly-
nomials and formal power series are invariant under certain permuta-
tions. We combine these with an algebraic argument that it is enough
to check the result for only these permutations. We start with the
maps, which are called the Bender–Knuth involutions.
For each j ≥ 1, the Bender–Knuth involution βj is a function from the set of semistandard tableaux with shape λ and content $\{\mu_l\}_{l=1}^{\infty}$ to the set of semistandard tableaux with shape λ and content µ1, . . . , µj−1, µj+1, µj, µj+2, . . . . To describe βj, suppose T is a semistandard tableau with shape λ and content $\{\mu_l\}_{l=1}^{\infty}$, and consider
the columns of T . We only care about j’s and j + 1’s, so for us there
are only four types of columns: those containing both a j and a j + 1,
those containing a j but not a j + 1, those containing a j + 1 but not a
j, and those containing neither a j nor a j +1. We call a j (resp., j +1)
in T paired whenever there is a j + 1 (resp., j) in its column, and free
otherwise.
Now consider a row R of T . Like every row of T , the row R
has a number (possibly 0) of j’s, followed immediately by a number
(also possibly 0) of j + 1’s. Immediately above the j’s in R are some
(possibly no) j + 1’s, and then some (also possibly no) larger entries.
Therefore, in R we have a number of paired j’s, followed by a number
of free j’s. Similarly, immediately below the j + 1’s in R, we have
a number (possibly 0) of entries less than j, followed by a number
(possibly 0) of j’s. Therefore, in T we have a number of free j + 1’s,
followed by a number of paired j + 1’s.
To construct βj (T ), we start with the bottom row of T and move
to the top, doing the same thing to every row. Namely, if a row has
a free j’s followed by b free j + 1’s, then we replace that sequence of
free j’s and free j + 1’s with a sequence of b free j’s followed by a free
j + 1’s.
Figure 4.7. The tableau T for Example 4.12; its rows, from bottom to top, are
1 1 1 1 1 1 1 1 1 2 2 2 2 3 4 6
2 2 2 2 2 2 2 2 3 3 3 4 4 4 5 7
3 3 3 3 3 3 4 4 4 4
4 4 5 5 7 7 7
5
Example 4.12. Find the image of the tableau T in Figure 4.7 under
the Bender–Knuth involution β3 .
Solution. In Figure 4.8 we have written the free 3’s and 4’s in T in
bold, and larger than the other entries. In the bottom row we have
Figure 4.8. The tableau T for Example 4.12 (rows as in Figure 4.7) with the free 3's and 4's in bold
no free 3’s and one free 4, so we replace the free 4 with a (free) 3.
In the second row from the bottom we have one free 3 and two free
4’s, so we replace them with two (free) 3’s and one (free) 4. And in
the third row from the bottom we have four free 3’s and two free 4’s,
which we replace with two (free) 3’s and four (free) 4’s. When we’re
done, we have the tableau in Figure 4.9. □
Figure 4.9. The tableau β₃(T); its rows, from bottom to top, are
1 1 1 1 1 1 1 1 1 2 2 2 2 3 3 6
2 2 2 2 2 2 2 2 3 3 3 3 4 4 5 7
3 3 3 3 4 4 4 4 4 4
4 4 5 5 7 7 7
5
Proof. First note that applying the action of βj to a single row does
not change which j’s and j +1’s are free in any row. (As an aside, this
means we could apply the action of βj to the rows in any order, and
we would get the same tableau at the end.) Second, because βj only
changes j’s (resp., j + 1’s) with no j + 1 (resp., j) in their columns, if
a row in T has a free j’s and b free j + 1’s, then in βj (T ) it has b free
j’s and a free j + 1’s. Therefore, each row returns to its original state
if we apply βj again, so βj(βj(T)) = T. In particular, βj = βj⁻¹, so βj is a bijection. □
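The description of βj translates directly into code. Here is a hedged sketch (the representation of a tableau as a list of rows from bottom to top is our choice), together with a check of the involution property on a small example.

```python
def bender_knuth(T, j):
    """Apply beta_j to T, given as a list of rows from bottom to top."""
    T = [row[:] for row in T]
    def column(c):
        return [row[c] for row in T if c < len(row)]
    for row in T:
        # A j (resp. j+1) is free when its column contains no j+1 (resp. j).
        free = [c for c, v in enumerate(row)
                if (v == j and j + 1 not in column(c))
                or (v == j + 1 and j not in column(c))]
        a = sum(1 for c in free if row[c] == j)   # number of free j's
        b = len(free) - a                         # number of free (j+1)'s
        for i, c in enumerate(free):
            row[c] = j if i < b else j + 1        # b j's, then a (j+1)'s
    return T

T = [[1, 1, 2, 2], [2, 3]]                        # rows bottom to top
assert bender_knuth(T, 2) == [[1, 1, 3, 3], [2, 3]]
assert bender_knuth(bender_knuth(T, 2), 2) == T   # beta_j is an involution
```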
Proposition 4.17. For any partitions λ and µ with ∣λ∣ = ∣µ∣ and any
n ≥ 1, let Kλ,µ,n be the number of semistandard tableaux with shape
λ and content µ, with entries in [n]. If n ≥ ∣λ∣, then Kλ,µ,n = Kλ,µ .
In particular, if n ≥ ∣λ∣, then Kλ,µ,n is independent of n.
For k = 1 the matrix of Kostka numbers is just [1], while for k = 2, with rows indexed by λ and columns by µ in the order (1²), (2), it is

       (1²)  (2)
(1²)    1    0
(2)     1    1
Proposition 4.18. For all k ≥ 0, the following hold for all partitions
λ, µ ⊢ k.
has exactly one term for each permutation. This, in combination with
the signs associated with each term, allows us to write this alternating
polynomial as a determinant.
Proposition 4.24. If µ is a sequence with µ1 > µ2 > ⋅⋅⋅ > µn ≥ 0 and
(4.7) $a_\mu(X_n) = \sum_{\pi \in S_n} \mathrm{sgn}(\pi)\, x_1^{\mu_{\pi(1)}} x_2^{\mu_{\pi(2)}} \cdots x_n^{\mu_{\pi(n)}},$
Solution. We have
$\frac{a_{\lambda+\delta_n}(X_n)}{a_{\delta_n}(X_n)} = \frac{\det\begin{pmatrix} x_1^4 & x_2^4 & x_3^4 \\ x_1 & x_2 & x_3 \\ 1 & 1 & 1 \end{pmatrix}}{(x_1 - x_2)(x_1 - x_3)(x_2 - x_3)} = x_1^2 + x_2^2 + x_3^2 + x_1 x_2 + x_1 x_3 + x_2 x_3,$
so our ratio is equal to $m_2(X_3) + m_{11}(X_3) = e_{11}(X_3) - e_2(X_3) = h_2(X_3) = \tfrac{1}{2} p_2(X_3) + \tfrac{1}{2} p_{11}(X_3) = s_2(X_3)$. □
Examples 4.26 and 4.27 suggest we can express $\frac{a_{\lambda+\delta_n}(X_n)}{a_{\delta_n}(X_n)}$ most simply in terms of Schur polynomials or complete homogeneous symmetric polynomials. Computing more examples sheds more light on the situation: if λ = (2, 1), then we find
$\frac{a_{\lambda+\delta_n}(X_n)}{a_{\delta_n}(X_n)} = s_{21}(X_n) = h_{21}(X_n) - h_3(X_n);$
if λ = (22 ), then we find
$\frac{a_{\lambda+\delta_n}(X_n)}{a_{\delta_n}(X_n)} = s_{22}(X_n) = h_{22}(X_n) - h_{31}(X_n);$
and if λ = (2, 12 ), then we find
$\frac{a_{\lambda+\delta_n}(X_n)}{a_{\delta_n}(X_n)} = s_{211}(X_n) = h_4(X_n) - h_{31}(X_n) - h_{22}(X_n) + h_{211}(X_n).$
These suggest the following result.
(4.12) $s_\lambda(X_n) = \frac{a_{\lambda+\delta_n}(X_n)}{a_{\delta_n}(X_n)}.$
Example 4.30. Find schurwtλ (π) for π = 561432 and λ = (4, 32 , 1).
left to right in each row, starting with the top row and ending with
the bottom row.
[Figure: four partial fillings; the solution below refers to them as the upper left, upper right, lower left, and lower right fillings]
Solution. In the upper left partial filling, the rightmost entry in the
bottom row must be greater than or equal to 5 = π(2) with re-
spect to <π , so it cannot be π(1). Therefore, the reading word for
the filling will not start with π(1), so it will not be a Littlewood–
Richardson π-word, and this partial filling cannot be completed to
form a Littlewood–Richardson π-tableau.
In order to complete the upper right filling to form a Littlewood–
Richardson π-tableau, we must put a 3 = π(1) in the rightmost box
in the bottom row. And since the entries in that row must be weakly
increasing with respect to <π from left to right, all of the entries in
this row must be 3. Now the next entry in the reading word for the
filling will be 1 = π(3), so this reading word cannot be a Littlewood–
Richardson π-word, and we cannot complete the filling to form a
Littlewood–Richardson π-tableau.
As with the upper right filling, in order to complete the lower left
filling to form a Littlewood–Richardson π-tableau, we must put a 3
in every entry in the bottom row. And for a similar reason, we must
put a 5 = π(2) in every entry in the second row from the bottom. And
again for a similar reason, we must put a 1 = π(3) as the rightmost
entry in the third row from the bottom. But 2 >π 1, so we cannot
complete the filling to form a Littlewood–Richardson π-tableau.
Our work in Examples 4.38 and 4.39 suggests several useful facts
about π-climbers and Littlewood–Richardson π-tableaux.
Proposition 4.40. Suppose n ≥ 1, π ∈ Sn, and λ is a partition with l(λ) ≤ n.
(i) There is a unique Littlewood–Richardson π-tableau of shape
λ, which is the filling in which every entry in row j is π(j)
for all j.
(ii) If T is a semistandard π-tableau of shape λ but T is not a
Littlewood–Richardson π-tableau, then its π-climber is the
rightmost entry in its row, and it is the leftmost entry in
word(T ) which is in row j but is not equal to π(j).
Proof. Suppose T is the filling in which every box in the jth row
from the bottom contains π(j), for all j. By construction, T is a
semistandard π-tableau. And because T has partition shape, word(T )
is a Littlewood–Richardson π-word and T is a Littlewood–Richardson
π-tableau.
Now suppose T is a semistandard π-tableau and there is a j such
that at least one box in the jth row from the bottom of T does not
contain π(j). Consider the rightmost entry k = π(l) of word(T ) with
this property. We claim this entry is the rightmost entry in its row
in T , and it is the π-climber in T .
To see our entry is the rightmost entry in its row, suppose it is
in the jth row from the bottom of T . Since T is column-strict with
respect to <π , this entry must be greater than π(j) with respect to <π .
But if it is not the rightmost entry in its row, then by construction
the rightmost entry in its row is π(j) <π k, which contradicts the fact
that T is row-nondecreasing with respect to <π .
To see our entry is the π-climber in T , note that the entries to
its right in the reading word for T , reading from right to left, are λ1
$= x_l\, x_k^{\,n-j+1} x_l^{\,n-j} \prod_{\substack{m=1 \\ m \neq j-1,\, j}}^{n} x_{\pi(m)}^{\,n-m} = x_l\, \mathrm{schurwt}_{\varnothing}(\kappa(\pi)),$
which is what we wanted to prove. □
[Figure: a row R and the row above it, together with their images: the a free k's followed by b free ℓ's become b free k's followed by a free ℓ's]
[Figure: the possible columns C and their images κ(C), with the larger and smaller entries of each column indicated]
where k(T) and l(T) are the number of k's and l's in T, respectively. Furthermore, in each row other than the row of the π-climber, T and κ(T) have the same number of k's and the same number of l's. And in the row of the π-climber, one k in T (the π-climber itself) becomes an l in κ(T), so k(T) − 1 = k(κ(T)) and l(T) + 1 = l(κ(T)). This means
$\frac{x^T}{x^{\kappa(T)}} = \frac{x_k}{x_l},$
which is what we wanted to prove. □
Lemmas 4.41(i) and (ii) and 4.43(i) and (ii) tell us that if T is not a Littlewood–Richardson π-tableau, k = π(j) is the π-climber in T, and l = π(j − 1), then
$\mathrm{sgn}(\pi)\, \mathrm{schurwt}_{\varnothing}(\pi)\, x^T = -\mathrm{sgn}(\kappa(\pi))\, \mathrm{schurwt}_{\varnothing}(\kappa(\pi))\, \frac{x_l}{x_k} \cdot \frac{x_k}{x_l}\, x^{\kappa(T)} = -\mathrm{sgn}(\kappa(\pi))\, \mathrm{schurwt}_{\varnothing}(\kappa(\pi))\, x^{\kappa(T)}.$
Since κ is an involution by Lemma 4.44, our last computation tells us any term on the right side of (4.18) which is not indexed by a Littlewood–Richardson π-tableau cancels with another such term. Therefore, (4.18) is equivalent to
(4.19) $s_\lambda(X_n)\, a_{\delta_n}(X_n) = \sum_{\pi \in S_n} \sum_{T} \mathrm{sgn}(\pi)\, \mathrm{schurwt}_{\varnothing}(\pi)\, x^T,$
4.3. Problems
4.1. Find the image of the semistandard tableau T in Figure 4.22
under the Bender–Knuth involution β1 .
4.2. Find the image of the semistandard tableau T in Figure 4.22
under the Bender–Knuth involution β3 .
4.3. Find the image of the semistandard tableau T in Figure 4.22
under the Bender–Knuth involution β4 .
Figure 4.22. The tableau T for Problems 4.1–4.3; its rows, from bottom to top, are
1 1 2 2 2 2 3 3 3 3 3 4
2 2 3 3 4 4 4 4 5 5 5
3 4 4 5 5 5 6
5 5 6 7
and
(4.20) $s_{n-k,1^k} = \sum_{j=0}^{k} (-1)^{j+k}\, e_j h_{n-j}.$
$\sum_{T} \mathrm{wt}(T),$
$\sum_{T} \mathrm{wt}(T)$
4.4. Notes
One of the central problems of current interest in the study of sym-
metric functions is to explain how to write a given symmetric function
as a linear combination of Schur functions. This is of particular in-
terest if the coefficients involved are nonnegative integers (when our
Interlude: A Rogues’
Gallery of Symmetric
Functions
$s_{\lambda/\mu}(X_n) = \sum_{T \in \mathrm{SST}(\lambda/\mu;\, n)} x^T$
and
$s_{\lambda/\mu} = \sum_{T \in \mathrm{SST}(\lambda/\mu)} x^T.$
Proposition 5.7. For any partitions µ ⊆ λ and ν with ∣ν∣ = ∣λ∣ − ∣µ∣
and any n ≥ 1, let Kλ/µ,ν,n be the number of semistandard tableaux
with shape λ/µ and content ν, with entries in [n]. If n ≥ ∣λ∣ − ∣µ∣,
then Kλ/µ,ν,n = Kλ/µ,ν . In particular, if n ≥ ∣λ∣ − ∣µ∣, then Kλ/µ,ν,n is
independent of n.
Now that we know that the skew Schur polynomials and the skew
Schur functions are symmetric and that we can investigate algebraic
relationships involving skew Schur functions by looking at analogous
relationships involving skew Schur polynomials, we look at some ex-
amples to see what else we can discover about the skew Schur sym-
metric functions.
Example 5.8. Find all of the skew Schur polynomials in Λ2 (X2 ).
Solution. At first this looks daunting: there are many pairs of parti-
tions µ ⊆ λ with ∣λ∣ − ∣µ∣ = 2. But many of these pairs give us exactly
the same monomials, and therefore exactly the same skew Schur polynomials. For example, (3, 1)/(1²), (7)/(5), and (7, 3², 1)/(5, 3², 1) all lead to the original Schur polynomial $s_2(X_2) = x_1^2 + x_1 x_2 + x_2^2$.
Similarly, (32 , 1)/(22 , 1), (92 )/(82 ), and (42 , 2, 13 )/(32 , 2, 13 ) all lead
to the other original Schur polynomial s11 (X2 ) = x1 x2 . And fi-
nally, (4, 3, 1)/(3, 2, 1), (8, 3)/(7, 2), and (52 , 32 , 22 , 1)/(5, 4, 32 , 2, 12 )
all look somewhat different, but they give us exactly the same mono-
mials, and their associated skew Schur polynomials are all equal to
$s_{21/1}(X_2) = x_1^2 + 2x_1 x_2 + x_2^2 = s_1(X_2)\, s_1(X_2) = s_2(X_2) + s_{11}(X_2)$. □
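The (2, 1)/(1) computation is tiny enough to enumerate directly: in this skew shape the two boxes share no row or column, so every filling is allowed. A plain-Python sketch (our own naming):

```python
from itertools import product
from collections import Counter

# The skew shape (2,1)/(1) has one box in the bottom row (second column) and
# one box in the top row (first column); no semistandard condition links
# them, so all four fillings with entries in [2] occur, giving
# x1^2 + 2 x1 x2 + x2^2.
fillings = list(product((1, 2), repeat=2))
mono = Counter(tuple(sorted(f)) for f in fillings)
assert mono == Counter({(1, 2): 2, (1, 1): 1, (2, 2): 1})
```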
The fact that $s_{21/1}(X_2) = s_1(X_2)\, s_1(X_2)$, and therefore $s_{21/1} = s_1^2$, suggests a more general result about sλ/µ when λ/µ has more than one piece. To make this more precise, suppose µ ⊆ λ are partitions.
We say two 1 × 1 squares in λ/µ are adjacent whenever they share an
edge. We also say a set of 1 × 1 squares in λ/µ is connected when-
ever any two squares a and b in the set are connected by a path
Now that we are warmed up, we can write down the 75 semistandard skew tableaux of shape (2³)/(1) with entries in [5] and then collect terms to find
Proof. Set n = ∣ν∣ = ∣λ∣ − ∣µ∣; we will show sλ/µ(Xn) = sν(Xn).
For any sequence α = α1, . . . , αn of nonnegative integers with α1 + ⋅⋅⋅ + αn = n, let Aλ/µ,α be the coefficient of $x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ in sλ/µ(Xn).
Proposition 5.11 raises a question: Are there any other skew Schur
functions which are also Schur functions (of partition shapes)? We
already have enough tools to show that under certain conditions the
answer is no.
Proposition 5.12. Suppose µ ⊆ λ and ν are partitions with ∣ν∣ =
∣λ∣ − ∣µ∣, and let ν r be the skew shape we obtain by rotating ν by 180
degrees. If λ′j − µ′j ≥ λ′j−1 − µ′j−1 for all j ≥ 2 and sλ/µ = sν , then
λ/µ = ν or λ/µ = ν r .
Figure 5.4. The Ferrers diagrams for (5, 42 , 3, 1)/(32 , 2) and (5, 4, 22 , 1)/(3, 12 )
where the sum is over all set-valued tableaux of shape λ with entries in
P. We call Gλ (Xn ) the stable Grothendieck polynomial in x1 , . . . , xn
indexed by λ, and we call Gλ the stable Grothendieck function indexed
by λ.
Solution. In Figure 5.7 we have T with the free 1’s and 2’s in large
bold. There are two free 1’s and four free 2’s in the first row, so we
Figure 5.7. The set-valued tableau T in Example 5.19 with the free 1's and 2's in bold; its rows, from bottom to top, are 1 1 1 12 2 2 2 34 | 2 23 345 5 5 | 345 5 6 6
Figure 5.8. The tableau β₁(T); its rows, from bottom to top, are 1 1 1 1 1 12 2 34 | 2 23 345 5 5 | 345 5 6 6
Figure 5.9. The tableau β₂(T)
replace them with four free 1’s and two free 2’s, moving the set 12
two boxes to the right. We find β1 (T ) is the tableau in Figure 5.8.
Working in a similar way, we find β2 (T ) is the tableau in Figure
5.9, β3 (T ) is the tableau in Figure 5.10, and β4 (T ) is the tableau in
Figure 5.11. □
Figure 5.10. The tableau β₃(T); its rows, from bottom to top, are 1 1 1 12 2 2 2 34 | 2 234 45 5 5 | 345 5 6 6
Figure 5.11. The tableau β₄(T); its rows, from bottom to top, are 1 1 1 12 2 2 2 35 | 2 23 34 4 45 | 34 45 6 6
and
$G_{1^2} = s_{1^2} - 2s_{1^3} + 3s_{1^4} - \cdots,$
so
$G_1 + G_{1^2} = s_1 - s_{1^3} + 2s_{1^4} - \cdots.$
We can now use our solution to Example 5.18 to eliminate $s_{1^3}, s_{1^4}, \ldots$ in turn, to find
$s_1 = \sum_{k=1}^{\infty} G_{1^k}.$
Solution. If µ1 < 3, then we have a box in the first row of λ/µ, but
no positive integer is small enough for such a box. So we must have
µ1 = 3. In Figure 5.12 we have all of the elegant tableaux of shape
(3, 2, 1)/µ. □
[Figure 5.12: the elegant tableaux of shape (3, 2, 1)/µ]
Example 5.24. Find $f_{1^k}^{1^n}$, where k ≥ n.
Solution. In any elegant tableau of shape (1ᵏ)/(1ⁿ), the entries are a subset of [k − 1] of size k − n. In any subset of [k − 1] of size k − n the smallest entry is at most (k − 1) − (k − n) + 1 = n, the second smallest entry is at most (k − 1) − (k − n) + 2 = n + 1, and in general the jth smallest entry is at most n + j − 1. On the other hand, in an elegant tableau of shape (1ᵏ)/(1ⁿ), the entry in the jth box from the bottom (of the skew shape) can be at most n + j − 1. So every subset of [k − 1] of size k − n corresponds to an elegant tableau of shape (1ᵏ)/(1ⁿ), and
$f_{1^k}^{1^n} = \binom{k-1}{k-n} = \binom{k-1}{n-1}.$ □
$s_\mu = \sum_{\lambda \supseteq \mu} f_\lambda^\mu\, G_\lambda.$
Proving Proposition 5.25 would take us too far afield, but one of
its consequences is worth pointing out.
[Figures: reverse plane partitions T and β₁(T), and the schematic action of the generalized Bender–Knuth involution on the j's and (j + 1)'s in a pair of columns, with ↦ indicating the image]
Solution. Since the Ferrers diagram for (1ⁿ) has just one column, every term in $g_{1^n}$ has the form $x_{j_1} x_{j_2} \cdots x_{j_k}$ for $j_1 < j_2 < \cdots < j_k$. Therefore, $g_{1^n}$ is a linear combination of Schur functions $s_{1^k}$. The coefficient of $s_{1^k}$ in this linear combination is the coefficient of $x_1 x_2 \cdots x_k$, which is the number of reverse plane partitions of shape 1ⁿ with at least one j for 1 ≤ j ≤ k, and no other entries. This is the number of solutions to $a_1 + \cdots + a_k = n$ with $a_j \ge 1$ for all j, where we can interpret $a_j$ as the number of times j appears in our plane partition. And this is the number of sequences of k − 1 bars and n − k stars, or $\binom{n-1}{k-1}$. Therefore,
$g_{1^n} = \sum_{k=1}^{n} \binom{n-1}{k-1} s_{1^k}.$ □
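The stars-and-bars count in the solution is easy to confirm by listing compositions (a plain-Python sketch; the helper name is ours).

```python
from math import comb
from itertools import product

# a_j = number of times j appears in the single-column reverse plane
# partition; solutions of a_1 + ... + a_k = n with every a_j >= 1 are
# counted by C(n-1, k-1).
def compositions(n, k):
    return [a for a in product(range(1, n + 1), repeat=k) if sum(a) == n]

for n in range(1, 7):
    for k in range(1, n + 1):
        assert len(compositions(n, k)) == comb(n - 1, k - 1)
```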
We will prove this result at the end of Section 8.1, after we have
developed an important combinatorial tool involving semistandard
tableaux.
[Figure: a path properly colored 1, 2, 1, 2]
Example 5.34. Let P4 be the path with four vertices. Find a proper
coloring of P4 with two colors, a proper coloring with three colors,
and a proper coloring with four colors.
by setting
wtG (c) = ∏ xc(v) .
v∈V
We define the chromatic symmetric function of G, written XG , by
setting
XG = ∑ wtG (c),
c
where the sum on the right is over all proper colorings of G, with any
number of colors.
Example 5.38. Find the chromatic symmetric function for the com-
plete graph Kn in terms of any convenient set of symmetric functions.
Example 5.39. Find the chromatic symmetric function for the graph
Knc with n vertices and no edges in terms of any convenient set of
symmetric functions.
Proof. By definition, the coefficient of $x_1^{\lambda_1} \cdots x_{l(\lambda)}^{\lambda_{l(\lambda)}}$ (and therefore the coefficient of mλ in XG) is the number of proper colorings of G in which λj vertices have color j for all j. From our discussion
above we know we can construct each such proper coloring uniquely
by choosing a stable partition V1 , V2 , . . . of G and assigning the colors
1, 2, . . . , l(λ) to V1 , . . . , Vl(λ) in order. Therefore, the coefficient of mλ
in XG is zλ (G), which is what we wanted to prove. □
Figure 5.20. The six stable set partitions of the path with three vertices with λ = (1³)
Figure 5.21. The stable set partition of the path with three vertices with λ = (2, 1)
[Table: graphs G (drawn in the original) and their chromatic symmetric functions XG, the listed values being 6m111 + m21; 24m1111 + 6m211 + 2m22; 24m1111 + 2m211; 24m1111 + 4m211; and 120m11111 + 36m2111 + 12m221 + 2m311 + m32]
n   XPn
0   1
1   e1
2   2e2
3   3e3 + e21
4   4e4 + 2e31 + 2e22
5   5e5 + 3e41 + 7e32 + e221
6   6e6 + 4e51 + 10e42 + 6e33 + 4e321 + 2e222
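Any row of the table can be spot-checked by summing over proper colorings with a bounded set of colors; this sketch (our own encoding of polynomials as exponent-vector dictionaries) verifies the entry for three vertices, XP₃ = 3e₃ + e₂₁, in three variables.

```python
from itertools import product, combinations

def x_path(n_vertices, n_colors):
    """X_{P_n} truncated to n_colors variables, as {exponent vector: coeff}."""
    poly = {}
    for c in product(range(n_colors), repeat=n_vertices):
        if all(c[i] != c[i + 1] for i in range(n_vertices - 1)):  # proper
            mono = [0] * n_colors
            for color in c:
                mono[color] += 1
            poly[tuple(mono)] = poly.get(tuple(mono), 0) + 1
    return poly

def e(k, n_colors):
    """The elementary symmetric polynomial e_k in n_colors variables."""
    poly = {}
    for S in combinations(range(n_colors), k):
        mono = [0] * n_colors
        for i in S:
            mono[i] = 1
        poly[tuple(mono)] = 1
    return poly

def mul(p, q):
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            key = tuple(x + y for x, y in zip(a, b))
            out[key] = out.get(key, 0) + ca * cb
    return out

def add(p, q, scale=1):
    out = dict(p)
    for key, v in q.items():
        out[key] = out.get(key, 0) + scale * v
    return out

# X_{P_3} = 3 e_3 + e_2 e_1, checked with three colors.
rhs = add(mul(e(2, 3), e(1, 3)), e(3, 3), scale=3)
assert x_path(3, 3) == rhs
```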
color of the leftmost vertex in this coloring be a. Note that the gener-
ating function for the objects we have constructed is (k − 1)ek XPn−k .
To turn these objects into proper colorings of Pn , we consider two
Proof. When we multiply (5.5) by tⁿ and sum the result over all n ≥ 0, we find
$\sum_{n=0}^{\infty} X_{P_n} t^n = \sum_{n=0}^{\infty} e_n t^n + \sum_{n=0}^{\infty} \sum_{k=2}^{n} (k-1)\, e_k t^k\, X_{P_{n-k}} t^{n-k}$
$= \sum_{n=0}^{\infty} e_n t^n + \Big( \sum_{j=2}^{\infty} (j-1)\, e_j t^j \Big) \Big( \sum_{n=0}^{\infty} X_{P_n} t^n \Big).$
5.5. Problems
5.1. Suppose k ≥ 4, λ = (k, 2), and µ = (1). For a given positive
integer n, how many semistandard skew tableaux of shape λ/µ
are there with entries in [n]?
5.2. Show that if partitions µ ≠ λ have µ ⊆ λ, then µ <lex λ and
µ ⊴ λ.
5.3. If λ = (k n ) for positive integers k and n, then how many parti-
tions µ have µ ⊆ λ?
5.4. Suppose µ ⊆ λ are partitions and n is a positive integer. Use
the Bender–Knuth involutions to show that sλ/µ (Xn ) and sλ/µ
are symmetric.
5.5. For each n ≥ 2, how many elegant tableaux are there
of shape λ/µ, where λ = (n, n − 1, n − 2, . . . , 2) and µ =
(n − 2, n − 3, n − 4, . . . , 1)?
5.6. Let T be the reverse plane partition in Figure 5.22. For each
j, compute γj (T ), where γj is the generalized Bender–Knuth
involution described in Section 5.3.
Figure 5.22. A reverse plane partition; its rows, from bottom to top, are 1 2 3 4 5 | 1 2 3 5 6 | 3 3
[Figure: a second reverse plane partition with rows, bottom to top, 1 1 1 3 5 5 5 6 | 1 1 3 3 5 5 5 6 | 1 2 3 4 5 5 6 6]
5.10. For any graph G and any edge e in G, let G∖e be the graph we obtain from G by removing e, and let G/e be the graph we obtain from G by removing e and identifying the two vertices it connects. Show
5.17. Find the chromatic symmetric functions for the graphs in Figure
5.23.
5.18. Find the chromatic symmetric functions for the graphs in Figure
5.24.
5.6. Notes
Propositions 5.11 and 5.13 appear in Stanley’s Enumerative Combi-
natorics, Volume 2 [Sta99, Exercise 7.56(a)], and van Willigenburg
[vW05] uses Littlewood–Richardson tableaux to prove the more gen-
eral version of Proposition 5.12. Billera, Thomas, and van Willi-
genburg [BTvW06] prove a general result explaining the fact that
s(5,42 ,3,1)/(32 ,2) = s(5,4,22 ,1)/(3,12 ) .
The stable Grothendieck polynomials Gλ were first introduced
by Fomin and Kirillov [FK96] in another context, but it was Buch
[Buc02] who first gave a combinatorial formula for Gλ in terms of
set-valued tableaux. The dual stable Grothendieck polynomials were
defined in terms of reverse plane partitions by Lam and Pylyavskyy
[LP07]. Macdonald has gathered a variety of other Schur function
variations [Mac92].
Reverse plane partitions are connected with some rich enumer-
ative combinatorics, as are several other types of plane partitions.
Many books have more information on this, including [Bre99] and
[Sta99, Sec. 7.20-7.22].
Cho and van Willigenburg [CvW16] have found numerous bases
for Λ generated by chromatic symmetric functions. In particular, they
showed that for any family {Hk }k≥1 of graphs in which Hk has exactly
k vertices for each k, the symmetric functions {XHk }k≥1 generate Λ.
Martin, Morin, and Wagner [MMW08] have made some progress on
Stanley’s tree question, but the general problem remains open. In
related work, Orellana and Scott [OS14] have constructed an infinite
family of pairs of unicyclic graphs with the same chromatic sym-
metric function. Finally, several authors have studied variations and
generalizations of Stanley’s chromatic symmetric function, including
Gebhard and Sagan [GS01], Williams [Wil07], Humpert [Hum11],
and Shareshian and Wachs [SW16].
Chapter 6
The Jacobi–Trudi
Identities and an
Involution on Λ
Now that we have several bases for the vector space of symmetric
functions, we would like to know how to express these bases in terms
of one another. We have already made some progress in this direc-
tion: we have combinatorial descriptions of the coefficients we obtain
when we express the elementary symmetric functions, the complete
homogeneous symmetric functions, the power sum symmetric func-
tions, and the Schur functions in terms of the monomial symmetric
functions. However, we do not know how to express (for example)
the power sum symmetric functions in terms of the Schur functions.
In this chapter we tackle two related questions along these lines. In
particular, we show how to use determinants to express the Schur
functions in terms of the complete homogeneous symmetric functions
and then in terms of the elementary symmetric functions. Our results
will lead us to a new symmetry on the space of symmetric functions.
this in some simple cases; for example, the fact that sn = hn follows
immediately from the definitions of sn and hn . To see what happens
more generally, we consider a slightly larger case.
Example 6.1. Express the Schur function s22 as a linear combina-
tion of complete homogeneous symmetric functions. Consider using
a software package like SageMath for your computations.
$s_{21} = \det\begin{pmatrix} h_2 & h_3 \\ h_0 & h_1 \end{pmatrix} = \det\begin{pmatrix} h_2 & h_3 & h_4 \\ h_0 & h_1 & h_2 \\ h_{-2} & h_{-1} & h_0 \end{pmatrix}.$ Moving to longer partitions, we can also compute $s_{211} = \det\begin{pmatrix} h_2 & h_3 & h_4 \\ h_0 & h_1 & h_2 \\ h_{-1} & h_0 & h_1 \end{pmatrix}$ and $s_{333} = \det\begin{pmatrix} h_3 & h_4 & h_5 \\ h_2 & h_3 & h_4 \\ h_1 & h_2 & h_3 \end{pmatrix}.$ Taken together, these computations suggest the following result.
Theorem 6.2 (The First Jacobi–Trudi Identity). For any partition λ and any k ≥ l(λ), we have
(6.1) $s_\lambda = \det\big(h_{\lambda_j + l - j}\big)_{1 \le j,\, l \le k}.$
Here we take hn = 0 for all n < 0.
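Theorem 6.2 can be tested in small cases by comparing coefficient dictionaries; this plain-Python sketch (the polynomial encoding is ours) checks λ = (2, 2) in three variables, where (6.1) reads s₂₂ = h₂h₂ − h₃h₁.

```python
from itertools import product, combinations_with_replacement

N = 3  # number of variables

def h(n):
    """h_n in N variables as {exponent vector: coeff}; h_n = 0 for n < 0."""
    poly = {}
    if n < 0:
        return poly
    for ms in combinations_with_replacement(range(N), n):
        mono = [0] * N
        for i in ms:
            mono[i] += 1
        poly[tuple(mono)] = poly.get(tuple(mono), 0) + 1
    return poly

def mul(p, q):
    out = {}
    for a, ca in p.items():
        for b, cb in q.items():
            key = tuple(x + y for x, y in zip(a, b))
            out[key] = out.get(key, 0) + ca * cb
    return out

def sub(p, q):
    out = dict(p)
    for key, v in q.items():
        out[key] = out.get(key, 0) - v
    return {key: v for key, v in out.items() if v}

# s_22(X_3) straight from the tableau definition: bottom row (b1, b2),
# top row (t1, t2); rows weakly increase, columns strictly increase upward.
s22 = {}
for b1, b2, t1, t2 in product(range(1, N + 1), repeat=4):
    if b1 <= b2 and t1 <= t2 and b1 < t1 and b2 < t2:
        mono = [0] * N
        for v in (b1, b2, t1, t2):
            mono[v - 1] += 1
        s22[tuple(mono)] = s22.get(tuple(mono), 0) + 1

assert s22 == sub(mul(h(2), h(2)), mul(h(3), h(1)))
```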
where the sum on the right is over all multisets of n positive integers.
To visualize one of these multisets, we imagine a bar graph in which
each element j of the multiset corresponds to a 1 × j bar, each bar
sits on the x-axis, and the bars are in weakly increasing order from
left to right with no gaps between consecutive bars. In Figure 6.1 we
see the bar graph for the multiset {1, 1, 3, 4, 4, 4}.
Figure 6.2. The boundary path of the bar graph for {1, 1, 3, 4, 4, 4}
Notice that we are free to choose any integer a in (6.3); soon we will
make an especially useful choice for this parameter.
Solution. The highest row in which two paths in our sequence inter-
sect is the row at height 7. This row contains just one intersection
point, so the rightmost intersection point is the point in this row where
the red and blue paths intersect. When we swap the tails of these
paths, we obtain the sequence of paths in Figure 6.8. Note that the
permutation for β is 23145 and the partition is (5, 42 , 3, 2), while the
permutation for tswp(β) is 23415 and the partition is still (5, 42 , 3, 2).
When we compute tswp(tswp(β)), we find exactly the same in-
tersection point as we did when we computed tswp(β). So we once
again swap the tails of the red and blue paths, to find tswp(tswp(β)) =
β. □
Now we can use tswp to cancel terms on the right side of (6.1).
Lemma 6.6. Suppose k ≥ 1 and λ is a partition with l(λ) ≤ k. Then we have
(6.5) $\sum_{\beta \in H_{\lambda,k}} (-1)^{\mathrm{inv}(\pi)}\, \mathrm{wt}_h(\beta) = \sum_{\beta \in \mathrm{II}_{\lambda,k}} \mathrm{wt}_h(\beta),$
where IIλ,k is the set of lattice path families in Hλ,k in which no two lattice paths intersect. (The notation II is intended to remind us of an H with its intersections removed.)
Solution. Reading from right to left, the first path in β has east steps
at heights 1, 2, 2, and 5, so these are the entries in the bottom row
of η(β). Similarly, the entries in the second row from the bottom are
2, 4, and 4. Continuing in this way, we find η(β) is the tableau in
Figure 6.10. □
Figure 6.10. The tableau η(β); its rows, from bottom to top, are 1 2 2 5 | 2 4 4 | 5 5 5 | 6 7
[Figure: a second family of lattice paths, with associated tableau rows, bottom to top, 1 1 3 4 4 | 2 3 5 | 4 4 6]
= sλ ,
find
$s_{21} = e_2 e_1 - e_3 = \det\begin{pmatrix} e_2 & e_3 \\ e_0 & e_1 \end{pmatrix},$
$s_{22} = e_2 e_2 - e_3 e_1 = \det\begin{pmatrix} e_2 & e_3 \\ e_1 & e_2 \end{pmatrix},$
$s_{31} = e_2 e_1 e_1 - e_2 e_2 - e_3 e_1 + e_4 = \det\begin{pmatrix} e_2 & e_3 & e_4 \\ e_0 & e_1 & e_2 \\ 0 & e_0 & e_1 \end{pmatrix},$
and
$s_{32} = e_2 e_2 e_1 - e_3 e_1 e_1 - e_3 e_2 + e_4 e_1 = \det\begin{pmatrix} e_2 & e_3 & e_4 \\ e_1 & e_2 & e_3 \\ 0 & e_0 & e_1 \end{pmatrix}.$
If we look at the subscripts in the diagonals of the matrices whose de-
terminants we are getting for sλ , we recognize the conjugate partition
λ′ , and if we look across the rows we see the subscripts increase by 1
every time we move to the next column to the right. This suggests
the following result.
Theorem 6.10 (The Second Jacobi–Trudi Identity). For any partition λ and any k ≥ λ1, we have
(6.6) $s_\lambda = \det\big(e_{\lambda'_j + l - j}\big)_{1 \le j,\, l \le k}.$
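As with the first identity, small cases of Theorem 6.10 are easy to verify directly; here is a sketch for λ = (2, 1) in three variables, where (6.6) gives s₂₁ = e₂e₁ − e₃ (the encoding and names are ours).

```python
from itertools import product, combinations
from collections import Counter

N = 3
# s_21(X_3) from tableaux: bottom row (a, b), one box c above a;
# rows weakly increase, columns strictly increase upward.
s21 = Counter()
for a, b, c in product(range(1, N + 1), repeat=3):
    if a <= b and a < c:
        mono = [0] * N
        for v in (a, b, c):
            mono[v - 1] += 1
        s21[tuple(mono)] += 1

def e(k):
    poly = Counter()
    for S in combinations(range(N), k):
        mono = [0] * N
        for i in S:
            mono[i] = 1
        poly[tuple(mono)] += 1
    return poly

# e_2 e_1 - e_3, as a Counter of monomials.
rhs = Counter()
for m1, c1 in e(2).items():
    for m2, c2 in e(1).items():
        rhs[tuple(x + y for x, y in zip(m1, m2))] += c1 * c2
for m3, c3 in e(3).items():
    rhs[m3] -= c3
rhs = +rhs  # drop zero coefficients

assert s21 == rhs
```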
$e_n = \sum_{\alpha \in \Gamma_{a,b,n}} \mathrm{wt}_e(\alpha)$
$\det\big(e_{\lambda'_j + l - j}\big)_{1 \le j,\, l \le k} = \sum_{\pi \in S_k} (-1)^{\mathrm{inv}(\pi)} \prod_{m=1}^{k}\; \sum_{\alpha \in \Gamma_{a,b,\lambda'_m + \pi(m) - m}} \mathrm{wt}_e(\alpha).$
we have
$\det\big(e_{\lambda'_j + l - j}\big)_{1 \le j,\, l \le k} = \sum_{\pi \in S_k} (-1)^{\mathrm{inv}(\pi)} \sum_{\substack{\alpha_1, \ldots, \alpha_k \\ \alpha_j \in \Gamma_{a_j, b_j, \lambda'_j + \pi(j) - j}}} \prod_{m=1}^{k} \mathrm{wt}_e(\alpha_m).$
Solution. In Figure 6.15 we have labelled each path with its corresponding αj and numbered their starting points and ending arrows from right to left.
Figure 6.15. The lattice path family for Example 6.11 with information for π included
If we now read our paths from top to bottom, we
see α1 (in black) tells us π(1) = 1, α2 (in yellow) tells us π(2) = 2, α3
(in green) tells us π(3) = 4, α4 (in red) tells us π(4) = 5, and α5 (in
blue) tells us π(5) = 3. Therefore, π = 12453.
To find λ, we note that for each j, the number of east steps in αj
is λ′j + π(j) − j, so λ′ = (4, 34 ) and λ = (53 , 1).
Finally, the e-weights for α1 , α2 , α3 , α4 , and α5 are x3 x5 x6 x8 ,
x1 x2 x3 , x1 x2 x4 x5 , x1 x2 x3 x4 , and x3 , respectively. □
Here we have again taken advantage of the fact that specifying which
path begins at which point is equivalent to specifying π.
Now we are ready to cancel all of the negative terms on the right
side of (6.7). We might be tempted to invent a new involution to do
this, but our old map tswp will do just fine, once we know it does not
change the e-weight of a family of lattice paths.
Here Ξλ,k is the set of β ∈ Eλ,k in which no two paths intersect. (The
notation Ξ is intended to remind us of an E with its intersections
removed.)
= sλ′ ,
which is what we wanted to prove. □
Proposition 6.16. The following are equivalent for any linear trans-
formation ω ∶ Λ → Λ.
(i) ω(eλ ) = hλ for all partitions λ.
(ii) ω(hλ ) = eλ for all partitions λ.
(iii) ω(sλ ) = sλ′ for all partitions λ.
Moreover, there is a unique linear transformation ω ∶ Λ → Λ which
satisfies (i)–(iii) above, and ω is an involution.
$= \sum_{\lambda,\mu} a_\lambda b_\mu\, h_\lambda h_\mu = \sum_{\lambda,\mu} a_\lambda b_\mu\, h_{\lambda \cup \mu},$
= (−1)n−1 pn .
We can now use Propositions 6.18 and 6.19 to find ω(pλ ) for any
partition λ.
= (−1)∣λ∣−l(λ) pλ ,
λ         ω(mλ)
(2, 1)    −2m3 − m21
(2²)      m4 + m22
(3, 1)    2m4 + m31
(3, 2)    −2m5 − m32
(2², 1)   3m5 + m41 + 2m32 + m221
(3²)      m6 + m33
6.4. Problems
6.1. Among all families of lattice paths with associated permutation
π = 5172436, what is the smallest total degree the associated
h-weight can have?
6.2. Among all families of lattice paths with associated permutation
π, what is the smallest total degree the associated h-weight can
have, in terms of the length of π and the entries of π?
6.6. Find the permutation π, the partition λ, and the e-weights corresponding to the family of lattice paths in Figure 6.19.
6.7. Give an example of a family of lattice paths with associated
permutation π = 35124, associated partition λ = (42 , 3, 1), and
h-weight x31 x3 x44 x26 x27 .
6.8. Give an example of a family of lattice paths with associated
permutation π = 35124, associated partition λ = (42 , 3, 1), and
e-weight x41 x23 x34 x5 x27 .
6.9. Find the sequence of lattice paths tswp(β) for the sequence of
lattice paths β in Figure 6.20.
6.10. Find the sequence of lattice paths tswp(β) for the sequence of
lattice paths β in Figure 6.21.
6.11. If η is the bijection in Lemma 6.7 and β is the family of lattice
paths in Figure 6.22, then find η(β).
6.5. Notes
The proofs of the Jacobi–Trudi identities we give here also appear
elsewhere, including in Bressoud’s book [Bre99] on alternating sign
matrices and Sagan’s book [Sag01] on the representation theory of
the symmetric group. They grow out of work of Lindström [Lin73]
and Gessel and Viennot [GV85] relating determinants to lattice path
counting problems.
We observed at the end of this chapter that computing ω(mλ )
is related to computing inverses of matrices of Kostka numbers. To
this end, Eğecioğlu and Remmel [ER90] have given a combinatorial
interpretation of the entries of these matrices.
Chapter 7

The Hall Inner Product
Solution. We can use the data in Figures 2.5 and 2.8 and some algebra
to find h3 = e3 −2e21 +e111 and m3 = 3e3 −3e21 +e111 . Using properties
of inner products, we find
⟨h3 , m3 ⟩e = ⟨e3 − 2e21 + e111 , 3e3 − 3e21 + e111 ⟩e
= ⟨e3 , 3e3 − 3e21 + e111 ⟩e + ⟨−2e21 , 3e3 − 3e21 + e111 ⟩e
+ ⟨e111 , 3e3 − 3e21 + e111 ⟩e
= 3⟨e3 , e3 ⟩e − 3⟨e3 , e21 ⟩e + ⟨e3 , e111 ⟩e − 6⟨e21 , e3 ⟩e
+ 6⟨e21 , e21 ⟩e − 2⟨e21 , e111 ⟩e + 3⟨e111 , e3 ⟩e
− 3⟨e111 , e21 ⟩e + ⟨e111 , e111 ⟩e
= 10.
We can perform similar computations for the other eight inner prod-
ucts or we can use technology to speed our work, but either way we
eventually reach the data in Table 7.1. Similarly, we can use the data
in Figure 2.9 to get the data in Table 7.2. And we can use the data
in Figure 4.10 to get the data in Table 7.3.
It is apparent from our data that ⟨ , ⟩s = ⟨ , ⟩h,m , but there is also
a hidden pattern. To see it, suppose we rescale ⟨ , ⟩p to eliminate the
algebra with formal power series in two infinite sets of variables. Here,
too, we will leave these issues aside, assuming the linear algebra works
out in analogy with the finite case.
As we might expect, we can combine a set of linearly independent
symmetric functions in x1 , x2 , . . . with a set of linearly independent
symmetric functions in y1 , y2 , . . . to obtain a linearly independent set
in Λk (X, Y ).
Proposition 7.3. Suppose {uj ∣ 1 ≤ j ≤ l} and {vm ∣ 1 ≤ m ≤ n} are
linearly independent sets in Λk . Then the set {uj (X)vm (Y ) ∣ 1 ≤ j ≤
l, 1 ≤ m ≤ n} is linearly independent in Λk (X, Y ).
and
(7.2) vλ = ∑_{µ⊢k} Bλµ vµ′ .
= ∑_{α⊢k} Aλα Bµα .
But Proposition 7.3 tells us the terms u′α (X)vβ′ (Y ) are linearly inde-
pendent, so if we compare the definition of Fk (u′ , v ′ ) with the right
side of (7.3), we see we must have
∑_{λ⊢k} Aλα Bλβ = δα,β .
Proof. Since the Schur functions are a basis for Λ, there are scalars
aλ and bµ such that g1 = ∑λ aλ sλ and g2 = ∑µ bµ sµ . Using the linear-
ity of ⟨ , ⟩, we find
⟨g1 , g2 ⟩ = ⟨∑λ aλ sλ , ∑µ bµ sµ ⟩
= ∑λ ∑µ aλ bµ ⟨sλ , sµ ⟩
= ∑λ aλ bλ .
= ∑λ ∑µ aλ bµ ⟨sλ′ , sµ′ ⟩
= ∑λ aλ bλ ,
following hold.
(1) a1 ≤ a2 ≤ ⋅ ⋅ ⋅ ≤ an .
(2) If aj = aj+1 , then bj ≤ bj+1 .
For any generalized permutation π = [a1 a2 ⋯ an ; b1 b2 ⋯ bn ] of length n we write topwt(π) to denote the top weight of π, which is given by
topwt(π) = ∏_{j=1}^{n} xaj ,
● Choose a number (possibly zero) of columns of the form [2 ; 2]. The generating function for these columns is 1 + x2 y2 + x2² y2² + ⋯ = 1/(1 − x2 y2 ).
Continue in this way, choosing a number (possibly zero) of columns of the form [j ; k] for all positive integers j and k. The generating function for these columns is 1/(1 − xj yk ). Once we have chosen all of our columns, there is a unique way to assemble them into a generalized permutation. Choosing l columns of the form [j ; k] corresponds to choosing the term (xj yk )^l from the factor 1/(1 − xj yk ) in the product ∏_{j=1}^{∞} ∏_{k=1}^{∞} 1/(1 − xj yk ), and the result follows. □
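The column-choosing argument can be checked mechanically. The Python sketch below is our illustration, not code from the text (the function name is ours): a generalized permutation is recovered from a multiset of columns by sorting the columns lexicographically, so generalized permutations of length n with entries at most m are counted by n-multisets of the m² possible columns.

```python
from itertools import combinations_with_replacement, product
from math import comb

def generalized_permutations(n, m):
    """All generalized permutations of length n with entries in 1..m.

    Each one arises from a multiset of columns (a, b), assembled in the
    unique valid order: sorting the columns lexicographically makes the
    top row weakly increasing, with ties broken by the bottom entries.
    """
    columns = list(product(range(1, m + 1), repeat=2))
    return [tuple(sorted(choice))
            for choice in combinations_with_replacement(columns, n)]

perms = generalized_permutations(3, 2)
# Distinct multisets of columns give distinct generalized permutations,
# so the count is the number of 3-multisets of the 4 possible columns.
assert len(perms) == len(set(perms)) == comb(2 * 2 + 3 - 1, 3)
print(len(perms))  # 20
```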
the number of dominoes with top entry k and bottom entry j. Moreover, if A is the matrix associated with π, then topwt(π) = ∏_{j=1}^{∞} xj^{λj} and bottomwt(π) = ∏_{j=1}^{∞} yj^{µj} , where µj is the sum of the entries in row j of A and λj is the sum of the entries in column j of A. Therefore, by Problem 2.17(a), the coefficient of ∏_{j=1}^{∞} xj^{λj} yj^{µj} on the right side of (7.6) is Mλ,µ (h, m), which is defined in (2.10).
Meanwhile, on the left side of (7.6) we can use the definition of
Mλ,µ (h, m) in (2.10) to find
∑λ mλ (X)hλ (Y ) = ∑λ ∑µ Mλ,µ (h, m)mλ (X)mµ (Y ).
Now Proposition 7.4 tells us that if ⟨ , ⟩ is the Hall inner product and
⟪, ⟫ is the bilinear form with ⟪mλ , hµ ⟫ = δλ,µ , then ⟨ , ⟩ = ⟪, ⟫. In
particular, ⟨mλ , hµ ⟩ = ⟪mλ , hµ ⟫ = δλ,µ . □
Solution. Using the data in Figure 2.9 and the fact that hn =
∑λ⊢n mλ , or using technology, we find h0 = p0 , h1 = p1 ,
h2 = (1/2) p2 + (1/2) p11 ,
h3 = (1/3) p3 + (1/2) p21 + (1/6) p111 ,
h4 = (1/4) p4 + (1/3) p31 + (1/8) p22 + (1/4) p211 + (1/24) p1111 ,
and
h5 = (1/5) p5 + (1/4) p41 + (1/6) p32 + (1/6) p311 + (1/8) p221 + (1/12) p2111 + (1/120) p11111 . □
When we compare the denominators in our solution to Exam-
ple 7.16 with our values for zλ (which we are expecting to appear
anyway), we eventually reach the following result.
Proposition 7.17. For all n ≥ 0, we have
(7.9) hn = ∑_{λ⊢n} (1/zλ ) pλ .
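Proposition 7.17 invites a machine check. The sketch below is ours, not from the text; it computes zλ for each λ ⊢ n and recovers the coefficients of Example 7.16. Since n!/zλ counts the permutations of cycle type λ, the coefficients 1/zλ must sum to 1.

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z(la):
    """z_lambda = product over part sizes j of j^{m_j} * m_j!."""
    result = 1
    for j in set(la):
        m = la.count(j)
        result *= j ** m * factorial(m)
    return result

n = 5
coeffs = {la: Fraction(1, z(la)) for la in partitions(n)}
# Denominators match Example 7.16: h5 = (1/5)p5 + (1/4)p41 + ...
print(coeffs[(5,)], coeffs[(4, 1)], coeffs[(1, 1, 1, 1, 1)])  # 1/5 1/4 1/120
# n!/z_lambda counts permutations of cycle type lambda, so these sum to 1.
assert sum(coeffs.values()) == 1
```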
The left side of (7.10) is the generating function for ordered pairs
(π, J), where π ∈ Sn and J is a multiset of n positive integers, and
7.3. Inner Products of Power Sums 203
We can now use our new interpretation of the right side of (7.10)
to prove Proposition 7.17.
7.4. Problems
7.1. Evaluate
∑λ fλ (X)eλ (Y ),
where the sum is over all partitions.
7.2. Find and prove a formula for ⟨en , hn ⟩.
7.3. Find and prove a formula for ⟨h1n , h1n ⟩.
7.5. Notes
Although it is now named for Hall, the Hall inner product was first
introduced by Redfield [Red27].
Chapter 8

The Robinson–Schensted–Knuth Correspondence
where the sum on the left is over all partitions. This result is known
as Cauchy’s formula, and in this chapter we give it a combinatorial
proof.
The sum on the left side of (8.1) has a natural combinatorial
interpretation: it is the generating function for the ordered pairs
(P, Q) of semistandard tableaux of the same shape, with respect to
the weight xQ y P . The product on the right side of (8.1) also has
a combinatorial interpretation. To describe it, recall that a generalized permutation of length n is a 2 × n array [a1 a2 ⋯ an ; b1 b2 ⋯ bn ] of positive integers such that a1 ≤ a2 ≤ ⋯ ≤ an , and if aj = aj+1 , then
bj ≤ bj+1 . We showed in Proposition 7.8 that the right side of (8.1) is
the generating function for generalized permutations π with respect
where the sum on the left is over all ordered pairs (P, Q) of semi-
standard tableaux with sh(P ) = sh(Q). This means we can prove
Cauchy’s formula by giving a bijection R from the set of generalized
permutations to the set of ordered pairs of semistandard tableaux of
the same shape such that if R(π) = (P, Q), then topwt(π) = xQ and
bottomwt(π) = y P .
5
2 4
1 3 3 5
5
2 4
1 3 3 5 5
5
2 4
1 3 3 4
Figure 8.3. The tableau T from Figure 8.1 after the 4 bumps
the 5. The 5 is waiting to be inserted into the second row.
5
2 4 5
1 3 3 4
5
5 5 4 4
2 4 3 2 3 2 3
1 2 3 5 1 2 3 5 1 2 3 5
the second row, just as we inserted the 4 in the first row. Our 5 is
greater than or equal to every entry in the second row, so we place it
in a new box at the end of the row. This gives us r4 (T ), which is the
semistandard tableau in Figure 8.4.
In our last example we ended up modifying two rows of T to get
r4 (T ). As you might imagine, we sometimes get much longer chains
of bumps. For example, suppose we want to find r2 (T ), where T is
still the semistandard tableau in Figure 8.1. In Figure 8.5 we have
the intermediate steps in this process: the 2 bumps the leftmost 3
out of the first row, that 3 bumps the 4 out of the second row, that 4
bumps the 5 out of the third row, and we put that 5 in a new box at
the end of the (previously empty) fourth row to obtain the tableau in
Figure 8.6.
5
4
2 3
1 2 3 5
4
2 2 4 2 3
P (2) P (24) P (243)
4 4
2 2 3
1 3 1 2
P (2431) P (24312)
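The bumping procedure described above takes only a few lines of code. The following Python sketch is our illustration, not code from the text; a tableau is a list of weakly increasing rows, with rows[0] the first row (drawn at the bottom in the book's figures).

```python
import bisect

def insert(tableau, x):
    """RSK row insertion: insert x, bumping along successive rows.

    Returns the new tableau and the (row, column) of the added box.
    """
    rows = [row[:] for row in tableau]   # leave the input unchanged
    i = 0
    while True:
        if i == len(rows):
            rows.append([x])             # start a new row at the top
            return rows, (i, 0)
        row = rows[i]
        j = bisect.bisect_right(row, x)  # leftmost entry greater than x
        if j == len(row):
            row.append(x)                # x goes in a new box at the end
            return rows, (i, j)
        x, row[j] = row[j], x            # bump that entry and continue
        i += 1

# r_2 applied to the tableau T of Figure 8.1 gives the tableau of Figure 8.6.
T = [[1, 3, 3, 5], [2, 4], [5]]
print(insert(T, 2)[0])  # [[1, 2, 3, 5], [2, 3], [4], [5]]
```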
The best way to build your intuition for the insertion process the
first time you see it is to compute a variety of examples. Try the
insertions in the next example yourself before checking your answers.
Example 8.2. Let α1 = 12132434, α2 = 21421343, α3 = 22134143,
α4 = 34231241, and α5 = 41312243. For each j, which of the semi-
standard tableaux in Figure 8.8 (if any) is P (αj )?
4
3 3
2 2 4 2 2 2 3 4
1 1 3 3 4 1 1 3 1 1 2 3 4
T1 T2 T3
4
4 3 3
3 4 2 2
1 1 2 2 3 1 1 4
T4 T5
9
7
6 6 9
2 5 8
1 3 7 8
9 9
7 7
6 6 9 6 6 9 8
2 5 8 7 2 5 7
1 3 5 8 1 3 5 8
9 9
7 9 7 9
6 6 8 6 6 8
2 5 7 2 5 7
1 3 5 8 1 3 5 8
6 6 9
3 4 6 7 7
1 2 4 4 5 5 6
6 6 9 6 6 9 6
3 4 6 7 7 5 3 4 5 7 7
1 2 4 4 4 5 6 1 2 4 4 4 5 6
9
6 6 6
3 4 5 7 7
1 2 4 4 4 5 6
9 9
6 6 6 6 6 6 7
3 4 5 7 7 5 3 4 5 5 7
1 2 4 4 4 4 6 1 2 4 4 4 4 6
9
6 6 6 7
3 4 5 5 7
1 2 4 4 4 4 6
b b
cj cj−1
cj−2 cj−2
before after
inserting inserting
cj−1 cj−1
When we insert cj−1 , we only change the jth row, and since we
put cj−1 in place of the leftmost entry greater than it, the entries in
the resulting row are weakly increasing from left to right.
Similarly, when we insert cj−1 , we only change the entries in the
column into which we insert it. In fact, we only change one entry of
that column. We consider two cases.
In the first case, suppose we insert cj−1 in the box immediately
above cj−2 . In this case the relevant entries are as in Figure 8.14. We
know by induction that cj < b, and by Lemma 8.6 we have cj−1 < cj < b.
On the other hand, Lemma 8.6 also tells us cj−2 < cj−1 , so our new
filling is a semistandard tableau.
In the second case, suppose we insert cj−1 in a box above and to
the left of cj−2 . In this case the relevant entries are as in Figure 8.15.
As in the previous case, we have cj−1 < b. We also know cj−1 was in
b b
cj cj−1
a · · · cj−2 a · · · cj−2
before inserting cj−1 after inserting cj−1
the box now occupied by cj−2 , which is to the right of a. Since that
filling was semistandard by induction, we must have cj−1 ≥ a. But
if cj−1 = a, then cj−2 would have bumped the leftmost a, not cj−1 .
Therefore, cj−1 > a and our new filling is a semistandard tableau. □
1 2 4 5
1 1 3 5 5
2 4 1 1
1 2 4 5 1 2 4 5 1 1 3 5 5
∣T1 ∣, then our shape has empty boxes; put a bold 1 in each empty box.
By the same argument we used to prove Proposition 8.9 the boxes
with nonbold entries form a semistandard tableau; this tableau is S,
and the bold entries form U .
By construction, U has shape λ/µ, where µ = sh S. By Lemma
8.8, no two boxes of U are in the same column, so U is a semistandard
skew tableau. To see that U is an elegant tableau, first note that the
bottom row of Sint is T2 . Furthermore, we can check that each entry
of T1 that we insert bumps the entry immediately above it in T2 (if
there is such an entry) out of the bottom row of Sint . Therefore, the
bottom row of our new filling is exactly T1 . In particular, this row
contains no bold 1’s, so U is an elegant tableau. In Figure 8.17 we
see the construction of S (in black) and U (in bold) from T1 , where
T is the reverse plane partition in Figure 8.16.
Now suppose T has exactly n ≥ 3 rows, T1 , . . . , Tn , so λj = ∣Tj ∣
for each j with 1 ≤ j ≤ n. To construct (S, U ), we first construct the
intermediate pair (Sint , Uint ) that we obtain by applying our bijection
to the reverse plane partition consisting of T2 , T3 , . . . , Tn . We view
Sint and Uint as being joined into a filling of shape (λ2 , . . . , λn ), with
the entries of Uint in bold.
To start, add an empty box to the top of each of the leftmost λ1
columns of our filling, add one to each bold entry, and move each bold
entry up one box in its column. This will leave exactly one empty
box in each of the leftmost λ1 columns, between the bold and nonbold
entries. Now let T̂1 be the row we obtain from T1 by removing those
2 3 5
1 3 4 4 6
1 3 4 4 5 5 6
2
2 5 1 2 5
1 3 4 4 6 1 3 4 4 6
1 1 2
2 5 6 1 1
1 3 4 4 5 5 6
Figure 8.19. Building S and U from Sint and Uint for the
reverse plane partition in Figure 8.18
8.2. Constructing Q(π) 223
associated pair (S, U ). We saw that the bottom row of S is the bot-
tom row of T , so we start by letting T1 be the bottom row of S. To
construct (Sint , Uint ) from (S, U ), we first continue to view S and U
as being joined into a filling of shape λ, and we remove the bold 1’s
from this filling. Now some columns contain an empty box and some
do not; apparently the highest nonbold entries in the columns with
no empty boxes are in the boxes that were added in the last step of
the construction of S from Sint . And by Lemma 8.8 these boxes were
added from upper left to lower right. We can reverse RSK insertion
on these boxes from lower right to upper left (see the next section for
more discussion of reversing RSK insertion), subtract one from each
bold entry and move it down one box in its column. Now the leftmost
λ1 columns will have an empty box at the top; when we remove these
boxes we obtain (Sint , Uint ). We have seen the second row of T is
the bottom row of Sint , so we let T2 be the bottom row of S, and we
repeat the process until we have removed all of the boxes from S and
U.
6 6
4 5 5 7
3 3 4 4 4
6 6
4 5 5 7
3 3 4 4
6 6
4 5 5
3 3 4 4 7
6 6
6
4 5 5 7 4 5 6 7
5
3 3 4 4 4 3 3 4 4 4
6
4 5 6 7
3 3 4 4 5
c=4
Figure 8.23. The intermediate steps when we remove the 6
from the top row of the semistandard tableau in Figure 8.20
right end of its row and the top of its column. We call such a box an
outer corner of the shape of T .
As Example 8.10 and the discussion before it suggest, if we specify
an outer corner of a semistandard tableau, then we can remove it
using a reverse insertion process. We start this process by removing
the specified outer corner from T . Suppose this corner is in the kth
row, and it has entry ck−1 . Now we find the rightmost entry ck−2 in
the (k − 1)th row which is less than ck−1 , and we replace it with ck−1 .
We repeat this process until we have removed an entry c0 from the
first row of T . As we note next, this process always produces a new
semistandard tableau.
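The reverse process can be sketched in the same style. This is again our code, not the text's; it assumes the specified box really is an outer corner of a semistandard tableau.

```python
import bisect

def reverse_insert(tableau, i):
    """Remove the outer corner at the end of row i by reverse bumping.

    Returns the smaller tableau and the entry c_0 expelled from the
    first row.  Rows are weakly increasing lists; rows[0] is row one.
    """
    rows = [row[:] for row in tableau]
    c = rows[i].pop()                        # remove the outer corner
    if not rows[i]:
        rows.pop()                           # its row may now be empty
    for k in range(i - 1, -1, -1):
        row = rows[k]
        j = bisect.bisect_left(row, c) - 1   # rightmost entry less than c
        c, row[j] = row[j], c                # replace it and keep going down
    return rows, c

# Removing the box that inserting 2 into the Figure 8.1 tableau created
# recovers that tableau and the 2, so this process undoes row insertion.
print(reverse_insert([[1, 2, 3, 5], [2, 3], [4], [5]], 3))
# ([[1, 3, 3, 5], [2, 4], [5]], 2)
```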
Example 8.13. Find Q(π1 ), Q(π2 ), and Q(π3 ) for the generalized permutations π1 = [1 1 2 3 3 ; 2 4 3 1 2], π2 = [1 1 2 3 4 ; 2 4 3 1 2], and π3 = [1 2 3 4 5 ; 2 4 3 1 2].
3 3 4
2 3 2 4 3 5
1 1 1 1 1 2
Q(π1 ) Q(π2 ) Q(π3 )
into each new box as we add it, we find Q(π1 ), Q(π2 ), and Q(π3 ) are
as in Figure 8.24. □
Now that we know how to construct Q(π), we can define the RSK
correspondence.
Definition 8.15. The RSK correspondence is the function R from
the set of generalized permutations of length n to the set of ordered
pairs of semistandard tableaux with the same shape, and n boxes,
which is defined by R(π) = (P (π), Q(π)).
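Definition 8.15 can be animated directly: insert the bottom entries one at a time, and record the top entries in the boxes as they are added. The sketch below is ours, not the text's; rows of P and Q are weakly increasing lists with rows[0] the first row.

```python
import bisect

def rsk(pairs):
    """RSK: a generalized permutation, given as its list of columns
    (a_j, b_j) in lexicographic order, maps to the pair (P, Q)."""
    P, Q = [], []
    for a, b in pairs:
        i, x = 0, b
        while i < len(P):
            row = P[i]
            j = bisect.bisect_right(row, x)
            if j == len(row):
                row.append(x)            # x settles at the end of row i
                break
            x, row[j] = row[j], x        # bump and move to the next row
            i += 1
        else:
            P.append([x])                # the chain made a new row
        if i == len(Q):
            Q.append([])
        Q[i].append(a)                   # record a in the box just added
    return P, Q

# pi_1 from Example 8.13; the results match Figures 8.7 and 8.24.
P, Q = rsk([(1, 2), (1, 4), (2, 3), (3, 1), (3, 2)])
print(P)  # [[1, 2], [2, 3], [4]]
print(Q)  # [[1, 1], [2, 3], [3]]
```

Because each row of Q only ever grows at its right end, appending a to row i keeps Q aligned with the shape of P.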
5 5 4 5
3 4 2 4
2 2 5 1 3 4
P (π) Q(π)
5
3 5
2 4 5
5 4
3 5 2 4
2 4 5 1 3 4
P Q
5 2
3 5 1 3
P Q
Proof. Since the entries of Q are exactly the entries of the top row of E(P, Q), we see the length of E(P, Q) is the number of boxes in Q. For convenience, set E(P, Q) = [a1 a2 ⋯ an ; b1 b2 ⋯ bn ].
Proof. By construction P (π) and Q(π) have the same shape for
any generalized permutation π. Therefore, by Propositions 8.9 and
8.14, for each n ≥ 0 the map R is indeed a function from the set
of generalized permutations of length n to the set of ordered pairs
(P, Q) of semistandard tableaux with n boxes and the same shape.
On the other hand, by Proposition 8.18 we also know that for each
n ≥ 0 the map E is a function from the set of ordered pairs (P, Q) of
semistandard tableaux with n boxes and the same shape to the set of
generalized permutations of length n. Finally, by construction R and
E are inverse functions, so they are both bijections.
By construction, the entries of P (π) are exactly the entries in the
bottom row of π, and the entries of Q(π) are exactly the entries of
the top row of π, so the last claim also holds. □
Example 8.21. Find π −1 for π = [1 1 2 3 4 4 4 ; 1 3 1 3 2 2 4].
8.4. Problems
8.1. Compute P (π) and Q(π) for π = [1 1 2 2 3 4 4 ; 1 5 2 2 1 1 4].
8.2. Find the generalized permutation π for which P (π) and Q(π)
are as in Figure 8.37.
4 5
3 4 6 4 5 7
1 3 3 2 3 3
P (π) Q(π)
Figure 8.37. The tableaux P (π) and Q(π) for Problem 8.2
5
3 5
2 4
1 2 2
j 1 2 3 4 5 6
f (j) 1 2 2 3 1 4
8.5. Notes
The RSK algorithm was first introduced independently by Robinson
[dBR38] and Schensted [Sch61] in the context of permutations. It
was extended to generalized permutations and applied to Cauchy’s
formula (among other things) by Knuth [Knu70]. Since then, nu-
merous authors have given generalizations and analogues in various
settings; van Leeuwen has written an overview of some of these re-
sults [vL96]. The growth diagram approach to the RSK algorithm
was introduced by Fomin [Fom95].
Schensted’s invention of the RSK algorithm was motivated by his
interest in studying longest increasing subsequences in permutations.
Much more has been discovered since Schensted’s work. You can find
out about many of these developments in Romik’s book [Rom15].
There is a beautiful formula for the number of standard tableaux
of shape λ, which involves the hook lengths of λ (see Problem B.5
for the definition of a hook in the Ferrers diagram of a partition). In
particular, the number of standard tableaux of shape λ is
n!/∏_{(j,k)∈λ} hj,k ,
where the product in the denominator is over all boxes in the dia-
gram of λ, and hj,k is the hook length of the box (j, k). Greene,
Nijenhuis, and Wilf [GNW79] have given an elegant probabilistic
proof of this formula, while Novelli, Pak, and Stoyanovskii [NPS97]
have given a more involved bijective proof. The hook-length formula
was discovered independently by Frame and Robinson (working to-
gether) and Thrall within hours of each other. For more details on
this story, and more information on bijective proofs of the formula,
see [Sag01, Sec. 3.10].
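As a quick illustration (ours, not from the text), the hook length formula is simple to evaluate: with rows and columns indexed from 0, the hook length of the box (j, k) is λj − k + λ′k − j − 1, where λ′ is the conjugate partition.

```python
from math import factorial, prod

def conjugate(la):
    """Conjugate partition: the column lengths of the diagram of la."""
    return [sum(1 for part in la if part > j) for j in range(la[0])] if la else []

def hook_product(la):
    """Product of the hook lengths over all boxes of la."""
    conj = conjugate(la)
    return prod(la[i] - j + conj[j] - i - 1
                for i in range(len(la)) for j in range(la[i]))

def num_standard_tableaux(la):
    """Hook length formula: n! divided by the product of hook lengths."""
    return factorial(sum(la)) // hook_product(la)

print(num_standard_tableaux([3, 2]))  # 5
```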
Chapter 9

Special Products Involving Schur Functions
So far we have found several bases for the vector space of symmetric
functions, we have discovered how to express some of these bases in
terms of others of them, and we have studied their relationship with a
useful inner product. In short, we have pursued a variety of questions
involving linear combinations of symmetric functions. In this chapter
we turn our attention to products of symmetric functions.
As a general goal, if λ and µ are partitions, and aλ and bµ are
elements of one of our named bases of Λ, then we would like to know
how to write aλ bµ as a linear combination of elements of each of
our named bases. In Definition 6.17 and the discussion following
it we described how to do this when aλ and bµ are both complete
homogeneous symmetric functions or are both elementary symmetric
functions. Next we consider products of Schur functions.
Our eventual goal is to give a combinatorial description of the
coefficients we obtain when we write a product sλ sµ of Schur functions
as a linear combination of Schur functions. In situations like this it is
typical to first consider the special cases λ = (1^n ) and λ = (n). Next we usually look at the case in which λ is a hook, which is a partition of the form (k, 1^{n−k} ), since these sometimes serve to interpolate between the partitions (1^n ) and (n). These are the cases we consider in this
chapter; they will lead us to combinatorial formulas for the coefficients
we obtain when we write en sµ , hn sµ , and pn sµ as linear combinations
of Schur functions.
Theorem 9.3 (The First Pieri Rule). For any nonnegative integer
n and any partition µ, we have
(9.1) hn sµ = ∑λ sλ ,
where the sum on the right is over all partitions λ such that µ ⊆ λ
and λ/µ is a horizontal strip of length n.
Proof. By definition
hn sµ = ∑_{(P,J)} xP ∏_{j∈J} xj ,
where the sum on the right is over all ordered pairs (P, J) in which
P is a semistandard tableau of shape µ and J is a multiset of n
positive integers. Therefore it is sufficient to give a bijection between
this set of ordered pairs and the set of semistandard tableaux P ′ for which µ ⊆ sh(P ′ ), sh(P ′ )/µ is a horizontal strip of length n, and xP′ = xP ∏_{j∈J} xj .
To describe our bijection, suppose P is a semistandard tableau
of shape µ and J is a multiset of positive integers j1 ≤ j2 ≤ ⋅ ⋅ ⋅ ≤ jn .
Use the RSK insertion algorithm to insert j1 , j2 , . . . , jn into P , in that
order, to obtain a filling P ′ of shape λ.
By Proposition 8.9, we know P ′ is a semistandard tableau, and by construction xP′ = xP ∏_{j∈J} xj . We also know by construction that
µ ⊆ λ, and the fact that j1 ≤ ⋅ ⋅ ⋅ ≤ jn combined with Lemma 8.8
implies no two of the boxes added to P in the construction of P ′ will
be in the same column. In other words, λ/µ is a horizontal strip.
To describe the inverse of our bijection, suppose we are given
a semistandard tableau P ′ of shape λ, where µ ⊆ λ and λ/µ is a
horizontal strip of length n. Use the inverse of the RSK insertion
algorithm to remove the boxes in λ/µ from P ′ , starting with the
rightmost box in λ/µ and moving to the left. Lemma 8.17 tells us
that if the numbers we remove from P ′ in this way are jn , jn−1 , . . . , j1 ,
then j1 ≤ ⋅ ⋅ ⋅ ≤ jn . And by Proposition 8.11 the resulting filling P is a
semistandard tableau, which must be of shape µ. Since this process
reverses our first process step-by-step, the two maps must be inverse
bijections. □
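Both Pieri rules are easy to explore by machine. In the Python sketch below (our code; the function names are ours), a horizontal strip is detected through the equivalent interlacing condition λi ≤ µi−1 for i ≥ 1, and the second Pieri rule is obtained by conjugating.

```python
def horizontal_strips(mu, n):
    """Partitions la ⊇ mu with la/mu a horizontal strip of n boxes.

    la/mu is a horizontal strip iff no two added boxes share a column,
    which is equivalent to la_i <= mu_{i-1} for every i >= 1.  These la
    index the Schur functions in h_n s_mu (the first Pieri rule)."""
    mu = list(mu) + [0]                      # at most one new row is possible
    results = []

    def extend(i, prefix, remaining):
        if i == len(mu):
            if remaining == 0:
                results.append(tuple(p for p in prefix if p))
            return
        hi = mu[i - 1] if i > 0 else mu[0] + remaining
        for part in range(mu[i], min(hi, mu[i] + remaining) + 1):
            extend(i + 1, prefix + [part], remaining - (part - mu[i]))

    extend(0, [], n)
    return sorted(results)

def vertical_strips(mu, n):
    """Partitions la ⊇ mu with la/mu a vertical strip of n boxes (the
    second Pieri rule, for e_n s_mu): conjugate, take horizontal strips,
    and conjugate back."""
    def conj(la):
        return tuple(sum(1 for p in la if p > j) for j in range(la[0])) if la else ()
    return sorted(conj(la) for la in horizontal_strips(conj(tuple(mu)), n))

print(horizontal_strips((2, 1), 2))  # [(2, 2, 1), (3, 1, 1), (3, 2), (4, 1)]
print(vertical_strips((2, 1), 2))    # [(2, 1, 1, 1), (2, 2, 1), (3, 1, 1), (3, 2)]
```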
252 9. Special Schur Function Products
where the sum on the right is over all ν ⊇ µ for which ν/µ is a horizon-
tal strip of length λ1 . But a horizontal strip of length λ1 has exactly
one filling of content λ1 , so Kν/µ,λ1 = 1, and the result holds for l = 1.
Now suppose l > 1 and the result holds for all λ of length l − 1.
In addition, notice that for any ν ⊇ µ we can group fillings of ν/µ
of content λ according to the horizontal strip formed by those boxes
which contain l. This implies that for any ν ⊇ µ, we have
Kν/µ,λ = ∑_{ν⊇ξ⊇µ} Kξ/µ,α ,
where α = (λ1 , . . . , λl−1 ) and the sum on the right is over all ξ ⊇ µ for which ν/ξ is a horizontal strip of length λl . With this in mind, by induction we now have
hλ sµ = sµ hλ1 ⋯ hλl−1 hλl
= ∑_{ξ⊇µ} Kξ/µ,α sξ hλl
= ∑_{ξ⊇µ} ∑_{ν⊇ξ} Kξ/µ,α sν
= ∑_{ν⊇µ} Kν/µ,λ sν ,
1 2 3 2 2 3
1 2 1 2
1 2 1 1
It is useful to observe that we can also use the Hall inner product
to obtain Corollary 9.6. To do this, first note that since the Schur
functions form a basis for Λ, there are constants Mλ,µ (h, s) such that
hλ = ∑µ Mλ,µ (h, s)sµ . Since the Schur functions are an orthonormal
basis for Λ with respect to the Hall inner product, we have
⟨sν , hλ ⟩ = ⟨∑ζ Kν,ζ mζ , hλ ⟩
= ∑ζ Kν,ζ ⟨mζ , hλ ⟩
= Kν,λ ,
and the result follows.
Our next goal is to write en sµ as a linear combination of Schur
functions. Fortunately, we can do this by applying ω to the first Pieri
rule. The resulting formula is known as the second Pieri rule.
Theorem 9.7 (The Second Pieri Rule). For any nonnegative integer
n and any partition µ, we have
(9.3) en sµ = ∑λ sλ ,
where the sum on the right is over all partitions λ ⊇ µ such that λ/µ
is a vertical strip of length n.
Proof. Apply ω to (9.1) and use Propositions 6.18 and 6.16(ii), (iii)
to simplify the result. □
where the sum on the right is over all ordered pairs (P, J) in which
P is a semistandard tableau of shape µ and J is a set of n positive
integers. To prove (9.3), we need to give a bijection between the
set of such ordered pairs and the set of semistandard tableaux P ′
for which µ ⊆ sh(P ′ ), sh(P ′ )/µ is a vertical strip of length n, and xP′ = xP ∏_{j∈J} xj . In the proof of Theorem 9.3 we used the RSK
insertion algorithm to insert the elements of J into P , from smallest
to largest. But Lemma 8.8 tells us that inserting the elements of J
in this order always produces a semistandard tableau P ′ for which
sh(P ′ )/µ is a horizontal strip, rather than the vertical strip we need.
9.1. The Pieri Rules 255
(9.4) eλ sµ = ∑ν Kν′/µ′,λ sν .
Proof. Apply ω to (9.2) and use Propositions 6.18 and 6.16(ii), (iii)
to simplify the result. □
eλ = ∑_{µ⊢n} Kµ′,λ sµ .
and
p5 s32 = s82 − s64 + s3331 − s322111 + s3211111 .
These results suggest pn sµ is always a simple sum of Schur functions,
some of which appear with coefficient −1. To determine which Schur
functions actually appear in the Schur expansion of pn sµ and with
what signs, it is helpful to look at Ferrers diagrams again. In Figure
9.8 we have the Ferrers diagrams of the partitions λ for which sλ
appears in the Schur expansion of p3 s21 , in Figure 9.9 we have the
Ferrers diagrams of the partitions which appear in the Schur expan-
sion of p4 s22 , and in Figure 9.10 we have the Ferrers diagrams of the
partitions which appear in the Schur expansion of p5 s32 .
obtaining
pn sµ = ∑_{j=0}^{n−1} (−1)^j s_{n−j,1^j} sµ .
We don't yet know how to express the product s_{n−j,1^j} sµ as a linear combination of Schur functions, but we do know how to express the hook Schur function s_{n−j,1^j} in terms of elementary and complete homogeneous symmetric functions. In particular, we can use (4.20) to eliminate s_{n−j,1^j} , obtaining
pn sµ = ∑_{j=0}^{n−1} ∑_{k=0}^{j} (−1)^k hn−k ek sµ .
the boxes in µv /µ are shaded light gray, and the boxes in µvh /µv are
shaded dark gray. In each of these pairs we are also required to have
j ≥ ∣µv /µ∣.
We would like to reorder our sum so the partition µvh appears
first, because we expect to eventually reach a sum over partitions that
we obtain by adding n boxes to µ, and this is exactly how we obtain
µvh . When we do this, we find
(9.5) pn sµ = ∑_{µvh ⊇µ} ∑_{µ⊆µv ⊆µvh} ∑_{j=∣µv /µ∣}^{n−1} (−1)^{∣µv /µ∣} sµvh ,
Lemma 9.14. Suppose µ ⊆ µvh are partitions and µvh /µ can be sepa-
rated into an inner vertical strip and an outer horizontal strip. Then
the following hold.
(i) Each box in µvh /µ which is not the top box in its column is
in the vertical strip.
(ii) Each box in µvh /µ which is not the leftmost box in its row
(in µvh /µ) is in the horizontal strip.
Proof. (i) If such a box were not in the vertical strip, then it would
be in the horizontal strip, as would every box above it in the same
column. Since there is at least one box above it, this would mean the
horizontal strip has at least two boxes in the same column, contra-
dicting the fact that it is a horizontal strip.
(ii) This is similar to the proof of (i). □
We can combine the two parts of Lemma 9.14 to show µvh /µ can
only be separated in certain situations.
Lemma 9.15. Suppose µ ⊆ µvh are partitions and µvh /µ can be sepa-
rated into an inner vertical strip and an outer horizontal strip. Then
µvh /µ can contain no 2 × 2 square.
not assign the head to a strip. Now traverse the component from box
to adjacent box, moving down and to the right. By Lemma 9.15 no
box in the component has boxes both directly below and directly to
the right, so each box has a unique next box in the component. If
we enter a box from the left, then Lemma 9.14 assigns that box to
the horizontal strip. If we enter a box from above, then Lemma 9.14
assigns that box to the vertical strip. We enter every box other than
the head of the component from exactly one of these directions, so
every box but the head is assigned to a unique strip. As we show
next, we are free to put the heads of the components in either strip.
Lemma 9.16. Suppose µ ⊆ µvh are partitions and µvh /µ can be sep-
arated into an inner vertical strip and an outer horizontal strip. Fur-
thermore, suppose we label the head of each connected component of
µvh /µ with v or h. Then there is a separation of µvh /µ in which the
heads labelled h are in the horizontal strip and the heads labelled v
are in the vertical strip.
Proof. First note that if two boxes in µvh /µ are in different connected
components, then they are not in the same row or column, so we can
assume without loss of generality that µvh /µ is connected.
Let s be the head of µvh /µ, label each box of µvh /µ which is not
at the top of its column with a v, and label each box which is not the
leftmost box in its row with an h. We make several observations.
First, no two boxes in the same column are labelled h, and no
two boxes in the same row are labelled v. Second, if a box labelled v
is in the same row (resp., column) as a box labelled h, then the box
labelled v is to the left (resp., below) the box labelled h. And finally,
every box in the same row as s is labelled h and every box in the
same column as s is labelled v.
Our last observation tells us that labelling s with h cannot give
us two boxes labelled h in the same column, and labelling s with v
cannot give us two boxes labelled v in the same row. This, combined
with our first observation, tells us that for either labelling of s, the
boxes labelled v will form a vertical strip and the boxes labelled h
will form a horizontal strip. And our second observation tells us the
resulting vertical strip will be inside the horizontal strip. □
9.2. The Murnaghan–Nakayama Rule 265
where the middle sum is over all labellings l of the heads of the com-
ponents of µvh /µ with v and h, and v(l) is the total number of boxes
labelled v in µvh /µ. We now prove our main result by explaining how
to cancel most of the terms on the right side of (9.6).
Theorem 9.17. For all n ≥ 0 and any partition µ, we have
pn sµ = ∑λ (−1)^{ht(λ/µ)−1} sλ .
Here the sum is over all λ ⊇ µ such that λ/µ is a connected border
strip with exactly n boxes, and ht (λ/µ) is the height of λ/µ.
Proof. We start with (9.6), and we give our proof in two stages.
By Lemma 9.16, we are free to switch the label of the head of the
topmost component of µvh /µ between v and h. Each switch changes
v(l) by one, reversing the sign of (−1)v(l) . So all terms for which we
can make this switch cancel. The terms for which we cannot make
this switch are exactly those in which j = v(l) and the head of the
topmost component of µvh /µ is labelled h: if we make the switch in
one of these terms, then we get a term with v(l) > j, which does not
appear in our sum. Therefore, we have
(9.7) pn sµ = ∑_{µvh ⊇µ} ∑l (−1)^{v(l)} sµvh ,
where the inner sum is over all labellings of the heads of the com-
ponents of µvh /µ with v and h in which the head of the topmost
component is labelled h, and v(l) is the total number of boxes la-
belled v in µvh /µ.
Equation (9.7) is close to the result we want, but it includes
terms associated with border strips which are not connected. To see
how these terms cancel, suppose µvh /µ is a border strip which is not
connected. By Lemma 9.16 we are free to switch the label on the
head of the second connected component from the top between v and
h. As before, this reverses the sign of the term but changes nothing
else, so all terms with a second connected component cancel.
Now we can take the sum on the right side of (9.7) to be over the
set of connected border strips with exactly n boxes. To obtain the
result we want to prove, we just need to note that in our labellings the
top row contains no v’s, and each row below the top contains exactly
one v, so v(l) = ht (µvh /µ) − 1. □
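Theorem 9.17 is easy to run mechanically. The following sketch (not from the book; the function name is our own) computes the expansion of p_n s_µ using beta-numbers (first-column hook lengths): adding a connected border strip of n boxes to µ corresponds to replacing a beta-number b by b + n, and the number of beta-numbers strictly between b and b + n equals ht(λ/µ) − 1.

```python
def pn_times_schur(mu, n):
    """Expansion of p_n * s_mu in the Schur basis (Theorem 9.17), returned
    as a dictionary {lambda: coefficient}.  Adding a connected border strip
    of n boxes to mu corresponds to replacing a beta-number b by b + n."""
    size = len(mu) + n                       # enough rows for any strip
    rows = list(mu) + [0] * (size - len(mu))
    betas = [rows[i] + (size - 1 - i) for i in range(size)]
    bset = set(betas)
    expansion = {}
    for b in betas:
        if b + n not in bset:                # a strip can be added here
            ht_minus_1 = sum(1 for c in betas if b < c < b + n)
            new = sorted((bset - {b}) | {b + n}, reverse=True)
            lam = tuple(x for x in (new[i] - (size - 1 - i)
                                    for i in range(size)) if x)
            expansion[lam] = (-1) ** ht_minus_1
    return expansion

# p_4 = s_(4) - s_(3,1) + s_(2,1,1) - s_(1,1,1,1): hooks, alternating signs.
assert pn_times_schur((), 4) == {(4,): 1, (3, 1): -1, (2, 1, 1): 1, (1, 1, 1, 1): -1}
# (2,1)/(1) is two disconnected boxes, so s_(2,1) does not appear here.
assert pn_times_schur((1,), 2) == {(3,): 1, (1, 1, 1): -1}
```

Only connected strips arise, because a single beta-number move always produces a skew shape that is a connected border strip.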
Example 9.19. Find all border strip tableaux of shape (3, 2, 2) with
type (3, 2, 1, 1).
Solution. There are three ways to place the innermost border strip
of length 3: as a horizontal strip, as a hook of shape (2, 1), and as
a vertical strip. In the first case there are two ways to place the
next border strip (which has length 2), each of which determines the
remaining border strips. In the second case there is no way to place
the next border strip. In the third case there are two ways to place
the next border strip, one of which determines the remaining border
strips, and the other of which has two choices for the placement of the
remaining two border strips. Figure 9.16 shows the resulting border
strip tableaux. □
[Figure 9.16: the five border strip tableaux of shape (3, 2, 2) and type (3, 2, 1, 1).]
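The count in Example 9.19, and more generally the signed sums over border strip tableaux, can be checked by machine. The sketch below (not from the book; the function names are our own) strips border strips from the outside in, again locating removable strips with beta-numbers.

```python
def beta_numbers(la, size):
    """First-column hook lengths of la, padded with zero rows to `size` rows."""
    rows = list(la) + [0] * (size - len(la))
    return [rows[i] + (size - 1 - i) for i in range(size)]

def strip_removals(la, m):
    """Yield (mu, sign) for each border strip of m boxes whose removal from
    la leaves the partition mu; sign = (-1)^(ht - 1), where ht is the number
    of rows the strip occupies."""
    size = len(la) + m
    betas = beta_numbers(la, size)
    bset = set(betas)
    for b in betas:
        if b - m >= 0 and b - m not in bset:
            ht_minus_1 = sum(1 for c in betas if b - m < c < b)
            new = sorted((bset - {b}) | {b - m}, reverse=True)
            mu = tuple(x for x in (new[i] - (size - 1 - i)
                                   for i in range(size)) if x)
            yield mu, (-1) ** ht_minus_1

def bst_sum(shape, type_, signed=False):
    """Count border strip tableaux of the given shape and type, or, when
    signed=True, sum their signs (the product of the per-strip signs)."""
    if not type_:
        return 1 if not shape else 0
    *rest, m = type_                       # remove the outermost strip first
    return sum((s if signed else 1) * bst_sum(mu, tuple(rest), signed)
               for mu, s in strip_removals(tuple(shape), m))

assert bst_sum((3, 2, 2), (3, 2, 1, 1)) == 5   # the five tableaux of Example 9.19
# p_(2,1) = s_(3) - s_(1,1,1): the signed sums give the Schur coefficients.
assert bst_sum((3,), (2, 1), signed=True) == 1
assert bst_sum((2, 1), (2, 1), signed=True) == 0
assert bst_sum((1, 1, 1), (2, 1), signed=True) == -1
```

With signed=True, the function computes exactly the sums of sgn(T) appearing in the expansion of p_λ in the Schur basis.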
Then we have

M_{λ,µ}(p, s) = ∑_T sgn(T),

where the sum on the right is over all border strip tableaux of shape µ with type λ.
Proof. If l(λ) = 1, then λ = (n). In this case the claim says M_{n,µ}(p, s) is the sum of sgn(T) over all border strip tableaux T of shape µ with type (n). But there is a border strip tableau of shape µ and type (n) if and only if µ is a hook (n−j, 1^j) for some j with 0 ≤ j ≤ n−1. Moreover, in this case there is exactly one border strip tableau of shape µ, which has sign (−1)^j. Therefore, the claim says

p_n = ∑_{j=0}^{n−1} (−1)^j s_{(n−j,1^j)},
p_λ = p_α p_{λ_l}
    = ∑_{ν ⊢ n−λ_l} ∑_{T_α ∈ BST(ν,α)} sgn(T_α) s_ν p_{λ_l}
    = ∑_{ν ⊢ n−λ_l} ∑_{T_α ∈ BST(ν,α)} sgn(T_α) ∑_µ (−1)^{ht(µ/ν)−1} s_µ ,
where the inner sum on the last line is over all partitions µ such that
ν ⊆ µ and µ/ν is a border strip of size λl . When we combine Tα
with µ/ν, we obtain a border strip tableau T of shape µ and type λ,
and by construction we have sgn(T ) = sgn(Tα )(−1)ht(µ/ν)−1 . Every
border strip tableau of shape µ and type λ is uniquely constructed in
this way, so we have
p_λ = ∑_{µ ⊢ n} ∑_{T ∈ BST(µ,λ)} sgn(T) s_µ ,
Now we can use the Hall inner product and Theorem 9.20 to
write the Schur functions as linear combinations of the power sum
symmetric functions.
Then we have

M_{λ,µ}(s, p) = z_µ^{−1} ∑_T sgn(T),
where the sum on the right is over all border strip tableaux of shape
λ and type µ.
9.3. Problems
9.1. How many border strips are contained in the partition (k n ),
where k and n are positive integers?
9.2. If we write h4 s221 as a linear combination of Schur functions,
then for which partitions λ does sλ have a nonzero coefficient?
9.3. If we write e4 s221 as a linear combination of Schur functions,
then for which partitions λ does sλ have a nonzero coefficient?
9.4. Suppose µ is a partition. For each n ≥ 0, let fh (n) be the sum of
the coefficients when we write hn sµ as a linear combination of
Schur functions. Similarly, for each n ≥ 0, let fe (n) be the sum
of the coefficients when we write en sµ as a linear combination of
Schur functions. Characterize those µ for which fh (n) = fe (n)
for all n ≥ 0.
9.5. If we write p4 s221 as a linear combination of Schur functions,
then for which partitions λ does sλ have a nonzero coefficient?
9.6. Find a skew shape λ/µ for which we have sλ/µ = e3 h4 h2 e5 h1 .
9.7. Show that for all partitions µ ⊢ n and ν ⊢ n, we have

∑_{λ ⊢ n} M_{µ,λ}(p, s) M_{ν,λ}(p, s) = δ_{µ,ν} z_µ .
9.8. Show that for all partitions λ ⊢ n and µ ⊢ n, we have

∑_{ν ⊢ n} z_ν^{−1} M_{ν,λ}(p, s) M_{ν,µ}(p, s) = δ_{λ,µ} .
The Littlewood–Richardson Rule
(10.1) s_µ s_ν = ∑_λ c^λ_{µ,ν} s_λ ,

where the sum on the right is over all partitions λ with |λ| = |µ| + |ν|.
Similarly, for any partitions µ ⊆ λ, the skew Schur function sλ/µ is a
symmetric function, so it too must be a linear combination of Schur
functions. That is, there must be constants d^λ_{µ,ν} with

(10.2) s_{λ/µ} = ∑_ν d^λ_{µ,ν} s_ν ,

where the sum on the right is over all partitions ν with |λ| = |µ| + |ν|.
The Pieri rules give us combinatorial descriptions of cλµ,ν when µ = (n)
or µ = (1n ). And in Chapter 5 we found conditions on λ and µ which
guarantee dλµ,ν = 0 except for a unique ν, for which dλµ,ν = 1. In this
[Table 10.1: the products T1 ∗r T2 for the pairs of semistandard tableaux T1 and T2 under consideration.]
But we have no reason to think the tableaux in Table 10.1 will have the
shapes (4, 2), (3, 3), (3, 2, 1), and (2, 2, 2) appearing on the right side
of (10.3), even though their weights are the terms in the associated
Schur functions. So it is a pleasant surprise to see that when we group
the tableaux in Table 10.1 by their shapes, we get exactly the sets of
tableaux for which the Schur functions in (10.3) are the generating
functions. In other words, ∗r appears to be a combinatorial model not
just of multiplication of Schur functions, but actually of multiplication
of the individual terms in those Schur functions. We will soon see
these observations hold in general.
We think we have a combinatorial model for the multiplication
happening in (10.1), but we might also want a combinatorial model
for whatever is going on in (10.2). The terms on the left side of
(10.2) are indexed by semistandard tableaux of a specific skew shape,
while the terms on the right are indexed by semistandard tableaux of
Solution. If we choose the top box in the left column of µ and perform
the sliding operation, we obtain the objects in Figure 10.2. Here we
have colored the blank box light gray at each step. In Figure 10.3 we
see each of the intermediate tableaux which occur in the computation
of rect(T ), for a particular sequence of choices of outer corner. In each
tableau, we have shaded the boxes in the sliding path that produced
that tableau. □
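The sliding computation in this example can be automated. Below is a sketch of jeu de taquin rectification (not from the book; the function name and the (mu, rows) input format are our own), in the French convention: at each stage it slides from the topmost inner corner, and on ties it moves the box above the empty box, which preserves column strictness. Since rect(T) is independent of the choice of inner corners, fixing this choice loses nothing.

```python
def rectify(mu, rows):
    """Jeu de taquin rectification.  The skew tableau has inner shape mu
    (padded with zeros to the number of rows) and rows listed bottom row
    first (French convention); rows[r] holds the entries of row r, which
    occupy columns mu[r], mu[r]+1, ...  Returns rect(T) as a list of rows."""
    t = {(r, mu[r] + i): x
         for r, row in enumerate(rows) for i, x in enumerate(row)}
    mu = list(mu)
    while any(mu):
        r = max(i for i, m in enumerate(mu) if m)   # topmost nonempty row of mu;
        c = mu[r] - 1                               # its last box is an inner corner
        mu[r] -= 1
        while True:                                 # slide the empty box outward
            up, right = t.get((r + 1, c)), t.get((r, c + 1))
            if up is None and right is None:
                break
            if right is None or (up is not None and up <= right):
                t[(r, c)] = t.pop((r + 1, c))       # ties move the upper box,
                r += 1                              # preserving column strictness
            else:
                t[(r, c)] = t.pop((r, c + 1))
                c += 1
    result = []
    for row_index in range(max(b[0] for b in t) + 1):
        cols = sorted(c for (rr, c) in t if rr == row_index)
        result.append([t[(row_index, c)] for c in cols])
    return result

# The skew tableau of shape (2,2)/(1) with bottom row (., 1), top row (1, 2):
assert rectify((1, 0), [[1], [1, 2]]) == [[1, 1], [2]]
```

The result agrees with inserting the tableau's reading word into the empty tableau, as the results later in this chapter guarantee.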
[Figures 10.2 and 10.3: the objects produced by the sliding operation, with the empty box shaded light gray at each step, and the intermediate tableaux in the computation of rect(T), with the boxes of each sliding path shaded.]
[Table: the products T1 ∗j T2 for the same pairs of tableaux as in Table 10.1; the entries coincide with the ∗r products.]
(1) There are entries x, y, and z such that x < y ≤ z, and there are words w1 and w2 such that one sequence has the form

w1 yxzw2

and the other has the form

w1 yzxw2.

That is, the two sequences are identical, except that the consecutive entries x and z have been swapped.

(2) There are entries x, y, and z such that x ≤ y < z, and there are words w1 and w2 such that one sequence has the form

w1 xzyw2

and the other has the form

w1 zxyw2.

That is, the two sequences are identical, except that the consecutive entries x and z have been swapped.
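The two elementary Knuth transformations are easy to implement. The following sketch (not from the book; the function names are our own) generates a word's entire Knuth equivalence class by breadth-first search over these moves.

```python
from itertools import permutations

def knuth_neighbors(w):
    """Words reachable from the tuple w by one elementary Knuth transformation."""
    out = set()
    for i in range(len(w) - 2):
        p, q, r = w[i], w[i + 1], w[i + 2]
        if q < p <= r or r < p <= q:        # y x z <-> y z x  (x < y <= z)
            out.add(w[:i] + (p, r, q) + w[i + 3:])
        if p <= r < q or q <= r < p:        # x z y <-> z x y  (x <= y < z)
            out.add(w[:i] + (q, p, r) + w[i + 3:])
    return out

def knuth_class(w):
    """The Knuth equivalence class of w, by breadth-first search."""
    seen, frontier = {w}, [w]
    while frontier:
        for v in knuth_neighbors(frontier.pop()):
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return frozenset(seen)

# The identity permutation admits no elementary Knuth transformation.
assert knuth_class((1, 2, 3, 4)) == {(1, 2, 3, 4)}
# The 24 permutations of [4] fall into 10 Knuth classes, one for each
# standard tableau with four boxes.
classes = {knuth_class(w) for w in permutations(range(1, 5))}
assert len(classes) == 10
```

The connected components of the graph G in Example 10.9 below are exactly these classes.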
Example 10.9. Let G be the graph whose vertices are the permu-
tations of [4], in which two vertices are connected whenever they are
related by an elementary Knuth transformation. Sketch G.
[Figure: the graph G on the permutations of [4], with edges given by elementary Knuth transformations.]
[Figure: the effect of inserting x into a row of a semistandard tableau.]
[Figures: collections of words grouped into Knuth equivalence classes, together with their common images under the maps P and word.]
Now suppose there are words u and v, and entries x ≤ y < z, such that w1 = uxzyv and w2 = uzxyv. As before, it is sufficient to show r_y(r_z(r_x(T))) = r_y(r_x(r_z(T))) for any semistandard tableau T. We can check this directly when T is empty, so suppose T has at least one row. We argue by induction on the number of rows in T.
The rest of the proof is an examination of several cases, most
of which we leave to the reader. We outline the cases involved and
handle two of them in detail.
Our first case division involves x and y: either x = y or x < y.
Suppose x = y, and the first row of T has entries a1 , . . . , aj+k+l ,
where
Theorem 10.16 tells us the map P induces a function from the set
W (A)/ ∼K of equivalence classes of ∼K to T (A). Abusing notation,
we write P to denote this function. We also note that by composing
word with the function mapping a word in W (A) to its Knuth equiv-
alence class, we get a function from T (A) to W (A)/ ∼K . We abuse
notation here, too, writing word to denote this function.
Corollary 10.13 tells us the composition word ○ P is the identity on W(A)/∼K, and Figure 10.11 suggests P ○ word is the identity on T(A). Next we show this holds in general: for every semistandard tableau T, we have P(word(T)) = T.
Proof. For each j, let r_j be the jth row of T, so word(T) = r_l r_{l−1} ⋯ r_1, where l is the number of rows in T. Since the entries of r_l are weakly increasing, P(r_l) is a copy of the top row of T. Because T is column-strict, each entry of r_{l−1} is strictly less than the corresponding entry of r_l. And since T is weakly increasing across rows, each entry of r_{l−1} is greater than or equal to the entry to its right in r_{l−1}. Therefore, each entry of r_{l−1} bumps the corresponding entry of r_l up to the second row, and P(r_l r_{l−1}) is a copy of the top two rows of T. This occurs for each row: for each j, P(r_l ⋯ r_{l−j}) is a copy of the top j + 1 rows of T. In particular, P(word(T)) = P(r_l ⋯ r_1) = T, which is what we wanted to prove. □
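The proof above can be checked by machine. Here is a sketch (not from the book; the function names are our own) of row insertion, P, and word: insertion bumps the leftmost entry of a row that is strictly greater than the inserted entry, and tableaux are stored as lists of rows with the bottom row first (French convention).

```python
def row_insert(tableau, x):
    """Insert x into a semistandard tableau (rows listed bottom row first);
    returns a new tableau and leaves the input unchanged."""
    rows = [list(r) for r in tableau]
    for row in rows:
        for i, y in enumerate(row):
            if y > x:                 # bump the leftmost entry greater than x
                row[i], x = x, y
                break
        else:
            row.append(x)             # x fits at the end of this row
            return rows
    rows.append([x])                  # x was bumped out of the top row
    return rows

def P(w):
    """The insertion tableau P(w) = r_{w_n}( ... r_{w_1}(empty) ... )."""
    t = []
    for x in w:
        t = row_insert(t, x)
    return t

def word(t):
    """Reading word of t: top row down to bottom row, each left to right."""
    return [x for row in reversed(t) for x in row]

T = [[1, 1, 3], [2, 3]]              # bottom row [1,1,3], top row [2,3]
assert word(T) == [2, 3, 1, 1, 3]
assert P(word(T)) == T               # P(word(T)) = T
```

Running P over each word of a Knuth class produces the same tableau, in line with the results of this section.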
Now we know exactly how the maps P and word are related.
[Figure 10.12: the semistandard tableaux T and W in Examples 10.26 and 10.28.]
[Figures: the tableaux U1, V1, U2 = V2, S1, S2, and related intermediate tableaux from the surrounding examples.]
same tableau for both U S and U′S. For U S this will be P(a_1 ⋯ a_|µ|) and for U′S this will be P(d_1 ⋯ d_|µ|), so these tableaux must be equal. Therefore, U is independent of the filling we use to construct U S.
equivalent. □
Lemma 10.30. Suppose π = [ a_1 a_2 ⋯ a_n ; b_1 b_2 ⋯ b_n ] is a generalized permutation and T is a semistandard tableau. Let

U = r_{b_n}(r_{b_{n−1}}(⋯ r_{b_1}(T))),

and let S be the semistandard skew tableau we obtain by placing a_1, …, a_n in the boxes we add to T in the construction of U so that for each j we place a_j in the jth box we add to T. Then rect(S) = Q(π).
Proof. Before we get to the details of the proof, we need to set some
additional notation.
First, for any generalized permutation α, we write bottom(α) to
denote the bottom row of α.
Second, let T^− be the filling we obtain by subtracting the largest entry of T from every entry of T. Note that T^− is a semistandard tableau, even though none of its entries are positive. Now let σ = [ c_1 c_2 ⋯ c_m ; d_1 d_2 ⋯ d_m ] be the generalized permutation with R(σ) = (T, T^−), so P(σ) = T and Q(σ) = T^−. Finally, let τ be the concatenation of σ and π, so τ = [ c_1 ⋯ c_m a_1 ⋯ a_n ; d_1 ⋯ d_m b_1 ⋯ b_n ]. Observe that by construction Q(τ) is T^− with S attached, and we have P(τ) = r_{b_n}(r_{b_{n−1}}(⋯ r_{b_1}(T))).
To start the proof, recall from Definition 8.20 that τ^{−1} is the generalized permutation we obtain by reordering the columns of the array [ d_1 ⋯ d_m b_1 ⋯ b_n ; c_1 ⋯ c_m a_1 ⋯ a_n ]. This means it is sufficient to show

word(Q(π)) ∼K bottom(π^{−1}) ∼K word(S) ∼K word(rect(S)),

since the result will then follow from Corollary 10.18.
To prove word(Q(π)) ∼K bottom(π^{−1}), first recall from Theorem 8.29 that Q(π) = P(π^{−1}). Combining this with Corollary 10.13 and the fact that P(π^{−1}) = P(bottom(π^{−1})), we find

word(Q(π)) = word(P(π^{−1})) ∼K bottom(π^{−1}).
Proof. Suppose (U, V) ∈ T(µ, ν, T), let σ = [ s_1 s_2 ⋯ s_|ν| ; b_1 b_2 ⋯ b_|ν| ] be the generalized permutation with P(σ) = V, and let Q(σ) = W. Insert b_1, …, b_|ν| into U, and insert s_1, …, s_|ν| into the corresponding boxes of the diagram of λ/µ to obtain S = Ψ(U, V).
π′ = [ v_1 ⋯ v_|µ| s_1 ⋯ s_|ν| ; d_1 ⋯ d_|µ| b_1 ⋯ b_|ν| ], then P(π′) = P(σ′) ∗ P(σ) = U ∗ V = T and Q(π′) = U S. But we also have P(π) = T and Q(π) = U S, so we must have π = π′. Therefore, U = P(σ′) = P(a_1 ⋯ a_|µ|) = U_1 and V = P(σ) = P(c_1 ⋯ c_|ν|) = V_1.
Now suppose S ∈ S(λ/µ, W), fill the boxes of µ in S with entries which are smaller than all entries of S to create a semistandard tableau U S of shape λ, and let π = [ u_1 ⋯ u_|µ| t_1 ⋯ t_{|λ|−|µ|} ; a_1 ⋯ a_|µ| c_1 ⋯ c_{|λ|−|µ|} ] be the generalized permutation with P(π) = T and Q(π) = U S. Set U = P(a_1 ⋯ a_|µ|) and V = P(c_1 ⋯ c_{|λ|−|µ|}).
To construct Ψ(Ω(S)) = Ψ(U, V), find σ = [ s_1 s_2 ⋯ s_|ν| ; b_1 b_2 ⋯ b_|ν| ] with P(σ) = V and Q(σ) = W. Construct r_{b_|ν|}(⋯ r_{b_1}(U)), and let S_1 = Ψ(Ω(S)) be the semistandard skew tableau of shape λ/µ that we obtain by inserting s_1, …, s_|ν| into the new boxes of λ/µ as they are added. We claim S = S_1.
To prove our claim, suppose τ = [ t_1 t_2 ⋯ t_{|λ|−|µ|} ; c_1 c_2 ⋯ c_{|λ|−|µ|} ]. Then P(τ) = P(c_1 ⋯ c_{|λ|−|µ|}) = V = P(σ). In addition, by Lemma 10.30 and our construction, we have Q(τ) = W = Q(σ). Therefore, τ = σ.
Now we can construct both S and S_1 by using [ u_1 u_2 ⋯ u_|µ| ; a_1 a_2 ⋯ a_|µ| ] to build U, then constructing r_{b_|ν|}(⋯ r_{b_1}(U)) and building a semistandard skew tableau of shape λ/µ by inserting s_1, …, s_|ν| into the new boxes of λ/µ as they are added. Therefore, S = S_1.
We have now shown Ω○Ψ is the identity on T (µ, ν, T ) and Ψ○Ω is
the identity on S(λ/µ, W ), so Ψ and Ω are inverses of one another. □
Last but not least, we can also connect our expressions for sµ sν
and sλ/µ as linear combinations of Schur functions.
Example 10.37. Find all words which are Knuth equivalent to the
word 332211.
We can now gather the main results of this section into a single
theorem. Our proof of this result is short, but it provides an outline
of the ideas we have developed in this section.
Theorem 10.40 (The Littlewood–Richardson Rule for Symmetric Functions). For any partitions λ, µ, and ν, let c^λ_{µ,ν} be the number of semistandard skew tableaux of shape λ/µ and content ν whose reading words are Littlewood–Richardson words. Then

s_µ s_ν = ∑_λ c^λ_{µ,ν} s_λ

and

s_{λ/µ} = ∑_ν c^λ_{µ,ν} s_ν .
Proof. Since the Schur functions form a basis for the space of all symmetric functions, for all partitions µ, ν, and λ there are constants c^λ_{µ,ν} and d^λ_{µ,ν} with

s_µ s_ν = ∑_λ c^λ_{µ,ν} s_λ

and

s_{λ/µ} = ∑_ν d^λ_{µ,ν} s_ν .
Let T be a semistandard tableau of shape λ, and let W be the semi-
standard tableau of shape ν in which every entry in the jth row is j.
By Corollary 10.34 we have cλµ,ν = ∣T (µ, ν, T )∣, by Corollary 10.35 we
have dλµ,ν = ∣S(λ/µ, W )∣, and by Theorem 10.33 we have cλµ,ν = dλµ,ν .
By Theorem 10.39 we know S(λ/µ, W ) is the set of semistandard
skew tableaux of shape λ/µ and content ν whose reading words are
Littlewood–Richardson words. □
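For small shapes we can compute c^λ_{µ,ν} directly from Theorem 10.40 by brute force. In the sketch below (not from the book; the function name is our own), a filling's reading word is taken top row first, each row left to right, and the Littlewood–Richardson condition is checked on suffixes, as in Problem 10.1. This is exponential in the number of boxes and is meant only for small examples.

```python
from itertools import permutations

def lr_coefficient(lam, mu, nu):
    """c^lam_{mu,nu}: semistandard fillings of lam/mu with content nu whose
    reading word (top row first, rows left to right) is an LR word."""
    boxes = [(r, c) for r, row_len in enumerate(lam)
             for c in range((mu[r] if r < len(mu) else 0), row_len)]
    entries = [i + 1 for i, m in enumerate(nu) for _ in range(m)]
    count = 0
    for filling in set(permutations(entries)):
        t = dict(zip(boxes, filling))
        # rows weakly increase left to right
        if any(t[(r, c)] > t[(r, c + 1)] for (r, c) in boxes if (r, c + 1) in t):
            continue
        # columns strictly increase bottom to top (French convention)
        if any(t[(r, c)] >= t[(r + 1, c)] for (r, c) in boxes if (r + 1, c) in t):
            continue
        word = [t[b] for b in sorted(boxes, key=lambda rc: (-rc[0], rc[1]))]
        # LR condition: every suffix contains at least as many i's as (i+1)'s
        tallies = [0] * (len(nu) + 1)
        ok = True
        for x in reversed(word):
            tallies[x - 1] += 1
            if x > 1 and tallies[x - 1] > tallies[x - 2]:
                ok = False
                break
        count += ok
    return count

assert lr_coefficient((2,), (1,), (1,)) == 1       # s_1 s_1 = s_2 + s_11
assert lr_coefficient((1, 1), (1,), (1,)) == 1
assert lr_coefficient((3, 2, 1), (2, 1), (2, 1)) == 2   # s_21 appears twice in s_21^2
```

The last assertion checks the well-known coefficient c^{(3,2,1)}_{(2,1),(2,1)} = 2.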
10.5. Problems
10.1. Suppose a_1 < a_2 < ⋯ < a_m are integers. For any word w in a_1, …, a_m, let µ_j(w) be the number of times a_j appears in w, and write µ(w) to denote the sequence µ_1(w), …, µ_m(w). We say a word w = b_1 ⋯ b_n in a_1, …, a_m is a Littlewood–Richardson word whenever µ(b_j ⋯ b_n) is a partition for all j. Show that if w is a Littlewood–Richardson word, then w is Knuth equivalent to

a_m ⋯ a_m ⋯ a_1 ⋯ a_1,

in which each a_j appears µ_j(w) times.
We use this result in our proof of Theorem 10.39.
10.2. How many permutations in Sn are Knuth equivalent to no other
permutations?
● All of the partial sums along rows and diagonals from left
to right, from southeast to northwest, and from northeast
to southwest are nonnegative.
In Figure 10.18 we see which rows and diagonals sum to which
values, and the directions in which the partial sums in the last
condition are computed. Show that the Littlewood–Richardson
coefficient cλµ,ν is the number of BZ patterns of type (r, l, m, n),
where r ≥ max(l(λ), l(µ), l(ν)), lj = λr−j − λr−j+1 for all j, mj =
µj − µj+1 for all j, and nj = νj − νj+1 for all j.
[Figure 10.18: the values of the sums and the directions of the partial sums in the definition of a BZ pattern.]
[Figure 10.19: the edge-labeled puzzle tiles.]
with copies of the edge-labeled tiles in Figure 10.19 (in the given
orientations: no rotations or reflections are allowed) such that
the following hold.
● If two tiles share an edge, then that edge has the same
label in both tiles.
● The labels on the edges from left to right along the north-
west boundary are m1 , . . . , mr .
● The labels on the edges from left to right along the north-
east boundary are n1 , . . . , nr .
● The labels on the edges from left to right along the south
boundary are l1 , . . . , lr .
Show that the Littlewood–Richardson coefficient cλµ,ν is the
number of puzzles of type (r, l, m, n), where λ is the partition
associated with (l1 , . . . , lr ), µ is the partition associated with
(m1 , . . . , mr ), and ν is the partition associated with (n1 , . . . , nr ).
10.12. An increasing edge-vertex labeling of a triangular lattice is a
labeling of the vertices and edges of the lattice with nonnegative
integers such that the following hold.
● The labels of the vertices are weakly increasing from left
to right along each row and each diagonal.
10.6. Notes
Our proof of the Littlewood–Richardson rule follows Fulton [Ful96],
but there are many other proofs available. These include proofs by
Berenstein and Zelevinsky [BZ88], Remmel and Shimozono [RS98],
Gasharov [Gas98], and Stembridge [Ste02]. As Problems 10.10,
10.11, and 10.12 suggest, there are also several combinatorial in-
terpretations of the Littlewood–Richardson coefficients. The inter-
pretations in these problems are based on work of Berenstein and
Zelevinsky [BZ92], Knutson and Tao [KT99], and Knutson, Tao,
and Woodward [KTW04].
Appendix A
Linear Algebra
Many familiar algebraic systems are fields: the set R of real numbers, the set Q of rational numbers, and the set C of complex numbers are all fields under the usual addition and multiplication. But there are other fields, too. For example, the set Q[√2] = {a + b√2 | a, b ∈ Q} is a field under the usual addition and multiplication, as is the set of real numbers which are roots of polynomials with rational coefficients. The set Q(t) of rational functions with rational coefficients is also a field under the usual addition and multiplication.
All of the fields we have mentioned so far are infinite, but there
are also finite fields. For example, for any prime p, the set {0, 1, 2, . . . ,
p − 1} is a field under addition and multiplication modulo p. In fact,
for every prime p and every positive integer n, there is a field of order
pn , which is unique up to isomorphism. (However, when n ≥ 2 this
field is not isomorphic to the set {0, 1, 2, . . . , pn − 1} under addition
and multiplication modulo pn .) It turns out these are all of the finite
fields.
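As a quick illustration of the difference between Z/pZ and Z/p^nZ, the sketch below (not from the book) lists the elements with multiplicative inverses modulo n.

```python
def units(n):
    """Elements of Z/nZ that have a multiplicative inverse."""
    return [a for a in range(1, n) if any(a * b % n == 1 for b in range(1, n))]

assert units(5) == [1, 2, 3, 4]   # Z/5Z is a field: every nonzero element is invertible
assert units(4) == [1, 3]         # Z/4Z is not a field: 2 has no inverse
```

This is why the field of order p^n (n ≥ 2) cannot be {0, 1, …, p^n − 1} with arithmetic modulo p^n.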
It is possible to study symmetric functions over any field, and
everything we do in this book will work over any field containing Q.
Indeed, many recent symmetric function results involve one or two
parameters, so they are most easily stated as results over Q(q) or
Q(q, t). For concreteness, though, we will do all of our work over Q.
Once we have a field, we can study vector spaces over that field.
Many algebraic objects are like vector spaces, in that they are
sets with some additional structure, usually coming from operations
of various sorts. For these objects, we’re usually interested in subsets
with the same structure.
We can show that the span of any finite set of vectors is nonempty
(because it contains ⃗0) and closed under addition and scalar multipli-
cation, so the span of any finite set of vectors is a subspace.
But v⃗1 , . . . , v⃗n are linearly independent, so the coefficients on the right
are all 0. Therefore, aj = bj for all j. □
Most vector spaces have numerous bases, but one can use facts
about ranks of rectangular matrices and solutions of the associated
systems of linear equations to show that any two bases for a finite-
dimensional vector space will be equinumerous. So it makes sense to
make the following definition.
Definition A.11. Suppose V is a finite-dimensional vector space over
a field F . We say V has dimension n, or V is n dimensional, and we
write dim V = n, whenever V has a basis with exactly n elements.
u⃗ + v⃗ = (a_1 + b_1) v⃗_1 + ⋯ + (a_n + b_n) v⃗_n .
Therefore, we have
f(u⃗ + v⃗) = (a_1 + b_1) w⃗_1 + ⋯ + (a_n + b_n) w⃗_n
          = a_1 w⃗_1 + ⋯ + a_n w⃗_n + b_1 w⃗_1 + ⋯ + b_n w⃗_n
          = f(u⃗) + f(v⃗),
so f respects addition. The proof that f respects scalar multiplication
is similar.
The proof that f is unique is Problem A.9. □
To verify condition (3), note that for any f(x), g(x) ∈ V and any constant c, we have

⟨cf(x), g(x)⟩ = ∫_{−1}^{1} c f(x) g(x) dx = c ∫_{−1}^{1} f(x) g(x) dx = c ⟨f(x), g(x)⟩.

To verify conditions (4) and (5), first note that if f(x) = 0, then ∫_{−1}^{1} (f(x))² dx = 0, so ⟨f(x), f(x)⟩ = 0. Conversely, if f(x) ∈ V and f(x) ≠ 0, then (f(x))² ≥ 0 throughout the interval [−1, 1], and there is a subinterval on which (f(x))² > 0. Therefore, ⟨f(x), f(x)⟩ = ∫_{−1}^{1} (f(x))² dx > 0. □
⟨f(x), g(x)⟩ = ∫_{−1}^{1} f(x) g(x) dx.
Solution. We first construct an orthogonal basis v⃗_1, v⃗_2, v⃗_3, and then we divide each vector by its length to obtain an orthonormal basis u⃗_1, u⃗_2, u⃗_3.

We start with v⃗_1 = 1, and we obtain v⃗_2 via

v⃗_2 = x − (⟨v⃗_1, x⟩ / ⟨v⃗_1, v⃗_1⟩) v⃗_1 = x.

Similarly, we have

v⃗_3 = x² − (⟨v⃗_1, x²⟩ / ⟨v⃗_1, v⃗_1⟩) v⃗_1 − (⟨v⃗_2, x²⟩ / ⟨v⃗_2, v⃗_2⟩) v⃗_2 = x² − 1/3.

Therefore, u⃗_1 = 1/√2, u⃗_2 = (√3/√2) x, and u⃗_3 = (3√5 / (2√2)) (x² − 1/3). □
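The Gram–Schmidt computation above can be reproduced exactly with rational arithmetic. The sketch below (not from the book; the function names are our own) represents polynomials by coefficient lists and evaluates the inner product ∫_{−1}^{1} p(x) q(x) dx in closed form.

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral from -1 to 1 of p(x) q(x) dx, computed exactly;
    p[k] is the coefficient of x^k."""
    total = Fraction(0)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if (i + j) % 2 == 0:          # odd powers integrate to 0 on [-1, 1]
                total += Fraction(2, i + j + 1) * a * b
    return total

def gram_schmidt(vectors):
    """Orthogonalize a list of polynomials (coefficient lists of equal length)."""
    basis = []
    for v in vectors:
        w = [Fraction(a) for a in v]
        for u in basis:
            c = inner(v, u) / inner(u, u)           # projection coefficient
            w = [wi - c * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

v1, v2, v3 = gram_schmidt([[1, 0, 0], [0, 1, 0], [0, 0, 1]])   # 1, x, x^2
assert v2 == [0, 1, 0]                        # x is already orthogonal to 1
assert v3 == [Fraction(-1, 3), 0, 1]          # x^2 - 1/3
assert inner(v1, v3) == 0 and inner(v2, v3) == 0
```

Normalizing by the square roots of inner(v1, v1) = 2, inner(v2, v2) = 2/3, and inner(v3, v3) = 8/45 recovers the orthonormal basis in the solution.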
⟨v⃗, w⃗⟩ = ⟨∑_{j=1}^{n} a_j u⃗_j , ∑_{k=1}^{n} b_k u⃗_k⟩ = ∑_{j=1}^{n} ∑_{k=1}^{n} a_j b_k ⟨u⃗_j, u⃗_k⟩ = ∑_{j=1}^{n} a_j b_j .
An orthonormal basis u⃗_1, …, u⃗_n for a vector space is a basis which has a certain relationship with itself, namely ⟨u⃗_j, u⃗_k⟩ = δ_{jk}. It turns out that two different bases for a given vector space can have this relationship with each other. For example, if u⃗_1 = ⟨1, 1, 0⟩, u⃗_2 = ⟨−1, 1, 1⟩, u⃗_3 = ⟨1, 0, −1⟩, v⃗_1 = ⟨1, 0, 1⟩, v⃗_2 = ⟨−1, 1, −1⟩, and v⃗_3 = ⟨−1, 1, −2⟩, then u⃗_j ⋅ v⃗_k = δ_{jk}. This situation happens often enough that we give it a name.
A.4. Problems
A.1. Suppose V is a finite-dimensional vector space over a field F .
Show that if v⃗1 , . . . , v⃗k ∈ V are linearly independent, then there
exist vectors v⃗k+1 , . . . , v⃗n for which v⃗1 , . . . , v⃗n is a basis for V .
A.2. Suppose V is a finite-dimensional vector space over a field F
and W is a subspace of V . Show W is finite-dimensional and
dim W ≤ dim V .
A.3. Suppose V is a finite-dimensional vector space over a field
F . Show that if v⃗1 , . . . , v⃗n span V , then there is a subset of
v⃗1 , . . . , v⃗n which is a basis for V .
A.4. Prove Proposition A.12.
A.5. Suppose V is a vector space over a field F and that W1 and W2
are subspaces of V . Prove or disprove: W1 ∪ W2 is a subspace
of V .
A.6. Suppose V is a vector space over a field F and that W1 and W2
are subspaces of V . Prove or disprove: W1 ∩ W2 is a subspace
of V .
A.7. Suppose V and W are vector spaces over a field F and f : V → W is a linear transformation. The kernel of f, written ker f, is the set of v⃗ ∈ V with f(v⃗) = 0⃗. The image of f, written Im f, is the set of w⃗ ∈ W for which there exists v⃗ ∈ V with f(v⃗) = w⃗. Show ker f is a subspace of V and Im f is a subspace of W.
⟨v⃗, w⃗⟩ = ∑_{j=1}^{n} a_j c_j .
Show conditions (1), (2), and (3) of Definition A.15 hold for
⟨ , ⟩.
A.13. Let V = Q², v⃗_1 = ⟨1, 0⟩, v⃗_2 = ⟨0, 1⟩, w⃗_1 = ⟨1, 1⟩, and w⃗_2 = ⟨1, −1⟩. Define a function ⟨ , ⟩ : V × V → Q as in Problem A.12.
(a) Show that for any v⃗ = ⟨a, b⟩ ∈ Q², we have

⟨v⃗, v⃗⟩ = (1/2) a² + ab − (1/2) b².

(b) Show ⟨ , ⟩ satisfies neither condition (4) nor condition (5) of Definition A.15.
A.14. Let V be the vector space of continuous functions from the closed interval [−π, π] to R, and define ⟨ , ⟩ : V × V → R by

⟨f, g⟩ = (1/π) ∫_{−π}^{π} f(x) g(x) dx.

(a) Show ⟨ , ⟩ is an inner product on V.
Appendix B

Partitions
writing each of its distinct parts once, with the multiplicity of each part in that part's exponent. In this notation we write the partition (5, 5, 5, 4, 3, 3, 3, 3, 3, 1, 1, 1, 1) as (5^3, 4^1, 3^5, 1^4).
It is often useful to think about partitions geometrically. For this
we generally use their Ferrers diagrams, which some call Young dia-
grams. There are several competing conventions; we use the French
convention. That is, for us the Ferrers diagram of a partition λ is a
stack of rows of 1 × 1 boxes in which each row is left-justified, the bot-
tom row has λ1 boxes, the next row up has λ2 boxes, and in general
the jth row has λj boxes. We number the rows in our Ferrers diagrams
from the bottom, so the first row is always the bottom row. In Figure
B.1 we have the Ferrers diagram of the partition (5, 3, 3, 3, 2, 1, 1).
n 0 1 2 3 4 5 6 7 8
p(n) 1 1 2 3 5 7 11 15 22
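The values of p(n) in the table can be generated by a standard dynamic program over the largest allowed part; the sketch below is not from the book.

```python
def partition_counts(n_max):
    """p(n) for 0 <= n <= n_max.  After processing part sizes 1..k,
    p[total] counts the partitions of total into parts of size at most k."""
    p = [1] + [0] * n_max          # p[0] = 1: the empty partition
    for part in range(1, n_max + 1):
        for total in range(part, n_max + 1):
            p[total] += p[total - part]
    return p

assert partition_counts(8) == [1, 1, 2, 3, 5, 7, 11, 15, 22]
```

This is the coefficient extraction for the product ∏_{j ≥ 1} 1/(1 − x^j), one part size at a time.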
B.2. Problems
B.1. Find an expression, as an infinite product, for the generating
function for partitions using only odd parts.
B.2. For which partitions is the infinite product ∏_{j=1}^{∞} (1 + x^j + x^{3j}) the generating function?
Appendix C
Permutations
det(A) = A11 A22 A33 − A11 A23 A32 − A12 A21 A33
+ A12 A23 A31 + A13 A21 A32 − A13 A22 A31 .
This determinant has 6 = 3! terms, and we can turn each term into a
different permutation in two-line notation by writing the subscripts
of its factors as the columns in our permutation. Alternatively, if we
view a permutation π as a bijection from [n] to [n], then each term in
our determinant has the form ±A1π(1) A2π(2) A3π(3) for some π ∈ S3 .
For example, the term A12 A23 A31 corresponds to the permutation
231. Using a similar computation, we can see that the same pattern
holds for the determinant of a general 4 × 4 matrix A, so we can
completely describe these determinants in terms of permutations if
we can describe their coefficients in terms of permutations. To do
this, we recall the sign of a permutation.
(C.2) det A = ∑_{π ∈ S_n} sgn(π) ∏_{j=1}^{n} A_{j,π(j)} .

(C.3) D(A) = ∑_{π ∈ S_n} sgn(π) ∏_{j=1}^{n} A_{j,π(j)} ;

grouping the terms of (C.3) according to the value of π(1), we have

D(A) = ∑_{k=1}^{n} A_{1k} ∑_{π ∈ S_n : π(1) = k} sgn(π) ∏_{j=2}^{n} A_{j,π(j)} .
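Formula (C.2) translates directly into code. The sketch below (not from the book) computes sgn(π) from the inversion count and sums over all permutations; it runs in O(n · n!) time, so it is meant only to illustrate the formula.

```python
from itertools import permutations
from math import prod

def sgn(perm):
    """Sign of a permutation of 0..n-1, via its inversion count."""
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(A):
    """Determinant by the permutation expansion (C.2)."""
    n = len(A)
    return sum(sgn(p) * prod(A[j][p[j]] for j in range(n))
               for p in permutations(range(n)))

assert det([[1, 2], [3, 4]]) == -2
assert det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]) == 25
```

The inner sum in the cofactor-style regrouping above corresponds to fixing p[0] = k in this loop.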
C.3. Problems
C.1. For each n ≥ 1, set σj = (j, j + 1) for 1 ≤ j ≤ n − 1. Show that
for all j with 1 ≤ j ≤ n − 2, we have σj2 = 1 and σj+1 σj σj+1 =
σj σj+1 σj = (j, j + 2). These identities are known as the braid
relations.
C.2. The cycle type of a permutation π ∈ Sn is the partition λ ⊢ n
such that in cycle notation π consists of a λ1 -cycle, a λ2 -cycle,
etc. For example, if π ∈ S10 is given by π = (47)(5132)(68),
then the cycle type of π is (4, 2, 2, 1, 1). How many permuta-
tions in S11 have cycle type (3, 3, 3, 2)?
C.3. List, in one-line notation, all 18 permutations π ∈ S6 with cycle
type (4, 2) which also have π(1) = 3.
C.4. For each partition λ ⊢ n, let cλ be the number of permutations
in Sn with cycle type λ. Then there exists a constant zλ such
that cλ zλ = n!. Find and prove a formula for zλ in terms of the
multiplicities nj of the parts of λ. The numbers zλ will appear
in Proposition 7.14.
C.5. Fill in the blank, and prove the resulting statement.
For all n ≥ 0, a permutation π ∈ Sn is an invo-
lution (that is, satisfies π = π −1 ) if and only if
the cycle type of π .
C.6. For each n ≥ 0, let in be the number of involutions in Sn . Show
that for all n ≥ 2, we have in = in−1 + (n − 1)in−2 .
C.7. For each n ≥ 2, show that the number of even permutations in
Sn is equal to the number of odd permutations in Sn .
C.8. For each k with 0 ≤ k ≤ (n choose 2), find a bijection between the set of permutations π ∈ S_n with inv(π) = k and the set of permutations in S_n with inv(π) = (n choose 2) − k.
C.9. Find and prove a relationship between inv(π) and inv(π −1 ) for
π ∈ Sn .
C.10. Show that for all π, σ ∈ Sn , we have sgn(πσ) = sgn(π) sgn(σ).
C.11. For each n ≥ 2, let Bn be the set of permutations in Sn in
which 1 and 2 are in different cycles, and let Cn be the set
of permutations in Sn in which 1 and 2 are in the same cycle.
Index

number
  Kostka, 87–89, 113–114, 116, 182, 189, 192, 207, 253–255
  skew Kostka, 121–122, 252–253, 255
  Stirling of the first kind, 60–64
  Stirling of the second kind, 60, 62–64
order
  dominance, 49
  lexicographic, 30–32, 87–89
partition
  integer, 323–325
  reverse plane, 137–138, 140–142, 220
  set, 60
path
  bumping, 215–218
permutation, 327–333
  generalized, 197–199, 214, 226, 232
polynomial
  alternating, 89–96, 115
  chromatic, 145–147, 153–154
  complete homogeneous symmetric, 39–40, 53–54, 57–58, 62–63, 69, 71–72, 77
  dual stable Grothendieck, 139–144
  elementary symmetric, 24–27, 53–54, 57–58, 62–63, 69–70, 72, 76–77
sequence
  bumping, 215–218
strip
  border, 258
  horizontal, 249, 260–265
  vertical, 249, 260–265
subsequence
  longest increasing, 243, 245
tableau
  border strip, 266–269
  elegant, 136–137, 220–223
  semistandard, 78–81, 120, 168–170, 176–177, 187, 209–231, 251, 254, 272–278
  semistandard skew, 120, 124
  set-valued semistandard, 129–130, 132–133
  standard, 244–245
tournament, 114–115
tree, 146, 152
word
  Littlewood–Richardson, 301–303
  reading, 99–100
Selected Published Titles in This Series
91 Eric S. Egge, An Introduction to Symmetric Functions and Their
Combinatorics, 2019
90 Nicholas A. Scoville, Discrete Morse Theory, 2019
89 Martin Hils and François Loeser, A First Journey through Logic, 2019
88 M. Ram Murty and Brandon Fodden, Hilbert’s Tenth Problem, 2019
87 Matthew Katz and Jan Reimann, An Introduction to Ramsey Theory,
2018
86 Peter Frankl and Norihide Tokushige, Extremal Problems for Finite
Sets, 2018
85 Joel H. Shapiro, Volterra Adventures, 2018
84 Paul Pollack, A Conversational Introduction to Algebraic Number
Theory, 2017
83 Thomas R. Shemanske, Modern Cryptography and Elliptic Curves, 2017
82 A. R. Wadsworth, Problems in Abstract Algebra, 2017
81 Vaughn Climenhaga and Anatole Katok, From Groups to Geometry
and Back, 2017
80 Matt DeVos and Deborah A. Kent, Game Theory, 2016
79 Kristopher Tapp, Matrix Groups for Undergraduates, Second Edition,
2016
78 Gail S. Nelson, A User-Friendly Introduction to Lebesgue Measure and
Integration, 2015
77 Wolfgang Kühnel, Differential Geometry: Curves — Surfaces —
Manifolds, Third Edition, 2015
76 John Roe, Winding Around, 2015
75 Ida Kantor, Jiří Matoušek, and Robert Šámal, Mathematics++,
2015
74 Mohamed Elhamdadi and Sam Nelson, Quandles, 2015
73 Bruce M. Landman and Aaron Robertson, Ramsey Theory on the
Integers, Second Edition, 2014
72 Mark Kot, A First Course in the Calculus of Variations, 2014
71 Joel Spencer, Asymptopia, 2014
70 Lasse Rempe-Gillen and Rebecca Waldecker, Primality Testing for
Beginners, 2014
69 Mark Levi, Classical Mechanics with Calculus of Variations and Optimal
Control, 2014
68 Samuel S. Wagstaff, Jr., The Joy of Factoring, 2013
67 Emily H. Moore and Harriet S. Pollatsek, Difference Sets, 2013
STML/91