Over the last few weeks I’ve been writing about several little gemstones that I have seen in symmetric function theory. But one of the main overarching beauties of the entire area is that there are at least five natural bases with which to express any symmetric function: the monomial ($m_\lambda$), elementary ($e_\lambda$), power sum ($p_\lambda$), complete homogeneous ($h_\lambda$), and Schur ($s_\lambda$) bases. As a quick reminder, here is an example of each, in three variables $x,y,z$:
$m_{(3,2,2)}=x^3y^2z^2+y^3x^2z^2+z^3y^2x^2$
$e_{(3,2,2)}=e_3e_2e_2=xyz(xy+yz+zx)^2$
$p_{(3,2,2)}=p_3p_2p_2=(x^3+y^3+z^3)(x^2+y^2+z^2)^2$
$h_{(2,1)}=h_2h_1=(x^2+y^2+z^2+xy+yz+zx)(x+y+z)$
$s_{(3,1)}=m_{(3,1)}+m_{(2,2)}+2m_{(2,1,1)}$
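That last expansion is easy to verify computationally. The following Python sketch (the helper names `m` and `s31` are my own, not from any library) builds $s_{(3,1)}$ in three variables by enumerating semistandard Young tableaux of shape $(3,1)$ with entries in $\{1,2,3\}$, and checks it against the stated monomial expansion:

```python
from itertools import permutations, product
from collections import Counter

# Polynomials in x, y, z encoded as {exponent_tuple: coefficient}.

def m(lam, n=3):
    """Monomial symmetric function m_lambda in n variables:
    one term for each distinct rearrangement of the parts of lambda."""
    lam = tuple(lam) + (0,) * (n - len(lam))
    return {e: 1 for e in set(permutations(lam))}

def s31(n=3):
    """Schur function s_{(3,1)} via semistandard tableaux of shape (3,1):
    a top row a <= b <= c, and a cell d below a with a < d."""
    out = Counter()
    for a, b, c, d in product(range(1, n + 1), repeat=4):
        if a <= b <= c and a < d:
            exp = [0] * n
            for v in (a, b, c, d):
                exp[v - 1] += 1       # record the content of the tableau
            out[tuple(exp)] += 1
    return dict(out)

# Check s_{(3,1)} = m_{(3,1)} + m_{(2,2)} + 2 m_{(2,1,1)} in three variables:
expected = Counter()
for lam, coeff in [((3, 1), 1), ((2, 2), 1), ((2, 1, 1), 2)]:
    for e, c in m(lam).items():
        expected[e] += coeff * c
assert s31() == dict(expected)
```

The coefficients appearing here are exactly the Kostka numbers $K_{(3,1),\mu}$, counting tableaux of shape $(3,1)$ and content $\mu$.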
Since we can usually transition between the bases fairly easily, this gives us lots of flexibility in attacking problems involving symmetric functions; it’s sometimes just a matter of picking the right basis.
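To illustrate what such a transition looks like, here is a throwaway numeric check (my own, not a library computation) of two classical change-of-basis identities, $p_2 = e_1^2 - 2e_2$ and $h_2 = e_1^2 - e_2$, at arbitrary sample values:

```python
# Sample values for the variables x, y, z:
xs = [2, 3, 7]

e1 = sum(xs)                                        # e_1 = x + y + z
e2 = sum(xs[i] * xs[j] for i in range(len(xs))
         for j in range(i + 1, len(xs)))            # e_2 = xy + yz + zx
p2 = sum(x ** 2 for x in xs)                        # p_2 = x^2 + y^2 + z^2
h2 = p2 + e2                                        # h_2 = p_2 + e_2

assert p2 == e1 ** 2 - 2 * e2
assert h2 == e1 ** 2 - e2
```

Any identity among symmetric functions can be spot-checked this way, since two symmetric polynomials of bounded degree agreeing at enough sample points must agree identically.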
There is a reason I’ve been building up the theory of symmetric functions in the last few posts, one gemstone at a time: all this theory is needed for the proof of the beautiful Murnaghan-Nakayama rule for computing the characters of the symmetric group.
What do symmetric functions have to do with representation theory? The answer lies in the Frobenius map, the keystone that completes the bridge between these two worlds.
In the last few posts (see here and here), I’ve been talking about various bases for the symmetric functions: the monomial symmetric functions $m_\lambda$, the elementary symmetric functions $e_\lambda$, the power sum symmetric functions $p_\lambda$, and the complete homogeneous symmetric functions $h_\lambda$. As some of you aptly pointed out in the comments, there is one more important basis to discuss: the Schur functions!
When I first came across the Schur functions, I had no idea why they were what they were, why every symmetric function can be expressed in terms of them, or why they were useful or interesting. I first saw them defined using a simple, but rather arbitrary-sounding, combinatorial approach:
Time for another gemstone from symmetric function theory! (I am studying for my Ph.D. qualifying exam at the moment, and as a consequence, the next several posts will feature yet more gemstones from symmetric function theory. You can refer back to this post for the basic definitions.)
Start with a polynomial $p(x)$ that factors as \[p(x)=(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n).\] The coefficients of $p(x)$ are symmetric functions in $\alpha_1,\ldots,\alpha_n$; in fact, they are, up to sign, the elementary symmetric functions in $\alpha_1,\ldots,\alpha_n$.
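Concretely, the coefficient of $x^{n-k}$ in $p(x)$ is $(-1)^k e_k(\alpha_1,\ldots,\alpha_n)$. A short Python sketch (pure Python, with sample roots and helper names of my own choosing) verifies this by expanding the product directly:

```python
from itertools import combinations
from math import prod

roots = [1, 2, 3, 5]          # sample values for alpha_1, ..., alpha_n
n = len(roots)

# Expand p(x) = (x - r_1)...(x - r_n); coeffs[i] is the coefficient of x^i.
coeffs = [1]
for r in roots:
    new = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        new[i + 1] += c       # contribution of x * (c x^i)
        new[i] -= r * c       # contribution of -r * (c x^i)
    coeffs = new

def e(k):
    """Elementary symmetric function e_k evaluated at the roots:
    the sum over all k-element subsets of their products."""
    return sum(prod(s) for s in combinations(roots, k))

# The coefficient of x^{n-k} is (-1)^k e_k:
for k in range(n + 1):
    assert coeffs[n - k] == (-1) ** k * e(k)
```

For these sample roots the expansion is $p(x) = x^4 - 11x^3 + 41x^2 - 61x + 30$, matching $e_1 = 11$, $e_2 = 41$, $e_3 = 61$, $e_4 = 30$.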
Last week I posted about the Fundamental Theorem of Symmetric Function Theory. Zarathustra Brady pointed me to the following alternate proof in Serge Lang’s book Algebra. While not as direct or useful in terms of changing basis from the $e_\lambda$’s to the $m_\lambda$’s, it is a nice, clean inductive proof that I thought was worth sharing: