I came across an exercise in Ahlfors’ Complex Analysis the other day that got me thinking. The exercise asks the reader to prove that the complex numbers $a$, $b$, and $c$ form the vertices of an equilateral triangle if and only if $a^2+b^2+c^2=ab+bc+ca.$ It struck me as quite a nice, simple, and symmetric condition.
My first instinct in proving this was to check whether the condition is translation invariant, so that one of the points can be moved to the origin. Indeed, if we subtract a constant $z$ from each of $a,b,c$, the equation becomes $(a-z)^2+(b-z)^2+(c-z)^2=(a-z)(b-z)+(b-z)(c-z)+(c-z)(a-z),$ which simplifies to the original equation after expanding each term. So we can assume without loss of generality that $a=0$, and we wish to show that $0$, $b$, and $c$ form the vertices of an equilateral triangle if and only if $b^2+c^2=bc$.
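As a quick numerical sanity check (not a substitute for the proof), here is a short Python sketch that tests the condition $a^2+b^2+c^2=ab+bc+ca$ on an equilateral triangle built by rotating a vertex by $120^\circ$ around an arbitrary center, and on a degenerate collinear triple. The helper name is my own invention:

```python
import cmath

def is_equilateral_condition(a, b, c, tol=1e-9):
    """Check whether a^2 + b^2 + c^2 = ab + bc + ca, up to floating-point error."""
    return abs(a*a + b*b + c*c - (a*b + b*c + c*a)) < tol

# Vertices of an equilateral triangle: take an arbitrary vertex offset v
# and rotate it by 120 degrees (multiply by a primitive cube root of unity)
# around an arbitrary center z0.
z0 = 2 + 1j                       # arbitrary center
w = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
v = 3 - 0.5j                      # arbitrary vertex offset
a, b, c = z0 + v, z0 + w*v, z0 + w*w*v

print(is_equilateral_condition(a, b, c))       # True: equilateral triangle
print(is_equilateral_condition(0, 1, 2 + 0j))  # False: collinear points
```

Rotating by $w=e^{2\pi i/3}$ is exactly what makes the check work: after translating the center to the origin, both sides of the equation reduce to multiples of $1+w+w^2=0$.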
I have exciting news today: The first ever joint paper by Monks, Monks, Monks, and Monks has been accepted for publication in Discrete Mathematics.
The four Monkses are my two brothers, my father, and myself. We worked together last summer on the notorious $3x+1$ conjecture (also known as the Collatz conjecture), an open problem so easy to state that a child can understand the question, yet one that has stumped mathematicians for over 70 years.
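For readers meeting the problem for the first time: the $3x+1$ map sends an even number $n$ to $n/2$ and an odd number $n$ to $3n+1$, and the conjecture asserts that every positive integer eventually reaches $1$. A minimal Python sketch (the function name is mine):

```python
def collatz_steps(n):
    """Iterate the 3x+1 map (n -> n/2 if even, n -> 3n+1 if odd) until reaching 1.

    The Collatz conjecture asserts this loop terminates for every positive
    integer n; here we simply record the trajectory of a given start value.
    """
    trajectory = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trajectory.append(n)
    return trajectory

print(collatz_steps(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```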
Over the last few weeks I’ve been writing about several little gemstones that I have seen in symmetric function theory. But one of the main overarching beauties of the entire area is that there are at least five natural bases with which to express any symmetric function: the monomial ($m_\lambda$), elementary ($e_\lambda$), power sum ($p_\lambda$), complete homogeneous ($h_\lambda$), and Schur ($s_\lambda$) bases. As a quick reminder, here is an example of each, in three variables $x,y,z$:
$m_{(3,2,2)}=x^3y^2z^2+x^2y^3z^2+x^2y^2z^3$
$e_{(3,2,2)}=e_3e_2e_2=xyz(xy+yz+zx)^2$
$p_{(3,2,2)}=p_3p_2p_2=(x^3+y^3+z^3)(x^2+y^2+z^2)^2$
$h_{(2,1)}=h_2h_1=(x^2+y^2+z^2+xy+yz+zx)(x+y+z)$
$s_{(3,1)}=m_{(3,1)}+m_{(2,2)}+2m_{(2,1,1)}$
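As a sanity check on that last expansion, here is a short Python sketch (helper names are mine) that evaluates both sides at an arbitrary numeric point, computing $s_{(3,1)}$ via the standard bialternant (ratio-of-determinants) formula rather than by its combinatorial definition:

```python
from itertools import permutations

def m(exponents, xs):
    """Monomial symmetric polynomial: sum over distinct permutations of the exponent vector."""
    pad = list(exponents) + [0] * (len(xs) - len(exponents))
    total = 0.0
    for perm in set(permutations(pad)):
        term = 1.0
        for x, e in zip(xs, perm):
            term *= x ** e
        total += term
    return total

def s31(x, y, z):
    """Schur polynomial s_(3,1) in three variables via the bialternant formula:
    det of x_i^(lambda_j + n - j) with (lambda + delta) = (5, 2, 0), divided by
    the Vandermonde determinant (exponents (2, 1, 0))."""
    num = x**5 * (y**2 - z**2) - y**5 * (x**2 - z**2) + z**5 * (x**2 - y**2)
    den = x**2 * (y - z) - y**2 * (x - z) + z**2 * (x - y)
    return num / den

xs = (1.3, 0.7, 2.1)  # arbitrary test point
lhs = s31(*xs)
rhs = m((3, 1), xs) + m((2, 2), xs) + 2 * m((2, 1, 1), xs)
print(abs(lhs - rhs) < 1e-9)  # True
```

(The full expansion of $s_{(3,1)}$ also has a term $3m_{(1,1,1,1)}$, which vanishes in only three variables.)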
Since we can usually transition between the bases fairly easily, this gives us lots of flexibility in attacking problems involving symmetric functions; it’s sometimes just a matter of picking the right basis.
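As a tiny illustration of such a transition, Newton's identity $p_2 = e_1^2 - 2e_2$ rewrites a power sum in the elementary basis. The following Python sketch (helper names are mine) checks it numerically in three variables:

```python
# Newton's identity p_2 = e_1^2 - 2*e_2, checked at an arbitrary point:
# (x+y+z)^2 = (x^2+y^2+z^2) + 2(xy+yz+zx).
def e1(xs): return sum(xs)
def e2(xs): return sum(xs[i] * xs[j] for i in range(len(xs)) for j in range(i + 1, len(xs)))
def p2(xs): return sum(x * x for x in xs)

xs = [1.5, -0.4, 2.0]
print(abs(p2(xs) - (e1(xs)**2 - 2 * e2(xs))) < 1e-9)  # True
```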
There is a reason I’ve been building up the theory of symmetric functions in the last few posts, one gemstone at a time: all this theory is needed for the proof of the beautiful Murnaghan-Nakayama rule for computing the characters of the symmetric group.
What do symmetric functions have to do with representation theory? The answer lies in the Frobenius map, the keystone that completes the bridge between these two worlds.
In the last few posts (see here and here), I’ve been talking about various bases for the symmetric functions: the monomial symmetric functions $m_\lambda$, the elementary symmetric functions $e_\lambda$, the power sum symmetric functions $p_\lambda$, and the homogeneous symmetric functions $h_\lambda$. As some of you aptly pointed out in the comments, there is one more important basis to discuss: the Schur functions!
When I first came across the Schur functions, I had no idea why they were what they were, why every symmetric function can be expressed in terms of them, or why they were useful or interesting. I first saw them defined using a simple, but rather arbitrary-sounding, combinatorial approach: