Time for another gemstone from symmetric function theory! (I am studying for my Ph.D. qualifying exam at the moment, and as a consequence, the next several posts will feature yet more gemstones from symmetric function theory. You can refer back to this post for the basic definitions.)
Start with a polynomial $p(x)$ that factors as \[p(x)=(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n).\] The coefficients of $p(x)$ are symmetric functions in the roots $\alpha_1,\ldots,\alpha_n$; in fact, up to sign, they are the elementary symmetric functions in $\alpha_1,\ldots,\alpha_n$.
In particular, if $e_i$ denotes $\sum_{j_1<\cdots<j_i} \alpha_{j_1}\alpha_{j_2}\cdots\alpha_{j_i}$, then \[p(x)=x^n-e_1x^{n-1}+e_2x^{n-2}-\cdots+(-1)^n e_n.\] (These relations between the roots and the coefficients are known as Vieta’s formulas.)
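As a quick sanity check (not part of the argument), here is a short Python sketch, assuming sympy is available, that expands the product for $n=3$ and compares coefficients:

```python
# Sanity check of Vieta's formulas for n = 3: expand (x-a)(x-b)(x-c)
# and compare against x^3 - e1 x^2 + e2 x - e3. Requires sympy.
from itertools import combinations
from math import prod
from sympy import symbols, expand

x, *alphas = symbols('x a b c')
n = len(alphas)

# e[i] = sum over all products of i distinct roots; e[0] = 1.
e = [sum(prod(c) for c in combinations(alphas, i)) for i in range(n + 1)]

lhs = expand(prod(x - a for a in alphas))
rhs = expand(sum((-1)**i * e[i] * x**(n - i) for i in range(n + 1)))
assert expand(lhs - rhs) == 0
```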
Since $p(\alpha_i)=0$ for all $i$, we can turn the equation above into a symmetric function identity by plugging in each of $\alpha_1,\ldots,\alpha_n$: \[\begin{eqnarray*} \alpha_1^n-e_1\alpha_1^{n-1}+e_2\alpha_1^{n-2}-\cdots+(-1)^n e_n&=&0 \\ \alpha_2^n-e_1\alpha_2^{n-1}+e_2\alpha_2^{n-2}-\cdots+(-1)^n e_n&=&0 \\ \vdots\hspace{3cm}& & \end{eqnarray*}\] and then summing these equations: \[(\alpha_1^n+\cdots+\alpha_n^n)-e_1(\alpha_1^{n-1}+\cdots+\alpha_n^{n-1})+\cdots+(-1)^n\cdot ne_n=0\] …and we have stumbled across another important basis of the symmetric functions, the power sum symmetric functions. Defining $p_i=\alpha_1^i+\cdots+\alpha_n^i$, every symmetric function in $\{\alpha_j\}$ can be uniquely expressed as a linear combination of products of these $p_i$’s. (For a partition $\lambda$ with $k$ parts, we write $p_\lambda=p_{\lambda_1}p_{\lambda_2}\cdots p_{\lambda_k}$.)
So, we have \[p_n-e_1p_{n-1}+e_2 p_{n-2}-\cdots+(-1)^n\cdot ne_n=0.\] This is called the $n$th Newton-Girard identity, and it gives us a recursive way of expressing the $p$’s in terms of the $e$’s.
Well, almost. We have so far only shown that this identity holds for exactly $n$ variables. However, by setting some of the $\alpha_i$’s to zero, we see that the identity also holds for any number of variables $k<n$. And if we have more than $n$ variables, we can compare coefficients of each monomial individually: since the identity is homogeneous of degree $n$, each monomial involves at most $n$ of the variables, so setting the remaining variables to zero reduces to the case we have already proven.
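Here is a quick computational check of the identity for $n=4$, again just a sketch assuming sympy:

```python
# Check the nth Newton-Girard identity for n = 4 in four variables.
from itertools import combinations
from math import prod
from sympy import symbols, expand

xs = symbols('x1:5')   # x1, x2, x3, x4
n = len(xs)

e = [sum(prod(c) for c in combinations(xs, i)) for i in range(n + 1)]
p = [sum(v**k for v in xs) for k in range(n + 1)]

# p_n - e_1 p_{n-1} + e_2 p_{n-2} - ... + (-1)^n * n * e_n
lhs = sum((-1)**i * e[i] * p[n - i] for i in range(n)) + (-1)**n * n * e[n]
assert expand(lhs) == 0
```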
So now we have it! The Newton-Girard identities allow us to recursively solve for $p_n$ in terms of the $e_i$’s. Wikipedia does this nicely and explains the computation, and the result is: \[p_n=\det\left(\begin{array}{cccccc} e_1 & 1 & 0 & 0 &\cdots & 0 \\ 2e_2 & e_1 & 1 & 0 & \cdots & 0 \\ 3e_3 & e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ (n-1)e_{n-1} & e_{n-2} & e_{n-3} & e_{n-4} & \ddots & 1 \\ ne_n & e_{n-1} & e_{n-2} & e_{n-3} & \cdots & e_1 \end{array}\right)\] For instance, this gives us $p_2=e_1^2-2e_2$, which is true: \[x^2+y^2+z^2=(x+y+z)^2-2(xy+yz+zx).\] The Wikipedia page derives a similar identity for expressing the $e$’s in terms of the $p$’s. It also does the same for expressing the complete homogeneous symmetric functions $h_n$ in terms of the $p$’s and vice versa. However, it does not explicitly express the $e$’s in terms of the $h$’s or vice versa. In the name of completeness of the Internet, let’s treat these here.
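Before moving on, one more sketch (sympy’s Matrix assumed) checking the determinant formula above for $n=3$:

```python
# Check the determinant formula for p_n at n = 3 in three variables.
from itertools import combinations
from math import prod
from sympy import symbols, expand, Matrix

xs = symbols('x y z')
n = len(xs)
e = [sum(prod(c) for c in combinations(xs, i)) for i in range(n + 1)]

# Entry (i, j), 0-indexed: the first column holds (i+1) * e_{i+1},
# the superdiagonal holds 1's, and the rest are shifted e's (or 0).
M = Matrix(n, n, lambda i, j:
           (i + 1) * e[i + 1] if j == 0
           else 1 if j == i + 1
           else e[i - j + 1] if j <= i
           else 0)

p_n = sum(v**n for v in xs)
assert expand(M.det() - p_n) == 0
```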
Fix some number of variables $x_1,\ldots,x_n$. For any $d$, define $h_d$ to be the sum of all monomials of degree $d$ in $x_1,\ldots,x_n$. This is clearly symmetric, and we define \[h_\lambda = h_{\lambda_1}h_{\lambda_2}\cdots h_{\lambda_k}\] for any partition $\lambda$, as we did for the elementary symmetric functions last week. The $h_\lambda$’s, called the complete homogeneous symmetric functions, form a basis for the space of symmetric functions.
It’s a fun exercise to derive the following generating function identities for the $h_d$’s and $e_d$’s (writing $d$ for the degree, since $n$ is already the number of variables): \[H(t)=\sum_{d\ge 0} h_d t^d = \prod_{i=1}^n \frac{1}{1-x_i t}\] \[E(t)=\sum_{d\ge 0} e_d t^d = \prod_{i=1}^n (1+x_i t)\] (The first requires expanding out each factor as a geometric series and then comparing coefficients. Try it!)
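If you’d like to see the first identity in action before proving it, here is a sketch (sympy assumed) that Taylor-expands $H(t)$ in three variables and compares each coefficient of $t^d$ with $h_d$ computed directly:

```python
# Expand prod 1/(1 - x_i t) as a power series in t and compare each
# coefficient with h_d, the sum of all monomials of degree d.
from itertools import combinations_with_replacement
from math import prod
from sympy import symbols, simplify

t = symbols('t')
xs = symbols('x y z')
N = 4  # check degrees 0 through 3

H = prod(1/(1 - xi * t) for xi in xs).series(t, 0, N).removeO().expand()

for d in range(N):
    h_d = sum(prod(c) for c in combinations_with_replacement(xs, d))
    assert simplify(H.coeff(t, d) - h_d) == 0
```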
From these, we notice that $H(t)E(-t)=1$, and by multiplying the generating functions together and comparing coefficients, we find for each $n\ge 1$ the identity \[h_n=h_{n-1}e_1-h_{n-2}e_2+\cdots+(-1)^{n-1}e_n.\] Just as before, this gives us a recursion for $h_n$ in terms of the $e_i$’s. With a bit of straightforward algebra involving Cramer’s Rule, we can solve for $h_n$:
\[h_n=\det\left(\begin{array}{cccccc} e_1 & 1 & 0 & 0 &\cdots & 0 \\ e_2 & e_1 & 1 & 0 & \cdots & 0 \\ e_3 & e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ e_{n-1} & e_{n-2} & e_{n-3} & e_{n-4} & \ddots & 1 \\ e_n & e_{n-1} & e_{n-2} & e_{n-3} & \cdots & e_1 \end{array}\right)\] We can also use the same equations to solve for $e_n$ in terms of the $h_i$’s:
\[e_n=\det \left(\begin{array}{cccccc} h_1 & 1 & 0 & 0 &\cdots & 0 \\ h_2 & h_1 & 1 & 0 & \cdots & 0 \\ h_3 & h_2 & h_1 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ h_{n-1} & h_{n-2} & h_{n-3} & h_{n-4} & \ddots & 1 \\ h_n & h_{n-1} & h_{n-2} & h_{n-3} & \cdots & h_1 \end{array}\right)\]
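To close the loop, here is one last sketch (sympy again; the helper name `toeplitz` is just something I made up) verifying both determinant formulas for $n=3$:

```python
# Verify h_n = det(...) in the e's and e_n = det(...) in the h's, n = 3.
from itertools import combinations, combinations_with_replacement
from math import prod
from sympy import symbols, expand, Matrix

xs = symbols('x y z')
n = len(xs)
e = [sum(prod(c) for c in combinations(xs, i)) for i in range(n + 1)]
h = [sum(prod(c) for c in combinations_with_replacement(xs, i))
     for i in range(n + 1)]

def toeplitz(f):
    # f_1, ..., f_n down the first column, 1's on the superdiagonal,
    # shifted copies of the f's filling the lower triangle.
    return Matrix(n, n, lambda i, j:
                  1 if j == i + 1
                  else f[i - j + 1] if j <= i
                  else 0)

assert expand(toeplitz(e).det() - h[n]) == 0
assert expand(toeplitz(h).det() - e[n]) == 0
```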
I find these two formulas more aesthetically appealing than the standard Newton-Girard formulas relating the $p_n$’s and $e_n$’s, since they lack the pesky integer coefficients that appear in the first column of the matrix in the $p$-to-$e$ case. While perhaps not as mainstream, they are gemstones in their own right, and deserve a day to shine.