# Lie's theorem

In mathematics, specifically the theory of Lie algebras, **Lie's theorem** states that,[1] over an algebraically closed field of characteristic zero, if $\pi: \mathfrak{g} \to \mathfrak{gl}(V)$ is a finite-dimensional representation of a solvable Lie algebra, then there is a flag $V = V_0 \supset V_1 \supset \cdots \supset V_n = 0$ of invariant subspaces of $\pi(\mathfrak{g})$ with $\operatorname{codim} V_i = i$, meaning that $\pi(X)(V_i) \subseteq V_i$ for each $X \in \mathfrak{g}$ and each $i$.

Put another way, the theorem says there is a basis for $V$ such that all linear transformations in $\pi(\mathfrak{g})$ are represented by upper triangular matrices.[2] This generalizes the result of Frobenius that commuting matrices are simultaneously upper triangularizable, since commuting matrices generate an abelian Lie algebra, which is a fortiori solvable.
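The Frobenius result can be checked numerically. The sketch below (the specific matrices are our own choice, not from the source) builds a pair of commuting matrices and verifies the key linear-algebra fact behind simultaneous triangularization: an eigenvector of $A$ for a *simple* eigenvalue is automatically an eigenvector of every matrix commuting with $A$.

```python
import numpy as np

# Hypothetical pair of commuting matrices: B is a polynomial in A,
# so A and B commute automatically.
P = np.array([[1.0, 2.0], [3.0, 5.0]])           # invertible change of basis
T = np.array([[1.0, 1.0], [0.0, 2.0]])           # upper triangular
A = P @ T @ np.linalg.inv(P)                     # A has simple eigenvalues 1, 2
B = A @ A - 3.0 * A                              # commutes with A

assert np.allclose(A @ B, B @ A)

# If A v = lam v with lam a simple eigenvalue, then A(Bv) = B(Av) = lam (Bv),
# so Bv lies in the one-dimensional lam-eigenspace of A, i.e. Bv is a
# multiple of v.
_, eigvecs = np.linalg.eig(A)
v = eigvecs[:, 0]
ratio = (B @ v) / v            # constant vector iff v is an eigenvector of B
assert np.allclose(ratio, ratio[0])
```

Iterating this observation on quotient spaces is exactly how a common flag, and hence a simultaneous triangularization, is produced.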

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see § Consequences). Also, to each flag in a finite-dimensional vector space $V$ there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that $\pi(\mathfrak{g})$ is contained in some Borel subalgebra of $\mathfrak{gl}(V)$.[1]

## Counter-example

For algebraically closed fields of characteristic $p > 0$, Lie's theorem holds provided the dimension of the representation is less than $p$ (see the proof below), but it can fail for representations of dimension $p$. An example is given by the 3-dimensional nilpotent Lie algebra spanned by $1$, $x$, and $d/dx$ acting on the $p$-dimensional vector space $k[x]/(x^p)$, which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the $p$-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
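The $p = 3$ case of this counter-example can be verified directly. In the basis $1, x, x^2$ of $k[x]/(x^3)$, multiplication by $x$ and $d/dx$ become the matrices below (a small sketch; the variable names are ours). One checks that $[d/dx,\, x] = 1$ modulo $3$, and that there is no common eigenvector: both operators are nilpotent, so any eigenvector has eigenvalue $0$, but their kernels are the distinct lines spanned by $x^2$ and by $1$.

```python
import numpy as np

p = 3
# Operators on k[x]/(x^p) over GF(p), in the basis 1, x, x^2.
X = np.array([[0, 0, 0],    # multiplication by x: 1 -> x, x -> x^2, x^2 -> 0
              [1, 0, 0],
              [0, 1, 0]])
D = np.array([[0, 1, 0],    # d/dx: 1 -> 0, x -> 1, x^2 -> 2x
              [0, 0, 2],
              [0, 0, 0]])

# [d/dx, x] = identity (mod p), so 1, X, D span a Heisenberg Lie algebra.
comm = (D @ X - X @ D) % p
assert (comm == np.eye(3, dtype=int)).all()

# Both operators are nilpotent, so an eigenvector must lie in the kernel.
assert (np.linalg.matrix_power(X, p) % p == 0).all()
assert (np.linalg.matrix_power(D, p) % p == 0).all()

# ker X is spanned by x^2 and ker D by 1; the two lines are distinct,
# so no vector is an eigenvector for the whole algebra.
assert (X @ np.array([0, 0, 1]) % p == 0).all()
assert (D @ np.array([1, 0, 0]) % p == 0).all()
```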

## Proof

The proof is by induction on the dimension of $\mathfrak{g}$ and consists of several steps. (Note: the structure of the proof is very similar to that of the proof of Engel's theorem.) The base case is trivial, so we assume the dimension of $\mathfrak{g}$ is positive. We also assume $V$ is not zero. For simplicity, we write $X \cdot v = \pi(X)(v)$.

Step 1: Observe that the theorem is equivalent to the statement:[3] there exists a vector in $V$ that is an eigenvector for each linear transformation in $\pi(\mathfrak{g})$.

Indeed, the theorem says in particular that a nonzero vector spanning $V_{n-1}$ is a common eigenvector for all the linear transformations in $\pi(\mathfrak{g})$. Conversely, if $v$ is a common eigenvector, take $V_{n-1}$ to be its span; then $\pi(\mathfrak{g})$ admits a common eigenvector in the quotient $V/V_{n-1}$, and one repeats the argument.

Step 2: Find an ideal $\mathfrak{h}$ of codimension one in $\mathfrak{g}$.

Let $D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$ be the derived algebra. Since $\mathfrak{g}$ is solvable and has positive dimension, $D\mathfrak{g} \neq \mathfrak{g}$, and so the quotient $\mathfrak{g}/D\mathfrak{g}$ is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one; by the ideal correspondence, this corresponds to an ideal $\mathfrak{h}$ of codimension one in $\mathfrak{g}$.

Step 3: There exists some linear functional $\lambda$ in $\mathfrak{h}^*$ such that $V_\lambda = \{ v \in V \mid X \cdot v = \lambda(X) v \text{ for all } X \in \mathfrak{h} \}$ is nonzero. This follows from the inductive hypothesis applied to $\mathfrak{h}$ (it is easy to check that the eigenvalues determine a linear functional).

Step 4: $V_\lambda$ is a $\mathfrak{g}$-invariant subspace. (Note this step proves a general fact and does not involve solvability.)

Let $Y \in \mathfrak{g}$ and $v \in V_\lambda$; we need to prove $Y \cdot v \in V_\lambda$. If $v = 0$ this is obvious, so assume $v \neq 0$ and set recursively $v_0 = v$, $v_{i+1} = Y \cdot v_i$. Let $U = \operatorname{span}\{ v_i \mid i \geq 0 \}$ and let $\ell \in \mathbb{N}_0$ be the largest integer such that $v_0, \ldots, v_\ell$ are linearly independent. We first prove that these vectors generate $U$, so that $\alpha = (v_0, \ldots, v_\ell)$ is a basis of $U$. Indeed, assume by contradiction that this is not the case, and let $m \in \mathbb{N}_0$ be the smallest integer such that $v_m \notin \langle v_0, \ldots, v_\ell \rangle$; then clearly $m \geq \ell + 1$. Since $v_0, \ldots, v_{\ell+1}$ are linearly dependent, $v_{\ell+1}$ is a linear combination of $v_0, \ldots, v_\ell$. Applying the map $Y^{m-\ell-1}$, it follows that $v_m$ is a linear combination of $v_{m-\ell-1}, \ldots, v_{m-1}$. Since by the minimality of $m$ each of these vectors is a linear combination of $v_0, \ldots, v_\ell$, so is $v_m$, and we get the desired contradiction.

Next we prove by induction that for every $n \in \mathbb{N}_0$ and $X \in \mathfrak{h}$ there exist elements $a_{0,n,X}, \ldots, a_{n,n,X}$ of the base field such that $a_{n,n,X} = \lambda(X)$ and

$$X \cdot v_n = \sum_{i=0}^n a_{i,n,X} v_i.$$

The $n = 0$ case is straightforward, since $X \cdot v_0 = \lambda(X) v_0$. Now assume that we have proved the claim for some $n \in \mathbb{N}_0$ and all elements of $\mathfrak{h}$, and let $X \in \mathfrak{h}$. Since $\mathfrak{h}$ is an ideal, $[X, Y] \in \mathfrak{h}$, and thus

$$X \cdot v_{n+1} = Y \cdot (X \cdot v_n) + [X, Y] \cdot v_n = Y \cdot \sum_{i=0}^n a_{i,n,X} v_i + \sum_{i=0}^n a_{i,n,[X,Y]} v_i = a_{0,n,[X,Y]} v_0 + \sum_{i=1}^n \left( a_{i-1,n,X} + a_{i,n,[X,Y]} \right) v_i + \lambda(X) v_{n+1},$$

and the induction step follows.

This implies that for every $X \in \mathfrak{h}$ the subspace $U$ is an invariant subspace of $X$, and the matrix of the restricted map $\pi(X)|_U$ in the basis $\alpha$ is upper triangular with all diagonal elements equal to $\lambda(X)$; hence $\operatorname{tr}(\pi(X)|_U) = \dim(U)\,\lambda(X)$. Applying this with $[X, Y] \in \mathfrak{h}$ in place of $X$ gives $\operatorname{tr}(\pi([X,Y])|_U) = \dim(U)\,\lambda([X,Y])$. On the other hand, $U$ is also obviously an invariant subspace of $Y$, and so

$$\operatorname{tr}(\pi([X,Y])|_U) = \operatorname{tr}([\pi(X), \pi(Y)]|_U) = \operatorname{tr}([\pi(X)|_U, \pi(Y)|_U]) = 0$$

since commutators have zero trace. Thus $\dim(U)\,\lambda([X,Y]) = 0$. Since $\dim(U) > 0$ is invertible in the base field (because of the assumption on its characteristic), $\lambda([X,Y]) = 0$, and so

$$X \cdot (Y \cdot v) = Y \cdot (X \cdot v) + [X, Y] \cdot v = Y \cdot (\lambda(X) v) + \lambda([X,Y]) v = \lambda(X)(Y \cdot v),$$

showing $Y \cdot v \in V_\lambda$.
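The trace identity used at the end of Step 4, that a commutator of two endomorphisms of the same space has trace zero, follows from $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ and can be sanity-checked numerically (arbitrary matrices of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# tr(AB) = tr(BA), hence tr([A, B]) = tr(AB - BA) = 0.
commutator = A @ B - B @ A
assert abs(np.trace(commutator)) < 1e-10
```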

Step 5: Finish up the proof by finding a common eigenvector.

Write $\mathfrak{g} = \mathfrak{h} + L$ where $L$ is a one-dimensional vector subspace. Since the base field is algebraically closed, there exists an eigenvector in $V_\lambda$ for some (thus every) nonzero element of $L$. Since that vector is also an eigenvector for each element of $\mathfrak{h}$, the proof is complete. $\square$

## Consequences

The theorem applies in particular to the adjoint representation $\operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g})$ of a (finite-dimensional) solvable Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic zero; thus, one can choose a basis of $\mathfrak{g}$ with respect to which $\operatorname{ad}(\mathfrak{g})$ consists of upper triangular matrices. It follows easily that for each $x, y \in \mathfrak{g}$, $\operatorname{ad}([x,y]) = [\operatorname{ad}(x), \operatorname{ad}(y)]$ has a diagonal consisting of zeros; i.e., $\operatorname{ad}([x,y])$ is a strictly upper triangular matrix. This implies that $[\mathfrak{g}, \mathfrak{g}]$ is a nilpotent Lie algebra. Moreover, if the base field is not algebraically closed, solvability and nilpotency of a Lie algebra are unaffected by extending the base field to its algebraic closure. Hence, one concludes the statement (the other implication being obvious):[4]

A finite-dimensional Lie algebra $\mathfrak{g}$ over a field of characteristic zero is solvable if and only if the derived algebra $D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}]$ is nilpotent.
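A concrete instance: the two-dimensional non-abelian Lie algebra, with basis $x, y$ and bracket $[x, y] = y$, is solvable, and in the ordered basis $(y, x)$ its adjoint operators are upper triangular while $\operatorname{ad}$ of any bracket is strictly upper triangular, hence nilpotent. A small sketch (matrices written in the basis $(y, x)$, an example of our own):

```python
import numpy as np

# Adjoint representation of the solvable Lie algebra with [x, y] = y,
# in the ordered basis (y, x).
ad_x = np.array([[1, 0],
                 [0, 0]])    # ad x: y -> y, x -> 0  (upper triangular)
ad_y = np.array([[0, -1],
                 [0, 0]])    # ad y: y -> 0, x -> -y (strictly upper triangular)

# ad is a Lie algebra homomorphism: [ad x, ad y] = ad [x, y] = ad y ...
comm = ad_x @ ad_y - ad_y @ ad_x
assert (comm == ad_y).all()

# ... and ad y is strictly upper triangular, hence nilpotent, reflecting
# that the derived algebra span(y) is nilpotent.
assert (ad_y @ ad_y == 0).all()
```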

Lie's theorem also establishes one direction in Cartan's criterion for solvability: if $V$ is a finite-dimensional vector space over a field of characteristic zero and $\mathfrak{g} \subseteq \mathfrak{gl}(V)$ a Lie subalgebra, then $\mathfrak{g}$ is solvable if and only if $\operatorname{tr}(XY) = 0$ for every $X \in \mathfrak{g}$ and $Y \in [\mathfrak{g}, \mathfrak{g}]$.[5] Indeed, as above, after extending the base field the implication $\Rightarrow$ is seen easily. (The converse is more difficult to prove.)

Lie's theorem (for various $V$) is equivalent to the statement:[6]

For a solvable Lie algebra $\mathfrak{g}$ over an algebraically closed field of characteristic zero, each finite-dimensional simple $\mathfrak{g}$-module (i.e., irreducible as a representation) has dimension one.
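For the easy direction of Cartan's criterion just mentioned, Lie's theorem lets one assume every $X \in \mathfrak{g}$ is upper triangular, so every $Y \in [\mathfrak{g}, \mathfrak{g}]$ is strictly upper triangular; the product $XY$ is then again strictly upper triangular and so traceless. A numerical check with random triangular matrices (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = np.triu(rng.standard_normal((n, n)))        # upper triangular
Y = np.triu(rng.standard_normal((n, n)), k=1)   # strictly upper triangular

# The product of an upper triangular and a strictly upper triangular
# matrix is strictly upper triangular, so its trace vanishes.
assert np.allclose(np.tril(X @ Y), 0)
assert abs(np.trace(X @ Y)) < 1e-10
```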

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional $\mathfrak{g}$-module $V$, let $V_1$ be a maximal proper $\mathfrak{g}$-submodule (which exists by finiteness of the dimension). Then, by maximality, $V/V_1$ is simple, and thus one-dimensional. Induction on the dimension now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true over any base field, since in this case every vector subspace is a Lie subalgebra.[7] Here is another quite useful application:[8]

Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical $\operatorname{rad}(\mathfrak{g})$. Then each finite-dimensional simple representation $\pi: \mathfrak{g} \to \mathfrak{gl}(V)$ is the tensor product of a simple representation of $\mathfrak{g}/\operatorname{rad}(\mathfrak{g})$ with a one-dimensional representation of $\mathfrak{g}$ (i.e., a linear functional vanishing on Lie brackets).