Lie Algebras

Lie algebras are encountered all the time in physics as a way to formalize the intuition of infinitesimal transformations. Here we explore them on their own merit, developing tools for the classification and calculation of representations with hints and connections to physical applications.

Basic Definitions

To study Lie algebras on their own, here is a self-contained definition.

Definition: A Lie algebra is a vector space g over a field F together with a map [·,·]: g×g → g called the Lie bracket that satisfies the following axioms for any X,Y,Z ∈ g

  1. Bilinearity: For any a,b ∈ F we have that

    [aX+bY,Z] = a[X,Z] + b[Y,Z]
    [X,aY+bZ] = a[X,Y] + b[X,Z]
  2. Antisymmetry: [X,Y] = −[Y,X]

  3. Jacobi identity: [X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0

While this is a correct definition, it is stilted because it is devoid of the context in which we encounter Lie algebras. Here is a lemma that helps make the connection clearer.

Lemma: Let M be a manifold and U a chart. Then the Lie derivative of vector fields L: X(U)×X(U) → X(U) satisfies the properties of the Lie bracket.

Proof: The Lie derivative is defined for any two vector fields X,Y ∈ X(U) and any function f ∈ C^∞(U) as

(L_X Y)f = X(Yf) − Y(Xf).

Treating X,Y as derivations we can prove the statement directly using properties of partial derivatives.

The point of playing with this lemma is that vector fields seem to be the natural origin of the abstract definition of Lie algebras. A vector field on a manifold is used to describe flows on it, acting as the infinitesimal generator for the direction and speed of each point on the manifold. The Lie algebra axioms are modeled to emulate the properties of these vector fields.

Example: Given a manifold M and a chart U ⊂ M, the set of vector fields X(U) forms a Lie algebra.

Yet, the origin of Lie algebras actually goes back to Lie groups.

Origin from Lie groups

A Lie group is essentially a smooth group. More info about Lie groups can be found here. The idea of a Lie group though is that it is both a group and a manifold. Therefore it has a very nice smooth structure attached to it. For example, the multiplication map is smooth.

Let's find some special vector fields in a Lie group.

Definition: Let X ∈ X(G) be a vector field on G. Then X is left invariant if under the left multiplication map L_g: G → G, which takes h ↦ gh, the pushforward satisfies

(L_g)_∗ X = X

for all g ∈ G.

In other words left invariant vector fields are ones that are obtained by left translating a single vector around the group.

Theorem: The set of all left invariant vector fields of a Lie group G is a Lie algebra with the Lie derivative as Lie bracket. We usually call it the Lie algebra of the group G and it is denoted by g.

So, in the most fundamental sense, Lie algebras originate from describing the flow of multiplication on a Lie group. This is how they find so many applications in physics.

Example: Given a vector space V, which is also a Lie group under addition, its Lie algebra is the vector space itself with the bracket [X,Y] = 0 for any X,Y ∈ V. However, there is a much more interesting Lie algebra we can assign to V, namely its endomorphism algebra End(V), which contains all the linear endomorphisms (all the matrices in the finite dimensional case) with the matrix commutator as Lie bracket.
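Since the matrix commutator is the bracket we will use constantly, it is worth checking the axioms concretely. Here is a minimal numerical sanity check (the random 3×3 complex matrices are arbitrary choices, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(X, Y):
    """Matrix commutator: the Lie bracket on End(V)."""
    return X @ Y - Y @ X

# Three arbitrary endomorphisms of V = C^3
X, Y, Z = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
           for _ in range(3))

# Antisymmetry: [X, Y] = -[Y, X]
assert np.allclose(bracket(X, Y), -bracket(Y, X))

# Jacobi identity: [X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0
jacobi = (bracket(X, bracket(Y, Z)) + bracket(Z, bracket(X, Y))
          + bracket(Y, bracket(Z, X)))
assert np.allclose(jacobi, 0)
```

Bilinearity holds automatically since matrix multiplication is bilinear.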

Simple Lie Algebras

What we are aiming to do here is to describe how to calculate stuff with Lie algebras, not necessarily how they are motivated from Lie groups. For example, things like infinite dimensional Lie algebras appear all the time in places like QM and QFT. Being able to classify their representations is super interesting.

We start by studying simple Lie algebras. The word simple here is a category theory word which roughly means an object with no non-trivial quotient object. If one is interested in a more rigorous version of this, it can be found here. That said, we can escape category language by defining a simple Lie algebra using bases.

Definition: Let g be a Lie algebra and J ⊂ g a basis for g. If for any such basis there exists no proper nonempty subset L ⊂ J with [L,J] ⊆ span L, then g is simple.

Notice that this means that there is no proper ideal (equivalently, no non-trivial quotient in the category of Lie algebras).

Cartan Basis

One useful way to classify simple Lie algebras is by classifying how "commuting" they are. Here is what we mean more precisely.

Definition: Given a simple Lie algebra g, its Cartan subalgebra h is a maximal subalgebra of g such that

[h,h]={0}.

Essentially the Cartan subalgebra is the algebra formed using the maximum number of commuting generators. The question remains for what we can do with the rest of the generators.

Proposition: Let g be a Lie algebra over an algebraically closed field (from now on we will use ℂ) and h be its Cartan subalgebra with a basis H = {H_i}_{i=1}^{dim h}. Then there exists a basis E of g with H ⊂ E such that, given H ∈ H,

[H,E] = α_E(H) E,

for all E ∈ E∖H, where α_E ∈ h*, the dual space of h. This is known as the Cartan-Weyl basis.

Proof: This looks similar to the construction of ladder operators. The way to show it is the following. Pick any basis J with H ⊂ J. Then we know that for any H ∈ H and J ∈ J∖H

[H,J] = Σ_{K∈J} f_{H,J}^K K = Σ_{K∈J∖H} f_{H,J}^K K + Σ_{K∈H} f_{H,J}^K K.

Then we can pick E to be

E_J = J − Σ_{H,K∈H} f_{H,J}^K K.

What we did is subtract the part of J that was in h. Therefore the new set formed by H ∪ {E_J}_{J∈J∖H} is still a basis. Using similar reasoning we can then diagonalize to obtain

[H,E] = α_{H,E} E,

where α_{H,E} ∈ ℂ. Since this assigns a complex number to each H in a linear way, we can define α_E ∈ h* as

α_E(H) = α_{H,E},

for any H ∈ H.

We call α_E ∈ h* a root of g.

Adjoint Representation

There is a particularly nice way to understand these roots in the adjoint representation.

Definition: Given a Lie algebra g, its adjoint representation is the representation of the Lie algebra on itself defined by

ad: g → End(g), X ↦ ad_X = [X,·].

Proposition: The nonzero eigenvalues of ad_H, for any H in the Cartan part H of the Cartan-Weyl basis, are given by the root values α_E(H).

Proof: The Cartan-Weyl basis is, by construction, an eigenbasis of ad_H for any H ∈ H, since for any E

ad_H E = [H,E] = α_E(H) E.

Therefore α_E(H) is an eigenvalue of ad_H.
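As a concrete illustration (a numerical sketch using sl(3,ℂ), with the matrix units E_ij as an explicit choice of basis), the off-diagonal matrix units are exactly the eigenvectors of ad_H for a diagonal Cartan generator H, and the eigenvalues are the root values:

```python
import numpy as np

def bracket(X, Y):
    """Matrix commutator."""
    return X @ Y - Y @ X

def E(i, j):
    """Matrix unit E_ij: a 3x3 matrix with a single 1 in entry (i, j)."""
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

H1 = np.diag([1.0, -1.0, 0.0])   # a Cartan generator of sl(3)
step_gens = [E(i, j) for i in range(3) for j in range(3) if i != j]

# Each E_ij is an eigenvector of ad_{H1}: [H1, E_ij] = (h_i - h_j) E_ij.
# Read off the eigenvalue of each eigenvector directly.
eigs = sorted(np.sum(bracket(H1, B) * B) / np.sum(B * B) for B in step_gens)
assert np.allclose(eigs, [-2, -1, -1, 1, 1, 2])   # the root values alpha_E(H1)
```

The two remaining eigenvalues of ad_{H1} are zero, coming from the Cartan subalgebra itself.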

Notice how we can completely characterize each E by its root α_E. In other words there is a one-to-one correspondence between the roots and the remaining generators E of the algebra. So in some sense, which we will make precise later, fixing the roots and the Cartan subalgebra defines our simple Lie algebra!

So we can perhaps refer to the generator with corresponding root α as E_α, or even just α instead. These are also common notations.

Commutation Relations

One last thing worth highlighting for calculation purposes is the following set of commutation relations of the Cartan-Weyl basis.

Proposition: Let g be a simple Lie algebra and consider its Cartan-Weyl basis. Let H,H′ ∈ h be generators of the Cartan subalgebra and E,E′ ∈ g∖h be generators of the remaining algebra with α,α′ ∈ h* the corresponding roots. Then the following identities are true.

  1. [H,H′] = 0
  2. [H,E] = α(H) E,
  3. If α+α′ = 0 then [E,E′] = 2α_H/(α,α), where α_H = Σ_{i=1}^{dim h} α_i H_i.
  4. If α+α′ is also a root, corresponding to a generator Ē, then there exists a λ ∈ ℂ such that [E,E′] = λĒ.
  5. If α+α′ is none of the above, then [E,E′] = 0.
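These relations can be checked directly in the smallest example, sl(2,ℂ), with the common normalization [E,F] = H (a choice of scale for illustration, not forced by the proposition):

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])   # Cartan generator
E = np.array([[0., 1.], [0., 0.]])    # root alpha, with alpha(H) = 2
F = np.array([[0., 0.], [1., 0.]])    # root -alpha

def bracket(X, Y):
    """Matrix commutator."""
    return X @ Y - Y @ X

assert np.allclose(bracket(H, E), 2 * E)    # relation 2: [H,E] = alpha(H) E
assert np.allclose(bracket(H, F), -2 * F)   # relation 2 for the root -alpha
assert np.allclose(bracket(E, F), H)        # relation 3: opposite roots close into h
assert np.allclose(bracket(E, E), 0)        # relation 5: 2*alpha is not a root
```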

Killing Form

Since we have a representation of the algebra onto itself, it would be nice to find an "inner product" on the Lie algebra that is invariant under the action of itself. Just like we have orthogonal transformations and we find the Euclidean inner product that is invariant under them.

Notice that since the Lie algebras we are considering are all complex they carry a Hermitian inner product. So talking about lengths and stuff is always possible. However finding an ad invariant inner product isn't. So we have to relax something.

It turns out that if we want something bilinear, symmetric, and invariant under Lie algebra automorphisms of a simple Lie algebra, we don't really have many choices.

Theorem: Let g be a simple Lie algebra. Then any symmetric bilinear form K: g×g → ℂ invariant under Lie algebra automorphisms is given by

K(X,Y) = λ tr(ad_X ad_Y),

where λ ∈ ℂ is any number.

By the way, invariance of a bilinear form means that for any Lie algebra automorphism f: g → g

K(f(X),f(Y)) = K(X,Y),

for any X,Y ∈ g.

Proof: First we need to clarify how the trace tr of a linear endomorphism f is defined. What we do is pick a basis J and then note that for any J ∈ J

f(J) = Σ_{K∈J} f_J^K K,

for some f_J^K ∈ ℂ. Then

tr f = Σ_{J∈J} f_J^J.

As we will soon see, there are other types of traces, defined with respect to different isomorphisms between the vector space and its dual. Now, for the actual proof, let B: g×g → ℂ be a symmetric invariant bilinear form. Then consider the linear map B: g → g* defined for any X ∈ g by B(X) = B(X,·).

Since B is invariant under the one-parameter groups of automorphisms e^{t ad_X} generated by any X ∈ g, differentiating at t = 0 gives

B ∘ ad_X = ad*_X ∘ B,

where ad*: g → End(g*) is the dual representation given for any X ∈ g and ω ∈ g* by

ad*_X(ω) = −ω ∘ ad_X.

Notice that for matrix representations this is simply the statement

ad*_X = −(ad_X)^T.

As a result, B is an intertwiner between the adjoint representation and its dual. However, by Schur's lemma, since g is simple the space of such intertwiners hom_g(g,g*) is one dimensional. So for any two such maps B,C there exists λ ∈ ℂ such that B = λC.

So now all we need to show is that tr(ad_X ad_Y) is invariant under automorphisms. Consider an automorphism f ∈ Aut(g) and any map h: g → g. Then we have that tr(f ∘ h ∘ f⁻¹) = tr(h) by the properties of the trace. Additionally, for any X,Y ∈ g

ad_{f(X)} Y = [f(X),Y] = f([X,f⁻¹(Y)]) = f(ad_X f⁻¹ Y),

or in other words ad_{f(X)} = f ∘ ad_X ∘ f⁻¹, which justifies the relation between the adjoint representation and conjugation. Plugging this into the trace we see that it is invariant.

This is fantastic! The best we can do is this form tr(adXadY). So might as well give it a name.

Definition: The Killing form is the symmetric bilinear form K on g given for each X,Y ∈ g by

K(X,Y) = tr(ad_X ad_Y).

Later we will introduce the Killing form normalized with a different factor so for now we will keep the notation abstract as K.
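To make the definition concrete, here is a small numerical computation of the Killing form for sl(2,ℂ), building the ad matrices in the basis {H,E,F} by solving linear systems (the basis and its normalization are conventional choices for illustration):

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
E = np.array([[0., 1.], [0., 0.]])
F = np.array([[0., 0.], [1., 0.]])
basis = [H, E, F]
B = np.stack([M.ravel() for M in basis], axis=1)   # columns are vec(basis)

def ad(X):
    """Matrix of ad_X = [X, .] expressed in the basis {H, E, F}."""
    cols = [np.linalg.lstsq(B, (X @ M - M @ X).ravel(), rcond=None)[0]
            for M in basis]
    return np.stack(cols, axis=1)

def K(X, Y):
    """Killing form K(X, Y) = tr(ad_X ad_Y)."""
    return np.trace(ad(X) @ ad(Y))

assert np.isclose(K(H, H), 8)   # known value for sl(2)
assert np.isclose(K(E, F), 4)
assert np.isclose(K(H, E), 0)   # Cartan part orthogonal to the rest

# Nondegeneracy: the Gram matrix of K in this basis is invertible
G = np.array([[K(X, Y) for Y in basis] for X in basis])
assert abs(np.linalg.det(G)) > 1e-9
```

The last assertion previews the nondegeneracy lemma below the definition.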

Lemma: The Killing form is nondegenerate on a simple Lie algebra.

Proof: Notice that ker K = {X ∈ g : K(X,·) = 0} is an ideal of g. That is because if X ∈ ker K, then for all Y ∈ g the element [X,Y] is also in the kernel, since for any Z ∈ g

K([X,Y],Z) = K(X,[Y,Z]) = 0,

as X is in the kernel. However, we know that g is simple, so the kernel is either 0 or g. If it were all of g, then K would vanish identically, which by Cartan's criterion would make g solvable, contradicting simplicity. Hence ker K = 0.

Since the Killing form is nondegenerate we can finally define an orthonormal basis for the Lie algebra which is fantastic! We can also use it to define an isomorphism between g and g just like we do using any nondegenerate bilinear form of a vector space.

A couple of interesting uses of the Killing form are here.

Lemma: The Cartan subalgebra is orthogonal to the rest with respect to the Killing form.

Proof: Pick a generator E outside the Cartan subalgebra, with root α_E. For all generators H,H′ of the Cartan subalgebra we have

0 = K([H,H′],E) = −K(H′,[H,E]) = −α_E(H) K(H′,E).

Since α_E(H) ≠ 0 for some H, this implies K(H′,E) = 0.

Proposition: Given a simple Lie algebra g over ℂ and a root α ∈ h*, then −α ∈ h* is also a root.

Proof: Let E,E′ be Cartan-Weyl generators with associated roots α,α′, and consider any element H of the basis of the Cartan subalgebra. Then we have

α(H) K(E,E′) = K(ad_H E, E′) = −K(E, ad_H E′) = −α′(H) K(E,E′), so (α(H)+α′(H)) K(E,E′) = 0.

This means that either α′ = −α or K(E,E′) = 0. If we assume that −α is not a root, then K(E,X) = 0 for every element X of the Cartan-Weyl basis (we used the previous lemma to see that E is perpendicular to the Cartan subalgebra). Therefore K is degenerate. Since we know it is nondegenerate, −α must be a root.

Weights

Now it is time to play with the representations of the simple Lie algebras. In our attempt to classify them we will generalize the idea we introduced as roots. But let's start simple.

Definition: Given a Lie algebra g, a Lie algebra representation of g on a vector space V is a Lie algebra homomorphism ρ: g → End(V). In other words, for any X,Y ∈ g

[ρ(X),ρ(Y)] = ρ([X,Y]).

We often abuse notation and call the representation V, in which case we refer to the g-module defined by the representation ρ. Sometimes we even use the notation V_ρ. The representation is unitary if V is a complex vector space with a positive definite Hermitian form under which the representatives of generators with opposite roots are Hermitian conjugates of each other.

Example: The adjoint representation of the Lie algebra to itself is (or can always be made) unitary.

Lemma: Let g be a simple Lie algebra and h be its Cartan subalgebra. Then for every representation V of g there exists a basis that simultaneously diagonalizes the Cartan basis.

Proof: Since elements of the Cartan subalgebra commute, so do their representatives. Being diagonalizable (immediately so in the unitary case, where the Cartan representatives are Hermitian), they are simultaneously diagonalizable.

Definition: Given a representation ρ: g → End(V) of a Lie algebra g, let ψ ∈ V be a simultaneous eigenvector of the representatives of the generators of the Cartan subalgebra h. Then the corresponding weight is the element λ ∈ h* such that for any generator H ∈ h

ρ(H)ψ = λ(H)ψ.

Notice that the roots are the weights of the adjoint representation. This basis is quite nice because it has the following property.

Proposition: Given a representation ρ: g → End(V) of a Lie algebra g, let ψ ∈ V be an eigenvector with weight λ ∈ h*, H an element of the Cartan basis, and E a Cartan-Weyl basis element with root α ∈ h*. Then ρ(E)ψ, if nonzero, is an eigenvector of ρ(H) with weight λ+α.

Proof: This follows from the commutation relation between H and E:

ρ(H)ρ(E)ψ = ρ(E)ρ(H)ψ + ρ([H,E])ψ = (λ(H)+α(H)) ρ(E)ψ.

So we have found an expression for ladder operators! This is super fun. This leads us to generalize a lot of our intuition from the angular momentum representations. Also, to keep notation clear, we will now work in terms of modules: given a representation ρ: g → End(V), we have a product g×V → V defined by the representation, so we will no longer write ρ explicitly.
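Here is the ladder mechanism at work in the 3-dimensional irreducible representation of sl(2,ℂ) (the explicit matrices are one conventional choice, used purely as a sketch):

```python
import numpy as np

# Spin-1 (3-dimensional) irreducible representation of sl(2)
rH = np.diag([2., 0., -2.])                  # weights 2, 0, -2
rE = np.sqrt(2.) * np.array([[0., 1., 0.],
                             [0., 0., 1.],
                             [0., 0., 0.]])  # ladder operator, alpha(H) = 2

psi = np.array([0., 1., 0.])                 # weight-0 eigenvector
assert np.allclose(rH @ psi, 0 * psi)

raised = rE @ psi                            # should carry weight 0 + 2
assert np.allclose(rH @ raised, 2 * raised)
```

Applying rE once more gives zero, which previews the finiteness theorem below.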

Theorem: Let V be a finite dimensional unitary representation of a simple Lie algebra g, E ∈ g a generator with root α and Hermitian conjugate E†, and ψ ∈ V a basis element with weight λ. Then there exist integers m,n ∈ ℕ such that E^m ψ = (E†)^n ψ = 0.

Proof: We first notice that the vectors E^p ψ and E^q ψ for integers p ≠ q are orthogonal, by considering any H ∈ h:

(pα(H)+λ(H)) ⟨E^p ψ, E^q ψ⟩ = ⟨H E^p ψ, E^q ψ⟩ = ⟨E^p ψ, H E^q ψ⟩ = (qα(H)+λ(H)) ⟨E^p ψ, E^q ψ⟩.

Therefore we have that

(p−q) α(H) ⟨E^p ψ, E^q ψ⟩ = 0.

Since α(H) cannot be zero for all Cartan generators and p ≠ q, the two vectors are orthogonal. As a result, we can create a sequence of orthogonal vectors of the form {E^k ψ}_{k=1}^m. However, since V is finite dimensional, such a sequence can have at most dim V elements. Therefore there must exist an m ∈ ℕ such that E^m ψ = 0. The proof for the conjugate is identical.

One more interesting thing is that the above theorem imposes a cool result for the weights.

Corollary: Let α be a root of g and λ a weight in a finite dimensional unitary representation of g. Then

2(λ,α)/(α,α) ∈ ℤ,

where (·,·): h*×h* → ℂ is the inner product on the dual space induced by the Killing form on g.

Proof: The proof relies on the fact that in any finite dimensional unitary representation V of g, given any generator E ∉ h with root α ∈ h* and H := ½[E,E†], the set {H,E,E†} forms a representation of su(2) acting on any vector ψ ∈ V with weight λ ∈ h*.

That representation will contain states with maximum and minimum z-component of angular momentum. So there exists a p ∈ ℕ such that E^p ψ is (without loss of generality) the maximum vector with eigenvalue j, and an integer q ∈ ℕ such that (E†)^q ψ has the minimum eigenvalue −j, where j is a half integer. So we can write

j = λ(H) + pα(H), −j = λ(H) − qα(H).

Summing the two gives 2λ(H) = (q−p)α(H); rewriting this in terms of the induced inner product yields 2(λ,α)/(α,α) = q−p ∈ ℤ, proving the claim.

Notice that this corollary applies to the adjoint representation as well! So we can constrain the relations between the roots of a Lie algebra.
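For instance, taking the A2 (su(3)) root system, with the roots embedded as plane vectors of length √2 (an illustrative coordinate choice, not forced by the theory), the integrality condition can be verified for every pair of roots:

```python
import numpy as np

# Roots of A2: two simple roots at 120 degrees, all six roots of length sqrt(2)
a1 = np.array([np.sqrt(2.), 0.])
a2 = np.sqrt(2.) * np.array([-0.5, np.sqrt(3.) / 2.])
roots = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]

for alpha in roots:
    for beta in roots:
        q = 2 * np.dot(beta, alpha) / np.dot(alpha, alpha)
        assert np.isclose(q, round(q))   # 2(beta,alpha)/(alpha,alpha) is an integer
```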

Lemma: If α,β are roots, then

β − (2(β,α)/(α,α)) α

is also a root.

Proof: Consider the adjoint representation of the Lie algebra, and construct a subrepresentation of su(2) as we did above, using the generator with root β as the starting vector. Then, since β(H) − qα(H) is a weight, so is

β(H) − (q−p) α(H) = β(H) − (2(β,α)/(α,α)) α(H).

However, this is true for enough H to form a basis for h. Therefore we have shown the claim.

Here is a super interesting application.

Corollary: Let α ≠ ±β be roots of a simple Lie algebra. Then they are not parallel.

Proof: By the previous corollary, if they were parallel they could only be parallel by a half integer, since

2(β,α)/(α,α) ∈ ℤ.

Let β = λα; then both 2λ and 2/λ must be integers. This implies that λ ∈ {±1/2, ±1, ±2}. We have assumed that λ ≠ ±1, so without loss of generality (swapping α and β if necessary) we can pick λ = ±2. Now let E_{±α}, E_β be the corresponding generators. We would then have

[E_{±α}, E_{±α}] ∝ E_β,

since ±α ± α = β. However, we know that [E_{±α}, E_{±α}] = 0 because of antisymmetry. Therefore there is no such generator E_β, which implies that β is not a root.

Weyl Group

Playing with the reflections s_α: h* → h* associated with each root α ∈ Δ, we have stumbled upon a group! That is, the group generated by the reflections s_α with composition as the group operation. This is called the Weyl group.

Proposition: Let W be the Weyl group associated with a root system Δ with simple roots S ⊂ Δ. Then Δ = WS under the defining group action of W on h*.

Simple Roots

Let's dive deeper into the characterization of roots and their properties by using them to construct a basis for the dual space h* of the Cartan subalgebra of a Lie algebra g.

Lemma: Given the set Δ ⊂ h* of roots of a simple Lie algebra g, there always exists a subset Δ⁺ ⊂ Δ such that Δ = Δ⁺ ∪ (−Δ⁺).

Proof: We know that 0 can't be a root, because if it were, the associated basis element would be in the Cartan subalgebra. We also know that for each root α ∈ Δ we have −α ∈ Δ. Therefore we can form Δ⁺ by picking, for each such pair, exactly one of α and −α.

While this doesn't seem that cool, here is a cool thing.

Proposition: A Euclidean inner product ⟨·,·⟩ on h* defines such a subset Δ⁺ for a set of roots Δ.

Proof: To construct it, pick any α ∈ h* such that ⟨α,β⟩ ≠ 0 for every root β ∈ Δ. Then take

Δ⁺ = {β ∈ Δ : ⟨α,β⟩ > 0}.

Note: Such an inner product and such an α always exist, since h* is finite dimensional and we only need to avoid the finitely many hyperplanes orthogonal to the roots.

We call such a set Δ⁺ a set of positive roots. An interesting thing to notice is that while we will always be able to find positive roots, we have multiple choices for them. All of the choices contain precisely half the available roots and, as we will see, we consider them equivalent, since each set contains either α or −α for every root. So from now on, without loss of generality, we will assume that we have fixed a set of positive roots. We also define Δ⁻ := −Δ⁺.

The reason why we introduced them is the following.

Lemma: The number of positive roots in a simple Lie algebra is always greater than or equal to the dimension of the Cartan subalgebra.

Proof: Assume the converse, and write E_α for the Cartan-Weyl generator with root α and Δ⁺ for the set of positive roots. Consider the subset L of the Cartan-Weyl basis J that contains E_α for every α ∈ Δ⁻ together with the elements of the Cartan subalgebra obtained as

[E_α, E_{−α}] = 2α_H/(α,α) ∈ h.

If |Δ⁺| < dim h, these elements span only a proper subspace of h, so L is a proper subset of the Cartan-Weyl basis with [L,J] ⊆ span L. This would imply that the algebra is not simple. Therefore |Δ⁺| must be greater than or equal to dim h.

This means that there might be a chance that we would be able to find a basis for h made out of positive roots! As we will see, we will always be able to do that. Let's try to show this.

Definition: A positive root α ∈ Δ⁺ is simple if there exist no two positive roots β,γ ∈ Δ⁺ such that α = β+γ.

Corollary: Any positive root is a sum of simple roots.

Proof: If it is simple we are done; if it is not, it is a sum of two positive roots, each smaller with respect to the functional defining positivity, so repeating the argument terminates in a sum of simple roots.
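The construction of positive and simple roots can be carried out mechanically. Here is a sketch for the A2 root system (the plane coordinates for the roots and the functional gamma are illustrative choices):

```python
import numpy as np

# Roots of A2 as plane vectors of length sqrt(2)
a1 = np.array([np.sqrt(2.), 0.])
a2 = np.sqrt(2.) * np.array([-0.5, np.sqrt(3.) / 2.])
roots = [a1, a2, a1 + a2, -a1, -a2, -(a1 + a2)]

gamma = np.array([1.0, 0.1])   # generic functional, orthogonal to no root
positive = [r for r in roots if np.dot(gamma, r) > 0]
assert len(positive) == len(roots) // 2   # exactly half the roots are positive

def is_sum_of_two(alpha, pos):
    """True if alpha decomposes as a sum of two positive roots."""
    return any(np.allclose(alpha, b + c) for b in pos for c in pos)

simple = [r for r in positive if not is_sum_of_two(r, positive)]
assert len(simple) == 2   # the rank of A2: as many simple roots as Cartan generators
```

Note that which roots come out simple depends on the choice of gamma, matching the remark that different sets of positive roots are equally good.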

Lemma: If two distinct positive roots satisfy (α,β) > 0, then α−β is a root.

Proof: Since (α,β) > 0, the integers 2(α,β)/(α,α) and 2(α,β)/(β,β) are positive, and by the Cauchy-Schwarz inequality their product is less than 4. Therefore at least one of them equals 1:

2(α,β)/(β,β) = 1 or 2(α,β)/(α,α) = 1.

In the first case the reflection α − (2(α,β)/(β,β)) β = α−β is a root; in the second, β−α is a root, and hence so is its negative. Either way, α−β is a root.

We're cooking now. Let's prove one more lemma and then we can get to work.

Lemma: Any two distinct simple roots α,β have (α,β) ≤ 0.

Proof: If (α,β) > 0, then by the previous lemma α−β or β−α would be a positive root. But then α = β + (α−β) or β = α + (β−α) would be a sum of two positive roots, contradicting the simplicity of α or of β.

Finally we are ready for the super amazingly cool theorem about simple roots.

Theorem: The simple roots are a basis of h* for any simple Lie algebra.

Proof: We first show that they span h*. They do so because they span all positive roots, and the positive roots span h*. Then we need to show that they are linearly independent. If they were linearly dependent, we could find coefficients a_i ∈ ℝ, not all zero, such that

Σ_i a_i s_i = 0,

where the s_i are the simple roots. Now let's split the coefficients into positive and negative ones to obtain two vectors α = Σ b_i s_i, where the b_i are the positive coefficients, and β = Σ c_j s_j, where the c_j = −a_j come from the negative coefficients. Since α − β = 0, we have (α,β) = (α,α) ≥ 0.

However, we also have that

(α,β) = Σ_{i,j} b_i c_j (s_i,s_j) ≤ 0,

since each coefficient is positive and (s_i,s_j) has to be non-positive. Hence α = β = 0, and pairing with the functional defining positivity then forces every b_i and c_j to vanish, a contradiction. So the simple roots are a linearly independent spanning set.

This is an incredibly powerful result in the classification of simple Lie algebras! We can now use the simple roots as a basis for the dual of our Cartan subalgebra, and we learn that there are exactly as many simple roots as there are Cartan generators.

Cartan Matrix and Coroots

Another really useful construction is that of the coroots, which are essentially the dual description of the roots. There is a very pretty lattice picture that comes with this, but we won't introduce it until we talk about the Weyl group.

Definition: Given a root α ∈ Δ of some simple Lie algebra g, the dual root or coroot α∨ ∈ h* is given by

α∨ := 2α/(α,α).

Using this we can define the Cartan matrix which will be a very useful tool.

Definition: The Cartan matrix of a simple Lie algebra with simple roots {α_i}_{i=1}^{dim h} is the change of basis transformation between the roots and the dual roots. In other words, it is the matrix with coefficients

A_ij = (α_i, α_j∨).

Let's discover some of its properties.

Proposition: The Cartan matrix is an integer matrix with diagonal elements equal to 2 and nondiagonal elements in {0,−1,−2,−3}.

Proof: We have already shown (several times now) why these inner products must be integers. For the diagonal we literally plug in: (α_i, α_i∨) = 2. For the rest, we use our previous lemmas. First of all, by the Cauchy-Schwarz inequality, for i ≠ j

A_ij A_ji < 4.

We also know that (α_i,α_j) ≤ 0, so A_ij ≤ 0, and A_ij = 0 if and only if A_ji = 0. Therefore, if A_ij ≠ 0 then A_ji ≤ −1, and A_ij A_ji < 4 forces A_ij ≥ −3. As a result, −3 ≤ A_ij ≤ 0.
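As a check, here is the Cartan matrix of A2 computed from an explicit choice of simple roots at 120 degrees (illustrative coordinates):

```python
import numpy as np

a1 = np.array([np.sqrt(2.), 0.])
a2 = np.sqrt(2.) * np.array([-0.5, np.sqrt(3.) / 2.])
simple = [a1, a2]

# A_ij = (alpha_i, alpha_j^v) = 2 (alpha_i, alpha_j) / (alpha_j, alpha_j)
A = np.array([[2 * np.dot(ai, aj) / np.dot(aj, aj) for aj in simple]
              for ai in simple])
assert np.allclose(A, [[2, -1], [-1, 2]])   # the Cartan matrix of A2 = su(3)
```

The entries land in the allowed set: 2 on the diagonal and −1 off it.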

Now we will show even fancier ways to describe the roots.

Theorem: Let α,β ∈ h* be roots of a simple Lie algebra with (α,β) ≠ 0. Then the ratio of their lengths satisfies

(α,α)/(β,β) ∈ {1,2,3},

where without loss of generality α is the longer root.

Proof: By the Cauchy-Schwarz inequality we know that

(α,β)² < (α,α)(β,β).

We can rearrange this to obtain, for the integers k,m ∈ ℤ (recall that these expressions must be integers by our previous corollary),

k = 2|(α,β)|/(α,α) < 2(β,β)/|(α,β)|
m = 2|(α,β)|/(β,β) < 2(α,α)/|(α,β)|

Multiplying the two, we conclude that mk < 4. Since

(α,α)/(β,β) = m/k,

and α is assumed to be longer than β, the only solutions are (m,k) ∈ {(1,1),(2,1),(3,1)}, which proves the claim.

This is amazing! In fact this restricts the structure of the roots of the Lie algebra so much. We can do even better. We will show that for a given simple Lie algebra there can be, at most, two different length ratios.

Corollary: Given a simple Lie algebra, the ratio of the lengths of any two roots can take at most two different values.

Proof: Assume that there are roots α,β ∈ h* with length ratio 2 and roots β,γ with length ratio 3. Then we have that

(α,α)/(γ,γ) = [(α,α)/(β,β)] · [(β,β)/(γ,γ)] = 2·3 = 6,

which is not a valid length ratio according to the previous theorem.

This is amazing because it will lead us directly to an elegant classification of simple Lie algebras using their roots that extends far beyond Lie algebras. They are called Dynkin Diagrams but no spoilers yet.

An Inner Product on h*

So far we have been using an inner product on h* while only saying that it is "induced by the Killing form." Let's close that gap so that I can sleep at night.

We have derived before that in a simple Lie algebra the Killing form is nondegenerate. Therefore, as a map from the Lie algebra to its dual, it is a vector space isomorphism.

Definition: The dual Cartan algebra h* of a simple Lie algebra g is a complex vector space. A real form h*_ℝ of it is a real subspace such that

h* = ℂ ⊗_ℝ h*_ℝ.

This is a bit pedantic, but a lot of our proofs were based on the fact that (·,·) is a Euclidean inner product. However, it is not. Since the Killing form is, in general, not definite, there is no reason to expect that the induced inner product would be positive definite. However, we are in luck, because its restriction to h*_ℝ is! And not only that, but we can always find an h*_ℝ in which our roots live.

Lemma: The roots of a simple Lie algebra live in a real form h*_ℝ.

Proof: We can construct a real form by taking the real span of a basis of the complex vector space. But wouldn't you know it? Not only are the simple roots a basis for the complex vector space, the rest of the roots lie in their real span! So we can define h*_ℝ = span_ℝ Δ.

Now we are ready to play a bit more and construct things even further.

Proposition: The Killing form defines a symmetric, nondegenerate, definite bilinear form on h*_ℝ that is invariant under Lie algebra automorphisms.

Proof sketch: We first define the symmetric nondegenerate bilinear form (·,·): h*×h* → ℂ for any α,β ∈ h* by

(α,β) = K(K⁻¹(α), K⁻¹(β)),

where K⁻¹: h* → h is the inverse of the map from the Cartan subalgebra to its dual. This is already symmetric, nondegenerate, bilinear, and Lie algebra automorphism invariant by the properties of K. However, we also crave definiteness. We restrict to the real form of the dual Cartan algebra defined above, and rescale by a factor of i if needed. Then the resulting bilinear form is definite.

Now the last thing we are missing is a normalization. And here is where the famous Coxeter numbers come in.

Definition: Given a simple Lie algebra, any root θ ∈ Δ can be written as an integer combination of the simple roots

θ = Σ_i m_i α_i,

where m_i ∈ ℤ. The root θ is the highest root if Σ_i m_i is maximal. For the highest root we call the m_i marks, and the coefficients in the dual basis (coroot basis),

m_i∨ = m_i (α_i,α_i)/2,

are called comarks. Then the Coxeter number and dual Coxeter number are defined by

g = 1 + Σ_i m_i, g∨ = 1 + Σ_i m_i∨.

We will from now on normalize the Killing form like so:

K(X,Y) = (1/2g∨) tr(ad_X ad_Y).

This normalizes our induced inner product nicely without changing anything.

Reconstructing the Simple Lie Algebra from its Roots

Now we are finally ready to show the next amazing result, which is what we have all been waiting for in order to start the classification. Let's create a basis for our simple Lie algebra given its roots. For a simple root α ∈ h* we define the following element of the Cartan subalgebra h:

h_α = K⁻¹(α∨).

Then we define e_α, f_α as the generators with roots ±α respectively. These follow the corresponding commutation relations

[h_α,h_β] = 0, [h_α,e_β] = A_{βα} e_β, [h_α,f_β] = −A_{βα} f_β, [e_α,f_β] = δ_{αβ} h_α,

where A is the Cartan matrix. But you will say: "Wait! There must be more ladder operators!" You would be right. We obtain the remaining ones from the Serre relations, which end up being very useful in proving things:

(ad_{e_α})^{1−A_{βα}} e_β = 0, (ad_{f_α})^{1−A_{βα}} f_β = 0.
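The Serre relations can be seen concretely in sl(3,ℂ): taking e1, e2 as the raising generators of the two simple roots and using that the relevant Cartan matrix entry is −1, two applications of ad_{e1} annihilate e2 (the matrix realization below is an illustrative choice):

```python
import numpy as np

def bracket(X, Y):
    """Matrix commutator."""
    return X @ Y - Y @ X

def E(i, j):
    """Matrix unit E_ij in 3x3 matrices."""
    M = np.zeros((3, 3))
    M[i, j] = 1.0
    return M

e1, e2 = E(0, 1), E(1, 2)   # raising generators for the two simple roots of sl(3)

# Serre relation with exponent 1 - (-1) = 2: one bracket survives, two vanish
step1 = bracket(e1, e2)     # proportional to the generator of alpha1 + alpha2
assert not np.allclose(step1, 0)
assert np.allclose(bracket(e1, step1), 0)
```

The single surviving bracket is exactly the ladder operator for the remaining positive root α1+α2, illustrating how the Serre relations generate and then cut off the root strings.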

Dynkin Diagrams

Finally the next step in our classification journey. What we have shown so far is that knowing the simple roots and the Cartan Matrix we can reconstruct the simple Lie algebra. We can put all this information in diagrams that can help us quickly codify them.

The rule: Given the simple roots of a Lie algebra, as well as its Cartan matrix, we obtain a graph by assigning a node to each simple root, where roots of the same length get the same color (we only need two colors). Then we connect nodes α,β with A_{αβ} A_{βα} ∈ {0,1,2,3} lines.

That's it! From that we can obtain the algebra! For example the Dynkin diagram for su(2) is a single dot.

There are four families of diagrams associated to simple Lie algebras, as well as 5 exceptional cases (lol). Here is a table

Dynkin Label         Matrix Group Label
A_n                  su(n+1)
B_n                  so(2n+1)
C_n                  sp(2n)
D_n                  so(2n)
E6, E7, E8, F4, G2   (exceptional, no classical label)