Coursework will count for 15% of the final grade.

Lie algebras should be thought of as the infinitesimal analogue of groups. Lie theory is an important and very active branch of mathematics with many links to other areas: geometry, representation theory, mathematical physics, and more.

The only prerequisites for this course are linear algebra and some knowledge about groups and rings. However, it would be good to get some idea about the natural companion to Lie algebras: Lie groups. For this one needs some idea of what a smooth manifold is: smooth functions on a manifold, the tangent space at a point, a vector field on a manifold. Everything about Lie groups is NON-EXAMINABLE and only to help your own understanding.

The pdf document *Lie_algebras_are_infinitesimal_groups* below explains why Lie algebras should be thought of as infinitesimal groups. It is probably best to ignore all formalities and just think of $\delta$ and $\eta$ as ordinary numbers and calculate up to first order.

The formal definitions of a Lie group and of an algebraic group, the definition of the Lie algebra of a Lie group, and a small guide to the literature can be found in the pdf document *further_notes* below. Of course this material is much more advanced and you will probably have to do some further reading before you really understand it.


On successful completion of this module, students will be able to:

- Give the definitions of: Lie algebra, homomorphism of Lie algebras, subalgebra, ideal, derivation, centre, representation of a Lie algebra, submodule, irreducible module, homomorphism of g-modules, the Killing form of a Lie algebra and the trace form of a classical Lie algebra, the derived and descending central series of a Lie algebra, nilpotent Lie algebra, solvable Lie algebra, solvable radical, semisimple and simple Lie algebra, maximal toral subalgebra, root system, irreducible root system.
- Give the definitions of and calculate with the classical Lie algebras.
- Describe the construction of the irreducible representations of $\mathfrak{sl}_2$.
- State Engel's Theorem, Lie's Theorem and Cartan's Criterion.
- Describe the direct sum decomposition into simple ideals and the Jordan-Chevalley decomposition for semisimple Lie algebras.
- Indicate how root systems correspond to semisimple Lie algebras and give the root space decomposition and root system of $\mathfrak{sl}_n$.

- Monday 17-10-2011: A simple example of a generalised eigenspace decomposition. Take $V=\mathbb{F}^2$ and $\mathfrak{t}\subseteq{\rm Mat}_2(\mathbb{F})$ the diagonal matrices. We consider $2\times 2$ matrices as endomorphisms of $V$ by letting them act on column vectors by matrix multiplication. Then $V=V_{\lambda_1}\oplus V_{\lambda_2}$ and $V_{\lambda_i}=\mathbb{F}e_i$, where $\lambda_i\in\mathfrak{t}^*$ is defined by $\lambda_i(D)=d_i$ for $D=\left[\begin{matrix} d_1 & 0 \\ 0 & d_2 \end{matrix}\right]\in\mathfrak{t}$ and $e_i$ is the $i$-th standard basis vector of $V$.
- Tuesday 18-10-2011: I forgot to give the explicit formula for the action of $\mathfrak{sl}_2(\mathbb{F})$ on $\mathbb{F}[x,y]$. It follows immediately from the formulas for the action on $x$ and $y$ and the formula in Lemma 1(i). For $X=\left[\begin{matrix} a & b \\ c & -a \end{matrix}\right]\in\mathfrak{sl}_2(\mathbb{F})$ and $P\in\mathbb{F}[x,y]$ we have $$X\cdot P=(ax+cy)\frac{\partial P}{\partial x}+(bx-ay)\frac{\partial P}{\partial y}\ .$$
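To see the formula above in action, here is a short worked computation (my own illustration, using the standard basis $e,f,h$ of $\mathfrak{sl}_2(\mathbb{F})$ in the usual matrix form):

```latex
% Specialise X = (a,b,c) to the standard basis elements
% e = [0 1; 0 0] (a=0, b=1, c=0),
% f = [0 0; 1 0] (a=0, b=0, c=1),
% h = [1 0; 0 -1] (a=1, b=0, c=0):
\[
e\cdot P = x\frac{\partial P}{\partial y},\qquad
f\cdot P = y\frac{\partial P}{\partial x},\qquad
h\cdot P = x\frac{\partial P}{\partial x}-y\frac{\partial P}{\partial y}.
\]
% On a monomial x^i y^j this gives
\[
e\cdot x^iy^j = j\,x^{i+1}y^{j-1},\qquad
f\cdot x^iy^j = i\,x^{i-1}y^{j+1},\qquad
h\cdot x^iy^j = (i-j)\,x^iy^j,
\]
```

so the monomials are eigenvectors for $h$, and the homogeneous polynomials of degree $n$ span an $(n+1)$-dimensional submodule of $\mathbb{F}[x,y]$.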

About the notion of complete reducibility. I called a $\mathfrak{g}$-module *completely reducible* (also: *semisimple*) if it is a direct sum of irreducible (also: simple) $\mathfrak{g}$-modules. This is equivalent to: every submodule has a direct complement, i.e. there exists a submodule such that the whole module is the direct sum of the two. One direction is easy to prove. All this also applies to representations of groups or associative algebras (in case you know what that is).
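For contrast, here is a standard example (not from the lectures) of a module that is *not* completely reducible:

```latex
% Let g be the 1-dimensional (abelian) Lie algebra spanned by
%   N = [0 1; 0 0],
% acting on V = F^2 by matrix multiplication on column vectors.
% The line F e_1 = ker(N) = im(N) is a submodule, but it is the
% only invariant line: N v = c v forces c = 0 and v in F e_1,
% since N is nilpotent. Hence the submodule F e_1 has no direct
% complement, and V is not a direct sum of irreducible submodules.
\[
N=\left[\begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix}\right],\qquad
\mathbb{F}e_1\subseteq V=\mathbb{F}^2 .
\]
```

This also shows why some hypothesis (such as semisimplicity of $\mathfrak{g}$, as in Weyl's theorem) is needed for complete reducibility.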

Direct sum decompositions of a semisimple module into irreducibles are not unique. You should think of them as somewhat comparable to bases of a vector space. For example, if the sum of a family of submodules of a semisimple module $M$ is direct, then we can extend this family to a family of irreducible submodules of $M$ such that $M$ is their direct sum. This is similar to the fact that every independent set in a vector space can be extended to a basis. Of course all $1$-dimensional vector spaces are isomorphic, but this is not true for irreducible modules.

- Tuesday 15-11-2011: The proof of Theorem 9 should have ended as follows.

Finally, let $\mathfrak{a}$ be an arbitrary simple ideal of $\mathfrak{g}$. Then $[\mathfrak{a},\mathfrak{g}]\ne\{0\}$, since $[\mathfrak{a},\mathfrak{a}]\ne\{0\}$. So $[\mathfrak{a},\mathfrak{g}_j]\ne\{0\}$ for some $j$, since $\mathfrak{g}=\bigoplus_{i=1}^k\mathfrak{g}_i$. But $\mathfrak{g}_j$ and $\mathfrak{a}$ are simple, so $\mathfrak{a}=[\mathfrak{a},\mathfrak{g}_j]=\mathfrak{g}_j$. Assertion (ii) now follows.

- Tuesday 29-11-2011: A *hyperplane* or *maximal subspace* of a finite dimensional vector space $V$ is a subspace of codimension one, i.e. of dimension ${\rm dim}(V)-1$. Sometimes people do not require a hyperplane to go through the origin, so these hyperplanes are of the form $v+H$ where $v\in V$ and $H$ is a hyperplane in our sense. The hyperplanes of $V$ can be characterised as the kernels of the nonzero linear functionals on $V$.

Now assume that $V$ is an inner product space over $\mathbb{R}$ and let $v\in V$ be nonzero. Then $H_v:=v^\perp=\{w\in V\,|\,(w,v)=0\}$ is a hyperplane of $V$ and $V=H_v\oplus\mathbb{R}v$. The *(orthogonal) reflection in the hyperplane $H_v$* is the linear map $r_v:V\to V$ defined by $$r_v(w)=w_{H_v}-w_{\mathbb{R}v}=w-2w_{\mathbb{R}v}\,,$$ where $w_{H_v}$ and $w_{\mathbb{R}v}$ are the unique vectors with $w_{H_v}\in H_v$, $w_{\mathbb{R}v}\in\mathbb{R}v$ and $w=w_{H_v}+w_{\mathbb{R}v}$. Note that $r_v$ fixes $H_v$ and sends $v$ to $-v$. Since $w_{\mathbb{R}v}=\frac{(w,v)}{(v,v)}v$, we obtain the formula from the lectures $$r_v(w)=w-(w,v^\vee)v\,,$$ where $v^\vee=\frac{2}{(v,v)}v$.
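As a quick sanity check of the formula (my own small example, in $\mathbb{R}^2$ with the standard inner product):

```latex
% Take v = (1,1). Then (v,v) = 2, so v^vee = (2/2) v = (1,1),
% and H_v is the line spanned by (1,-1).
\[
r_v\bigl((1,0)\bigr)
  = (1,0) - \bigl((1,0),(1,1)\bigr)\,(1,1)
  = (1,0) - (1,1)
  = (0,-1),
\]
% which is indeed the reflection of (1,0) in the line y = -x, and
\[
r_v(v) = v - (v,v^\vee)\,v = v - 2v = -v,
\]
% as required of the reflection fixing H_v and negating v.
```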

- sheet 1, solutions 1.
- sheet 2 (assignment; deadline: Friday October 14), solutions 2.
- sheet 3 (assignment; deadline: Friday October 21), solutions 3.
- sheet 4, solutions 4.
- sheet 5 (assignment; deadline: Friday November 4), solutions 5.
- sheet 6 (assignment; deadline: Friday November 11), solutions 6.
- sheet 7 (assignment; deadline: Friday November 25), solutions 7.
- sheet 8 (final assignment; deadline: Friday December 2), solutions 8.
- sheet 9 (final exercise sheet), solutions 9.