Wednesday, December 31, 2014

Happy New Year 2015!


The exploration of physics and math will continue in 2015, but now is the time to celebrate the new year. Happy New Year!


Thursday, December 25, 2014

The Hopf Algebra


Continuing the discussion, a bialgebra is a structure which is both an algebra and a coalgebra, subject to a compatibility condition. A Hopf algebra H is a bialgebra with yet another property: the antipode, a map from H to H usually named S.

If the bialgebra is a graded and connected space (its degree zero piece is just the field \( k \)), then the antipode comes for free, and by an abuse of notation people call such bialgebras Hopf algebras.

The antipode must be compatible with the existing structures of multiplication and comultiplication, and so the diagram below commutes.
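In formulas, the commutativity of the diagram is the antipode axiom:

\( m \circ (S \otimes I) \circ \Delta = u \circ \epsilon = m \circ (I \otimes S) \circ \Delta \)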



One thing I forgot to mention last time is that the unit "u" has a dual: the counit \( \epsilon \), which maps elements from the algebra to the field \( k \). On a basis the counit typically sends some elements to zero and others to one: for \( kG \) it sends every group element to \( 1_k \), while for the polynomial bialgebra below it sends \( x^n \) to zero for \( n \geq 1 \) and \( 1 \) to \( 1_k \).

Let us verify the commutativity of the diagram with our friend \(kG \). Here the antipode is the group inverse: \( S(g) = g^{-1}\):

Start from the left side: \( g\) 

Moving up and right: \( g \rightarrow g\otimes g \); then moving to the right and down: \( g \rightarrow g\otimes g \rightarrow g^{-1} \otimes g \rightarrow 1_H \)

Now move from left to right along the middle line. Since \( \epsilon \) maps every group element to \( 1_k \), we have:

\( g \rightarrow 1_k \rightarrow 1_H \) and the diagram commutes.
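We can also let a computer trace the diagram. Below is a minimal Python sketch (all names are ad hoc, chosen only for this illustration) checking the antipode axiom on the group algebra of the cyclic group \( Z/5 \):

# A minimal sketch (the names Delta, eps, S, m are ad hoc, not a library API)
# checking the antipode axiom  m o (S (x) I) o Delta = u o eps
# on the group elements of kG for G = Z/5, written additively.

n = 5            # the cyclic group Z/5: elements 0..4 under addition mod 5
identity = 0     # 1_H, the identity element of the group

def Delta(g):
    """Coproduct: Delta(g) = g (x) g for a group element g."""
    return (g, g)

def eps(g):
    """Counit: every group element is sent to 1_k."""
    return 1

def S(g):
    """Antipode: the group inverse, here negation mod n."""
    return (-g) % n

def m(a, b):
    """Product: the group operation, here addition mod n."""
    return (a + b) % n

for g in range(n):
    g1, g2 = Delta(g)
    left = m(S(g1), g2)          # upper path: Delta, then S (x) I, then m
    right = eps(g) * identity    # lower path: u(eps(g)) = eps(g) * 1_H
    assert left == right

print("antipode axiom verified on all group elements of Z/5")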

For a graded connected bialgebra the antipode is given by the following explicit formula:

\( S = \sum_{n \geq 0} {(-1)}^n\, m^{(n-1)}\, {\pi}^{\otimes n}\, {\Delta}^{(n-1)} \)

where \( \pi = I - u \epsilon \), the superscripts \( (n-1) \) denote iterated products and coproducts, and the \( n = 0 \) term is understood as \( u \epsilon \). On an element of degree \( d \) the sum is finite: all terms with \( n > d \) vanish because \( \pi \) kills the degree zero part.
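To see the formula at work, here is a small Python sketch (the dictionary representation and all function names are ad hoc) evaluating the series on the polynomial bialgebra of the previous post, where \( \Delta(x^n) = \sum_i \binom{n}{i} x^i \otimes x^{n-i} \); the result matches the known answer \( S(x^d) = (-1)^d x^d \):

from math import comb
from collections import defaultdict

# An n-fold tensor of monomials is a dict {(d1,...,dn): coefficient}.

def delta_last(t):
    """Apply Delta to the last tensor factor, turning n-fold into (n+1)-fold.
    By coassociativity this computes the iterated coproduct."""
    out = defaultdict(int)
    for degs, c in t.items():
        *rest, d = degs
        for i in range(d + 1):
            out[tuple(rest) + (i, d - i)] += c * comb(d, i)
    return out

def pi_all(t):
    """Apply pi = I - u eps to every factor: pi kills x^0, fixes x^d for d >= 1."""
    return {degs: c for degs, c in t.items() if all(d >= 1 for d in degs)}

def antipode_coeff(d):
    """Coefficient c in S(x^d) = c x^d from the geometric-series formula."""
    if d == 0:
        return 1  # the n = 0 term u eps contributes 1 on x^0
    total = 0
    t = {(d,): 1}  # x^d as a 1-fold tensor
    for n in range(1, d + 1):  # terms with n > d vanish: pi kills some factor
        # m^(n-1) multiplies the factors back into x^d, so just sum coefficients
        total += (-1) ** n * sum(pi_all(t).values())
        t = delta_last(t)
    return total

for d in range(8):
    assert antipode_coeff(d) == (-1) ** d
print("antipode series gives S(x^d) = (-1)^d x^d for d = 0..7")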

Next time we will see an application of Hopf algebras to renormalization in quantum field theory. If you want to read about Hopf algebras, the standard reference is the 1969 book "Hopf Algebras" by Moss Sweedler, who is known for the so-called Sweedler notation. Personally I do not like the style of the book because you get lost in irrelevant details and miss the forest for the trees, but it is a good reference.

Wednesday, December 17, 2014

Coalgebras


Last time we introduced the coproduct, which is the essential ingredient of a coalgebra. How can we understand it? If we think of the product as a machine which eats two numbers and generates another one, we can understand the coproduct as the same machine working in reverse. A Xerox machine can be understood as a coproduct, but a coproduct can be understood not only as a cloning machine but also as an action which breaks up an element into sub-elements. For example, a complex number can be decomposed into a real part and an imaginary part, and each of those is again a kind of complex number.

One funny example comes from shuffling cards: cutting a deck of cards in two is the coproduct, while riffling the two halves back together in all possible ways is the product. Renormalization techniques in quantum field theory also generate coproducts. Here is a partial list of well studied mathematical examples. The coproduct is usually expressed with the symbol \( \Delta \) and the product is represented by the symbol \( m \).

The first (and most trivial) example comes from group theory. Consider finite linear combinations of group elements:

\( kG = \{ \sum_{i=1}^n \alpha_i g_i | \alpha_i \in k, g_i \in G\}\)

\(\Delta g = g\otimes g\)

This is nothing but a basic cloning operation. A slightly more complex example comes from the polynomial ring \( k[x] \):

\( \Delta (x^n) = \sum_{i+j=n} \binom{n}{i}\, x^i \otimes x^j \)
\( m(x^i \otimes x^j) = x^{i+j} \)
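To make these formulas concrete, here is a tiny Python sketch (an ad hoc dictionary representation, not a library API) computing the coproduct and the product on monomials. Note that multiplying the coproduct back together gives \( 2^n x^n \), so the coproduct is the machine running in reverse only in a formal sense:

from math import comb

# Monomial tensors as dicts {(i, j): coefficient}.

def Delta(n):
    """Delta(x^n) = sum_{i+j=n} C(n,i) x^i (x) x^j."""
    return {(i, n - i): comb(n, i) for i in range(n + 1)}

def m(t):
    """m(x^i (x) x^j) = x^{i+j}, extended linearly; returns {degree: coeff}."""
    out = {}
    for (i, j), c in t.items():
        out[i + j] = out.get(i + j, 0) + c
    return out

print(Delta(3))     # {(0, 3): 1, (1, 2): 3, (2, 1): 3, (3, 0): 1}
print(m(Delta(3)))  # {3: 8}: multiplying back gives 2^n x^n, not x^n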

Much fancier examples come from the cohomology ring of a Lie group, or from the universal enveloping algebra of a Lie algebra, which gives rise to the so-called quantum groups with major physical applications.

For now the question is: can we generate a coproduct given a product, and a product given a coproduct? The answer is rather surprising: yes in both directions in the finite dimensional case, but in general one can only generate a product given a coproduct. The reason is duality: transposing a coproduct \( \Delta : C \rightarrow C \otimes C \) always gives a product on \( C^* \) because \( C^* \otimes C^* \subseteq {(C \otimes C)}^* \), but transposing a product gives a map \( A^* \rightarrow {(A \otimes A)}^* \), which lands inside \( A^* \otimes A^* \) only in the finite dimensional case.

Then can we have a mathematical structure which has both a product and a coproduct? If such a structure exists, it is called a bialgebra, and it respects the compatibility relation expressed by the commuting diagram below, where \( \tau \) is the transposition of the terms in the tensor product.
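In formulas, the commutativity of the diagram says that the coproduct is a morphism of algebras:

\( \Delta \circ m = (m \otimes m) \circ (I \otimes \tau \otimes I) \circ (\Delta \otimes \Delta) \)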



Let's take the group example. Start from the upper left corner with \( g_1 \otimes g_2\) and move it horizontally:

\(g_1 \otimes g_2 \rightarrow g_1 g_2 \rightarrow g_1 g_2 \otimes g_1 g_2\)

Then take it down, across and up, and see that you get the same thing, meaning the diagram commutes:

\(g_1 \otimes g_2 \rightarrow g_1\otimes g_1 \otimes g_2 \otimes g_2 \rightarrow g_1\otimes g_2 \otimes g_1 \otimes g_2 \rightarrow g_1 g_2 \otimes g_1 g_2\)
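The same diagram chase can be done by brute force in code; here is a tiny Python sketch (ad hoc names, chosen for this illustration) for the cyclic group \( Z/7 \):

# Tracing the two paths of the compatibility diagram for kG with G = Z/7,
# written additively: both paths must land on Delta(g1 g2) = g1 g2 (x) g1 g2.

n = 7

def m(a, b):      # group product: addition mod n
    return (a + b) % n

def Delta(g):     # coproduct: g (x) g
    return (g, g)

for g1 in range(n):
    for g2 in range(n):
        # top path: multiply, then apply the coproduct
        top = Delta(m(g1, g2))
        # bottom path: Delta (x) Delta, swap the middle factors, multiply pairwise
        a, b = Delta(g1)
        c, d = Delta(g2)
        # (a, b, c, d) -> (a, c, b, d) is (I (x) tau (x) I)
        bottom = (m(a, c), m(b, d))
        assert top == bottom

print("bialgebra compatibility verified for all pairs in Z/7")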

Usually these kinds of commutative diagrams are fancy ways of expressing mathematical identities. For the polynomial ring the commutativity of the diagram means that the following holds:

\( \binom{m+n}{k} = \sum_{i+j=k} \binom{m}{i} \binom{n}{j} \)
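This is Vandermonde's identity. Here is a quick brute force check in Python (the ranges are arbitrary, chosen only for illustration):

from math import comb

# C(m+n, k) = sum_{i+j=k} C(m, i) C(n, j); math.comb returns 0 when i > m.
for m in range(8):
    for n in range(8):
        for k in range(m + n + 1):
            assert comb(m + n, k) == sum(comb(m, i) * comb(n, k - i)
                                         for i in range(k + 1))
print("Vandermonde identity verified for m, n < 8")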

Hopf algebras are special kinds of bialgebras, so it is no wonder they have major applications in combinatorics.

Next time we'll talk about Hopf algebras. Please stay tuned.

Wednesday, December 10, 2014

Fun with k-Algebras


Continuing from last time, suppose we have a bilinear map \( f \) from \( V \times W \) to \( L \), where V, W, and L are vector spaces. Then there is a canonical map \( \Phi \) from \( V \times W \) to \( V \otimes W \), and the universal property states that there is a unique linear map \( g \) from \( V \otimes W \) to \( L \) such that \( f = g \circ \Phi \), i.e. the diagram below commutes:

                 \( \Phi \)
\( V \times W \) ----------> \( V \otimes W \)
          \                        |
           \                       |
            \                      |
     \( f \)  \                    |  \( g \)
               \                   |
                \                  v
                 `------------> \( L \)

The proof is trivial: "f" is used to define a linear map from the free vector space \( F (V \times W) \) to \( L \), and because f is bilinear this map vanishes on the subspace we quotient by, so it descends through the usual equivalence relations of the tensor product to define the map \( g \).

This all looks a bit pedantic, but the point is that any multiplication rule in an algebra \( A \) is a bilinear map from \( A \times A \) to \( A \), and we can now recast it in the tensor formalism as a linear map from \( A \otimes A \) to \( A \).

In particular, consider the algebra \( A \) of matrices over a field \( k \). Matrix multiplication is associative, and we also have a unit of the algebra: the diagonal matrix whose entries are the identity of \( k \), i.e. the identity matrix. This is a prototypical example of what is called a k-algebra.

Can we formalize the associativity and the unit using the tensor product language? Indeed we can and here is the formal definition:

A k-algebra is a k-vector space \( A \) which has a linear map \( m : A\otimes A \rightarrow A \) called the multiplication and a linear map \( u: k \rightarrow A \) called the unit, such that the following diagrams commute:
                     \( m \otimes 1\)
\( A \otimes A \otimes A \) ----------> \(A \otimes A\)
              |                             |
              |                             |
  \( 1 \otimes m\) |                             |  \( m \)
              |                             |
              |                             |
             \/                            \/
          \(A \otimes A\)   ---------->    \(A \)
                           \( m\)

and

                         \( A \otimes A \)
                          /\     |     /\
                         /       |       \
        \( u \otimes 1 \)        |        \( 1 \otimes u \)
                       /         |         \
        \( k \otimes A \)     \( m \)     \( A \otimes k \)
                       \         |         /
             \( \cong \)         |         \( \cong \)
                         \       v       /
                          `-> \( A \) <-'

where the diagonal arrows at the bottom are the canonical isomorphisms \( \lambda \otimes a \mapsto \lambda a \) and \( a \otimes \lambda \mapsto a \lambda \).

Please excuse the sloppiness of the diagrams, it is a real pain to draw them.

So what are those commuting diagrams really saying? 

The first one states that:

\( m(m (a\otimes b) \otimes c) = m(a \otimes m(b \otimes c)) \)

In other words, associativity of the multiplication: \( (ab)c = a(bc) \)

The second one defines the algebra unit:

\( u(1_k )\, a = a = a\, u(1_k )\)

which means that \( u (1_k) = 1_A \), the multiplicative identity of \( A \).
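To see the two diagrams in action on the prototypical example, here is a short numpy check for \( 2 \times 2 \) real matrices (the matrix size and random seed are arbitrary, chosen for illustration):

import numpy as np

# m is matrix multiplication and u(1_k) is the identity matrix.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

# first diagram: associativity  m(m (x) 1) = m(1 (x) m)
assert np.allclose((A @ B) @ C, A @ (B @ C))

# second diagram: the unit  u(1_k) a = a = a u(1_k)
one = np.eye(2)  # u(1_k) = 1_A
assert np.allclose(one @ A, A) and np.allclose(A @ one, A)

print("associativity and unit diagrams hold for 2x2 matrices")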

So why do we torture ourselves with this fancy pictorial way of representing trivial properties of algebra? Because now we can do a very powerful thing: reverse the direction of all the arrows. What do we get when we do that? We get a brand new concept: the coproduct. Stay tuned next time to explore the wonderful properties of this new mathematical concept.

Wednesday, December 3, 2014

A fresh look at the tensor product

(the very first lesson in category theory)


Recently I was reviewing Hopf algebras and their applications in physics. This is a very interesting and straightforward topic, on par with the linear algebra students learn in their first year of college, but unfortunately it is not well known in the physics community. Starting with this post I will present a gentle introduction and motivation, and we'll get all the way to the applications in renormalization theory for quantum field theory.

The place to start is to understand what a tensor product really is. In physics one encounters tensors every step of the way, and the usual drill is about covariant and contravariant tensors, but this is not what tensors are really about.

We want to start with two vector spaces V and W over the real numbers and attempt to combine them. The easiest way to do that is to take the Cartesian product \( V \times W \), whose elements are the pairs (v, w) with each component in its own vector space. If those spaces are finite dimensional, say of dimensions m and n, what is the dimension of \( V \times W \)? The dimension is m+n, but we want to combine them in a tighter way such that the resulting object has dimension m*n, not m+n. How can we get from the Cartesian product to the tensor product?

The mathematical answer is a bit dry, so let's simply state it. We start with the free vector space F(V) over our field of real numbers, and this is nothing but the set of formal sums of elements of V such as:

\( \alpha_1 v_1 + \alpha_2 v_2 + \cdots + \alpha_n v_n \)

with \( \alpha_i \in \mathbb{R} \) and \( v_i \in V \).

Then of course we can consider \( F(V \times W) \), and now let's ask: what is the dimension of this object? Its dimension equals the number of elements of \( V \times W \) (each pair is a basis vector), which is infinite, and so we have constructed a big monstrosity. We want the dimension of the tensor product to be m*n, so to get from \( F(V \times W) \) to \( V\otimes W \) we want to cut down the dimension of \( F(V \times W) \) by appropriate equivalence relations which capture the usual behavior of tensor products.

To recap, we started with \(V \times W\) but this is too small. We expand it to \(F(V\times W)\) but this is too big, and now we'll cut it down to "Goldilocks" size by equivalence relations.

What are the properties of \(v\otimes w\)? Not too many:

  • \(\lambda (v\otimes w) = (\lambda v)\otimes w\)
  • \(\lambda (v\otimes w) = v\otimes (\lambda w)\)
  • \(v_1\otimes w + v_2\otimes w = (v_1 + v_2)\otimes w\)
  • \(v\otimes w_1 + v\otimes w_2 = v\otimes (w_1 + w_2 )\)

Then \( F(V\times W) \) modulo the equivalence relations above is the one and only tensor product \( V\otimes W \), with dimension m*n.
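For finite dimensional spaces we can see all of this concretely in code. Here is a small sketch using numpy's kron as a model of \( \otimes \) on coordinate vectors (the dimensions and random seed are arbitrary, chosen for illustration):

import numpy as np

# np.kron of two 1-D arrays models v (x) w in coordinates, so the four
# relations above become array identities, and dimensions multiply.
m, n = 3, 4
rng = np.random.default_rng(0)
v, v1, v2 = rng.standard_normal((3, m))
w, w1, w2 = rng.standard_normal((3, n))
lam = 2.5

assert np.kron(v, w).shape == (m * n,)   # dim(V (x) W) = m*n, not m+n

assert np.allclose(np.kron(lam * v, w), lam * np.kron(v, w))
assert np.allclose(np.kron(v, lam * w), lam * np.kron(v, w))
assert np.allclose(np.kron(v1 + v2, w), np.kron(v1, w) + np.kron(v2, w))
assert np.allclose(np.kron(v, w1 + w2), np.kron(v, w1) + np.kron(v, w2))

print("bilinearity relations and dim = m*n verified with np.kron")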

So what? What is the big deal? Stay tuned for next time when this humble tensor product will transform the way we look at products in general.