## Happy New Year!

Wishing everyone a Happy New Year! Let 2016 bring you hope, happiness, and prosperity.

No physics post today, only a brainteaser in honor of the new Star Wars: find the panda in the picture below.

## Can quantum mechanics coexist with classical physics?

Continuing the discussion of quantum mechanics interpretations, today I want to look in depth at what it means to have a composite quantum-classical system. In the standard Copenhagen interpretation, the measurement apparatus is described by classical physics. Only two theories of nature are known to be possible: quantum and classical mechanics, and classical mechanics is usually described as the limit of quantum mechanics when $$\hbar \rightarrow 0$$. How can we introduce a fundamental theory of nature by using its limit case, which is of a different character (it obeys local realism)? This is one of the usual criticisms of Copenhagen and a motivation for people to look for local realistic models of quantum mechanics. More important, I think, is to decide whether there can be any consistent quantum-classical description of a physical system. But are there real-world examples of such composite systems? I once saw this example given at a physics conference: a transistor. We do not see transistors in a superposition state, yet their inner workings are definitely quantum mechanical.

So now that the stage is set, we can provide some answers. The framework is yet again the categorical approach to quantum mechanics, and part of what I will state today was discovered by one of Emile Grgin's colleagues at Yeshiva University, Debendranath Sahoo: http://arxiv.org/pdf/quant-ph/0301044v3.pdf

So let's start at the beginning: is quantum mechanics defined by using one of its limits? The answer is no, but this is only a recent development, with the complete derivation of quantum mechanics from physical principles in the finite dimensional case. Quantum mechanics stands on its own, without help from classical mechanics.

Is classical mechanics defined by the limit $$\hbar \rightarrow 0$$? Surprisingly, no again! This limit is mathematically sloppy; the proper limit is for $$\hbar$$ to become a nilpotent element: $$\hbar^2 = 0$$. If you remember the map $$J$$ between observables and generators, its dimension is actually $$\hbar$$, and while in quantum mechanics $$J^2 = -1$$, in classical physics $$J^2 = 0$$, meaning $$\hbar^2 = 0$$ in classical physics. Another way to see this is by looking at deformation quantization approaches and convincing yourself this is the proper limit. But if we are not sticklers for math and we adopt a physical point of view, $$\hbar \rightarrow 0$$ is good enough.

Can we consistently combine quantum and classical mechanics? No again, because quantum and classical mechanics belong to disjoint composability classes. But what would happen if we tried? This question was answered by Debendranath Sahoo in the paper above. Let's recall the fundamental composition relationships (in either quantum or classical mechanics):

$$\alpha_{12} = \alpha_1 \otimes \sigma_2 + \sigma_1 \otimes \alpha_2$$
$$\sigma_{12} = \sigma_1 \otimes \sigma_2 + J^2 \alpha_1 \otimes \alpha_2$$

If $$\alpha_1$$ is the commutator, $$\alpha_2$$ is the Poisson bracket, $$\sigma_1$$ is the Jordan product, and $$\sigma_2$$ is the regular function multiplication, what would $$\alpha_{12}$$ and $$\sigma_{12}$$ be? In quantum mechanics one can have superselection rules, and nothing prevents us from combining the 4 ingredients in a marriage of convenience. The penalty, however, is that we get something which lacks invariance under tensor composition! As such there cannot be any possible generalization of the commutator in the quantum-classical case. People did try to invent such things, but no such proposal withstood scrutiny. And this is not the only penalty!
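For contrast, the pure quantum-quantum case composes perfectly. Here is a quick numerical sketch (assuming the usual commutator and Jordan product realizations of $$\alpha$$ and $$\sigma$$, with $$\hbar$$ normalized to 2 so the relationships above hold without extra factors):

```python
import numpy as np

rng = np.random.default_rng(3)
J, hbar = 1j, 2.0  # normalization in which the displayed relationships hold with no extra factors

def rand_hermitian(n=2):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def sigma(A, B):  # Jordan product
    return (A @ B + B @ A) / 2

def alpha(A, B):  # Lie product (scaled commutator)
    return (J / hbar) * (A @ B - B @ A)

A1, A2, B1, B2 = (rand_hermitian() for _ in range(4))
A12, B12 = np.kron(A1, A2), np.kron(B1, B2)

# alpha_12 = alpha (x) sigma + sigma (x) alpha
print(np.allclose(alpha(A12, B12),
                  np.kron(alpha(A1, B1), sigma(A2, B2))
                  + np.kron(sigma(A1, B1), alpha(A2, B2))))  # True

# sigma_12 = sigma (x) sigma + J^2 alpha (x) alpha
print(np.allclose(sigma(A12, B12),
                  np.kron(sigma(A1, B1), sigma(A2, B2))
                  + J**2 * np.kron(alpha(A1, B1), alpha(A2, B2))))  # True
```

Replacing $$\alpha_2$$ with a Poisson bracket has no such realization, which is exactly where the trouble with mixing begins.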

What Sahoo found (working the problem Grgin-Petersen style) is that there is no backreaction from the quantum to the classical system! I double-checked the proof; it is correct, and because it involves a lot of LaTeX typing I will not repeat it here, but you can read it in the paper. The result has two main implications:
• Gravity has to be quantized: you cannot get a self-consistent theory of gravity in a mixed quantum-classical setting.
• The measurement devices should not be treated classically because they will not be able to measure anything: there is no information transfer from the quantum to the classical system.
If we are to solve the measurement problem, we must do it solely in the quantum world. MWI is not the answer, and it is no longer the only purely unitary quantum game in town. The categorical approach shows the way and provides a brand new solution. Despite its classical appearance, the transistor is still only a quantum object. The main problem is not interpreting quantum mechanics to appease our classical intuition, but explaining the emergence of classical behavior. Decoherence only provides a partial answer. Please stay tuned.

PS: today is December 24

## Can anyone defend the Many Worlds Interpretation?

Quantum mechanics has many interpretations, or classes of interpretations with internal splits: Copenhagen, Bohmian, spontaneous collapse, many worlds, transactional, etc. Because my take on the matter falls within the neo-Copenhagen family, I do not follow very closely the interpretations which fall outside my interest. But although I disagree with non-Copenhagen interpretations, I do understand the approaches they take, with only one exception: the many worlds interpretation (MWI). Not for lack of trying, but as far as I have dug into it, MWI makes no sense whatsoever to me (except Zurek's approach, which technically is not MWI). So here is my challenge: can anyone defend MWI in a way that answers the issue I raise below?

The ground rule of any interpretation is first and foremost to recover the standard quantum mechanics predictions; otherwise it cannot call itself a "quantum mechanics interpretation". Quantum mechanics has a novel feature called the Born rule. Let me digress for a bit and expand on why this does not occur in classical physics. If you recall from prior posts, in configuration space one encounters the Hamilton-Jacobi equation in classical mechanics, and the Schrodinger equation in quantum mechanics. In classical physics in phase space we need both the position and momentum of a particle to specify the trajectory, and therefore it should come as no surprise that in configuration space, where we only have positions, there can be crossing trajectories in the Hamilton-Jacobi case. Therefore the information content attached to a configuration point is ambiguous in classical physics: no Born rule in classical physics. In the quantum case, however, in configuration space we can attach an information interpretation to the Schrodinger wavefunction, known as the Born rule. The Born rule shows that quantum mechanics is probabilistic and initial conditions are not required. (In the Bohmian case you can add initial conditions only in a contextual (parochial) way, respecting an additional constraint called quantum equilibrium; otherwise you violate Born rule.)

Is MWI compatible with Born rule?

But what is MWI, and why is it considered? MWI purports to solve the measurement problem without resorting to the collapse of the wavefunction.

Suppose we have two outcomes, say spin up and down. Once spin is measured up, a quick subsequent measurement confirms the result, and the same for down. Since the wavefunction respects the superposition principle, we can obtain a superposition of up and down, with the measurement device pointing up for spin up and pointing down for spin down. In other words, we arrive at the famous half dead, half alive Schrodinger cat which does not occur in nature. Everett noticed that there is a correlation between how the measurement device points and the spin value, and he proposed that the world splits between the different outcomes: in each branch the observer is aware only of his own unique measurement result. One proponent of this narrative was Sidney Coleman!!! (I have great respect for the late Sidney Coleman, but in this instance I think he was shooting from the hip.) I grant that MWI is an appealing idea, but does it stand up to close scrutiny?

People naturally objected to the idea of split personality, or "the I problem", to which supporters can fire back with "you do not take quantum mechanics seriously enough to trust what it shows". There is also a "preferred basis problem", because the split can happen in an infinite number of bases. But to me the most important problem is the treatment of probabilities and agreement with Born rule. I think it is safe to say that everyone agrees the original Everett argument for why MWI obeys Born rule is not satisfactory. If Everett's derivation were correct, there would not be so many new "derivations" of Born rule in the MWI framework. I have found no satisfactory derivation of Born rule in MWI to date. Moreover, the only thing that makes sense to me is branch counting, and this definitely violates Born rule - no disagreement there either.

But why am I not convinced by the proposals for deriving Born rule? A common criticism is that those derivations are circular. I assert something stronger: when not circular, Born rule derivations in MWI are mathematically incorrect. Let me show why.

I discovered long ago that the simplest problems are the hardest, and I will use that here. Instead of muddying the waters with convoluted arguments and examples, let's streamline the basic system to the max. So consider a source of electrons which fires only one particle, say once a minute. Pass it through a Stern Gerlach device and select only the spin up branch. In other words, we prepare a source of single electrons with a known vertical spin. Then we pass our electron through a second Stern Gerlach device and measure the spin on, say, the x axis. Half the time we will get positive spin x and half the time negative spin x. In MWI both outcomes occur and I am split into two "me"s, each observing one definite outcome. So far so good, but now rotate one of the two devices by some angle theta. The statistics change!!! But what does MWI predict? The world still splits in two, and in one world I detect up and in the other down. In other words, no change, and this is the root cause of why MWI makes no sense.
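The mismatch is easy to make concrete with a small numerical sketch (plain textbook quantum mechanics; nothing here is specific to MWI): the Born weights follow $$cos^2(\theta/2)$$ as one device is rotated, while the branch count stays stubbornly at two.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def born_probabilities(theta):
    """Outcome probabilities when a spin prepared 'up' along z is measured
    along an axis tilted by theta in the x-z plane."""
    up = np.array([1, 0], dtype=complex)               # prepared state |z+>
    n_dot_sigma = np.cos(theta) * sz + np.sin(theta) * sx
    vals, vecs = np.linalg.eigh(n_dot_sigma)           # eigenvalues sorted: [-1, +1]
    return [abs(np.vdot(vecs[:, k], up))**2 for k in range(2)]

for theta in (0.0, np.pi / 3, np.pi / 2):
    # Born rule: [sin^2(theta/2), cos^2(theta/2)]; branch counting: always two branches
    print(round(theta, 3), [round(p, 3) for p in born_probabilities(theta)])
```

At $$\theta = \pi/3$$ the Born weights are 1/4 and 3/4, yet the branching structure is identical to the $$\theta = \pi/2$$ case.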

Now supporters of MWI are well aware of this fact and attempt to derive Born's rule regardless, starting from more or less natural assumptions. Let's dig deeper into their claims.

First we need to collect more data to make up meaningful statistics. For the first electron we have two branches: one up and one down: u d. For the second electron we have 4 branches: uu ud du dd; for the 3rd electron we have 8: uuu ... Now let's count in those branches how many spins are up and how many are down, regardless of the actual order of the events:

1st run:                                    1u      1d
2nd run:                               1uu     2ud     1dd
3rd run:                          1uuu    3uud    3udd    1ddd
4th run:                     1uuuu   4uuud   6uudd   4uddd   1dddd

We get Pascal's triangle and the binomial coefficients. This is nothing like Born rule, and the frequentist approach to statistics is rejected by the MWI supporters. Instead they adopt the Bayesian approach. For simple problems like this the frequentist and Bayesian approaches predict the same things, so something else must be thrown into the mix: "the rational observer". A "rational observer" would have expectations about probabilities before the actual experimental outcome is obtained, and MWI supporters contend that, to a rational observer making rational decisions, while branching is incompatible with Born rule, the sane way for such a person to behave is as if Born rule were true. Something like: the Earth moves around the Sun, but to us it appears that the Sun moves around the Earth. This line of reasoning was introduced by Deutsch and continued by Wallace.
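The tally above is trivial to automate. A minimal sketch comparing branch counts with Born weights (here $$p$$ stands in for $$cos^2(\theta/2)$$ at some assumed device orientation):

```python
from math import comb

def branch_counts(n):
    """Branches with k 'up' results after n runs: row n of Pascal's triangle."""
    return [comb(n, k) for k in range(n + 1)]

def born_weights(n, p):
    """Born-rule weight of the set of branches with k 'up' results,
    for single-run 'up' probability p."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

print(branch_counts(4))       # [1, 4, 6, 4, 1] -- independent of theta
print(born_weights(4, 0.75))  # moves with theta through p = cos^2(theta/2)
```

The counts never change with the angle; the weights do, which is the whole problem in two lines.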

Several natural sounding principles were proposed to justify this apparent emergence of Born rule in the MWI world. Now Born rule deals with the complex coefficients in front of the basis kets, and those coefficients are simply ignored by branching, because this is what it means to have a relative correlation between the wavefunction and the measurement device. To derive Born rule you must deal with those coefficients, and moreover you must do it in an indirect way. The only indirect way possible is for your "natural sounding principle" to say something nontrivial about a superposition. And in the best case scenario what you actually say about the superposition is nothing but Born rule in disguise, and you have a circular argument.

But it gets worse if you claim you broke the circularity: you become mathematically inconsistent. Here is why:

1. To prove Born rule in MWI you need to reject branch counting.
Why? Because Born rule's prediction changes with changing complex coefficients, but branch counting does not.

2. Branch counting arises as a particular case of Born rule. When? In the particular case when the complex coefficients are equal.

So the very act of proving even an apparent Born rule inherently contains a contradiction. All mathematically consistent proposals for deriving Born rule in MWI I am aware of are circular arguments, and all their "natural sounding principles" respect branch counting as well.

In summary, coming back to my physical example with the electron source and the two S-G devices: because branching happens the same way regardless of the orientation of the two devices, there is a one-to-(uncountably infinite)-many degeneracy problem which MWI cannot hope to solve by relative arguments alone. In the frequentist approach it is impossible to derive Born rule, which acts as a removal of this degeneracy, and MWI supporters pin their hopes on derivations of an apparent Born rule from some "natural principles". However, all the derivations I have studied so far are circular, and I know one by Tipler which is mathematically incorrect (maybe I should write a rebuttal to that one; it was published last year). Moreover, they cannot reject branch counting because it follows from Born rule when all the scalar coefficients are equal. If you claim you reject branch counting, you are killing your "apparent" Born rule too.

I am challenging MWI supporters to present a valid non-circular derivation of Born rule (either real or apparent). I don't have the time to closely follow MWI developments and maybe there is a recent proposal I missed which can stand up to scrutiny. However I contend it can't be done for the reasons outlined above.

## The algebraic structure of Quantum Mechanics

Today we will conclude the mini-series on quantum mechanics reconstruction as we have all the required ingredients. We only need to perform one last computation to arrive at an associative product, the regular complex number multiplication in the Hilbert space representation.

But first, let me point out some strange notation I have been using in all the prior posts: $$J$$, which obeys $$J^2 = -1$$. Why not simply call it $$i = \sqrt{-1}$$, the imaginary unit? The reason is that $$J$$ is more than the imaginary unit; it is in fact a tensor of rank (1,1), $$J^{I}_{J}$$, and represents a map from the Jordan algebra to the Lie algebra. It also forms an "almost complex structure" which, when multiplied (in the flat case) with the symplectic form, gives rise to a metric tensor generating a Kahler manifold. So there is much more to it, but I will not expand on this because it would lead us down the hard path of Poisson manifold quantization. To put it in perspective, the proof of Poisson manifold quantization got Maxim Kontsevich the Fields Medal.

Only in the simplest 1-dimensional case can we consider $$J = i$$. In the finite dimensional case, by some algebraic magic (the Artin-Wedderburn theorem) one can show that the number system must be that of the complex numbers in the case of transition probabilities (and I won't expand on this either; it is a large topic in itself).

So why do we need associativity? When you run, say, an electron through a Stern Gerlach device, you can pass the same electron through another Stern Gerlach device and concatenate two experiments: the output of one is the input of the next. Now imagine three such devices (a, b, c) and ask yourself: what constitutes an experiment? The experiment separation occurs only in your mind, and to consistently reason about such a setup and define states you need associativity: $$(ab)c = a(bc) = abc$$.

But so far we have only two non-associative products: $$\alpha$$ and $$\sigma$$. Can they be combined to form an associative product? Up to $$\hbar /2$$ normalization factors we know that the $$\sigma$$ and $$\alpha$$ associators obey:

$$[A, B, C]_{\sigma} + J^2 [A, B, C]_{\alpha} = 0$$

So how about the product $$\beta$$:

$$\beta = \sigma \pm J \alpha$$

Its associator involves two beta products, and it is not immediately obvious that it is zero, but it can be computed and it is not hard to show that:

$$[A, B, C]_{\beta} = [A, B, C]_{\sigma} + J^2 [A, B, C]_{\alpha} \pm J ((A \sigma B) \alpha C + (A \alpha B) \sigma C - A \sigma (B \alpha C) - A \alpha (B \sigma C))$$

which is indeed zero: the first two terms cancel by the associator identity above, and the four cross terms cancel after using Leibniz together with the antisymmetry of $$\alpha$$ and the symmetry of $$\sigma$$.
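If you do not want to grind through the algebra, a numerical check with random Hermitian matrices is convincing. The sketch below assumes the usual realizations $$A \sigma B = (AB+BA)/2$$ and $$A \alpha B = (J/\hbar)(AB-BA)$$, with $$\hbar = 2$$ so that the associator identity holds in the un-normalized form used above:

```python
import numpy as np

rng = np.random.default_rng(0)
J, hbar = 1j, 2.0

def rand_hermitian(n=3):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def sigma(A, B):  # Jordan product
    return (A @ B + B @ A) / 2

def alpha(A, B):  # Lie product (scaled commutator)
    return (J / hbar) * (A @ B - B @ A)

def beta(A, B):   # candidate associative product
    return sigma(A, B) + J * alpha(A, B)

def associator(prod, A, B, C):
    return prod(prod(A, B), C) - prod(A, prod(B, C))

A, B, C = (rand_hermitian() for _ in range(3))

print(np.allclose(associator(sigma, A, B, C), 0))  # False: sigma alone is not associative
print(np.allclose(associator(sigma, A, B, C)
                  + J**2 * associator(alpha, A, B, C), 0))  # True: the associators cancel
print(np.allclose(associator(beta, A, B, C), 0))   # True: beta is associative
```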

The sign of $$J$$ does not matter: traditionally the plus sign is used in the usual Hilbert space representation and the minus sign in the Moyal bracket phase space representation, but those are only historical accidents.

So now we can present the algebraic structure of quantum mechanics which is:

a real vector space equipped with two bilinear maps $$\sigma$$ and $$\alpha$$ such that the following conditions apply:

• $$\alpha$$ is a Lie algebra,
• $$\sigma$$ is a Jordan algebra,
• $$\alpha$$ is a derivation for $$\sigma$$ and $$\alpha$$,
• $$[A, B, C]_{\sigma} + \frac{J^2 \hbar^2}{4} [A, B, C]_{\alpha} = 0$$
where $$J \rightarrow (-J)$$ is an involution, $$1 \alpha A = A \alpha 1 = 0$$, $$1 \sigma A = A\sigma 1 = A$$, and $$J^2 = -1, 0, +1$$.

Quantum mechanics corresponds to $$J^2 = -1$$, and classical mechanics corresponds to $$J^2 = 0$$.

From the associator identity it is immediately clear that in the classical case the product sigma is associative, and in fact it is nothing but the usual function multiplication. It is not straightforward to prove, but the result is available in books about Poisson manifolds: alpha in this case is nothing but the usual Poisson bracket. And so the two product algebra above, obtained by invariance under composition, leads in a straightforward manner to Poisson manifolds and classical physics. To get to quantum mechanics we have three roads:

1. deform the associator identity, changing $$J$$ from $$J^2 = 0$$ to $$J^2 = -1$$, and use Kontsevich's result.
2. use positivity and the GNS construction to find a representation of the two product algebra in a Hilbert space.
3. use hindsight and "guess" the usual representation as the commutator and the symmetrized product in the usual Hilbert space, and verify that they satisfy all the conditions of the two product algebra.

None of the three roads is satisfactory, however, and more needs to be done. The third option gets you to the usual Hilbert space representation but says nothing about uniqueness. The first option does not work in the case of spin, and the second option, while better than the third, still has an open uniqueness problem in the infinite dimensional case.

Now we can look back and see what was obtained:

The categorical (algebraic) part of the reconstruction works because of the existence of a universal property which translates the tensor product physical principle of invariance under composition into algebraic consequences. The non-algebraic part is an open problem in the infinite dimensional case, and different quantization techniques correspond to the second de Rham cohomology, making the classification of representations a hard problem.

The classification of Jordan algebras is however well known, and this restricts the possible number system representations to the reals, complex numbers, and quaternions in the case of transition probabilities. There is no such thing as octonionic quantum mechanics, because octonions are not associative and the product beta is.

If the questions we ask nature result not in a probability but in a probability current, then we actually get Dirac's equation by an unusual Hodge theory route. The quantum mechanics number system is SL(2,C) in this case, and the Hilbert space generalizes to a Hilbert module. This shows again that the complete classification of the two-product algebra representations is not at all trivial.

## The Jordan algebra of observables

In quantum mechanics the observables are represented by Hermitian operators: $$O^{\dagger} = O$$. In general Hermitian operators need not commute, and this corresponds to incompatible observables like position and momentum. Given two observables A and B, can we generate another observable out of them?

Let's try the simplest idea: AB. Is this self-adjoint? Let's see:

$${(AB)}^{\dagger} = B^{\dagger} A^{\dagger} = BA \ne AB$$

So the next idea to try is a symmetrized product:  $$1/2 (AB + BA)$$ .

This is called the Jordan product: $$A \sigma B = 1/2 (AB + BA)$$, and it gives rise to the Jordan algebra of observables.

 Pascual Jordan

This product has two properties:

1) symmetry: $$A \sigma B = B \sigma A$$
2) Jordan identity: $$(A \sigma B) \sigma (A \sigma A) = A \sigma (B \sigma (A \sigma A))$$
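Both properties, along with the failure of plain associativity, are easy to confirm numerically for the matrix realization (a sketch; the matrices and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def rand_hermitian(n=3):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def jordan(A, B):
    return (A @ B + B @ A) / 2

A, B, C = (rand_hermitian() for _ in range(3))
AA = jordan(A, A)

print(np.allclose(jordan(A, B), jordan(A, B).conj().T))  # True: the product is Hermitian
print(np.allclose(jordan(A, B), jordan(B, A)))           # True: symmetry
print(np.allclose(jordan(jordan(A, B), AA),
                  jordan(A, jordan(B, AA))))             # True: Jordan identity
print(np.allclose(jordan(jordan(A, B), C),
                  jordan(A, jordan(B, C))))              # False: not associative in general
```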

Can we derive those properties from categorical arguments, as we derived the properties of α last time? Drum roll... Yes we can!!!

Last time we proved that α is antisymmetric: $$A \alpha B = - B \alpha A$$, and from the fundamental relationship:

$${\alpha}_{12} = \alpha \otimes \sigma + \sigma \otimes \alpha$$

it is trivial to show that σ must be symmetric, thus respecting the first property of the Jordan algebra. Proving the second property is unfortunately a much more involved project, about two orders of magnitude harder than what I left out last time as a simple exercise. Let me set the stage for where we want to go. Neither of the products α and σ is associative. To quantify the violation of associativity we introduce what is called the associator:

$$[A, B, C]_{o} = (A o B) o C - A o (B o C)$$

where "o" can be either $$\alpha$$ or $$\sigma$$. Then the Jordan and Lie algebras of quantum mechanics obey this identity:

$$[A, B, C]_{\sigma} + \frac{J^2 \hbar^2}{4} [A, B, C]_{\alpha} = 0$$

where $$J \hbar /2$$ is a one-to-one map between observables and generators, and $$J$$, obeying $$J^2 = -1$$, plays the role of the imaginary unit: multiplying a Hermitian operator by it generates an anti-Hermitian operator.

You can convince yourself this is true using the usual realizations of the Jordan and Lie products:

$$A\sigma B = \frac{1}{2} (AB + BA)$$
$$A\alpha B = \frac{J}{\hbar} (AB - BA)$$
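Here is that check carried out numerically with random Hermitian matrices (a sketch using the realizations just given, with $$J = i$$; any value of $$\hbar$$ works, so I take $$\hbar = 1$$). It also confirms that the alpha associator vanishes when the third slot is $$A \sigma A$$:

```python
import numpy as np

rng = np.random.default_rng(1)
J, hbar = 1j, 1.0  # any hbar works

def rand_hermitian(n=4):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def sigma(A, B):  # Jordan product
    return (A @ B + B @ A) / 2

def alpha(A, B):  # Lie product
    return (J / hbar) * (A @ B - B @ A)

def associator(prod, A, B, C):
    return prod(prod(A, B), C) - prod(A, prod(B, C))

A, B, C = (rand_hermitian() for _ in range(3))

identity = associator(sigma, A, B, C) + (J**2 * hbar**2 / 4) * associator(alpha, A, B, C)
print(np.allclose(identity, 0))  # True

# the alpha associator with C = A sigma A also vanishes
print(np.allclose(associator(alpha, A, B, sigma(A, A)), 0))  # True
```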

Now if this associator identity is true, proving the second Jordan identity is trivial, because that identity can be rewritten as: $$[A, B, A \sigma A]_{\sigma} = 0$$

What is the alpha associator $$[A, B, A \sigma A]_{\alpha}$$? I leave it as an exercise to show that it is indeed zero by expanding the associator and using the Leibniz identity for α.

Now our goal is to derive $$[A, B, C]_{\sigma} + \frac{J^2 \hbar^2}{4} [A, B, C]_{\alpha} = 0$$ from invariance under composability and time evolution.

But first, where does $$b_{11} = J^2 \hbar^2 / 4$$ come from? If you recall, two posts ago I derived the following fundamental composition relationship as a coproduct:

$$\Delta (\alpha ) = \alpha \otimes \sigma + \sigma \otimes \alpha$$
$$\Delta (\sigma) = \sigma \otimes \sigma + b_{11}\alpha \otimes \alpha$$

$$b_{11}$$ was a free parameter which can be normalized to -1 for quantum mechanics, 0 for classical mechanics, and +1 for the unphysical "hyperbolic quantum mechanics". Here, however, I simply want to normalize it as $$b_{11} = J^2 \hbar^2 / 4$$. Why? To recover the usual definitions of the commutator and the Jordan product. In other words, convenience.

If last time we used $$\Delta (\alpha ) = \alpha \otimes \sigma + \sigma \otimes \alpha$$, now it is time to employ $$\Delta (\sigma) = \sigma \otimes \sigma + \frac{J^2 \hbar^2}{4}\alpha \otimes \alpha$$ as well.

The proof is rather long and was first obtained by Grgin and Petersen in 1976. I won't present its gory bookkeeping details, but I will give the starting point:

$$(A_1 \otimes A_2 ) \alpha_{12} ((B_1 \otimes B_2)\sigma_{12} (C_1 \otimes C_2))$$

Use the bipartite Leibniz identity along with the two coproducts and the Jacobi identity, and you'll reach the associator identity. The first time I double-checked Grgin's paper it took me a full week to work out the result given the intermediate steps in the paper, but now I can do it in about an hour of careful and boring bookkeeping.

We now have all the ingredients to put together the C* algebras of quantum mechanics and arrive at the usual Hilbert space formulation. Please stay tuned.

Historical notes:

I am very grateful to Emile Grgin for introducing me to his approach, which I managed to expand into a full blown reconstruction of quantum mechanics. The root idea came from Bohr himself, who passed his intuition to Aage Petersen, his personal assistant. Later on Petersen developed those ideas in collaboration with Emile Grgin at Yeshiva University. Unfortunately Grgin left academia and his ideas were forgotten. I got in contact with him after he retired, and later on I realized the categorical origin of the approach. Completely independently of me, Anton Kapustin of Caltech noticed the same 1976 paper, and we both came out with almost identical papers, mine written from the physics point of view and his from the math point of view. I uploaded my paper to the arXiv three weeks before him and had not noticed his paper. John Preskill made me aware of Kapustin's paper when we met at a conference. He was the very first person who understood my research.

## The Lie algebra of generators

Last time we derived the fundamental composition relationship, which is the root cause of why complex numbers play a key role in quantum mechanics. However the pattern:

Real          = real * real           -  imaginary * imaginary
Imaginary = imaginary * real + real * imaginary

can also be understood as:

Symmetric       = symmetric * symmetric       -  antisymmetric * antisymmetric
Antisymmetric = antisymmetric * symmetric + symmetric * antisymmetric

and when we look at it from this point of view we arrive at a Lie algebra.

Now let me start by giving a high level overview of Lie algebras. First, the name comes from Sophus Lie, a Norwegian mathematician (no, he was not Chinese :) ) who pursued doing for partial differential equations what Galois did for polynomial equations.

 Sophus Lie

Galois used the permutations of solutions to determine whether a particular polynomial equation is solvable. Instead of discrete symmetries, Lie used continuous symmetries to simplify partial differential equations. From this came the idea of a continuous group (like the rotation group). In physics one encounters the gauge groups of the Standard Model as examples of Lie groups. In special relativity one encounters SO(1,3).

Linearizing the Lie group around the identity gives rise to what is called a Lie algebra. The elements of a Lie algebra are sometimes called generators. From the Lie algebra one may reconstruct its Lie group by exponentiation, but this does not always work as there can be distinct Lie groups with the same Lie algebra.

Now back to $$\alpha$$: is it a Lie algebra product? It respects bilinearity and the Leibniz identity, but what about antisymmetry and the Jacobi identity? It turns out that all we need is antisymmetry! We get the Jacobi identity for free, because Leibniz + antisymmetry = Jacobi:

$$A \alpha (B\alpha C) = (A\alpha B) \alpha C + B\alpha (A \alpha C)$$

by antisymmetry:

$$A \alpha (B\alpha C) +C \alpha (A\alpha B) - B\alpha (A \alpha C)= 0$$
$$A \alpha (B\alpha C) +C \alpha (A\alpha B) + B\alpha (C \alpha A)= 0$$ q.e.d.
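A numerical illustration with the matrix commutator, the prototypical product obeying both Leibniz and antisymmetry (the commutator is only an example here; the derivation above is purely algebraic):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))

def br(X, Y):  # the matrix commutator as a concrete alpha
    return X @ Y - Y @ X

# Leibniz: A alpha (B alpha C) = (A alpha B) alpha C + B alpha (A alpha C)
print(np.allclose(br(A, br(B, C)), br(br(A, B), C) + br(B, br(A, C))))  # True
# antisymmetry
print(np.allclose(br(A, B), -br(B, A)))  # True
# hence Jacobi
print(np.allclose(br(A, br(B, C)) + br(C, br(A, B)) + br(B, br(C, A)), 0))  # True
```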

But how can we prove antisymmetry? By itself $$\alpha$$ is only a Loday algebra. However, invariance under composition comes to the rescue! So how can we do it? First, let's observe that in the Leibniz identity above the $$A$$ and $$B$$ terms occur both as $$AB$$ and as $$BA$$, and we want to take advantage of this. Second, by invariance under composition we can write the bipartite Leibniz identity:

$$(A_1 \otimes A_2) \alpha_{12} ((B_1 \otimes B_2) \alpha_{12} (C_1 \otimes C_2)) = ((A_1 \otimes A_2 ) \alpha_{12} (B_1 \otimes B_2)) \alpha_{12} (C_1 \otimes C_2)$$
$$+ (B_1 \otimes B_2 ) \alpha_{12} ((A_1 \otimes A_2) \alpha_{12} (C_1 \otimes C_2))$$

and now we want to use the fundamental composition relationship we derived last time:

$$\alpha_{12} = \alpha \otimes \sigma + \sigma \otimes \alpha$$

to expand the mess above. However, a great simplification occurs if we pick: $$B_1 = C_2 = 1$$ :

$$0 = (A_1 \alpha C_1) \otimes (A_2 \alpha B_2 + B_2 \alpha A_2)$$

which is valid for any $$A_1 \alpha C_1$$. Do the expansion as an exercise to convince yourself it is true. Let me only show that the left-hand side is zero:

$$((B_1 \otimes B_2) \alpha_{12} (C_1 \otimes C_2)) = B_1 \alpha C_1 \otimes B_2 \sigma C_2 + B_1 \sigma C_1 \otimes B_2 \alpha C_2$$

but because $$B_1 = 1$$ we have $$B_1 \alpha C_1 = 0$$, and because $$C_2 = 1$$ we have $$B_2 \alpha C_2 = 0$$, so the LHS is zero.

Hence:

$$A\alpha B = - B\alpha A$$

So $$\alpha$$ is a Lie algebra. Moreover by the fundamental composition relationship $$\sigma$$ must be a symmetric product to preserve the antisymmetry of $$\alpha$$ under composition.

Now we are getting somewhere. We know that quantum mechanics is described by C* algebras which can be decomposed into Lie and Jordan algebras. We got the Lie part and we are almost there for the Jordan side. Please stay tuned.

## Where are the complex numbers coming from in Quantum Mechanics?

So last time we produced the ingredients used to build quantum mechanics, and today we will show how they combine, just like Lego. The main building pattern will turn out to be nothing but complex number multiplication.

The mathematical structure will be that of a trigonometric coalgebra.

So let us start by recalling the two products from last time: $$\alpha$$ and $$\sigma$$. $$\alpha$$ respects the Leibniz identity (because it stems from infinitesimal time evolution), and apart from that we have no other information about the two products at this time. If we assume the real numbers to be the mathematical field corresponding to actual physical measurement values, the Lego operation of combining two physical systems into one is called a coproduct. Here is how we do it:

Let C be an R-space with {$$\alpha$$, $$\sigma$$} as a basis. We define the coproduct $$\Delta : C\rightarrow C \otimes C$$ as:

$$\Delta (\alpha ) = a_{11} \alpha \otimes \alpha + a_{12} \alpha \otimes \sigma + a_{21} \sigma \otimes \alpha + a_{22} \sigma \otimes \sigma$$
$$\Delta (\sigma) = b_{11} \alpha \otimes \alpha + b_{12} \alpha \otimes \sigma + b_{21} \sigma \otimes \alpha + b_{22} \sigma \otimes \sigma$$

There is also another operation called counit but that is only important for the mathematical point of view to complete what mathematicians call a coalgebra (S. Dascalescu, C. Nastasescu, and S. Raianu, Hopf Algebra: An Introduction, Chapman & Hall/CRC Pure and Applied Mathematics (Taylor & Francis, 2000)).

To bring this abstraction down to earth we need to see it in action and show what it means to construct the bipartite products $$\alpha_{12}$$ and $$\sigma_{12}$$ from $$\alpha$$ and $$\sigma$$. If we have elements $$f_1, g_1$$ belonging to physical system 1 and $$f_2, g_2$$ belonging to physical system 2, we basically have the following:

$$(f_1\otimes f_2) \alpha_{12} (g_1\otimes g_2) = a_{11} (f_1\alpha g_1) \otimes (f_2 \alpha g_2) + a_{12} (f_1 \alpha g_1) \otimes (f_2 \sigma g_2) + a_{21}(f_1 \sigma g_1) \otimes (f_2 \alpha g_2) + a_{22} (f_1 \sigma g_1) \otimes (f_2 \sigma g_2)$$

$$(f_1\otimes f_2) \sigma_{12} (g_1\otimes g_2) = b_{11} (f_1\alpha g_1) \otimes (f_2 \alpha g_2) + b_{12} (f_1 \alpha g_1) \otimes (f_2 \sigma g_2) + b_{21}(f_1 \sigma g_1) \otimes (f_2 \alpha g_2) + b_{22} (f_1 \sigma g_1) \otimes (f_2 \sigma g_2)$$

Basically we take all possible combinations of the original products. So where does this lead us? We seem to have made no progress whatsoever. However, there is hope of determining the 4+4 constants $$a$$ and $$b$$! Why? Because $$\alpha$$ respects the Leibniz identity.

Because $$\alpha$$ respects Leibniz identity (invariance of the laws of nature under infinitesimal time evolution) it is basically a derivation. And the derivation of the unit element (corresponding to "no physical system") is zero. So what if we take $$f_1 = g_1 = 1$$? [As a side note, because $$\alpha$$ is distinct from $$\sigma$$ (otherwise we find ourselves in the trivial case discussed last time), it can be normalized to respect: $$1\sigma f = f\sigma 1 = f$$]

So now let's plug in $$f_1 = g_1 = 1$$ first in the $$\alpha_{12}$$ equation above. On the left hand side we get:

$$(1\otimes f_2) \alpha_{12} (1\otimes g_2) = f_2\alpha g_2$$

and on the right hand side we get:
$$a_{21}(f_2 \alpha g_2) + a_{22} (f_2 \sigma g_2)$$

which demands $$a_{21}= 1$$ and $$a_{22} = 0$$.

We can play the same game with $$f_2 = g_2 = 1$$ and get $$a_{12} = 1$$. This reveals nothing about $$a_{11}$$ at this time, but at least we have trimmed the possible combinations.

Same game on $$\sigma_{12}$$ yields: $$b_{21} = 0$$, $$b_{22} = 1$$, and $$b_{12} = 0$$. Again nothing on $$b_{11}$$.
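This unit-element bookkeeping lends itself to a direct numerical check. Below is a minimal, stdlib-only Python sketch of my own (not from the original derivation): it stands in the concrete representations quantum mechanics will eventually select, $$\alpha = \frac{i}{2}[\cdot,\cdot]$$ and $$\sigma = \frac{1}{2}\{\cdot,\cdot\}$$ on 2x2 matrices (any $$\alpha$$ with $$1\alpha 1 = 0$$ and any $$\sigma$$ with $$1\sigma f = f$$ would do here), builds the bipartite product from generic coefficients, and plugs the unit into system 1. The constraints $$a_{21}=1$$, $$a_{22}=0$$ are exactly what make the check pass; $$a_{11}$$ and $$a_{12}$$ drop out, as stated.

```python
import random

random.seed(1)

def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def msc(c, A):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

# Stand-ins for the abstract products (concrete quantum representations):
def alpha(A, B):  # (i/2)[A, B]; satisfies 1 alpha 1 = 0
    return msc(0.5j, madd(mmul(A, B), msc(-1, mmul(B, A))))

def sigma(A, B):  # (1/2){A, B}; normalized so that 1 sigma f = f
    return msc(0.5, madd(mmul(A, B), mmul(B, A)))

def kron(X, Y):  # Kronecker product of two 2x2 matrices -> 4x4
    return [[X[i//2][j//2]*Y[i%2][j%2] for j in range(4)] for i in range(4)]

def bipartite(coeff, f1, f2, g1, g2):
    # (f1 x f2) alpha_12 (g1 x g2) built from the coproduct coefficients
    a11, a12, a21, a22 = coeff
    terms = [(a11, alpha(f1, g1), alpha(f2, g2)), (a12, alpha(f1, g1), sigma(f2, g2)),
             (a21, sigma(f1, g1), alpha(f2, g2)), (a22, sigma(f1, g1), sigma(f2, g2))]
    out = [[0.0]*4 for _ in range(4)]
    for c, X, Y in terms:
        K = kron(X, Y)
        out = [[out[i][j] + c*K[i][j] for j in range(4)] for i in range(4)]
    return out

def maxdiff(A, B):
    return max(abs(A[i][j] - B[i][j]) for i in range(4) for j in range(4))

I2 = [[1.0, 0.0], [0.0, 1.0]]
rnd = lambda: [[complex(random.random(), random.random()) for _ in range(2)] for _ in range(2)]
f2, g2 = rnd(), rnd()

# Plugging the unit into system 1: the result must be 1 x (f2 alpha g2)
target = kron(I2, alpha(f2, g2))
lhs_good = bipartite((0.7, 1.0, 1.0, 0.0), I2, f2, I2, g2)  # a21 = 1, a22 = 0; a11, a12 invisible
lhs_bad  = bipartite((0.7, 1.0, 1.0, 0.5), I2, f2, I2, g2)  # a22 != 0 spoils the identity

assert maxdiff(lhs_good, target) < 1e-12
assert maxdiff(lhs_bad, target) > 1e-6
```

Note that the value 0.7 used for $$a_{11}$$ is arbitrary: since $$1\alpha 1 = 0$$, the unit argument is genuinely blind to it.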

So now we have the reduced pattern:

$$\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha + a_{11} \alpha \otimes \alpha$$
and
$$\Delta (\sigma) = \sigma\otimes \sigma + b_{11} \alpha \otimes \alpha$$

It turns out that $$b_{11}$$ is a free parameter which can be normalized to +1, 0, -1, resulting in 3 composition classes: hyperbolic (hyperbolic quantum mechanics), parabolic (classical mechanics), and elliptic (quantum mechanics). But we can eliminate $$a_{11}$$ by applying the Leibniz identity to $$\alpha_{12}$$. This is tedious and I will skip it here, but take my word for it: in the bipartite Leibniz identity $$a_{11}$$ occurs squared, and by the linearity of $$\alpha$$ it must vanish.

If we now consider only the quantum mechanics case of $$b_{11} = -1$$ we have:

$$\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha$$
and
$$\Delta (\sigma) = \sigma\otimes \sigma - \alpha \otimes \alpha$$

Does this remind you of something? How about:

Imaginary = imaginary × real + real × imaginary
Real = real × real - imaginary × imaginary

This is how complex numbers arise naturally in quantum mechanics, and this is why observables are Hermitian operators and generators are anti-Hermitian operators. The 1-to-1 map between observables and generators, known in the literature as "dynamic correspondence", is the 1-to-1 map between $$\alpha$$ and $$\sigma$$. It is this dynamic correspondence which is at the root of Noether's theorem! Noether's theorem is baked into the quantum and classical formalisms, but you need to know where and how to look to uncover it.
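These composition relations can be verified concretely. Below is a stdlib-only Python sketch of my own, under one common normalization (with $$\hbar$$ absorbed into the products): taking $$\sigma$$ as the Jordan product $$\frac{1}{2}\{\cdot,\cdot\}$$ and $$\alpha = \frac{i}{2}[\cdot,\cdot]$$, the elliptic relations $$\alpha_{12} = \alpha\otimes\sigma + \sigma\otimes\alpha$$ and $$\sigma_{12} = \sigma\otimes\sigma - \alpha\otimes\alpha$$ hold exactly for tensor products of matrices.

```python
import random

random.seed(2)

def mmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def lin(c1, A, c2, B):  # c1*A + c2*B
    n = len(A)
    return [[c1*A[i][j] + c2*B[i][j] for j in range(n)] for i in range(n)]

def alpha(A, B):  # (i/2)[A, B] -- hbar absorbed into the normalization
    return lin(0.5j, mmul(A, B), -0.5j, mmul(B, A))

def sigma(A, B):  # (1/2){A, B} -- the Jordan product
    return lin(0.5, mmul(A, B), 0.5, mmul(B, A))

def kron(X, Y):
    n, m = len(X)*len(Y), len(Y)
    return [[X[i//m][j//m]*Y[i%m][j%m] for j in range(n)] for i in range(n)]

def maxdiff(A, B):
    n = len(A)
    return max(abs(A[i][j] - B[i][j]) for i in range(n) for j in range(n))

rnd = lambda: [[complex(random.random(), random.random()) for _ in range(2)] for _ in range(2)]
A1, A2, B1, B2 = rnd(), rnd(), rnd(), rnd()
T1, T2 = kron(A1, A2), kron(B1, B2)

# Delta(alpha): alpha_12 = alpha x sigma + sigma x alpha
lhs_a = alpha(T1, T2)
rhs_a = lin(1, kron(alpha(A1, B1), sigma(A2, B2)), 1, kron(sigma(A1, B1), alpha(A2, B2)))

# Delta(sigma): sigma_12 = sigma x sigma - alpha x alpha
lhs_s = sigma(T1, T2)
rhs_s = lin(1, kron(sigma(A1, B1), sigma(A2, B2)), -1, kron(alpha(A1, B1), alpha(A2, B2)))

assert maxdiff(lhs_a, rhs_a) < 1e-12
assert maxdiff(lhs_s, rhs_s) < 1e-12
```

The factor $$i$$ is doing real work here: with a purely real rescaling of the commutator, the cross term in $$\sigma_{12}$$ comes out with a plus sign, and the minus sign of the second relation (the "real × real - imaginary × imaginary" pattern) cannot be produced.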

So today we made good progress on the road to reconstructing quantum mechanics, but there is quite a way to go. We do not yet know anything about the properties of the $$\alpha$$ and $$\sigma$$ products, and without them we cannot hope to find their concrete representations. But we'll get there. Please stay tuned.

## One product, two products?

### As I was posting this a large terror attack was unfolding in Paris. I stand in   solidarity with the victims of this barbarity. Je suis Parisienne.

Coming back to physics, I will now start a series of posts in which I will rigorously derive quantum mechanics from physical principles. To prevent "abstraction indigestion", I will chop this proof into easily manageable segments, and today I want to discuss the minimum number of ingredients needed to cook the quantum soup.

The first physical principle needed is the invariance of the laws of nature under time evolution. This is a no-brainer; otherwise the search for physical laws would make no sense. But what can we get out of this? If there are algebraic operations which make sense in nature (and yes, there are plenty), those operations commute with time translation. Let us denote such an operation with a generic symbol *, and let us use uppercase letters for the objects on which * operates. If we call $$T(A)$$ the time translated version of the abstract object $$A$$, invariance under time evolution demands:

$$T(A*B) = T(A)*T(B)$$

So what? We did not get too far, unless... Unless we can do something more. Who says $$T$$ has to be large? We can take time steps as small as we want: infinitesimal steps. In other words, we linearize $$T$$ into the identity plus a small correction:

$$T= I + \epsilon D$$

So what do we get?

$$(I + \epsilon D)(A*B) = ((I +\epsilon D)A)*((I+\epsilon D)B)$$

which to zeroth order yields the trivial $$A*B=A*B$$, but to first order in $$\epsilon$$ we get:

$$D(A*B) = D(A)*B + A*D(B)$$

Recognize this? It is the product rule (Leibniz rule) of differentiation. But we can do more. We can trade $$D$$ for a product, which we will call $$\alpha$$: to each object $$A$$ there corresponds a derivation $$D$$ acting as:

$$D(B) = A\alpha B$$

To put this abstraction in perspective, $$\alpha$$ will later become either the Poisson bracket or the commutator: those are concrete representation of the abstract product.

What we can do now is rewrite this product rule as the Leibniz identity:

$$A\alpha (B*C) = (A\alpha B)*C + B*(A\alpha C)$$
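This identity can be sanity-checked numerically with one of the concrete representations mentioned above. A minimal stdlib-only sketch of my own, taking $$\alpha$$ to be the matrix commutator and * ordinary matrix multiplication: the identity $$[A, BC] = [A,B]C + B[A,C]$$ holds exactly.

```python
import random

random.seed(3)

def mmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):  # one concrete representation of alpha: the commutator [A, B]
    AB, BA = mmul(A, B), mmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

rnd = lambda: [[complex(random.random(), random.random()) for _ in range(2)] for _ in range(2)]
A, B, C = rnd(), rnd(), rnd()

# Leibniz identity with * = matrix multiplication: A alpha (B*C) = (A alpha B)*C + B*(A alpha C)
lhs = comm(A, mmul(B, C))
rhs = madd(mmul(comm(A, B), C), mmul(B, comm(A, C)))

assert max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2)) < 1e-12
```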

We know that one of the products * is $$\alpha$$ itself, so the simplest possible case is that all there is in nature is this one and only product $$\alpha$$. What kind of world would we get if there is only one product in nature?

What we can do now is to apply the Leibniz identity on itself.

$$A\alpha (B\alpha C) = (A\alpha B)\alpha C + B\alpha(A\alpha C)$$

So we seem to be stuck, unless... Unless we can tell something about the abstract objects $$A, B, C, \ldots$$. Here is where the category theory of k-algebras comes to the rescue. Now I can either go for the abstract math, or I can present the physical interpretation. Let's go the physical route and anticipate that what we ultimately want to consider are compositions of physical systems. Such compositions have a unit element: the "compose with nothing" element. Let's denote its corresponding uppercase letter $$I$$ and see what we obtain from Leibniz in this case:

$$I\alpha (I\alpha A) = (I\alpha I) \alpha A + I \alpha (I\alpha A)$$

from which we get:

$$(I \alpha I) \alpha A = 0$$

for any A!!! Which means $$I \alpha I = 0$$ or "nothing comes from nothing".

It is now time for the second physical principle: the invariance of the laws of nature under composition. What does this mean? It means the laws of nature do not change when we add additional degrees of freedom. For a silly "what if" game, how would the world look if the laws of nature depended on the number of degrees of freedom? Imagine going to the airport and boarding a plane. When enough passengers board this plane, increasing its internal degrees of freedom, the plane would suddenly become a bird which would fly to its destination. There it would revert back to a plane when the passengers disembark. Sorry Harry Potter, no quantum soup for you. Come back one year.

So now consider a composite system  $$1\otimes 2$$. For this system we would have the bipartite product alpha: $$\alpha_{1 \otimes 2}$$

Now we need to construct this bipartite product just like Lego from the only ingredients we have available in this toy world: the ordinary products $$\alpha$$ for system 1 and 2:

$$(A_1\otimes A_2 ) \alpha_{1\otimes 2} (B_1 \otimes B_2) = a (A_1 \alpha_1 B_1)\otimes (A_2 \alpha_2 B_2)$$

where $$a$$ is a generic proportionality constant. But what if we pick $$A_2=B_2 = I$$? In other words the second physical system is nothing.  On one hand we get:

$$(A_1\otimes I ) \alpha_{1\otimes 2} (B_1 \otimes I) = a (A_1 \alpha_1 B_1)\otimes (I \alpha_2 I)$$

which is zero because we showed earlier that: $$I\alpha I = 0$$. On the other hand:

$$(A_1\otimes I ) = A_1$$ and $$(B_1\otimes I ) = B_1$$ and we have:

$$(A_1\otimes I ) \alpha_{1\otimes 2} (B_1 \otimes I) = A_1\alpha_1 B_1$$

So the product $$\alpha$$ by itself can only be trivial: $$A\alpha B = 0$$ for any $$A$$ and $$B$$.

To get something non-trivial we need at least another product which we will call $$\sigma$$. For a rich quantum soup we need meat and potatoes. I mean the commutator and the Jordan product. We'll get there, please be patient. For now we only identified the soup ingredients:

• a product $$\alpha$$ which respects the Leibniz identity (due to invariance under time evolution)
• a second, yet unspecified product $$\sigma$$

Can we consider more products? Of course, but it will turn out that these two ingredients are all we need. Invariance under composition will completely determine the properties of those two products. Please stay tuned for the next episode of cooking quantum mechanics.

## The Martian

Today I want to stay in the realm of the cosmos and talk about a light topic for a change: the very nice movie "The Martian". I saw it in 3D on an IMAX screen, and this is the way to see it. However, I really needed to suspend disbelief to enjoy it, and I want to talk about some blatant nonsense. So spoiler alert: if you did not see the movie and don't want me to ruin it for you, do not scroll down below the pictures and please come back next week for the new (physics) post.

The plot of the movie is simple: caught in a Martian dust storm, a crew of NASA astronauts is forced to leave the planet in a hurry, but one of them is injured in the process and is presumed dead. However, he is not dead and is forced to survive alone on Mars until the next NASA mission arrives. He lives in a housing compound, and here is the first problem. The airlock of the compound is way too big, and this wastes a lot of air and energy every time you go in and out. At some point in the movie the airlock malfunctions and blows up, and guess how the huge hole in the compound is repaired? With a thin plastic sheet attached with duct tape. Really? Duct tape?

Moving along. The rocket which brought the astronauts to Mars was supposed to be powered by some plutonium reactor. The plutonium core was buried next to the compound about 1 foot deep in the sand for "safe disposal" and the spot was marked with a flimsy flag which one usually finds on bicycle attachments for kids. The huge storm which stranded the main character on Mars was about to tip over the Mars departure rocket but it had no effect on the flag! And the astronaut finds this plutonium core later on and uses it as a heater for his rover. The radiation was apparently of no concern.

Now back on Earth, some sort of graduate student who sleeps in his office figures out a new trajectory for the main Mars ship to return by slingshot around the Earth. He needs to double-check his computation on a supercomputer, and how does he do it? We see him on the floor in the supercomputer server room connecting his laptop with a USB cable directly to one of the server blades. Apparently proper remote connections were yet to be discovered, and anyone can just stroll at will into server rooms. The supercomputer confirms his computations, but when the final rescue is happening, the crew discovers that they will miss the meeting point by some 32 kilometers. Some supercomputer calculation!

More blatant nonsense happens when the crew of the big spaceship attempts to slow down their ship and fix the 32 kilometer gap by blowing a hole in the front of the ship to use the escaping air as a speed brake. The person placing an improvised explosive device has to do a spacewalk outside the ship, and he is not tethered in any way. Any miss on gripping various parts of the ship as he flies around would have sent him to his death in outer space. But the captain of the ship, who is exiting the ship to catch the stranded astronaut arriving from Mars, is fully tethered, and her spacewalk suit is also equipped with small rockets for independent mobility. The perks of being the boss, I presume.

And surprise: the unnecessary tether is too short. This would belong more in a Laurel and Hardy comedy than in this movie. And now for the grand finale and the icing on the cake: the stranded astronaut cuts off a finger of his spacesuit glove to use the escaping air to propel himself across the missing distance. Never mind that he should probably be dead from the suit decompression; he arrives quite healthy, smiling, and breathing normally in the arms of the captain of the mission: a nice romantic happy ending.

Now don't get me wrong, I do like this movie and I would enjoy seeing it again. It is just that common sense is abused too much and the movie starts resembling productions of Godzilla from the 60s where the monster is nothing but a series of poorly made still photographs of a plastic toy.

## Why is there something rather than nothing?

I recently discovered a very nice talk by Lawrence Krauss entitled "A Universe from Nothing".

The actual talk starts at 12:48 and discusses a topic which used to be the province of theologians and philosophers, but as the gaps in understanding shrink, science is now able to provide a plausible (but not yet definitive) explanation of how it all began. Even if you follow topics in astronomy, I think it is still awe inspiring to contemplate our place in the universe and realize that if you magically removed all visible matter, this would hardly change anything in the evolution of our universe. Trillions of years in the future, the accelerated expansion of the universe will make all other galaxies disappear from our view, and we will not have any experimental means to test the ideas of the Big Bang and inflation. All of this would only be hearsay from the distant past (our time now) if we somehow manage to survive that long, which is highly doubtful.

So why is there something rather than nothing? Simply put, "nothing" is unstable because of the combination of relativity and quantum mechanics. In quantum mechanics we have the Heisenberg uncertainty principle, which shows that if we measure the position very precisely, the momentum must have a very large uncertainty. But this uncertainty would translate into very large speeds (even higher than the speed of light), and the only way out is to create particle pairs. Hence the vacuum is filled with virtual particles which pop in and out of existence briefly enough to obey the other Heisenberg uncertainty relation: the energy-time relation. But when you add gravity to the mix, this changes everything and the virtual particles become very real. And what would the total energy of such a system be? Precisely zero. Then a natural question to ask is: what is the total energy of our universe? It turns out that this can actually be measured (we live in a special time when this is possible), and surprise, it is zero!

I won't spoil the video with additional information from it, please watch it, it is very nice, and instead I will focus on a part of it which I am disagreeing with: the role of the question "why?"

One common mistake people make is to think that correlation implies causation. This mistake is very easy to make because of our day-to-day experience. Krauss is not making this mistake, but another one which is also rooted in daily experience: that "why" implies purpose. I will attempt to argue against this position and show that "why?" is a scientifically valid question which leads to genuine scientific answers.

Let me start with the concept of truth. In mathematics a statement is true if we can construct a proof starting from an axiomatic system. Now this concept of truth is not universal as it was shown by Godel. If your axiomatic system is rich enough to encompass arithmetic, Godel showed that any such system is incomplete and there are statements which can be neither proved nor disproved. If we augment the original axiomatic system with such a statement we create an enlarged axiomatic system. However, we can enlarge the axiomatic system with the negation of the statement and obtain another enlarged axiomatic system. Now the two enlarged axiomatic systems are incompatible with each other, and moreover, the process can be repeated. What this shows is that mathematics is infinitely rich and not axiomatizable. And so the concept of truth is parochial in mathematics.

But there is a second notion of truth which comes from nature: something is true if it is in agreement with reality. Physics uses this notion of truth because physics is an experimental science. Then a natural problem is to compare and contrast the two notions: the mathematical and the physical one.

When we answer "how?", we use the mathematical concept (and the "unreasonable effectiveness of mathematics"), but when we answer "why?" we use the physical approach.

Let me illustrate with quantum mechanics. On the mathematical, "how" side, one starts with the usual axioms: a Hilbert space, observables as Hermitian operators, etc. However, on the physical, "why" side, one can start from a physical principle: the invariance of the laws of nature under composition.

On the "how", mathematical side, you build mathematical proofs, but on the "why", physical-principle side, you select distinguished mathematical structures from the infinite collection of the Platonic world of math which respect the physical principles. All mathematical structures are unique, but only a handful are "distinguished" and used by nature. "Why?" is a "distinguish-ability" question. Nature does not use hyperbolic composability, for example, because it violates another physical principle.

So why quantum mechanics, why special relativity, why general relativity? Because of the physical principles of invariance under composition, the physical principle of relativity, the physical principle of the equivalence between inertial and gravitational mass.

And where are the physical principles coming from? Here I have a suggestion: they essentially encode the difference between the real world and the abstract world of math. Take hyperbolic quantum mechanics. It almost had a chance to describe something real, but it is a mathematical impossibility to construct a state space and hence to have an objective way to assign truth and make experimentally testable predictions. Positivity is a property of the physical world which is forbidden in the mathematical world by Godel's theorem.

So the question "why?" is actually very meaningful, and moreover it can create mathematical consequences which can be put to experimental tests. In physics, why does not imply purpose but "distinguish-ability" of the mathematical structures which play a key role in the physical world. The reason Krauss disagrees with it has to do with the abuse of the question by theologians who answer "why?" with "because God". Unlike Europe, the US is a very religious place where science is under constant attack by religious bigotry which enjoys significant political power (the video's reference to Arkansas and Ohio was criticizing attempts to teach "intelligent design" in public schools as an alternative to evolution, which is dismissed as only "a theory": https://en.wikipedia.org/wiki/McLean_v._Arkansas https://en.wikipedia.org/wiki/Intelligent_design_in_politics).

I agree that theologians are "experts of nothing", but philosophy should not be lumped together with the field which presumes the answer before the question is asked. However, philosophy does not create new knowledge; it only provides an interpretation of what science discovers. The physical principles behind special relativity and quantum mechanics were not uncovered by philosophical contemplation, but by solving concrete physical problems. It is also true that changing paradigms is not at all easy, even if you have solved the concrete problem. Case in point: Lorentz discovered his transformations, but the proper paradigm was discovered by Einstein.

## Hyperbolic Quantum Mechanics

In a recent post Lubos Motl stated:

There only exist two types of theories in all of physics:

1. Classical physics
2. Quantum mechanics
3. There is simply no third way.

I added the third option in order to emphasize that it doesn't exist.

This is correct (gee, I am agreeing with Lubos, is everything all right?), but (as advertised last time) today I want to dig in deeper in option 3 and prove why this is so. In the process we will better understand quantum mechanics and explore a brand new (and unfortunately sterile) landscape of functional analysis.

Quantum mechanics shares with classical physics a key property: the laws of nature do not change when we consider additional degrees of freedom. For example, the tensor product of two Hilbert spaces is still a Hilbert space and the composed system does not become classical, quantum-classical, or something else. This invariance of the laws of nature under composition is best expressed in the category formalism and in there we have three classes of solutions:

1. Elliptic composition (quantum mechanics)
2. Parabolic composition (classical physics)
3. Hyperbolic composition (a hypothetical hyperbolic quantum mechanics)

For quantum mechanics, von Neumann unified Heisenberg's matrix formulation and Schrodinger's wave approach in the Hilbert space formalism. We will see that this functional analysis has a categorical origin and that there is a mirror categorical formalism for hyperbolic quantum mechanics as well. We will also see that the latter is unphysical for obvious reasons, and that option 3, while nice as a mathematical curiosity, has no physical usefulness. But it is helpful for better understanding the Hilbert space formulation of quantum mechanics.

Let me start with some elementary mathematical preliminaries. In quantum mechanics one uses complex numbers, and they originate from the fundamental composition relationships of the two quantum mechanics products: the commutator and the Jordan product. In hyperbolic quantum mechanics one uses split-complex numbers.

In split-complex numbers the imaginary unit is $$j$$, with $$j^2 = +1$$ but $$j \ne 1$$. In matrix representation, $$j$$ is the 2x2 matrix with zeros on the diagonal and 1s on the off-diagonal. Split-complex numbers have a polar representation similar to that of the complex numbers, but the role of the sines and cosines is played by hyperbolic sines and hyperbolic cosines. Very importantly, the fundamental theorem of algebra does not hold for split-complex numbers, and this has major physical consequences for hyperbolic quantum mechanics.
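For the curious, here is a minimal stdlib-only Python sketch of split-complex arithmetic (my own illustration): it exhibits $$j^2 = +1$$, the hyperbolic polar form, and the zero divisors $$(1+j)(1-j) = 0$$, which already signal how differently this algebra behaves from the complex numbers.

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Split:
    x: float  # real part
    y: float  # coefficient of j, where j*j = +1

    def __add__(self, o): return Split(self.x + o.x, self.y + o.y)
    def __sub__(self, o): return Split(self.x - o.x, self.y - o.y)
    def __mul__(self, o):
        # (x1 + j y1)(x2 + j y2) = (x1 x2 + y1 y2) + j (x1 y2 + y1 x2)
        return Split(self.x*o.x + self.y*o.y, self.x*o.y + self.y*o.x)

J, ONE = Split(0.0, 1.0), Split(1.0, 0.0)

assert J * J == ONE and J != ONE          # j^2 = +1 while j != 1

# Zero divisors: (1+j)(1-j) = 1 - j^2 = 0, so nonzero elements can lack inverses.
# Relatedly, z^2 = j has no split-complex solution (x^2 + y^2 = 0, 2xy = 1 is
# impossible over the reals), so the fundamental theorem of algebra fails.
assert (ONE + J) * (ONE - J) == Split(0.0, 0.0)

# Polar form in the right quadrant (x > |y|): z = rho (cosh t + j sinh t),
# with rho^2 = x^2 - y^2 and t = atanh(y/x)
def polar(z):
    return math.sqrt(z.x**2 - z.y**2), math.atanh(z.y / z.x)

rho, t = polar(Split(5.0, 3.0))
back = Split(rho*math.cosh(t), rho*math.sinh(t))
assert abs(back.x - 5.0) < 1e-12 and abs(back.y - 3.0) < 1e-12
```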

Formally, hyperbolic quantum mechanics is obtained by replacing $$\sqrt{-1}$$ with $$j$$ in the commutation relations. However, there are no Hilbert spaces in hyperbolic quantum mechanics!!!

So let us see the corresponding formulation of hyperbolic quantum mechanics.

Starting with de Broglie's ideas, in a hypothetical universe obeying hyperbolic quantum mechanics one would attach to a particle not a wave, but a scale transformation $$e^{jkr}$$, and the scale transformation carries a given momentum in accordance with the usual de Broglie relation (in hyperbolic quantum mechanics one still has the same Planck constant): $$p = \hbar k$$

Continuing with matrix mechanics, Heisenberg's approach carries forward identically, but here we hit the first roadblock: the matrices cannot always be diagonalized, because the fundamental theorem of algebra does not hold in this case.

Continuing to the Schrodinger equation, its hyperbolic analog is:

$$+\frac{\hbar^2}{2m}\frac{d^2}{d x^2} \psi (x) + V(x)\psi(x) = E \psi (x)$$

To really understand what is going on in the hyperbolic case, we need to investigate the functional analysis of split-complex numbers. The best starting point is metric spaces, not the more abstract setting of topology. Key in a metric space is the triangle inequality. If you follow any standard functional analysis book, you will see that all metric spaces over the complex numbers ultimately derive their triangle inequality from the triangle inequality of the complex numbers. Moreover, this arises out of the trigonometric identity:

$$\cos^2 x + \sin^2 x = 1$$

But in the hyperbolic case, one has this identity:

$$\cosh^2 x - \sinh^2 x = 1$$

and in each of the four quadrants of the split-complex plane, a reversed triangle inequality holds. If you cross the diagonals all bets are off, but it turns out that one can successfully introduce a vast functional analysis landscape for split-complex numbers, just as rich as the usual functional analysis. The reason for this is that ultimately the fundamental Hahn-Banach theorem holds in the split-complex case as well. To coin a name for the mirror analysis of split-complex numbers, we will use the prefix "para". As such, in the hyperbolic case we will have a para-Hilbert space, which is a very different beast than the usual Hilbert space.
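The reversed inequality is easy to probe numerically. Here is a small stdlib-only sketch of my own, sampling pairs of split-complex numbers in the right quadrant $$x > |y|$$, where the modulus $$\rho(z) = \sqrt{x^2 - y^2}$$ is real (the same computation as the reversed triangle inequality for timelike Minkowski vectors):

```python
import math
import random

def rho(x, y):
    # split-complex "modulus" in the right quadrant (valid when x > |y|)
    return math.sqrt(x*x - y*y)

random.seed(0)
for _ in range(10000):
    # sample two points strictly inside the right quadrant x > |y|
    y1, y2 = random.uniform(-1, 1), random.uniform(-1, 1)
    x1 = abs(y1) + random.uniform(0.001, 2)
    x2 = abs(y2) + random.uniform(0.001, 2)
    # reversed triangle inequality: rho(z1 + z2) >= rho(z1) + rho(z2)
    assert rho(x1 + x2, y1 + y2) >= rho(x1, y1) + rho(x2, y2) - 1e-12
```

Crossing into a different quadrant breaks the sampling assumption (the modulus becomes imaginary), which is the "all bets are off" regime mentioned above.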

There is a translation dictionary for all definitions and proofs in para-analysis:

| Elliptic | Hyperbolic |
| --- | --- |
| Triangle inequality | Reversed triangle inequality |
| Sup | Inf |
| Bounded | Unbounded |
| Complete | Incomplete |

A sequence $$x_n$$ in a metric space $$X = (X,d)$$ is said to be para-Cauchy if for every $$\epsilon \gt 0$$ there is an $$N = N(\epsilon)$$ such that $$d(x_m,x_n) \gt \epsilon$$ for every $$m,n\gt N$$. The space $$X$$ is said to be para-incomplete if every para-Cauchy sequence in $$X$$ diverges.

A para-Hilbert space is an indefinite para-inner product space which is para-incomplete.

The para-Hilbert space is non-Hausdorff, but the key showstopper is that, given a point $$x$$ and a convex set (which would correspond to a state space), there is no unique perpendicular from the point to the set. As such, a hyperbolic GNS construction is not possible! This means that we lack positivity, and hence we cannot define a physical state.

Invariance under composition admits 3 solutions: elliptic, parabolic, or hyperbolic. We can define a state only in the elliptic and parabolic cases (quantum and classical physics).

By overwhelming experimental evidence, nature is quantum mechanical, no exceptions allowed.

## Is quantum mechanics unique?

Quantum mechanics describes nature perfectly, and no experiment has ever detected violations of quantum mechanics' predictions. Last time I showed that no realistic interpretation of quantum mechanics is possible, and this fact flies in the face of our classical intuition. But this intuition was developed as part of the evolution of species on Earth (a lion chasing a gazelle need not solve Schrodinger's equation) and is simply irrelevant to modern science.

If quantum mechanics is not a realist theory, then perhaps other realistic theories are possible; after all, theories do get superseded: special relativity replaced Newtonian ideas of absolute space and time, electromagnetism is part of the larger electroweak theory, etc. When we look at answering uniqueness questions, there are two possible approaches.

First, you can prove no-go theorems. Bell famously said that no-go theorems only prove a lack of imagination of the author. When you hold a contrarian paradigm dear to your heart, no-go theorems will never convince you to change your point of view. I know this first hand from arguing with people who think they can beat Bell's theorem with a locally realistic computer simulation, although that is a mathematical impossibility.

Then there is a second possible approach: derive a physical theory from physical principles. The special theory of relativity has far fewer challengers today than quantum mechanics because it is much harder to argue with the principle of relativity. Can quantum mechanics be derived from physical principles? The answer is yes; the physical principle is the invariance of the laws of nature under composition:

If system A is described by quantum mechanics, and system B is described by quantum mechanics, then the composed system is described by quantum mechanics as well.

Physically this means that the Planck constant does not change when we add additional degrees of freedom. Mathematically quantum mechanics follows from using category theory arguments. But what other theories obey this invariance under composition principle?

It turns out that there are 3 such solutions possible:
• elliptic composition
• parabolic composition
• hyperbolic composition

Elliptic composition is quantum mechanics (and this is why this blog is called elliptic composability), parabolic composition is classical mechanics, but what is this hyperbolic case?

Formally, hyperbolic composability is nothing but quantum mechanics with $$\sqrt{-1}$$ replaced by $$\sqrt{+1}$$ and the resulting theory is known as hyperbolic quantum mechanics.

The first thing one notices is that in hyperbolic quantum mechanics one continues to add amplitudes just like in ordinary quantum mechanics, and therefore this is not a realistic theory either. As such, realism is dead for good. But can this theory describe anything in nature? Nope. Nevertheless, I'll explore the mathematics of this theory in the next post, because its most valuable aspect is to act as a comparison backdrop against quantum mechanics and illuminate various properties of it.

I will not start digging into the mathematical aspects today; instead I want to discuss the meaning of lost realism in physics. We have seen that quantum mechanics is a probabilistic, not a deterministic, theory. The wavefunction cannot have an ontic interpretation for two main reasons: it does not carry energy or momentum, and it has several distinct representations. But perhaps realism is saved by an epistemic interpretation: what if the experiment simply reveals pre-existing values? This hope was kept alive by various classical toy models, but they were put to rest by the PBR theorem. So realism is not an option anymore, but is there a real world independent of us? Do we have to resort to solipsism, or even worse to some sort of quantum cargo-cult religious new-age babble based on the discredited ideas of Stapp and the crackpot new-age guru Deepak Chopra? Does the observer play an active role in quantum mechanics?

Let's see what the math shows. Quantum mechanics reconstruction uses category theory, and category theory is known as "objects with arrows". The nature of the objects is completely irrelevant, all that matters are the arrows which represent the relationships. Originally category theory was introduced to map the similarities between topological and algebraic objects, and the higher abstraction of categorical proofs allowed the extraction of common behaviors in very different mathematical domains. Because quantum mechanics reconstruction is categorical in nature, this derivation is blind to any hypothetical underlying quantum ontology. The "true meaning" of the wavefunction simply does not matter. The question: what does quantum mechanics describe? is not a testable, meaningful scientific question.

But what about the observer's role? The observer does not cause the outcome. If that were true, you could use quantum mechanics to send signals faster than the speed of light. It is not the observer who is important, but the configuration of the measurement device. This is because non-commuting observables cannot be measured simultaneously. The observer only plays an indirect role in deciding what and how to measure. Here Andrei can argue along the lines of "free will is an illusion": the observer is part of nature, subject to quantum (or hypothetical sub-quantum deterministic) laws as well. However, as an emergent phenomenon, human consciousness is fundamentally different. Why? Because in quantum mechanics information is conserved, but people are born and later on die, and there is no information conservation for the soul. Free will is a manifestation of this lack of information conservation.

To date, the best correct interpretation of quantum mechanics available is QBism. However, I am not completely satisfied with it. If QBism is Copenhagen done right, I am working on "QBism done right" :) but more on this in future posts.

## Is there a realistic interpretation of Quantum Mechanics?

### (a critical analysis of the Bohmian mechanics)

In the last two posts the merits of classical (realistic) description of Nature were discussed. The major disagreement was on the burden of proof. I contend it is the responsibility of any alternative explanation to prove it is better than quantum mechanics. Given the overwhelming and irrefutable evidence for the applicability of quantum mechanics in describing nature, this is simply impossible, but is there a realistic interpretation of quantum mechanics?

When discussing physics, there are three possible mathematical frameworks to choose from:
• Lagrangian formalism in the tangent bundle,
• Hamiltonian formalism in the cotangent bundle,
• formalism in the configuration space.
For the Lagrangian formalism, I cannot do any better than Zee in Chapter I.2 of his "Quantum Field Theory in a nutshell":

"Long ago, in a quantum mechanics class, the professor droned on and on about the double-slit experiment, giving the standard treatment.[...] The amplitude for detection is given by a fundamental postulate of quantum mechanics, the superposition principle, as the sum of the amplitudes for the particle to propagate from the source S through the hole A1 and then onward to the point O and the amplitude for the particle to propagate from the source S through the hole A2 and then onward to the point O.

Suddenly, a very bright student, let us call him Feynman, asked, "Professor, what if we drill a third hole in the screen?" The professor replied, "Clearly, the amplitude for the particle to be detected at the point O is now given by the sum of three amplitudes [...]."

The professor was just about ready to continue when Feynman interjected again, "What if I drill a fourth and a fifth hole in the screen?" Now the professor is visibly losing his patience: "All right wise guy, I think it is obvious to the whole class that we just sum over all the holes."

But Feynman persisted, "What if I now add another screen with some holes drilled into it?" The professor was really losing his patience: "Look, can't you see that you just take the amplitude to go from the source S to the hole Ai in the first screen, then to the hole Bj in the second screen, then to the detector at O, and then sum over all i and j?"

Feynman continued to pester, "What if I put in a third screen, a fourth screen, eh? What if I put in a screen and drill an infinite number of holes in it so that the screen is no longer there?" [...]

What Feynman showed is that even when there is just empty space between the source and the detector, the amplitude for the particle to propagate from the source to the detector is the sum of the amplitudes for the particle to go through each one of the holes in each one of the (nonexistent) screens. In other words, we have to sum over the amplitude for the particle to propagate from the source to the detector following all possible paths between the source and the detector."

So if we have to consider all possible paths, the notion of the classical trajectory is simply doomed and there is no realistic quantum interpretation in the Lagrangian formalism. One down, two to go.
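The "sum the amplitudes, not the intensities" rule can be made concrete in a toy model (the wavenumber, geometry, and unit-amplitude paths are my simplifying assumptions): each hole contributes exp(ikL) for its path length L, and squaring the sum produces a cross term that summing intensities would miss.

```python
import numpy as np

k = 2 * np.pi                       # wavenumber, arbitrary units (toy-model assumption)
source = np.array([0.0, -5.0])      # source S below the screen
detector = np.array([0.3, 5.0])     # detection point O above the screen

def amplitude_through(hole_x):
    """Amplitude for S -> hole -> O: exp(ikL) with L the total path length."""
    hole = np.array([hole_x, 0.0])
    L = np.linalg.norm(hole - source) + np.linalg.norm(detector - hole)
    return np.exp(1j * k * L)

holes = [-0.5, 0.5]                 # two slits
total = sum(amplitude_through(x) for x in holes)
interference = abs(total) ** 2      # quantum: sum amplitudes, then square
no_interference = sum(abs(amplitude_through(x)) ** 2 for x in holes)  # sum of intensities

print(interference, no_interference)   # the two differ: the cross term is the interference
```

Drilling more holes, or adding more screens, just means more terms in the sum; Feynman's limit of infinitely many perforated screens turns this sum into the path integral.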

We will make a quick pass over the phase-space formalism for quantum mechanics. This is very easy: the Wigner quasi-probability distribution can take negative values, and hence there is no realistic quantum interpretation in the Hamiltonian formalism either. Two down, one to go.
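The negativity is easy to exhibit numerically. A minimal sketch (using natural units $$\hbar = m = \omega = 1$$, my choice): compute the Wigner function of the first excited harmonic-oscillator state at the phase-space origin, where its exact value is $$-1/\pi$$.

```python
import numpy as np

def wigner(psi, x, p, y):
    """W(x,p) = (1/pi) * integral of conj(psi(x+y)) psi(x-y) exp(2ipy) dy, hbar = 1."""
    integrand = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y)
    dy = y[1] - y[0]
    return float(np.real(integrand.sum() * dy) / np.pi)   # simple Riemann sum

def psi1(x):
    """First excited harmonic-oscillator eigenstate (normalized)."""
    return (2.0 / np.sqrt(np.pi)) ** 0.5 * x * np.exp(-x**2 / 2)

y = np.linspace(-10, 10, 4001)
print(wigner(psi1, 0.0, 0.0, y))   # about -1/pi = -0.318...: a negative "probability" density
```

No genuine probability density can go negative, which is exactly why the phase-space route to realism closes.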

However, the configuration space formalism is the tricky one. Here one encounters the Hamilton-Jacobi and the Schrodinger equations.

Let us start from classical physics. Consider 1-d motion in a potential V. The Hamilton-Jacobi equation reads:

$$\frac{\partial S}{\partial t} + \frac{1}{2m}{(\frac{\partial S}{\partial x})}^2 + V(x) = 0$$
and
$$p = \frac{\partial S}{\partial x}$$

if V=0, we consider $$S = W-Et$$, from which one trivially obtains:

$$p = \frac{\partial S}{\partial x}=\sqrt{2mE}$$. If the particle is at $$x_0$$ at the moment $$t_0$$, then the particle motion is, unsurprisingly:
$$x-x_0 = \sqrt{\frac{2E}{m}}(t-t_0)$$

The key point is that we need the initial particle position to solve the equation of motion.

So what happens when we replace the Hamilton-Jacobi equation with its quantum counterpart, the Schrodinger equation:

$$i\hbar\frac{\partial \psi}{\partial t} + \frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} - V(x)\psi = 0$$ ?
If
$$\psi=\sqrt{\rho}\,\exp(i\frac{S}{\hbar})$$
then we get the Hamilton-Jacobi equation for S but with an additional term called the quantum potential, and a continuity equation for $$\rho$$. Welcome to the Bohmian formulation of quantum mechanics!
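For completeness, carrying out the substitution explicitly gives the two Madelung equations (a standard computation, written in the conventions used above):

$$\frac{\partial S}{\partial t} + \frac{1}{2m}{\left(\frac{\partial S}{\partial x}\right)}^2 + V(x) - \frac{\hbar^2}{2m}\frac{1}{\sqrt{\rho}}\frac{\partial^2 \sqrt{\rho}}{\partial x^2} = 0$$

$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}\left(\frac{\rho}{m}\frac{\partial S}{\partial x}\right) = 0$$

The last term of the first equation is the quantum potential; dropping it (the $$\hbar \rightarrow 0$$ limit) recovers the classical Hamilton-Jacobi equation exactly.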

To solve the equation of motion we again need an initial condition, just like in the classical case. But this should raise big red flags!!! In school we were taught that quantum mechanics is probabilistic, not deterministic, and also that the wavefunction is all that is needed to make predictions. How is this possible? What is going on here?

To make quantum mechanics predictions one needs an additional ingredient: the Born rule. It turns out that in Bohmian quantum mechanics the Born rule constrains the allowed distribution of initial conditions, and this is the beginning of the end of the realism claim of this interpretation.

*Max Born*

But where does the Born rule come from? With the advantage of about 100 years of quantum mechanics, we now have two nice answers. The wavefunction lives in a Hilbert space, and there we have Gleason's theorem, which basically mandates the Born rule as the only logical possibility for making sense of the lattice of projection operators. But this is an abstract mathematical take on the problem. There is also an excellent physical explanation given by (surprise...) Born himself: http://www.ymambrini.com/My_World/History_files/Born_1.pdf I will not explain Born's paper because it is very well written and very easy to understand even today, but the main point is that Born's rule is incompatible with an arbitrary initial probability density. The supporters of Bohmian mechanics are well aware of this and they call the consistent initial probability density "quantum equilibrium". Moreover, they point out that after some relaxation time an arbitrary probability density "reaches quantum equilibrium", and so any discrepancy between the predictions of Bohmian quantum mechanics and standard quantum mechanics could only have occurred a few seconds after the Big Bang. Problem solved, right? Wrong! It is rather ironic that the undoing of Bohmian mechanics comes from a most unexpected direction: Bell's theorem!!! Ironic because Bell was inspired to discover his theorem by viewing Bohmian mechanics as a counter-example to von Neumann's no-go theorem on hidden variables.

In quantum mechanics it is very easy to see that the position operator at different times does not commute with itself. Why? Because in the Heisenberg picture the position operator at time $$t$$ picks up a momentum term $$(t-t_0)P/m$$, and P and Q do not commute. However, this means that there is no joint probability space for positions at different times. But in Bohmian mechanics the particle always has a "real" position, and as such, there such a probability space does exist. Hence we can, in principle, detect statistical deviations between the predictions of standard quantum mechanics and those of Bohmian mechanics.
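Explicitly, for a free particle the Heisenberg-picture position operator evolves as $$X(t) = X(t_0) + \frac{t-t_0}{m}P$$, so

$$[X(t), X(t_0)] = \frac{t-t_0}{m}[P, X(t_0)] = -\frac{i\hbar(t-t_0)}{m} \neq 0$$

and a non-zero commutator is precisely what forbids a joint probability distribution for the two positions.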

The proponents of Bohmian mechanics are very well aware of this problem and they have a solution: there is no comparison of measurements at different times! I have to agree this is very clever, but the trouble is only swept under the rug; it does not go away.

The prediction discrepancy goes away in Bohmian mechanics only if the theory is "contextual". Because of this, the way velocity is measured in Bohmian mechanics is not what we would normally expect, $$v = (x_1-x_0)/(t_1-t_0)$$, and Bohmian mechanics is known for its "surreal trajectories".

Surreal or not, violations of the speed of light or not, non-locality or not, the main trouble is the sudden change of the probability density after measurement. In the Copenhagen formulation, the wavefunction collapses upon measurement, and this is naturally explained as updated information. After all, the wavefunction does not carry any energy or momentum and is just a tool to compute the statistical outcomes of any possible experiment. But one of the advertised virtues of Bohmian mechanics is its observer independence: the measurement simply reveals the pre-existing position of the particle. But is this really the case?

The trouble for Bohmian mechanics is that its predictions for two consecutive measurements differ from those of the standard quantum formalism. Why? Because the "quantum equilibrium" for the first measurement is not a "quantum equilibrium" for the second measurement, since the wavefunction collapses during the first measurement. (By the way, this is the root cause of why no correct quantum field theory can ever be created for Bohmian mechanics.)

So how do the Bohmian supporters deal with this? The theory is simply declared "contextual" and valid only between preparation (with a presupposed quantum-equilibrium distribution) and measurement.

Without contextuality, Bohmian mechanics is an inconsistent theory, as it predicts violations of the uncertainty principle. With contextuality, Bohmian mechanics becomes a time-bound equivalent formulation of quantum mechanics. Think of it as a flat $$R^n$$ local map of a curved manifold. A field theory in the Bohmian interpretation is impossible in the same way a 2-d map of the Earth (which topologically is a sphere) cannot avoid distortions. (I see a cohomology no-go result for Bohmian quantum field theory in my future ;)).

Because of contextuality the Bohmian interpretation cannot be called realistic.

Now we have exhausted all three quantum mechanics formulations. Quantum mechanics is simply not a realist theory any way we look at it. It would have been really strange if one equivalent formalism of quantum mechanics admitted a realistic interpretation while the other two did not.

Basically, it is either initial particle positions or the Born rule. But the Born rule is an inescapable self-consistency condition on the Hilbert space due to Gleason's theorem, so we must rule out unrestricted initial statistical distributions.

Quantum mechanics is complete and any addition of "hidden variables" spoils its consistency.