Tuesday, June 25, 2013

Is the wavefunction ontological or epistemological?


Part 1: The EPR argument


Quantum mechanics is a fascinating subject, extremely rich in mathematical, physical, philosophical, and historical content. Studying quantum mechanics in school, with its classical problems such as solving the hydrogen atom, is only the very first step in a long journey. The quantum foundations area, with its diversified views, is an equally fascinating domain. At first sight it looks like the majority of the current interpretations are “obviously” misguided except for your own, whatever that may be, and all other interpretations must be rooted in classical prejudices. However, this is not the case, and it takes some time and effort to fully appreciate and accept all points of view in interpreting quantum mechanics.

Into all this mix, I am proposing yet another quantum mechanics interpretation, and I will attempt to show that quantum mechanics is actually intuitive and that it all follows from clear physical principles in a reconstruction program. Since the principle names the theory (e.g. the theory of relativity got its name from the relativity principle), I will call quantum mechanics the theory of elliptic composability, and I will show that primitive concepts such as ontology and epistemology have to be adjusted to their corresponding composability class. In particular, the quantum wavefunction is neither ontological nor epistemological in the usual sense: it is not “parabolic-ontological” nor “parabolic-epistemological”, but it will be shown to be “elliptic-ontological”.

I will start this journey by following the arguments in historical fashion, beginning with the EPR argument. I have no clear idea how many parts this series will contain, probably around 10, but I will keep an open format.

At the dawn of quantum mechanics, Bohr struggled with its interpretation, and the ideas of complementarity and uncontrollable disturbances were a major part of the discussion. Today this is no longer the case due to advances in understanding the mathematical structure of quantum mechanics. Even today most textbooks paint the wrong picture of the uncertainty principle due to sloppy mathematical formulation, and this probably deserves a post of its own for clarification.

For the EPR argument it suffices to state that one cannot measure simultaneously, with perfect accuracy, both the position and the momentum of an elementary particle. Einstein, Podolsky, and Rosen then argued along the following lines: suppose a system disintegrates into subsystem 1 and subsystem 2, and we measure position on subsystem 1 and momentum on subsystem 2. If the original system was initially at rest, conservation of momentum implies that measuring the momentum of subsystem 2 tells us with absolute precision the momentum of subsystem 1. But wait a minute: on subsystem 1 we measure the position with perfect accuracy as well, so it seems we have succeeded in beating the uncertainty principle. Quantum mechanics does not allow that, which means quantum mechanics must be incomplete.
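To make the tension explicit, here is the arithmetic behind the argument in my own notation (a sketch of the reasoning, not the original EPR wording):

\[
p_1 + p_2 = 0 \;\Longrightarrow\; p_1 = -p_2 \quad \text{(inferred exactly, without touching subsystem 1),}
\]
\[
\Delta x_1 = 0 \ \text{(measured directly)} \quad\text{and}\quad \Delta p_1 = 0 \ \text{(inferred)} \quad\text{would defeat}\quad \Delta x_1\,\Delta p_1 \ge \frac{\hbar}{2}.
\]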

The whole argument holds provided two critical assumptions hold as well:

  • “If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity.” 
  • “On the other hand, since at the time of measurement the two systems no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system.” 

Both assumptions are actually wrong, and later on John Bell refuted the EPR conclusion based on the second assertion (that of locality). Arguing along similar lines to Bell's, one can show that the first assumption is invalid as well.

The remote effect due to local measurement is called quantum steering, and while it cannot be used to send signals faster than the speed of light, it does change the remote state. Such effects have been observed in actual experiments. In the elliptic composability reconstruction of quantum mechanics it is easy to understand the root cause:
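Here is a minimal numerical sketch of steering versus no-signaling (my own toy example, using only numpy; the state, bases, and function names are mine): measuring subsystem 1 in different bases steers subsystem 2 into different conditional states, yet the outcome-averaged remote state never changes.

import numpy as np

# Bell state |phi+> = (|00> + |11>)/sqrt(2); ordering is (subsystem 1) x (subsystem 2)
phi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)
rho = np.outer(phi, phi)

def measure_subsystem_1(rho, basis):
    """Project subsystem 1 on each basis vector; return (probability, conditional remote state)."""
    outcomes = []
    for v in basis:
        P = np.kron(np.outer(v, v.conj()), np.eye(2))   # projector acting on subsystem 1 only
        p = np.real(np.trace(P @ rho @ P))
        conditional = (P @ rho @ P) / p
        # partial trace over subsystem 1 gives the remote (subsystem 2) state
        remote = conditional.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
        outcomes.append((p, remote))
    return outcomes

z_basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
x_basis = [np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)]

for name, basis in [("Z", z_basis), ("X", x_basis)]:
    outcomes = measure_subsystem_1(rho, basis)
    print(name, "conditional remote states:", [np.round(r, 3).tolist() for _, r in outcomes])
    averaged = sum(p * r for p, r in outcomes)           # what the remote observer sees on average
    print(name, "averaged remote state:", np.round(averaged, 3).tolist())   # always I/2: no signaling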

In classical and quantum mechanics alike, observables play a dual role: that of observables and that of generators. But while in classical mechanics (parabolic composability) the observables of a total system factorize neatly into a product of observables of each subsystem, in quantum mechanics (elliptic composability) observables and generators are mixed together and the factorization is not possible in general (see Fig. 3 in http://arxiv.org/pdf/1303.3935v1.pdf). In other words, the system becomes “entangled”.
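Schematically, in the two-product algebra of Grgin and Petersen used in the preprint, write sigma(A,B) = (AB+BA)/2 for the observable (Jordan) part and alpha(A,B) = (AB-BA)/(i hbar) for the generator (Lie) part; this is my own choice of normalization, so treat the exact coefficients as illustrative. Composing two systems then gives

\[
\alpha_{12} = \alpha_1 \otimes \sigma_2 + \sigma_1 \otimes \alpha_2, \qquad
\sigma_{12} = \sigma_1 \otimes \sigma_2 - \frac{\hbar^2}{4}\,\alpha_1 \otimes \alpha_2 .
\]

In the parabolic (classical) case the hbar^2 term is absent and the observable product factorizes; in the elliptic (quantum) case the generator product leaks into the composite observable product, which is the algebraic face of entanglement.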

In the next post I will show Bell's refutation of the EPR argument based on locality.

Thursday, June 20, 2013

Quantum mechanics and unitarity (part 4 of 4)


Now we can put the whole thing together and attempt to solve the measurement problem. But is there a problem to begin with? Here is a description of the problem as written by Roderich Tumulka http://www.math.rutgers.edu/~tumulka/teaching/fall11/325/script2.pdf (see page 53):

Start with 3 assertions:

• In each run of the experiment, there is a unique outcome.
• The wave function is a complete description of a system’s physical state.
• The evolution of the wave function of an isolated system is always given by the Schrödinger equation.

These three assertions are mutually inconsistent, so in the standard formulation of quantum mechanics at least one of them has to be rejected. From the quantum mechanics reconstruction work, the last two bullets are iron-clad and cannot be violated without collapsing the entire theory. This means that GRW theory and Bohmian interpretations are automatically excluded. The usual Copenhagen interpretation is not viable either, because it makes use of classical physics (and we know that we cannot have a consistent hybrid theory of classical and quantum mechanics). Epistemic approaches in the spirit of Peres are not the whole story either: while collapse is naturally understood there as information update, a non-unitary update means the Leibniz identity is violated as well.

So what do we have left? Only the many-worlds interpretation (MWI), or its more modern form, Zurek's relative state interpretation http://arxiv.org/abs/0707.2832.

However, I will argue for another fully unitary solution, different from the MWI/relative state interpretation (and I agree with Zurek that old-fashioned MWI gives up too soon on finding the solution), but in the same spirit as Zurek's approach. The basic idea is that measurement is not a primitive operation. The experimental outcome creates a huge number of information copies. The key difference between Zurek's quantum Darwinism and the new explanation is in what succeeds in creating the information copies: the full wavefunction (as in quantum Darwinism), or the one and only experimental outcome. In other words, the Grothendieck equivalence relationship is broken by the measurement amplification effects: only one equivalent representative of the Grothendieck group element succeeds in making information copies and statistically overwhelms all the others (for all practical purposes). The information in the “collapsed part of the wavefunction” is not erased, but becomes undetectable.

Of course there are still open problems of a delicate technical nature to be solved in this new paradigm, but they do seem to get their full answers in this framework. Solving them is a work in progress, and the solution is not yet ready for public disclosure.


In subsequent posts I'll show how the wavefunction is neither epistemological nor ontological, and I will touch on Bell's theorem and the recent PBR result, among other things.

Tuesday, June 18, 2013

Quantum mechanics and unitarity (part 3 of 4)


In part 2 we have seen how to construct the Grothendieck group. Can we do this for the composability monoid in the case of classical or quantum mechanics? The construction works only if we have an equivalence relationship, and this naturally exists only for quantum mechanics.

There is no Grothendieck group of the tensor product for classical mechanics, and there is no “ontological collapse” there, only an epistemic update of information in an ignorance interpretation.

In quantum mechanics the situation is different because of unitarity, and one can construct an equivalence relationship starting from a property called envariance: http://arxiv.org/abs/quant-ph/0405161 Skipping the technical details of proving the usual properties of an equivalence relationship, here is the basic idea: whatever I can change by a unitary acting on the system over here can be undone by another unitary evolution acting on the environment over there.
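A minimal numerical sketch of this “undoing” for a maximally entangled pair (my own toy example with numpy; the specific state and unitaries are mine):

import numpy as np

# Maximally entangled system-environment state (|00> + |11>)/sqrt(2),
# ordering is (system) x (environment)
psi = (np.kron([1.0, 0.0], [1.0, 0.0]) + np.kron([0.0, 1.0], [0.0, 1.0])) / np.sqrt(2)

theta = 0.7
u_system = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])   # unitary acting "over here"
u_environment = u_system.conj()                                  # counter-unitary acting "over there"

disturbed = np.kron(u_system, np.eye(2)) @ psi                   # change the system only
restored = np.kron(np.eye(2), u_environment) @ disturbed         # undo it from the environment only

print(np.allclose(restored, psi))                                # True: the change was envariant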

Therefore the correct way to write a wavefunction in quantum mechanics is not |psi>, but, in the Grothendieck way, as the ordered pair (|psi>, null), with the second element representing the “negative elements”, or the environment degrees of freedom which will absorb the “collapsed information” during measurement.

The measurement device should be represented as (null, |measurement apparatus and environment>), and the contact between the system and the measurement device should be represented as the tensor product of the two Grothendieck elements, resulting in:

(|system to be measured>, |measurement apparatus and environment>)

By the equivalence relationship this is the same as:

(|collapsed system A>, |measurement apparatus displaying A and environment>)
as well as all other potential experimental outcomes:
(|collapsed system B>, |measurement apparatus displaying B and environment>)
(|collapsed system C>, |measurement apparatus displaying C and environment>)…

But then, since only one outcome is recorded, we either need to resort to the MWI interpretation, or we need to find another explanation.

The explanation is that the measuring apparatus is an unstable system which produces massive numbers of information copies (think of Wilson's cloud chamber in Mott's problem). Measurement is not a neat and primitive operation, and the one and only outcome creates an extremely large number of information copies which dwarfs the information about the other potential outcomes, now hidden in the environmental degrees of freedom.

Sir Nevill Mott showed that in a cloud chamber two atoms cannot both be ionized unless they lie on a straight line with the radioactive nucleus. In other words, we only need to understand the very first ionization. Similarly, in the Schrödinger's cat scenario, we only need to understand the first decay, and we do not need to hide “the other cat” in the environment degrees of freedom.

Please stay tuned for the conclusion in part 4.

Saturday, June 15, 2013


Quantum mechanics and unitarity (part 2 of 4)


When talking about measurement, one talks about the collapse postulate. Let us take a look at what happens with the underlying Hilbert space. During collapse, the dimensionality of the Hilbert space is reduced to the dimensionality of the subspace onto which the wavefunction is projected. A key point is that the dimensionality of a Hilbert space is its sole characteristic: Hilbert spaces of the same dimension are isomorphic.

Measurement is initiated by first taking the tensor product of the Hilbert space of the system wavefunction with the Hilbert space of the measurement apparatus. This operation increases the dimensionality of the original Hilbert space. Then the collapse decreases the dimensionality.
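As a trivial bookkeeping illustration (my own toy numbers, not from the post): a 2-dimensional system composed with a 3-dimensional apparatus lives in 6 dimensions, and a collapse projects onto a smaller subspace.

import numpy as np

psi_system = np.array([1.0, 0.0])                       # system lives in a 2-dimensional Hilbert space
ready_apparatus = np.array([1.0, 0.0, 0.0])             # apparatus lives in a 3-dimensional Hilbert space

combined = np.kron(psi_system, ready_apparatus)         # the tensor product lives in 2*3 = 6 dimensions
print(combined.size)                                    # 6: the dimensionality went up

projector = np.zeros((6, 6)); projector[0, 0] = 1.0     # collapse onto a 1-dimensional subspace
print(np.linalg.matrix_rank(projector))                 # 1: the dimensionality goes back down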

As an abstract operation, the tensor product respects the properties of a commutative monoid. Short of the existence of an inverse element, this is almost a mathematical group http://en.wikipedia.org/wiki/Group_(mathematics).

To model the collapse in a fully unitary way (and free of interpretations) we would like to construct the tensor product group from the tensor product commutative monoid. Is such a construction possible? Indeed it is, and it is called the Grothendieck group construction http://en.wikipedia.org/wiki/Grothendieck_group Let us explain this with a simple challenge: construct the group of integers Z starting from the abelian monoid of natural numbers N. We need to introduce negative integers using only positive numbers! At first sight this seems impossible. How can such a thing even be possible? N by itself is not enough, but with the addition of an equivalence relationship it can be done.

So consider the Cartesian product NxN, where we think of the first element as a positive contribution and the second as a negative one: p is represented by (p,0) and -n by (0,n). We would like to have something like (p,0)+(0,n) = (p,n), representing p-n.

Also, the inverse of (q,0) is (0,q), since their sum (q,q) plays the role of zero. All this works in general, but the representation of an integer is no longer unique. For example: 7=(7,0)=(8,1)=(9,2)=… and -3=(0,3)=(1,4)=(2,5)=…

Therefore we need an equivalence relationship such that two pairs (a,b) and (p,q) are considered equivalent if a+q = b+p. Notice that the equivalence relationship uses only the “+” operation of the original monoid N. The formal definition is slightly more involved in order to guarantee transitivity for monoids without cancellation: we call two pairs equivalent, (a,b)~(p,q), if there is an element t such that a+q+t = b+p+t.
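A minimal sketch in code of this construction for (N, +), yielding Z (the function names and the chosen representatives are mine, for illustration only):

def equivalent(x, y):
    # (a, b) ~ (p, q)  iff  a + q == b + p  (for N no witness t is needed, since cancellation holds)
    (a, b), (p, q) = x, y
    return a + q == b + p

def add(x, y):
    # the monoid operation applied componentwise gives the group operation on pairs
    return (x[0] + y[0], x[1] + y[1])

def inverse(x):
    # the inverse of (a, b) is (b, a): their sum (a+b, a+b) is equivalent to the identity (0, 0)
    return (x[1], x[0])

seven = (9, 2)           # one representative of +7
minus_three = (1, 4)     # one representative of -3

print(equivalent(add(seven, minus_three), (4, 0)))       # True: 7 + (-3) ~ +4
print(equivalent(add(seven, inverse(seven)), (0, 0)))    # True: 7 + (-7) ~ 0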

Now, since the Grothendieck construction is categorical (universal), it can be applied to the tensor product commutative monoid, and this will explain the collapse postulate in a purely unitary way. Please stay tuned for part 3.

Wednesday, June 12, 2013


Quantum mechanics and unitarity (part 1 of 4)


I will start a sequence of posts showing why quantum mechanics demands only unitary time evolution despite the collapse postulate, and how to solve the resulting problem. For reference, this is based on http://arxiv.org/abs/1305.3594

The quantum mechanics reconstruction project presented in http://arxiv.org/abs/1303.3935 shows that in the algebraic approach the Leibniz identity plays a central and early role. But what is the Leibniz identity? It is the product rule for derivations: D(fg) = D(f) g + f D(g).

Much of standard calculus follows from this rule. For example, using induction one proves D(x^n) = n x^(n-1), and from this and the Taylor series the derivation rules for all the usual functions follow.
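For completeness, here is the induction step; it is a single application of the identity together with D(x) = 1:

\[
D(x^{n+1}) = D(x^n \cdot x) = D(x^n)\,x + x^n\,D(x) = n x^{n-1}\,x + x^n = (n+1)\,x^n .
\]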

In the algebraic formalism of quantum mechanics, the Leibniz identity corresponds in the state space to unitarity. Any breaking of unitarity means that the Leibniz identity is violated as well. This is the case, for example, in the epistemological interpretation of the wavefunction, where the collapse postulate is understood as simply an information update. However (and here is the big problem), breaking the Leibniz identity destroys the entire quantum mechanics formalism. In other words, any non-unitary time evolution is fatal for quantum mechanics.

So how can we understand the collapse postulate? Is quantum mechanics inconsistent? Should quantum mechanics be augmented by classical physics to describe the system and the measurement apparatus? From http://arxiv.org/abs/1303.3935 we know that there cannot be any consistent classical-quantum description of nature. But the formalism which highlighted the problem also shows the way out of the conundrum. Part 2 of the series will present the preliminary mathematical structures which will be used to show how quantum mechanics can be fully unitary even during measurement.

Wednesday, June 5, 2013


New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 5)


“Lagrangian-Only Quantum Theory” by Ken Wharton


What I liked about Ken's talk was the novelty of his idea. Working in the quantum mechanics reconstruction area, I always ask the “what if” questions: what if the universe obeyed different kinds of mathematical relationships? So what if there are no dynamical equations, not even stochastic ones? This is the premise of Ken's approach in arXiv:1301.7012.

As an inspiration one can pick the example of statistical mechanics and work along the following lines (a toy sketch of the recipe follows the list):
1.      Consider all possible microstates
2.      Eliminate inconsistent states
3.      Assign an equal a priori probability
4.      Calculate probabilities as Bayesian updates
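Here is a deliberately trivial sketch of the four-step recipe (entirely my own toy example, not Wharton's): three coins as microstates, with the “law” that exactly two of them show heads.

from itertools import product
from fractions import Fraction

microstates = list(product([0, 1], repeat=3))              # 1. consider all possible microstates
consistent = [s for s in microstates if sum(s) == 2]       # 2. eliminate the inconsistent ones
prior = Fraction(1, len(consistent))                       # 3. equal a priori probability for each survivor

# 4. Bayesian update: given that coin 0 shows heads, what is the chance coin 1 does too?
#    (with equal priors the update reduces to counting the surviving microstates)
given = [s for s in consistent if s[0] == 1]
posterior = Fraction(sum(1 for s in given if s[1] == 1), len(given))
print(prior, posterior)                                    # 1/3 and 1/2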

Now in nature there are distinguished mathematical structures, including the Hamiltonian equations of motion. If we do not insist on dynamics, we open the door to all kinds of other ontologies, for example simulated virtual realities. By killing the dynamics Ken is basically going outside the realm of physics in the hope of discovering new insights. If I understood him well in a private conversation after the talk, Ken's approach is basically the opposite of 't Hooft's: start with a chaotic system in the IR domain and arrive at quantum mechanics in the UV area. In the process Ken claims he recovers something that asymptotically becomes Born's rule. If correct, this would represent a genuinely new insight into the origin of Born's rule, besides the ones from Gleason's theorem or Zurek's program.

One may object that going outside physics is a fool's errand, but then we are reminded of the unphysical PR boxes, which proved fruitful in better understanding quantum mechanics. Will Ken's approach prove as fruitful? Only time will tell, and I wish him luck.

One last word about this series of posts from the conference: here is the link to the conference page http://carnap.umd.edu/philphysics/conference.html where one can find brief descriptions of all the talks.

Monday, June 3, 2013


New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 4)

“Quantum information and quantum gravity” by Seth Lloyd


A thought-provoking talk at the conference was that of Seth Lloyd. He showed how to derive Einstein's general relativity equation from quantum limits on measuring space-time geometry plus an additional black hole assumption.

One way to think of measuring the geometry of space-time is to think of a comprehensive GPS system. Measuring time amounts to counting clock ticks, and this requires energy. Everybody is familiar with the position-momentum uncertainty principle, but the energy-time uncertainty principle is not so clear cut. This is because in quantum mechanics time is a parameter, not an operator, and care has to be exercised in interpreting the energy-time uncertainty relation.

Margolus and Levitin obtained a bound on quantum evolution time in terms of the mean energy E of the system: E delta t >= pi hbar/2. From this, the total possible number of clock ticks in a bounded region of space-time (of radius r and time span t) cannot exceed 2Et/(pi hbar). In principle quantum mechanics does not limit the accuracy of measuring time: all you need to do is add enough energy. But in general relativity, adding energy in a bounded region will eventually lead to the creation of a black hole. So here is the general relativity assumption: we want the radius of the bounded region to be larger than the Schwarzschild radius Rs = 2GM/c^2.

From this (in terms of the Planck time Tp and the Planck length Lp) one obtains the maximum number of clock ticks achievable in a bounded region of space-time before creating a black hole: r t / (pi Lp Tp).
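Here is the arithmetic spelled out (my own reconstruction of the steps, combining the two bounds above):

\[
N \le \frac{2 E t}{\pi \hbar}, \qquad r \ge R_s = \frac{2 G E}{c^4} \;\Rightarrow\; E \le \frac{r c^4}{2 G}
\;\Rightarrow\; N \le \frac{r t\, c^4}{\pi \hbar G} = \frac{r t}{\pi L_p T_p},
\qquad\text{since}\quad L_p T_p = \sqrt{\tfrac{\hbar G}{c^3}}\,\sqrt{\tfrac{\hbar G}{c^5}} = \frac{\hbar G}{c^4}.
\]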

Now r*t is an area, while naive field theory would suggest r^3 t, and naive string theory would suggest at first sight r^2 t.

From these kinds of area considerations, Seth was able to deduce the general relativity equations, inspired in part by Ted Jacobson's ideas (in fact Seth collaborated with Ted on this result). Now you may ask (as I certainly did): if you start with Schwarzschild's radius and you derive Einstein's equations, are you not vulnerable to charges of circularity? Perhaps, but the result is still interesting.

(I have one more story to tell from the conference. Please stay tuned for part 5, the last one.)

Saturday, June 1, 2013


New Directions in the Foundations of Physics Conference in Washington DC 2013 (part 3)


“What is the alternative to quantum theory” by John Preskill


As promised, here is the second story from the discussions following Preskill's talk.

After the talk I was listening to a discussion between John Preskill and Chris Fuchs, and at some point John asked the question: “is there anything else besides classical and quantum mechanics?” Later in the day I approached John, told him that I know the answer to his question, and started to present my ideas captured in http://arxiv.org/abs/1303.3935 The basic idea is simple. Suppose I have on my left side a physical system A subject to the laws of nature. Suppose I also have on my right side a physical system B subject to the same laws of nature. Then if I perform the tensor product composition of system A with system B, I should get a larger system “A tensor B” subject to the same laws of physics. From this one can extract very hard constraints on the allowed form of the laws of nature. In fact it can be shown that there are only 3 such consistent solutions. One is an “elliptic” solution which corresponds to quantum mechanics, one is a “parabolic” solution which corresponds to classical mechanics, and there is a third “hyperbolic” solution which corresponds to something we do not fully understand at this time. Another way to look at the 3 solutions is by Planck's constant: positive, zero, or imaginary.

Mathematically the whole thing can be naturally expressed in terms of category theory, and physically it corresponds to the invariance of the laws of nature under tensor composition. Now, when I explain this to different people, I usually get a polite nod followed by a polite excuse to end the discussion. However, I did not get this reaction from Preskill, who asked cogent clarification questions. He also told me to look up a recent preprint by Anton Kapustin. I did not remember the name, and the next day I asked John to type it for me in the arXiv search field, and lo and behold I found this preprint: http://arxiv.org/abs/1303.6917 titled “Is there Life beyond QM?” Now the core inspiration for my result was a 1970s paper by Grgin and Petersen: http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp/1103900192 and Kapustin had the same inspiration. Reading his preprint it struck me that we had independently discovered the same thing, and I had managed to upload my preprint only 11 days before him. Then the mystery of John's reaction evaporated: Preskill is a colleague of Kapustin's at Caltech.

So now I had good news and bad news. The good news was that I am right and my credibility got a boost. The bad news is that I have competition in an area where I thought I was working alone. When I uploaded my preprint I had left out a piece of it, related to the unitary realization of the collapse postulate. So I rushed to package this result as a separate paper and uploaded it a few days after the conference: http://arxiv.org/abs/1305.3594

The problem is that any violation of unitarity is fatal to QM, as shown by the QM reconstruction project. This includes the collapse during measurement, even when it is interpreted as a Bayesian information update. There is an easy remedy, though, suggested by this composability/category theory formalism, and it is based on the Grothendieck group construction. (I'll explain how this works in detail in a subsequent post.) As a side benefit this solves the measurement problem and eliminates the MWI interpretation as well.

The QM reconstruction program is an area coming of age, and Luca Bombelli maintains a page keeping tabs on all such projects: http://www.phy.olemiss.edu/~luca/Topics/qm/axioms.html I believe that such approaches will eventually lead to the elimination of all known QM interpretations, as QM will become just as easily and naturally derivable as special relativity. After all, how many conferences dedicated to the “correct” interpretation of special relativity and “ict imaginary time” do you know of?