Friday, May 29, 2015

The unreasonable effectiveness of composition axioms


This blog is called Elliptic Composability to celebrate a method of quantum mechanics reconstruction: the invariance of the laws of Nature under composition. The majority of other attempts to derive quantum mechanics from physical principles also contain composition axioms:
  • Barnum and Wilce : “composites are locally tomographic”, 
  • Dakic and Brukner : “the state of a composite system is completely determined by local measurements on its subsystems and their correlations”, 
  • Lluis Masanes and Markus Muller : “the state of a composite system is characterized by the statistics of measurements on the individual components”, 
  • Chiribella, D’Ariano, and Perinotti: “if two states of a composite system are different, then we can distinguish between them from the statistics of local measurements on the component systems”, 
  • Lucien Hardy: “Composite systems rules” (\(N_{A\otimes B} = N_A N_B\) and \(K_{A\otimes B} = K_A K_B\), where \(N\) is the dimension of the state space and \(K\) is the number of degrees of freedom).
So the big question is: WHY? Why does composition occur so often in quantum reconstruction programs? One possible answer is this: quantum mechanics is strange because it predicts correlations stronger than classical correlations. Therefore, to distinguish "quantumness" from classical physics one needs to talk about correlations, which by their very definition require composites.
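A standard way to make "stronger than classical" quantitative (not one of the composition axioms above, just the textbook benchmark) is the CHSH inequality: for \(\pm 1\)-valued observables \(A_1, A_2\) on one subsystem and \(B_1, B_2\) on another,

\[S = \langle A_1 B_1 \rangle + \langle A_1 B_2 \rangle + \langle A_2 B_1 \rangle - \langle A_2 B_2 \rangle, \qquad |S| \leq 2 \ \text{classically}, \qquad |S| \leq 2\sqrt{2} \ \text{quantum mechanically},\]

and notice that the very statement requires a composite of two subsystems.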

However, a much deeper mathematical reason is lurking about, and I was very much surprised to uncover why composition requirements are so "unreasonably effective" in reconstructing quantum mechanics. Basically, there is a unique correspondence between bilinear products and linear maps on the tensor product, and because of it composition axioms (which involve tensor products) have algebraic consequences.

Here is how it works:


To any bilinear product \(f\) between \(A\) and \(B\) we can associate a unique linear map \(\hat{f}\) on the tensor product such that the diagram above commutes (see section 1.6 in http://en.wikipedia.org/wiki/Tensor_product). This is a universal property.
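Spelled out in formulas (with \(C\) an arbitrary target vector space), the universal property reads:

\[f : A \times B \to C \ \text{bilinear} \quad \Longrightarrow \quad \exists !\, \hat{f} : A \otimes B \to C \ \text{linear, with} \ f(a,b) = \hat{f}(a \otimes b) \ \text{for all} \ a \in A,\ b \in B.\]

In other words, every bilinear product factors uniquely through the tensor product.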

Now quantum mechanics has an underlying algebraic structure, a C* algebra, and there are several products of importance: the commutator, which gives the time evolution in the Heisenberg picture; the Jordan product; and the usual complex multiplication of operators. So secretly, when we impose composition requirements, we implicitly constrain the behind-the-scenes algebraic structure. And algebra is like playing the piano: distinct notes, distinct well-defined algebraic structures. For example, in Lucien Hardy's approach, Jochen Rau was able to clarify and expand the arguments by introducing dimensional arguments for the Lie groups, simply because there are only a handful of classical Lie groups (unitary, orthogonal, symplectic).
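To see the algebraic teeth of composition in the simplest instance, take Hardy's rules quoted above. If (following Hardy) the number of degrees of freedom depends only on the dimension, \(K = K(N)\), the composite rules turn into a multiplicative functional equation whose monotone solutions are power laws:

\[K(N_A N_B) = K(N_A)\, K(N_B) \quad \Longrightarrow \quad K(N) = N^r,\]

with \(r = 1\) recovering classical probability theory and \(r = 2\) quantum mechanics, where \(K = N^2\) counts the real parameters of an \(N \times N\) density matrix.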

Similar arguments can be made for all of the other reconstruction approaches above. 

Now, once we realize the power of the composition arguments, we can take full advantage of them and squeeze every ounce of mathematical consequence from them. This is what I've done in http://arxiv.org/abs/1505.05577. Happy reading!

Friday, May 22, 2015

D'Physics unchained


One additional event that happened at the New Directions conference was a video presentation and panel discussion about physics outreach. Brendan Foster from FQXi led this event, and he projected the winning clip of FQXi's first video contest.

The author was Dagomir Kaszlikowski, who gave a conference talk as well and who thrilled the audience with his love for Quentin Tarantino-style movies. To understand why, watch his video:



The physics is serious, and the plot was concocted to present the Elitzur-Vaidman bomb tester. However, I am not sure that Tarantino's style is really the right way to attract a new generation to physics.

My personal favorite from the FQXi video contest was Bohemian Gravity:


which I thought was in a class of its own in terms of artistic creativity and quality.

In terms of outreach, I think nothing compares with Sagan's Cosmos




and the music still gives me goosebumps:




Another personal favorite is the Feynman Lectures on Physics, which are now available online.

There is no cookbook for outreach, and we should spread the love of physics in any way we can. Happy blogging and outreaching.

Friday, May 15, 2015

New Directions in the Foundations of Physics Conference 

(and a reply)


Today I will continue to talk about the New Directions conference. My plans changed after all the slides were posted online (this is a very nice initiative and I urge the organizers to make it permanent), and I'll present them all to give a bird's-eye view of the conference and to have a convenient reference for the future. One thing to recognize about this conference is the extremely generous time slots allocated to the talks and discussions, which allow the speakers to go beyond actual results and lead to an in-depth exploration of motivation.

Without further ado, here are the presentations:

Reinhard Werner: Is an Ontological Commitment at the Quantum Level Helpful for Physics?
https://docs.google.com/viewer?url=http%3A%2F%2Fcarnap.umd.edu%2Fphilphysics%2Fwernerslides.pptx
"After a brief introduction I will describe the operational approach to quantum mechanics, which aims to systematize a minimal pragmatic approach to the empirically relevant intersection of various interpretations. This minimal interpretation is signal-local and does not suffer from a measurement problem. This is then contrasted with Bohmian Mechanics, which sacrifices locality for a classical mode of description, even at the microscopic level. I will argue that the added elements have a somewhat spooky claim to reality, as they are unconnected to empirical fact as a matter of principle."

Michael Esfeld: The Measurement Problem and the Primitive Ontology of Quantum Physics.
https://docs.google.com/viewer?url=http%3A%2F%2Fcarnap.umd.edu%2Fphilphysics%2Fesfeldslides.ppt
"In this talk, I will argue that an ontology of matter arranged in physical space (known as a primitive ontology) is a necessary condition to avoid the measurement problem of quantum physics. To turn this necessary condition into a sufficient one, a dynamics is needed that excludes superpositions of matter in space, but includes entanglement. I use the de Broglie-Bohm theory to illustrate how these conditions can be satisfied. The main part of the talk then consists in sketching out a minimal primitive ontology of quantum physics in terms of matter points that are individuated only by the spatial relations in which they stand. The quantum state—including parameters such as a wave-function, spin, energy, mass, etc.—comes in only through its dynamical role for the evolution of the configuration of matter points. It is not mandatory to conceive that state as a physical reality that exists over and above the configuration of matter points."

Gilles Brassard: Parallel Lives: Why Quantum Mechanics is a Local Realistic Theory After All.
http://carnap.umd.edu/philphysics/brassardslides.pdf
"Most physicists take it for granted that the experimental violation of Bell's inequality provides evidence that our universe is nonlocal. However, this is not the case! Indeed, I shall describe a toy universe (not meant to describe our world) in which Bell's (CHSH) inequality is maximally violated, yet this world is purely local. Then, I shall present mathematical requirements for quantum mechanics itself to be local. It turns out that these requirements cannot be satisfied if we take the universal wavefunction as a complete representation of reality, even if we admit the existence of the multiverse. Nevertheless, I shall present a solution to this conundrum with a framework that provides a simple local-realistic description of quantum mechanics. In particular, one can recover the whole (even if entangled) from the description of the parts, in sharp contrast with the standard formalism of quantum mechanics."

Chris Fuchs: What QBism Learns from the Bell Inequality Violations.
http://carnap.umd.edu/philphysics/fuchsslides.pdf
"In QBism, a 4-dimensional Hilbert space is a 4-dimensional Hilbert space and should be treated with the full apparatus of quantum theory. It makes no difference whether one insists on thinking of the number 4 as the unique integer 4 or instead as the composite 2 x 2. That is, breaking a system into components, whether within a single atom or within an experiment across Lake Geneva (as in a Bell experiment), changes nothing about this fundamental edict. The same can be said of any composite integer pq, whether p=2 and q=2Avogadro's number, or anything else. The game of quantum mechanics is to be consistent with its use all the way through and not get cheap about it when a problem seems to involve other observers, i.e., quantum systems with particularly big Hilbert spaces."

Samson Abramsky: Contextuality: At the Borders of Paradox.
http://carnap.umd.edu/philphysics/abramskyslides.pdf
"Contextuality can be understood as arising where we have a family of data which is locally consistent, but globally inconsistent. From this point of view, it can be seen as a pervasive phenomenon, arising not only in quantum mechanics, but in many other areas. There are also remarkably direct connections to logical paradoxes. One can say that contextual phenomena, which we must accept as key features of our picture of physical reality, lie at the very borders of paradox, but do not cross those borders.
On the qualitative level, we show how a hierarchy of strengths of contextuality emerges naturally in a sheaf-theoretic language, and how the ‘All-versus-Nothing’ arguments which have played an important role in quantum foundations are witnessed by sheaf cohomology as obstructions to global sections. On the quantitative level, we show that all Bell inequalities, for a very general notion of contextuality scenarios, arise uniformly from logical consistency conditions."

Dagomir Kaszlikowski: The Triangle Principle: A New Approach to Non-Contextuality and Local Realism.
No link
"In my talk I will introduce the ‘triangle principle,’ which is a new approach to quantum contextuality and violations of Bell inequalities. This approach recovers all known bipartite Bell inequalities, state dependent non-contextual inequalities, predicts new such inequalities as well as various monogamy relations between them. I will also show that the triangle principle allows one to perform an experimental test to distinguish between quantum and classical correlations without using the notion of probability. I will finish my talk with a few remarks about the philosophical consequences of the triangle principle."

Dan Greenberger: Why We Are Having Such Trouble Hooking Up Gravity to Quantum Theory.
No link
"Besides the more obvious problems (non-linearity, etc.) in trying to connect relativity to gravity, my own feeling is that there is a more subtle reason that poisons the whole enterprise.
While energy plays an important dynamical role in quantum theory, mass enters as a mere parameter in our theories, even though it physically plays a dynamical role. (I don't mean internal symmetries, I am talking about space-time). For example, when a proton and electron are brought together, they form an H atom, whose mass includes the binding energy. But the individual masses don't change. The new mass is put in by hand, but should automatically change with the interaction.
Similarly, proper time is determined geometrically in classical relativity, but physically it also plays a much more complicated role in quantum theory. Both these concepts should play a dynamical role, and in fact should be canonically conjugate variables, and obey an uncertainty principle. I believe this is needed for the self-consistency of the theory, and it constitutes a necessary first step if the theories are to be consistently conjoined."

Sabine Hossenfelder: Analog Duality.
http://carnap.umd.edu/philphysics/hossenfelderslides.pdf
"I will discuss a new duality between strongly coupled and weakly coupled condensed matter systems. It can be obtained by combining the gauge-gravity duality with analog gravity. In my talk I will explain how one arrives at the new duality, what it can be good for, and what questions this finding raises."

Mile Gu: Quantum Simplicity: Can Quantum Theory Better Isolate the Causes of Natural Phenomena?
http://carnap.umd.edu/philphysics/guslides.pdf
"We understand complex phenomena around us though predictive models — algorithms that generate future predictions when given relevant past information. Each model encapsulates a way of understanding future expectations through past observations. In the spirit of Occam’s razor, the better we isolate the causes of what we observe, the greater our understanding. This philosophy privileges the simpler models; should two models make identical predictions, the one that requires less input information is preferred.
Yet, for almost all stochastic processes, even the provably optimal classical models waste information. The amount of input information they demand exceeds the amount of predictive information they output. In this presentation, I outline how we can systematically construct quantum models that break this classical bound, and show that the system of minimal entropy that simulates such processes must necessarily harness quantum dynamics.
I will discuss the potential consequences of these findings to complexity theory, where the minimal amount of causes to model a phenomenon is often used as an intrinsic measure of its structure of complexity. I show, by comparing the simplest classical models with even simpler quantum models on a range of stochastic systems, that the quantum generalization of this measure can have drastically different qualitative behaviour. Thus many observed phenomena could be significantly simpler than classically possible should quantum effects be involved, and existing notions of structure and complexity may ultimately depend on the type of information theory we use."

Joe Henson: How Causal is Quantum Mechanics?
https://docs.google.com/viewer?url=http%3A%2F%2Fcarnap.umd.edu%2Fphilphysics%2Fhensonslides.pptx
"I will discuss some attempts restore a meaningful notion of ‘locality,’ meaning lack of superluminal causal influence, to quantum theory (QT) after Bell's theorem, in particular ‘denial of independence of settings,’ ‘denial of the reality of distant outcomes,’ and ‘denial of ontological separability.’ I will point out a common problem with these attempts, and use this discussion to frame and motivate a new question: *what aspects* of the notion of causality can be maintained when dealing with QT?
If time allows, I will describe an emerging program to answer this question. Rather than imposing an answer on quantum theory, this program first delves into the details of QT in order to understand what analogies to the standard ‘Reichenbachian’ idea of causation can be made, and what aspects of this package of ideas must be given up. I will argue that this fledgling program holds the promise of a genuinely new — and useful — understanding of quantum theory."


From all this one sees the great diversity of the foundations community, and the fact that no one challenges the existing points of view more than the community itself. Last time I offered Lubos, in good faith, a chance to present his point of view. Apparently I was mistaken in my assumption of how he would respond. Here was his reply:

1) It would take a lot of time which I won't necessarily have now.
2) You have about 100 times fewer readers than I do.
3) I think that this ratio is encouraging, and it's better to leave it in this way.
4) I like the current state of affairs for various reasons. Your blog still spreads pseudoscience but it's not influential (a much more optimistic appraisal than when one faces "mainstream media" with 100 or 1,000 times greater impact than TRF which spread the same pseudoscience as you do).
5) By the comments from Matt Leifer, you turned your blog into a 100% crackpot blog.
On number 1, Lubos later gave a lengthy reply despite not having the time. Each interpretation of quantum mechanics has strengths and weaknesses, and it is easy to find fault with any of them. Challenging Matt's answers was the easy way out; responding to the faults of the traditional point of view is much harder, and I would have asked Lubos different questions related to his stance on quantum mechanics. I know his position is weakest around the measurement problem, where he has used handwaving in the past.

For numbers 2 and 3, I want to point out that in science the size of your following counts for nothing, even if you are the Pope and claim to speak with God. The Sun does not move around the Earth regardless of how many people believe it does.

Number 4, on spreading pseudoscience, is plainly absurd. The quality of the readership is more important than its quantity, and I do not want to build a readership around false ideas, like the claim that global warming is not real. On topics other than quantum mechanics, Lubos and I sometimes talk about the same things, like \(1+2+3+\cdots = -1/12\). My presentations have more in-depth information (like introducing the outstanding lectures of Mr. Bender) and I presented the idea with more rigor. On quantum mechanics topics, TRF does not even come close to this blog. Quantum mechanics is best expressed in the language of sheaves, category theory, algebraic topology, and non-commutative geometry, and I have yet to see any meaningful in-depth discussion of them on TRF. But perhaps this is not the focus of TRF, whose focus is high energy physics. Then again, has anyone seen any presentation there of the modern developments of gauge theory, like Seiberg-Witten theory? Topics like \(1+2+3+\cdots = -1/12\) are cheap shots introduced for reasons of expedience, and I know I am guilty of this too, because sometimes I do not have the proper time to put into my blog. However, I strive for real in-depth explanations of genuine value.
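For the record, the honest reading of that provocative equation is through analytic continuation: the series defines the Riemann zeta function only for \(\mathrm{Re}\, s > 1\), and \(-1/12\) is the value of its continuation at \(s = -1\), not an ordinary sum:

\[\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} \quad (\mathrm{Re}\, s > 1), \qquad \zeta(-1) = -\frac{1}{12}.\]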

On number 5, my answer is that my job is to ask hard questions and create opportunities for presenting opposing points of view, and I believe I have done that. Sure, I agreed with Matt that pursuing questions of interpretation is a worthwhile scientific activity, and Lubos' earlier criticism of Matt on this point is baseless. As of today there is no consensus in the community of experts dedicating their careers to pursuing those kinds of questions.

Friday, May 8, 2015

Interview with an anti-Quantum zealot


On April first, Matt Leifer had a very funny entry on his blog: “Lubos Motl is right”. For some background, Lubos Motl is the author of the well-known blog The Reference Frame, where he is not shy to call out “the Emperor is naked” in a most politically incorrect fashion. On quantum mechanics I have sometimes found Lubos’ opinions out of touch or bizarre (like his support of ER=EPR: forget publishing, just think whether you would even manage to upload such a paper to the arXiv without risking reclassification to General Physics if you were not already a famous physicist). Still, his quantum opinions do echo the sentiment of most physicists not working in quantum foundations, and there is a genuine gap of understanding between the community of physicists at large and the minority working on making new sense of quantum mechanics. Matt agreed to answer a few questions about his position, motivation, and ideas, and I hope this will help bridge that gap.





So Matt, let’s start by clearing the air. You have found a way to make a ton of money and get super filthy rich by selling the T-shirt in the picture above. How many T-shirts have you sold?

As of today, 4. But let's get one thing straight: I get only $2 CAD for each t-shirt sold, and I have promised to donate my commission to the Next Einstein Initiative of the African Institute for Mathematical Sciences, so that means they'll get $8 CAD so far. We can do better than that. Please remember that if you live in a cold climate you can also buy anti-quantum zealot hoodies, or you can buy a mug, or a onesie for your little one. These are all perfect presents for the quantum foundationalist in your life.

You stated that you did not like any existing interpretations. (For the record, and a bit of shameless self-promotion: I do not like any of them either, and that is why I work on my own interpretation. The correct one, which will take over the physics world ;) ) What is wrong with good, old-fashioned Copenhagen? Or with a modern Copenhagen variant like consistent histories?

This is a complicated question because there is no one single Copenhagen interpretation.

Some people call the type of interpretation one usually finds in textbooks by the name "Copenhagen". This is very different from the views of Bohr, Heisenberg et al., i.e. the interpretation that actually comes from Copenhagen. I prefer to call the textbook interpretation the "orthodox" or "Dirac-von Neumann" interpretation because it derives from the famous books of Dirac and von Neumann. To my mind, the orthodox interpretation is simply inconsistent. It treats the quantum state as a property of the quantum system, which evolves unitarily in the ordinary course of affairs, but suddenly jumps to a new state when a measurement is made. The latter is inconsistent with treating measurement as a unitary interaction between system and measuring device. Unless one is prepared to accept measurement as a primitive, i.e. to divide up those interactions that count as measurements from those that don't in advance, this is a reductio ad absurdum for the orthodox interpretation. To my mind, it is simply wrong, and obviously so.

As an aside, I think a lot of the confusion about the interpretation of quantum theory actually comes from setting up all of the problems in opposition to the orthodox interpretation.  For example, a lot of people will tell you that the main problem we have to solve is the measurement problem, but the measurement problem is a problem with the orthodox interpretation of quantum theory, not with the theory itself.  You need to believe that the quantum state is a complete and literal description of reality in order to even set it up.  I think we should instead start from a much more minimal view of the meaning of quantum theory that is non-committal about the status of the quantum state, i.e. just start from the predictions for experimental outcomes that we all agree on, and use that as the starting point for discussion.

Moving on to real Copenhagen, which is most strongly associated with the views of Bohr, I don't actually have too much of a problem with this.  I think that some of its modern variants, like QBism, are perfectly consistent, but I just don't think they are correct.  A lot of people will tell you that Bohr's writings are far too unclear to extract a unique interpretation from them, and that is true, but I think we can extract one or two key ideas.  Firstly, unlike the orthodox interpretation, the quantum state is not supposed to be a direct representation of reality in Copenhagen.  As Bohr says, it concerns not reality itself, but rather "what we can say about Nature".  This is a clear statement of a psi-epistemic position.

The other important aspect of Copenhagen is a split between the microphysical world, which we are to describe using quantum mechanics, and the "classical" world, which we are to describe using the concepts of classical physics.  Some people seem to think that this posits a definite cut that we have to put at a definite scale somewhere between the micro- and macroscopic.  However, in Copenhagen it is clear that this cut is not supposed to have any definite location.  If you are concerned about whether a given physical system should be put on the classical or quantum side, perhaps because you are uncertain about whether quantum coherence plays a role in its operation, then Copenhagen advises you to put it on the quantum side.  In fact, you can, in principle, move the cut as far up the chain as you like, putting more and more things on the quantum side as needed, although in practice one does not have to go too far up the chain to describe most real world experiments.  The only thing that Copenhagen insists on is that the cut needs to be put somewhere.  This is not because there are any physical systems that are "fundamentally classical" and cannot be described by quantum theory, but rather because the quantum formalism is not a literal description of reality, and hence there must be some classical systems around for its predictions to refer to, i.e. measuring devices and the like.  Bohr sometimes talks about the necessity of describing these systems according to classical physics, i.e. Newtonian physics complete with positions and velocities and the like, but elsewhere he only emphasizes the need to talk about them in "ordinary language".  I interpret this as meaning that the "classical" systems must have unambiguous observable properties that we can communicate to one another, i.e. things like pointers on measuring devices pointing to specific readings, and we must assume that these are objective properties of the world.  This is more important than positing that they obey exactly the equations of classical physics.

Read like this, I think Copenhagen is fairly consistent. It needs a few refinements to properly deal with experiments like Wigner's friend, but I think the modern variants like QBism can deal with that. I also think that the Copenhagen advice on the moveable cut is pretty good advice for the practising physicist, i.e. to put it as high as necessary and no higher, and we now have quantitative tools like decoherence theory to help us decide exactly where it should go. The main objection I have to Copenhagen is that it does not seem to offer any advantages over a minimal statistical interpretation in which we accept the predictions of quantum theory as given, but are more non-committal about what it says about reality. I think that would be less confusing for the practising physicist. Copenhagen involves a lot of metaphysical claims in addition to this, e.g. claims that certain questions are necessarily meaningless and that it is necessarily impossible to achieve a deeper description of reality. There was no good evidence for these claims at the time that Copenhagen was first proposed, and it stalled investigation of these issues for many decades. Perhaps, one could argue, no-go theorems like Bell, Kochen-Specker and PBR now provide some evidence, but the Copenhagenists were willing to make these claims far before we had such evidence and tried to shut down the avenues of inquiry that led to these results. Quantum theory has always been beset by the problem of quantum jumps, by which I mean that quantum physicists are always jumping to conclusions, so I think we should try to avoid this, above all else.

The other thing I dislike about Copenhagen is that it does not seem to tell specifically on quantum theory. By this I mean that, if we had any physical theory at all and we were confused about how it should be interpreted, then, so long as the theory made definite predictions for the outcomes of experiments, we could always do a Copenhagen job on its interpretation. I think one of the jobs of a good interpretation is to uncover the explanatory structure of the theory, and that this should be useful for generalizing the theory beyond its current scope. Copenhagen seems to do a rather poor job of this. Something Copenhagen-like can always be used as a fall-back position though.

Regarding consistent histories, it is a bit inaccurate to lump it in with Copenhagen (at least I'll have to deal with another long email from Bob Griffiths if I do so again). I think Omnes looks at it this way, but Griffiths wants to view it as a realist interpretation, just with the "single-reality" criterion thrown out. It is more difficult to tell what Gell-Mann and Hartle intend, particularly as they keep revising their interpretation by adding exotic probabilities and such like. Furthermore, in Saunders-Wallace many-worlds, the consistent histories formalism is used to define what they mean by "worlds", so we could also think of it as a type of many-worlds theory. Nonetheless, what we have is a broad class of interpretations, based on a histories formalism and using the decoherence conditions to decide when we have "classical" worlds to which ordinary probabilities can be assigned.

The main problem I have with consistent histories is that I think it is ill-founded. In standard quantum theory, if we prepare a system in some state and make a sequence of measurements on it then we get a formula for the probability of the outcomes. Consistent histories takes this formula and says that it applies even if we don't actually make the measurements (providing the consistency conditions are satisfied), where now we are to think of the projectors as representing properties of the unobserved system rather than measurement outcomes. This is totally bananas, or at least an example of the type of jumping to conclusions that I would like to avoid. If there is one thing that we know about quantum measurements it is that they are not mere passive observations of the system. Therefore, what justifies taking a formula that applies to a necessarily invasive process and saying that it applies even without that process? Doing so leads to some pretty bizarre assignments of conditional probabilities, such as in the Aharonov-Vaidman three-box paradox, where the consistent historian is forced to say that there is a consistent set of histories in which the ball is definitely in box 1, another in which it is definitely in box 2, but this is OK because the two sets of histories have no common refinement, so their predictions should not be combined. But such effects also crop up in classical models in which measurement causes disturbance, in which they have a perfectly straightforward explanation, i.e. the disturbance caused by the measurement can affect the probability of the later postselection. If you applied a consistent-histories-like formalism to these classical models they would imply a similar split into two incomparable but contradictory sets of histories, which is clearly nuts, bananas, and whatever other combination of fruits and vegetables you care to supply. So this, in short, is why I don't like consistent histories.

The ontic camp of quantum mechanics interpretations justifies its position by appealing to Bell and his opposition to "measurement" as a primitive notion, and seeks to construct an observer-independent, consistent narrative of quantum mechanics. Why are you not in the ontic camp?

I am not sure here if you mean the psi-ontic camp, or the realist camp in general.  If it's the former, then none of these arguments tell specifically on the psi-ontic/psi-epistemic distinction, so that's why we needed theorems specifically targeted at that.

I am in the realist camp to a large degree, but I am not prepared to accept an interpretation of quantum theory just because it has a well defined ontology, if I think it has a lot of other problems.  To my mind, de Broglie-Bohm, collapse theories, and many-worlds all fall in this category, but I'm not going to engage in a take-down of each one as that would take too long.  I have already ranted against consistent histories and I think a rant against one interpretation per interview is probably enough.

I understand that the epistemic position appeals to you, but you consider yourself a realist (and so you are a "standard" psi-epistemist). Here is a hardball question: isn't this position discredited by the PBR theorem? If not, is the psi-epistemic position falsifiable? Is it real science, or is it something like astrology?

One could equally argue that Bell's theorem and the like discredits hidden variable theories, i.e. if you are committed to locality then you need to come up with a more exotic type of ontology or go neo-Copenhagen.  The same is true of the psi-epistemic position.  If you are really committed to it then there are lots of things still to try, such as retrocausality, relationalism, and many-worlds.  Given that all of these have already been proposed as responses to Bell's theorem, I don't see that PBR poses an especially new threat here.  In fact, the idea that we are looking for a psi-epistemic theory places new constraints on what these theories must look like, so it might actually help in the search for a viable ontology.

No scientific idea is ever falsified on its own, but rather along with a variety of other assumptions about the theoretical framework and the working of experimental apparatus.  One always has a choice about which to throw out in the face of new evidence.  I would argue that the ontological models framework in which PBR was proved was already on sketchy grounds due to the previous no-go theorems like Bell.  Therefore, it only represents a starting point on investigating the issue.  It may turn out that all of the proposed alternatives have their own difficulties, or that we can prove psi-ontology within some reasonably well-defined class of them.  If so, I think the evidence will be strong enough that I'd have to go psi-ontic or neo-Copenhagen (but, as I argued, Copenhagen is an unfalsifiable idea if ever there was one, so do you want to call that unscientific too?).  I am not sure, at present, whether my realist sympathies are stronger than my psi-epistemic ones, but I don't think I have to make that decision just yet.

Not all ideas that are useful to science are directly falsifiable. Instead, there is a pool of ideas and principles that get mixed together into our theory construction. Some of them turn out to be important to the future of science, and some of them turn out to be dispensable and get jettisoned somewhere along the way. It remains to be seen what becomes of the psi-ontic/psi-epistemic distinction, but it is far from astrology, as it has already led to rigorous theoretical results and experiments.

What is the difference between an anti-quantum zealot and a crackpot?

Both of these are fairly difficult to define. An anti-quantum zealot is a person that Lubos has decided to call an anti-quantum zealot. Generally speaking, this will be anyone who promotes an interpretation of quantum theory that is deemed "classical" by Lubos' lights, which would include things like de Broglie-Bohm theory and spontaneous collapse theories. It doesn't include all realist theories, as many-worlders tend to be called idiots instead. It also seems to include people, such as myself, who work on theorems about ontological models for quantum theory, even if we do not actually believe these models are good descriptions of reality, but are rather trying to investigate the differences between quantum and classical, or using them as a relatively well-defined starting point for something else. However, this is applied rather inconsistently by Lubos; for example, PBR were not called anti-quantum zealots by him.

For crackpots I can do no better than John Baez http://math.ucr.edu/home/baez/crackpot.html  Of course, some anti-quantum zealots are crackpots and vice versa, but generally they are incomparable sets.

What is realism?

Broadly speaking, scientific realism is the idea that there is a physical world that objectively exists and is independent of us, and that the job of science is to attempt to describe it.  The word "attempt" is key here as, of course, we only have access to reality indirectly via our measurements and sense impressions, and history has shown that we often do a bad job of converting those into a picture of what reality is like.  Nonetheless, the realist asserts that our best physical theories provide a better picture of the world than anything else we have, so we are better off believing that the entities it posits really exist than we are not doing so.  So, for example, if the standard model posits entities like quarks which we cannot directly observe, then thinking that quarks actually exist is more accurate than thinking they don't.

This is in contrast with anti-realist positions, which only accept the reality of what can directly be verified, and view all other entities as mere theoretical constructs, ultimately to be analysed in terms of things that can be directly observed.

Put this way, I believe that realism is a position that few physicists would deny.  However, in the specific context of quantum theory it often gets conflated with narrower ideas, such as the idea that all observables must have definite values all of the time, or that a model must be formulated within the standard hidden variables/ontological models framework to be called "realist".  Given this, it is no surprise that a lot of physicists call themselves anti-realists because they have these stronger ideas in mind.  With the proper understanding though, I think that most physicists are probably realists.

What is epistemic?

Broadly speaking, anything that refers to knowledge is epistemic. However, what we are trying to get at with the psi-ontic/psi-epistemic distinction is the distinction between something that is an intrinsic property of an individual system versus something that is not. The archetypal example of the latter is a probability distribution. Whatever your favourite interpretation of probability is, there is still a distinction between probabilities and intrinsic properties. A probability must be defined with respect to relative frequencies, rational beliefs, conditions surrounding an experiment, or something like that. Whether you call that "epistemic" or not does not really matter, e.g. you would do if you were a Bayesian, but you may prefer "statistical" if you are a frequentist.

People often get lost in the terminology, and the specific reference to "epistemic" or "knowledge" can be misleading.  What is at stake is whether a quantum state is in closer analogy to a probability distribution, or an intrinsic property like a phase-space point.

What is your intuition about quantum mechanics?

The strongest intuition I have about quantum theory (and note that I deliberately eschew the term "mechanics" here) is that it is best understood as a kind of nonclassical probability theory.  This view is extremely powerful and useful in many areas of physics.  For example, in quantum information and computation, if you want to understand how classical protocols get generalized to quantum ones then probability distributions become quantum states, stochastic maps become quantum channels, etc.  Additionally, quantum probability theory based on operator algebras has been very successful in understanding statistical mechanics.  Another example is the classical limit of quantum theory, which is best understood as a Liouville limit, where quantum states are used to derive probability distributions over phase space obeying the Liouville equation, rather than the Newtonian limit with definite trajectories.  Finally, if you try to define quantum chaos thinking that quantum states are like points in phase space you will get very confused.  Generalizing the classical definitions of chaos in terms of probabilities, e.g. the entropic definitions, works much better.  I could go on.  There are dozens more examples.

Given all this, it would be very puzzling if the quantum state were not something more like a probability distribution than a state of reality. That would make it a miracle that these probability-based generalizations work so well. Ultimately, I think this is where my psi-epistemic convictions come from.
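To make Matt's analogy concrete, here is a minimal numerical sketch of my own (in Python with NumPy; the particular matrices are illustrative choices, not anything from Matt's answer): a classical probability vector evolves under a stochastic matrix, while a quantum state (density matrix) evolves under a channel in Kraus form, here the depolarizing channel.

import numpy as np

# Classical side: a probability vector evolves under a column-stochastic matrix.
p = np.array([0.9, 0.1])                  # distribution over two outcomes
T = np.array([[0.8, 0.3],
              [0.2, 0.7]])                # each column sums to 1
p_out = T @ p                             # still a valid distribution
print("classical:", p_out, "sum =", p_out.sum())

# Quantum side: a density matrix evolves under a channel in Kraus form,
# rho -> sum_k K_k rho K_k^dagger, with sum_k K_k^dagger K_k = I.
rho = np.array([[0.9, 0.3],
                [0.3, 0.1]], dtype=complex)   # a valid qubit state (unit trace)

lam = 0.25                                # depolarizing strength
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [np.sqrt(1 - 3*lam/4) * I] + [np.sqrt(lam/4) * P for P in (X, Y, Z)]

rho_out = sum(K @ rho @ K.conj().T for K in kraus)
print("quantum trace preserved:", np.isclose(np.trace(rho_out).real, 1.0))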

You are in an elevator with Edward Witten and he asks you to give him the “elevator pitch” about your approach. What do you say?

Witten was visiting Perimeter at the same time we were chatting away at the New Directions conference, so the course of events that led to this interview actually prevented it from happening (not that Witten would bother talking to me anyway).

If I had to emphasize one thing it would be that it is possible to make progress in quantum foundations.  It is not all about wishy-washy discussions that never lead anywhere, but we can actually turn these debates into precise questions that get resolved by rigorous argument and experiment, just like in the rest of physics.  Bell's theorem is the best example of this, and it has taken us a long time to realize we can investigate other aspects of quantum theory in a similar way, but we are now doing this.

The elevator breaks down and you now have the attention of Edward Witten for a much longer time. How do you elaborate on your prior points?

To be honest, if I have a long time to spend with Witten, I would be more likely to ask him about his work and what he finds interesting than to go spouting on about quantum foundations.  I may feel like quantum foundations is very important for the future of physics, but that does not mean that Witten's insights on quantum field theory and its connection to mathematics are not even more important.  So I feel that the best use of the time would be for me to get all of the insights I can from him rather than the other way round.

David Albert has already spent an afternoon discussing quantum foundations with Witten.  I think they talked about Bell's theorem, the measurement problem, many-worlds, and perhaps a few other things.  Albert told me that Witten said it was refreshing and amusing to discuss these things, so he wasn't completely anti-foundations, but he's not likely to drop what he's doing in favour of foundations.  That would be ridiculous.  He is already very successful with his own research agenda.

However, if I did have the opportunity to discuss one thing with any non-foundational physicist, it would be Bell's theorem, as it is our best example of progress.  Most physicists know of it, and maybe also know a proof, but they don't understand it well, or what its applications are.

What is your approach to quantum foundations?

Today there are many quantum interpretations and no single one manages to win universal acceptance. What does it take for a new interpretation to be accepted by everyone?

I'm going to answer these two questions together because they are closely related.

The slogan for my approach is my repeated ad nauseam joke about Schrödinger's quantum jumps --- the problem of quantum jumps is that quantum theorists are always jumping to conclusions. This obviously applies to the old Copenhagen hegemony, where people were prepared to say quite outlandish things about quantum theory with little evidence, but it is also meant to apply to the modern debates.

A typical history of a quantum foundations researcher up to the late 90's goes something like this. When they learned quantum theory at university, they were confused. They were told all sorts of outlandish things about the theory that did not seem to be supported by the evidence, and furthermore their instructor shut down any attempt to inquire further about the foundations.

Then, at some point in their career they encountered an obscure approach to quantum theory that did seem to make sense, be it Bohmian mechanics, spontaneous collapse theories, many-worlds, etc.  They then decided to work on that approach and faced continual challenges for doing so.  Maybe it was hard to get a job, hard to get published, and they certainly encountered bad arguments as to why their approach was completely and obviously wrong.  In this climate, it is only natural that such a person would become a staunch defender of their theory, to the exclusion of almost anything else, and develop a very aggressive attitude in arguing for their approach.

The story I have just told is a bit of a cartoon, but I think it explains some of the sociology of the field.  Namely, the traditional approach has been to grab onto one very specific approach to the exclusion of everything else and defend it to the hilt.  Many of the people who do this are still around and I do not want to criticize them too much.  They were the torch-bearers for the idea that thinking about foundations is a fruitful activity in an environment where most people could not care less, and the modern field would not exist without them.  Nevertheless, I think we can now afford to step back and critically assess what has been done so far, and hopefully come up with new ideas that have a chance of leading to progress.

Overall then, I want to make a plea for more open mindedness in the foundations of quantum theory, but we should not be "so open minded that our brains fall out" (see http://www.skeptic.com/insight/open-mind-brains-fall-out-maxim-adage-aphorism/ for the origins of this quote).  This means that we need to adopt a critical attitude, properly weigh the evidence, make rigorous arguments, and be absolutely clear about what we are trying to do.

One thing we should not be trying to do, at this point in time, is to solve the measurement problem.  As I said earlier, the measurement problem is really a problem with the orthodox interpretation of quantum theory, and not with quantum theory per se, and in any case we now have at least half a dozen solutions to it.  The fact that none of these alternative interpretations of quantum theory has caught on as the mainstream view should give us some pause for thought as to whether they are really going in the right direction.

As an aside, I recently listened to an FQXi podcast http://fqxi.org/community/podcast/2015.04.18 in which Jean Bricmont, an old-school quantum foundations researcher if ever there was one, described the reasons that he thinks Bohmian mechanics has not caught on as the mainstream view. His reasons are entirely sociological, having to do with the Copenhagen hegemony and the irrational refusal of most physicists to entertain alternative ideas. I will admit that, in the general population of physicists, one more often encounters bad arguments for not accepting alternative interpretations than good ones. For example, you will hear that Bell and/or von Neumann already proved the impossibility of theories like Bohmian mechanics, or that its nonlocality means that it necessarily cannot be generalized to relativistic field theory. This is a relic of the fact that most physicists are still not that well educated in foundations. But come on dude! There is a whole community of researchers in the foundations of quantum theory, albeit a comparatively small one, who have dedicated their careers to properly understanding quantum theory. These people have thought about these matters much more deeply than most physicists, and yet Bohmian mechanics has not gained uniform acceptance even within this community. Furthermore, if Bohmian mechanics were really the correct view of quantum theory --- if it really helped one to think as clearly as possible about the meaning and application of the concepts of the theory --- then it would have already proved essential to the future progress of physics and have been accepted. The general physics community, whilst stubborn and skeptical about non-mainstream ideas, is not the socially dominated festival of cultural relativism that Bricmont appears to think it is (ironically so for the co-author of "Fashionable Nonsense"). Fruitful ideas only need to be accepted initially by a small number of people. If they are genuinely useful then the rest of the community will eventually see massive progress being made and adopt them, perhaps slowly over a long period of time, but they will gain acceptance eventually.

I think that last aside has already revealed one of my prejudices about foundational enquiry. I do not want to make sweeping statements about the nature of truth in general, but one thing that a scientific truth ought to have is some pragmatic value. There may be a deeper notion of truth as well, but in order to call something "scientific" it has to have some sort of pragmatic utility. I define pragmatic utility quite broadly: it could mean making a different prediction from existing theories that is later confirmed, it could mean being essential to theory construction, or it could just mean a helpful way of thinking that makes it far easier to derive some result than it otherwise would have been. I am sure there are some other things I have not thought of that could be included as well.

To make this point clearer, let's look at a non-quantum example that has this kind of pragmatic value. In the foundations of probability, there are various points of view including frequentism and various subjective/Bayesian views. The frequentist view (as well as Popper's falsificationism) was a heavy influence on classical statistics. The subjective view is a big influence on Bayesian statistics. Whilst it is not impossible to pursue either of these statistical methodologies independently of the foundations of probability, foundational thinking continues to inspire new statistical methodologies, which then go on to successful use in practical applications. It would be fairly difficult to come up with such methodologies and appreciate when and why they work without some understanding of the foundations. Further, the ubiquity of Bayesian methodology lends at least some credence to subjective foundations, even if it does not pin them down uniquely. Nobody can really defend the idea that there is no truth to the subjective approach, even if they posit that some more objective notion of probability is needed in addition.

It is this type of indispensability that I want for the foundations of quantum theory.  I believe that, in this sense, there is a correct foundation for the theory and we will know when we find it via its vast array of successful applications.  For this reason, I reject the traditional distinction between the practical aspects of quantum theory and its interpretation.  If we rope off the latter as its own independent activity then it will become stale and drift further from the (scientific) truth as we only know that a foundational idea is true through its applications.  So, to answer your second question, if an idea does have such an impact, then it will win universal acceptance.

That said, I agree with Shelly Goldstein when he says we have to be clear on what the theory is about.  It is not just "anything goes".  There is no point in conducting a foundational investigation by merely futzing around with equations.  Leave that to the non-foundational physicists.  The point of foundational investigations is to achieve clarity, not to muddy the waters even more.  Shelly intends his point to mean that we must start our investigation with a clear ontology, i.e. a clear statement of what exists in the world and how it behaves.  I agree that this is our ultimate aim, but I disagree that this must be our starting point.  To me, operational ideas are also perfectly clear.  Once we have decided which systems we are going to call measurement devices, preparation devices, etc. then it is perfectly clear what you are talking about, and so perfectly fine to use that terminology to develop the theory.  So long as we are clear that we are only adopting an operational *methodology* rather than adopting operationalism wholesale, and that we still aim for ontological statements in the long run, there is no problem.  Operational methodology has proved so successful in the history of physics, e.g. in thermodynamics and the development of both relativity and quantum theory, that denying yourself these techniques would be a big handicap.

Now we are getting to the point where I can outline the kind of work I think is promising.  In much foundational work, we take the axioms of quantum theory as laid down by von Neumann as gospel and only try to find an ontology behind them.  In contrast, I think that first reformulating the theory in various ways will give us a better target to shoot at.  Historically, the same sort of thing happened in thermodynamics and statistical mechanics.  The original formulation of the second law directly in terms of the properties of heat engines is pretty hard to derive from Newtonian mechanics + probability, but once entropy is introduced into thermodynamics it has a clear microphysical counterpart and the derivations can proceed much more easily.  Similarly, I think that reformulating quantum theory will lead to new insights that make it much clearer which of our current interpretations are ill-founded and need to be ditched, and I think the answer is probably all of them.

The project of not treating standard quantum theory as a fixed target has already been tremendously successful. Take, for example, the generalized measurement theory of POVMs and quantum instruments, or the theory of continuous quantum measurements. Most of quantum information theory would be impossible without this and it is furthermore apparent that most measurements we do are of the latter type. For example, when I look at the tree outside my window, I am not doing a projective measurement on it, but rather observing some photons that are correlated comparatively weakly with the properties of the tree. It is a rather noisy POVM rather than a projective measurement. This means that if I am going to explain the appearance of the classical world, i.e. why trees look like trees, it is going to be in terms of generalized measurements rather than projective ones. This is an important foundational insight that you would not get if you were myopically focussed on solving the measurement problem within the standard formalism.

In the future, I think that similarly important insights will come from playing around with the causal assumptions of the operational approach.  In the usual approach, there is a quantum state, determined by a preparation, which evolves forward in time, is subsequently measured and then collapses.  This makes it look like measurement is time-asymmetric.  However, one can alternatively formulate the measurement in a retrodictive formalism in which everything goes in the opposite time direction, which shows that things are in fact time-symmetric.  It is only the direction of inference that makes things look asymmetric, i.e. the fact that we asked a question about the future based on knowledge about the past rather than the other way round.  This insight makes approaches that posit a time asymmetry due to measurement look a bit suspicious, e.g. spontaneous collapse theories.  Similarly, I think we can make progress by not putting in causal structure by hand in advance but simply saying that I have a bunch of variables I am going to treat classically, which may be settings of preparation or measurement devices or measurement outcomes but we are not going to say which they are in advance, and asking what is the most general way that quantum theory says they can be correlated.  This is one of the things I and others are working on at the moment, and I think it has the potential to yield a lot of foundational insights.  For example, I think it will make the primacy of unitary evolution look silly, and hence the many-worlds picture may look less plausible.

That is just a flavour of the type of approach I favour.  I could go on much longer about other ideas, but perhaps that is enough for your readers for now.

I asked you in the past to help classify my position, and we agreed I am neo-Copenhagen (distinct from the other neo-Copenhagen positions). Upon further introspection, my interpretation is both observer-free and beable-free, and therefore I do not fit in either the ontic or the epistemic camp. Moreover, I just heard Sheldon Goldstein this weekend stating that to be observer-free you must have beables. So one of us is wrong. If you were to bet one dollar on me vs. Sheldon, how would you bet?

Depends what the odds are.  To be realist in any conventional sense there has to be something that really exists out there and you have to say what that is.  I think this is what Shelly means by a "beable" in this context and I agree with him.  That is just the meaning of conventional realism.

However, there are all sorts of subtle philosophical distinctions between different kinds of realism, and if you try hard enough I am sure you can find a sufficiently weak version of realism to call yourself a realist.  I doubt that such subtle distinctions have any relevance to quantum theory though.

PS: I want to thank Matt for all his answers. Initially I wanted to break up this interview into several parts, but it all makes much better sense together. If Lubos cares to reply, this blog is open to him (and oh boy, do I have cheeky questions for him!).

Friday, May 1, 2015

M.C. Escher and Quantum Mechanics
(Contextuality and Paradoxes)


Last weekend I attended the New Directions conference in Washington DC, and I have great new content from there. Today I will start by presenting my favorite talk: “Contextuality: At the Borders of Paradox” by Samson Abramsky from Oxford University. The talk showcased this preprint: http://arxiv.org/pdf/1502.03097v1.pdf



But what is contextuality? Here is how Abramsky puts it: "Contextuality can be understood as arising where we have a family of data which is locally consistent, but globally inconsistent." Look at the Escher picture above. On each side the stairs make sense, but not together.

Now how does Abramsky go about quantifying contextuality? For the sake of a definite example, consider Hardy's paradox. Suppose Alice has two settings, \(a_1\) and \(a_2\), and Bob has two settings as well, \(b_1\) and \(b_2\). During measurement they can obtain the outcomes 0 or 1.

Then, in the case of Hardy's paradox, there are 4 logical formulas arising out of the experimental results:

\(a_1 \wedge b_1, \quad \lnot(a_1 \wedge b_2), \quad \lnot(a_2 \wedge b_1), \quad a_2 \vee b_2\)

Together these formulas lead to a contradiction: \(a_1 \wedge b_1\) gives \(a_1\) and \(b_1\); then \(\lnot(a_1 \wedge b_2)\) forces \(\lnot b_2\), and \(\lnot(a_2 \wedge b_1)\) forces \(\lnot a_2\), violating \(a_2 \vee b_2\). Each formula is supported by the experimental data, yet no global assignment of outcomes satisfies all of them, and this is why we say that Hardy's paradox demands contextuality.
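As a sanity check, here is a small brute-force verification (a sketch in Python; the encoding of the four formulas is mine) that the data is locally sensible yet admits no global truth assignment:

from itertools import product

# Each Boolean stands for "that measurement gives outcome 1".
def hardy_formulas(a1, a2, b1, b2):
    return [
        a1 and b1,          # (a1, b1) = (1, 1) occurs
        not (a1 and b2),    # (a1, b2) = (1, 1) never occurs
        not (a2 and b1),    # (a2, b1) = (1, 1) never occurs
        a2 or b2,           # (a2, b2) = (0, 0) never occurs
    ]

# Enumerate all 16 global assignments; none satisfies every formula.
satisfying = [bits for bits in product([False, True], repeat=4)
              if all(hardy_formulas(*bits))]
print("global assignments satisfying all four formulas:", satisfying)  # -> []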

Now if we plot this as a fiber bundle we get the following picture:


and this kind of picture can be constructed for any quantum mechanical system. Now for the fireworks: one can investigate this using the tools of cohomology. The cohomology used is sheaf cohomology, although singular cohomology would work as well in this case. Basically, cohomology gives you the topological obstruction to making global sense of the data.

The amazing thing is that the philosophical concept of contextuality has a very precise mathematical representation in terms of the tools of algebraic topology. Even more surprising is the link with computer science, in terms of relational databases. A relational database consists of tables, and those tables respect what are called the three normal forms. For performance reasons, programmers perform what is called "denormalization", and the ultimate denormalization is to construct one single huge table containing the entire data. Sometimes this is not possible, and the same cohomology theory used to analyze quantum systems can be used here as well, to detect when it is impossible. Who knew quantum mechanics has a deep relationship with relational databases?
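Here is a toy version of that database obstruction (a sketch in Python; the three tables are my own example, not from the talk): each pair of tables agrees on its shared column, yet no single table over all three attributes projects down onto all of them, the same locally consistent, globally inconsistent pattern as above.

from itertools import product

# Three pairwise tables over attributes A, B, C (values 0 or 1).
R_AB = {(0, 0), (1, 1)}   # rows (a, b)
R_BC = {(0, 0), (1, 1)}   # rows (b, c)
R_AC = {(0, 1), (1, 0)}   # rows (a, c)

# A row of a universal table over (A, B, C) must project into all three tables.
universal = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
             if (a, b) in R_AB and (b, c) in R_BC and (a, c) in R_AC]
print("candidate universal rows:", universal)   # -> [], so no universal table

# Yet the tables are pairwise consistent: each shared column takes the same
# set of values in both of the tables that contain it.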

The ultimate motivation for Abramsky is, however, practical: can we harness contextuality and use it to obtain better solutions in quantum information theory? The first step is to derive the mathematical tools needed to pose the question in a precise way, and this is what Abramsky achieved with his amazing research.