Friday, May 30, 2014

What is the boundary of a boundary?


Intuitive Homology


Continuing the geometry discussion, let us start with an elementary argument. Consider a triangle and label its vertices A, B, and C. How can we define its boundary? On a first try it is easy: it is the sum of the segments AB, BC, and CA. Now consider a point D outside this triangle (for definiteness let it be next to the segment BC) and construct the quadrilateral ABDC. We would like to build up its boundary from the boundaries of the two triangles ABC and BDC. The problem is that the segment BC is not part of the boundary of ABDC, so we need to add it and subtract it at the same time. This works if we consider oriented segments with the convention that AB = -BA (here AB means walking from A and arriving at B). So if A, B, and C are the original vertices of the triangle, the boundary is AB + BC - AC. Alternatively:

(no A)BC - A(no B)C + AB(no C), that is: BC - AC + AB.

Now this straightforwardly generalizes: the boundary of a simplex given by vertices A_0, A_1, …, A_n is the following sum of lower-dimensional simplexes:

Σ_k (-1)^k (A_0 A_1 … A_(k-1) A_(k+1) … A_n),   with k running from 0 to n

We can then introduce a formal definition of the boundary of a simplex (A_0, A_1, …, A_n):

∂(A_0 A_1 … A_n) = Σ_k (-1)^k (A_0 A_1 … A_(k-1) A_(k+1) … A_n)

Key question: what is the boundary of the boundary of a simplex, ∂∂(A_0 A_1 … A_n)? In the sum we would have to kill two vertices, i and j, and we can do it in two different ways: first kill i then j, or the other way around. For the sake of the argument assume i is smaller than j. What is the sign in front of the two terms?
  • In one case, after eliminating vertex i the sign is (-1)^i, and after then eliminating vertex j the sign is (-1)^(j-1), because the earlier removal shifted j down to position j-1; the final answer is (-1)^i (-1)^(j-1) = (-1)^(i+j-1).
  • In the other case, after eliminating vertex j the sign is (-1)^j, and after then eliminating vertex i the sign is (-1)^i (its position is unchanged, since i < j); the final answer is (-1)^i (-1)^j = (-1)^(i+j).

The key point is that the two terms cancel, and so the boundary of a boundary is always zero: ∂∂ = 0.
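To make the definition concrete, here is a minimal Python sketch (my own, not part of the original argument) that applies the boundary operator to formal sums of simplexes and verifies ∂∂ = 0 for a tetrahedron:

```python
from collections import defaultdict

def boundary(chain):
    """Boundary of a chain: a dict mapping simplexes (tuples of vertices)
    to integer coefficients."""
    result = defaultdict(int)
    for simplex, coeff in chain.items():
        for k in range(len(simplex)):
            face = simplex[:k] + simplex[k + 1:]   # drop the k-th vertex
            result[face] += coeff * (-1) ** k      # alternating sign (-1)^k
    return {face: c for face, c in result.items() if c != 0}

# A tetrahedron on the vertices A0, A1, A2, A3, taken with coefficient +1.
tetra = {("A0", "A1", "A2", "A3"): 1}

print(boundary(tetra))            # its four triangular faces, with alternating signs
print(boundary(boundary(tetra)))  # {} : the boundary of a boundary is zero
```

In the second application every edge shows up twice with opposite signs, exactly as in the counting argument above.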

But why bother with all this; what is the point? It matters because any smooth, compact, and oriented space can be approximated by small enough simplexes, and the inner boundaries between adjacent simplexes cancel each other.

In general we want to uncover the holes in a space, because spaces with different numbers or kinds of holes are genuinely different, and this is a great tool for distinguishing spaces (for example, a doughnut is different from a sphere).

But how many spaces with holes do we know in physics? Actually a lot, if we think carefully. In physics they usually hide under the term “boundary conditions”. So what? We want to solve problems (say the electric field distribution in electrostatics), not classify spaces. But there is a very deep connection between solving differential equations like Maxwell’s equations and the classification of spaces. To understand the link we first need to understand the concept of a hole. In general a hole has two key properties:

  1. it has a boundary
  2. the boundary of a hole is not the boundary of anything else
Why is this so? Property (1) holds because you can walk around the hole inside the space which contains it. For property (2), consider a sphere (the surface of a ball). If the ball is solid and filled, the sphere is the boundary of the material inside, and there is no hole. If the interior is hollowed out, the sphere no longer bounds anything in the space, and that is precisely what it means to have a hole.
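To see properties (1) and (2) in action, here is a small numpy sketch (my own illustration, anticipating the “Ker modulo Image” idea mentioned below) that computes the first homology of a triangle, hollow versus filled:

```python
import numpy as np

# Oriented edges AB, BC, CA of the triangle ABC.
# d1 sends each edge to its boundary (end vertex minus start vertex);
# rows are the vertices A, B, C, columns are the edges AB, BC, CA.
d1 = np.array([[-1,  0,  1],
               [ 1, -1,  0],
               [ 0,  1, -1]])

# d2 sends the filled triangle ABC to its boundary AB + BC + CA.
d2 = np.array([[1], [1], [1]])

print(np.all(d1 @ d2 == 0))     # True: the boundary of a boundary is zero

cycles = d1.shape[1] - np.linalg.matrix_rank(d1)   # dim Ker d1: closed loops of edges
filled = np.linalg.matrix_rank(d2)                 # dim Im d2: loops that bound something

print("hollow triangle, dim H1 =", cycles)           # 1 -> one hole
print("filled triangle, dim H1 =", cycles - filled)  # 0 -> no hole
```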


This will lead to “Ker modulo Image” and “closed modulo exact”, a mathematical pattern which occurs again and again in modern physics. In particular, electric charges are best understood as “cohomology classes of the Hodge dual of a two-form”. This is a mouthful, but it has a deep geometric interpretation which unlocks the proper understanding of Yang-Mills theory and the Standard Model. Stay tuned and I’ll explain all this using only elementary arguments. The subject is a bit dry for now, but hang in there because the physics payoff is great.

Friday, May 23, 2014

New Directions in the Foundations of Physics 2014


This is the last post about the conference, because the remaining talks gradually drifted outside my domain of expertise. Next time I’ll resume the geometry series. For today I want to talk a bit about the Many Worlds Interpretation (MWI), which was showcased in a talk by David Wallace http://arxiv.org/abs/1111.2187v1, and about Causal Sets (presented by Rafael Sorkin and David Rideout).

Since I am working on an approach different from MWI which also contends that quantum evolution is purely unitary, I am very critical of MWI's claims. I have talked about MWI in the past, but here I want to clearly present why I believe MWI is fundamentally misguided.

When I was first introduced to quantum mechanics in college, the lessons followed the standard approach: Hilbert spaces, kets, bras, the Schrödinger equation. Then, at the end of that class, mixed states were introduced as an afterthought, and they were introduced in a misleading way. Suppose we prepare many copies of a quantum state A and many copies of a quantum state B. We can then create a mixed state by the following preparation process: with probability p choose the quantum state A, and with probability 1-p choose the quantum state B. For example, suppose some photons are vertically polarized (state A) and some photons are horizontally polarized (state B). Then if we randomly select horizontally polarized photons 50% of the time and vertically polarized photons 50% of the time, we end up with a random mixture of the two. Big deal, I said at the time. Why would we want to do that? To make our lives harder when computing the answers to problems? And in thinking that, I completely missed the point about mixed states. Here is how:

The key point of mixed states IN QUANTUM MECHANICS is that the decomposition of MIXED STATES into PURE STATES is not unique. Unlike in classical mechanics, an ignorance interpretation of mixed states is not tenable. For example, suppose that in the process above “A” corresponds instead to photons which are left-circularly polarized, and “B” corresponds to photons which are right-circularly polarized. Mix them up 50%-50% and the end state is the same as mixing vertically and horizontally polarized photons: no experiment is able to distinguish between the two preparation procedures.
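Here is a quick numpy check (my own illustration) that the two preparation procedures produce the exact same density matrix, so no measurement statistics can tell them apart:

```python
import numpy as np

H = np.array([1, 0], dtype=complex)            # horizontal polarization
V = np.array([0, 1], dtype=complex)            # vertical polarization
L = (H + 1j * V) / np.sqrt(2)                  # left circular polarization
R = (H - 1j * V) / np.sqrt(2)                  # right circular polarization

def proj(psi):
    """Projector |psi><psi| for a pure state."""
    return np.outer(psi, psi.conj())

rho_linear   = 0.5 * proj(H) + 0.5 * proj(V)   # 50/50 mixture of H and V
rho_circular = 0.5 * proj(L) + 0.5 * proj(R)   # 50/50 mixture of L and R

print(np.allclose(rho_linear, rho_circular))   # True: both equal I/2
```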

So now back to MWI. MWI asserts that the wavefunction evolves ONLY unitarily, and this means that until observation Schrödinger’s cat is both dead and alive. However, experiments reveal only a dead or an alive cat, and MWI’s explanation is that the world splits into two copies at the time of measurement: in one branch the cat is dead, while in the other the cat is alive. (Isn’t this nice? It means we are all immortal: there is always a branch in which we never die.)

But here is the catch: if the quantum state is a mixed state, there is an ambiguity about which decomposition is used by the measurement process. For a photon in the mixed state from above, is the world going to split into a branch where the photon is polarized vertically and a branch where it is polarized horizontally, or is the universe going to split into two branches where the photon is left or right circularly polarized? Somehow there must be a preferred basis for this split. On this MWI is silent, but decoherence comes to the rescue. Indeed, decoherence solves the basis ambiguity problem, but in the process we miss a fundamental tension between MWI and decoherence:

  • On one hand, decoherence is trivial and natural because it is rooted in unavoidable interaction with the environment.
  • On the other hand, the universe splitting process is nothing short of extraordinary: a humble photon can split the entire universe into two branches.

Why is the split happening only after decoherence? What is so SPECIAL about an un-special, mundane, commonplace process like decoherence? This is MWI’s big pink elephant in the room: merging two ideas opposite in spirit to solve the measurement problem. MWI without decoherence does not work due to basis ambiguity; decoherence without MWI does not work because the outcome is not unique. Together they work, but it is like mixing fire with water. No wonder some people deride MWI and call it the Many Words Interpretation. I fully agree with Zurek when he says MWI has the feeling of a cheap shot at solving a complex problem.

Now on to Causal Sets. I had never listened to a talk about this approach before, and I was fortunate to hear Rafael Sorkin’s cogent presentation. My first impression was one of unreasonable effectiveness. The key result of this approach is a prediction of the correct order of magnitude for the Cosmological Constant: 10^-120.

So how do the competing physical theories stack up to the challenge of predicting the value of the cosmological constant?

  • In quantum field theory, vacuum is a very violent place and if you count vacuum fluctuations as contributing to dark energy then you make a prediction which is wrong by about 120 orders of magnitude!!!
  • In supersymmetric field theory, the cosmological constant must be exactly zero!
  • String theory predicts a small value, but of the wrong sign, and models of the right sign were called Rube Goldberg models.




Where all the smart and complex solutions failed, causal sets succeeded. Here is how the magic happens:

The diameter of the known universe in Planck units is about 10^60. Its four-dimensional spacetime volume is then approximately ~10^240, and the cosmological constant Lambda is computed by the model to be ~ 1/sqrt(Volume) ~ 10^-120 (applause please).

This is because a causal set is a set of elements with an order relation: transitivity + acyclicity + local finiteness (discreteness). In causal sets “geometry = order + number”, and causal sets are uniformly embeddable in a manifold. Hence Number = Volume.
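As a sanity check on the arithmetic, here is the back-of-the-envelope computation in a few lines of Python (my own sketch; I am reading the volume as the four-dimensional spacetime volume in Planck units):

```python
# Back-of-the-envelope arithmetic behind the causal set estimate.
diameter = 1e60                          # diameter of the known universe, Planck units
volume = diameter ** 4                   # ~10^240 Planck 4-volumes
cosmological_constant = volume ** -0.5   # Lambda ~ 1 / sqrt(Volume)

print(f"Lambda ~ {cosmological_constant:.0e}")   # ~1e-120 in Planck units
```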

Causal sets can also be employed outside physics, for example in the growth of web pages on the internet! But the predictions become bizarre and obviously wrong in this case: just as the universe can expand and recollapse in causal set cosmological models, so too the Internet could vanish one day, by pure chance, through the birth and death of web pages.

This is why my impression of causal sets is that of unreasonable effectiveness. I tend to think the approach is incorrect, but still, I cannot simply discard the correct cosmological constant prediction where no other theory succeeded. Either something is fundamentally right in this approach, or the model is simply incredibly lucky; I don't know which explanation is correct.

Friday, May 16, 2014

A flea on Schrödinger’s Cat


New Directions in the Foundations of Physics  2014


Continuing the presentation of the conference talks: in “Asymptotic Theory Reduction and the Measurement Problem”, Klaas Landsman introduced a fresh approach to the measurement problem.

Landsman started by defining low-level and high-level theories:

L = lower-level theory = fundamental theory (physics) = reducing theory (philosophy)

H = higher-level theory = phenomenological theory (physics) = reduced theory (philosophy)

Here are some examples:

  • L = Quantum Physics (ħ→0) → H = Classical Physics
  • L = Statistical Mechanics (N→∞) → H = Thermodynamics
  • L = Molecular Dynamics → H = Hydrodynamics
  • L = Wave Optics (wavelength→0) → H = Geometric Optics

Then there are a couple of observations:

  • H is defined and understood by itself
  • H has ‘novel’ feature(s) not present in L (classical physics has counterfactual definiteness, thermodynamics allows irreversibility, etc)

A quick observation: technically ħ→0 is impossible because ħ is a constant. However, ħ→0 is actually shorthand for taking a dimensionless combination to zero. For example, ħ^2/2m → 0 is the same as m → ∞.

Now back to the measurement problem. Here there is no consensus among physicists: some claim it is not a problem, some that it is a pseudo-problem, some that it is a very serious problem.  Basically the problem is that quantum mechanics fails to predict that measurements have outcomes:

-theoretically, Schrödinger’s Cat states of L yield mixed limit states of H
-experimentally, outcomes are sharp, hence pure states in H

Regardless of physicists’ consensus (or lack of it), this can be stated in a mathematically precise way:

H is the ħ→0 limit of L, but the limit L → H induces the wrong classical states.

The proposed solution: asymptotic reduction, similar to spontaneous symmetry breaking. Here is how it works in a completely soluble example:

Start with a symmetric double well potential in classical physics, which has reflection symmetry. A classical test particle at rest can reside at either the left or the right potential well bottom, but not in the middle (between the two minima) because that is an unstable equilibrium point. Hence a symmetric invariant state (a mixed state) is unphysical.

Now solve the same problem in quantum mechanics and observe that the ground state (the lowest energy state) is symmetric! Then we can take the limit ħ→0 and obtain two sharp localization peaks.

But then here comes a flea!



It can be shown analytically in several problems (the double well potential, the quantum Ising model, the quantum Curie-Weiss model) that a tiny perturbation induces an exponential splitting of the lowest pair of energy levels. This shows that the ground state of the perturbed Hamiltonian shifts to a localized state, and the density matrix not only decoheres but becomes single-peaked!
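Here is a minimal numerical sketch of the “flea” effect (my own toy model, not from the talk; the double-well shape and the size of the flea are arbitrary illustrative choices):

```python
import numpy as np

# 1D double well V(x) = (x^2 - 4)^2, in units where hbar = m = 1.
# The wells sit at x = +/-2; the barrier height (16) is far above the
# ground state energy, so the tunneling splitting is exponentially small.
N, box = 2000, 10.0
x = np.linspace(-box / 2, box / 2, N)
dx = x[1] - x[0]

def ground_state(flea=0.0):
    """Ground state probability density for the double well plus a tiny tilt."""
    V = (x**2 - 4.0) ** 2 + flea * x
    main = 1.0 / dx**2 + V                    # finite-difference kinetic + potential
    off = -0.5 / dx**2 * np.ones(N - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    _, psi = np.linalg.eigh(H)
    p = np.abs(psi[:, 0]) ** 2
    return p / p.sum()

for flea in (0.0, 1e-3):
    p = ground_state(flea)
    print(f"flea = {flea:g}: probability on the left side = {p[x < 0].sum():.3f}")
# Without the flea the ground state is symmetric (~0.5 on each side);
# with the flea it localizes almost entirely in one well.
```

The tilt added here is tiny compared with every energy scale of the well, yet it is large compared with the exponentially small tunneling splitting, which is why it wins.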

In turn this implies that asymptotic emergence does not exist, because exponentially small perturbations have large effects! In short, reduction is real, emergence is not.

The “Old Measurement Problem” solutions also involve reduction. The “Old Measurement Problem” demands showing that a pure state evolves into a mixed state. The proposed solutions were:

-Classical description of the apparatus (Bohr) for ħ→0 or N→∞
-Superselection sector (Hepp, Emch, Wigner) for N→∞
-Decoherence (Joos, Zeh, Zurek) for t→∞

Then the selection of one term in the mixture would completely solve the measurement problem.

The proposed solution shows how pure unitary time evolution is compatible with wavefunction collapse. So the Everett interpretation is NOT the only game in town when it comes to explaining the collapse with unitary evolution.

There are several open problems related to this approach:

  • Where does the “flea” come from?
  • Are the perturbations deterministic or stochastic?
  • Is the mechanism dynamically viable?
  • Is the mechanism experimentally testable?
  • Is the mechanism universal?


For references, see this paper and this paper.

Friday, May 9, 2014

Decoherence in a box


New Directions in the Foundations of Physics 2014


Schrödinger considered entanglement the characteristic trait of quantum mechanics:

"When two systems, of which we know the states by their respective representatives, enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before, viz. by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought. By the interaction the two representatives [the quantum states] have become entangled.”



So why don’t we then see in our macroscopic world a cat both dead and alive? In other words, how does classicality emerge from quantum mechanics? The answer comes (in part) from decoherence, which shows that interaction with the environment makes the density matrix diagonal, free of interference terms. But is this a general enough mechanism?

In his talk “A Closed System Perspective for Decoherence”, Sebastian Fortin presented a framework which generalizes the current approach for both open and closed systems.

First, let’s present the problem with closed systems. Unitary time evolution in quantum mechanics can prevent the non-diagonal parts of the density matrix from canceling in a closed system. Decoherence works for open systems by taking advantage of the very large number of degrees of freedom of the environment, which can absorb the “unwanted” information. There are two characteristic times in an open quantum system: the decoherence time and the relaxation time. You can have decoherence without equilibrium (an obvious example is the planets of the solar system), but dissipation always implies decoherence. (This means that a quantum system reaches decoherence faster than equilibrium.)

But how can we talk about irreversible processes in quantum mechanics? Doesn’t this contradict unitary time evolution? Non-unitary time evolution can arise when we sum over (trace over) the degrees of freedom of the environment. This leads to a (non-unitary) master equation.
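Here is a tiny numpy illustration (my own, not from the talk): a system qubit interacts with a single “environment” qubit, and tracing over the environment wipes out the interference terms of the system’s density matrix even though the total evolution is unitary:

```python
import numpy as np

def partial_trace_env(rho):
    """Trace out the second (environment) qubit of a two-qubit density matrix."""
    rho = rho.reshape(2, 2, 2, 2)             # indices: (sys, env, sys', env')
    return np.trace(rho, axis1=1, axis2=3)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # system in a superposition
e0 = np.array([1, 0], dtype=complex)                  # environment in |0>

psi = np.kron(plus, e0)                       # before the interaction
CNOT = np.array([[1, 0, 0, 0],                # system controls the environment:
                 [0, 1, 0, 0],                # the environment "records" the
                 [0, 0, 0, 1],                # system's state
                 [0, 0, 1, 0]])
psi_after = CNOT @ psi

for state in (psi, psi_after):
    rho_sys = partial_trace_env(np.outer(state, state.conj()))
    print(np.round(rho_sys, 3))
# Before: off-diagonal terms 0.5 (full coherence).
# After: off-diagonal terms 0 -- the reduced dynamics is not unitary.
```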

In a closed system, equilibrium or standard decoherence may not occur, but we can still get emergent classical behavior. So here is Fortin’s proposed solution: non-unitary time evolution is achieved by coarse-graining:

  • split the information into relevant and irrelevant parts
  • compute time evolution of relevant observables
  • determine if you reach equilibrium (achieve relaxation)
    • if so, compute the decay times

This is not at all unusual: for example, in the kinetic theory of gases individual molecules never come to rest and reach a (static) equilibrium. However, coarse graining can extract the relevant macroscopic information: gas density, pressure, etc.

In a closed quantum system the non-diagonal elements do not go to zero, but they may cancel each other out.
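As a toy illustration of this cancellation (my own sketch, not Fortin’s model): sum many unitary phases with slightly different frequencies. Each term keeps modulus one forever, yet their average decays by destructive interference:

```python
import numpy as np

rng = np.random.default_rng(0)
omegas = rng.normal(loc=1.0, scale=0.1, size=2000)   # many slightly different frequencies

for t in (0.0, 2.0, 5.0, 10.0, 50.0):
    # Each e^{-i w t} is a unitary phase of modulus 1; only their AVERAGE decays.
    coherence = np.abs(np.exp(-1j * omegas * t).mean())
    print(f"t = {t:5.1f}   |average phase| = {coherence:.3f}")
```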

This mechanism also works in open systems, in addition to the standard way of evolving the non-diagonal elements to zero; hence the framework is universal. As an application, Fortin talked about self-induced decoherence:

  • Operators belong to relevant observables O
  • States belong to dual of O
  • Computed expectation values are real quantities

A good example of relevant observables is van Hove observables. To understand more of the proposed framework, I recommend this paper:  “The problem of identifying the system and the environment in the phenomenon of decoherence”.


The key strength of the proposed unified classicality emergence framework is that it can be put to the test against real physical systems, and its predictions can be confirmed or refuted by experiments.

Saturday, May 3, 2014

Two tales of time: Carlo Rovelli vs. Lee Smolin


Following the New Directions in the Foundations of Physics conference, two interesting consecutive talks made the case for opposite points of view and it is best to present them at the same time. On one hand, Carlo Rovelli made the case for the emergence of space and time and talked about the possibility of having fundamental physical theories without talking about space and time at all. On the other hand Lee Smolin made the point for the reality of time and the possibility of change for the physical laws. Because the problem of time is very hard and there is no universally accepted solution, this debate cannot be settled for now.



Let’s start with Carlo Rovelli’s position. From general quantum gravity considerations it is not that strange to consider the possibility that space-time is not continuous. But how can you recover time? There are both intuitive arguments and mathematically rigorous arguments available.

For example, we can ask a silly question: why do things fall in Newtonian physics? There is no notion of up and down in Newton’s equations. The answer is that it is only due to the existence of our planet that “up” and “down” are defined. So this is a relational definition. Another question: what does it mean that entropy increases? We know that entropy increases in time, but this can be turned around and used to define time in a relational way too: time is the variable in conjunction with which entropy grows. Let’s try to make this more precise. Here is the usual picture in physics: from averages in time, to thermal phenomena, to equilibrium states. Now flip this: start with an equilibrium state, extract the dynamics (the Hamiltonian), and arrive at time. This is the idea of Thermal Time. At this point you may object to this seemingly naïve picture, and you would have very good reasons to. What if the system is not in equilibrium? Does time exist? What if you have only one particle, for which the notion of thermodynamics is meaningless? Does time exist? However, the thermal time hypothesis is not at all naïve and has a very solid mathematical and physical foundation in advanced quantum mechanics. This deserves a post of its own; after I am done presenting the conference talks I will resume the mathematical explanations and build the prerequisites to properly explain and discuss the thermal time hypothesis.

Now for the opposite point of view, that of the reality of time: Lee Smolin started by presenting some ideas about time:
  • Timeless naturalism: related to block universe idea in relativity
  • Temporal naturalism: no timeless objects, future is not real, past is real
  • Barbour’s instantaneous naturalism: only moments exist, and they are all timeless

Smolin argued for the need to start over in thinking about time and discussed some fallacies:
  • Cosmological fallacy: we look at the universe as a whole from a “bird’s eye view”. There is no such thing as viewing the universe from the outside.
  • Naturalism trap: “our sense impressions are illusions, and behind them is a natural world, which really is X”.
    • For example X = Tegmark’s mathematical objects which he called a “metaphysical fantasy”

Two paradigms were discussed:

  • Newtonian paradigm (the usual paradigm in physics; by the way, this includes quantum mechanics, so the name is somewhat misleading):
    • Laws and initial conditions are the key ingredients
    • A state space (invariant under time) is constructed.
    • The history in time is represented by a timeless mathematical object
      • It is fallacious to infer from this that nature is a mathematical object or that it is really timeless.
    • Falls apart when applied to the universe as a whole
  • Temporal naturalism and the laws of nature paradigm (a proposed paradigm for answering: “why those laws?”)
    • No timeless laws
    • State-laws distinction breaks down
    • Time is prior to laws of nature
    • Meta-law dilemma: how do the laws change?

A solution for the meta-law dilemma was proposed by a “principle of precedence in quantum theory” described in arxiv:1205.3707

Of course temporal naturalism is highly speculative and attempts to win acceptance by making falsifiable predictions, with the hope that the predictions will turn out to be true. For example, an older idea of Smolin’s was that if our universe is typical then it is likely to be tuned to maximize the production of black holes. Then, if the universe expands and re-collapses, at each bounce the laws of physics mutate slightly. Big if, but maximization of black hole production demands:
  • Small amounts of carbon and oxygen  for star formation dynamics
  • Supernovas require tuning weak interactions
  • Gravity must be weak.

Are those predictions enough? Not at all, but it is a start. How promising? Remains to be seen.