Saturday, October 1, 2016

The whole is greater than the sum of its parts


The title of today's post is a quote from Aristotle, and I want to illustrate it in the quantum formalism. Here I will refer to a famous paper by Hardy: Quantum Theory From Five Reasonable Axioms. In it one finds the following definitions:

  • The number of degrees of freedom, K, is defined as the minimum number of probability measurements needed to determine the state, or, more roughly, as the number of real parameters required to specify the state. 
  • The dimension, N, is defined as the maximum number of states that can be reliably distinguished from one another in a single shot measurement.  
Quantum mechanics obeys \(K=N^2\) while classical physics obeys \(K=N\).

Now suppose nature is realistic and the electron spin does exist independent of measurement. From Stern-Gerlach experiments we know what happens when we pass a beam of electrons through two such devices rotated by an angle \(\alpha\) with respect to each other: if we pick only the spin-up electrons from the first device, on the second device the electrons are still deflected up a fraction \(\cos^2 (\alpha /2)\) of the time and are deflected down a fraction \(\sin^2 (\alpha /2)\) of the time. This is an experimental fact.
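As a quick sanity check, here is a minimal standalone sketch (my own illustration, not part of the simulation page) of that rule: an electron whose spin is aligned with the first device, measured by a second device rotated by \(\alpha\), goes up with probability \(\cos^2(\alpha/2) = (1+\cos\alpha)/2\):

```javascript
// Sketch of the second Stern-Gerlach device acting on a spin-up electron:
// P(up) = cos^2(alpha/2) = (1 + cos(alpha))/2.
function sternGerlachUp(alpha) {
  return Math.random() < (1 + Math.cos(alpha)) / 2;
}

// estimate the up-fraction over many runs
function upFraction(alpha, trials) {
  let up = 0;
  for (let i = 0; i < trials; i++) {
    if (sternGerlachUp(alpha)) up++;
  }
  return up / trials;
}
```

At \(\alpha = 0\) every electron is deflected up, and at \(\alpha = \pi/2\) roughly half are, matching the quoted fractions.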

Now suppose we have a source of electron pairs prepared in a singlet state. This means that the total spin of the system is zero. There is no reason to distinguish a particular direction in the universe and with the assumption of the existence of the spin independent of measurement we can very naturally assume that our singlet state electron source produces an isotropic distribution of particles with opposite spins. Now we ask: in an EPR-B experiment, what kind of correlation would Alice and Bob get under the above assumptions?

We can go about finding the answer in three ways. First, we can cheat and look the answer up in a 1957 paper by Bohm and Aharonov, who first made the computation. This paper (and the answer) is cited by Bell in his famous "On the Einstein-Podolsky-Rosen paradox". But we can do better than that. We can play with the simulation software from last time. Here is what you need to do:

-replace the generating functions with:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    var cosAngle= Dot(direction, sharedRandomness3DVector);
    var cosHalfAngleSquared = (1+cosAngle)/2;
    if (Math.random() < cosHalfAngleSquared )
        return +1;
    else
        return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    var cosAngle= Dot(direction, sharedRandomness3DVector);
    var cosHalfAngleSquared = (1+cosAngle)/2;
    if (Math.random() < cosHalfAngleSquared )
        return -1;
    else
        return +1;
};

-replace the -cosine curve drawing with a -0.3333333 cosine curve:

boardCorrelations.create('functiongraph', [function(t){ return -0.3333333*Math.cos(t); }, -Math.PI*10, Math.PI*10],{strokeColor:  "#66ff66", strokeWidth:2,highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});

-replace the fit test for the cosine curve with one for a 0.3333333 cosine curve:

var diffCosine = epsilon + 0.3333333*Math.cos(angle); 

and the result of the program (for 1000 directions and 1000 experiments) is:

[plot: the simulated correlation points fall on the \(-0.3333333 \cos\) curve]

So how does the program work? The sharedRandomness3DVector is the direction along which the spins are randomly generated. The dot product computes the cosine of the angle between the measurement direction and the spin, and from it we can compute the cosine of the half angle. The square of the cosine of the half angle is used to determine the random outcome. The resulting curve is 1/3 of the experimental correlation curve. Notice that the output generations for Alice and Bob are completely independent (locality).
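For readers who prefer a standalone check over editing the page, here is a self-contained Monte Carlo sketch of the same model (the helper names here are mine, written for Node rather than the browser):

```javascript
// Self-contained Monte Carlo check of the -(1/3)cos(alpha) prediction.
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// uniform random unit vector by rejection sampling in the unit ball
function randomDirection() {
  while (true) {
    const v = [2*Math.random()-1, 2*Math.random()-1, 2*Math.random()-1];
    const n = Math.sqrt(dot(v, v));
    if (n > 0 && n <= 1) return [v[0]/n, v[1]/n, v[2]/n];
  }
}

// outcome +/-1 with P(+1) = (1 + cos(angle to the spin))/2 = cos^2(half angle)
function outcome(direction, spin) {
  return Math.random() < (1 + dot(direction, spin)) / 2 ? +1 : -1;
}

// Alice measures spin x along a; Bob independently measures spin -x along b
function correlation(alpha, trials) {
  const a = [0, 0, 1];
  const b = [0, Math.sin(alpha), Math.cos(alpha)];
  let sum = 0;
  for (let i = 0; i < trials; i++) {
    const x = randomDirection();                 // shared hidden spin axis
    const minusX = [-x[0], -x[1], -x[2]];        // the partner's opposite spin
    sum += outcome(a, x) * outcome(b, minusX);   // two independent local draws
  }
  return sum / trials;
}
```

Note that at \(\alpha = 0\) this model gives a correlation of about \(-1/3\), not the perfect anticorrelation \(-1\) of quantum mechanics.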

But the actual analytical computation is not that hard to do either. We proceed in two steps.

Step 1: Let \(\beta\) be the angle between one spin \(x\) and a measurement device direction \(a\). We have: \(\cos (\beta) = a\cdot x\) and:

\({(\cos \frac{\beta}{2})}^2 = \frac{1+\cos\beta}{2} = \frac{1+a\cdot x}{2}\)

Keeping the direction \(x\) constant, the measurement outcomes for Alice and Bob measuring on the directions \(a\) and \(b\) respectively are:

++ : \(\frac{1+a\cdot x}{2}\, \frac{1+b\cdot (-x)}{2}\) of the time
-- : \(\frac{1-a\cdot x}{2}\, \frac{1-b\cdot (-x)}{2}\) of the time
+- : \(\frac{1+a\cdot x}{2}\, \frac{1-b\cdot (-x)}{2}\) of the time
-+ : \(\frac{1-a\cdot x}{2}\, \frac{1+b\cdot (-x)}{2}\) of the time

which yields the correlation: \(-(a\cdot x) (b \cdot x)\)
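Explicitly, writing \(p = \frac{1+a\cdot x}{2}\) and \(q = \frac{1+b\cdot(-x)}{2}\), the correlation (the average of the outcome product) is:

```latex
E(a,b;x) = pq + (1-p)(1-q) - p(1-q) - (1-p)q
         = (2p-1)(2q-1)
         = (a\cdot x)\,\big(b\cdot(-x)\big)
         = -(a\cdot x)(b\cdot x)
```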

Step 2: integrate \(-(a\cdot x) (b \cdot x)\) over all directions \(x\). To this aim, align \(a\) along the z axis and put \(b\) in the y-z plane:

\(a=(0,0,1)\)
\(b=(0, b_y , b_z)\)

then go to spherical coordinates integrating using:

\(\frac{1}{4\pi}\int_{0}^{2\pi} d\theta \int_{0}^{\pi} \sin\phi d\phi\)

\(a\cdot x = \cos\phi\)
\(b\cdot x = (0, \sin\alpha, \cos\alpha)\cdot(\sin\phi \cos\theta, \sin\phi\sin\theta, \cos\phi) = \sin\alpha \sin\phi \sin\theta + \cos\alpha \cos\phi\)

where \(\alpha\) is the angle between \(a\) and \(b\).

Plugging all back in and doing the trivial integration yields: \(-\frac{\cos\alpha}{3}\)
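For completeness: the \(\sin\theta\) term averages to zero over \(\theta\), and the remaining \(\cos^2\phi\) term averages to \(1/3\) over the sphere:

```latex
-\frac{1}{4\pi}\int_{0}^{2\pi}\! d\theta \int_{0}^{\pi}\! \sin\phi\, d\phi\;
\cos\phi\,\left(\sin\alpha \sin\phi\sin\theta + \cos\alpha\cos\phi\right)
= -\frac{\cos\alpha}{4\pi}\, 2\pi \int_{0}^{\pi} \cos^2\phi \,\sin\phi\, d\phi
= -\frac{\cos\alpha}{2}\cdot\frac{2}{3}
= -\frac{\cos\alpha}{3}
```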

So now for the moral of the story: the quantum mechanics prediction and the experimentally observed correlation is \(-\cos\alpha\), not \(-\frac{1}{3} \cos\alpha\).

The incorrect 1/3 correlation factor comes from demanding (1) the experimentally proven behavior of two consecutive S-G device measurements, (2) the hypothesis that the electron spins exist before measurement, and (3) an isotropic distribution of spins originating from a total spin zero state.

(1) and (3) cannot be discarded because (1) is an experimental behavior, and (3) is a very natural demand of isotropy. It is (2) which is the faulty assumption.

If (2) were true then, circling back to Hardy's result, we would be under the classical physics condition \(K=N\), which means that the whole is the sum of its parts.

Bell considered both the 1/3 result and the one from his inequality and decided to showcase his inequality for experimental reasons: "It is probably less easy, experimentally, to distinguish (10) from (3), than (11) from (3)." Both hidden variable models:

    if (Dot(direction, sharedRandomness3DVector) < 0)
        return +1;
    else
        return -1;

and 

    var cosAngle= Dot(direction, sharedRandomness3DVector);
    var cosHalfAngleSquared = (1+cosAngle)/2;
    if (Math.random() < cosHalfAngleSquared )
        return -1;
    else
        return +1; 

are at odds with quantum mechanics and experimental results. The difference between them is in the correlation behavior at 0 and 180 degrees. If we allow information transfer between Alice's generating function and Bob's generating function (nonlocality), then it is easy to generate whatever correlation curve we want under both scenarios (play with the computer model to see how it can be done).
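To illustrate the last point, here is a minimal sketch (hypothetical function names, not part of the page's code) of a nonlocal model: if Bob's function is allowed to know Alice's direction and outcome, the exact quantum correlation \(-\cos\alpha\) becomes trivial to produce:

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Alice ignores the shared randomness and just flips a fair coin.
function nonlocalAlice() {
  return Math.random() < 0.5 ? +1 : -1;
}

// Bob cheats: he knows Alice's direction a and her outcome (nonlocality).
// Returning -aliceOutcome with probability p = (1 + a.b)/2 gives
// E[AB] = 1 - 2p = -cos(alpha), the quantum correlation.
function nonlocalBob(a, b, aliceOutcome) {
  const p = (1 + dot(a, b)) / 2;
  return Math.random() < p ? -aliceOutcome : +aliceOutcome;
}
```

At 0 degrees this gives perfect anticorrelation and at 180 degrees perfect correlation, exactly because Bob's output is conditioned on Alice's.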

So from the realism point of view, which hidden variable model is better? Should we insist on perfect anti-correlations at 0 degrees, or should we demand the two consecutive S-G results along with realism? It does not matter, since both are wrong. In the end, local realism is dead.

Friday, September 23, 2016

Explanation for Bell's theorem modeling program


Today I will explain in detail the code from last time and show how you can change it to experiment with Bell's theorem. The code below needs only a text editor to make modifications and requires only a web browser to run. In other words, it is trivial to play with provided you understand the basics of HTML and JavaScript. For elementary introductions to those topics see here and here.

In a standard HTML page we start in the body section with three entries responsible for plotting the graph at the end.

<body>
<link href="http://jsxgraph.uni-bayreuth.de/distrib/jsxgraph.css" rel="stylesheet" type="text/css"></link>
<script src="http://jsxgraph.uni-bayreuth.de/distrib/jsxgraphcore.js" type="text/javascript"></script>
<script src="http://jsxgraph.uni-bayreuth.de/distrib/GeonextReader.js" type="text/javascript"></script>


Then we have the following HTML table

<table border="4" style="width: 50%;">
<tr><td style="width: 25%;">
<br />
Number of experiments: <input id="totAngMom" type="text" value="100" />
<br />
Number of directions: <input id="totTestDir" type="text" value="100" />
<br />
  
<input onclick="clearInput();" type="button" value="Clear Data" />

<input onclick="generateRandomData();" type="button" value="Generate Shared Random Data" />
<br />

<textarea cols="65" id="in_data" rows="7">
</textarea>
<br />

<input onclick="clearTestDir();" type="button" value="Clear data" />

<input onclick="generateTestDir();" type="button" value="Generate Random Alice Bob directions (x,y,z,x,y,z)" />
<textarea cols="65" id="in_test" rows="4">
</textarea>
<br />
<input onclick="clearOutput();" type="button" value="Clear Data" />

<input onclick="generateData();" type="button" value="Generate Data from shared randomness" />
<br />
Legend: Direction index|Data index|Measurement Alice|Measurement Bob
<textarea cols="65" id="out_measurements" rows="4">
</textarea>
<input onclick="clearBoard();" type="button" value="Clear Graph" />
<input onclick="plotData();" type="button" value="Plot Data" />

</td>
</tr>
<tr>
<td>
<div class="jxgbox" id="jxgboxCorrelations" style="height: 400px; width: 550px;">
</div>

</td></tr>
</table>

and we close the body:

</body>

The brain of the page is encapsulated by script tags:

<script type="text/javascript">
</script>

which can be placed anywhere inside the HTML page. Here are the functions which are declared inside the script tags:

//Dot is the scalar product of 2 3D vectors
function Dot(a, b)
{
 return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
};

This simply computes the dot product of two vectors in ordinary 3D Euclidean space. As a JavaScript reminder, array indices start at zero and go to N-1. Also, in JavaScript comments start with a double slash // and statements end with a semicolon ;

Next there is a little utility function which computes the magnitude of a vector:

//Norm computes the norm of a 3D vector
function GetNorm(vect)
{
 return Math.sqrt(Dot(vect, vect));
};

This is followed by another utility function which normalizes a vector:

//Normalize generates a unit vector out of a vector
function Normalize(vect)
{
 //declares the variable
 var ret = new Array(3);
 //computes the norm
 var norm = GetNorm(vect);

 //scales the vector
 ret[0] = vect[0]/norm;
 ret[1] = vect[1]/norm;
 ret[2] = vect[2]/norm;
 return ret;
};

To create a randomly oriented unit vector we use the function below, which first generates a random point in a cube of side 2, rejects the points outside the unit sphere (trying again), and then normalizes the vector:

//RandomDirection create a 3D unit vector of random direction
function RandomDirection()
{
 //declares the variable
 var ret = new Array(3);

 //fills a 3D cube with coordinates from -1 to 1 on each direction
 ret[0] = 2*(Math.random()-0.5);
 ret[1] = 2*(Math.random()-0.5);
 ret[2] = 2*(Math.random()-0.5);

 //excludes the points outside of a unit sphere (tries again)
 if(GetNorm(ret) > 1)
  return RandomDirection();
 return Normalize(ret);
};

The rest of the code is this:

var generateData = function()
{
   clearBoard();
 clearOutput();
 //gets the data
 var angMom = new Array();
 var t = document.getElementById('in_data').value;
 var data = t.split('\n');
 for (var i=0;i<data.length;i++)
 {
    var vect = data[i].split(',');
    if(vect.length == 3)
   angMom[i] = data[i].split(',');
   }
  
   var newTotAngMom = angMom.length;
   clearBoard();
 var varianceLinear = 0;
 var varianceCosine = 0;
 var totTestDirs = document.getElementById('totTestDir').value;


 var abDirections = new Array();
 var AliceDirections = new Array();
 var BobDirections = new Array();
 var t2 = document.getElementById('in_test').value;
 var data2 = t2.split('\n');
 for (var k = 0; k < data2.length; k++) 
 {
         var vect2 = data2[k].split(',');
         if (vect2.length == 6)
  {
              abDirections[k] = data2[k].split(',');
              AliceDirections[k] = data2[k].split(',');
   BobDirections[k] = data2[k].split(',');

       AliceDirections[k][0] = abDirections[k][0];
       AliceDirections[k][1] = abDirections[k][1];
       AliceDirections[k][2] = abDirections[k][2];
       BobDirections[k][0]   = abDirections[k][3];
       BobDirections[k][1]   = abDirections[k][4];
       BobDirections[k][2]   = abDirections[k][5];
  }
 }

 var TempOutput = "";

 //computes the output
 for(var j=0; j<totTestDirs; j++)
 {
         var a = AliceDirections[j];
         var b = BobDirections[j];
  for(var i=0; i<newTotAngMom; i++)
  {
       TempOutput = TempOutput + (j+1);
       TempOutput = TempOutput + ",";
       TempOutput = TempOutput + (i+1);
       TempOutput = TempOutput + ",";
       TempOutput = TempOutput + (GenerateAliceOutputFromSharedRandomness(a, angMom[i]));
       TempOutput = TempOutput + ",";
       TempOutput = TempOutput + (GenerateBobOutputFromSharedRandomness(b, angMom[i]));
   if(i != newTotAngMom-1 || j != totTestDirs-1)
        TempOutput = TempOutput + " \n";
  }
 }

 apendResults(TempOutput);
};

var plotData = function()
{
   clearBoard();
  boardCorrelations.suspendUpdate();
 //gets the data
 var angMom = new Array();
 var t = document.getElementById('in_data').value;
 var data = t.split('\n');
 for (var i=0;i<data.length;i++)
 {
    var vect = data[i].split(',');
    if(vect.length == 3)
   angMom[i] = data[i].split(',');
   }
  
   var newTotAngMom = angMom.length;
 var varianceLinear = 0;
 var varianceCosine = 0;
 var totTestDirs = document.getElementById('totTestDir').value;

 //extract directions
 var abDirections = new Array();
 var AliceDirections = new Array();
 var BobDirections = new Array();
 var t2 = document.getElementById('in_test').value;
 var data2 = t2.split('\n');
 for (var k = 0; k < data2.length; k++) 
 {
         var vect2 = data2[k].split(',');
         if (vect2.length == 6)
  {
              abDirections[k] = data2[k].split(',');
              AliceDirections[k] = data2[k].split(',');
   BobDirections[k] = data2[k].split(',');

       AliceDirections[k][0] = abDirections[k][0];
       AliceDirections[k][1] = abDirections[k][1];
       AliceDirections[k][2] = abDirections[k][2];
       BobDirections[k][0]   = abDirections[k][3];
       BobDirections[k][1]   = abDirections[k][4];
       BobDirections[k][2]   = abDirections[k][5];
  }
 }


 var tempLine = new Array();
 var Data_Val = document.getElementById('out_measurements').value;
 var data_rows = Data_Val.split('\n');

 var directionIndex = 1;
 var beginNewDirection = false;

       var a = new Array(3);
 a[0] = AliceDirections[0][0];
 a[1] = AliceDirections[0][1];
 a[2] = AliceDirections[0][2];
        var b = new Array(3);
 b[0] = BobDirections[0][0];
 b[1] = BobDirections[0][1];
 b[2] = BobDirections[0][2];
 var sum = 0;


 for (var ii=0;ii<data_rows.length;ii++)
 {
  //parse the input line
    var vect = data_rows[ii].split(',');
    if(vect.length == 4)
   tempLine = data_rows[ii].split(',');

  //see if a new direction index is starting
  if (directionIndex != tempLine[0])
  {
   beginNewDirection = true;
  }

  if(!beginNewDirection)
  {
   var sharedRandomnessIndex = tempLine[1];
   var sharedRandomness = angMom[sharedRandomnessIndex];
   var aliceOutcome = tempLine[2];
   var bobOutcome = tempLine[3];
   sum = sum + aliceOutcome*bobOutcome;
  }


  if (beginNewDirection)
  {
   //finish computation
   var epsilon = sum/newTotAngMom;
   var angle = Math.acos(Dot(a, b));
   boardCorrelations.createElement('point', [angle,epsilon],{size:0.1,withLabel:false});

   var diffLinear = epsilon - (-1+2/Math.PI*angle);
   varianceLinear = varianceLinear + diffLinear*diffLinear;
   var diffCosine = epsilon + Math.cos(angle); 
   varianceCosine = varianceCosine + diffCosine*diffCosine;

   //reset and start a new cycle 
   directionIndex = tempLine[0];
   a[0] = AliceDirections[directionIndex-1][0];
   a[1] = AliceDirections[directionIndex-1][1];
   a[2] = AliceDirections[directionIndex-1][2];
   b[0] = BobDirections[directionIndex-1][0];
   b[1] = BobDirections[directionIndex-1][1];
   b[2] = BobDirections[directionIndex-1][2];
   sum = 0;
   var sharedRandomnessIndex = tempLine[1];
   var sharedRandomness = angMom[sharedRandomnessIndex];
   var aliceOutcome = tempLine[2];
   var bobOutcome = tempLine[3];
   sum = sum + aliceOutcome*bobOutcome;
   beginNewDirection = false;
  }

   }
 //finish computation for last element of the loop above
 var epsilon = sum/newTotAngMom;
 var angle = Math.acos(Dot(a, b));
 boardCorrelations.createElement('point', [angle,epsilon],{size:0.1,withLabel:false});
 var diffLinear = epsilon - (-1+2/Math.PI*angle);
 varianceLinear = varianceLinear + diffLinear*diffLinear;
 var diffCosine = epsilon + Math.cos(angle); 
 varianceCosine = varianceCosine + diffCosine*diffCosine;
 //display total fit
 boardCorrelations.createElement('text',[2.0, -0.7, 'Linear Fitting: ' + varianceLinear],{});
 boardCorrelations.createElement('text',[2.0, -0.8, 'Cosine Fitting: ' + varianceCosine],{});
 boardCorrelations.createElement('text',[2.0, -0.9, 'Cosine/Linear: ' + varianceCosine/varianceLinear],{});
 boardCorrelations.unsuspendUpdate();
};


var clearBoard = function()
{
 JXG.JSXGraph.freeBoard(boardCorrelations); 
 boardCorrelations = JXG.JSXGraph.initBoard('jxgboxCorrelations',{boundingbox:[-0.20, 1.25, 3.4, -1.25],axis:true, showCopyright:false});
 boardCorrelations.create('functiongraph', [function(t){ return -Math.cos(t); }, -Math.PI*10, Math.PI*10],{strokeColor: "#66ff66", strokeWidth:2,highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});
 boardCorrelations.create('functiongraph', [function(t){ return -1+2/Math.PI*t; }, 0, Math.PI],{strokeColor: "#6666ff", strokeWidth:2,highlightStrokeColor: "#6666ff", highlightStrokeWidth:2});
};

var clearInput = function()
{
 document.getElementById('in_data').value = '';
};

var clearTestDir = function()
{
 document.getElementById('in_test').value = '';
};

var clearOutput = function()
{
 document.getElementById('out_measurements').value = '';
};

var generateTestDir = function()
{
   clearBoard();
 var totTestDir = document.getElementById('totTestDir').value;
 var testDir = new Array(totTestDir);
 var strData = "";
 for(var i=0; i<totTestDir; i++)
 {
  //first is Alice, second is Bob
  testDir[i] = RandomDirection();
  strData = strData + testDir[i][0] + ", " + testDir[i][1] + ", " + testDir[i][2]+ ", " ;
  testDir[i] = RandomDirection();
  strData = strData + testDir[i][0] + ", " + testDir[i][1] + ", " + testDir[i][2] + '\n';
 }

 document.getElementById('in_test').value = strData;
};

var generateRandomData = function()
{
   clearBoard();
 var totAngMoms = document.getElementById('totAngMom').value;
 var angMom = new Array(totAngMoms);
 var strData = "";
 for(var i=0; i<totAngMoms; i++)
 {
  angMom[i] = RandomDirection();
  strData = strData + angMom[i][0] + ", " + angMom[i][1] + ", " + angMom[i][2] + '\n';
 }

 document.getElementById('in_data').value = strData;
};

var apendResults= function(newData)
{
 var existingData = document.getElementById('out_measurements').value;
 existingData = existingData + newData;
 document.getElementById('out_measurements').value = existingData;
};


function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    //replace this with your own function returning +1 or -1
    if (Dot(direction, sharedRandomness3DVector) > 0)
        return +1;
    else
        return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    //replace this with your own function returning +1 or -1
    if (Dot(direction, sharedRandomness3DVector) < 0)
        return +1;
    else
        return -1;
};

var boardCorrelations = JXG.JSXGraph.initBoard('jxgboxCorrelations', {axis:true, boundingbox: [-0.25, 1.25, 3.4, -1.25], showCopyright:false});

clearBoard();
generateRandomData();
generateTestDir();
generateData();
plotData();

At loading time the page executes:

clearBoard();
generateRandomData();
generateTestDir();
generateData();
plotData();

The key to the whole exercise are the following two functions:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    //replace this with your own function returning +1 or -1
    if (Dot(direction, sharedRandomness3DVector) > 0)
        return +1;
    else
        return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
    //replace this with your own function returning +1 or -1
    if (Dot(direction, sharedRandomness3DVector) < 0)
        return +1;
    else
        return -1;
};

To experiment with various hidden variable models, all you have to do is replace the two functions above with your own concoction of a hidden variable model which uses the shared variable "sharedRandomness3DVector".

There are certain models for which, if we return zero a certain number of times as a function of the angle between direction and sharedRandomness3DVector, one can obtain the quantum mechanics correlation curve. (In the correlation computation, returning zero is equivalent to discarding the data, since the correlations are computed by this line in the code: sum = sum + aliceOutcome*bobOutcome;) This is the famous detection loophole (or (un)fair sampling) for Bell's theorem.
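Here is a minimal sketch of the bookkeeping involved (the threshold below is purely illustrative and my own; reproducing the exact quantum curve requires Pearle's detection distribution, discussed next):

```javascript
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

// Detection-loophole sketch: returning 0 means "no detection" and the
// pair drops out of the correlation. The 0.5 cutoff is illustrative only.
function detectorModelOutcome(direction, sharedRandomness3DVector) {
  const c = dot(direction, sharedRandomness3DVector);
  if (Math.abs(c) < 0.5) return 0;   // undetected: event is discarded
  return c > 0 ? +1 : -1;           // detected: plain sign model
}

// correlation computed over coincident detections only
function correlationWithDiscards(pairs) {
  let sum = 0, detected = 0;
  for (const [A, B] of pairs) {
    if (A === 0 || B === 0) continue;  // drop non-coincidences
    sum += A * B;
    detected++;
  }
  return detected > 0 ? sum / detected : 0;
}
```

The detection cutoff depends on the angle between the measurement direction and the hidden vector, which is exactly the (un)fair-sampling dependence that makes the loophole work.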

If we talk about the detection loophole, the paper to read is an old one by Philip Pearle: http://journals.aps.org/prd/abstract/10.1103/PhysRevD.2.1418 In it Pearle found an entire class of solutions able to generate the quantum correlations. The original paper is hard to double check (it took me more than a week and I was still not completely done), but Richard Gill did manage to extract a workable detection loophole model out of it: https://arxiv.org/pdf/1505.04431.pdf

Manipulating the generating functions above, one can easily test various ideas about hidden variable models. For example, an isotropic model of opposite spins generates \(-\frac{1}{3} a\cdot b\) correlations. It is not that hard to double check the math in this case: a simple integral will do the trick. In particular, this shows that the spins do not exist independent of measurement.

More manipulations using the detection loophole can even generate super-quantum Popescu-Rohrlich box correlations, but I leave it to the reader to experiment with this and discover how to do it for themselves. Happy computer modeling!

Thursday, September 15, 2016

Playing with Bell's theorem

In this post I'll write just a little text, because editing is done straight in the HTML view, which is very tedious. Below is a JavaScript program which illustrates Bell's theorem. If you want to play with this code, just right click on the page to view the source and extract it from there. If you do not know how to do that, a few sentences here will not be enough to explain it. Next time I'll describe the code and how to experiment with various hidden variable models.
This is about an EPR-B Alice-Bob experiment where each "measurement" generates a deterministic +1 or -1 outcome for a particular measurement direction using a shared piece of information: a random vector. Then the correlations are computed and plotted. No matter what deterministic model you try, the correlation curve you generate is a straight line near the origin, versus a curve of zero slope in the case of quantum mechanics. For this particular program, given a measurement direction specified as a unit vector in Cartesian coordinates, I compute the scalar product and return +1 if it is positive and -1 if it is negative. The experiment is repeated a number of times on various random measurement directions.
If you do not trust the randomly generated data, you can enter your own random Alice-Bob shared secret and your own measurement directions. Part of the credit for this program goes to Ovidiu Stoica.


Saturday, September 10, 2016

A sinister mystification


Once in a while, events in society at large overshadow all other considerations. I will put on hold the series about Bell's theorem for this week because such an event occurred: Mother Teresa was proclaimed a saint. So what? What is the big deal?

Growing up in Romania, all I heard about her was that she was the symbol of selfless devotion to the poor, a truly remarkable person symbolizing all that is good in mankind. Coming to the US, the public perception was along similar lines, and her 1979 Nobel Peace Prize seemed well deserved. Her recent sainthood was only the realization of a natural public expectation.

However, things are not always what they appear, and in this case the truth is the complete opposite of the perception. The sainthood outcome is the result of public gullibility masterfully exploited by a morally bankrupt Catholic Church in collusion with dirty politicians, media, corrupt businessmen, a dictatorship, and, at the center of it all, a pure evil person advancing a religious fanatic agenda for the benefit of the Catholic Church and her own perverted pleasure: Mother Teresa.

The person who blew the whistle on Mother Teresa's con artist mystification was a remarkable person himself: Christopher Hitchens, with his book The Missionary Position. I never heard of Mr. Hitchens until a year ago, when I discovered by accident his anti-theistic stance. Coming from a country which suffered under communism for decades, I was turned off by his hints of admiration for Marxist ideas. It took me some time to properly assess his integrity and the value of his arguments. In the end I found him a very sharp, clear thinker with a courageous attitude. I was surprised to discover he was a mini celebrity in left political circles in the US who alienated part of that audience due to his hawkish attitude and support for the Iraq war, and who was also a personal friend of the late Justice Scalia, the most conservative member of the US Supreme Court.

Now, I don't think I will change the minds of devout Catholics about Mother Teresa, so if you are such a person, either take the blue pill and stop reading the rest of this post, or take the red pill and keep reading to be shaken from your intellectual complacency and maybe stop buying the bridge the church keeps selling you.

To start I encourage you to watch the following videos:


It's too long to explain the whole mystification story, but here is the gist:

Mother Teresa was not a friend of the poor but of poverty and suffering. She derived a perverted gratification from witnessing and encouraging suffering because she thought this would bring her closer to her salvation. This is the mark of a psychopath, who derives meaning and pleasure from others' suffering. The places she established were not designed to alleviate suffering but were decrepit places of abject poverty and suffering where people were simply brought to die. Young people were denied simple medical care which could have easily saved their lives, because their suffering was sanctioned by a fanatical religious agenda.

So maybe Mother Teresa applied the same principles to herself? Not at all. When she was ill, no expense was spared and she took advantage of the latest medical advances. What a cosmic hypocrite. But where did the money come from to establish her places of suffering? Among other sources, from the brutal Duvalier dictatorship family in Haiti, responsible for the deaths of tens of thousands of people over decades, and from a corrupt businessman convicted of stealing the life savings of thousands of people. But perhaps the Catholic Church, upon learning about this, returned the blood and dirty money to the victims? After all, Mother Teresa acted with the full blessing of the church. Think again.

Hitchens has a simple but true position: religion poisons everything. It takes time to evaluate his claims; try to refute them if you can. It is much more convenient to ignore them, but I assume that if you reached this paragraph you took the red pill. Many religious figures and "religious scholars" tried debating Hitchens only to be shamefully debunked. None won the debates. Hitchslap is now an urban dictionary verb.

On Mother Teresa Hitchens put it like this: "It is a certainty that millions of people died because of her work. And millions more were made poorer, or stupider, more sick, more diseased, more fearful, and more ignorant."

Her sainthood is a scandal due to a sinister and cynical mystification perpetuated by many people over decades for their own benefit. The shame list includes the Catholic Church with all its popes who sanctioned Mother Teresa's fanatical religious agenda, politicians like Ronald Reagan, public figures like Princess Diana, the Nobel Peace Prize committee, and media like CNN, all of whom exploited public opinion for their own agendas, as well as corrupt businessmen and dictators who provided the money in exchange for whitewashing their public image.

Friday, September 2, 2016

I was wrong and Lubos Motl was right 

but quantum correlations are not like Bertlmann's socks


It finally happened. I was too careless and I stupidly challenged Lubos to "Show me a non-factorizable state in CM for spatially separated physical systems!!!" 

And he indeed presented one:

"Bertlemann's socks? The state is described by the probability distribution that has 0%,50%,50%,0% for left-and-right-green, left-green right-red, left-red right-green, left-and-right-read. The state, P(colorLEFT, colorRIGHT) isn't factorized. "

This is obscurely written but it is ultimately correct. Glup, glup, glup, he sank my battleship, bruised my ego, won the battle, but not the war :) I will attempt to show that quantum correlations are not like Bertlmann's socks correlations, because quantum correlations depend in an essential way on the observer. First let me clarify Lubos' example and why he is right.

First what is this business of Bertlmann's socks? This came from a famous Bell paper:


Mr. Bertlmann is a real person who used to wear socks of different colors. The moment you see that one of his socks is pink, you know that his other sock is not pink (Lubos is using red and green), and there is a correlation between the sock colors. In general, you learn in statistics 101 that a joint probability \(P(A, B)\) factorizes, \(P(A,B)=P(A)P(B)\), if the two events are independent. Therefore it was blatantly stupid on my part to ask Lubos for a non-factorizable example. But I had something else in mind, for which I carelessly skipped the essential details, and now I will explain it properly.

The Bell theorem factorization condition is not on independent probabilities but on residual probabilities:

\(P(A,B | \lambda) = P(A | \lambda) P(B | \lambda)\)

In the case of Bertlmann's socks there are no residual probabilities! But what does this mean?

Correlations can be generated by physical interaction between two systems or because they share a common cause. One way people explain this is by inventing silly games where two players agree in advance on a strategy but are not allowed to communicate during the actual game. For example, consider this: Alice and Bob each flip a coin, heads or tails. Then each has to guess the other's result, and they win the game when both guesses are correct. If they guess randomly, the best odds of winning the game are 25%. However, there are two strategies they can agree on beforehand which increase their odds of winning to 50%. (Can you guess what those strategies might be?) Strategies, common causes and interactions, outcome filtering, all generate correlations and ruin the factorization condition.
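One such strategy can be sketched directly (a minimal illustration of my own, and a spoiler for the question above): each player guesses that the other's coin landed the same as their own, so they win exactly when the two coins match, i.e. 50% of the time:

```javascript
// Coin-guessing game: each player must guess the OTHER player's flip.
// Pre-agreed strategy: guess that the other's coin matches your own.
function flip() { return Math.random() < 0.5 ? "H" : "T"; }

function playMatchingStrategy() {
  const alice = flip(), bob = flip();
  const aliceGuess = alice;   // Alice guesses Bob's coin equals hers
  const bobGuess = bob;       // Bob guesses Alice's coin equals his
  return aliceGuess === bob && bobGuess === alice;  // win iff both correct
}

function winRate(trials) {
  let wins = 0;
  for (let i = 0; i < trials; i++) if (playMatchingStrategy()) wins++;
  return wins / trials;
}
```

(The second strategy is the mirror image: both guess the opposite of their own coin.) The shared strategy plays the role of the common cause \(\lambda\): it creates correlations without any communication during the game.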

But quantum mechanics is probabilistic, and even if you account for all interactions the outcome is still random. After accounting for all those factors in the form of a generic variable \(\lambda\), the remaining probability is called a residual probability. Lambda can be a variable, or a set of variables of an unspecified format. The point is that after accounting for all common causes, if the physical systems A and B are spatially separated they cannot communicate, and in the quantum case it seems reasonable to demand \(P(A,B|\lambda) = P(A|\lambda) P(B|\lambda)\).

But such a factorization is at odds with both quantum mechanical predictions and experimental results. This is the basis on which people in the foundations of quantum mechanics call nature nonlocal.

I think I know how Lubos may attack this. I bet he will say that \(\lambda\) is basically a hidden variable and quantum mechanics does not admit hidden variables. This line of argument is faulty. Carefully read (several times) Bell's paper from the link above and you will see that \(\lambda\) simply encodes the usual way correlations are accounted for: a common cause present in all three factors.

Let me spell out Bell's argument more carefully, following a well written paper by Bernard d'Espagnat. Bell considers the singlet state, in which Alice and Bob, in two spatially separated labs, measure the spins along directions a and b and obtain the outcomes A and B respectively. Let \(\lambda\) represent a common source of the correlation between A and B. Then one can write the standard rule of statistics, \(P(M,N) = P(M|N)P(N)\), like this:

\(P(A,B|a,b,\lambda) = P(A|a,b,B,\lambda)P(B|a,b,\lambda)\)

then, because what happens at Alice's side does not depend on what happens at Bob's side and the other way around:

\(P(A|a,b,B,\lambda) = P(A|a, \lambda)\)
\(P(B|a,b,\lambda) = P(B|b, \lambda)\)

yields:

\(P(A,B|a,b,\lambda) = P(A|a, \lambda) P(B|b,\lambda)\)

From this the usual Bell theorem follows, and the disagreement with experiment is used to argue that what happens at Alice's side does depend on what happens at Bob's side. In other words, nonlocality.

I disagree (with good arguments) with several points of view:

  • I disagree with Lubos's view that nature does not follow the logic of projectors but follows Boolean logic instead. (Measurements project onto a Hilbert subspace, and quantum OR and quantum NOT are different from their classical counterparts.)
  • I disagree with Tim Maudlin, who best defends the position that Bell proved nonlocality based on the argument above. (Quantum mechanics is contextual, and because of that \(P(A|a,b,B,\lambda) \ne P(A|a, \lambda)\). As such, Bell's factorization condition is not justified. Only if you think in terms of classical Boolean logic, avoiding contextuality, is the nonlocality conclusion inescapable.)
  • I disagree with Lubos that quantum correlations are like Bertlmann's socks. To avoid thinking of lambda as a hidden variable, picture it as fixing all conceivable sources of correlations (of Bertlmann's socks type or any other type). Now add locality, independence, and Boolean logic, and you get correlations at odds with experiments. Pick your poison: give up locality or give up Boolean logic. The one to give up is Boolean logic. Unlike Bertlmann's socks correlations, quantum correlations depend in an essential way on the observer.
  • I disagree with giving up on locality and with the claim that the Bohmian position represents a valid description of nature. (What happens with the quantum potential of a particle after the particle encounters its antiparticle? Both vanishing and not vanishing result in predictions incompatible with observations.)

Bell himself provided four possible explanations of quantum mechanical correlations:
- quantum mechanics is wrong sometimes
- superdeterminism (lack of free will)
- faster than light causal influences
- non-realism

My take on this is that quantum mechanics is the complete and correct description of nature, there is free will, there are no faster than light causal influences, realism is incorrect, and what people call nonlocality is actually a manifestation of contextuality, because the observer (but not consciousness) does play an active role in generating the experimental outcome. This active role happens even when parts of the composite system are out of causal reach, because quantum mechanics is blind to space-time separations.

Friday, August 26, 2016

CHSH inequality and the rejection of realism


I will start now a series on Bell's theorem and its importance to quantum foundations. Today I will talk about the Clauser, Horne, Shimony, and Holt inequality and its implication. 

Quantum mechanics is a probabilistic theory which does not predict the outcome of individual experiments, but makes statistical predictions instead. This opens the door to considering "subquantic" or "hidden variable" theories which would restore full determinism. However, even given the statistical nature of quantum mechanical predictions there is something more which can be investigated: correlations. What distinguishes quantum from classical mechanics is how the observables of a composite system are related to the observables of the individual systems. In the quantum case there is an additional term related to the generators of the Lie algebra of each individual system, and this in turn prevents the neat factorization of the Hermitian observables of the composite system. It is this lack of factorization which prevents, in general, the factorization of the quantum states. In the literature this goes under the (bad) name nonlocality.

Now suppose we have two spatially separated laboratories which receive pairs of photons from a common source. The "left lab" L chooses to measure the polarization of its photons along two directions \(\alpha\) and \(\gamma\), while the "right lab" R chooses to measure the polarization of its photons along two directions \(\beta\) and \(\delta\). Let's call the outcomes of the experiments \(a, b, c, d\) for the measurement directions \(\alpha, \beta, \gamma, \delta\), respectively. The values \(a, b, c, d\) can take are +1 or -1.

Now let us compute the following expression:

\(C=(a+c)b-(a-c)d\)

Since \(a, c = \pm 1\), either \((a+c) = 0\) or \((a-c) = 0\). Suppose \((a+c) = 0\); then \((a-c)=\pm 2\) and so \(C=\pm 2\).
Similarly, if \((a-c) = 0\), then \((a+c)=\pm 2\) and again \(C=\pm 2\).

Either way, \(C=\pm 2\).
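The case analysis can also be checked by brute force, enumerating all 16 assignments of pre-existing values:

```python
from itertools import product

# All 16 ways to assign pre-existing values a, b, c, d = +/-1
values = {(a + c) * b - (a - c) * d
          for a, b, c, d in product([1, -1], repeat=4)}
print(sorted(values))  # [-2, 2]: C is always +/-2
```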

Now suppose we have many runs of the experiment, and for each run \(i\) we get:

\(a_i b_i + b_i c_i + c_i d_i - d_i a_i = \pm 2\)

from which we deduce on average that:

\(|\langle ab\rangle + \langle bc\rangle + \langle cd\rangle - \langle da\rangle|\leq 2\)

This is the famous CHSH inequality. Under appropriate circumstances nature violates it:

in an experiment with photons, the average correlation between measurements along two distinct directions \(\alpha, \beta\) is \(\cos 2(\alpha - \beta)\), and the inequality to be obeyed is:

\(| \cos 2(\alpha - \beta) + \cos 2(\beta - \gamma) + \cos 2(\gamma - \delta) - \cos 2(\delta - \alpha)| \leq 2\)

but if the successive angle differences are all 22.5 degrees we get \(2\sqrt{2} \leq 2\), which is false. So what is going on here?
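A quick sketch checking the arithmetic; the specific angles 0, 22.5, 45, 67.5 degrees are one convenient choice with all successive differences equal to 22.5 degrees:

```python
import math

# Four measurement directions spaced 22.5 degrees apart
alpha, beta, gamma, delta = (math.radians(x) for x in (0, 22.5, 45, 67.5))
E = lambda x, y: math.cos(2 * (x - y))  # photon correlation from the text

S = abs(E(alpha, beta) + E(beta, gamma) + E(gamma, delta) - E(delta, alpha))
print(S)  # 2.828... = 2*sqrt(2), above the classical bound of 2
```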

A natural first objection is that the 4 measurements cannot all be performed simultaneously, and so we are reasoning counterfactually. But in N runs of the experiment we get 2N experimental results, and there is a finite number of ways to fill in the missing 2N data points; in each counterfactual way of filling in the unmeasured data the CHSH inequality is still obeyed.

A second potential objection is that there is no free will and there is a conspiracy going on which prevents an unbiased choice of the 4 directions. There is no counterargument to this objection, except that I know I have free will. If free will does not exist then mankind has much deeper troubles than explaining quantum mechanics: try to explain morality and justify the existence of the judicial system.

The introduction of bias can affect correlations, and if the detection rate depends on the angle, then for appropriate dependencies one can obtain the quantum correlations. This is the so-called detection loophole. However, if such a dependency exists it can be tested in additional experiments, and introducing an angle dependency only for Bell test experiments is indefensible. Loophole-free Bell experiments, while important for pushing the boundary of experimental technology, have no scientific importance and count only towards experimentalists' bragging rights.

Another way to obtain correlations above 2 is by appealing to contextuality: for example, the value of \(a\) measured by lab L when lab R measures \(b\) may not be the same as when lab R measures \(d\). While quantum mechanics is contextual, in this case such an argument means that the choice lab R makes influences the result of measurement at lab L, which is spatially separated!

Last, if the values of \(a, b, c, d\) do not exist prior to measurement, this again decouples the value of \(a\) when lab R measures \(b\) from the value of \(a\) when lab R measures \(d\).

Assuming free will is true, we have only two choices at our disposal to obtain correlations above 2:
  • measurement in a spatially separated lab affects the outcome on the remote lab
  • the outcome of measurement does not exist before measurement.
The first choice is taken by dBB theory, because the quantum potential changes instantaneously, and the second option is advocated by the Copenhagen camp. (I am excluding the MWI proposal because in it there is no valid derivation of the Born rule. I am also excluding collapse models because they are a departure from quantum mechanics and experiments will soon be able to reject them.)

Now here is the catch: the two labs need not be spatially separated, and the experiment in lab L can unambiguously happen before the experiment in lab R. When the R lab measurement takes place it cannot affect the outcome in the L lab, because that outcome is in the past and has already happened!

But can the first measurement affect the second one? In dBB this is possible as long as the first particle and its quantum potential are still around to "guide" the second particle. However, if after the first measurement the first particle is annihilated by its antiparticle, then its quantum potential vanishes. The behavior of the quantum potential after annihilation is a reason why a relativistic second-quantized dBB theory is not possible: either the quantum potential sticks around and messes up subsequent measurements, or it vanishes and then the correlations cannot occur in the case above. (dBB supporters pin their hopes on a "future to be discovered" relativistic dBB quantum field theory which has never materialized and cannot exist, for several reasons.)

So from the two choices above only one remains valid:

the outcome of measurement does not exist before measurement

Realism is rejected by Bell's theorem. However, in the literature Bell's result is presented instead as a rejection of locality. But this is an abuse of language: here locality = state factorization. Nature and quantum mechanics are incompatible with state factorization. State factorization is just factorization, not locality. Rejection of realism is the only viable option left.

Saturday, August 20, 2016


Can you generate entangled particles which never interacted?


This post is the continuation of the last one because (1) it attracted a lot of attention and (2) no consensus was reached by the readers. In particular, Lubos Motl was insisting that you cannot generate entangled particles which never interacted. So I am making one last attempt to convince him of the contrary.


The setting is from last time: entanglement swapping. This time I will not write LaTeX and will instead explain the picture above (please excuse my poor MSPaint abilities). For the sake of argument I am using photons, and I show their worldlines in black, going at 45 degree angles. Alice and Bob have the red resonant cavities which capture photons 1 and 4 (feel free to replace the cavities with long enough optical fiber loops). The cavities have a release or absorption mechanism which is activated by Charlie upon obtaining the result of a projective measurement on photons 2 and 3. On average Charlie obtains his desired outcome 25% of the time, in which case he sends the signal to release photons 1 and 4 to the outside world. The other 75% of the time Charlie sends the signal to absorb photons 1 and 4.
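The swapping step can be sketched numerically. This is a minimal illustration (assuming ideal singlet pairs and that Charlie's desired outcome is a projection onto the singlet Bell state): photons 1 and 4 never interact, yet after Charlie's successful outcome, which occurs with probability 1/4, they are left in a singlet state.

```python
import numpy as np

# Singlet state as a 2x2 tensor: s[i, j] = <ij|psi->
s = np.array([[0.0, 1.0], [-1.0, 0.0]]) / np.sqrt(2)

# Photons (1,2) and (3,4) are independent singlet pairs; photons 1 and 4
# have never interacted. Full 4-photon state as a rank-4 tensor.
psi = np.einsum('ab,cd->abcd', s, s)

# Charlie projects photons 2 and 3 onto the singlet (one Bell outcome).
# Unnormalized post-measurement state of photons 1 and 4:
out = np.einsum('bc,abcd->ad', s.conj(), psi)

p_success = np.sum(np.abs(out) ** 2)  # probability of Charlie's outcome
state_14 = out / np.sqrt(p_success)   # normalized state of photons 1 and 4

print(p_success)                                 # 0.25, Charlie's 25% success rate
print(np.allclose(np.abs(state_14), np.abs(s)))  # True: 1 and 4 form a singlet
```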

From the outside, Alice, Bob, Charlie, and photons 1 and 4 are inside a green box. At random times two entangled photons which never interacted in the past come out of the green box.

Now here is the key Lubos statement:

"Excellent. Now, the filtering is caused by the decision in Charlie's brain which is in the intersection of the two past light cones of events -measurements of particles 1,4, right?

So you haven't found any counterexample to my statement, have you?"


Now here is why Lubos is wrong: see the picture below. He contends it is the purple area which is important, together with the fact that Charlie's decision is made inside it (again excuse my lack of precision drawing 45 degree lines).


Why is the purple area irrelevant to the discussion? Because when photons 1 and 4 are in their respective red cavities they do not interact with each other!

Viewed from outside the green box, at random times two entangled photons come out which never interacted in the past and could not have interacted in the past: first, they were spatially separated, and second, they were trapped in isolation cavities long enough for the signal from Charlie to reach their cavities and release them.

It does not matter that Charlie's brain is in the intersection of the two past light cones of the measurement events of particles 1 and 4 at the exit of the green box (the purple area). What matters is that Charlie's brain is not in the yellow area: the intersection of the two past light cones of the events of particles 1 and 4 entering the holding red cavities.

It is hard to explain all this in words without pictures. I tried to include a crude drawing last time in the comment section but formatting mangled it. I suspected Lubos was only pretending not to understand my argument, but now, a week after the exchange, I tend to think he had a genuine misunderstanding: while he was thinking of the purple area I was talking about the yellow one.

Talking about other comments, last time a statement from Andrei caught my attention:

""S1 does not imply nonlocality, but nonrealism"

It does not imply non-locality for the particle that is measured, but implies non-locality for the distant particle. You measure one particle here and you create the value of the spin for both entangled particles (including the one that is far away)."

I will address this in a future post, because a quick reply would not do justice to the topic. Is it true that "You measure one particle here and you create the value of the spin for both entangled particles (including the one that is far away)"? This is deeply related to Einstein's realism criterion:

"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding lo this physical quantity."

This is a very natural criterion given classical intuition, but it is nevertheless false. And showing why and how it is false is not a trivial matter. And if Einstein's realism criterion is false, then this is not true either: "you measure one particle here and you create the value of the spin for both entangled particles (including the one that is far away)."

Measurement does change something about the remote particle: its state. But if states are epistemological rather than ontological, then there is no nonlocality problem. The easiest way to understand this is in the Bayesian paradigm, where I update my degrees of belief: a local measurement changes my local degree of belief about the remote particle.