## Is the Decoherent Histories Approach Consistent?

One particular approach to interpreting quantum mechanics is Decoherent Histories. All major non-Copenhagen approaches have serious issues:
- MWI has the problem of the very meaning of probability, and without a non-circular derivation of the Born rule (impossible in my opinion) it does not qualify for anything but a "work in progress" status.
- GRW-type theories make different predictions than quantum mechanics, which will soon be confirmed or rejected by ongoing experiments. (My bet is on rejection, since later GRW versions tuned their free parameters to avoid collision with known experimental facts instead of making a falsifiable prediction.)
- The Bohmian approach has issues with "surreal trajectories", which invalidate its only hard ontic claim: the position of the particle.

Now onto Decoherent Histories. I have not closely followed this approach and I cannot state for sure whether there are genuine issues here, but I can present the debate. On one hand, Robert Griffiths states:

"What is different is that by employing suitable families of histories one can show that measurement actually measure something that is there, rather than producing a mysterious collapse of the wave function"

On the other hand he states:

"Any description of the properties of an isolated physical system must consists of propositions belonging together to a common consistent logic" - in other words he introduces contextuality.

Critics of decoherent (or consistent) histories use examples which are locally consistent but globally inconsistent to criticize the interpretation.

Here is an example by Goldstein (other examples are known), which can be found in Bricmont's recent book, Making Sense of Quantum Mechanics, on page 231. Consider two particles and two bases for a two-dimensional spin space, $$(|e_1\rangle, |e_2\rangle)$$ and $$(|f_1\rangle, |f_2\rangle)$$, and consider the following state:

$$|\Psi\rangle = a |e_1\rangle|f_2\rangle + a|e_2\rangle|f_1\rangle - b |e_1\rangle|f_1\rangle$$

Then consider four measurements A, B, C, D corresponding to the projectors on the four vectors $$|h\rangle, |g\rangle, |e_2\rangle, |f_2\rangle$$ respectively (A and D act on the second particle, B and C on the first), where:

$$|g\rangle = c|e_1\rangle + d|e_2\rangle$$
$$|h\rangle = c|f_1\rangle + d|f_2\rangle$$

Then we have the following properties:

(1) A and C can be measured simultaneously, and if A=1 then C=1
(2) B and D can be measured simultaneously, and if B=1 then D=1
(3) C and D can be measured simultaneously, but we never get both C and D = 1
(4) A and B can be measured simultaneously, and sometimes we get both A and B = 1

However, all four statements cannot be true at the same time: when A=B=1 as in (4), then by (1) and (2) C=D=1, and this contradicts (3).
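Since this is the technical crux of the example, it is worth checking numerically. The coefficients below are my own illustrative choice, not taken from the book: any values satisfying the normalization conditions and the relation $$bc = ad$$ (which makes properties (1) and (2) hold) would do. A and D act on the second particle, B and C on the first:

```python
import numpy as np

# Illustrative coefficients (my choice, not from the book): they satisfy
# |a|^2 + |a|^2 + |b|^2 = 1, c^2 + d^2 = 1, and the key relation b*c = a*d.
a = b = 1/np.sqrt(3)
c = d = 1/np.sqrt(2)

e1, e2 = np.array([1., 0.]), np.array([0., 1.])
f1, f2 = np.array([1., 0.]), np.array([0., 1.])

# |Psi> = a|e1>|f2> + a|e2>|f1> - b|e1>|f1>
psi = a*np.kron(e1, f2) + a*np.kron(e2, f1) - b*np.kron(e1, f1)

g = c*e1 + d*e2   # measured on particle 1 (measurement B)
h = c*f1 + d*f2   # measured on particle 2 (measurement A)

def joint_prob(u, v):
    """P(particle 1 found along u AND particle 2 found along v)."""
    return np.abs(np.kron(u, v) @ psi)**2

assert np.isclose(joint_prob(e1, h), 0)   # (1): A=1 forces C=1
assert np.isclose(joint_prob(g, f1), 0)   # (2): B=1 forces D=1
assert np.isclose(joint_prob(e2, f2), 0)  # (3): never C=D=1
assert joint_prob(g, h) > 0               # (4): sometimes A=B=1
```

All four probabilities come straight from the Born rule, which is why each statement holds in its own context while no single noncontextual value assignment can satisfy all of them at once.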

So what is going on here? The mathematical formalism of decoherent histories is correct, since it predicts nothing different from standard quantum mechanics. The interpretation assigns probabilities to events whether we observe them or not, but it does so only after taking into account contextuality. Is this a mortal sin of the approach? Nature is contextual, and I don't get the point of the criticism. The interpretation would be incorrect if it did not take contextuality into account. Again, I am not an expert on this approach and I cannot offer a definite conclusion, but to state my bias: I like the approach, and my gut feeling is that the criticism is without merit.

PS: I'll be going on vacation soon and my next post will be delayed: I will skip a week.

## Measurement = collapse + irreversibility

I got a lot of feedback from the last two posts and I need to continue the discussion. Even Lubos, with his closed mind unable to comprehend anything different from the textbooks of 50 years ago and his combative style, said something worth discussing.

But first let me thank Cristi for the picture below, which will help clarify what I am trying to state. Let me quickly explain it: the interferometer arms are like the two sides of Einstein's box, and once the particle is launched (for the duration of the flight) you can close the input and output of the interferometer, open the exit just in time, and still get the interference. So this seems to contradict my prediction. But does it?

This time I do need to dig a bit deeper into the mathematical formalism. First, the role of the observer is paramount: no observer, no measurement. Second, the observer is described by quantum mechanics as well: there is the wavefunction of the quantum system, and there is the wavefunction of the observer. Now here is the new part: while we can combine the quantum system and the observer by tensor product and run the usual discussion of how unitary evolution does not predict a unique outcome, we need to combine the quantum system and the observer using the Cartesian product. This is something new, not present in standard quantum mechanics textbooks. However, it follows naturally from the category theory derivation of quantum mechanics from first principles. There are equivalent Cartesian products corresponding to potential measurement outcomes:

$$(|collapsed ~1 \rangle, | observer~ see~1\rangle ) \equiv ( |collapsed~2\rangle, | observer~see~2\rangle)$$

This equivalence exists in a precise mathematical sense and respects the three properties of an equivalence relation: reflexivity, symmetry, and transitivity. Break the Cartesian pair equivalence by any mechanism whatsoever and you get the collapse of the wavefunction.

Closing the interferometer, or cutting Einstein's box in half, kills the equivalence and the wavefunction collapses. However, while the particle is still in flight the process is reversible!!! Open the interferometer's exits in time and you restore the equivalence, undo the collapse, and still get the interference (Han Solo kills the stormtrooper 100% of the time).

However, there is a way to make the collapse permanent: just wait long enough with the ends closed, such that the energy-time uncertainty relation allows you to reduce the energy uncertainty to the point where you can detect the particle inside by weighing the half-boxes or the arms of the interferometer. Suppose the ends of the interferometer are made out of perfect mirrors. Waiting long enough (even though you are not physically weighing anything) and then reopening the exits will result in loss of interference: this is my prediction.
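To get a feel for the timescale involved, here is a back-of-the-envelope estimate, assuming the $$\Delta t \, \Delta E \gtrsim \hbar/2$$ form of the uncertainty relation and, purely for illustration, that what the weighing must resolve is the rest energy of an electron:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s
E_electron = 8.187e-14   # electron rest energy m_e*c^2, J (illustrative choice)

# Resolving an energy E requires a wait time t > hbar / (2*E)
t_min = hbar / (2 * E_electron)
print(t_min)   # ~6.4e-22 s
```

On this estimate the "permanent collapse" threshold is reached almost instantly for a massive particle; a lighter system (or one where only a small energy difference distinguishes the arms) would need a correspondingly longer wait.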

But what happens if you only wait a little bit of time and you are in between full interference and no interference? You get a weak measurement.

Now let me discuss Lubos' objection and then come back to the nonlocality point Bricmont was making.

First, Lubos stated: "If you place some objects (a wall) at places where a particle is certain not to be located, the effect on the particle's future behavior is obviously non-existent". The objection is vacuous. Obviously I don't disagree with the statement, but his entire line of argument is incorrect because collapse is not a dynamic process. If collapse had a dynamic origin, then we would have a unitary explanation for it and we would have to talk about the "propagation" of collapse. What the Cartesian pair mathematical framework does is, first, get rid of the consciousness factor, and second, clarify the precise mathematical way the observer should be treated within the formalism. Contextuality is paramount in quantum mechanics, and cutting the box changes the context of the experiment.

Now onto Bricmont's argument. Andrei stated in his comments: "I still do not see the relevance of all this in regards to the locality dilemma." It has deep relevance, as I will explain. And by the way, the rest of Andrei's comments were simply not worth answering; nothing personal, I just don't have the luxury of enough free time to answer each and every comment.

Bricmont's point on Einstein's boxes was this: "either there is action at a distance in nature (opening B1 changes the situation at B2), or the particle was in B2 all along and quantum mechanics is incomplete "

Let's discuss the two options:
1. opening B1 changes the situation at B2
2. or the particle was in B2 all along and quantum mechanics is incomplete
Option one is clearly not the case. Wait long enough and the interference will no longer happen. At that point the particle IS in either B1 or B2, and shipping one box far away changes nothing. But how about option 2? Is quantum mechanics incomplete? Bohmian supporters think so, because they augment the wavefunction with a hidden variable: the particle's initial position. Do we actually need this initial condition to make predictions? Not at all. Last thing to consider: was the particle in, say, B2 all along? If yes, there is no interference, because of the which-way information. What about weak measurements? This is a case with even more examples of "surrealistic trajectories": combine two interferometers and you can obtain disjoint paths!!! The only thing which makes sense is that the particle does not have a well defined trajectory.

My question to Bohmian interpretation supporters is as follows: in the above picture, close the arms long enough. What happens to the quantum potential? Does it dissipate? If yes, how? If not, do you always get interference after opening the stormtrooper end, regardless of the wait time?

Finally, back to measurement. There is no (strong) measurement without collapse. Collapse happens when a particular equivalence relation no longer holds. Mathematically it can be proven that the wavefunction is projected onto a subspace of the original Hilbert space. Moreover, uniqueness can be proven as well: this is the only mathematically valid mechanism by which projection can occur. Interaction between the quantum system and the measurement device can break the equivalence, but changing the experimental context can achieve the same thing as well. A measurement does not happen, however, until irreversibility occurs: there could be amplification effects, or, as above, enough time passes such that the energy uncertainty is low enough and the "which way" information becomes available regardless of whether we seek this information or not.

## A measurement can be more than an observer learning the value of a physical observable

Last post created quite a stir and I want to expand on the ideas from it. This will also help me get out of a somewhat embarrassing situation. For months now Lubos Motl has tried to get revenge for his bruised ego after a well deserved April Fool's joke and has become a pest at this blog. The problem is that although I have yet to see a physics post at his blog that is 100% correct, we share roughly the same intuition about quantum mechanics: I agree much more with his position than, say, with the Bohmian, GRW, or MWI approaches. The differences are on the finer points, and I found his in-depth knowledge rusty and outdated. For his purpose, to discredit the opposite points of view at all costs, this is enough, but it does not work if you are a genuine seeker of truth.

So last time he commented here: "A measurement is a process when an observer actually learns the value of a physical observable", which from 10,000 feet is good enough. However, it is not precise enough, and now I do have a fundamental disagreement with Lubos which hopefully will put enough distance between him and me.

More important than my little feud with Lubos, I can now propose an experiment which will either validate or reject my proposed solution to the measurement problem. I do have a novel proposal for how to solve the measurement problem, and it is distinct from all other approaches. I searched for months for a case with a novel experimental prediction, but when I applied my proposal to many problems I kept getting the same predictions as standard quantum mechanics. Here, however, is a case where my predictions are distinct. I will not work out the math; instead let me simply present the experiment and make my experimental claim.

Take a box with a single particle inside. The box has a middle separator and also two slits A and B which can be placed next to a two-slit screen. We can then carry out two kinds of experiments:

1. open the two slits A and B without dropping the separator, allowing the particle to escape the box and hit a detector screen placed after the two-slit screen;
2. drop the separator and then open the two slits A and B, allowing the particle to escape the box and hit a detector screen placed after the two-slit screen.

Next we repeat experiment 1 or 2 enough times to see a pattern emerge on the final screen. Which pattern would we observe?

For experiment 1 we already know the answer: if we repeat it many times we obtain the interference pattern, but what will we get in the case of experiment number 2?

If dropping the separator constitutes a measurement, the wavefunction collapses and we get two spots on the detector screen, corresponding to two single-slit experiments. If, however, dropping the separator does not constitute a measurement, then we get the same interference pattern as in experiment 1.
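The two candidate patterns can be sketched with a toy far-field model of two idealized point slits. All numerical values below are arbitrary illustrations, and with point slits the collapsed case comes out as a flat pattern rather than two single-slit spots (finite slit widths would supply the envelopes):

```python
import numpy as np

lam, dsep = 500e-9, 10e-6              # illustrative wavelength and slit separation
theta = np.linspace(-0.1, 0.1, 1001)   # angle on the detector screen
phase = 2*np.pi*dsep*np.sin(theta)/lam # path difference between the two slits

psi_A = np.exp( 1j*phase/2)  # amplitude through slit A
psi_B = np.exp(-1j*phase/2)  # amplitude through slit B

# No measurement: amplitudes add, producing fringes
coherent = np.abs(psi_A + psi_B)**2
# Collapsed wavefunction: probabilities add, no fringes
incoherent = np.abs(psi_A)**2 + np.abs(psi_B)**2

assert coherent.max() > 3.9 and coherent.min() < 0.1  # visible fringes
assert np.allclose(incoherent, 2.0)                   # featureless pattern
```

The question the experiment decides is simply which of the two arrays above describes the accumulated hits on the screen.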

My prediction (distinct from textbook quantum mechanics) is that there will be no interference pattern.

## Are Einstein's Boxes an argument for nonlocality?

### (an experimental proposal)

Today I want to discuss a topic from an excellent book by Jean Bricmont, Making Sense of Quantum Mechanics, which presents the best arguments for the Bohmian interpretation. Although I do not agree with this approach, I appreciate the clarity of the arguments and I want to present my counterargument.

On page 112 there is the following statement: "... the conclusion of his [Bell] argument, combined with the EPR argument is rather that there are nonlocal physical effects (and not just correlations between distant events) in Nature".

To simplify the argument to its bare essentials, a thought experiment is presented in section 4.2: Einstein's boxes. Here is how the argument goes: start with a box B and a particle in the box, then cut the box into two half-boxes B1 and B2. If the original state is $$|B\rangle$$, after cutting the state becomes:

$$\frac{1}{\sqrt{2}}(|B_1\rangle+|B_2\rangle)$$

Then the two halves are spatially separated and one box is opened. Of course the expected thing happens: the particle is always found in one of the half-boxes. Now suppose we find the particle in B2. Here is the dilemma: either there is action at a distance in nature (opening B1 changes the situation at B2), or the particle was in B2 all along and quantum mechanics is incomplete because $$\frac{1}{\sqrt{2}}(|B_1\rangle+|B_2\rangle)$$ does not describe what is going on. My take is that the dilemma is a false one. Splitting the box amounts to a measurement, regardless of whether you look inside the boxes or not, and the particle will be in either B1 or B2.

Here is an experimental proposal to prove that after cutting the box the state is not $$\frac{1}{\sqrt{2}}(|B_1\rangle+|B_2\rangle)$$:

Split the box and connect the two halves to the two arms of a Mach-Zehnder interferometer (bypassing the first beam splitter). Do you get interference or not? I say you will not get any interference, because weighing the boxes before releasing the particle into the interferometer gives you the which-way information.

If we do not physically split the box, then indeed $$|B\rangle = \frac{1}{\sqrt{2}}(|B_1\rangle+|B_2\rangle)$$, but if we do physically split it, $$|B\rangle \neq \frac{1}{\sqrt{2}}(|B_1\rangle+|B_2\rangle)$$. There is a hidden assumption in Einstein's boxes argument: realism, which demands non-contextuality. Nature and quantum mechanics are contextual: when we introduce the divider, the experimental context changes.

Bohmian supporters will argue that $$|B\rangle = \frac{1}{\sqrt{2}}(|B_1\rangle+|B_2\rangle)$$ always holds. There is a simple way to convince me I am wrong: do the experiment above and show you can tune the M-Z interferometer in such a way that there is destructive interference preventing the particle from exiting at one detector.
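The difference between the two claims can be made concrete with a two-mode matrix sketch, assuming an ideal Hadamard-type 50/50 recombining beam splitter; the two amplitudes are labeled by which half-box (interferometer arm) the particle occupies:

```python
import numpy as np

# Ideal 50/50 (Hadamard-type) beam splitter acting on (|B1>, |B2>) amplitudes
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# If the split box really were (|B1>+|B2>)/sqrt(2): coherent superposition
psi = np.array([1, 1]) / np.sqrt(2)
p_coherent = np.abs(BS @ psi)**2            # one output port stays dark

# If splitting collapses the state: proper mixture, 50/50 between the arms
rho = np.diag([0.5, 0.5])
p_mixed = np.real(np.diag(BS @ rho @ BS.T)) # both detectors fire equally

assert np.allclose(p_coherent, [1, 0])
assert np.allclose(p_mixed, [0.5, 0.5])
```

The dark output port in the coherent case is exactly the tunable destructive interference that would prove me wrong; my prediction is the mixed-state outcome.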

## Gleason's Theorem

It feels good to be back to physics, and as a side note, going forward I will do the weekly posts on Sundays. Today I want to talk about Gleason's theorem. But what is Gleason's theorem?

If you want to assign a non-negative real valued function $$p(v)$$ to every vector v of a Hilbert space H of dimension greater than two, then subject to some natural conditions the only possible choice is $$p(v) = {|\langle v|w \rangle |}^{2}$$ for all vectors v and an arbitrary but fixed vector w.

Therefore in quantum mechanics there is no alternative to computing the average value of an observable A in the standard way:

$$\langle A \rangle = Tr (\rho A)$$

where $$\rho$$ is the density matrix which depends only on the preparation process.
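For a pure preparation $$\rho = |w\rangle\langle w|$$, the trace rule reduces exactly to Gleason's $$p(v)$$. A quick numerical check, using randomly generated vectors purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(n):
    v = rng.normal(size=n) + 1j*rng.normal(size=n)
    return v / np.linalg.norm(v)

w = random_state(4)            # state fixed by the preparation process
v = random_state(4)            # vector defining the tested projector
rho = np.outer(w, w.conj())    # pure-state density matrix |w><w|
P   = np.outer(v, v.conj())    # projector |v><v|

# Gleason's p(v) = |<v|w>|^2 coincides with the trace rule <P> = Tr(rho P)
assert np.isclose(np.trace(rho @ P).real, abs(v.conj() @ w)**2)
```
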

Gleason's theorem is rather abstract and we need to unpack its physical intuition and the mathematical gist of the argument. Physically, Gleason's theorem comes from three axioms:

• Projectors are interpreted as quantum propositions
• Compatible experiments correspond to commuting projectors
• KEY REQUIREMENT: For any two orthogonal projectors P, Q, the sum of their expectation values is the expectation value of P+Q: $$\langle P \rangle + \langle Q\rangle = \langle P+Q\rangle$$
In an earlier post I showed how violating the last axiom (which is the nontrivial one), in the case of spin one particles, can be used to send signals faster than the speed of light and violate causality. But how does Gleason arrive at his result?

Let's return to the original problem: obtaining a real non-negative function p. Now add the key requirement and demand that for any complete orthonormal basis $$e_m$$ we have:

$$\sum_m p(e_m) = 1$$

For example in two dimensions on a unit circle we must have:

$$p (\theta) + p(\theta + \pi/2) = 1$$

which constrains the Fourier expansion of $$p(\theta)$$ such that, apart from the constant term, only the components of order 2, 6, 10, etc. can be nonzero. In three dimensions the constraints are much more severe, and the analysis involves rotations under SO(3) and spherical harmonics. I'll skip the tedious math, but it is not terribly difficult to show that the only allowed spherical harmonics are of order 0 and 2, which yields $$p(v) = {|\langle v|w \rangle |}^{2}$$.
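The two-dimensional constraint is easy to verify numerically: the Born-rule candidate $$p(\theta) = \cos^2(\theta - \theta_0)$$ (whose Fourier content is a constant plus an order-2 component) satisfies it, while sneaking in an order-4 component breaks it. The value of $$\theta_0$$, the direction of the fixed vector w, is an arbitrary illustrative choice:

```python
import numpy as np

theta = np.linspace(0, 2*np.pi, 400, endpoint=False)
theta0 = 0.7   # arbitrary direction of the fixed vector w (illustrative)

def p(t):      # Born-rule candidate: p(theta) = |<v(theta)|w>|^2
    return np.cos(t - theta0)**2

# Orthonormal-basis constraint p(theta) + p(theta + pi/2) = 1 holds,
# since cos^2(x) = 1/2 + cos(2x)/2 and cos(2(x + pi/2)) = -cos(2x):
assert np.allclose(p(theta) + p(theta + np.pi/2), 1.0)

def q(t):      # add a Fourier component of order 4
    return p(t) + 0.1*np.cos(4*t)

# ... and now the constraint fails: the sum is 1 + 0.2*cos(4*theta)
assert not np.allclose(q(theta) + q(theta + np.pi/2), 1.0)
```
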

The real mathematical heavy lifting is in dimensions larger than three, and to handle them Gleason first generalizes $$\sum_m p(e_m) = 1$$ to $$\sum_m f(e_m) = k$$, where k is any positive value. He names this "f" a "frame function". Then he proceeds to show that dimensions larger than three do not add anything new.

If you are satisfied with Hilbert spaces of dimension 3, the proof of the theorem is not above undergrad level, and I hope it is clear what the argument is. But what about the Many Worlds Interpretation? Can we use Gleason's theorem there to prove the Born rule? Nope. The very notion of probability is undefined in MWI, and I have yet to see a non-circular derivation of the Born rule in MWI. I contend it can't be done because it is a mathematical impossibility, and I blogged about it in the past.

## The Open Society and Its Enemies

Today I had planned to return to physics and talk about Gleason's theorem, but as US politics still brutally interjects into our lives, I want to explain my take on the events and to cleanse and de-Trumpify my life before resuming the physics topics. Gleason's theorem is not going anywhere and I will postpone the topic for a week.

Let me start with a "disclaimer": I am manipulated/incited by the media, I am a professional protester paid by Soros, I am a hard-core liberal/socialist in love with Hillary, I hate the silent majority of blue-collar hard-working people who make my life comfortable, and I think they are all bigots and racists. (For the record, this was tongue-in-cheek.)

Until my second year in college I lived under a totalitarian system, and after 1989 I experienced Romania's lengthy decade of transition to democracy. What I observe today in the US is the process in reverse. So how did we get here?

To shorten the history, let me start with the end of the second Bush presidency and the inauguration of Obama. At that time all inter-bank lending came to a screeching halt and it was as if someone had hit the turn-off switch on the economy. To jolt the system back to life, Obama resorted to the ideas of Keynes and introduced the stimulus package. But Obama had one fault: he was black, and this startled the rednecks who organized into what later became the Tea Party. This was America's reactionary ideology, which could not express its opposition to Obama's skin color due to political correctness and instead went after him on fiscal ideas. The mainstream Republicans had a love/hate relationship with the Tea Party: on one hand they feared it as something which could not be controlled, but on the other hand they drew their support from the same electoral pool.

The mainstream Republicans are masters of duplicity: they draw their support from the poor, rural, uneducated part of America by praising its self-reliance and tickling its self-esteem, while they push policies which actually hurt their mass constituency and enrich the big-business donors. To pull off this trick they rely on a disgusting propaganda machine: Fox News. What Trump did was break through the Republican lies and come out in the open with a full display of racism and intolerance. Trump was running not only against Hillary but against the mainstream Republicans as well.

Now fast forward to today. I don't think Trump is stupid: he is a master manipulator, an immoral con man who plays on others' core beliefs. Trump lacks any core beliefs or moral compass, and this makes him extremely dangerous: a narcissistic psychopath bully, now with nukes. You only fool yourself by giving him a chance or the benefit of the doubt. The only thing Trump respects is raw power and a forceful push-back.

But what about the Republican electorate? Almost half of them are brainwashed imbeciles: in June of 2016, 41% of registered Republicans thought Obama was not born in the US. Moreover, 31% did not know what to think, and only 27% of them were on the sane side!!!

So how can we deal with Trump and his constituency? Let's go back to basics, the US constitution. Trump stated that the second amendment is under siege, but he is now attacking the first amendment:

"Prohibits Congress from making any law respecting an establishment of religion, impeding the free exercise of religion, abridging the freedom of speech, infringing on the freedom of the press, interfering with the right to peaceably assemble or prohibiting the petitioning for a governmental redress of grievances."

The main characteristic of a totalitarian state was brilliantly captured by Popper in his book:
"The Open Society and Its Enemies"

The other day Trump tweeted: "Now professional protesters, incited by the media, are protesting. Very unfair!"

He tried to intimidate the media and interfered with the rights of the people to protest. Trump wants to build physical and economic walls, deport millions of people, and turn America into a closed totalitarian society. I find this unacceptable:

Trump is not my president.

Trump is not alone. He has a cohort of bad supporters. First the shame list of totalitarians willing to trample your rights:

- Trump: see above
- Rudy Giuliani: advocates locking up political opponents
- Chris Christie: engaged in political revenge
- Steve Bannon: antisemitic racist; the Goebbels of Trump

Next come the opportunistic, lying, immoral, disgusting individuals:
- Reince Priebus: no slimy job is too slimy for him
- Bill O'Reilly

Then plain toxic people:
- Sarah Palin
- Ted Cruz

Then there are the deplorables, and here I name a troll of this site: Lubos Motl. In Romania there is one guy, Radu Moraru, with his TV station Nasul TV, who got involved in US politics and the Romanian vote here (in a covert effort to unseat the head of the Romanian anti-corruption agency). On that TV station I respect only one guy: Grigore Cartianu. Then there are the brainwashed.

I have no respect for the people above and I draw the line here.

The rest of the people who voted for Trump are not racists or brainwashed, and I can have a meaningful, polite conversation with them. I have several friends who voted for Trump and I have no ill feelings towards them; while we disagree, we do it within the boundaries of decency.

I only insist on one point: if you voted for Trump you are personally responsible for the consequences of Trump's presidency.

## The beginning of the end of US democracy

Trump just visited Obama in the White House today, and as a result he felt emboldened to complain about the protests against him.

If you voted for Trump you are personally responsible for the consequences of Trump's presidency.

## Donald Trump proves Time Travel does not exist

I have been proven wrong: Donald Trump won the election, and apparently it is not wise to underestimate people's stupidity. So how did we get here?

On one hand, the establishment IS corrupt and there was a deep need for fresh air. Another Bush or another Clinton was an insufferable option. But on the other hand, racism still runs deep, and while it was driven underground by political correctness, it came back with a vengeance. While Trump's disgusting brand of politics is nothing new in Europe, the US did not have the antibodies to combat it, and people will learn the hard way how to do it.

If you voted for him you are personally responsible for the consequences of Trump's presidency.

## Waiting for US election results

I will comment tomorrow on the election outcome...

## Einstein's reality criterion

Sorry for the delay. This week I want to have a short post continuing to showcase the results of the late Asher Peres, which unfortunately are not well known, and this is a shame.

In the famous EPR paper, Einstein introduced a reality criterion:

"If, without in any way disturbing a system, we can predict with certainty ... the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity"

Now it is generally accepted that the difference between classical and quantum mechanics is noncommutativity. While there are some subtle points to be made about this assertion (by the mathematical community), from the physics point of view the distinction is rock solid and we can build on it with confidence.

Now consider again the EPR-B experiment with its singlet state. Suppose the x and y components of the spin exist independent of measurement, and let's call the measurement values $$m_{1x}, m_{1y}, m_{2x}, m_{2y}$$. From experimental results we know:

$$m_{1x} = - m_{2x}$$
and
$$m_{1y} = - m_{2y}$$

And now for the singlet state $$\psi$$ let's compute:

$$(\sigma_{1x} \sigma_{2y} + \sigma_{1y} \sigma_{2x})\psi$$

which turns out to be zero. The beauty of this is that $$\sigma_{1x} \sigma_{2y}$$ commutes with $$\sigma_{1y} \sigma_{2x}$$, and by Einstein's reality criterion extended to commuting operators it implies that $$m_{1x} m_{2y} = - m_{1y} m_{2x}$$, which contradicts $$m_{1x} = - m_{2x}$$ and $$m_{1y} = - m_{2y}$$.
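Both claims, that the operator annihilates the singlet and that the two products commute, are two-line checks with explicit Pauli matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

A = np.kron(sx, sy)   # sigma_1x sigma_2y
B = np.kron(sy, sx)   # sigma_1y sigma_2x

assert np.allclose((A + B) @ singlet, 0)  # the operator annihilates the singlet
assert np.allclose(A @ B, B @ A)          # the two products commute
```
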

This contradiction is in the same vein as the GHZ result, but it is not well known. The catch is that measuring the individual components S1x and S2y cannot be done at the same time as measuring S1y and S2x, and so we are reasoning counterfactually. However, counterfactual reasoning is allowed in a noncontextual setting (in classical mechanics, and in quantum mechanics for commuting operators), so the result is valid.

## von Neumann and Gleason vs. Bell

Returning to physics topics, today I want to talk about an important point of contention between von Neumann and Gleason on one hand, and Bell on the other. I had a series of posts about Bell in which I discussed his major achievement. However, I do not subscribe to his ontic point of view, and today I will attempt to explain why and perhaps persuade the reader with what I consider to be a solid argument.

Before Bell wrote his famous paper, he had another one in which he criticized von Neumann, Jauch and Piron, and Gleason. The key of the criticism was that additivity of orthogonal projection operators does not necessarily imply the additivity of expectation values:

$$\langle P_u + P_v \rangle = \langle P_{u}\rangle + \langle P_{v}\rangle$$

The actual technical requirements in von Neumann's and Gleason's cases were slightly different, but they can be reduced to the statement above, and more importantly this requirement is the nontrivial one in a particular proof of Gleason's theorem.

(photo: Andrew Gleason)

To Bell, additivity of expectation values was a non-natural requirement, because he was able to construct hidden variable models violating it. And this was the basis for his criticism of von Neumann and his theorem on the impossibility of hidden variables. But is this additivity requirement unnatural? What can happen when it is violated? I will show that a violation of additivity of expectation values can allow instantaneous communication at a distance.

The experimental setting is simple and involves spin 1 particles. The example which I will present is given in the late Asher Peres' book, Quantum Theory: Concepts and Methods, on page 191. (This book is one of my main sources of inspiration for how we should understand and interpret quantum mechanics.)

The mathematical identity we need is:

$$J_{z}^{2} = {(J_{x}^{2} - J_{y}^{2})}^2$$

and the experiment is as follows: a beam of spin 1 particles is sent through a beam splitter which sends to the left particles with eigenvalue zero for $$J_{z}^{2}$$ and to the right particles with eigenvalue one for $$J_{z}^{2}$$.
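The identity is easy to confirm with the standard spin-1 matrices:

```python
import numpy as np

s = 1/np.sqrt(2)
Jx = s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = np.diag([1, 0, -1]).astype(complex)

K = Jx @ Jx - Jy @ Jy                # J_x^2 - J_y^2
assert np.allclose(K @ K, Jz @ Jz)   # the identity J_z^2 = (J_x^2 - J_y^2)^2

# J_z^2 is itself the projector onto the J_z = +-1 subspace,
# which is exactly what the beam splitter sorts on:
assert np.allclose(Jz @ Jz, np.diag([1, 0, 1]))
```
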

Now a lab on the right decides to measure either $$J_z$$ (with outcomes $$\pm 1$$) or $$J_{x}^{2} - J_{y}^{2}$$ (also with outcomes $$\pm 1$$).

For the laboratory on the right, let's call the projectors in the first case $$P_u$$ and $$P_v$$, and in the second case $$P_x$$ and $$P_y$$.

For the lab on the left, let's call the projector in the first case $$P_{w1}$$ and in the second case $$P_{w2}$$.

Because of the mathematical identity $$P_u + P_v = P_x + P_y$$, the issue becomes: should the expectation value requirement hold as well?

$$\langle P_{u}\rangle + \langle P_{v}\rangle = \langle P_{x}\rangle + \langle P_{y}\rangle$$

For the punch line we have the following identities:

$$\langle P_{w1}\rangle = 1 - \langle P_{u}\rangle - \langle P_{v}\rangle$$
and
$$\langle P_{w2}\rangle = 1 - \langle P_{x}\rangle - \langle P_{y}\rangle$$

and as such if the additivity requirement is violated we have:

$$\langle P_{w1}\rangle \neq \langle P_{w2}\rangle$$

Therefore regardless of the actual spatial separation, the lab on the left can figure out which experiment the lab on the right decided to perform!!!

With this experimental setup, if additivity of expectation values is false, you can even violate causality!!!
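Quantum mechanics itself, of course, respects the additivity, so the two marginals on the left agree no matter what the right lab chooses. A quick check with explicit projectors for the two measurement choices (the random test state is purely illustrative):

```python
import numpy as np

# Right lab, case 1: projectors for J_z = +1 and J_z = -1
Pu, Pv = np.diag([1., 0, 0]), np.diag([0, 0, 1.])
# Right lab, case 2: projectors on the +-1 eigenvectors of J_x^2 - J_y^2
x = np.array([1, 0,  1]) / np.sqrt(2)
y = np.array([1, 0, -1]) / np.sqrt(2)
Px, Py = np.outer(x, x), np.outer(y, y)

# The operator identity behind the setup
assert np.allclose(Pu + Pv, Px + Py)

# Born-rule expectation values are automatically additive, so for any
# state the left lab sees the same marginal in both cases (no signaling)
rng = np.random.default_rng(1)
psi = rng.normal(size=3) + 1j*rng.normal(size=3)
psi /= np.linalg.norm(psi)

def exp_val(P):
    return (psi.conj() @ P @ psi).real

assert np.isclose(exp_val(Pu) + exp_val(Pv), exp_val(Px) + exp_val(Py))
```

A hidden variable model that breaks the additivity requirement would fail the last assertion, and that is precisely the signaling loophole described above.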

Back to Bell: just because von Neumann and Gleason did not provide a justification for their requirements, that does not invalidate their arguments. The justification was found at a later time.

But what about the Bohmian interpretation of quantum mechanics? Although there are superluminal speeds in the theory, superluminal signaling is not possible in it. This is because the Bohmian interpretation respects the Born rule, which is a consequence of Gleason's theorem, and it respects the additivity of expectation values as well. The Bohmian interpretation suffers from other issues, however.

## A US Presidential Election Analysis

Once in a while, important events deserve to be discussed and they dislodge physics topics. I wrote in the past about Donald Trump, and today I want to revisit the topic and present some analysis of what is currently going on in US election politics. By now the election outcome is all but certain: Trump will lose and Clinton will win. But what is the basis for this prediction?

If you have never heard of it, there is an amazing site by Nate Silver: http://projects.fivethirtyeight.com/2016-election-forecast/

Nate Silver has huge, well deserved prediction credibility, and he performs in-depth analysis of the elections well beyond what you find on the usual media outlets like CNN.

In the image below you see the daily graph of the winning chances for Trump (the red line) and Clinton (the blue line).

In mid-July Trump got a post-Republican-convention boost and was on the rise until Clinton had the Democratic convention. The sharp Trump decline after that convention was due to his attack on the Khan family, whose son died for America. When that scandal faded in mid-August, Trump's odds began improving, following the erosion of trust in Clinton due to the email server scandal and concerns about her health. Then came the first debate, in which Trump had a very good first half hour but was ill prepared for the long haul of the debate. That started a turn-off reaction among independent voters, who only now got their first serious look at him.

Still, the slide was temporary, the fluctuations were comparable with those of the prior two weeks, and for two days he was climbing back in the polls. At this point the famous tape of him bragging about grabbing women by their genitals surfaced, and this started a chain reaction, mostly inside the Republican party. The tape reversed the trend, but what killed his election chances was his performance in the second debate. Trump made two strategic mistakes:

• he attacked Hillary (and Bill Clinton) instead of sincerely apologizing
• he dismissed the tape as locker room talk and claimed he did not do anything physical
Let's see why those were fatal mistakes for him. Going on the offensive when people expected genuine contrition made Trump appear like a rabid dog, and people were hugely disgusted by his behavior. The general consensus among the independents who watched the second debate was that they themselves felt dirty and in need of a shower. The second debate reduced Trump's chances to the low teens. If you look at the two prior cycles, June-August and August-October, you notice Trump's bounce-back rate and that there is not enough time for him to close the gap before election day.

Now even if the election were postponed a few months, Trump would never recover, due to his second strategic mistake. For all his playboy behavior, it is impossible that he never did anything real like what he was bragging about on the tape. By dismissing it all as "only talk" (as opposed to Bill Clinton's actions), he encouraged women to come forward to tell their stories. Once this starts, it cannot be stopped. Just ask Bill Cosby how it happened in his case: the same pattern will repeat here.

When the tape was released, Republicans running for reelection started deserting Trump out of fear that the backlash in the women's vote would negatively affect their own chances of reelection. But by now it is clear that Trump's chances of election are virtually zero, and this has the potential to split the Republican party.

After the election loss, the finger-pointing will begin. Reince Priebus has no real vision or power and will most likely lose his job. The power vacuum will start a chaotic period for the Republican party, which will end either in a victory of the anti-Trump forces or in a party split. My bet is that the party will remain intact, since politicians tend to act as a pack: there is strength in numbers and it is hard to survive alone.

## Local Causality in a Friedmann-Robertson-Walker Spacetime

A few days ago I learned about a controversy regarding Joy Christian's paper:
Local Causality in a Friedmann-Robertson-Walker Spacetime which got published in Annals of Physics and was recently withdrawn: http://retractionwatch.com/2016/09/30/physicist-threatens-legal-action-after-journal-mysteriously-removed-study/

The paper repeats the same mathematically incorrect arguments of Joy Christian against Bell's theorem and has nothing to do with Friedmann-Robertson-Walker spacetime. The FRW space was only used as a trick to get the wrong referees, ones who are not experts on Bell's theorem. In particular, the argument is the same as in Joy's incorrect one-pager preprint.

The mistake happens in two steps:
• a unification of two algebras into the same equation
• a subtle transition from a variable to an index in a computation mixing apples with oranges
I will run the explanation in parallel between the one-pager and the withdrawn paper because it is easier to see the mistake in the one-pager.

Step 1: One-pager Eq. 3 is the same as FRW paper Eq. 49:

$$\beta_j \beta_k = -\delta_{jk} - \epsilon_{jkl} \beta_l$$
$$L(a, \lambda) L(b, \lambda) = - a\cdot b - L(a\times b, \lambda)$$

In the FRW paper $$L(a, \lambda) = \lambda I\cdot a$$ while in the 1-pager $$\beta_j (\lambda) = \lambda \beta_j$$, where $$\lambda$$ is a choice of orientation. This may look like an innocuous unification, but in fact it describes two distinct algebras with distinct representations.

This means that Eqs. 3/49 describe two multiplication rules (and let's call them A for apples and O for oranges). Unpacked, the multiplication rules are:

$$A_j A_k = -\delta_{jk} + \epsilon_{jkl} A_l$$
$$O_j O_k = -\delta_{jk} - \epsilon_{jkl} O_l$$

The matrix representations are:

$$A_1 = \left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array}\right) = i\sigma_3$$
$$A_2 = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array}\right) = -i \sigma_2$$
$$A_3 = \left( \begin{array}{cc} 0 & -i \\ -i & 0 \end{array}\right)= -i \sigma_1$$

and $$O_i = - A_i = {A_i}^{\dagger}$$

Try multiplying the above matrices to convince yourself that they are indeed a valid representation of the multiplication rule.
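If you would rather let a computer do the multiplication, here is a small JavaScript check of the table (the helper names are mine; complex numbers are hand-rolled as [re, im] pairs since the point is just bookkeeping):

```javascript
// Check that A_1, A_2, A_3 satisfy A_j A_k = -delta_jk + eps_jkl A_l,
// e.g. A_1 A_2 = A_3, A_2 A_3 = A_1, and A_1 A_1 = -identity.
// Complex numbers are represented as [re, im] pairs.

function cMul(a, b) { return [a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0]]; }
function cAdd(a, b) { return [a[0] + b[0], a[1] + b[1]]; }

function matMul(A, B) {
  var C = [[[0,0],[0,0]],[[0,0],[0,0]]];
  for (var i = 0; i < 2; i++)
    for (var j = 0; j < 2; j++)
      for (var k = 0; k < 2; k++)
        C[i][j] = cAdd(C[i][j], cMul(A[i][k], B[k][j]));
  return C;
}

function matEquals(A, B) {
  for (var i = 0; i < 2; i++)
    for (var j = 0; j < 2; j++)
      if (Math.abs(A[i][j][0] - B[i][j][0]) > 1e-12 ||
          Math.abs(A[i][j][1] - B[i][j][1]) > 1e-12) return false;
  return true;
}

var A1 = [[[0,1],[0,0]],[[0,0],[0,-1]]];   // i sigma_3
var A2 = [[[0,0],[-1,0]],[[1,0],[0,0]]];   // -i sigma_2
var A3 = [[[0,0],[0,-1]],[[0,-1],[0,0]]];  // -i sigma_1
var minusIdentity = [[[-1,0],[0,0]],[[0,0],[-1,0]]];
```

The oranges representation needs no separate check: since $$O_i = -A_i$$, the sign of the epsilon term simply flips.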

There is even a ket and bra (or column and row vector) representation of the two distinct algebras, but I won't go into details since it requires a math detour which would take the focus away from Joy's mistake.

Step 2: summing apples with oranges (or column vectors with row vectors)

The summation is done in steps 5-7 and 67-75. The problem is that the sum from 1 to n contains two kinds of objects, apples and oranges, and should in fact be broken up into two sums. If it is to be combined into a single sum, then we need to convert the apples and oranges into orientation-independent objects. Since $$L(a, \lambda) = \lambda I\cdot a$$ and $$\beta_j (\lambda) = \lambda \beta_j$$, with $$I \cdot a$$ and $$\beta_j$$ orientation-independent objects, when we convert the two kinds of objects into a single unified kind there is a missing factor of lambda.

Since $$O_j=\beta_j (\lambda^k) = \lambda^k \beta_j$$ with $$\lambda^k = +1$$ and $$A_j=-\beta_j (\lambda^k) = \lambda^k \beta_j$$ with $$\lambda^k = -1$$, where $$\lambda^k$$ is the orientation of the k-th pair of particles, in the transition from 6 to 7 and from 72 to 73 the unified sum is missing a $$\lambda^k$$ factor.

Again, either break up the sum into apples and oranges (where the index k tells you which kind of object you are dealing with), or unify the sum and adjust it by converting to orientation-free objects, which is done by multiplication by $$\lambda^k$$. If we separate the sums, they will not cancel each other out because there is a -1 conversion factor from apples to oranges ($$O = - A$$), and if we unify the sum as Joy does in Eq. 74, the sum is not of $$\lambda^k$$ but of $${(\lambda^k)}^2$$, which does not vanish.
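The bookkeeping point is easy to see numerically; a minimal sketch (the function name is mine):

```javascript
// Averaging lambda^k versus (lambda^k)^2 over many random
// orientations lambda^k = +/-1: the first fluctuates around 0,
// the second is identically 1 and cannot vanish.

function averageLambda(n, squared) {
  var sum = 0;
  for (var k = 0; k < n; k++) {
    var lambda = Math.random() < 0.5 ? -1 : 1;  // random orientation
    sum += squared ? lambda * lambda : lambda;
  }
  return sum / n;
}
```

A sum of $$\lambda^k$$ over random orientations averages to zero, but the unified sum in Eq. 74 is effectively of $${(\lambda^k)}^2$$ and equals 1 for every term.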

As it happens, Joy's research program is plagued by this -1 (or missing lambda) mistake in his attempt to vanquish a cross product term. But even if his proposal were mathematically valid, it would not represent a genuine challenge to Bell's theorem. Inspired by Joy's program, James Weatherall found a mathematically valid example very similar to Joy's proposal, but one which does not use quaternions/Clifford algebras.

The lesson of Weatherall is that correlations must be computed using actual experimental results and the computation (like the one Joy is doing at steps 67-75) must not be made in a hypothetical space of "beables".

Now back to the paper withdrawal: the journal did not act properly, as it should have notified Joy before taking action. However, Joy did not act in good faith either, disguising the title to sneak the paper past imperfect peer review, and his attempt at victimization in the comments section has no merit. In the end the paper is mathematically incorrect, has nothing to do with FRW spacetime, and (as shown by Weatherall) Joy's program is fatally flawed and cannot get off the ground even if there were no mathematical mistakes in it.

## The whole is greater than the sum of its parts

The title of today's post is a quote from Aristotle, but I want to illustrate it in the quantum formalism. Here I will refer to a famous paper by Hardy: Quantum Theory From Five Reasonable Axioms. In it one finds the following definitions:

• The number of degrees of freedom, K, is defined as the minimum number of probability measurements needed to determine the state, or, more roughly, as the number of real parameters required to specify the state.
• The dimension, N, is defined as the maximum number of states that can be reliably distinguished from one another in a single shot measurement.
Quantum mechanics obeys $$K=N^2$$ while classical physics obeys $$K=N$$.

Now suppose nature is realistic and the electron spin does exist independent of measurement. From Stern-Gerlach experiments we know what happens when we pass a beam of electrons through two such devices rotated by an angle $$\alpha$$ relative to each other: if we pick only the spin-up electrons from the first device, on the second device the electrons are still deflected up $$\cos^2 (\alpha /2)$$ percent of the time and are deflected down $$\sin^2 (\alpha /2)$$ percent of the time. This is an experimental fact.

Now suppose we have a source of electron pairs prepared in a singlet state. This means that the total spin of the system is zero. There is no reason to distinguish a particular direction in the universe and with the assumption of the existence of the spin independent of measurement we can very naturally assume that our singlet state electron source produces an isotropic distribution of particles with opposite spins. Now we ask: in an EPR-B experiment, what kind of correlation would Alice and Bob get under the above assumptions?

We can go about finding the answer to this in three ways. First, we can cheat and look the answer up in a 1957 paper by Bohm and Aharonov, who first made the computation. This paper (and the answer) is cited by Bell in his famous "On the Einstein-Podolsky-Rosen paradox". But we can do better than that. We can play with the simulation software from last time. Here is what you need to do:

-replace the generating functions with:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
var cosAngle= Dot(direction, sharedRandomness3DVector);
var cosHalfAngleSquared = (1+cosAngle)/2;
if (Math.random() < cosHalfAngleSquared )
return +1;
else
return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
var cosAngle= Dot(direction, sharedRandomness3DVector);
var cosHalfAngleSquared = (1+cosAngle)/2;
if (Math.random() < cosHalfAngleSquared )
return -1;
else
return +1;
};

-replace the -cosine curve drawing with a -0.3333333 cosine curve:

boardCorrelations.create('functiongraph', [function(t){ return -0.3333333*Math.cos(t); }, -Math.PI*10, Math.PI*10],{strokeColor:  "#66ff66", strokeWidth:2,highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});

-replace the fit test for the cosine curve with one for the 0.3333333 cosine curve:

var diffCosine = epsilon + 0.3333333*Math.cos(angle);

and the result of the program (for 1000 directions and 1000 experiments) is:

So how does the program work? The sharedRandomness3DVector is the direction along which the spins are randomly generated. The dot product computes the cosine of the angle between the measurement direction and the spin, and from it we can compute the cosine of the half angle. The square of the cosine of the half angle is used to determine the random outcome. The resulting curve is 1/3 of the experimental correlation curve. Notice that the output generation for Alice and Bob is completely independent (locality).

But the actual analytical computation is not that hard to do either. We proceed in two steps.

Step 1: Let $$\beta$$ be the angle between one spin $$x$$ and a measurement device direction $$a$$. We have: $$\cos (\beta) = a\cdot x$$ and:

$${(\cos \frac{\beta}{2})}^2 = \frac{1+\cos\beta}{2} = \frac{1+a\cdot x}{2}$$

Keeping the direction $$x$$ constant, the measurement outcomes for Alice and Bob measuring on the directions $$a$$ and $$b$$ respectively are:

++ $$\frac{1+a\cdot x}{2} \frac{1+b\cdot (-x)}{2}$$ percent of the time
-- $$\frac{1-a\cdot x}{2} \frac{1-b\cdot (-x)}{2}$$ percent of the time
+- $$\frac{1+a\cdot x}{2} \frac{1-b\cdot (-x)}{2}$$ percent of the time
-+ $$\frac{1-a\cdot x}{2} \frac{1+b\cdot (-x)}{2}$$ percent of the time

which yields the correlation: $$-(a\cdot x) (b \cdot x)$$
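Filling in the intermediate algebra: since the outcomes on the two sides are independent for fixed $$x$$, the correlation is the product of the two single-side means:

$$E(a, b | x) = \left(\frac{1+a\cdot x}{2} - \frac{1-a\cdot x}{2}\right)\left(\frac{1+b\cdot(-x)}{2} - \frac{1-b\cdot(-x)}{2}\right) = (a\cdot x)\left(b\cdot(-x)\right) = -(a\cdot x)(b\cdot x)$$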

Step 2: integrate $$-(a\cdot x) (b \cdot x)$$ for all directions $$x$$. To this aim align $$a$$ on the z axis and have $$b$$ in the y-z plane:

$$a=(0,0,a)$$
$$b=(0, b_y , b_z)$$

then go to spherical coordinates integrating using:

$$\frac{1}{4\pi}\int_{0}^{2\pi} d\theta \int_{0}^{\pi} \sin\phi d\phi$$

$$a\cdot x = \cos\phi$$
$$b\cdot x = b(0, \sin\alpha, -\cos\alpha)\cdot(\sin\phi \cos\theta, \sin\phi\sin\theta, \cos\phi)$$

where $$\alpha$$ is the angle between $$a$$ and $$b$$.

Plugging all back in and doing the trivial integration yields: $$-\frac{\cos\alpha}{3}$$
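The analytical result can also be checked with a standalone Monte Carlo run, using the same generating scheme as the simulation above. A compact sketch in plain JavaScript (the function names are mine, not the simulation's):

```javascript
// Monte Carlo check of the -cos(alpha)/3 correlation: spins lie along
// a random unit vector x; Alice's outcome is +1 with probability
// (1 + a.x)/2 and Bob's is +1 with probability (1 - b.x)/2.

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

function randomUnitVector() {
  // rejection sampling: random point in the cube, kept if inside the sphere
  while (true) {
    var v = [2*Math.random()-1, 2*Math.random()-1, 2*Math.random()-1];
    var n = Math.sqrt(dot(v, v));
    if (n <= 1 && n > 1e-9) return [v[0]/n, v[1]/n, v[2]/n];
  }
}

function correlation(a, b, trials) {
  var sum = 0;
  for (var i = 0; i < trials; i++) {
    var x = randomUnitVector();
    var A = Math.random() < (1 + dot(a, x))/2 ? 1 : -1;
    var B = Math.random() < (1 - dot(b, x))/2 ? 1 : -1;
    sum += A * B;
  }
  return sum / trials;
}

// For alpha = 0 the model predicts -1/3, not the perfect
// anti-correlation -1 of quantum mechanics:
var sameDirection = [0, 0, 1];
var estimate = correlation(sameDirection, sameDirection, 200000);
```

Running this for equal directions gives an estimate close to -1/3, in agreement with the integral above and with the modified simulation.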

So now for the moral of the story: the quantum mechanics prediction, and the experimentally observed correlation, is $$-\cos\alpha$$ and not $$-\frac{1}{3} \cos\alpha$$.

The incorrect 1/3 correlation factor comes from demanding (1) the experimentally proven behavior of two consecutive S-G device measurements, (2) the hypothesis that the electron spins exist before measurement, and (3) an isotropic distribution of spins originating from a total spin zero state.

(1) and (3) cannot be discarded because (1) is an experimental behavior, and (3) is a very natural demand of isotropy. It is (2) which is the faulty assumption.

If (2) is true then circling back on Hardy's result, we are under the classical physics condition: $$K=N$$ which means that the whole is the sum of the parts.

Bell considered both the 1/3 result and the one from his inequality and decided to showcase his inequality for experimental reasons: "It is probably less easy, experimentally, to distinguish (10) from (3), than (11) from (3)." Both hidden variable models:

if (Dot(direction, sharedRandomness3DVector) < 0)
return +1;
else
return -1;

and

var cosAngle= Dot(direction, sharedRandomness3DVector);
var cosHalfAngleSquared = (1+cosAngle)/2;
if (Math.random() < cosHalfAngleSquared )
return -1;
else
return +1;

are at odds with quantum mechanics and experimental results. The difference between them is in the correlation behavior at 0 and 180 degrees. If we allow information transfer between Alice's generating function and Bob's generating function (nonlocality), then it is easy to generate whatever correlation curve we want under both scenarios (play with the computer model to see how it can be done).
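To illustrate the nonlocal case: if Bob's generating function is allowed to see Alice's direction and outcome, reproducing the quantum $$-\cos\alpha$$ curve is trivial. A sketch with hypothetical function names (not part of the original program):

```javascript
// A manifestly nonlocal model: Bob's side sees Alice's direction and
// outcome, and anti-correlates with her with probability
// cos^2(alpha/2) = (1 + cos(alpha))/2, which yields E = -cos(alpha).

function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

function GenerateAliceOutput() {
  return Math.random() < 0.5 ? 1 : -1;  // unbiased coin flip
}

function GenerateBobOutputNonlocal(aliceDirection, bobDirection, aliceOutcome) {
  var cosAlpha = dot(aliceDirection, bobDirection);
  var pAnti = (1 + cosAlpha) / 2;       // cos^2(alpha/2)
  return Math.random() < pAnti ? -aliceOutcome : aliceOutcome;
}

function correlationNonlocal(a, b, trials) {
  var sum = 0;
  for (var i = 0; i < trials; i++) {
    var A = GenerateAliceOutput();
    var B = GenerateBobOutputNonlocal(a, b, A);
    sum += A * B;
  }
  return sum / trials;
}
```

Since $$E = (-1)\cos^2(\alpha/2) + (+1)\sin^2(\alpha/2) = -\cos\alpha$$, this reproduces the quantum curve exactly, which is why locality of the generating functions is the essential constraint in the model.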

So from the realism point of view, which hidden variable model is better? Should we insist on perfect anti-correlations at 0 degrees, or should we demand the two consecutive S-G results along with realism? It does not matter, since both are wrong. In the end, local realism is dead.

## Explanation for Bell's theorem modeling program

Today I will explain the code from last time in detail and show how you can change it to experiment with Bell's theorem. The code below needs only a text editor to modify and a web browser to run. In other words, it is trivial to play with, provided you understand the basics of HTML and JavaScript. For elementary introductions to those topics see here and here.

In a standard HTML page we start in the body section with the script include responsible for plotting the graph at the end:

<body>
<script src="http://jsxgraph.uni-bayreuth.de/distrib/jsxgraphcore.js" type="text/javascript"></script>

Then we have the following HTML table

<table border="4" style="width: 50%;">
<tr><td style="width: 25%;">
<br />
Number of experiments: <input id="totAngMom" type="text" value="100" />
<br />
Number of directions: <input id="totTestDir" type="text" value="100" />
<br />

<input onclick="clearInput();" type="button" value="Clear Data" />

<input onclick="generateRandomData();" type="button" value="Generate Shared Random Data" />
<br />

<textarea cols="65" id="in_data" rows="7">
</textarea>
<br />

<input onclick="clearTestDir();" type="button" value="Clear data" />

<input onclick="generateTestDir();" type="button" value="Generate Random Alice Bob directions (x,y,z,x,y,z)" />
<textarea cols="65" id="in_test" rows="4">
</textarea>
<br />
<input onclick="clearOutput();" type="button" value="Clear Data" />

<input onclick="generateData();" type="button" value="Generate Data from shared randomness" />
<br />
Legend: Direction index|Data index|Measurement Alice|Measurement Bob
<textarea cols="65" id="out_measurements" rows="4">
</textarea>
<input onclick="clearBoard();" type="button" value="Clear Graph" />
<input onclick="plotData();" type="button" value="Plot Data" />

</td>
</tr>
<tr>
<td>
<div class="jxgbox" id="jxgboxCorrelations" style="height: 400px; width: 550px;">
</div>

</td></tr>
</table>

and we close the body:

</body>

The brain of the page is encapsulated by script tags:

<script type="text/javascript">
</script>

which can be placed anywhere inside the HTML page. Here are the functions which are declared inside the script tags:

//Dot is the scalar product of 2 3D vectors
function Dot(a, b)
{
return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
};

This simply computes the dot product of two vectors in ordinary 3D Euclidean space. As a JavaScript reminder, arrays start at index zero and go to N-1. Also, in JavaScript comments start with a double slash // and statements end with a semicolon ;

Next there is a little utility function which computes the magnitude of a vector:

//Norm computes the norm of a 3D vector
function GetNorm(vect)
{
return Math.sqrt(Dot(vect, vect));
};

This is followed by another utility function which normalizes a vector:

//Normalize generates a unit vector out of a vector
function Normalize(vect)
{
//declares the variable
var ret = new Array(3);
//computes the norm
var norm = GetNorm(vect);

//scales the vector
ret[0] = vect[0]/norm;
ret[1] = vect[1]/norm;
ret[2] = vect[2]/norm;
return ret;
};

To create a randomly oriented vector we use the function below, which first generates a random point in a cube of side 2, rejects the points outside the unit sphere (trying again), and then normalizes the vector:

//RandomDirection create a 3D unit vector of random direction
function RandomDirection()
{
//declares the variable
var ret = new Array(3);

//fills a 3D cube with coordinates from -1 to 1 on each direction
ret[0] = 2*(Math.random()-0.5);
ret[1] = 2*(Math.random()-0.5);
ret[2] = 2*(Math.random()-0.5);

//excludes the points outside of a unit sphere (tries again)
if(GetNorm(ret) > 1)
return RandomDirection();
return Normalize(ret);
};

The rest of the code is this:

var generateData = function()
{
clearBoard();
clearOutput();
//gets the data
var angMom = new Array();
var t = document.getElementById('in_data').value;
var data = t.split('\n');
for (var i=0;i<data.length;i++)
{
var vect = data[i].split(',');
if(vect.length == 3)
angMom[i] = data[i].split(',');
}

var newTotAngMom = angMom.length;
clearBoard();
var varianceLinear = 0;
var varianceCosine = 0;
var totTestDirs = document.getElementById('totTestDir').value;

var abDirections = new Array();
var AliceDirections = new Array();
var BobDirections = new Array();
var t2 = document.getElementById('in_test').value;
var data2 = t2.split('\n');
for (var k = 0; k < data2.length; k++)
{
var vect2 = data2[k].split(',');
if (vect2.length == 6)
{
abDirections[k] = data2[k].split(',');
AliceDirections[k] = data2[k].split(',');
BobDirections[k] = data2[k].split(',');

AliceDirections[k][0] = abDirections[k][0];
AliceDirections[k][1] = abDirections[k][1];
AliceDirections[k][2] = abDirections[k][2];
BobDirections[k][0]   = abDirections[k][3];
BobDirections[k][1]   = abDirections[k][4];
BobDirections[k][2]   = abDirections[k][5];
}
}

var TempOutput = "";

//computes the output
for(var j=0; j<totTestDirs; j++)
{
var a = AliceDirections[j];
var b = BobDirections[j];
for(var i=0; i<newTotAngMom; i++)
{
TempOutput = TempOutput + (j+1);
TempOutput = TempOutput + ",";
TempOutput = TempOutput + (i+1);
TempOutput = TempOutput + ",";
TempOutput = TempOutput + (GenerateAliceOutputFromSharedRandomness(a, angMom[i]));
TempOutput = TempOutput + ",";
TempOutput = TempOutput + (GenerateBobOutputFromSharedRandomness(b, angMom[i]));
if(i != newTotAngMom-1 || j != totTestDirs-1)
TempOutput = TempOutput + " \n";
}
}

apendResults(TempOutput);
};

var plotData = function()
{
clearBoard();
boardCorrelations.suspendUpdate();
//gets the data
var angMom = new Array();
var t = document.getElementById('in_data').value;
var data = t.split('\n');
for (var i=0;i<data.length;i++)
{
var vect = data[i].split(',');
if(vect.length == 3)
angMom[i] = data[i].split(',');
}

var newTotAngMom = angMom.length;
var varianceLinear = 0;
var varianceCosine = 0;
var totTestDirs = document.getElementById('totTestDir').value;

//extract directions
var abDirections = new Array();
var AliceDirections = new Array();
var BobDirections = new Array();
var t2 = document.getElementById('in_test').value;
var data2 = t2.split('\n');
for (var k = 0; k < data2.length; k++)
{
var vect2 = data2[k].split(',');
if (vect2.length == 6)
{
abDirections[k] = data2[k].split(',');
AliceDirections[k] = data2[k].split(',');
BobDirections[k] = data2[k].split(',');

AliceDirections[k][0] = abDirections[k][0];
AliceDirections[k][1] = abDirections[k][1];
AliceDirections[k][2] = abDirections[k][2];
BobDirections[k][0]   = abDirections[k][3];
BobDirections[k][1]   = abDirections[k][4];
BobDirections[k][2]   = abDirections[k][5];
}
}

var tempLine = new Array();
var Data_Val = document.getElementById('out_measurements').value;
var data_rows = Data_Val.split('\n');

var directionIndex = 1;
var beginNewDirection = false;

var a = new Array(3);
a[0] = AliceDirections[0][0];
a[1] = AliceDirections[0][1];
a[2] = AliceDirections[0][2];
var b = new Array(3);
b[0] = BobDirections[0][0];
b[1] = BobDirections[0][1];
b[2] = BobDirections[0][2];
var sum = 0;

for (var ii=0;ii<data_rows.length;ii++)
{
//parse the input line
var vect = data_rows[ii].split(',');
if(vect.length == 4)
tempLine = data_rows[ii].split(',');

//see if a new direction index is starting
if (directionIndex != tempLine[0])
{
beginNewDirection = true;
}

if(!beginNewDirection)
{
var sharedRandomnessIndex = tempLine[1];
var sharedRandomness = angMom[sharedRandomnessIndex];
var aliceOutcome = tempLine[2];
var bobOutcome = tempLine[3];
sum = sum + aliceOutcome*bobOutcome;
}

if (beginNewDirection)
{
//finish computation
var epsilon = sum/newTotAngMom;
var angle = Math.acos(Dot(a, b));
boardCorrelations.createElement('point', [angle,epsilon],{size:0.1,withLabel:false});

var diffLinear = epsilon - (-1+2/Math.PI*angle);
varianceLinear = varianceLinear + diffLinear*diffLinear;
var diffCosine = epsilon + Math.cos(angle);
varianceCosine = varianceCosine + diffCosine*diffCosine;

//reset and start a new cycle
directionIndex = tempLine[0];
a[0] = AliceDirections[directionIndex-1][0];
a[1] = AliceDirections[directionIndex-1][1];
a[2] = AliceDirections[directionIndex-1][2];
b[0] = BobDirections[directionIndex-1][0];
b[1] = BobDirections[directionIndex-1][1];
b[2] = BobDirections[directionIndex-1][2];
sum = 0;
var sharedRandomnessIndex = tempLine[1];
var sharedRandomness = angMom[sharedRandomnessIndex];
var aliceOutcome = tempLine[2];
var bobOutcome = tempLine[3];
sum = sum + aliceOutcome*bobOutcome;
beginNewDirection = false;
}

}
//finish computation for last element of the loop above
var epsilon = sum/newTotAngMom;
var angle = Math.acos(Dot(a, b));
boardCorrelations.createElement('point', [angle,epsilon],{size:0.1,withLabel:false});
var diffLinear = epsilon - (-1+2/Math.PI*angle);
varianceLinear = varianceLinear + diffLinear*diffLinear;
var diffCosine = epsilon + Math.cos(angle);
varianceCosine = varianceCosine + diffCosine*diffCosine;
//display total fit
boardCorrelations.createElement('text',[2.0, -0.7, 'Linear Fitting: ' + varianceLinear],{});
boardCorrelations.createElement('text',[2.0, -0.8, 'Cosine Fitting: ' + varianceCosine],{});
boardCorrelations.createElement('text',[2.0, -0.9, 'Cosine/Linear: ' + varianceCosine/varianceLinear],{});
boardCorrelations.unsuspendUpdate();
};

var clearBoard = function()
{
JXG.JSXGraph.freeBoard(boardCorrelations);
boardCorrelations = JXG.JSXGraph.initBoard('jxgboxCorrelations',{boundingbox:[-0.20, 1.25, 3.4, -1.25],axis:true, showCopyright:false});

boardCorrelations.create('functiongraph', [function(t){ return -Math.cos(t); }, -Math.PI*10, Math.PI*10],{strokeColor: "#66ff66", strokeWidth:2,highlightStrokeColor: "#66ff66", highlightStrokeWidth:2});
boardCorrelations.create('functiongraph', [function(t){ return -1+2/Math.PI*t; }, 0, Math.PI],{strokeColor: "#6666ff", strokeWidth:2,highlightStrokeColor: "#6666ff", highlightStrokeWidth:2});
};

var clearInput = function()
{
document.getElementById('in_data').value = '';
};

var clearTestDir = function()
{
document.getElementById('in_test').value = '';
};

var clearOutput = function()
{
document.getElementById('out_measurements').value = '';
};

var generateTestDir = function()
{
clearBoard();
var totTestDir = document.getElementById('totTestDir').value;
var testDir = new Array(totTestDir);
var strData = "";
for(var i=0; i<totTestDir; i++)
{
//first is Alice, second is Bob
testDir[i] = RandomDirection();
strData = strData + testDir[i][0] + ", " + testDir[i][1] + ", " + testDir[i][2]+ ", " ;
testDir[i] = RandomDirection();
strData = strData + testDir[i][0] + ", " + testDir[i][1] + ", " + testDir[i][2] + '\n';
}

document.getElementById('in_test').value = strData;
};

var generateRandomData = function()
{
clearBoard();
var totAngMoms = document.getElementById('totAngMom').value;
var angMom = new Array(totAngMoms);
var strData = "";
for(var i=0; i<totAngMoms; i++)
{
angMom[i] = RandomDirection();
strData = strData + angMom[i][0] + ", " + angMom[i][1] + ", " + angMom[i][2] + '\n';
}

document.getElementById('in_data').value = strData;
};

var apendResults= function(newData)
{
var existingData = document.getElementById('out_measurements').value;
existingData = existingData + newData;
document.getElementById('out_measurements').value = existingData;
};

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) > 0)
return +1;
else
return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) < 0)
return +1;
else
return -1;
};

var boardCorrelations = JXG.JSXGraph.initBoard('jxgboxCorrelations', {axis:true, boundingbox: [-0.25, 1.25, 3.4, -1.25], showCopyright:false});

clearBoard();
generateRandomData();
generateTestDir();
generateData();
plotData();

The key to the whole exercise are the following two functions:

function GenerateAliceOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) > 0)
return +1;
else
return -1;
};

function GenerateBobOutputFromSharedRandomness(direction, sharedRandomness3DVector) {
//replace this with your own function returning +1 or -1
if (Dot(direction, sharedRandomness3DVector) < 0)
return +1;
else
return -1;
};

To experiment with various hidden variable models, all you have to do is replace the two functions above with your own hidden variable concoction which uses the shared variable "sharedRandomness3DVector".

There are certain models for which, if we return zero a certain number of times as a function of the angle between the direction and sharedRandomness3DVector, one can obtain the quantum mechanics correlation curve. (Returning zero is equivalent to discarding the data, since the correlations are computed by this line in the code: sum = sum + aliceOutcome*bobOutcome;.) This is the famous detection loophole (or (un)fair sampling loophole) for Bell's theorem.
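To make the mechanics concrete, here is a sketch of such a generating function. This shows the structure only: the rejection rule below is a placeholder of my own, not the angle-dependent rejection distribution needed to actually reproduce the quantum cosine (for that, see the Pearle reference below):

```javascript
// Sketch (structure only) of a detection-loophole generating function:
// returning 0 counts as an undetected event, which the correlation
// sum effectively discards. The rejection rule here is a placeholder
// and does NOT reproduce the quantum correlation curve by itself.

function Dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

function GenerateAliceOutputWithRejection(direction, sharedRandomness3DVector) {
  var cosAngle = Dot(direction, sharedRandomness3DVector);
  // placeholder rule: sometimes discard near-orthogonal encounters
  if (Math.abs(cosAngle) < 0.3 && Math.random() < 0.5)
    return 0;  // undetected: drop this pair from the correlation sum
  return cosAngle > 0 ? 1 : -1;
}
```

The point is that the outcome set becomes {+1, 0, -1}, and the angle-dependent pattern of zeros is the knob the loophole turns.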

If we talk about the detection loophole, the paper to read is an old one by Philip Pearle: http://journals.aps.org/prd/abstract/10.1103/PhysRevD.2.1418 In it, Pearle found an entire class of solutions able to generate the quantum correlations. The original paper is hard to double-check (it took me more than a week and I was still not completely done), but Richard Gill did manage to extract a useful, workable detection loophole model out of it: https://arxiv.org/pdf/1505.04431.pdf

By manipulating the generating functions above, one can easily test various ideas about hidden variable models. For example, an isotropic model of opposite spins generates $$-\frac{1}{3} a \cdot b$$ correlations. It is not that hard to double-check the math in this case: a simple integral will do the trick. In particular, this shows that the spins do not exist independent of measurement.

More manipulations using the detection loophole can even generate super-quantum Popescu-Rohrlich box correlations, but I will let the reader experiment and discover how to do it for themselves. Happy computer modeling!

## Playing with Bell's theorem

In this post I'll write just a little text, because editing is done straight in the HTML view, which is very tedious. Below is a JavaScript program which illustrates Bell's theorem. If you want to play with the code, just right-click on the page to view the source and extract it from there. If you do not know how to do that, then you are not going to understand it in a few sentences. Next time I'll describe the code and how to experiment with various hidden variable models.
This is about an EPR-B Alice-Bob experiment where each "measurement" generates a deterministic +1 or -1 outcome for a particular measurement direction using a shared piece of information: a random vector. Then the correlations are computed and plotted. No matter what deterministic model you try, the correlation curve you generate is a straight line near the origin, versus a curve of zero slope in the case of quantum mechanics. For this particular program, given a measurement direction specified as a unit vector in Cartesian coordinates, I compute the scalar product with the shared random vector and return +1 if it is positive and -1 if it is negative. The experiment is repeated a number of times for various random measurement directions.
If you do not trust the randomly generated data, you can enter your own random Alice-Bob shared secret and your own measurement directions. Part of the credit for this program goes to Ovidiu Stoica.