Thursday, October 30, 2014

Clever integration tricks

Today I want to talk about a clever integration trick I learned from Achim Kempf at DICE2014. Any mathematical physicist learns clever integration tricks, and one of my personal favorites is how to compute:

\(I = \int_{-\infty}^{+ \infty} e^{-x^2} dx \) 

because the integrand has no elementary antiderivative; this is the integral behind the Gaussian (normal) distribution. However, one can still compute it quite easily by going to the 2-dimensional plane and considering the \( y \) integral as well: \(\int_{-\infty}^{+ \infty} e^{-y^2} dy \), which is \( I \) again:


\(I ^2 = \int_{-\infty}^{+ \infty}\int_{-\infty}^{+ \infty} e^{-x^2} e^{-y^2} dx dy  = \int_{-\infty}^{+ \infty}\int_{-\infty}^{+ \infty} e^{-(x^2 + y^2)} dx dy \)

and the trick is to change this to polar coordinates:

\( x^2 + y^2 = r^2\) and \( dx dy = rdr d\theta\)

Integration over \( \theta\) trivially gives \( 2 \pi\), and the extra factor of \( r \) lets you find an antiderivative and integrate \( r \) from \(0 \) to \( \infty\).
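Here is a quick numerical sanity check of the polar-coordinate result (a sketch of mine, using a plain Riemann sum on a truncated domain):

```python
import numpy as np

# Numerical sanity check of the polar-coordinate trick:
# I = integral of exp(-x^2) dx should satisfy I^2 = pi.
dx = 1e-4
x = np.arange(-10.0, 10.0, dx)      # integrand is negligible beyond |x| ~ 6
I = np.sum(np.exp(-x**2)) * dx      # plain Riemann sum

print(I**2, np.pi)   # the two numbers agree to many digits
```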

But how about not having to find a primitive at all? Then one can try Achim's formula:

\( \int_{-\infty}^{+\infty} f(x) dx = 2 \pi f(-i\partial_x ) \delta (x) |_{x=0}\)

It's a bit scary looking

Happy Halloween!

but let's first prove it:

\( 2 \pi f(-i \partial_x) \delta(x) |_{x = 0} = f(-i \partial_x)  \int_{-\infty}^{+\infty} e^{ixy} dy |_{x = 0}\)

due to a representation of \( \delta (x)\):

\( \delta (x) = \frac{1}{2 \pi} \int_{-\infty}^{+\infty} e^{ixy} dy \)

Moving \( f(-i \partial_x) \) inside the integral makes this \( \int_{-\infty}^{+\infty} f(y) e^{ixy} dy |_{x = 0} \). Why? Expand \( f \) in a Taylor series and apply the powers of \( -i \partial_x \) to \( e^{ixy} \), resulting in powers of \( y \). Then recombine the Taylor series terms into \(f(y) \). Finally evaluate at \( x = 0 \), which kills the exponential term; you are left with only \( \int_{-\infty}^{+\infty} f(y) dy\), and the formula is proved.

So now let's see this formula in action by computing \( \int_{-\infty }^{+\infty} \frac{\sin x}{x} dx\):

\( \int_{-\infty }^{+\infty} \frac{\sin x}{x} dx = 2 \pi \sin(-i \partial_x) \frac{1}{-i\partial_x} \delta(x) |_{x = 0} = \)
\( = \frac{2 \pi}{-i} \frac{1}{2i} (e^{\partial_x} - e^{-\partial_x}) (\theta(x) + c) |_{x = 0}\)

Now we can use the Taylor series to prove that \( e^{a \partial_x} f(x) = f(x+a) \), and from this the integral becomes:

\( = \pi (\theta(x+1) - \theta(x-1) +c - c)|_{x=0} = \pi (1 - 0 + 0) = \pi\)
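The shift identity \( e^{a \partial_x} f(x) = f(x+a) \) used above is easy to verify numerically. Here is a small sketch of mine using \( f = \sin \), whose derivatives cycle through sin, cos, -sin, -cos, so the truncated exponential series can be summed explicitly:

```python
import math

# Check e^{a d/dx} f(x) = f(x+a) for f = sin by summing the shift series
# directly: the derivatives of sin cycle through sin, cos, -sin, -cos.
def shifted_sin(x0, a, terms=30):
    derivs = [math.sin, math.cos,
              lambda t: -math.sin(t), lambda t: -math.cos(t)]
    return sum(a**k / math.factorial(k) * derivs[k % 4](x0)
               for k in range(terms))

x0, a = 0.3, 1.2
print(shifted_sin(x0, a), math.sin(x0 + a))   # identical to high precision
```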

So what is really going on in this formula? If we start with another representation for the Dirac delta:

\( \delta(x) = \lim_{\sigma \rightarrow 0^{+}} \frac{1}{\sqrt{\pi \sigma}} e^{-\frac{x^2}{\sigma}}\)


then the formula becomes:

\(\int_{-\infty}^{+\infty} f(x) dx = \lim_{\sigma \rightarrow 0^{+}} 2 \sqrt{\frac{\pi}{\sigma}} e^{\frac{{\partial_x}^2}{\sigma}} f(x) |_{x=0}\)

The exponential term is a Gaussian blurring which flattens \( f(x) \); it is in fact a heat kernel, because solving the heat equation amounts to convolving the initial data with a Gaussian. Also, the limit of \( \sigma \) going to zero (or equivalently \( 1/\sqrt{\sigma} \) going to infinity) would physically correspond to the temperature going to zero.

However, something does look fishy in the formula. How can the integral of a function, which involves its values over the entire domain, be identical to a formula containing the value of \( f\) at only one point, \( x = 0\)? It is not! This is because \( e^{\frac{\partial_{x}^{2}}{\sigma}}\) acts nonlocally: \( e^{\frac{\partial_{x}^{2}}{\sigma}}f(x) \) is a convolution!
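Here is a numerical sketch of that last point (my own illustration, not from the talk): implement \( e^{\partial_x^2 / \sigma} \) as convolution with a heat kernel of "time" \( t = 1/\sigma \), and check that \( 2\sqrt{\pi/\sigma} \) times the smoothed value at zero approaches the exact integral, here \( \int e^{-x^2} dx = \sqrt{\pi} \):

```python
import numpy as np

# Check  integral of f  =  lim_{sigma->0}  2 sqrt(pi/sigma) * (exp(d^2/sigma) f)(0),
# implementing exp(t d^2) as convolution with the heat kernel
#   (exp(t d^2) f)(x) = integral of exp(-(x-y)^2/(4t)) f(y) dy / sqrt(4 pi t),
# with t = 1/sigma.
def smoothed_at_zero(f, sigma, L=60.0, n=600_001):
    t = 1.0 / sigma
    y = np.linspace(-L, L, n)
    kernel = np.exp(-y**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    return np.sum(kernel * f(y)) * (y[1] - y[0])

f = lambda y: np.exp(-y**2)
sigma = 1e-3
approx = 2.0 * np.sqrt(np.pi / sigma) * smoothed_at_zero(f, sigma)
print(approx, np.sqrt(np.pi))   # approx -> sqrt(pi) as sigma -> 0
```

Note that the smoothed value at a single point secretly uses \( f \) everywhere, which is exactly the resolution of the apparent paradox.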

More can be said, but it is a pain to typeset the equations, and the interested reader should consult Achim's paper. Enjoy!

Thursday, October 23, 2014

Should Gravity be Quantized?

Merging quantum mechanics with general relativity is the hardest problem of modern physics. In naive quantum field theory, treating gravity quantum mechanically means adding smaller and smaller distance contributions to perturbation theory; but smaller distances correspond to higher and higher energy scales, adding enough energy eventually creates a black hole, and the overall computation ends up predicting infinities. String theory, loop quantum gravity, and non-commutative geometry deal with those infinities in different ways, but there are also approaches which challenge the need to treat gravity using quantum theory. Those approaches are a minority view in the physics community, and I side with the majority point of view because I know it is mathematically impossible to construct a self-consistent theory coupling quantum and non-quantum mechanics. But wouldn't it be nice to be able to put those ideas to an experimental test?

Here is where a nice talk at DICE2014 by Dvir Kafri came in. The talk was based on two of his papers. The best way to explain it is probably to present it from the end, so here is the proposed experiment.

Penrose advanced the idea of the gravitational collapse of the wavefunction, and Diosi refined it into the best available model so far. Rather than looking at decoherence of objects due to gravity, Dvir instead asks the following question: can two masses which interact only gravitationally become entangled? Direct superposition experiments are out of the question, but how about measuring some sort of residual quantum noise required to screen the entanglement from occurring in the first place? Since the gravitational coupling is so weak, the noise needed to do this is really tiny, but what if we cool the experiment close to absolute zero? One experiment is not enough, because at 10 micro Kelvin you expect one thermal phonon to be emitted every 10 seconds while the desired effect produces a phonon every 3000 seconds, but massively replicating the experiment in parallel might work to extract the signal (replicate it 10,000,000 times! OK, this is a bit in the realm of science fiction for now, but maybe future technological advances will drop the price of such an experiment to something manageable).

Dvir motivates the experiment by modeling how two distant objects can communicate by individually interacting with an intermediary object.
Here is a slide from Dvir’s presentation (I thank Dvir for providing me with a copy).

Please note that position and momenta are non-commuting operators. So you apply A first, followed by B, followed by –A, and then by –B. The intermediary F (a harmonic oscillator) is unchanged by this procedure, but gains a geometric phase proportional to \( A \otimes B \). In other words, this is what happens:

If you break this process into n infinitesimal steps and repeat n times, by a corollary of Baker-Campbell-Hausdorff formula you get:

\( \lim_{n \rightarrow \infty} {U(t/n)}^{n} = \exp (-it [H_A + H_B + A\otimes B]) \)

where \( U(t/n) \) is one infinitesimal cycle of the procedure.
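The Baker-Campbell-Hausdorff corollary behind this can be illustrated with small matrices (a toy check of mine, not from the talk): for \( X, Y \) of size \( \epsilon \), the loop \( e^{X} e^{Y} e^{-X} e^{-Y} \) agrees with \( e^{[X,Y]} \) up to \( O(\epsilon^3) \) corrections.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via truncated power series (fine for small norms)."""
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# For X, Y of size eps, the loop e^X e^Y e^{-X} e^{-Y} matches e^{[X,Y]}
# up to O(eps^3) corrections -- the BCH corollary used above.
rng = np.random.default_rng(0)
eps = 1e-3
X = eps * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
Y = eps * (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

loop = expm(X) @ expm(Y) @ expm(-X) @ expm(-Y)
bch = expm(X @ Y - Y @ X)
diff = np.abs(loop - bch).max()
print(diff)   # ~ eps^3, far smaller than the eps^2 commutator itself
```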

This picture is a simple model for how two objects can become entangled. To prevent that entanglement (but still allow communication between A and B), we add a “screen” S which captures the coupling with the environment.

By the monogamy of entanglement, this can only decrease the entanglement between A and B.
Since the environment is learning about A and B through F, Dvir invokes what he calls the “Observer Effect”: a measurement of an observable \( O \) necessarily adds uncertainty to a non-commuting observable \( O' \). In this case, the process of screening entanglement means that all observables not commuting with A and B become noisier.

Here is an experimental setup analogous to the model above: S is a weak measurement, and the purpose is to see the noise generation, which is model-independent in the sense that the equations of motion are the same.

If a certain inequality is violated (relating the strength \( \eta \) of the \( A \otimes B \) interaction to the noise added to the system), then the communication channel between the Alice-Bob systems transmits quantum information. Analogously, if we can verify that \( \eta \) is due only to gravity (that is why there is a superconducting shield between the oscillators coupled by gravitational attraction), then by observing the noise and checking the inequality we can conclude that gravity can convey quantum information. Pretty neat.

PS: I thank Dvir for providing clarifying edits to this post.

Friday, October 10, 2014

The amazing Graphene

Continuing the series of interesting talks from DICE2014, I was blown away by Alfredo Iorio's talk: “What after CERN?”. Physics is an experimental science, and the lack of experiments forces theoreticians to construct alternative models which most likely have nothing to do with how nature really is.

In high energy physics the experiments are extremely expensive, and the price tag for a new accelerator is in the billions. Why do people need larger accelerators? Because to probe smaller and smaller regions of space you need larger and larger energies. Accelerators circulate a beam of particles in a circle to reach the required energy, and the faster the particles go (closer and closer to the speed of light), the heavier they become and the larger the radius of the circle needs to be. To probe at the scales of string theory, for example, one would need an accelerator the size of the galaxy. So is there an alternative to this?

It turns out that there are theoretical and experimental efforts of outstanding value circumventing this brute-force approach, and Iorio’s research belongs to this rare breed in physics.

In the past I blogged at FQXi about an experiment by Bill Unruh with a laboratory waterfall which was able, in principle, to simulate a black hole and its Hawking radiation. However, even more amazing things can be achieved with graphene.

So what is so special about this material? There are two key properties which make it extremely interesting.

First, the hexagonal structure requires two translations to reach any atom.

Given an origin, any atom can be reached by a linear combination of two lattice vectors \( a_1 , a_2 \): \( x_a = n_1 a_1 + n_2 a_2\), where \( n_1 , n_2\) are positive or negative integers, followed by a second translation using one of the vectors \( s_1 , s_2 , s_3\).
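As an illustration, here is a sketch that generates honeycomb atom positions from the two Bravais translations plus a two-atom basis (the specific vectors are my own convention, with bond length 1, not taken from the talk):

```python
import numpy as np

# Honeycomb lattice: a triangular Bravais lattice (vectors a1, a2) with a
# two-atom basis (sublattices A and B); bond length set to 1.
a1 = np.array([1.5,  np.sqrt(3) / 2])
a2 = np.array([1.5, -np.sqrt(3) / 2])
basis = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]   # A site, B site

N = 4
atoms = [n1 * a1 + n2 * a2 + b
         for n1 in range(-N, N + 1)
         for n2 in range(-N, N + 1)
         for b in basis]

print(len(atoms))   # 2 atoms per cell: 2 * (2N+1)**2 = 162
```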

Second, the band structure of graphene is very special: the conduction and valence bands touch in exactly one point (called the Dirac point), making the material a semi-metal:

Graphene Band

When the excitation energy is small (~ 10 \( \mu \) eV), the quasi-particle excitations obey Dirac’s equation. Two of the 4 spinor components come from the lattice A vs. lattice B distinction, and the other two come from the two bands touching at the Dirac point.
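The Dirac point is easy to exhibit in the standard nearest-neighbor tight-binding model of graphene. Below is a sketch under my own conventions, parametrized directly by the two Bloch phases \( k \cdot a_1 \) and \( k \cdot a_2 \) to avoid geometry bookkeeping:

```python
import numpy as np

# Nearest-neighbour tight binding on the honeycomb lattice gives two bands
# E(k) = +/- t |f(k)| with f(k) = 1 + exp(i k.a1) + exp(i k.a2).
# Parametrize directly by the Bloch phases theta1 = k.a1, theta2 = k.a2.
def band(theta1, theta2, t=1.0):
    return t * abs(1 + np.exp(1j * theta1) + np.exp(1j * theta2))

# At the Dirac (K) point the three terms are the cube roots of unity and
# cancel, so the two bands touch at zero energy:
print(band(2 * np.pi / 3, -2 * np.pi / 3))   # 0 up to rounding
print(band(0.0, 0.0))                        # 3t, the band-width scale
```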

By its very geometrical structure, graphene is an ideal simulator of spin ½ particles.

Now the hard work begins. How can we use this to obtain answers about quantum field theory in curved space-time? We can start easy and consider defects in the hexagonal pattern. A defect changes the Euler number and introduces curvature. This is mathematically tractable for all simple defects using a bit of advanced geometry, but you don’t get very far beyond a description of the phenomena in terms of critical charges and magnetic fluxes.

But if you can manufacture surfaces of negative curvature:

called Beltrami spheres, then the real fireworks begin. Under the right conditions you can simulate the Unruh effect: an observer in frame A sees the quantum vacuum of frame B as a condensate of A-particles. To observe this, the tip of a Scanning Tunneling Microscope is moved across the graphene surface and probes the graphene quasi-particles.

More amazing things are possible like: Rindler, de Sitter, BTZ black hole, Hawking radiation.

Of course there are drawbacks and roadblocks too: defects in manufacturing might spoil those effects, and it is unclear how accurate manufacturing techniques are at this time. Also, I don’t know if the effects of impurities are properly computed. More seriously, I am skeptical of the ability to maintain the hexagonal pattern while creating the Beltrami funnel. If the pattern is not maintained, this in turn will affect the band structure, which can ruin the validity of the Dirac quasi-particle model.

I brought my concerns to Alfredo, and his response put my mind at ease. To avoid playing telephone, with his permission I am sharing his answer here:

“- So, you are perfectly correct when you doubt that the Beltrami shape can be done all with hexagons. In fact, this is not possible, not because of technical inabilities of manufacturers, but because of the Euler theorem of topology.

- How do we cope with that? Although at the location of the defects the Dirac structure is modified, the hexagonal structure resists in all the other places. When the number of atoms N is big enough, one can safely assume that the overall hexagonal structure dominates (even when the defects start growing, as they do with N, all they do is to distribute curvature more evenly over the surface).

Now, if you stay at small \( \lambda \) (large energy E), you see all local effects of having defects, and the lattice structure cannot be efficiently described by a smooth effective metric (essentially, since the \( \lambda \) and E we talk about here are those of the conductivity (or \( \pi \) ) electrons that live on the lattice (they don't make the lattice, that is made by other electrons, belonging to the \( \sigma \) bonds), we realize that when their wavelength is big enough, they cannot see the local structure of the lattice, just like large waves in the sea are insensitive to small rocks). Hence, for those electrons, the defects cannot play a local role, but, of course they keep playing a global, i.e., topological, role, e.g., by giving the intrinsic curvature (as well known, in 2 dimensions the Gauss-Bonnet theorem links topology and geometry: Total Curvature = 2 \( \pi \) Euler Characteristic).

- Thus, if I was good enough at explaining the previous points, you should see that the limit for big \( r \) (that is small curvature \( K = \pm 1/r^2 \)) is going in the right direction, in all respects: 1. the number of atoms N grows; 2. the energy \( E_r \sim 1/r \) (see Fig. Graphene Band) gets small, hence the \( \lambda \) involved gets big, hence 3. the continuous metric \( g_{\mu \nu} \) well describes the membrane; 4. the overall Dirac structure is modified, but not destroyed, and, the deformations are given by a ''gauge field'', that is of the fully geometric kind. Indeed, this gauge field describes deformations of the membrane, as seen by the Dirac quasi-particles. The result is a Dirac field (we are in the continuum) in a curved spacetime (i.e. covariant derivatives of the diffeo kind appear). In arXiv:1308.0265 we discuss all of this in Section 2.

- There is also an extra (lucky!) bonus in going to big \( r \), that is the reaching of some sort of horizon (more precisely, that is a conformal Killing horizon, that, for a Hawking kind of phenomenon, is more than enough). Why so? The issue here brings in the negative curvature. In that case the spacetime (the 2+1 dimensional spacetime!) is conformal (Weyl related) to a spacetimes with an horizon (Rindler, deSitter, BTZ). Something that does not happen for the positive curvature, the sphere, that in graphene is a fullerene-like structure. In fact, the latter spacetime is conformal (Weyl related) to an Anti deSitter, that, notoriously, does not have an intrinsic horizon.

Now, once you learn that, you also learn that surfaces of constant negative Gaussian curvature have to stop somewhere in space (they have boundaries). That is a theorem by Hilbert. For small \( r \) (large curvature) they stop too early to reach the would-be-horizon. For large \( r \), though, they manage to reach the horizon. Fortunately, for that to happen, \( r \) needs not be 1 km (that would not be an impossible Gedanken experiment, but still a tremendous task, and just unfeasible for a computer). The job is done by \( r = 1 \) micron! That is something that made us very happy: the task is within reach. It is still hard for the actual manufacturing of graphene, but, let me say, it turned into a problem at the border between engineering and applied physics, i.e. it is no longer a fundamental problem, like, e.g., the mentioned galaxy-size accelerator.

- We are actively working on the latter, as well. In this respect, we are lucky that these ``wonders’’ are happening (well... predicted to be happening) on a material that is, in its own right, enormously interesting for the condense matter friends, hence there is quite a lot of expertise around on how to manage a variety of cases. Nonetheless, you need someone willing to probe Hawking phenomena on graphene, while the standard cond-mat agenda is of a different kind. Insisting, though, very recently I managed to convince a composite group of condensed matter colleagues, mechanical engineers, and computer simulations wizards, to join me in this enterprise. So, now we are moving the first steps towards having a laboratory that is fully dedicated to capture fundamental physics predictions in an indirect way, i.e. on an analog system.

What we are doing right now, between Prague, Czech Republic (where I am based) and Trento, Italy (where the ``experimentalists`` are sitting), is the following:

First, we use ideal situations, i.e. computer simulations, hence we have no impurities nor substrates here. There no mention is made of any QFT in curved space model. We only tell the system that those are Carbon atoms, use QM to compute the orbitals and all the relevant quantities, perform the tight binding kind of calculations. Thus, the whole machinery here runs without knowing that we want it to behave as a kind of black hole.

What we are first trying is to obtain a clear picture of what happens to a bunch of particles, interacting via a simplified potential, e.g., a Lennard-Jones potential, constrained on the Beltrami. This will tell us a lot of things, because we know (from similar work with the sphere, that goes under the name of generalized Thomson problem, see, e.g., the nice work by Bowick, Nelson and Travesset) that defects will form more and more, and their spatial arrangements are highly non trivial.

When this is clear, we want to get to a point where we tell the machine that we have N points, and she (the machine) plots the Beltrami of those points. i.e. it finds the minimum, the defects, etc. This would be the end of what we are calling: Simulation Box 1 (SB1).

When SB1 is up and running, we fix a N that is of the order of 3000, take away points interacting with Lennard Jones, and substitute them with Carbon atom, i.e. we stick in the data of Carbon, the interaction potential among them, and then let a Density Functional computation go. The latter is highly demanding, computer-time wise, but doable. With this we shall refine various details of the theory, look into the structure of the electronic local density of states (LDOS), although the \( r \) we can get with N = 3000 is still too small for any Hawking anything. That is the first half of SB2.

The work of SB1 and first half of SB2, can be done with existing machines and well tested algorithms. But we need to go further, towards a big \( r \) (the 1 micron at least... although I would be happier with 1 mm, but don't tell my experimentalist friends, they would kill me!). This is possible, but we are going into the realm of non tested algorithms, of dedicated machines (i.e. large supercomputers, etc). Nonetheless, figures of the order of N = 100K (and even whispered N = 1 million) are in the air. That would be second half of SB2, i.e. when the Hawking should be visible.

That is the road I can take with the current group of people involved. I don't give up though the idea of getting someone to actually do the real graphene thing. But this would only mean a handle of a very large number of points, to the expense of more impurities, substrates, etc. Indeed, the SB2 (the computer simulations of true Carbon interactions) would be so accurate, that myself (and, most importantly, the cond-mat community) would take those results as good (if not better, because `fully clean`) as the experiments.”

In conclusion this is an extremely exciting research direction. 

Friday, October 3, 2014

The topological structure of big data

One interesting talk at DICE2014 was given by Mario Rasetti, on understanding the big data of our age.

You may wonder what this has to do with physics, but please let me explain. First, when we say big data, what are we really talking about? The number of cataloged stellar objects is \( 10^{21}\). Pretty big, right? But consider this: in 2013, 300 billion emails were sent, 25 billion SMS messages made money for phone companies, and 500 million pictures were uploaded to Facebook. In total, from those activities mankind produced \( 10^{21}\) bytes in 2013. And every year we produce more and more data.

How much is \( 10^{21}\) bytes? About 323 billion copies of War and Peace, or 4 million copies of everything in the Library of Congress. In four years it is estimated that we will produce \( 10^{24}\) bytes, which is larger than Avogadro's number!

Now, how can we get from data to information, to knowledge, and then to wisdom? Computer science traditionally lays all this data out sequentially, and people have considered vector spaces for it. But does this make sense? If we take a social network like Twitter, for example, what we really have are simplicial complexes. What Mario Rasetti proposed is to extract the topological information from those kinds of large data sets. In particular, he computes homology groups and Betti numbers, which were discussed in prior posts on this blog, and the reason is that the required algorithms run in polynomial time.
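To make the Betti number computation concrete, here is a minimal sketch of mine computing \( \beta_0 \) and \( \beta_1 \) from boundary-matrix ranks for the smallest interesting complex, a hollow triangle (topologically a circle):

```python
import numpy as np

# Betti numbers from boundary-matrix ranks:
#   beta_k = dim ker d_k - rank d_{k+1}.
# Smallest interesting example: a hollow triangle (3 vertices, 3 edges, no
# filled 2-simplex), i.e. a circle: beta0 = 1 component, beta1 = 1 loop.
vertices = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]

# d1 maps edges to vertices: edge (i, j) -> (vertex j) - (vertex i)
d1 = np.zeros((len(vertices), len(edges)))
for col, (i, j) in enumerate(edges):
    d1[i, col] = -1.0
    d1[j, col] = 1.0

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0   # no filled triangles, so d2 = 0

beta0 = len(vertices) - rank_d1            # dim ker d0 = #vertices (d0 = 0)
beta1 = (len(edges) - rank_d1) - rank_d2   # dim ker d1 - rank d2
print(beta0, beta1)   # 1 1
```

Real pipelines use specialized persistent-homology algorithms rather than dense ranks, but the linear algebra behind them is exactly this.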

We know that if we triangulate a manifold and omit one point we obtain different topological invariants, just like puncturing a three-dimensional balloon results in a two-dimensional surface. Therefore, in computing the Betti numbers we get fluctuations, but as more and more nodes are included in the computation the fluctuations stabilize.

The link with physics and Sorkin's Causal Set theory is obvious, and the same techniques can be applied there. However, Rasetti did not go in this direction; instead he cited an application of the method to biology. In particular, he was able to clearly distinguish whether a patient took a specific drug or a placebo from the analysis of brain MRI images which looked identical to the naked eye.

Recently I saw an article on what Facebook sees in posting patterns when we fall in love.

Now all this looks really scary. Imagine the power of information gathering and topological data mining in the hands of (bad) governments around the world. And not only governments: big companies like Facebook have abused the trust of their users and performed unconscionable sociological tests by manipulating advertising, for example. In the biological arena, human cloning is rejected because the general population understands the risks, but the public understanding of big data and the ability to mine it for correlations and knowledge lags badly behind the current technical ability. More privacy-violation scandals will occur before public opinion puts pressure on abusers of trust to curb their bad behavior.

Saturday, September 27, 2014

History of Electroweak Symmetry Breaking

The first post about DICE2014 is about Tom Kibble's keynote lecture about electroweak theory.

Physics in the 50s had great success with quantum electrodynamics and its perturbative methods, because the coupling constant, 1/137, is smaller than 1. However, for the other interactions perturbation theory was not working due to the interaction strength, and people looked at alternative theories, like the S-matrix and Regge poles, which ultimately led to dead ends in physics.

If you look at the strong interaction, the proton and the neutron are very similar, and people naturally looked at SU(2) symmetry. However, this symmetry is broken by electromagnetism, and people started thinking about how to break symmetries. Also from the strong interaction, the SU(3) symmetry was developed through Gell-Mann's eightfold way, which made a successful prediction of a new particle. Today we know this is an approximate symmetry arising from the up, down, and strange quarks.

In 1954 Yang and Mills published their seminal paper on gauge theory. The same result was obtained by Ronald Shaw, a graduate student of Abdus Salam, but he only wrote it up in his PhD thesis and it was not taken seriously. The problem with Yang-Mills theory is that it predicts a new infinite-range interaction which does not exist in nature. Adding mass to the interaction restricts the range, by the uncertainty principle, but adding a mass term makes the theory non-renormalizable.

Around the same time, the weak interaction was described by the Fermi V-A four-point interaction theory, and in 1957 Schwinger suggested what are now called the W+ and W- weak bosons.

It was known that the weak interaction violates parity and is short range, and the search was on for how to introduce these features into the theory.

In 1961 Glashow proposed a solution to the parity problem by mixing Z0 with W0 and proposing the SU(2)xU(1) symmetry. Salam and Ward independently proposed the same thing in 1964, and the W mass was put in by hand.

For the mass problem, responsible for the short range of the interaction, Nambu proposed spontaneous symmetry breaking in 1960. Condensed matter physicists were very familiar with spontaneous symmetry breaking as the explanation for plasmons in superconductivity.

The basic idea of spontaneous symmetry breaking is that the ground state does not share the symmetry of the system. A typical example is water freezing: during crystallization the rotational symmetry is lost. In quantum field theory there was the Goldstone model with its Mexican hat potential:

The radial motion generates an effective mass term (because locally one approximates the radial potential by a parabola), but the motion along the circle of minima corresponds to a zero-mass particle: the Goldstone boson. Since the Goldstone boson was not observed in nature, this was a major roadblock for adding mass to non-abelian gauge theories.
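The massive/massless split can be made explicit by expanding the Mexican hat potential around a minimum (an illustrative normalization of mine, not from the lecture). Write \( V(\phi) = \lambda {(|\phi|^2 - v^2)}^2 \) and \( \phi = (v + h) e^{i \theta} \), so that \( |\phi|^2 - v^2 = 2 v h + h^2 \) and

\( V = \lambda {(2 v h + h^2)}^2 = 4 \lambda v^2 h^2 + 4 \lambda v h^3 + \lambda h^4 \)

The radial field \( h \) acquires a quadratic (mass) term, while \( \theta \) drops out of \( V \) entirely: it is the massless Goldstone boson.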

In 1964 Gerald Guralnik at Imperial College, collaborating with Walter Gilbert (a student of Abdus Salam) and a US visitor, Richard Hagen, came up with the idea of the Higgs mechanism to combine the massless gauge theory with the massive Goldstone boson. The same mechanism was proposed independently by Peter Higgs, by Guralnik, Hagen, and Kibble, and by Englert and Brout.

The problem was how to avoid the unobserved Goldstone boson. If you impose a continuity equation, you get a charge by integrating the current density. However, you need to consider the surface at infinity, and due to relativity and microcausality, in the Coulomb gauge the charge does not exist as a self-adjoint operator; this evades the presence of the Goldstone boson. The key is the presence or absence of long-range forces, which interfere with the Goldstone theorem.

Then electroweak unification and its successes followed: Weinberg in 1967 and Salam in 1967 and 1968 proposed the electroweak theory, and in 1971 't Hooft proved its renormalizability. In 1973 Z0 neutral currents were observed at CERN, and in 1983 the W and Z bosons were observed at CERN as well.

The 70s and 80s saw the development of quantum chromodynamics based on SU(3), and the Standard Model based on SU(3)xSU(2)xU(1) emerged.

After 1983 the only missing piece of the puzzle was the Higgs boson. Originally the boson itself played a minor role; the big deal was the Higgs mechanism. In 2012 the Higgs boson was confirmed experimentally, and Englert and Higgs were awarded the Nobel Prize.

So what next? Grand unification of the electroweak and strong forces, and supersymmetry (SUSY)? With SUSY the three coupling constants of the electromagnetic, weak, and strong interactions converge exactly, and this is very powerful evidence. Unfortunately there is no current experimental evidence for SUSY.

Then there is a big gap between the Standard Model and M-theory/quantum gravity. To put it in perspective, going from the Standard Model down to strings is like going from our Solar System down to atoms. Or: if an atom were blown up to the size of the observable universe, a string would be the size of a tree on Earth.

Sunday, September 21, 2014

The Sleeping Beauty Problem

I just came back from the DICE2014 conference, and as I recover from the jet lag and prepare posts about the conference I'll present the last topic in the statistics mini-series: the Sleeping Beauty problem. Unlike the Monty Hall problem, there is no consensus on the right solution even among experts, which makes this problem that much more interesting.

So here is the setting: Sleeping Beauty participates in an experiment. Every day the process is explained to her, and she is asked her credence (degree of belief) that a certain fair coin landed heads or tails. So what is the big deal, you may ask? The coin is fair, which means it lands heads 50% of the time and tails 50% of the time. However, there is a clever catch.

Whenever Sleeping Beauty is put to sleep she takes an amnesia drug which erases all her prior memory. If the coin lands tails, she will be woken up on Monday and Tuesday, but if the coin lands heads she will be woken up only on Monday. On Wednesday the experiment ends.

So now for the majority opinion: the thirders:

To make this very clear, let's change the experiment and wake Sleeping Beauty a million times if the coin lands tails, and only once if it lands heads. On any given awakening, the chances are really small that she hit the jackpot and was woken up on the one and only Monday. Being woken up more times when the coin lands tails means that, in the original problem, the credence that the coin landed heads should be one third. If you play this game many times and attach a payout to each correct guess, you maximize the overall payout if your credence is one third.
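The thirder betting argument is easy to simulate (a sketch of mine; it settles the frequencies, not the philosophy): among all awakenings, the coin shows heads about one third of the time.

```python
import random

# Simulate many runs and count, over all awakenings, how often the coin
# shows heads: heads -> 1 awakening (Monday), tails -> 2 (Monday, Tuesday).
random.seed(1)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads
        heads_awakenings += 1
        total_awakenings += 1
    else:                       # tails
        total_awakenings += 2

ratio = heads_awakenings / total_awakenings
print(ratio)   # about 1/3
```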

Now for the opposing minority point of view: the halfers

On Sunday the credence is obviously 50-50 because the coin is fair; even the thirders agree with this. However, from Sunday to Monday no new information is gained, therefore the credence should be unchanged and should remain 50% throughout the experiment. If you adopt the thirder position, you should explain how the credence can change when no new information is injected into the problem.

So which position would you take? There have been all sorts of attempts to convince the other side, but no one has succeeded so far.

Friday, September 12, 2014

The Monty Hall Problem

Continuing the discussion about probabilities and their intuition, here is a classical problem: the Monty Hall problem.

The setting is as follows: you are presented with three doors. Behind each door there is either a goat or a car; there are two goats and only one car. You get to pick a door, and someone who knows where the car is located opens one of the two remaining doors and reveals a goat. Now there are two doors left: the one you picked, and another one. Behind each of those two doors there is either a goat or the car.

Then you are given a choice: switch the door, or stay with the original one. What should you do? 

Now there are two schools of thought: 
  • stay because it makes no difference, your new odds are 50/50.
  • switch because it increases your odds
Before answering the question, to build up the intuition on the correct answer, let's consider a similar problem:
Instead of 3 doors, consider 1 million doors, 999,999 goats, and one car. You pick one door at random, and the chance of getting the car is obviously 1 in a million. Then the host of the game, knowing the car's location, opens 999,998 doors revealing 999,998 goats. Sticking with your original choice, you still have a 1/1,000,000 chance of getting the car; switching increases your chances to 999,999/1,000,000. There is no such thing as 50/50 in this problem (or in the original problem). For the original problem, switching increases your odds from 1/3 to 2/3. Still not convinced? Use 100 billion doors instead. You are more likely to be killed by lightning than to find the car on the first try. Switching doors is then practically a sure way of getting the car.

The incorrect 50/50 solution comes from a naive and faulty application of Bayes' theorem of information update. Granted, the 1/3-2/3 odds are not intuitive, and there are a variety of ways to convince yourself this is the correct answer, including playing the game with a friend many times.

One thing to keep in mind is that the game show host (Monty Hall) is biased: he knows where the car is and always avoids it. If the host were unbiased and by luck did not reveal the car, then the odds would indeed be 50/50 in that case; an unbiased host would sometimes reveal the car accidentally. It is the bias of the game show host which tricks our intuition into believing in a fair 50/50 solution. The answer is not a fair 50/50 because the host's bias spoils the fairness overall.
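Both kinds of host are easy to simulate (a sketch of mine; the function name and trial counts are arbitrary). The knowing host gives switching a 2/3 win rate; the ignorant host, once we condition on his having luckily revealed a goat, gives 50/50:

```python
import random

# Monty Hall with two kinds of host: the knowing host always opens a goat
# door; the ignorant host opens a random other door, and we keep only the
# rounds where he happened to reveal a goat.
def play(ignorant_host, trials=200_000, seed=7):
    rng = random.Random(seed)
    switch_wins = kept = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        others = [d for d in range(3) if d != pick]
        if ignorant_host:
            opened = rng.choice(others)
            if opened == car:
                continue        # host revealed the car: round discarded
        else:
            opened = next(d for d in others if d != car)
        kept += 1
        switched = next(d for d in range(3) if d not in (pick, opened))
        switch_wins += (switched == car)
    return switch_wins / kept

p_knowing = play(ignorant_host=False)
p_ignorant = play(ignorant_host=True)
print(p_knowing, p_ignorant)   # about 0.667 and about 0.5
```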

The amazing thing is that despite all explanations, about half of the population strongly defends one position and half strongly defends the other. If you think the correct answer is 50/50, please argue your point of view here and I'll attempt to convince you otherwise.

Next time we'll continue discussing probabilities with another problem: the sleeping beauty problem. Unlike the Monty Hall problem, the sleeping beauty problem lacks consensus even among experts.