Friday, March 25, 2016

Annealing and the D-Wave Quantum Computer


Today I will stay in the realm of quantum computers and talk about a commercial company that delivers more than 1000 qubits on a quantum chip. This is far more than what the classical embedding scheme from last time can achieve, but on close inspection it is not that impressive. Granted, the D-Wave company can boast a venture capital investment in the 100 to 200 million dollar range, but it is the limitations of what it can do that keep the company from being worth hundreds of billions. 

So what is annealing, and how does D-Wave use it to solve problems? Let me start by explaining what I did for my PhD thesis some time ago. I was working in the area of optical fibers, which are used to send signals (phone calls, internet traffic, TV signals). What you want is to increase the bit rate and take full advantage of all the available bandwidth, but propagation over (large) distances makes the light pulses overlap with each other. Think here of the big undersea fiber optic cables between, say, California and Japan. Key to this is the dispersion property of the optical fiber, and the dispersion at one point of the fiber can be different from the dispersion at another point (due to variations in manufacturing). A method is needed to measure this dispersion, and the method available at the time was to cut the fiber into 1 meter sections and measure the dispersion of each section. How can we do this in a non-destructive way? The theory of light propagation in optical fibers is well known: from Maxwell's equations, in an appropriate approximation, one obtains the so-called nonlinear Schrodinger equation, and numerical simulation methods are available to compute the output given any input and any dispersion function. 

Now the idea is simple: send some optical pulses through the fiber and measure what comes out. Then, starting with a constant dispersion map, simulate the propagation of the input pulses and obtain some output, which will be different from the measured one. Change the dispersion map values and repeat the simulation until the simulated output matches the experimental one.

So what I had was an optimization problem: I had to minimize a "cost" function, where the cost function value is given by some measure of the mismatch between the experimental and the simulated output. The problem is that the cost function is a function of many variables (over 1000). Now how can you find the minimum of an unknown function of a single variable? Start at a point and evaluate the function. Take a step to the left and evaluate the function. Take a step to the right and evaluate the function again. Then you know whether you need to move to the left or to the right; it is as simple as that. To generalize to multiple variables, you repeat this in all directions and determine the direction of steepest descent. In other words, you determine the gradient. But how do you use the gradient? Naively you take a step along the direction of the gradient, but this is not the best strategy because you run into the steep valley problem. I won't go into detail about what that problem is, or the strategies to get around it, because even if you solve it you hit the next problem: the local minima problem.
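
As a toy illustration of this naive strategy (my own minimal sketch, not the actual thesis code; the cost function below is just a stand-in for the mismatch between simulated and measured fiber output), here is a finite-difference gradient descent in Python:

import numpy as np

# Toy stand-in for the real cost: mismatch between simulated and measured output.
# In the thesis problem, x would hold ~1000 dispersion values, one per fiber section.
def cost(x):
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * x))

# Estimate the gradient by probing each direction with a small step,
# exactly the "step left, step right" recipe described above.
def numerical_gradient(f, x, h=1e-6):
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

x = np.zeros(10)                                  # arbitrary starting point
for _ in range(500):
    x = x - 0.05 * numerical_gradient(cost, x)    # naive step along minus the gradient

print(cost(x))    # small, but possibly only a local minimum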

In optimization problems you are given an unknown landscape to explore and your task is to find the absolute minimum of the landscape. Many problems fit into this category, from training neural networks to econometric problems. Annealing is a strategy for dealing with the local minima problem, and the inspiration comes from the technique used to grow crystals. The basic idea is as follows:

Suppose I give you a tray with a two dimensional landscape built on it and I place a small ball at an arbitrary place in this landscape. The ball will naturally get trapped in a local minimum. Then I ask you to shake the tray up and down with a given amplitude so that the ball can jump out of the local minimum and land in another one (the amplitude corresponds to the temperature). I will also tell you to keep shaking, but to slowly reduce the amplitude of the shaking over time. When the process ends, the ball will (with high probability) have found the deepest minimum. Similarly, when you grow a crystal, you need to reduce the temperature slowly, which allows the molecules to settle into their lowest energy positions in the crystal as it grows.  
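
Here is a minimal simulated annealing sketch in Python of the ball-and-tray analogy (my own illustration; the one-dimensional landscape is made up): random jumps are accepted with a temperature-dependent probability and the temperature is slowly lowered.

import numpy as np

# A made-up one-dimensional landscape with many local minima.
def landscape(x):
    return 0.1 * x ** 2 + np.sin(3.0 * x)

rng = np.random.default_rng(0)
x = 8.0          # arbitrary starting point for the "ball"
T = 2.0          # initial shaking amplitude (temperature)

while T > 1e-3:
    x_new = x + rng.normal(scale=1.0)          # random jump
    dE = landscape(x_new) - landscape(x)
    # always accept downhill moves; accept uphill moves with probability exp(-dE/T)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        x = x_new
    T *= 0.999                                 # slowly reduce the shaking

print(x, landscape(x))    # with a slow enough schedule this lands near the deepest minimum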

The D-Wave chip is nothing but an annealing machine. However, it has a quantum twist: in quantum mechanics one encounters tunneling, and because of it the "ball exploring the landscape" can tunnel from one local minimum to the next. This is the basis of its quantum speedup, because the annealing schedule can be run faster. There was much debate about whether D-Wave is quantum and whether it has any speedup over a classical system, and Scott Aaronson was the "skeptic-in-chief". I think it is clear by now that their chip is quantum, but the speedups are not really there, because the kinds of landscapes which are programmable on their computer are too simplistic. Several orders of magnitude more qubits are needed, but the good news is that D-Wave has been doubling its qubit count every year.

So how does it work?



The basic building block is a Josephson junction which hosts a qubit. Because of the manufacturing process, different junctions are not identical and they need to be calibrated with an additional loop and a very precise magnetic flux going through it. The chip is cooled to 15 millikelvin for superconductivity and noise reduction, and it is magnetically shielded down to roughly 50,000 times less than the Earth's magnetic field. The noise is still large, and repeating a computation results in a different answer 5% of the time.

Four superconducting loops are elongated and stacked next to each other horizontally. Then the same thing is done vertically, and the two layers are coupled, resulting in an 8-qubit cell which realizes an Ising Hamiltonian. The cells are connected to further cells, resulting in the final chip. The couplings and the biases in the Hamiltonian are controllable and represent the actual problem being solved. This programming is done at room temperature; then the chip is cooled and the annealing process starts.
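
Schematically (my notation, not D-Wave's official documentation), the Ising Hamiltonian realized by the cells has the standard form

\(H_{\rm Ising} = \sum_i h_i\, \sigma^z_i + \sum_{\langle i,j \rangle} J_{ij}\, \sigma^z_i \sigma^z_j\)

where the biases \(h_i\) and the couplings \(J_{ij}\) are the programmable parameters mentioned above, and the second sum runs only over pairs of qubits that are physically wired together.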

The annealing corresponds to gradually turning off an initial-state Hamiltonian while turning on the Hamiltonian of the problem being solved. If this is done slowly enough, the system remains in the ground state at all times during the process. At the end, the final spin configuration is read out and this provides the answer. The process is repeated many times to either average the answers or keep the best one.
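
In formulas (again schematically), the anneal interpolates between the two Hamiltonians,

\(H(s) = A(s)\, H_{\rm init} + B(s)\, H_{\rm Ising}, \qquad H_{\rm init} = -\sum_i \sigma^x_i,\)

with \(s\) running from 0 to 1 during the anneal, \(A(s)\) decreasing to zero and \(B(s)\) growing from zero; if the interpolation is slow compared to the inverse square of the minimum energy gap, the adiabatic theorem keeps the system in the ground state.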

Now which problems can you solve using a D-Wave computer? In principle, the tasks a traditional quantum computer can do, but most problems (other than optimization problems) are ill suited for the D-Wave chip. For example, on their 1000-qubit chip, D-Wave was able to factor the product of two prime numbers, 149 and 73: 10877 = 149 x 73, but this is a far cry from what is needed to break encryption. The clever way the factorization task is modeled on a D-Wave machine requires many orders of magnitude more qubits than what is currently available. There is a lot of hype around D-Wave, but the commercially viable practical applications are few and far between. I do not think the company covers its costs from sales; it relies on venture capital and large grants from big companies like Google to keep increasing the number of qubits. So far they have delivered on that, but if they hit an insurmountable physical limit the funding will evaporate. The value of the company lies mostly in its patents, and I feel there is a very long road ahead for them to provide a genuine quantum speedup in a viable commercial setting. However, it is a start. 
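
Coming back to the factoring example for a moment: a standard way to phrase factoring as an optimization problem (not necessarily D-Wave's exact embedding) is to minimize a cost function that vanishes only at the true factors,

\(C(p, q) = {(N - p\, q)}^2, \qquad N = 10877,\)

and then expand \(p\) and \(q\) in binary variables and reduce \(C\) to the quadratic Ising form above; it is this reduction that eats up the qubits and explains why the method does not scale to cryptographically relevant numbers.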

Friday, March 18, 2016

Classical emulation of a quantum computer


I am now resuming my scheduled topics and I will start a small series on the classical-quantum debate. I will begin with a very interesting paper by Brian R La Cour and Granville E Ott on the possibility of emulating a quantum computer by purely classical means. I met Brian last year at a conference, but I did not attend his presentation due to a scheduling conflict. However, after that I looked up his research and found it most intriguing. 

The modern trend in quantum mechanics is not to discuss how counterintuitive the theory is, or to revisit the Bohr-Einstein debate, but to use it to create useful applications that are not possible by classical means. One such application is the quantum computer, which holds the promise of a massive speedup compared with classical ones. So the race is on to build such a thing, but to be of any value it needs many quantum bits. However, quantum states are very brittle and decoherence is your enemy. Against this background, here is this proposal by La Cour and Ott, who say that you do not actually need quantum mechanics and that you can emulate qubits using classical resources! Moreover, they built a working prototype. So what is going on here?

First, let me say that this proposal does not contradict quantum mechanics in any way, and the resulting system is purely classical. However, the scheme is quite clever and is able to realize the speedups of quantum algorithms. The idea is that the computer is not digital but analog, using waves, which naturally obey the superposition principle.



Here is the clever encoding scheme for a qubit described by two complex numbers \(\alpha\) and \(\beta\):

\(\psi(t) = \alpha e^{i \omega_0 t} + \beta e^{-i \omega_0 t}\)

To recover \(\alpha\) and \(\beta\) from the signal, one multiplies the signal by \(e^{\mp i \omega_0 t}\) and then applies a low pass filter.
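
A quick numerical check of this encode/decode step (my own sketch in Python, not the authors' hardware or code): multiply by the conjugate carrier and average over one period, which plays the role of the low pass filter.

import numpy as np

omega0 = 2.0 * np.pi                     # carrier frequency (arbitrary units)
alpha, beta = 0.6 + 0.2j, 0.3 - 0.7j     # the two complex amplitudes to encode

# sample one full period of the encoded signal
t = np.linspace(0.0, 2.0 * np.pi / omega0, 10_000, endpoint=False)
psi = alpha * np.exp(1j * omega0 * t) + beta * np.exp(-1j * omega0 * t)

# demodulate and average over the period (a crude low pass filter)
alpha_rec = np.mean(psi * np.exp(-1j * omega0 * t))
beta_rec = np.mean(psi * np.exp(+1j * omega0 * t))

print(alpha_rec, beta_rec)   # recovers (0.6+0.2j) and (0.3-0.7j) up to numerical error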

To add more qubits, they use increasing signal frequencies, \(\omega_i = 2^i \omega_0\), and this ultimately sets the practical limitation of the scheme, due to the finite amount of available bandwidth. The scheme cannot accommodate more than about 40 qubits, but a 10-qubit system already has the same processing power as a 1 GHz standard digital computer. While there is a speedup from using quantum algorithms, there is also a slowdown due to signal extraction: the inner product of two wavefunctions is computed as a time integral of length \(2\pi /\omega_0\), and \(\omega_0\) has to be small to accommodate as many qubits as possible.
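
A rough estimate shows why the bandwidth limit bites so quickly: with \(\omega_i = 2^i \omega_0\) the frequency content grows exponentially with the number of qubits, so

\(n = 40 \;\Rightarrow\; \omega_{n-1} = 2^{39}\,\omega_0 \approx 5\times 10^{11}\,\omega_0,\)

i.e. the spread between the lowest and highest carrier frequencies is already close to twelve orders of magnitude, which no practical analog bandwidth can comfortably accommodate.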

But while there is superposition and a Hilbert space is present in the emulation scheme, there is no randomness and no true quantum behavior, so what can be achieved with this emulation? Surprisingly, a lot, because all quantum gate operations can be implemented.

The inherent determinism of the computation can be "fixed" by adding random noise. But why do that? It makes sense if it helps speed up the extraction of partial information (remember that the inner product requires a lengthy time integration). More interesting is the realization of the teleportation protocol. The key sentence in the paper is this:

"although Alice is in possession of Bobʼs qubit, she does not manipulate it."

For separated systems, you do not actually get correlations above the Bell limit, simply because there are no truly separated systems in this embedding scheme. So in a way it is cheating to call this teleportation; physically, that can be achieved only with quantum mechanics. However, the goal is to obtain a quantum computer able to run quantum algorithms, and the actual algorithm is implementable in this embedding.

Apart from practical applications (which are rather modest due to the practical limit of about 40 qubits), this research is most valuable in opening up new roads in exploring foundations and computability. If a quantum computer is realizable with classical resources, why stop here and not realize a Popescu-Rohrlich box computer? Does anything like a PR box computer and a PR box algorithm exist? Nobody has explored such concepts before because PR boxes do not exist in nature. But now we see that quantum algorithms do not rely on correlations between separated systems. What is the relationship between a PR box and superposition? Does superposition prevent the existence of a PR box? And can we prove the Church-Turing thesis false?

Thursday, March 10, 2016

Why are unitarity violations fatal for quantum mechanics?


The last post attracted a lot of attention and criticism from Lubos Motl, so I will postpone this week's scheduled presentation and instead try to address that criticism head on. When discussing quantum mechanics one can take three points of view: physical, mathematical, and philosophical. In the past I had an argument with Lubos on Boolean vs. quantum logic in quantum mechanics, and the root cause of the disagreement then was the mathematical vs. the physical point of view. This time the root cause of disagreement is the physical vs. the philosophical approach. Lubos sees no value in the quantum foundations community because, in his opinion, the proper interpretation was settled long ago and all quantum foundations practitioners must be crackpots (obviously there is no love lost between the quantum foundations community and Lubos). Today I will not take the philosophical point of view; instead I will attack the argument from the mathematical side and attempt to show that the measurement problem is still an open problem (not only Lubos but also some members of the foundations community disagree with this).

The problem starts from the category theory approach to quantum mechanics reconstruction. The mathematical formalism of quantum mechanics can be derived from two physical principles:
  • invariance of the laws of nature under time evolution
  • invariance of the laws of nature under physical system composition
Those two physical principles are easy to understand: invariance under time evolution means that the laws of nature are the same today as they were yesterday and as they will be tomorrow. Invariance under composition means that the laws of nature do not change with additional degrees of freedom. How can you derive the formalism of quantum mechanics (Hilbert spaces, self-adjoint operators, etc.) from those physical principles? With the help of the algebraic formulation of quantum mechanics. There, a C* algebra can be decomposed into a symmetric product (the Jordan algebra of observables) and a skew-symmetric product (the Lie algebra of generators). Since Hilbert space composition is done using the tensor product, and since there is a universal property of tensor products linking products with tensor products, imposing the physical principle of invariance under composition has deep algebraic consequences. It can be shown that those algebraic consequences completely determine the C* algebra formulation of quantum mechanics, and to recover the full formalism one then applies the GNS construction. But to prove all this you need a starting point, and that is the Leibniz identity, which comes from the invariance of the laws of nature under time evolution. Violate the Leibniz identity and the whole mathematical formalism of quantum mechanics becomes mathematically inconsistent. 
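
For concreteness, if \(\alpha\) denotes the skew-symmetric product generated by time evolution (with generator \(h\)) and \(\circ\) stands for any of the algebraic products, the Leibniz identity reads

\(h\,\alpha\,(A \circ B) = (h\,\alpha\, A)\circ B + A \circ (h\,\alpha\, B),\)

i.e. time evolution acts as a derivation on every product of the algebra.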

Why is this important? Because, as is well known in the mathematics community, the Leibniz identity in the algebraic formulation corresponds to unitarity in the usual formulation. No unitarity during collapse means no Leibniz identity, which means mathematical disaster. This is why I am seeking a unitary solution to the collapse problem. This is not a fool's errand, as category theory, which highlighted the problem in the first place, also points to a unique way to solve it using the Grothendieck group construction. In the process, the quantum mechanics interpretation which emerges is that of Copenhagen. And this follows as a necessary mathematical result, not as a crusade against other interpretations.

Since the deep relationship between the Leibniz identity and unitarity is not well known outside the mathematical physics community, I will present it below: 

Unitarity from Leibniz:

This follows from the categorical approach of quantum mechanics reconstruction. I presented this in a series of posts starting with this one.


Leibniz from unitarity:

To see the inverse implication we need to go in depth into the Jordan-Lie algebraic formulation of quantum mechanics. The algebraic formalism is a unifying framework for both quantum and classical mechanics, and the only difference is that the bipartite product of observables \(\sigma_{12}\) has an additional term in quantum mechanics: \(-\alpha_1 \otimes \alpha_2\). This additional term prevents the Bell locality factorization (the bipartite observables product \(\sigma_{12}\) can no longer be factorized in terms of \(\sigma_1\) and \(\sigma_2\), which in turn prevents the state factorization) and makes possible superposition and continuous transitions between pure states. 
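
Written out (schematically, suppressing normalization factors), the bipartite observables product is

\((A_1 \otimes A_2)\,\sigma_{12}\,(B_1 \otimes B_2) = (A_1 \sigma B_1)\otimes(A_2 \sigma B_2) - (A_1 \alpha B_1)\otimes(A_2 \alpha B_2),\)

and it is the last term, absent in classical mechanics, that blocks the factorization.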

Classical mechanics is described by a Poisson algebra, while quantum mechanics is described by a Jordan-Lie algebra. If we add norm properties to a Jordan-Lie algebra, we get a JLB (Jordan-Lie-Banach) algebra, which is the real part of a C* algebra. The C* algebra is the algebra of bounded operators on some Hilbert space arising out of the GNS construction. The pure state space \(\cal{P}(\mathfrak{A})\) of a C* algebra \(\mathfrak{A}\) is a Poisson space with a transition probability. Unitarity means that the Hamiltonian flow on the states generated by a given observable, \(\psi\mapsto \psi(t)\), preserves the transition probability \(p\):

\(
p(\psi(t), \phi(t)) = p(\psi, \phi)
\)

This definition is more general than the usual definition of unitarity in a Hilbert space, because it works both for the Hilbert space representation and for the state space of the C* algebra. The definition also applies to the classical case. In the classical case there is no superposition and the transition probabilities are trivial: \(p(\psi, \phi) = \delta_{\psi \phi}\). 

To derive the Leibniz identity from unitarity one proceeds in two steps. First, the algebra of observables \(\mathfrak{A}_{\mathbb R}\) is recovered from the pure state space. Then the Hamiltonian flow \(\psi\mapsto \psi(t)\) defines a Jordan homomorphism, and the time derivative of the homomorphism property yields the Leibniz identity.

Given a transition probability space, we can define linear combinations of transition probabilities, and this defines a real vector space \(\mathfrak{A}_{\mathbb R} (\mathcal{P})\). In the quantum mechanics case the elements of this vector space have a spectral decomposition \(A = \sum_j \lambda_j p_{e_j}\), which allows the definition of a squaring map \(A^2 = \sum_j \lambda^2_j p_{e_j}\). This in turn is used to define the Jordan product:

\(
A\sigma B = \frac{1}{4} ({(A+B)}^2 - {(A-B)}^2 )
\)

With the sup norm and the product \(\sigma\), \(\mathfrak{A}_{\mathbb R} (\mathcal{P})\) becomes a JB (Jordan-Banach) algebra, and the first step is complete: starting from the pure state space equipped with a transition probability we have arrived at the Jordan algebra of observables. In the classical mechanics case the Jordan product is simply ordinary function multiplication. 

For the second step, we use the Poisson structure and the Hamiltonian flow: \(\psi\mapsto \psi(t)\). For each element \(A\) of \(\mathfrak{A}_{\mathbb R} (\mathcal{P})\) (corresponding to an operator in a Hilbert space by the GNS construction) we can define a one-parameter map \(\beta_t\) given by \(\beta_t (A): \psi \mapsto A(\psi (t))\). Then we have: \(\beta_t (A \sigma B) = \beta_t(A) \sigma \beta_t(B)\) because \(\beta_t(A^2) = \beta_t(A)^2\). As such \(\beta_t\) is a Jordan homomorphism.
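
Spelled out, using the linearity of \(\beta_t\) and the definition of \(\sigma\) in terms of squares:

\(\beta_t(A \sigma B) = \tfrac{1}{4}\left(\beta_t\big({(A+B)}^2\big) - \beta_t\big({(A-B)}^2\big)\right) = \tfrac{1}{4}\left({\big(\beta_t(A)+\beta_t(B)\big)}^2 - {\big(\beta_t(A)-\beta_t(B)\big)}^2\right) = \beta_t(A)\,\sigma\,\beta_t(B)\)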

From the Hamiltonian flow we have:

\(
\frac{{\rm d} A}{{\rm d} t}(\psi (t)) = \{h, A\}(\psi (t))
\)

We take the time derivative of the homomorphism property \(\beta_t (A \sigma B) = \beta_t(A) \sigma \beta_t(B)\) and we obtain the Leibniz identity. 
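
Explicitly, differentiating both sides at \(t = 0\) and using the Hamiltonian flow equation above gives

\(\{h, A\,\sigma\, B\} = \{h, A\}\,\sigma\, B + A\,\sigma\,\{h, B\},\)

which is precisely the Leibniz identity for the action of the bracket on the Jordan product.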

We need to point out a difference in terminology between what we and the literature call the Leibniz rule. In the mathematical literature the Leibniz identity specifies only how the Poisson bracket \(\alpha\) acts on the observables product \(\sigma\), and the proof of that was outlined above. In my case the Leibniz identity applies to all algebraic products. However, the only other algebraic product is the Poisson bracket itself, and because of its skew-symmetry the Leibniz identity becomes the Jacobi identity. The Jacobi identity is assumed as part of the structure of a Poisson space, so we do not need to derive it from unitarity. Both classical and quantum mechanics have a Poisson space structure.

Now back to the philosophical point of view. The various quantum mechanics interpretations are nothing but distinct frameworks for solving the measurement problem. The basic problem is this: given a unitary explanation of one outcome A, obtained by introducing a particular coupling between the measurement device and the quantum system, and given a unitary explanation of another outcome B, then by superposition you get a superposition of outcomes, and this is nonsense (there are no dead-and-alive cats). Now which framework for solving the measurement problem is the correct one? Because the measurement problem must explain the non-unitary collapse, and since non-unitarity makes the mathematical framework of quantum mechanics inconsistent, the mathematical solution ultimately points out the right interpretation. So far everything in the category theory approach points towards the Copenhagen family of interpretations as the correct explanation. 

If I can offer an analogy, the Grothendieck approach for solving the measurement problem without spoiling unitarity is to quantum mechanics what the Higgs mechanism is to field theory (introducing mass without spoiling gauge invariance). The problem is still open and it is work in progress. The basic idea is that instead of one Hilbert space describing the collapse there are actually many Hilbert spaces linked by an equivalence relationship. The equivalence relationship encodes mathematically the physical principle of outcome randomness in quantum mechanics (which in turn comes from operator non-commutativity). Each Hilbert space corresponds to the GNS representation of a C* algebra state corresponding to a potential outcome. The one and only outcome is realized when the equivalence relationship is spontaneously broken by a purely unitary mechanism. The observer (measurement device) plays an essential role, and the result of a measurement does not exist independent of measurement. However, the consciousness of the observer plays no role whatsoever. There is no need for the measurement device to be described classically. 

Friday, March 4, 2016

Use and abuse of von Neumann measurement of the first kind


The classification of the different quantum mechanics interpretations and solutions to the measurement problem is sometimes based on von Neumann's analysis of the measurement process, in particular on his "measurements of the first kind". I will attempt to show that the mathematical foundation of this classification is incorrect. But what are measurements of the first kind?

Suppose we have a measurement device \(|M\rangle\) with three states:
  • ready for measurement: \(|M_0\rangle\)
  • detecting outcome A: \(|M_A\rangle\)
  • detecting outcome B: \(|M_B\rangle\)
Also suppose we have a quantum system \(|\psi\rangle\) which can collapse onto two outcomes: \(|\psi_A\rangle\) and \(|\psi_B\rangle\). von Neumann demands the following time evolution, which could be obtained by some interaction between the measurement device and the quantum system:

\(|\psi\rangle\otimes |M_0\rangle \rightarrow c_A|\psi_A\rangle \otimes |M_A\rangle + c_B|\psi_B\rangle \otimes |M_B\rangle\)



At first sight, this is perfectly reasonable. For example, in the case of a Stern-Gerlach experiment, the interaction Hamiltonian corresponding to an inhomogeneous magnetic field is:

\(H = -i\sigma_3 \frac{\partial}{\partial z}\)

and plugging this into the Schrodinger equation produces the desired effect (I'll skip the computation details for brevity's sake). So what could be wrong with using measurement-of-the-first-kind arguments in the classification of quantum mechanics interpretations?
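
For the curious reader, here is a one-line sketch of the skipped computation (in units where \(\hbar = 1\), writing \(H = \sigma_3 p_z\) with \(p_z = -i\partial/\partial z\)): the evolution displaces the wave packet \(\varphi(z)\) in opposite directions depending on the spin,

\(e^{-iHt}\,\big(c_{\uparrow}|{\uparrow}\rangle + c_{\downarrow}|{\downarrow}\rangle\big)\otimes\varphi(z) = c_{\uparrow}|{\uparrow}\rangle\otimes\varphi(z - t) + c_{\downarrow}|{\downarrow}\rangle\otimes\varphi(z + t),\)

so the packet position ends up playing the role of the pointer states \(|M_A\rangle\) and \(|M_B\rangle\).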

If we try to use the Schrodinger equation in the analysis of the measurement process, we need to do it in general, not only in particular cases where there is no issue. For example, what should \(|\psi_A\rangle\otimes |M_0\rangle \) evolve into? We know:

\(|\psi_A\rangle \otimes |M_0\rangle \rightarrow |\psi_A\rangle \otimes |M_A\rangle\)
 
because a repeated experiment yields the same outcome. But is this what really happens when we have an interaction Hamiltonian able to generate \(|\psi\rangle\otimes |M_0\rangle \rightarrow c_A|\psi_A\rangle \otimes |M_A\rangle + c_B|\psi_B\rangle \otimes |M_B\rangle\)? I'll show that this is impossible. The quantum axiom of repeated measurements yielding the same outcome is incompatible with von Neumann measurements of the first kind.

Before proving this, let's see why we care. If pure unitary evolution satisfies:

(1)    \(|\psi_A\rangle \otimes |M_0 \rangle \rightarrow |\psi_A \rangle \otimes |M_A \rangle\)
(2)    \(|\psi_B\rangle \otimes |M_0 \rangle \rightarrow |\psi_B \rangle \otimes |M_B \rangle\)
then by superposition we get:
(3)    \((c_A|\psi_A\rangle + c_B|\psi_B\rangle)\otimes |M_0\rangle \rightarrow c_A|\psi_A\rangle \otimes |M_A\rangle + c_B|\psi_B\rangle \otimes |M_B\rangle\)

Now from (1), (2), and (3) the measurement problem can be presented as a "trilemma":
  • the wave-function of a system is complete,
  • the wave-function always evolves in accord with a linear dynamical equation,
  • measurements have determinate outcomes.
where any two items contradict the third one. For example, the Bohmian interpretation violates the first item, spontaneous collapse theories violate the second, and many worlds violates the third. 

Now back to Eqs. 1 and 2: we notice something odd, namely the lack of any backreaction of the interaction Hamiltonian on the quantum system itself. Can this really happen? Here the algebraic formalism comes to the rescue. In this formalism, time evolution is represented by a product \(\alpha\) which, up to various factors of 2, \(\hbar\), and \(i\), is the commutator. We also have a product \(\sigma\), which is the Jordan product of Hermitean operators, and again up to various factors it is the anticommutator. For a bipartite system like the one we consider, "quantum system-measurement device", the product \(\alpha_{12}\) is:

\((A_1 \otimes A_2) \alpha_{12} (B_1 \otimes B_2) = (A_1 \alpha B_1 )\otimes (A_2 \sigma B_2) + (A_1 \sigma B_1 )\otimes (A_2 \alpha B_2)\)

where \(A_1, B_1\) are operators in the Hilbert space of system 1 (the quantum system), and \(A_2, B_2\) are operators in the Hilbert space of system 2 (the measurement device). If the interaction Hamiltonian is \(h_{12}=h_1\otimes h_2\), the time evolution in the Heisenberg picture for the operators in system one is as follows:

\((\dot{A_1} \otimes I) = (h_1\otimes h_2) \alpha_{12} (A_1 \otimes I) = (h_1 \alpha A_1) \otimes (h_2 \sigma I) + (h_1 \sigma A_1) \otimes (h_2 \alpha I)\)

Because the product \(\alpha\) is skew-symmetric we have \((h_2 \alpha I) = 0\), so the second term drops out and we are left with:

\(\dot{A_1} \otimes I = (h_1 \alpha A_1) \otimes (h_2 \sigma I) = (h_1 \alpha A_1) \otimes h_2 \)

Since \((h_1 \alpha A_1 )\) cannot be zero (the quantum system does evolve in time), the only way for \(A_1\) to remain constant is for \(h_2\) to be zero. But then there is no coupling to the measurement device whatsoever.

So we seem to be stuck explaining repeated measurements. Repeated measurements do produce the same outcome (this is an experimental fact), and \(|\psi_A\rangle \otimes |M_0 \rangle \rightarrow |\psi_A \rangle \otimes |M_A \rangle\) does happen. The only way out of this mathematical problem is to realize that the transition represents not a unitary time evolution, but a change of representation. But wait a minute: changes of representation are about Hilbert spaces, and they do not happen inside a Hilbert space! And since they do not happen inside a Hilbert space, there is no superposition, and therefore:

Eq. 1 + Eq. 2 DOES NOT IMPLY Eq. 3

The way the usual story goes in explaining the measurement problem is:

the wave-function of a system is complete
+
the wave-function always evolves in accord with a linear dynamical equation
+
Eq. 1, Eq. 2 (repeated measurement axiom)
+
Eq. 3 (by superposition)

IMPLIES 

measurements DO NOT have determinate outcomes (because of Eq. 3)

Because Eq. 1 + Eq. 2 DOES NOT IMPLY Eq. 3, the "trilemma" is bogus and
  • the wave-function of a system is complete
  • the wave-function always evolves in accord with a linear dynamical equation
  • measurements have determinate outcomes
are all compatible with each other. The trilemma is an artifact of using only one Hilbert space representation. If we insist on a single Hilbert space representation, then Asher Peres (can we call him the forefather of QBism?) was right in demanding that quantum mechanics should not be applied to itself. 

But now we have a solid foundation for Peres' hand-waving: one universal Hilbert space representation (or, in the extreme, a nonsensical wavefunction of the universe) cannot explain everything. Why? Because this is incompatible with identical results for repeated measurements. There are many Hilbert space representations linked by the Grothendieck construction. 

The measurement process is the lifting of the representation degeneracy through equivalence breaking.

I will stop the measurement problem series here, with the main conclusion stated on the line above. There is still one open problem to solve before claiming success: deriving the Born rule in the Grothendieck Cartesian pair approach. This is a big work in progress. 

Next time on this blog I will look into classical emulations of quantum mechanics. Very interesting stuff, which holds some big expectations for realizing a quantum computer without the fear of decoherence. This is all real, not a crackpot's dream of beating Bell's theorem; it has actually been built and is working in the lab. But to what degree can a classical system realize pure quantum effects? Please stay tuned.