tag:blogger.com,1999:blog-38321360178937494972021-06-22T16:33:17.540-04:00Elliptic ComposabilityFlorin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.comBlogger224125tag:blogger.com,1999:blog-3832136017893749497.post-26114905061977088132020-10-04T21:57:00.001-04:002020-10-04T21:57:47.574-04:00DISHONESTY IN ACADEMIA: THE DEAFENING SILENCE OF THE ROYAL SOCIETY OPEN SCIENCE JOURNAL ON AN ACCEPTED PAPER THAT FAILED THE PEER REVIEW PROCESS<div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-KQ7SvLJOUU8/X3p9VtqOx_I/AAAAAAAABnI/1ySlUerswiYEESMAAS3XAvCVEEIPpqCEwCLcBGAsYHQ/s867/csm_FMPicturePotomac_9e79235006.jpeg" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="528" data-original-width="867" src="https://1.bp.blogspot.com/-KQ7SvLJOUU8/X3p9VtqOx_I/AAAAAAAABnI/1ySlUerswiYEESMAAS3XAvCVEEIPpqCEwCLcBGAsYHQ/s320/csm_FMPicturePotomac_9e79235006.jpeg" width="320" /></a></div><br /><div><br /></div><div><br /></div><div> More than two years ago, on February 26th 2018, I was contacted by the <b>Royal Society Open Science Journal</b> to referee a submitted manuscript. Two prior referees had accepted the paper and two had rejected it, and I was the tiebreaker. The manuscript, <i>Quantum Correlations are Weaved by the Spinors of the Euclidean Primitives</i> by Joy Christian, basically claims that Bell’s theorem is incorrect. If true, this would be a game changer in the foundations of quantum mechanics. Bell’s theorem shows that it is impossible to construct a local realistic model of the theory.<br /><br />Bell’s result is an impossibility proof; it attracts the same kind of passion that the impossibility of perpetual motion machines attracted some 100 years ago. 
A manuscript claiming the invention of a working perpetual motion device, proof that Earth is flat (yes, there is such a thing as an annual conference of Flat-Earthers), or that the sun circles Earth would be rejected by any respectable journal right away.<br /><br />So, what if someone managed to “disprove” Bell’s theorem and, better yet, to publish that “discovery”? This would create lots of debate and excitement – certainly notoriety and free publicity for the journal that published the claim. In other words, good business.<br /><br />But who is claiming to have “disproven” Bell’s theorem? Enter Joy Christian, who has been asserting this claim for 13 years. It has been debunked by many scientists and scientific panels over the years, yet Christian is not having any of it. Basically, he claims to have found a method for obtaining the quantum correlation of a Bell pair of particles by using a Bell “loophole”. In the no-man’s-land at the intersection of physics, mathematics, and philosophy, experts in all three fields are scarce. Christian’s ‘method’ is based on a mathematical error, which ultimately amounts to adding apples and oranges, but the error is hard to spot if you are not a genuine expert in geometric algebra. Add to this the language and structure of a well-written physics paper and you might convince an unsuspecting referee to approve your manuscript.<br /><br />I found Christian’s mistake again in the manuscript and I recommended rejecting the paper. Certain that it would never be published, I went about my daily business. Imagine my surprise when I heard Christian had somehow managed to publish his nonsense. I thought this impossible; the vote had been 3 to 2 for rejection. I checked and found that indeed, the paper had been accepted after the author submitted a revision. 
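To make concrete what is at stake in such a claim, here is a small Monte Carlo sketch (my own illustration; the deterministic hidden-variable model and the measurement angles are my choices, unrelated to the paper under discussion). Any local hidden-variable model is bound by the CHSH inequality \(|S| \le 2\), while the quantum singlet correlation \(E(a,b) = -a \cdot b\) reaches \(2\sqrt{2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A local hidden-variable (LHV) model: each pair carries a shared random
# unit vector lam, and each side's outcome sign(a . lam) is fixed in advance.
def lhv_corr(a, b):
    lam = rng.normal(size=(n, 3))
    lam /= np.linalg.norm(lam, axis=1, keepdims=True)
    return np.mean(np.sign(lam @ a) * np.sign(lam @ b))

# Quantum prediction for the singlet state: E(a, b) = -a . b
def qm_corr(a, b):
    return -np.dot(a, b)

def unit(theta):  # measurement direction in a fixed plane
    return np.array([np.cos(theta), np.sin(theta), 0.0])

# Standard CHSH settings that maximize the quantum value
a, a2, b, b2 = unit(0), unit(np.pi / 2), unit(np.pi / 4), unit(3 * np.pi / 4)

def chsh(E):
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

print(chsh(lhv_corr))  # about 2: an LHV model cannot exceed the bound of 2
print(chsh(qm_corr))   # 2*sqrt(2) = 2.828..., the quantum (Tsirelson) value
```

This particular hidden-variable model happens to saturate the classical bound of 2 (up to Monte Carlo noise); no local model of any kind can reach the quantum value of \(2\sqrt{2}\), and that gap is the content of Bell's theorem.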
<b>However, I was not contacted by the journal to review the revision.</b> I started contacting colleagues who had dealt with Joy’s claims before, and together with Philippe Grangier, Richard Gill, Howard Wiseman, Časlav Brukner, Gregor Weihs, and Scott Aaronson, in a letter to the journal on July 28th 2018, we asked that the article be withdrawn:<br /><br /><i>Dear Editor-in-Chief,<br /><br />We are writing to you about the publication of the paper “Quantum Correlations are weaved by the spinors of the Euclidean primitives” by Joy Christian in your journal on May 30 2018 <a href="http://rsos.royalsocietypublishing.org/content/5/5/180526">http://rsos.royalsocietypublishing.org/content/5/5/180526</a><br /><br />The result of this paper conflicts with an established scientific fact (Bell’s theorem) well known in the foundations of quantum physics and a basis of modern quantum information science; moreover, the subject of recent high-profile experiments (“loophole free tests of Bell’s theorem”). The paper contains numerous errors in elementary algebra, calculus, and logic. The manuscript was rejected by three of the five reviewers, but the editorial process as stated to the reviewers by your journal was not followed: the manuscript was accepted without informing the reviewers and giving them a chance to rebut the misleading statements made by the author (see review history on the link above).<br /><br />The claims made by the author are well known from 2007 and they were disproven in the past (<a href="https://fqxi.org/community/forum/topic/1577">https://fqxi.org/community/forum/topic/1577</a> ). 
From time to time Joy Christian attempts to publish his faulty claims and recently a similar paper was withdrawn by Annals of Physics <a href="https://www.sciencedirect.com/science/article/pii/S0003491616300975">https://www.sciencedirect.com/science/article/pii/S0003491616300975</a><br /><br />The journal did extend an invitation to write a rebuttal paper but stated that Joy Christian would be a reviewer to the rebuttal. This is not an acceptable course of action from an ethical point of view because it legitimizes scientific dishonesty on behalf of Joy Christian who is well aware of the issues with his arguments for more than 10 years and yet continues to obfuscate the truth.<br /><br />Considering this, we are respectfully asking your journal to withdraw the paper.<br /><br /> <br /><br />Sincerely,<br /><br />Florin Moldoveanu - George Mason University (reviewer 5)<br /><br />Richard Gill – Leiden University<br /><br />Howard Wiseman – Griffith University (reviewer 3)<br /><br />Scott Aaronson - University of Texas<br /><br />Philippe Grangier - Institute of Optics, Charles Fabry Laboratory<br /><br />Brukner Caslav - IQOQI - Institute for Quantum Optics and Quantum Information Vienna<br /><br />Gregor Weihs – Innsbruck University</i><br /><br /> <br /><br />This was about two years ago. We kept asking for updates, and when not stonewalling us, the journal kept pushing one roadblock after another.<br /><br /><b>The Royal Society Open Science Journal had more than two years to get their act together. By now, their silence speaks louder than words.</b><br /><br />It is unconscionable that instead of putting extra checks in place for authors with a history of inaccurate publications, the journal violated their own peer review policy and chose to maintain a faulty paper instead of withdrawing it.<br /><br />We gave the journal the benefit of the doubt for two years. 
The passing of time made it clear that the decision to maintain the faulty paper was neither an accident nor a mistake.<br /><br />Perhaps this is a symptom of a larger systemic problem with open-access journals, which are paid by the authors to get their papers (usually rejected elsewhere) published. When your salary and livelihood depend on the people you are supposed to enforce rules upon, the temptation to bend those rules is high.<br /><br />I grew up in a former communist country of the eastern bloc. Under communism, one rule the traffic police were supposed to enforce was that if you paid a traffic fine on the spot, you were charged only half the amount. You might guess that most officers pocketed that money. The rule only solidified endemic corruption.<br /><br />In our case, the author is well known for making the same incorrect argument over and over again. However,<b> the root of the problem seems to be with the journal</b>. After all, had they followed their own policy, the problem would not have arisen in the first place. And in case the mistake was genuine (as sometimes mistakes do happen), was two years not enough time to set the record straight? It makes me wonder: just how often did the editors turn a blind eye to publication issues to secure revenue? <b>Is it really a good idea that those in charge of rule enforcement are financially dependent on the rule violators?</b></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com7tag:blogger.com,1999:blog-3832136017893749497.post-25201300143257338342017-12-30T12:42:00.000-05:002017-12-30T15:45:14.558-05:00<h2 style="text-align: center;">Is Walter Lewin wrong about Kirchhoff's law? 
</h2><div><br /></div><div>Thinking about how to organize the upcoming material on geometry and physics, I came across a recent controversy about an electromagnetism lecture by <a href="https://en.wikipedia.org/wiki/Walter_Lewin" target="_blank">Walter Lewin</a>, and I want to talk about it in this post. </div><div><br /></div><div>First, a little background. When I started teaching undergraduate physics I needed help on how to structure the lectures. Personally I wanted to emulate the Feynman lectures, but physics departments impose on you either Giancoli or Young and Freedman. I turned to the internet for help and found the outstanding undergraduate lectures by Walter Lewin, which I adapted to my needs - thank you, Professor Lewin. In lesson 16 on electromagnetic induction, which you can watch below,</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/FUUMCT7FjaI/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/FUUMCT7FjaI?feature=player_embedded" width="320"></iframe></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">at minute 34:54 the fireworks begin. The setup starts unassumingly: a trivial circuit with a 1 V battery and two resistors in series. The current is computed, as well as the voltage drop across the larger resistor. Then the battery is removed and replaced by a solenoid in the middle of the circuit, which generates an increasing magnetic field such that the induced voltage is the same 1 V as the battery. Now the question becomes: what is the voltage drop across each resistor? The answer is that <b>two voltmeters <u>connected at the same points</u>, one on one side of the loop and one on the other, will record different voltages of opposite polarities! </b>In this case one will record +0.9 V and the other -0.1 V. 
And the experiment confirms this!!!</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">For a complete step-by-step derivation see: <a href="http://web.mit.edu/8.02/www/Spring02/lectures/lecsup3-15.pdf" target="_blank">web.mit.edu/8.02/www/Spring02/lectures/lecsup3-15.pdf</a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This is deeply at odds with our intuition, and there are a lot of "proofs" of why Lewin is wrong. Here is one example, which has the advantage over the others of being in English.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div style="text-align: center;"><iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/-AjdUuq8JNY/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/-AjdUuq8JNY?feature=player_embedded" width="320"></iframe></div><div style="text-align: center;"><br /></div><div style="text-align: left;">Lewin responded to those challenges in two distinct posts, in my opinion poorly: he simply repeated his explanation slowly and loudly and did not address the root cause of the discomfort people experience when presented with this counter-intuitive phenomenon. So I will attempt to add my own pedagogical explanation of how to understand it.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">First, are the two voltmeters connected at the same points? It is not clearly visible in the video how the two voltmeters are connected, but if you have access to an electronics lab you can connect the two voltmeters at the very same points and repeat the experiment. 
So <b>how is it possible that two identical voltmeters connected to the same points read differently? <u>It is because different currents flow through the two voltmeters.</u></b></div><div style="text-align: left;"><b><u><br /></u></b></div><div style="text-align: left;">Now you may say: this is absurd; why would different currents flow through the two voltmeters? Can I not simply use a single voltmeter, place it on the right-hand side of the circuit, and then flip it to the left-hand side? <b>Why would the reading change?</b></div><div style="text-align: left;"><b><br /></b></div><div style="text-align: left;"><b>The reading does not change as long as you do not cross the changing magnetic field zone. However, as you cross the changing magnetic field, the voltmeter reading changes gradually from the right-hand side value of +0.9 V to the left-hand side value of -0.1 V. </b>We can actually compute how this happens and I will do it below. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">But before doing that, I want to point out what is wrong in the "disproof" video above. First, the solenoid is the same size as the circuit and is placed under it. This allows the gradual voltage change due to crossing the changing magnetic field to be misinterpreted as a voltage drop across the copper wiring. Second, the explanation is inconsistent with regard to the current intensities. If you compute the current in the copper wire due to a 1 V difference you do not get 1 milliampere but a huge current, because copper wire has almost zero resistance. Also, there is a big misunderstanding about how to compute the flux: the video states that a vertical plane has no flux going through it. 
If the loop were completely vertical this would be correct, but the voltmeter loop also contains a horizontal path closing the circuit, and this has non-zero magnetic flux crossing through it.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">So now onto the computation. I will use only one voltmeter, placed on the right as in the first picture below:</div><div style="text-align: left;"><br /></div><div style="text-align: left;"><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-NxlLGUS735c/Wke_Zn4hjNI/AAAAAAAABT8/7J7lBcTbZI8_ieJUdU5OhBKrAfWCDo2_ACLcBGAs/s1600/circ.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="528" data-original-width="450" height="320" src="https://1.bp.blogspot.com/-NxlLGUS735c/Wke_Zn4hjNI/AAAAAAAABT8/7J7lBcTbZI8_ieJUdU5OhBKrAfWCDo2_ACLcBGAs/s320/circ.png" width="272" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Then I move the connection points A and B and flip the voltmeter wire to cross the magnetic region (see the second picture, and notice that the direction of i stays the same: from the voltmeter to r2).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Let's work out the math in the first case. 
There are two loops with currents I and i, and the resistance of the voltmeter is R >> r1, r2:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I(r1+r2) - i r2 = E (induction law)</div><div class="separator" style="clear: both; text-align: left;">i r2 - I r2 + i R = 0 (Kirchhoff)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In the second equation r2 << R, so we can ignore the first term: -I r2 + i R = 0, which means i << I, and in the first equation we can ignore the negative term, resulting in:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I (r1 + r2) = E, from which we extract the current in the main loop.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Then the voltage read by the voltmeter is V = i R = I r2 = E r2 /(r1+r2) = 0.9 V, using the resistor and E values from the lecture.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Now we proceed to flip the voltmeter from right to left. Suppose that during the flip the voltmeter loop sweeps across a fraction s of the changing magnetic flux area (the yellow area). 
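Before writing out the modified equations, note that the loop system can also be solved exactly, with no R >> r1, r2 approximation. A minimal numerical sketch (my own check; the resistor values are assumptions chosen to reproduce the 0.9 V reading, and the 10 MΩ voltmeter resistance is likewise illustrative):

```python
import numpy as np

# Exact solution of the two loop equations, with no R >> r1, r2 approximation.
# Assumed illustrative values (any pair with r2 = 9*r1 reproduces the
# lecture's readings): E = 1 V, r1 = 100 ohm, r2 = 900 ohm, R = 10 Mohm.
E, r1, r2, R = 1.0, 100.0, 900.0, 1e7

def voltmeter_reading(s):
    """Voltmeter reading V = i*R when its loop has crossed a fraction s of
    the changing-flux area (s = 0: right-hand side, s = 1: left-hand side)."""
    #  I*(r1 + r2) - i*r2      = E      (induction law, main loop)
    # -I*r2       + i*(r2 + R) = -s*E   (voltmeter loop)
    M = np.array([[r1 + r2, -r2],
                  [-r2, r2 + R]])
    rhs = np.array([E, -s * E])
    I, i = np.linalg.solve(M, rhs)
    return i * R

print(round(voltmeter_reading(0.0), 3))  # 0.9   (right-hand side)
print(round(voltmeter_reading(1.0), 3))  # -0.1  (left-hand side)
print(round(voltmeter_reading(0.5), 3))  # 0.4   (halfway through the flip)
```

The reading interpolates linearly between +0.9 V and -0.1 V as s goes from 0 to 1, in agreement with the V = E r2/(r1+r2) - sE formula.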
The first equation reads as before:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I(r1+r2) - i r2 = E (induction law)</div><div class="separator" style="clear: both; text-align: left;">but the second one changes to:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">i r2 - I r2 + i R = -sE (induction law)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The same order-of-magnitude tricks apply and we get:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">I (r1 + r2) = E</div><div class="separator" style="clear: both; text-align: left;">-I r2 + i R = - sE</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">and the voltage recorded by the voltmeter is V = i R = I r2 - sE</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">V = E r2 /(r1+r2) - sE with \(s \in (0,1)\)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">So on the right side the voltmeter reads E r2 /(r1+r2) = 0.9 V, and on the left side the voltmeter reads </div><div class="separator" style="clear: both; text-align: left;">E r2 /(r1+r2) - E = -E r1/(r1 + r2) = -0.1 V, with all the intermediate values in between as we flip the device.</div><br />This effect is shown in Mabilde's video from 17:40 to 18:00, but his explanation is wrong.<br /><br />Alternatively, when the voltmeter is flipped all the way to the left we can consider a loop not enclosing the area of changing magnetic field 
and arrive at the same -0.1 V by using Kirchhoff's law, just as we did on the right-hand side.<br /><br />If we consider the left and right voltmeters together, we can also understand why they record different values. We can consider the outer loop, apply the induction law to it, and see that a current "i" flows from one voltmeter through the other due to the enclosed area of changing magnetic field. Since the voltmeters are polarity sensitive, one records a positive voltage and the other a negative voltage. In fact we can simplify the setting further by removing the two resistors completely; in this case one voltmeter would record +0.5 V and the other -0.5 V:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-LqAxj-5DNVc/WkfOSgy3COI/AAAAAAAABUQ/akyRMa5OYY8sib5qH47tkhc6Vj45i2kJACLcBGAs/s1600/circ%2B2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="246" data-original-width="580" height="135" src="https://3.bp.blogspot.com/-LqAxj-5DNVc/WkfOSgy3COI/AAAAAAAABUQ/akyRMa5OYY8sib5qH47tkhc6Vj45i2kJACLcBGAs/s320/circ%2B2.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br /><br />In conclusion, Walter Lewin is completely correct, and the controversy stems from blind trust in Kirchhoff's circuit laws, which are valid only when there is no changing magnetic flux through the circuit, and from a misunderstanding of Maxwell's equations.</div><div style="text-align: left;"><br /></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com7tag:blogger.com,1999:blog-3832136017893749497.post-22464740224329686712017-12-17T19:09:00.000-05:002017-12-17T19:09:05.742-05:00<h2 style="text-align: center;">Physics and Geometry</h2><div><br /></div><div>Returning to gauge theory, physics is best understood in terms of geometry. 
The area is vast and I am considering how best to explain the key concepts in the most intuitive way. Today I want to start with the broad picture and present the relevant mathematical landscape. We need to consider two kinds of transformations:</div><div><ul><li>transformations in space-time</li><li>gauge transformations of physical fields.</li></ul><div>The mathematical machinery involved uses fiber bundles and Cartan's language of differential forms. There are two key differential forms: the connection 1-form and the curvature 2-form. Another essential ingredient is parallel transport. <b>In terms of physics, parallel transport corresponds to the transport of physical information. </b>When we parallel transport around a closed loop, the final state is in general different from the initial state.<br /><br />Here is a simple example from ordinary high-school geometry. You are walking on Earth along the equator and you carry with you an arrow which points north. You walk 1/4 of the circumference of the Earth, and then you decide to walk all the way to the North Pole. At every point of your journey you keep the orientation of your arrow at time \(t\) parallel to the orientation of your arrow at time \(t+\Delta t\). Initially the arrow was perpendicular to the direction of travel, and when you start going north the arrow is parallel to the direction of travel. Once at the North Pole you decide to take the shortest path back to your starting point. What is the orientation of your arrow when you get back? Try this with a pencil and a ball.<br /><br />Parallel transport around a closed loop can be used to define curvature. In terms of physics, gauge theory teaches us that:<br /><br /><b>force=curvature</b><br /><br />When discussing gauge theory, the natural language is that of bundles. One starts with product bundles, but then one proceeds to general bundles obtained by gluing together product bundles. 
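The pencil-and-ball experiment above can also be checked numerically. A minimal sketch (my own illustration, not from any textbook treatment; it approximates parallel transport on the unit sphere by repeatedly projecting the vector onto the local tangent plane along a densely sampled path):

```python
import numpy as np

def arc(p, q, n=2000):
    """Densely sampled great-circle arc between unit vectors p and q."""
    angle = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    axis = np.cross(p, q)
    axis /= np.linalg.norm(axis)
    return [p * np.cos(t) + np.cross(axis, p) * np.sin(t)
            for t in np.linspace(0.0, angle, n)]

def transport(path, v):
    """Parallel-transport tangent vector v along the sampled path by
    projecting onto the tangent plane at each step (exact as steps -> 0)."""
    for p in path[1:]:
        v = v - np.dot(v, p) * p
        v = v / np.linalg.norm(v)
    return v

start = np.array([1.0, 0.0, 0.0])    # point on the equator
quarter = np.array([0.0, 1.0, 0.0])  # 1/4 of the way around the equator
pole = np.array([0.0, 0.0, 1.0])     # North Pole

loop = arc(start, quarter) + arc(quarter, pole) + arc(pole, start)
v0 = np.array([0.0, 0.0, 1.0])       # the arrow, initially pointing north
v1 = transport(loop, v0)

# The arrow returns rotated by the enclosed solid angle (one octant = pi/2)
angle = np.degrees(np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0)))
print(round(angle, 1))  # 90.0
```

The 90-degree rotation the arrow picks up around this closed loop is precisely the holonomy that measures the sphere's curvature.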
For the Standard Model product bundles are enough, but for gauge theory on curved space-time (string theory, quantum gravity) you need the most general bundles.<br /><br />One problem then arises: how do we relate what different observers see?<br /><br /><b>Changes in observers are described by cocycles.</b> Cocycles depend on both the topology of the space-time manifold and the structure of the gauge group. The deviation of vector bundles from product bundles is measured by the so-called characteristic classes (Chern, Euler, Pontryagin, Stiefel-Whitney, Thom classes).<br /><br />Explaining the mathematical machinery of all this is a very ambitious project and I am not sure how far I can carry the series, but I will try. The geometry involved is very beautiful (at least to me). Please stay tuned.<br /><br /></div></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-73270237386858234412017-12-03T22:51:00.000-05:002017-12-03T22:55:05.885-05:00<h2 style="text-align: center;">Industry vs. Academia</h2><div><br /></div><div>Before returning to gauge theory I want to discuss a topic from my personal experience. I started my career in academia, switched to industry where I was very successful, and managed to come back to academia. As such I can offer a good perspective on both and hopefully clarify some misconceptions. </div><div><br /></div><div>For a little history, I published my first paper in my second year of college, and after graduation I joined the most productive theoretical research group in Romania, which was publishing about a paper a month. The sky was the limit and I chose to get my PhD in the US. I went to UMCP, and after graduation I joined my adviser's recently created company, switching to industry, where I rapidly climbed the corporate ladder. 
Now I am back in academia (with a foot still in industry).</div><div><br /></div><div>In academia there is a perception (and arrogance) that the outside world is not smart enough. This is far from the truth. The smartest person I ever met was Alain Connes (also the most arrogant person I ever met), but beside this outlier, I can state with confidence that industry has, on average, the smarter people. In general their abilities correlate strongly with the amount of money or power they amass. Another misconception is that in industry you do not work on interesting or hard problems. The opposite is true, and I saw time and time again how the state-of-the-art knowledge in industry is 10 to 20 years ahead of the state-of-the-art knowledge in academia. The best industry knowledge is kept under wraps on a need-to-know basis as trade secrets, and you need to be high enough on the food chain to get to know it.</div><div><br /></div><div>For comparable skill sets, in industry you earn 2 to 3 times what you would make in academia. When I was choosing my PhD adviser, I considered a thesis in particle phenomenology with Professor <a href="https://en.wikipedia.org/wiki/Rabindra_Mohapatra" target="_blank">Rabindra Mohapatra</a>. I had to qualify to be his grad student, and after I did that I got to see the fine print of the deal. The entire particle physics group at UMCP (10 professors) had to pool their resources to fund a single research assistant (RA). In comparison, the professors in the EE department each had between 10 and 20 RAs thanks to industry contracts. This turned me off and I picked a better-funded adviser. A few years after graduation I was working in industry and was in the market for a house. It just so happened that Professor Mohapatra was selling his house and I got to visit it as a buyer. 
I did not buy it in the end, but already, within a few years in industry, I had the same buying power as a distinguished full professor in academia.</div><div><br /></div><div>When you climb the corporate ladder in industry there are standard steps. You start as an intern, where the requirements are to have potential to grow and to be liked by the group. Then you are a junior engineer, working under close supervision. When you can work independently in one area you become a senior engineer; a typical salary is around $90K/year. If you can work across the board in any area, you are a principal engineer. From principal, the next step is supervisor, where you are responsible for the results of an entire team. The next step is manager: a supervisor with the power to do performance evaluations and make salary decisions. Up to this point the focus is on the work, and as a manager you get to know the dirty secrets, but you still do not have a seat at the big boys' table. Next is director, who is an execution machine. The most important skill of a director is to defend his back, as peers and the people above are out to get him. A director with industry knowledge is a vice president. A VP can run the company, but he is not yet vetted on the social side: he is not a member of the C-level club (CEO, COO, CFO). At C-level you are a god for the company and your statements carry legal responsibility. For people entering industry, one general word of advice is that the human resources department is never your friend - don't be fooled by their friendliness or the lunch events they organize. They are there to prevent the company from being sued, and they do the dirty work of firing people - which no manager enjoys. </div><div><br /></div><div>I managed to climb all the way to the director and VP levels, and at every large company I worked for, the C-level were all crooks - no exceptions. 
Very smart, very competent, very polished, experts in the art of dissimulation and manipulation, and rotten to the core. Power does corrupt. I came to know countless unbelievable horror stories. </div><div><br /></div><div>Now back to physics and academia. One thing I hated was the publish-or-perish state of affairs. Not every day brings a big breakthrough, and 99% of published papers are utterly useless. People organize into groups citing each other's meaningless results, and if your paper is correct most of the referee's comments are about not citing something. When I exited academia I made the decision to come back when I had something important to say. Coming back turned out to be much, much harder than I thought. </div><div><br /></div><div>Everything was an uphill battle: first results, gaining arXiv endorsement, first paper, first conference, first teaching assignments. When you are in the system you cannot see how much your adviser is supporting you in your academic career. If you worked in industry and want to come back to academia <i>only </i>because you like physics, you will not be successful. The first thing you need is a research domain of your own, along with meaningful results. What matters most is what problems you are working on, and for this you need good guidance. Good guidance is hard to come by even when you are in academia. I was lucky to have met a remarkable person: Emile Grgin, who worked on a research topic that originated with Bohr. Bohr passed this topic to his personal assistant Aage Petersen, who worked with Grgin when they were postdocs at Yeshiva University in the 70s. Grgin switched to industry, and I met him as he was retiring, interested in making more progress on this topic. I learned the topic and carried the torch. Luckily for me, to no merit of mine, the area turned out to be a gold mine. So now I had something meaningful to work on, something meaningful to say, and it was time to start the transition back to academia. 
Also there was no competition in my research area - a wonderful thing.</div><div><br /></div><div>I might have been a young tiger in academia before, but now I was a big nobody as far as academia was concerned. The first hurdle was rising above the large background noise of crackpots. I identified the FQXi organization and won a third prize in one of their essay contests. This opened up connections for me, and people would no longer ignore my emails or give me the cold shoulder. I started blogging for FQXi, and this is when I hit the second roadblock. After crackpots there is a layer of charlatans. I ran into the biggest of them all, Joy Christian. The problem was that he had powerful friends with a lot of clout, and fighting Joy got me blacklisted, delaying my academia comeback plan by 2-3 years until Joy's credibility was all gone. Finally, after wasting away in the desert of blacklisting, I gained arXiv endorsement, started to get invited to conferences, and started publishing again. I also became a referee for various journals. It was time to start a blog (this one). The blog provided a much-needed discipline to work a given set of hours a day on physics. In industry you live under the constant shadow of a deadline and it is very hard to set hours aside in the day to do anything else productive. Now it was time to complete the switch and make money again from a physics job. Here I ran into another large problem. Physics pays peanuts, and I cannot go back and live like a poor grad student anymore: I have a large mortgage to pay, as well as college expenses for my kids to the tune of hundreds of thousands of dollars. I needed a side job in industry to make up the difference, but no job provides that flexibility. The answer was to start my own consultancy company. 
This is no easy task, but I did have all the knowledge I needed.</div><div><br /></div><div>So I finally had my first paying job at a university, jumping between my company work and my physics duties. Physics was no longer a hobby. Today I am very busy, working over 80 hours a week, and I am climbing the academic ladder very rapidly. Coming back to academia is a game of building your credibility. A funny observation is that at every level, no matter how low, a lot of politics is being played. To me this is very amusing. I know how to read people and give them what they expect from me, when they expect it. </div><div><br /></div><div>One more observation. Industry is capitalism, with the good and the bad. Academia is still run by the rules of a feudal system. There are big established hierarchies which are defended relentlessly, without regard for truth. Sadly, most people in academia seek not truth but power. You can see the hierarchy at conferences in the order of the speakers and the length of time given to them. I find this obscene. In a small world where everybody knows everybody, the arrogance of most of the hot shots goes through the roof. There are also scientific crooks who abuse their power for personal gain. There are notable exceptions, however. One of the most down-to-earth, nice persons, who listens to what the lowliest grad student has to say, is <a href="https://en.wikipedia.org/wiki/Gerard_%27t_Hooft" target="_blank">'t Hooft</a>. I don't agree with his approach to quantum mechanics, but I value him as a decent person. Another decent person I admire is <a href="https://en.wikipedia.org/wiki/Avshalom_Elitzur" target="_blank">Avshalom Elitzur</a>. For the record, I am not associated with either of them or their groups, and I am not praising them for any future gain. 
I am genuinely impressed by the way they conduct themselves in a sea of feudal system arrogance.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com2tag:blogger.com,1999:blog-3832136017893749497.post-8095636393652045522017-10-29T23:33:00.000-04:002017-10-29T23:33:57.244-04:00<h2 style="text-align: center;">The electromagnetic field</h2><div style="text-align: center;"><br /></div>Continuing from last time, today I will talk about the electromagnetic field as a gauge theory.<br /><br /><h3>1. The gauge group</h3><div>In this case the gauge group is \(U(1)\) - the phase rotations. This group is commutative. The group can be determined if we start from Dirac's equation and demand that it leaves the Dirac probability current density:<br /><br />\(j^\mu = \Psi^\dagger \gamma^0 \gamma^\mu \Psi\)</div><div><br /></div><div>invariant.</div><h3>2. The covariant derivative giving rise to the gauge field</h3><div>Here the covariant derivative takes the form:<br /><br />\(D_\mu = \partial_\mu - i A_\mu\)<br /><br />To determine the <b>gauge connection</b> \(A_\mu\) we can substitute this expression in Dirac's equation:<br /><br />\(i\gamma^\mu D_\mu \Psi = m\Psi\)<br /><br />and require the equation to be invariant under a gauge transformation:<br /><br />\(\Psi^{'} = e^{i \chi}\Psi\)<br /><br />which yields:<br /><br />\(A^{'}_{\mu} = A_\mu + \partial_\mu \chi\)<br /><br />This shows that:<br />- the general gauge field for Dirac's equation is an arbitrary vector field \(A_\mu (x)\)<br />- the part of the gauge field which compensates for an arbitrary gauge transformation of the Dirac field \(\Psi (x)\) is the gradient of an arbitrary scalar field.</div><h3><br />3. The integrability condition</h3><div>Here we want to extract a physically observable object out of a given vector field \(A_\mu (x)\). 
From above it follows that there is no external potential if \(A_\mu = \partial_\mu \chi\), and this is (locally) the case if and only if:<br /><br />\(\partial_\mu A_\nu - \partial_\nu A_\mu = 0\)</div><div><br /></div><h3>4. The curvature</h3><div>The curvature measures the amount of failure of the integrability condition and by definition is the left-hand side of the equation above:<br /><br />\(F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu\)<br /><br />and this is the electromagnetic field tensor.</div><div><br /></div><h3>5. The algebraic identities</h3><div>There is only one algebraic identity in this case, stemming from the antisymmetry of the curvature tensor:<br /><br />\(F_{\mu\nu} +F_{\nu\mu} = 0\)</div><div><br /></div><h3>6. The homogeneous differential equations</h3><div>If we take the derivative of \(F_{\mu\nu}\) and we do a cyclic sum we obtain:<br /><br />\(F_{\mu\nu , \lambda} + F_{\lambda\mu , \nu} + F_{\nu\lambda , \mu} = 0\)<br /><br />which is analogous to the Bianchi identity in general relativity.<br /><br />This identity can be expressed using the Hodge dual as follows:<br /><br />\(\partial_\rho {* F}^{\rho\mu} = 0\)<br /><br />and this is nothing but two of Maxwell's equations:<br /><br />\(\nabla \cdot \overrightarrow{B} = 0\)<br />\(\nabla \times \overrightarrow{E} + \frac{\partial}{\partial t} \overrightarrow{B} = 0\)</div><div><h3>7. The inhomogeneous differential equations</h3></div><div>If we contract two derivatives we get \(\partial_\alpha \partial_\beta F^{\alpha\beta} = 0\), because \(F\) is antisymmetric and \(\partial_\alpha \partial_\beta = \partial_\beta \partial_\alpha\), and so the vector \(\partial_\beta F^{\alpha\beta}\) is divergenceless. 
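Both properties used here - the antisymmetry of \(F_{\mu\nu}\) and its invariance under \(A_\mu \rightarrow A_\mu + \partial_\mu \chi\) - are easy to sanity-check numerically. Below is a minimal pure-Python sketch (my own illustration, not from the post; the sample potential \(A_\mu\) and gauge function \(\chi\) are arbitrary smooth functions chosen just for the check), using central finite differences:

```python
import math

h = 1e-4  # finite-difference step

# Arbitrary smooth sample potential A_mu(p) and gauge function chi(p),
# with p = (t, x, y, z); illustrative choices only.
def A(mu, p):
    t, x, y, z = p
    return (math.sin(x) * y, t * z, x * x + y, math.cos(t) * x)[mu]

def chi(p):
    t, x, y, z = p
    return t * x + math.sin(y * z)

def partial(f, mu, p):
    # central-difference approximation of df/dx^mu at p
    q = list(p); q[mu] += h; fp = f(tuple(q))
    q[mu] -= 2 * h; fm = f(tuple(q))
    return (fp - fm) / (2 * h)

def F(pot, mu, nu, p):
    # field tensor F_{mu nu} = d_mu A_nu - d_nu A_mu
    return (partial(lambda q: pot(nu, q), mu, p)
            - partial(lambda q: pot(mu, q), nu, p))

def A_gauged(mu, p):
    # gauge-transformed potential A_mu + d_mu chi
    return A(mu, p) + partial(chi, mu, p)

p0 = (0.3, 0.7, -0.2, 1.1)
for mu in range(4):
    for nu in range(4):
        # antisymmetry: F_{mu nu} = -F_{nu mu}
        assert abs(F(A, mu, nu, p0) + F(A, nu, mu, p0)) < 1e-6
        # gauge invariance: F is unchanged by A -> A + d(chi)
        assert abs(F(A, mu, nu, p0) - F(A_gauged, mu, nu, p0)) < 1e-6
```

The gauge term drops out because the nested central differences of \(\chi\) are symmetric in \(\mu\) and \(\nu\), mirroring \(\partial_\mu\partial_\nu\chi = \partial_\nu\partial_\mu\chi\).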
We interpret this as the current of a conserved quantity: the source for the electromagnetic field, and we write:</div><div><br /></div><div>\(\partial_\rho F^{\mu\rho} = 4\pi J^\mu\)</div><div><br /></div><div>where the constant of proportionality comes from recovering Maxwell's theory (recall that last time \(8\pi G\) came from similar arguments).</div><div><br /></div><div>From this we now get the other two Maxwell's equations:</div><div><br /></div><div>\(\nabla \cdot \overrightarrow{E} = 4\pi \rho\)<br />\(\nabla \times \overrightarrow{B} - \frac{\partial}{\partial t} \overrightarrow{E} = 4\pi \overrightarrow{j}\)<br /><br />Now we can compare general relativity with electromagnetism:<br /><br />Coordinate transformation - Gauge transformation<br />Affine connection \(\Gamma^{\alpha}_{\rho\sigma}\) - Gauge connection \(iA_\mu\)<br />Gravitational potential \(\Gamma^{\alpha}_{\rho\sigma}\) - electromagnetic potential \(A_\mu\)<br />Curvature tensor \(R^{\alpha}_{\beta\gamma\delta}\) - electromagnetic field \(F_{\mu\nu}\)<br />No gravitation \(R^{\alpha}_{\beta\gamma\delta} = 0\) - no electromagnetic field \(F_{\mu\nu} = 0\)<br /><br /><br /></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-80589053061596160642017-10-08T23:57:00.000-04:002017-10-08T23:57:50.329-04:00<h2 style="text-align: center;">The gravitational field</h2><div><br /></div><div>Today we will start implementing the 7-point roadmap in the case of the gravitational field. Technically gravity does not form a gauge theory, but since it was the starting point of Weyl's insight, I will start with it as well, and next time I will show how the program works in the case of the electromagnetic field.</div><div><br /></div><h3>1. The gauge group</h3><div>The "gauge group" in this case is the group of general coordinate transformations in a real four-dimensional Riemannian manifold M. 
Now the argument against Diff M as a gauge group comes from locality. An active diffeomorphism can move a state localized near the observer to one far away, which can be different. However, for the sake of argument I will abuse this today and consider Diff M as a "gauge group" because of the deep similarities (which we will explore in subsequent posts) between it and proper gauge theories like electromagnetism and Yang-Mills.</div><div><br /></div><h3>2. The covariant derivative giving rise to the gauge field</h3><div>For a vector field \(f^\alpha\) the covariant derivative is defined as follows:</div><div><br /></div><div>\(D_\rho f^\alpha = \partial_\rho f^\alpha +{\Gamma}^{\alpha}_{\rho\sigma} f^\sigma\)</div><div><br /></div><div>where \({\Gamma}^{\alpha}_{\rho\sigma}\) is called an <b>affine connection</b>. If we demand that the metric tensor is covariantly constant under D we find that the connection is:</div><div><br /></div><div>\({\Gamma}^{\sigma}_{\mu\nu} = \frac{1}{2}g^{\sigma\rho}[g_{\rho\mu,\nu} + g_{\rho\nu,\mu} - g_{\mu\nu,\rho}]\)</div><div><br /></div><div>where \(f_{\rho,\sigma} = \partial_\sigma f_\rho\) </div><h3><br />3. The integrability condition</h3><div>We define this condition as the commutativity of the covariant derivative. If we define the notation: \(D_\mu D_\nu f_\sigma = f_{\sigma;\nu\mu}\) we can write this condition as:</div><div><br /></div><div>\(f_{\rho;\mu\nu} - f_{\rho;\nu\mu} = 0\)</div><div><br /></div><div>Computing the expression above yields:</div><div><br /></div><div>\(f_{\rho;\mu\nu} - f_{\rho;\nu\mu} = f_\sigma {R}^{\sigma}_{\rho\mu\nu}\)</div><div>where</div><div><div>\({R}^{\sigma}_{\rho\mu\nu} = {\Gamma}^{\tau}_{\rho\mu}{\Gamma}^{\sigma}_{\tau\nu} - {\Gamma}^{\tau}_{\rho\nu}{\Gamma}^{\sigma}_{\tau\mu} + {\Gamma}^{\sigma}_{\rho\mu,\nu} - {\Gamma}^{\sigma}_{\rho\nu,\mu}\)</div></div><div><br /></div><h3>4. 
The curvature</h3><div>From above, the integrability condition is \({R}^{\sigma}_{\rho\mu\nu} = 0\), and R is called the <a href="https://en.wikipedia.org/wiki/Riemann_curvature_tensor" target="_blank">Riemann curvature tensor</a>.</div><div><br /></div><h3>5. The algebraic identities</h3><div>The algebraic identities come from the symmetry properties of the curvature tensor, which reduce its 256 components to only 20 independent ones. I am too tired to type the proof of the reduction to 20, but you can easily find it online.</div><div><br /></div><h3>6. The homogeneous differential equations</h3><div>If we take the derivative of the Riemann tensor we obtain a differential identity known as the Bianchi identity:</div><div><br /></div><div>\({R}^{\sigma}_{\rho\mu\nu;\tau} + {R}^{\sigma}_{\rho\tau\mu;\nu} + {R}^{\sigma}_{\rho\nu\tau;\mu} = 0\)</div><div><br style="background-color: #c0a154; color: #333333; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13.524px;" /><h3>7. The inhomogeneous differential equations</h3><div>This equation is of the form:</div></div><div><br /></div><div style="text-align: center;"><i>geometric concept = physical concept</i></div><div><br /></div><div>In this case we use the stress-energy tensor \(T_{\mu\nu}\), and we find a geometric object with the same mathematical properties: symmetric and divergenceless, built out of the curvature tensor. The left-hand side is the Einstein tensor:</div><div><br /></div><div>\(G_{\mu\nu} = R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R\)</div><div><br /></div><div>The constant of proportionality comes from recovering Newton's gravitational equation in the nonrelativistic limit. In the end one obtains Einstein's equation:</div><div><br /></div><div>\(G_{\mu\nu} = 8\pi G T_{\mu\nu}\)</div><div><br /></div><div>Next time I will go through the same process for the electromagnetic field and map the similarities between the two cases. 
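Meanwhile, the machinery of steps 2-6 can be exercised on a toy example. The sympy sketch below is my own illustration (not from the post): it computes the connection and the curvature for the unit 2-sphere, using this post's index conventions, and checks the antisymmetry in the last two indices and the cyclic (first Bianchi-type algebraic) identity. Note that with this post's sign convention the nonzero component comes out as \(R^{\theta}_{\phi\theta\phi} = -\sin^2\theta\); the more common textbook convention flips the overall sign.

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])  # unit 2-sphere metric
ginv = g.inv()
n = 2

def Gamma(s, m, v):
    # metric-compatible connection Gamma^s_{m v}
    return sum(ginv[s, r] * (sp.diff(g[r, m], x[v]) + sp.diff(g[r, v], x[m])
                             - sp.diff(g[m, v], x[r])) for r in range(n)) / 2

def Riem(s, r, m, v):
    # curvature with this post's convention:
    # R^s_{r m v} = Gamma^t_{r m} Gamma^s_{t v} - Gamma^t_{r v} Gamma^s_{t m}
    #             + Gamma^s_{r m, v} - Gamma^s_{r v, m}
    expr = sum(Gamma(t, r, m) * Gamma(s, t, v)
               - Gamma(t, r, v) * Gamma(s, t, m) for t in range(n))
    expr += sp.diff(Gamma(s, r, m), x[v]) - sp.diff(Gamma(s, r, v), x[m])
    return sp.simplify(expr)

# the sphere is curved: |R^theta_{phi theta phi}| = sin^2(theta)
assert sp.simplify(Riem(0, 1, 0, 1) + sp.sin(theta)**2) == 0

# antisymmetry in the last two indices, and the cyclic identity
for s in range(n):
    for r in range(n):
        for m in range(n):
            for v in range(n):
                assert sp.simplify(Riem(s, r, m, v) + Riem(s, r, v, m)) == 0
                assert sp.simplify(Riem(s, r, m, v) + Riem(s, m, v, r)
                                   + Riem(s, v, r, m)) == 0
```

Here index 0 is \(\theta\) and index 1 is \(\phi\), so `Riem(0, 1, 0, 1)` stands for \(R^{\theta}_{\phi\theta\phi}\).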
Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-36351587896418578692017-09-24T19:02:00.001-04:002017-09-24T19:02:27.001-04:00<h2 style="text-align: center;">The Math of Gauge Theories</h2><h2 style="text-align: center;"></h2><div style="text-align: left;">With a bit of a delay I am resuming the posts on gauge theory, and today I will talk about the math involved. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">In gauge theory you consider the base space-time as a manifold and you attach at each point an object called a fiber, forming what is called a <a href="https://en.wikipedia.org/wiki/Fiber_bundle" target="_blank">fiber bundle</a>. The picture you should have in mind is that of a rug.</div><div style="text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-7FVwc8iFNUE/WcUurNvHGaI/AAAAAAAABRc/_7ZqXYwPdOIXy_vuiPOf4O8S3QR0vcIGQCLcBGAs/s1600/rug.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="462" data-original-width="648" height="228" src="https://2.bp.blogspot.com/-7FVwc8iFNUE/WcUurNvHGaI/AAAAAAAABRc/_7ZqXYwPdOIXy_vuiPOf4O8S3QR0vcIGQCLcBGAs/s320/rug.jpg" width="320" /></a></div><div style="text-align: left;"><br /></div><div style="text-align: left;">The nature of the fibers is unimportant at the moment, but they should obey at least the properties of a linear space. 
</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Physically, think of the fibers as internal degrees of freedom at each spacetime point; a physical configuration would correspond to a definite location along the fiber, for each fiber. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">The next key concept is that of a <b>gauge group</b>. A gauge group is the group of transformations which do not affect the observables of the theory. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Mathematically, the gauge symmetry depends on how we relate points between nearby fibers, and to make this precise we need only one critical step: <b>define a covariant derivative</b>.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Why do we need this? Because an arbitrary gauge transformation does not change the physics, yet the ordinary derivative sees both the infinitesimal changes to the fields and the infinitesimal changes due to an arbitrary gauge transformation. Basically we need to compensate for the derivative of an arbitrary gauge transformation.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">If <b>d</b> is the ordinary derivative, let's call <b>D</b> the covariant derivative; their difference (which is a linear operator) is called either a <b>differential connection</b>, a <b>gauge field</b>, or a <b>potential</b>:</div><div style="text-align: left;"><br /></div><div style="text-align: left;">A(x) = <b>D</b> - <b>d</b></div><div style="text-align: left;"><br /></div><div style="text-align: left;"><b>d </b>and<b> A </b>act differently: <b>d</b> "sees" the neighbourhood behaviour but ignores the value of the function on which it acts, while <b>A</b> acts on the value but is blind to the neighbourhood behaviour. 
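The compensation can be made concrete in a minimal one-dimensional \(U(1)\) sketch in pure Python (my own illustration; the fields \(\psi\), \(\chi\) and \(A\) are arbitrary smooth choices, not from the text, and I use the convention \(D = \partial_x - iA\)): under a local phase change \(\psi' = e^{i\chi(x)}\psi\) with \(A' = A + \partial_x \chi\), the ordinary derivative of \(\psi'\) picks up an unwanted \(i\chi'\psi\) term, while the covariant derivative transforms covariantly:

```python
import cmath
import math

h = 1e-5  # finite-difference step

# arbitrary smooth sample data (illustration only)
def psi(x):
    return complex(math.cos(x), 0.5 * math.sin(2 * x))

def chi(x):
    return x * x               # local gauge function

def A(x):
    return 0.4 * math.sin(x)   # gauge field / connection

def d(f, x):
    # ordinary derivative via central difference
    return (f(x + h) - f(x - h)) / (2 * h)

def psi_p(x):
    return cmath.exp(1j * chi(x)) * psi(x)   # gauged field psi'

def A_p(x):
    return A(x) + d(chi, x)                  # gauged connection A'

x0 = 0.7
phase = cmath.exp(1j * chi(x0))

# the ordinary derivative does NOT transform covariantly:
assert abs(d(psi_p, x0) - phase * d(psi, x0)) > 1e-3

# the covariant derivative D = d - iA does: D'psi' = e^{i chi} D psi
D_psi_p = d(psi_p, x0) - 1j * A_p(x0) * psi_p(x0)
D_psi = d(psi, x0) - 1j * A(x0) * psi(x0)
assert abs(D_psi_p - phase * D_psi) < 1e-6
```

The extra piece picked up by the ordinary derivative, \(i\chi'(x_0)\psi\), is exactly what the shift of the connection subtracts off.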
</div><div style="text-align: left;"><br /></div><div style="text-align: left;">The condition we will impose on D is that it must satisfy the Leibniz identity, because it is a derivative:</div><div style="text-align: left;"><br /></div><div style="text-align: left;"><b>D</b>(fg) = (<b>D</b>f)g+f(<b>D</b>g)</div><div style="text-align: left;"><br /></div><div style="text-align: left;">which in turn demands:</div><div style="text-align: left;"><br /></div><div style="text-align: left;"><b>A</b>(fg) = (<b>A</b>f)g+f(<b>A</b>g)</div><div style="text-align: left;"><br />In general only one part of <b>A</b> may be used to compensate for gauge transformations, and the remaining part represents an external field that may be interpreted as a potential. When no external potentials are involved, <b>A</b> usually respects <b>integrability conditions</b>. Those conditions depend on the concrete gauge theory, and we will illustrate this in subsequent posts.<br /><br />When external fields are present, the integrability conditions are not satisfied, and this is captured by what is called a <b>curvature</b>. The name comes from general relativity, where the lack of integrability is precisely the space-time curvature.<br /><br />The symmetry properties arising out of the curvature construction give rise to <b>algebraic identities</b>.<br /><br />Next in gauge theories we have the <b>homogeneous </b>and <b>inhomogeneous differential equations</b>. Examples of homogeneous differential equations are the Bianchi identities in general relativity and the two homogeneous Maxwell equations. The inhomogeneous equations are related to the sources of the fields (the current in electrodynamics, and the stress-energy tensor in general relativity).<br /><br />So to recap, the steps used to build a gauge theory are:<br /><br />1. the gauge group<br />2. the covariant derivative giving rise to the gauge field<br />3. the integrability condition<br />4. the curvature<br />5. the algebraic identities<br />6. 
the homogeneous equations<br />7. the inhomogeneous equations<br /><br />In the following posts I will spell out this outline, first for general relativity and then for electromagnetism. Technically general relativity is not a gauge theory, because diffeomorphism invariance cannot be understood as a gauge group, but the math similarities are striking, and there is a deep connection between diffeomorphism invariance and gauge theory which I will spell out in subsequent posts. So for now please accept this sloppiness, which will get corrected in due time.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-27477748535471626502017-09-04T21:00:00.000-04:002017-09-17T22:54:12.340-04:00<h2 style="text-align: center;">The Bohm-Aharonov effect</h2><div><br /></div><div>Today we come back to gauge theory and continue on Weyl's ideas. With the advent of quantum mechanics Weyl realized that he could reinterpret his change in scale as a change in the phase of the wavefunction. Suppose we make the following change to the wavefunction:</div><div><br /></div><div>\(\psi \rightarrow \psi e^{ie\lambda/\hbar}\)</div><div><br /></div><div>The overall phase does not affect the Born rule, and we did not change the physics (here \(\lambda\) does not depend on space and time, and this is called a global phase transformation). Let's make the phase change depend on space and time, \(\Lambda = \Lambda (x,t) \), and see where it leads. 
</div><div><br /></div><div>To justify this, assume we are studying charged-particle motion in an electromagnetic field, and suppose that \(\Lambda\) corresponds to a gauge transformation for the electromagnetic field potentials \(A\) and \(\phi\):</div><div><br /></div><div><div>\(A\rightarrow A + \nabla \Lambda\)</div><div>\(\phi \rightarrow \phi - \partial_t \Lambda\)</div></div><div><br /></div><div>This should not change the physics, and in particular it should not change Schrodinger's equation. To make Schrodinger's equation invariant under a local \(\Lambda\) change we need to add \(-eA\) to the quantum momentum operator:</div><div><br /></div><div>\(-i\hbar \nabla \rightarrow -i\hbar \nabla -eA\)</div><div><br /></div><div>And the Schrodinger equation of a charged particle in an electromagnetic field reads:</div><div><br /></div><div>\([\frac{1}{2m}{(-i\hbar\nabla -eA)}^2 + e\phi +V]\psi = i\hbar\frac{\partial \psi}{\partial t}\)</div><div><br /></div><div>But why do we have the additional \(eA\) term to begin with? Its origin is in the Lorentz force. If \(B = \nabla \times A\) and \(E = -\nabla \phi - \dot{A}\), the Lagrangian takes the form:</div><div><br /></div><div>\(L = \frac{1}{2} mv^2 - e\phi + ev\cdot A\)</div><div><br /></div><div>which yields the canonical momenta:</div><div><br /></div><div>\(p_i = \frac{\partial L}{\partial \dot{x}_i} = mv_i + eA_i\)</div><div><br /></div><div>and subtracting \(eA\) from the momenta in the Hamiltonian yields the Lorentz force from Hamilton's equations of motion. </div><div><br /></div><div>Coming back to Schrodinger's equation, we notice that the electric and magnetic fields E and B do not enter the equation; instead we have the electromagnetic potentials. Suppose we have a long solenoid with a nonzero magnetic field B inside and zero magnetic field outside. Outside the solenoid, in classical physics we cannot detect any change whether or not current flows through the wire. 
However the vector potential is not zero outside the solenoid (\(\nabla\times A = 0\) does not imply \(A=0\)), and the Schrodinger equation has different solutions when \(A = 0\) and \(A\ne 0\). </div><div><br /></div><div>From this insight <a href="https://en.wikipedia.org/wiki/Aharonov%E2%80%93Bohm_effect" target="_blank">Bohm and Aharonov</a> came up with a clever experiment to put this to the test: in a double-slit experiment, they proposed to add a long solenoid after the slits. Record the interference pattern with no current flowing through the solenoid, and repeat the experiment with the current creating a magnetic field inside the solenoid. Since the electrons do not enter the solenoid, from classical physics we should expect no difference, but in quantum mechanics the vector potential is not zero and the interference pattern shifts. Unsurprisingly, the experiment confirms the theoretical computation precisely.</div><div><br /></div><div>There are several important points to be made. First, there is no classical explanation of the effect: E and B are not fundamental, but \(\phi\) and \(A\) are. <b>It is mind boggling that even today there are physicists who do not accept this and continue to look for effects rooted in E and B. </b>Second, the gauge symmetry is not just an accidental symmetry of Maxwell's equations but a basic physical principle which turns out to govern all fundamental forces in nature. Third, the right framework for gauge theory is geometrical, and we will explore this in depth in subsequent posts. 
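The size of the shift can be estimated in a few lines of code. The sketch below is my own illustration (the solenoid field and radii are arbitrary assumed values, not taken from any experiment): it numerically integrates the ideal-solenoid vector potential \(A = \frac{\Phi}{2\pi\rho^2}(-y, x)\) around a closed loop outside the solenoid, checks that \(\oint A\cdot dl = \Phi\) even though \(B = 0\) on the loop, and converts the enclosed flux into the phase \(\Delta\varphi = e\Phi/\hbar\):

```python
import math

e = 1.602176634e-19      # electron charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J s

# assumed solenoid parameters (illustrative values only)
B = 0.01         # field inside the solenoid, T
r_sol = 1e-6     # solenoid radius, m
flux = B * math.pi * r_sol**2   # enclosed flux Phi

# numerically integrate A . dl around a circle of radius R > r_sol,
# where outside the solenoid A = Phi/(2 pi rho^2) * (-y, x)
R, N = 5e-6, 1000
total = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    xx, yy = R * math.cos(t), R * math.sin(t)
    rho2 = xx * xx + yy * yy
    Ax = -flux * yy / (2 * math.pi * rho2)
    Ay = flux * xx / (2 * math.pi * rho2)
    dt = 2 * math.pi / N
    dx, dy = -R * math.sin(t) * dt, R * math.cos(t) * dt
    total += Ax * dx + Ay * dy

# even though B = 0 on the loop, the circulation of A equals the flux
assert abs(total - flux) / flux < 1e-3

phase = e * flux / hbar   # Aharonov-Bohm phase shift, in radians
print(phase / (2 * math.pi), "fringes")
```

For these assumed numbers the phase is tens of radians, i.e. a shift of several fringes, which is why even a very thin solenoid produces a visible displacement of the pattern.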
Please stay tuned.<br /><br /><h4><b style="background-color: yellow;">Due to travel, the next post is delayed 2 days.</b></h4></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com4tag:blogger.com,1999:blog-3832136017893749497.post-19506398177029549802017-08-20T20:31:00.000-04:002017-08-20T20:31:42.704-04:00<h2 style="text-align: center;">Impressions from Yellowstone</h2><div><br /></div><div>I was on vacation for a week in Yellowstone, and I will put the physics posts on hold to share what I saw. First, the park is simply amazing and I highly recommend visiting if you have the chance. You need 3 days as a bare minimum. The main road is shaped like the number 8, and on the west (left) side you get to see lots of fuming hot spots ejecting steam and sulfur.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-cG4nVoFoS1c/WZoeZL3xCjI/AAAAAAAABPg/jIGbPHT1Qrg88j4s-vcrURlBY2HfbDqIQCLcBGAs/s1600/DSCN1412.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-cG4nVoFoS1c/WZoeZL3xCjI/AAAAAAAABPg/jIGbPHT1Qrg88j4s-vcrURlBY2HfbDqIQCLcBGAs/s320/DSCN1412.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-jMPAbCknCfk/WZojgRn20vI/AAAAAAAABQQ/6MlzDeC5yAQNlkBoopa1hAYn3ML22zPEQCLcBGAs/s1600/DSCN1393.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://2.bp.blogspot.com/-jMPAbCknCfk/WZojgRn20vI/AAAAAAAABQQ/6MlzDeC5yAQNlkBoopa1hAYn3ML22zPEQCLcBGAs/s320/DSCN1393.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" 
style="clear: both; text-align: left;">The colors are due to bacteria; different bacteria live at different temperatures, giving the hot spots rings of color.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">On the south side you get the geysers and Old Faithful, which erupts every 90 minutes.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ZQQwo8QkOU4/WZoe5Gqs_3I/AAAAAAAABPo/fKaxLBzy7B4HPyExgJsYtudquZ_RNzmfQCLcBGAs/s1600/DSCN1388.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="320" src="https://4.bp.blogspot.com/-ZQQwo8QkOU4/WZoe5Gqs_3I/AAAAAAAABPo/fKaxLBzy7B4HPyExgJsYtudquZ_RNzmfQCLcBGAs/s320/DSCN1388.JPG" width="240" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: left;">You need to be there approximately 1 hour before the eruption to get a seat on the benches which surround Old Faithful. There are other geysers, but you don't know when they will erupt.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">On the east side, at the bottom of the 8, there is Yellowstone Lake, which gives rise to the Yellowstone River and the Yellowstone canyon. Not much to do at the lake; the water is very cold. 
The river forms two large waterfalls and you can visit them on both sides.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-hf-X6rRLEIc/WZogJLcl2sI/AAAAAAAABP0/Strexi5ZwhsjyZFYxo2anDj9uhItfZ0-QCLcBGAs/s1600/DSCN1498.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="320" src="https://1.bp.blogspot.com/-hf-X6rRLEIc/WZogJLcl2sI/AAAAAAAABP0/Strexi5ZwhsjyZFYxo2anDj9uhItfZ0-QCLcBGAs/s320/DSCN1498.JPG" width="240" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-K7QxESPaWiE/WZogUmbdBTI/AAAAAAAABP4/u-J2XC8ymLgjKS6VjZX4UzSLfzgAnEtZwCLcBGAs/s1600/DSCN1504.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://3.bp.blogspot.com/-K7QxESPaWiE/WZogUmbdBTI/AAAAAAAABP4/u-J2XC8ymLgjKS6VjZX4UzSLfzgAnEtZwCLcBGAs/s320/DSCN1504.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Coming north on the east side, you encounter more waterfalls and a few bison. If you are lucky, you get to see bears in the distance, usually eating a dead moose. By the way, there is a big ripoff business in bear sprays. You can buy one for $50, but you should rent one for $10/day when you hike in the forest. Even better, just buy a $1 bell to wear to let the wildlife know you are there (bears avoid people if they can hear them coming).</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">You can hike Mt. 
Washburn (a 4-hour round-trip hike) to get a panoramic view of the park, 50 miles in any direction.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-BY6MjBuX3AQ/WZoj2eZyNzI/AAAAAAAABQY/hMRnVI0vECEnY-jWBoMSxDR0hUCi6ZWnACLcBGAs/s1600/DSCN1576.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://2.bp.blogspot.com/-BY6MjBuX3AQ/WZoj2eZyNzI/AAAAAAAABQY/hMRnVI0vECEnY-jWBoMSxDR0hUCi6ZWnACLcBGAs/s320/DSCN1576.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">There is nothing to see in the east-west part of the road at the middle of the 8, and on the north of the east road there is another road leading east into Lamar Valley. Here is where you see a ton of wildlife: bison, moose, wolves. 
Literally there are thousands of bison in big herds which often cross the road.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-cKu7aOHjiZE/WZoi1ZGO46I/AAAAAAAABQI/hYfHPNz-c_QHmoB-FyylPSv0oRfgHAA6QCLcBGAs/s1600/DSCN1659.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://4.bp.blogspot.com/-cKu7aOHjiZE/WZoi1ZGO46I/AAAAAAAABQI/hYfHPNz-c_QHmoB-FyylPSv0oRfgHAA6QCLcBGAs/s320/DSCN1659.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-iMGboZXpMMM/WZojBmTVSEI/AAAAAAAABQM/xEJay1yRZSEtqW7cmisAvO06BERGPsS2wCLcBGAs/s1600/DSCN1649.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://2.bp.blogspot.com/-iMGboZXpMMM/WZojBmTVSEI/AAAAAAAABQM/xEJay1yRZSEtqW7cmisAvO06BERGPsS2wCLcBGAs/s320/DSCN1649.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-dr2OlNiwGGE/WZokL04OHQI/AAAAAAAABQc/dED9cjQfYOIjcqV3J9Xe2kQyTHmh65ecwCLcBGAs/s1600/DSCN1439.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://1.bp.blogspot.com/-dr2OlNiwGGE/WZokL04OHQI/AAAAAAAABQc/dED9cjQfYOIjcqV3J9Xe2kQyTHmh65ecwCLcBGAs/s320/DSCN1439.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Driving in the park is slow (25 mph) due to the many attractions on the 
side and the traffic jams caused by animals. You need one day for the north part, one day for the south loop, and one day for Lamar Valley.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Yellowstone sits on top of a supervolcano which erupted 7 times in the past; when it erupts, it covers half of the US with volcanic ash. There is a stationary hot spot of magma, and because the tectonic plate moves, different eruptions occurred in different places. The past eruption locations trace a clear path on the map.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-vD96n-tw31s/WZonZq_EO5I/AAAAAAAABQo/cnV5k-VoGd0jOW339lu0knAmLAWdsQCiwCLcBGAs/s1600/HotspotsSRP_update2013.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="730" data-original-width="1063" height="219" src="https://3.bp.blogspot.com/-vD96n-tw31s/WZonZq_EO5I/AAAAAAAABQo/cnV5k-VoGd0jOW339lu0knAmLAWdsQCiwCLcBGAs/s320/HotspotsSRP_update2013.JPG" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">Yellowstone park is located in the caldera (the volcanic crater) of the last eruption.</div><div class="separator" style="clear: both; text-align: left;"><br /></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-27456631207218784912017-08-06T21:07:00.000-04:002017-08-06T21:07:58.925-04:00<h2 style="text-align: center;">The origins of gauge theory</h2><br />After a bit of an absence I am back, resuming my usual blog activity. However, I am extremely busy, and I will create new posts every two weeks from now on. 
I am now starting a series explaining gauge theory, and today I will begin at the beginning, with Hermann Weyl's proposal.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-2hpX48sDNZQ/WYSVe7rANVI/AAAAAAAABNg/lETh39eQAboVSAxsV9MuxB2WV487XWsZgCLcBGAs/s1600/Hermann_Weyl.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="479" data-original-width="462" height="320" src="https://1.bp.blogspot.com/-2hpX48sDNZQ/WYSVe7rANVI/AAAAAAAABNg/lETh39eQAboVSAxsV9MuxB2WV487XWsZgCLcBGAs/s320/Hermann_Weyl.jpg" width="308" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">In 1918 Hermann Weyl attempted to unify gravity with electromagnetism (the only two forces known at the time), and in the process he introduced the idea of gauge theory. He expounded his ideas in his book "Space Time Matter", a book which I personally find hard to read. Usually the leading physicists wrote crystal-clear original papers: von Neumann, Born, Schrodinger. Weyl's book, however, combines mathematical musings with metaphysical ideas without a clear direction. The impression I got was of a mathematical, physical, and philosophical random walk, testing all possible ways and directions to see where he could make progress. He got lucky, and his lack of cohesion saved the day, because he could not spot the simple counterarguments against his proposal which could have stopped him cold in his tracks. But what was his motivation and what was his approach?</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Weyl liked the local character of general relativity and proposed (for purely philosophical reasons) the idea that all physical measurements are relative.
In particular, the norm of a vector should not be thought of as an absolute value, but as a value that can change from point to point in spacetime. To compare norms at different points, you need a "gauge", like the device used on train tracks to make sure the rails remain at a fixed distance from each other. Another word he used was "calibration", but the name "gauge" stuck.</div><br />So now suppose we have a norm \(N(x)\) of a vector and we do a shift to \(x + dx\). Then:<br /><br />\(N(x+dx) = N(x) + \partial_{\mu}N dx^{\mu}\)<br /><br />Also suppose that there is a scaling factor \(S(x)\):<br /><br />\(S(x+dx) = S(x) + \partial_{\mu}S dx^{\mu}\)<br /><br />and so to first order we get that N changes by:<br /><br />\(( \partial_{\mu} + \partial_{\mu} S) N dx^{\mu} \)<br />Since for a second gauge \(\Lambda\), \(S\) transforms like:<br /><br />\(\partial_{\mu} S \rightarrow \partial_{\mu} S +\partial_{\mu} \Lambda \)<br /><br />and since in electromagnetism the potential changes like:<br /><br />\(A_{\mu} \rightarrow A_{\mu} +\partial_{\mu} \Lambda \)<br /><br /><div>Weyl conjectured that \(\partial_{\mu} S = A_{\mu}\).</div><div><br /></div><div>However this is disastrous because (as pointed out by Einstein to Weyl on a postcard) it implies that clocks would change their frequencies based on the paths they travel (and since you can make atomic clocks, it implies that atomic spectra are not stable).</div><div><br /></div><div>Later on, with the advent of quantum mechanics, Weyl changed his idea of a scale change into that of a phase change for the wavefunction, and the original objections became moot. Still, more needed to be done for gauge theory to become useful.</div><div><br /></div><div>Next time I will talk about Aharonov-Bohm and the importance of potentials in physics as a segue into the proper math for gauge theory.
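As an aside, the electromagnetic gauge transformation \(A_{\mu} \rightarrow A_{\mu} + \partial_{\mu}\Lambda\) leaves the field strength \(F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}\) unchanged, because mixed partial derivatives of \(\Lambda\) commute. Here is a small numerical illustration of this (my sketch, not part of the original post; the potential and the gauge function below are arbitrary choices):

```python
import math

h = 1e-4  # finite-difference step

def A(x, y):
    """An arbitrary vector potential (A_x, A_y) -- illustrative choice."""
    return (math.sin(x) * y, x * x + math.cos(y))

def lam(x, y):
    """An arbitrary gauge function Lambda(x, y) -- illustrative choice."""
    return math.exp(0.3 * x) * math.sin(y)

def gauged_A(x, y):
    """The gauge-transformed potential A_mu + d_mu Lambda."""
    ax, ay = A(x, y)
    dlam_dx = (lam(x + h, y) - lam(x - h, y)) / (2 * h)
    dlam_dy = (lam(x, y + h) - lam(x, y - h)) / (2 * h)
    return (ax + dlam_dx, ay + dlam_dy)

def F(pot, x, y):
    """Field strength F_xy = dA_y/dx - dA_x/dy by central differences."""
    dAy_dx = (pot(x + h, y)[1] - pot(x - h, y)[1]) / (2 * h)
    dAx_dy = (pot(x, y + h)[0] - pot(x, y - h)[0]) / (2 * h)
    return dAy_dx - dAx_dy

x0, y0 = 0.7, -1.2
print(abs(F(A, x0, y0) - F(gauged_A, x0, y0)) < 1e-4)  # True: F is gauge invariant
```

Weyl's troubles came precisely from attaching the gauge factor to the norm instead of to a quantity, like \(F_{\mu\nu}\), that is insensitive to the choice of \(\Lambda\).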
</div><div><br /></div><div>Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-8975140737566540782017-07-10T00:04:00.000-04:002017-07-10T00:04:32.066-04:00<h2 style="text-align: center;">The main problem of MWI is the concept of probability</h2><div style="text-align: center;"><br /></div><div style="text-align: left;">Now it is my turn to present the counterarguments against many worlds. All known derivations of the Born rule in MWI have (documented) issues of circularity: in the derivation, the Born rule is injected in some form or another. However the problem is deeper: <b>there is no good way to define probability in MWI</b>.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Probability can be defined either in the <a href="https://en.wikipedia.org/wiki/Frequentist_probability" target="_blank">frequentist approach</a>, as the limit of relative frequency over a large number of trials, or subjectively, as information update in the <a href="https://en.wikipedia.org/wiki/Bayesian_probability" target="_blank">Bayesian approach</a>. Both approaches make the same predictions. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">It is generally assumed by all MWI supporters that branch counting leads to incorrect predictions, and because of this the focus is shifted to subjective probabilities and the "apparent emergence" of the Born rule. However this implicitly breaks the frequentist-subjective probability relationship. The only way one can use the frequentist approach is by branch counting. Let's consider a simple example.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">Suppose you work at a factory which makes fair (quantum) coins which land 50% up and 50% down. Your job is quality assurance and you are tasked with finding the defective coins.
Can you do your job in an MWI quantum universe? The only thing you can do is to flip the coin many times and see if it lands about 50% up and 50% down. For a fair coin there is no issue. However<b> for a biased coin (say 80%-20%) you get the very same outcomes as in the case of the fair coin and you cannot do your job</b>.</div><div style="text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-v3a8EYXG0W8/WWL2JRYuN6I/AAAAAAAABMs/GBhaM1Hm4Acud3qnEzz3TglGGtE4kdfOACLcBGAs/s1600/dice.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="490" data-original-width="700" height="140" src="https://3.bp.blogspot.com/-v3a8EYXG0W8/WWL2JRYuN6I/AAAAAAAABMs/GBhaM1Hm4Acud3qnEzz3TglGGtE4kdfOACLcBGAs/s200/dice.jpg" width="200" /></a></div><div style="text-align: left;"><br /></div><div style="text-align: left;"><br /></div><div style="text-align: left;">There is only one way to fix the problem: consider that the world does not split into 2 branches (up and down), but into, say, 1 million up and 1 million down branches. In this case you can think that in the unfair case the world splits into 1.6 million up worlds and 400 thousand down worlds. <b>This would fix the concept of probability in MWI, restoring the link between frequentist and subjective probabilities, but <u>this is not what MWI supporters claim</u>. </b>Plus, this has problems of its own with irrational numbers: the solution is only approximate up to some limit of precision, and it can be refuted by any experiment run long enough.</div><div style="text-align: left;"><br /></div><div style="text-align: left;">So, to boil the problem down: in MWI there is no outcome difference between a fair and an unfair coin toss; in both cases you get an "up world" and a "down world". Repeating the coin toss any number of times does not change the nature of the problem in any way.
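To make the point concrete, here is a toy script (my illustration, not part of the original post): enumerating the branches of N coin flips shows that the fair and the biased coin generate the exact same branching structure, and the Born weights that distinguish them have to be supplied as extra data.

```python
# Toy model of N quantum coin flips in MWI: each flip splits every branch
# into an "up" and a "down" branch, independently of the coin's bias.
from itertools import product

def branches(n):
    """All branches after n flips; note: no dependence on the bias at all."""
    return list(product("UD", repeat=n))

def born_weight(branch, p_up):
    """Born weight of a branch -- extra data put in by hand, not read off
    the branching structure."""
    w = 1.0
    for outcome in branch:
        w *= p_up if outcome == "U" else 1.0 - p_up
    return w

print(len(branches(3)))                              # 8 branches, fair or biased
print(round(born_weight(("U", "U", "D"), 0.5), 3))   # 0.125 for the fair coin
print(round(born_weight(("U", "U", "D"), 0.8), 3))   # 0.128 for the 80/20 coin
```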
Physics is an experimental science and we test the validity of theories against experiments. <b>Discarding branch counting in MWI is simply unscientific</b>. </div><div style="text-align: left;"><br /></div><div style="text-align: left;">Now, in the last post Per argued for MWI. I asked him to show what would happen if we flip a fair and an unfair coin three times, to simply run through his argument on an elementary example and not hide behind general equations. After some back and forth, Per computed the distribution \(\rho\) in the fair and unfair case (to match quantum mechanics predictions), but <b>the point is that</b> \(\rho\) <b>must arise out of the relative frequencies and not be computed by hand</b>. <b>Because the relative frequencies are identical in the two cases,</b> \(\rho\) must be injected by a different mechanism. His computation of \(\rho\) is the point where circularity is introduced in the explanation. If you look back in his post, this comes from his equation 5, which is derived from equation 3. <b>Equation 3 assumes the Born rule and is the root cause of circularity in his argument. <u>Per's equation 7 recovers the Born rule in the limit case after assuming the Born rule in equation 3 -</u></b><u> <b>q.e.d.</b></u></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com30tag:blogger.com,1999:blog-3832136017893749497.post-81022975360025661612017-06-25T15:58:00.000-04:002017-07-02T22:31:26.672-04:00<h2 style="text-align: center;">Guest Post defending MWI</h2><div><i><br /></i></div><div><i>As promised, here is a guest post from Per Arve. I am not interjecting my opinion in the main text but I will ask questions in the comments section.</i><br /><i><br /></i><i>Due to the popularity of this post I am delaying the next post for a week.</i></div><div><br /></div>The reason to abandon the orthodox interpretation of quantum mechanics is its incompleteness.
Bohr and Heisenberg rejected the possibility of describing the measurement process as a physical process. This is encoded in Bohr's claim that the quantum world cannot be understood. Such an attitude served to avoid endless discussions about the weirdness of quantum mechanics and to divert attention to the description of microscopic physics with quantum mechanics. Well done! A limited theory is better than no theory. <br /><br />But we should always try to find theories that describe a larger set of processes in a unified way. The work by Everett and the later development of decoherence theory by Zeh, Zurek and others have given us the elements to describe the measurement process, too, as a quantum mechanical process. Their analysis of the measurement process implies that the unitary quantum evolution leads to the emergence of separate new "worlds". The appearance of separate "worlds" can only be avoided if there is some mechanism that breaks unitarity.<br /><br />The most well-known problem of Everett's interpretation is that of the derivation of the Born rule. I describe the solution of that problem here. (You can also check my article on the arxiv <a href="http://arxiv.org/abs/1603.01625" target="_blank">[1603.01625] Postulates for and measurements in Everett's quantum mechanics</a>)<br /><br />The main point is to prove that physicists experience the Born rule. This is done by taking an outside view of the parallel worlds created in a measurement situation. The question of what probability is from the perspective of an observer inside a particular branch is more a matter of philosophy than of science.<br /><br />The natural way to find out where something is located is to probe with some force and find out where we meet resistance. The force should not be so strong that it modifies the system we want to probe.
This corresponds to the first-order perturbation of the energy due to the external potential U(x),<br /><div><br /></div><div>\(\Delta E =\int d^3 x {|\psi (x)|}^2 U(x)\) (1)<br /><br />This shows that \({|\psi(x)|}^2\) gives where the system is located. (Here, spin and similar indexes are omitted.)<br /><br />The argumentation for the Born rule relies on the fact that one may ignore the presence of the system in regions where the integrated value of the absolute square of the wave function is very small. <br /><br />In order to have a well-defined starting point, I have formulated two postulates for Everett's quantum mechanics.<br /><br /><b>EQM1 </b>The state is a complex function of positions and a discrete index j for spin etc,<br /><br />\(\Psi = \psi_j (t, x_1, x_2, ...) \) (2)<br /><br />Its basic interpretation is that the density </div><div><br /></div><div>\(\rho_j (t, x_1, x_2,...) = {|\psi_j (t, x_1, x_2, ...)|}^2 \) (3) <br /><br />answers where the system is in position, spin, etc.</div><div><br />It is absolutely square integrable and normalized to one </div><div><br />\( \int \int \cdots dx_1 dx_2 \cdots \sum_j {|\psi_j (t, x_1, x_2, ...)|}^2 = 1\) (4)</div><div><br />This requirement signifies that the system has to be somewhere, not everywhere. If the value of the integral is zero, the system doesn't exist anywhere. <br /><br /><b>EQM2 </b>There is a unitary time development of the state, e.g.,</div><div><br /></div><div>\(i \partial_t \Psi = H\Psi \), <br /><br />where H is the Hermitian Hamiltonian. The term unitary signifies that the value of the left hand side in (4) is constant for any state (2). <br /><br />Consider the typical measurement where something happens in a reaction and what comes out is collected in an array of detectors, for instance the Stern-Gerlach experiment.
Each detector will catch particles that have a certain value of the quantity B we want to measure.<br /><br />Write the state that enters the array of detectors as a sum of components that enter the individual detectors, \(|\psi \rangle = \sum c_b |b\rangle\), where b is one of the possible values of B. When that state has entered the detectors we can ask, where is it? The answer is that it is distributed over the individual detectors. The distribution is </div><div><br /></div><div>\(\rho_b = {|c_b|}^2 \) (5)</div><div><br />This is derived by integrating the density (3) over each detector, using that each state \(|b\rangle\) has support only inside its own detector. </div><div><br />The interaction between \(|\psi \rangle\) and the detector array will cause decoherence. The total system of detector array and \(|\psi \rangle\) splits into separate "worlds" such that the different values b of the quantity B will belong to separate "worlds".<br /><br />After repeating the measurement N times, the distribution that answers how many times the value \(b=u\) has been measured is</div><div><br />\(\rho(m:N | u)= b(N,m) {(\rho_u)}^m{(\rho_{¬u})}^{N−m} \) (6)</div><div><br />where \(b(N,m)\) is the binomial coefficient \(N\) over \(m\) and \(\rho_{¬u}\) is the sum over all \(\rho_b\) except \(b=u\).</div><div><br />The relative frequency \(z=m/N\) is then given by<br /><br />\(\rho(z|u) \approx \sqrt{N/(2\pi \rho_u \rho_{¬u})} \exp( −N{(z−\rho_u)}^2/(2\rho_u \rho_{¬u}) ) \) (7) <br /><br />This approaches a Dirac delta \(\delta(z − \rho_u)\). If the tails of (7) with low integrated value are ignored, we are left with a distribution with \(z \approx \rho_u\). This shows that the observer experiences a relative frequency close to the Born value. Reasonably, the observer will therefore believe in the Born rule.<br /><br />The palpability of the densities (6) and (7) may be seen by replacing the detectors by a mechanism that captures and holds the system at the different locations.
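As a numerical aside (my addition, not part of the original guest post), the concentration claimed in (7) can be checked directly from the exact binomial distribution (6); the value \(\rho_u = 0.8\) below is an arbitrary choice:

```python
# Direct check of the binomial distribution (6): the mass of the relative
# frequency z = m/N concentrates around rho_u as N grows, as claimed in (7).
from math import comb

def mass_near(N, rho_u, width=0.05):
    """Probability mass of eq. (6) in the window |m/N - rho_u| < width."""
    return sum(comb(N, m) * rho_u**m * (1 - rho_u)**(N - m)
               for m in range(N + 1) if abs(m / N - rho_u) < width)

rho_u = 0.8
masses = [mass_near(N, rho_u) for N in (10, 100, 1000)]
print([round(p, 3) for p in masses])  # increases toward 1 as N grows
```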
Then, we can measure to what extent the system is at the different locations (4) using an external perturbation (1). In principle, the distribution from N measurements is also directly measurable if we consider N parallel experiments. The relative frequency distribution (7) is then also in principle a directly measurable quantity.<br /><br />A physicist who believes in the Born rule will use it for statistical inference in quantum experiments. According to the analysis above, it will work just as well as we expect it to do using the Born rule in a single world theory. <br /><br />A physicist who believes in a single world will view the Born rule as a law about probabilities. A many-worlder may view it as a rule that can be used for inference about quantum states, as if the Born rule were about probabilities.<br /><br />With my postulates, Everett's quantum mechanics describes the world as we see it. That is what should be discussed, not whether it pleases anybody or not.<br /><br />If the reader is interested in what to do in a quantum Russian roulette situation, I do not have much to offer. How to decide your future seems to be a philosophical and psychological question. As a physicist, I don't feel obliged to help you with that. <br /><br />Per Arve, Stockholm June 24, 2017 </div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com34tag:blogger.com,1999:blog-3832136017893749497.post-638933484799339292017-06-18T18:22:00.000-04:002017-06-18T18:22:16.086-04:00<h2 style="text-align: center;">Impressions from FQMT 2017</h2><br /><img alt="Poster, FQMT" src="https://lnu.se/ImageVault/publishedmedia/sct81szxsxvuiy402jjl/FQMT_affisch_170612.jpg" /><br /><br />I just came back from Vaxjo, where I had a marvelous time.
It does sound cliche, but this year's was the best conference organized by Professor Khrennikov, and I got many pleasant and unexpected surprises.<br /><br />The conference did have one drawback: every day after the official talks we continued the discussions about quantum mechanics well past midnight at "The Bishops Arms", where we drank too many beers, causing me to gain a few pounds :)<br /><br />At the conference I had a chance to meet and talk with Philippe Grangier (he worked with Aspect on the famous Bell experiment) and I witnessed him giving the most cogent comments on all the talks, from experimental to theoretical. He even surprised me when he asked at the end of my presentation why I was using time to derive the Leibniz identity, when any other symmetry would do. Indeed this is true, but the drawback is that any other symmetry lacks generality later on, during composition arguments. Suppose we compose two physical systems, one with a continuous symmetry and another without: the composed system will lack that symmetry. The advantage of using time is that it works for all cases where energy is conserved.<br /><br />Grangier presented his approach to quantum mechanics reconstruction using contextuality and continuity (as in Hardy's 5 reasonable axioms paper). The problem with continuity is that it lacks physical intuition/motivation. Why not impose right away the C* condition \(||a^* a|| = {||a||}^2\) and recover everything from it?<br /><br />Bob Coecke and Aleks Kissinger's book on the pictorial formalism, "Picturing Quantum Processes", was finally ready and was advertised at the conference. If you go to <a href="http://www.cambridge.org/pqp" target="_blank">www.cambridge.org/pqp</a> you can get it with a 20% discount when you enter the code COECKE2017 at the checkout.<br /><br />Coecke's talk was about causal theories and his main idea was: "time reversal of any causal theory = eternal noise".
This looks deep, but it is really a trivial observation: you can't get anything meaningful out of, and you can't control, signals which have an information starting point, because the starting point corresponds to the notion of false and anything is derivable from false.<br /><br />Robert Raussendorf from the University of British Columbia in Vancouver had a nice talk about measurement-based quantum computation, where measurements are used to control the computation, and he identified a cohomological framework.<br /><br />One surprise talk for me was the one given by Marcus Appleby from the University of Sydney, who presented a framework of equivalence for quantum mechanics between the finite and infinite dimensional cases. This is of particular importance to me, as I recovered quantum mechanics in the finite dimensional case only and I am searching for an approach to handle the infinite dimensional case.<br /><br />I made new friends there and I got very good advice and ideas - a big thank you. I also got to give many in-person presentations of my quantum reconstruction program.<br /><br />There was one person claiming to have solved the puzzles of the many worlds interpretation. I sat next to him at the conference dinner and I invited him to write a guest post at this blog to present his solution. As a disclaimer, I think MWI lacks a proper notion of probability and I have yet to see a solution, but I am open to listening to new arguments. What I would like to see is an explanation of how to reconcile a 50-50% world split with quantum probabilities of 80-20%.
I did not see this explained in his presentation to my satisfaction, but maybe I was not understanding the argument properly.Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-65337870302605349522017-06-10T18:25:00.000-04:002017-06-10T18:25:27.893-04:00<h2 style="text-align: center;">Jordan-Banach, Jordan-Lie-Banach, C* algebras, and quantum mechanics reconstruction</h2><div><br /></div><div>This is a short post, written while waiting for my flight at Dulles Airport on my way to Vaxjo, Sweden, for a physics conference. </div><div><br /></div><div>First, some definitions. A Jordan-Banach algebra is a Jordan algebra with the usual norm properties of a Banach algebra. A Jordan-Lie-Banach algebra is a Jordan-Banach algebra which is a Lie algebra at the same time. A Jordan-Lie algebra is the composability two-product algebra which we obtained using category theory arguments.</div><div><br /></div><div>Last time I hinted at this week's topic, which is the final step in reconstructing quantum mechanics using category theory arguments. What we obtain from category theory is a Jordan-Lie algebra, which in the finite dimensional case has the spectral properties for free, because the spectrum is uniquely defined in an algebraic fashion (things get very tricky in the infinite dimensional case). So in the finite dimensional case JL=JLB.</div><div><br /></div><div>But how can we go from a Jordan-Banach algebra to C*? In general it cannot be done. C* algebras correspond to quantum mechanics, and on the Jordan side we have the octonionic algebra, which is exceptional. Thus it cannot be related to quantum mechanics, because octonions are not associative. However we can define state spaces for both Jordan-Banach and C* algebras and we can investigate their geometry. The geometry is definable in terms of projector elements, which obey \(a*a = a\). In turn this defines the pure states as the boundary of the state spaces.
If the two geometries are identical, we are in luck. </div><div><br /></div><div>Now the key question is: <b>under what circumstances can we complexify a Jordan-Banach algebra to get a C* algebra?</b></div><div><br /></div><div>In nature, observables play a dual role as both observables and generators. In the literature this is called <b>dynamic correspondence</b>. Dynamic correspondence is the essential ingredient: any <b>JB algebra with dynamic correspondence is the self-adjoint part of a C* algebra. This result holds in general and can be established by comparing the geometry of the state spaces for JB and C* algebras.</b></div><div><b><br /></b></div><div>Now for the punch line: a JL algebra comes with dynamic correspondence, as I showed in prior posts. The conclusion is therefore:</div><div><b><br /></b></div><div><b>in the finite dimensional case: JL is a JLB algebra which gives rise to a C* algebra by complexification, and by the GNS construction we obtain the standard formulation of quantum mechanics. </b></div><div><b><br /></b></div><div><b><u>Quantum mechanics is fully reconstructed in the finite dimensional case from physical principles using category theory arguments! </u></b></div><div><b><u><br /></u></b></div><div>By the way, this is what I'll present at the conference (the entire series on QM reconstruction).</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-59230500583289665442017-06-04T22:06:00.000-04:002017-06-04T22:06:58.044-04:00<h2 style="text-align: center;">From composability two-product algebra to quantum mechanics</h2><div>Last time we introduced the composability two-product algebra consisting of the Lie algebra \(\alpha\) and the Jordan algebra \(\sigma\) along with their compatibility relationship.
This structure was obtained by categorical arguments using two natural principles:</div><div><br /></div><div>- laws of nature are invariant under time evolution</div><div>- laws of nature are invariant under system composition</div><div><br /></div><div>What we did not obtain were spectral properties. <b>However, in the finite dimensional case, we do not need spectral properties and we can fully recover quantum mechanics <u>in this particular case</u>. </b>The trick is to classify all possible two-product algebras, because there are only a handful of them. This is achieved with the help of the <a href="https://en.wikipedia.org/wiki/Artin%E2%80%93Wedderburn_theorem" target="_blank">Artin-Wedderburn theorem</a>. </div><div><br /></div><div>First, some preliminaries. We need to introduce a Jordan-Lie-Banach (JLB) algebra by augmenting the composability two-product algebra with spectral properties:</div><div>- a JLB algebra is a composability two-product algebra with the following two additional properties:</div><div><ul><li>\(||x\sigma x|| = {||x||}^{2}\)</li><li>\(||x\sigma x||\leq ||x\sigma x + y\sigma y||\)</li></ul><div>Then we can define a C* algebra by complexification of a JLB algebra, where the C* norm is:</div></div><div><br /></div><div>\(||a+ib|| = \sqrt{{||a||}^{2}+{||b||}^{2}}\)</div><div><br /></div><div>Conversely, from a C* algebra we define a JLB algebra as its self-adjoint part, where the Jordan part is:</div><div><br /></div><div>\(a\sigma b = \frac{1}{2}(ab+ba)\)</div><div><br /></div><div>and the Lie part is:</div><div><br /></div><div>\(a\alpha b = \frac{i}{\hbar}(ab-ba)\)</div><div><br /></div><div>From the C* algebra we recover the usual quantum mechanics formulation by the <a href="https://en.wikipedia.org/wiki/Gelfand%E2%80%93Naimark%E2%80%93Segal_construction" target="_blank">GNS construction</a>, which gets for us:</div><div><br /></div><div>- a Hilbert space H</div><div>- a distinguished vector \(\Omega\) on H arising out of the identity of
the C* algebra</div><div>- a representation \(\pi\) of the algebra as linear operators on H</div><div>- a state \(\omega\) on C* represented as \(\omega (A) = \langle \Omega, \pi (A)\Omega\rangle_{H}\)</div><div><br /></div><div>Conversely, from quantum mechanics a C* algebra arises as the bounded operators on the Hilbert space.</div><div><br /></div><div>The infinite dimensional case is a much harder <b>open</b> problem. Jumping from the Jordan-Banach operator algebra side to the C* and von Neumann algebras is very tricky, and this involves characterizing the state spaces of operator algebras. Fortunately all this is already settled by the works of Alfsen, Shultz, Stormer, Topping, Hanche-Olsen, Kadison, Connes. </div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-41965847399232794912017-05-21T23:32:00.000-04:002017-05-29T21:38:58.652-04:00<h2 style="text-align: center;">The algebraic structure of quantum and classical mechanics</h2><div style="text-align: center;"><br /></div><div style="text-align: left;">Let's recap what we derived so far. We started by considering <a href="http://fmoldove.blogspot.com/2017/03/time-as-continous-functor-to-recall.html" target="_blank">time as a continuous functor</a> and we derived the Leibniz identity from it.
Then, for a particular kind of time evolution which allows a representation as a product, we were able to derive two products \(\alpha\) and \(\sigma\) for which we derived the <a href="http://fmoldove.blogspot.com/2017/04/the-fundamental-bipartite-relations.html" target="_blank">fundamental bipartite relations</a>.<br /><br />Repeated applications of the Leibniz identity resulted in proving that \(\alpha\) is a Lie algebra and \(\sigma\) is a Jordan algebra, with an associator identity between them:<br /><br />\([A,B,C]_{\sigma} + \frac{J^2 \hbar^2}{4}[A,B,C]_{\alpha} = 0\)<br /><br />where \(J\) is a map between generators and observables encoding Noether's theorem.<br /><br />Now we can combine the Jordan and Lie algebras as:<br /><br />\(\star = \sigma\pm \frac{J \hbar}{2}\alpha\)<br /><br />and it is not hard to show that this product is associative (pick \(\hbar = 2\) for convenience):<br /><br />\([f,g,h]_{\star} = (f\sigma g \pm J f\alpha g)\star h - f\star(g\sigma h \pm J g\alpha h)=\)<br />\((f\sigma g)\sigma h \pm J(f\sigma g)\alpha h \pm J(f\alpha g)\sigma h + J^2 (f\alpha g)\alpha h \)<br />\(−f\sigma (g\sigma h) \mp J f\sigma (g\alpha h) \mp J f\alpha (g\sigma h) − J^2 f\alpha (g\alpha h) =\)<br />\([f, g, h]_{\sigma} + J^2 [f, g, h]_{\alpha} \pm J\{(f\sigma g)\alpha h + (f\alpha g)\sigma h − f\sigma (g\alpha h) − f\alpha (g\sigma h)\} = 0\)<br /><br />because the first part is zero by the associator identity and the second part is zero by applying the Leibniz identity.
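A quick numerical sanity check (my sketch, not from the original post): using the standard Hilbert space representations \(A\sigma B = \frac{1}{2}(AB+BA)\) and \(A\alpha B = \frac{i}{\hbar}(AB-BA)\) with \(J = i\), the minus-sign choice \(\star = \sigma - \frac{J\hbar}{2}\alpha\) collapses to the ordinary matrix product, which makes its associativity manifest:

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(c1, A, c2, B):
    """Elementwise linear combination c1*A + c2*B."""
    return [[c1 * a + c2 * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

hbar = 2  # the convenient choice made in the text

def sigma(A, B):   # Jordan product (AB + BA)/2
    return lin(0.5, matmul(A, B), 0.5, matmul(B, A))

def alpha(A, B):   # Lie product (i/hbar)(AB - BA)
    return lin(1j / hbar, matmul(A, B), -1j / hbar, matmul(B, A))

def star(A, B):    # A*B = A sigma B - (i hbar / 2) A alpha B
    return lin(1, sigma(A, B), -1j * hbar / 2, alpha(A, B))

X = [[0, 1], [1, 0]]    # Pauli x
Z = [[1, 0], [0, -1]]   # Pauli z
print(star(X, Z) == matmul(X, Z))                  # True: star is matrix multiplication
print(star(star(X, Z), X) == star(X, star(Z, X)))  # True: hence associative
```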
In the Hilbert space representation the star product is nothing but the ordinary operator multiplication of quantum mechanics.<br /><br />Now we can introduce the algebraic structure of quantum (and classical) mechanics:<br /><br /><b>A composability two-product algebra is a real vector space equipped with two bilinear maps</b> \(\sigma \) <b>and </b>\(\alpha \) <b>such that the following conditions apply:</b><br /><br />- \(\alpha \) is a Lie algebra,<br />- \(\sigma\) is a Jordan algebra,<br />- \(\alpha\) is a derivation for \(\sigma\) and \(\alpha\),<br />- \([A, B, C]_{\sigma} + \frac{J^2 \hbar^2}{4} [A, B, C]_{\alpha} = 0\),<br />where \(J \rightarrow (−J)\) is an involution mapping generators and observables, \(1\alpha A = A\alpha 1 = 0\), \(1\sigma A = A\sigma 1 = A\)<br /><br /><b>For quantum mechanics </b>\(J^2 = -1\). <b>In the finite dimensional case the composability two-product algebra is enough to fully recover the formalism of quantum mechanics</b> by using the <a href="https://en.wikipedia.org/wiki/Artin%E2%80%93Wedderburn_theorem" target="_blank">Artin-Wedderburn theorem</a>.<br /><br /><b>The same structure applies to classical mechanics with only one change:</b> \(J^2 = 0\).<br /><br />In the classical mechanics case, in phase space, the usual Poisson bracket representation for the product \(\alpha\) can be constructively derived from the above:<br />\(f\alpha g = \{f,g\} = f \overset{\leftrightarrow}{\nabla} g = \sum_{i=1}^{n} \frac{\partial f}{\partial q^i} \frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial g}{\partial q^i}\)<br /><br />and the product \(\sigma\) is then the regular function multiplication.<br /><br />In the quantum mechanics case, in the Hilbert space representation, we have the commutator and the Jordan product:<br /><br />\(A\alpha B = \frac{i}{\hbar} (AB − BA)\)<br />\(A\sigma B = \frac{1}{2} (AB + BA)\)<br /><br />or in the phase space representation the Moyal and cosine brackets:<br /><br />\(\alpha = 
\frac{2}{\hbar}\sin (\frac{\hbar}{2} \overset{\leftrightarrow}{\nabla})\)<br />\(\sigma = \cos (\frac{\hbar}{2} \overset{\leftrightarrow}{\nabla})\)<br /><br />where the associative product is the <a href="https://en.wikipedia.org/wiki/Moyal_product" target="_blank">star product</a>.<br /><br /><b><u>Update:</u></b> The Memorial Day holiday interfered with this week's post. I was hoping to make it back home in time to write it today, but I got stuck in horrible traffic for many hours. I'll postpone the next post for a week.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-75836958396042359732017-05-15T00:46:00.000-04:002017-05-15T07:27:59.121-04:00<h2 style="text-align: center;">The Jordan algebra of observables</h2><div><br /></div><div>Last time, from concrete representations of the products \(\alpha\) and \(\sigma\) we derived this identity:</div><div><br /></div><div>\([A,B,C]_{\sigma} + \frac{i^2 \hbar^2}{4}[A,B,C]_{\alpha} = 0\)</div><div><br /></div><div>Let's use this in the particular case when \(C = A\sigma A\).
What does the left hand side say?</div><div><br /></div><div>\([A,B,C]_{\sigma} = (A\sigma B) \sigma (A\sigma A) - A\sigma (B \sigma (A \sigma A))\) </div><div><br /></div><div>which if we drop \(\sigma\) for convenience's sake reads:</div><div><br /></div><div>\((AB)(AA) - A(B(AA))\)</div><div><br /></div><div>If the right hand side is zero then we get the <a href="https://en.wikipedia.org/wiki/Jordan_algebra" target="_blank">Jordan identity</a>:</div><div><br /></div><div>\((xy)(xx) = x(y(xx))\) where \(xy = yx\).</div><div><br /></div><div>Now let's compute the right hand side and show it is indeed zero:</div><div><br /></div><div>\([A,B,A\sigma A]_{\alpha} = (A\alpha B) \alpha (A\sigma A) - A\alpha (B \alpha (A \sigma A))\)</div><div><br /></div><div>Using the Leibniz identity in the second term we get:</div><div><br /></div><div>\((A\alpha B) \alpha (A\sigma A) - (A\alpha B) \alpha (A\sigma A) - B \alpha (A\alpha (A\sigma A)) = - B \alpha (A\alpha (A\sigma A))\)</div><div><br /></div><div>But \(A\alpha (A\sigma A) = 0 \) because</div><div><br /></div><div>\(A\alpha (A\sigma A) = (A\alpha A) \sigma A + A\sigma (A\alpha A) \)</div><div><br /></div><div>and \(A\alpha A = -A\alpha A = 0\) by skew symmetry.</div><div><br /></div><div>Therefore, due to the associator identity, the product \(\sigma\) is a Jordan algebra. Now we need to arrive at the associator identity using only the ingredients derived so far. This is tedious but it can be done using only the Jacobi and Leibniz identities. Grgin and Petersen derived it in 1976 and you can see the proof <a href="https://projecteuclid.org/download/pdf_1/euclid.cmp/1103900192" target="_blank">here</a>. </div><div><br /></div><div>The associator identity is better written as:</div><div><br /></div><div>\([A,B,C]_{\sigma} + \frac{J^2 \hbar^2}{4}[A,B,C]_{\alpha} = 0\)</div><div><br /></div><div>where \(J\) is a map from the product \(\alpha\) to the product \(\sigma\). 
<b>The existence of this map is equivalent to Noether's theorem. </b>It just happens that in the quantum mechanics case \(J^2 = -1\) and the imaginary unit maps anti-Hermitean generators to Hermitean observables. </div><div><br /></div><div>In the classical physics case, \(J^2 = 0\) and this means that the product \(\sigma\) is associative (in fact it is the ordinary function multiplication) and the product \(\alpha\) can be proven to be the Poisson bracket, but that is a topic for another day as we will continue to derive the mathematical structure of quantum mechanics. Please stay tuned. </div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-38246176897686707642017-05-07T19:31:00.000-04:002017-05-08T00:44:08.433-04:00<h2 style="text-align: center;">Lie, Jordan algebras and the associator identity</h2><div><br /></div><div>Before I continue the quantum mechanics algebraic series, I want to first express my happiness at the defeat of the far (alt)-right candidate in France despite Putin's financial and hacking support. Europe has much better antibodies against scum like Trump than the US. In the US, the disease caused by the inoculation of hate perpetuated over many years by Fox News has to run its course before things will get better.</div><div><br /></div><div>Back to physics, first I will show that the product \(\alpha\) is indeed a <a href="https://en.wikipedia.org/wiki/Lie_algebra" target="_blank">Lie algebra</a>. 
This is utterly trivial because we need to show antisymmetry and the Jacobi identity:</div><div><br /></div><div>\(a\alpha b = -b\alpha a\)</div><div>\(a\alpha (b\alpha c) + c\alpha (a\alpha b) + b\alpha (c\alpha a) = 0\)</div><div><br /></div><div>We already know that the product \(\alpha\) is antisymmetric and we know that it obeys the Leibniz identity:</div><div><br /></div><div>\(a\alpha (b\circ c) = (a\alpha b) \circ c + b\circ (a\alpha c) \)</div><div><br /></div><div>where \(\circ\) can stand for either \(\alpha\) or \(\sigma\). When \(\circ = \alpha\) we get:</div><div><br /></div><div>\(a\alpha (b\alpha c) = (a\alpha b) \alpha c + b\alpha (a\alpha c) \)</div><div><br /></div><div>which by antisymmetry becomes</div><div><br /></div><div><div>\(a\alpha (b\alpha c) = - c \alpha (a\alpha b) - b\alpha (c\alpha a) \)</div></div><div><br /></div><div>In other words, the Jacobi identity.</div><div><br /></div><div>Therefore the product \(\alpha\) is in fact a Lie algebra. Now we want to prove that the product \(\sigma\) is a <a href="https://en.wikipedia.org/wiki/Jordan_algebra" target="_blank">Jordan algebra</a>.</div><div><br /></div><div>This is not as simple as the Lie algebra proof, and we will do it with the help of a new concept: the <a href="https://en.wikipedia.org/wiki/Associator" target="_blank">associator</a>. Let us first define it. The associator of an arbitrary product \(\circ\) is defined as follows:</div><div><br /></div><div>\([a,b,c]_{\circ} = (a\circ b)\circ c - a\circ (b\circ c)\)</div><div><br /></div><div>As such, it measures the lack of associativity. </div><div><br /></div><div>It is helpful now to look at the concrete realizations of the products \(\alpha\) and \(\sigma\) in quantum mechanics to know where we want to arrive. 
In quantum mechanics the product alpha is the commutator, and the product sigma is the anticommutator:</div><div><br /></div><div>\(A \alpha B = \frac{i}{\hbar}[A,B] = \frac{i}{\hbar}(AB - BA)\)</div><div>\(A\sigma B = \frac{1}{2}\{A, B\} = \frac{1}{2}(AB+BA)\)</div><div><br /></div><div>Let's compute the alpha and sigma associators:</div><div><br /></div><div>\([A,B,C]_{\alpha} = \frac{-1}{\hbar^2}([AB-BA, C] - [A, BC-CB]) = \)</div><div>\(=\frac{-1}{\hbar^2}(ABC-BAC-CAB+CBA - ABC+ACB+BCA-CBA)\)</div><div>\(= \frac{-1}{\hbar^2}(-BAC-CAB +ACB+BCA)\)</div><div><br /></div><div><br /></div><div>\([A,B,C]_{\sigma} = \frac{1}{4}(\{AB+BA, C\} - \{A, BC+CB\}) = \)</div><div>\(=\frac{1}{4}(ABC+BAC+CAB+CBA - ABC-ACB-BCA-CBA) = \)</div><div>\(=\frac{1}{4}(BAC+CAB -ACB-BCA) \)</div><div><br /></div><div>and so we have the remarkable relationship:</div><div><br /></div><div>\([A,B,C]_{\sigma} + \frac{i^2 \hbar^2}{4}[A,B,C]_{\alpha} = 0\)</div><div><br /></div><div><b>What is remarkable about this is that the Jordan and Lie algebras lack associativity in precisely the same way and because of this they can be later combined into a single operation. The identity above also holds the key for proving the Jordan identity.</b></div><div><b><br /></b></div><div>Next time I'll show how to derive the identity above using only the ingredients we proved so far and then I'll show how the Jordan identity arises out of it. 
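The cancellation above is easy to spot-check numerically. Here is a quick sketch (my own, not from the original post; the 4x4 size, the random seed, and \(\hbar = 1\) are arbitrary choices) verifying that the two associators cancel for random Hermitean matrices:

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(0)

def rand_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T          # m + m^dagger is Hermitean

def alpha(a, b):                   # commutator product: (i/hbar)[A, B]
    return (1j / hbar) * (a @ b - b @ a)

def sigma(a, b):                   # Jordan product: (1/2){A, B}
    return 0.5 * (a @ b + b @ a)

def associator(prod, a, b, c):     # [a, b, c] = (a b) c - a (b c)
    return prod(prod(a, b), c) - prod(a, prod(b, c))

A, B, C = (rand_hermitian(4) for _ in range(3))
lhs = associator(sigma, A, B, C) + (1j**2 * hbar**2 / 4) * associator(alpha, A, B, C)
print(np.allclose(lhs, 0))  # True
```

Any Hermitean triple works, because both associators reduce to the same combination \(BAC + CAB - ACB - BCA\) up to the factor \(\hbar^2 / 4\), exactly as computed above.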
Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-24171504867372669852017-04-30T00:55:00.000-04:002017-04-30T00:55:56.784-04:00<h2 style="text-align: center;">The origin of the symmetries of the quantum products</h2><div><br /></div><div>Quantum mechanics has three quantum products: </div><div><ul><li>the Jordan product of observables</li><li>the commutator product used for time evolution</li><li>the complex number multiplication of operators </li></ul><div>The last product is a composite construction of the first two and it is enough to study the Jordan product and the commutator. In the prior posts' notation, the Jordan product is called \(\sigma\), and the commutator is called \(\alpha\). We will derive their full properties using category theory arguments and the Leibniz identity. But before doing this, I want to review a bit the two products. The commutator is well known and I will not spend time on it. Instead I will give the motivation for the Jordan product. </div></div><div><br /></div><div>In quantum mechanics the observables are represented as self-adjoint operators: \(O = O^{\dagger}\). If we want to create another self-adjoint operator out of two self-adjoint operators A and B, the simple multiplication won't work because \((AB)^{\dagger} = B^{\dagger} A^{\dagger} = BA \ne AB\). The solution is to have a symmetrized product: \(A\sigma B = (AB+BA)/2\). 
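A quick numerical illustration of why the symmetrization is needed (my own sketch, not from the post; the random 3x3 self-adjoint matrices are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T          # self-adjoint by construction

A, B = rand_hermitian(3), rand_hermitian(3)

plain = A @ B                      # ordinary product: (AB)^dagger = BA != AB
jordan = 0.5 * (A @ B + B @ A)     # symmetrized (Jordan) product

print(np.allclose(plain, plain.conj().T))    # False (generically)
print(np.allclose(jordan, jordan.conj().T))  # True
```

The plain product fails to be self-adjoint whenever A and B do not commute, while the Jordan product is self-adjoint for any pair of observables.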
A lot of the quantum mechanics formalism transfers to the Jordan algebra of observables, but this is a relatively forgotten approach because it is rather cumbersome (the Jordan product is not associative but power associative) and (as expected) it does not produce any predictions different from the standard formalism based on complex numbers.<br /><br />Now, back to obtaining the symmetry properties of the Jordan product \(\sigma\) and the commutator \(\alpha\). At first we cannot say anything about the symmetry of the product \(\sigma\). However we do know that the product \(\alpha\) obeys the Leibniz identity. We have already used it to derive the fundamental composition relationships, so what else can we do? <b>We can apply it to a bipartite system:</b><br /><br />\(f_{12}\alpha_{12}(g_{12}\alpha_{12}h_{12}) = g_{12}\alpha_{12}(f_{12}\alpha_{12}h_{12}) + (f_{12}\alpha_{12}g_{12})\alpha_{12} h_{12}\)<br /><br />where<br /><br />\(\alpha_{12} = \alpha\otimes \sigma + \sigma\otimes\alpha\)<br /><br />Now <b>the key observation is that in the right hand side, </b>\(f\) <b>and</b> \(g\) <b>appear in reverse order. </b>Remember that the functions involved in the relationship above are free of constraints; judicious picks of their values lead to great simplifications because \(1 \alpha f = f\alpha 1 = 0\). The computation is tedious and I will skip it, but what you get in the end is this:<br /><br />\(f_1\alpha h_1 \otimes [f_2 \alpha g_2 + g_2 \alpha f_2 ] = 0\)<br /><br />which means that the product alpha is anti-symmetric: \(f\alpha g = -g\alpha f\)<br /><br />If we use this property in the fundamental bipartite relationship we obtain in turn that the product sigma is symmetric: \(f\sigma g = g\sigma f\)<br /><br />Next time we will prove that \(\alpha\) is a Lie algebra and that \(\sigma\) is a Jordan algebra. 
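In the concrete quantum mechanical realization (commutator and Jordan product), these symmetry properties and the bipartite product \(\alpha_{12}\) can be sanity-checked numerically. A sketch of mine (the 2x2 matrices, the random seed, and \(\hbar = 1\) are arbitrary choices):

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(2)

def rand_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T

def alpha(a, b):   # commutator product
    return (1j / hbar) * (a @ b - b @ a)

def sigma(a, b):   # Jordan product
    return 0.5 * (a @ b + b @ a)

A1, B1, A2, B2 = (rand_hermitian(2) for _ in range(4))

# alpha is anti-symmetric, sigma is symmetric
print(np.allclose(alpha(A1, B1), -alpha(B1, A1)))  # True
print(np.allclose(sigma(A1, B1), sigma(B1, A1)))   # True

# alpha_12 = alpha (x) sigma + sigma (x) alpha reproduces the
# commutator product on the composite (Kronecker product) system
lhs = alpha(np.kron(A1, A2), np.kron(B1, B2))
rhs = np.kron(alpha(A1, B1), sigma(A2, B2)) + np.kron(sigma(A1, B1), alpha(A2, B2))
print(np.allclose(lhs, rhs))  # True
```

The last check is the concrete content of \(\alpha_{12} = \alpha\otimes \sigma + \sigma\otimes\alpha\): on product observables the composite commutator splits exactly into these two cross terms.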
Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-28911436502100318092017-04-16T23:52:00.001-04:002017-04-16T23:52:31.915-04:00<h2 style="text-align: center;">The fundamental bipartite relations</h2><div><br /></div><div>Continuing from where we left off last time, we introduced the most general composite products for a bipartite system:</div><div><br /></div><div><div>\(\alpha_{12} = a_{11}\alpha \otimes \alpha + a_{12} \alpha\otimes\sigma + a_{21} \sigma\otimes \alpha + a_{22} \sigma\otimes\sigma\)</div><div>\(\sigma_{12} = b_{11}\alpha \otimes \alpha + b_{12} \alpha\otimes\sigma + b_{21} \sigma\otimes \alpha + b_{22} \sigma\otimes\sigma\)<br /><br />The question now becomes: are the \(a\) and \(b\) parameters free, or can we say something about them? To start, let's normalize the product \(\sigma\) like this:<br /><br />\(f\sigma I = I\sigma f = f\)<br /><br />which can always be done. 
Now in:<br /><br /><div>\((f_1 \otimes f_2)\alpha_{12}(g_1\otimes g_2) = \)</div><div>\(=a_{11}(f_1 \alpha g_1)\otimes (f_2 \alpha g_2) + a_{12}(f_1 \alpha g_1) \otimes (f_2 \sigma g_2 ) +\)</div><div>\(+a_{21}(f_1 \sigma g_1)\otimes (f_2 \alpha g_2) + a_{22}(f_1 \sigma g_1) \otimes (f_2 \sigma g_2 )\)<br /><br />if we pick \(f_1 = g_1 = I\) :<br /><br /><div>\((I \otimes f_2)\alpha_{12}(I\otimes g_2) = \)</div><div>\(=a_{11}(I \alpha I)\otimes (f_2 \alpha g_2) + a_{12}(I \alpha I) \otimes (f_2 \sigma g_2 ) +\)</div><div>\(+a_{21}(I \sigma I)\otimes (f_2 \alpha g_2) + a_{22}(I \sigma I) \otimes (f_2 \sigma g_2 )\)<br /><br />and recalling from last time that \(I\alpha I = 0\) from the Leibniz identity we get:<br /><br />\(f_2 \alpha g_2 = a_{21} (f_2 \alpha g_2 ) + a_{22} (f_2 \sigma g_2)\)<br /><br />which demands \(a_{21} = 1\) and \(a_{22} = 0\).<br /><br />If we make the same substitution into:<br /><br /> \((f_1 \otimes f_2)\sigma_{12}(g_1\otimes g_2) = \)<br /><div>\(=b_{11}(f_1 \alpha g_1)\otimes (f_2 \alpha g_2) + b_{12}(f_1 \alpha g_1) \otimes (f_2 \sigma g_2 ) +\)</div><div>\(+b_{21}(f_1 \sigma g_1)\otimes (f_2 \alpha g_2) + b_{22}(f_1 \sigma g_1) \otimes (f_2 \sigma g_2 )\)<br /><br />we get:<br /><br />\(f_2 \sigma g_2 = b_{21} (f_2 \alpha g_2 ) + b_{22} (f_2 \sigma g_2)\)<br /><br />which demands \(b_{21} = 0\) and \(b_{22} = 1\).<br /><br />We can play the same game with \(f_2 = g_2 = I\) and (skipping the trivial details) we get two additional conditions: \(a_{12} = 1\) and \(b_{12} = 0\).<br /><br />In coproduct notation what we get so far is:<br /><br />\(\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha + a_{11} \alpha \otimes \alpha\)<br />\(\Delta (\sigma) = \sigma \otimes \sigma + b_{11} \alpha \otimes \alpha\)<br /><br />By applying the Leibniz identity to a bipartite system, one can show after some tedious computations that \(a_{11} = 0\). 
The only remaining free parameter is \(b_{11}\), which can be normalized to either -1, 0, or 1 (elliptic, parabolic, or hyperbolic, respectively). <b>Each choice corresponds to a potential theory of nature</b>. For example, 0 corresponds to classical mechanics, and -1 to quantum mechanics.<br /><br /><b>Elliptic composability is quantum mechanics! The bipartite products obey:</b><br /><b><br /></b><b><br /></b>\(\Delta (\alpha) = \alpha \otimes \sigma + \sigma \otimes \alpha \)<br />\(\Delta (\sigma) = \sigma \otimes \sigma - \alpha \otimes \alpha\)<br /><br /><b>Please notice the similarity with complex number multiplication. This is why complex numbers play a central role in quantum mechanics.</b><br /><b><br /></b>At the moment the two products do not obey any other known properties. But we can continue this line of argument and prove their symmetry/anti-symmetry. And from there we can derive their complete properties, arriving constructively at the standard formulation of quantum mechanics. Please stay tuned.</div></div></div></div></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-75737353711244180922017-04-09T20:49:00.001-04:002017-04-09T20:49:47.734-04:00<h2 style="text-align: center;">Time evolution for a composite system</h2><div><br /></div><div>Continuing where we left off last time, let me first point out one thing which I glossed over too fast: the representation of \(D\) as a product \(\alpha\): \(D_f g = f\alpha g\). This is highly nontrivial and <b>not all time evolutions respect it</b>. In fact, the statement above is nothing but a reformulation of <a href="https://en.wikipedia.org/wiki/Noether%27s_theorem" target="_blank">Noether's theorem</a> in the Hamiltonian formalism. I did not build up the proper mathematical machinery to easily show this, so take my word on it for now. 
I might revisit this at a later time.</div><div><br /></div><div>Now what I want to do is explore what happens to the product \(\alpha\) when we consider two physical systems 1 and 2. First, let's introduce the unit element of our category, and let's call it "I":</div><div><br /></div><div>\(f\otimes I = I\otimes f = f\)</div><div><br /></div><div>for all \(f \in C\)</div><div><br /></div><div>Then we have \((f_1\otimes I) \alpha_{12} (g_1\otimes I) = f \alpha g\)</div><div><br /></div><div>On the other hand suppose in nature there exists only the product \(\alpha\). Then the only way we can construct a composite product \(\alpha_{12}\) out of \(\alpha_1\) and \(\alpha_2\) is:</div><div><br /></div><div>\((f_1\otimes f_2) \alpha_{12} (g_1 \otimes g_2) = a(f_1 \alpha_1 g_1)\otimes (f_2\alpha_2 g_2)\)</div><div><br /></div><div>where \(a\) is a constant. </div><div><br /></div><div>Now if we pick \(f_2 = g_2 = I\) we get:</div><div><br /></div><div>\((f_1\otimes I) \alpha_{12} (g_1 \otimes I) = a(f_1 \alpha_1 g_1)\otimes (I \alpha_2 I) \)</div><div>which is the same as \(f \alpha g\) by above. </div><div><br /></div><div>But what is \(I\alpha I\)? Here we use the Leibniz identity and prove it is equal to zero:</div><div><br /></div><div>\(I \alpha (I\alpha A) = (I \alpha I) \alpha A + I \alpha (I \alpha A)\)</div><div><br /></div><div>for all \(A\), and hence \(I\alpha I = 0\).</div><div><br /></div><div>But then \((f_1\otimes I) \alpha_{12} (g_1 \otimes I) = a(f_1 \alpha_1 g_1)\otimes 0 = 0\), contradicting the requirement that it equal \(f\alpha g\). This means that <b>a single product alpha by itself is not enough! Therefore we need a second product </b>\(\sigma\)! Alpha will turn out to be the commutator, and sigma the Jordan product of observables, but we will derive this in a constructive fashion.</div><div><br /></div><div>Now that we have two products in our theory of nature, let's see how we can build the composite products out of individual systems. 
Basically we try all possible combinations:</div><div><br /></div><div>\(\alpha_{12} = a_{11}\alpha \otimes \alpha + a_{12} \alpha\otimes\sigma + a_{21} \sigma\otimes \alpha + a_{22} \sigma\otimes\sigma\)</div><div>\(\sigma_{12} = b_{11}\alpha \otimes \alpha + b_{12} \alpha\otimes\sigma + b_{21} \sigma\otimes \alpha + b_{22} \sigma\otimes\sigma\)</div><div><br /></div><div>which is shorthand for (I am spelling out only the first case):</div><div><br /></div><div>\((f_1 \otimes f_2)\alpha_{12}(g_1\otimes g_2) = \)</div><div>\(=a_{11}(f_1 \alpha g_1)\otimes (f_2 \alpha g_2) + a_{12}(f_1 \alpha g_1) \otimes (f_2 \sigma g_2 ) +\)</div><div>\(+a_{21}(f_1 \sigma g_1)\otimes (f_2 \alpha g_2) + a_{22}(f_1 \sigma g_1) \otimes (f_2 \sigma g_2 )\)</div><div><br /></div><div>For the mathematically inclined reader, we have constructed what is called a <a href="https://en.wikipedia.org/wiki/Coalgebra" target="_blank">coalgebra </a>where the operation is called a coproduct: \(\Delta : C \rightarrow C\otimes C\). <b>In category theory a coproduct is obtained from a product by reversing the arrows.</b></div><div><b><br /></b></div><div>Now the task is to see if we can say something about the coproduct parameters: \(a_{11},..., b_{22}\). In general nothing can constrain their values, but in our case we do have an additional relation: the Leibniz identity, which arises out of the functoriality of time evolution. This will be enough to fully determine the products \(\alpha\) and \(\sigma\), and from them the formalism of quantum mechanics. Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-17661262499437536722017-03-26T18:30:00.000-04:002017-04-09T17:53:59.273-04:00<h2 style="text-align: center;">Time as a continuous functor</h2><div><br /></div><div>To recall from prior posts, a functor maps objects to objects and arrows to arrows between two categories. 
In other words, it is structure preserving. In the case of a monoidal category, suppose there is an arrow \(*: C\times C \rightarrow C\). Then a functor T makes the diagram below commute:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-CUUpUxprXSA/WNgTFMIrt-I/AAAAAAAABLA/PwSgLfe0WyckeG022zqPP7r13Fg0HMEfgCLcB/s1600/time.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="196" src="https://3.bp.blogspot.com/-CUUpUxprXSA/WNgTFMIrt-I/AAAAAAAABLA/PwSgLfe0WyckeG022zqPP7r13Fg0HMEfgCLcB/s320/time.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">This is all fancy abstract math which has a simple physical interpretation when T corresponds to time evolution: <b>the laws of physics do not change in time. </b>Moreover it can be shown with a bit of effort and knowledge of C* algebras that <b>Time as a functor = unitarity</b>.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">But what can we derive from the commutative diagram above? With the additional help of two more very simple and natural ingredients we will be able to reconstruct the complete formalism of quantum mechanics!!! Today I will introduce the first one: time is a continuous parameter. Just like in group theory, where adding continuity results in the theory of Lie groups, we will consider <b><u>continuous functors</u> and we will investigate what happens in the neighborhood of the identity element.</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In the limit of time evolution going to zero T becomes the identity. 
For infinitesimal time evolution we can then write:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">\(T = I + \epsilon D\)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We plug this back into the diagram commutativity condition \(T(A)*T(B) = T(A*B)\) and to first order we obtain the chain rule of differentiation:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">\(D(A*B) = D(A)*B + A*D(B)\)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">There is not a single kind of time evolution and \(D\) is not unique (think of various Hamiltonians). There is a natural transformation between different time evolution functors and we can express D as an operation like this: \(D_A = A\alpha\) where \((\cdot \alpha \cdot)\) is a product.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">\(\alpha : C\times C \rightarrow C\)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Then we obtain the Leibniz identity:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">\(A\alpha (B * C) = (A\alpha B) * C + B * (A \alpha C)\)</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This is extremely powerful, as it is unitarity in disguise. Next time we'll use the tensor product and the second ingredient to obtain many more mathematical consequences. 
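For a concrete check of the Leibniz identity, take \(D\) to be the generator of Heisenberg time evolution with some Hamiltonian H (a hypothetical example of mine, not from the post; the product \(*\) is taken to be operator multiplication and \(\hbar = 1\)):

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(3)

def rand_hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T

H = rand_hermitian(3)   # a hypothetical Hamiltonian (arbitrary choice)

def D(a):
    # generator of time evolution: D = H alpha (.) = (i/hbar)[H, .]
    return (1j / hbar) * (H @ a - a @ H)

A, B = rand_hermitian(3), rand_hermitian(3)

# Leibniz identity D(A*B) = D(A)*B + A*D(B) for * = operator multiplication
print(np.allclose(D(A @ B), D(A) @ B + A @ D(B)))  # True
```

The check works for any H because the commutator with a fixed operator is a derivation of the operator product, which is the algebraic shadow of unitarity described above.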
Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-88643292734778322672017-03-19T19:13:00.000-04:002017-03-19T19:13:50.992-04:00<h2 style="text-align: center;">Monoidal categories and the tensor product</h2><div><br /></div><div><br /></div><div>Last time we discussed the category theory product which forms another category from two categories. Suppose now that we start with one category \(C\) and form the product with itself \(C\times C\). It is natural to see if there is a functor from \(C\times C\) to \(C\). If such a functor exists and moreover it respects associativity and unit elements, then the category \(C\) is called a <b>monoidal category. </b>By abuse of notation, the functor above is called the tensor product, but this is not the usual tensor product of vector spaces. The tensor product of vector spaces is only one concrete example of a monoidal product. To get to the ordinary tensor product we need to inject physics into the problem. </div><div><br /></div><div>The category \(C\) we are interested in is that of physical systems where the objects are physical systems, and arrows are compositions of physical systems. The key physical concepts needed are those of <b>time</b> and <b>dynamical degree of freedom </b>inside the <b>Hamiltonian formalism.</b></div><div><br /></div><div>Time plays a distinguished role in quantum mechanics both in terms of formalism (remember that there is no time operator) and in how quantum mechanics can be reconstructed. </div><div><br /></div><div>The space in the Hamiltonian formalism is a Poisson manifold, which is not necessarily a vector space; but because the Hilbert space \(L^2 (R^3\times R^3)\) is isomorphic to \(L^2 (R^3 ) \otimes L^2 (R^3 )\), let's discuss monoidal categories for vector spaces obeying an equivalence relationship. 
Hilbert spaces form a category of their own and there is a functor mapping physical systems into Hilbert spaces. This is usually presented as the first quantum mechanics postulate: each physical system is associated with a complex Hilbert space H.</div><div><br /></div><div>For complete generality of the definition of the tensor product we consider two distinct vector spaces V and W for which we first consider the category theory product (in this case the Cartesian product) but for which we make the following identifications:</div><div><ul><li>\((v_1, w)+(v_2, w) = (v_1 + v_2, w)\)</li><li>\((v, w_1)+(v, w_2) = (v, w_1 + w_2)\)</li><li>\(c(v,w) = (cv, w) = (v, cw)\)</li></ul><div>For physical justification, think of V and W as one dimensional vector spaces corresponding to distinct dynamical degrees of freedom. Linearity is a property of vector spaces and we expect this property to be preserved if vector spaces are to describe nature. Bilinearity in the equivalence relationship above arises because the degrees of freedom are independent.</div></div><div><br /></div><div>Now a Cartesian product of vector spaces respecting the above relationships is a new mathematical object: a tensor product.</div><div><br /></div><div>The tensor product is unique up to isomorphism and respects the following <a href="https://en.wikipedia.org/wiki/Universal_property" target="_blank">universal property</a>:</div><div><br /></div><div>There is a bilinear map \(\phi : V\times W \rightarrow V\otimes W\) such that given <b>any</b> other vector space Z and a bilinear map \(h: V\times W \rightarrow Z\) there is a unique linear map \(h^{'}: V\otimes W \rightarrow Z\) such that the diagram below commutes.</div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://3.bp.blogspot.com/-5hwrfRw5nIM/WM8KagnQ8wI/AAAAAAAABKo/iWiQyIfEqqU3GXb9epQIqqooEqTokZKuwCLcB/s1600/tensor.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://3.bp.blogspot.com/-5hwrfRw5nIM/WM8KagnQ8wI/AAAAAAAABKo/iWiQyIfEqqU3GXb9epQIqqooEqTokZKuwCLcB/s1600/tensor.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">This universal property is very strong and several mathematical facts follow from it: the tensor product is unique up to isomorphism (instead of Z consider another tensor product \(V\otimes^{'}W\) ), the tensor product is associative, and there is a natural isomorphism between \(V\otimes W\) and \(W\otimes V\) making the tensor product an example of a <a href="https://en.wikipedia.org/wiki/Symmetric_monoidal_category" target="_blank">symmetric monoidal category</a>, just like the category of physical systems under composition.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This may look like an insignificant, trivial observation, but it is extremely powerful and it is the starting point of quantum mechanics reconstruction. <b>On one hand we have composition of physical systems and theories of nature describing physical systems. On the other hand we have dynamical degrees of freedom and the rules of quantum mechanics. The two things are actually identical and each one can be derived from the other. </b>To do this we need one additional ingredient: time viewed as a functor. 
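The identifications defining the tensor product are exactly the bilinearity of the Kronecker product, which can be checked directly (my own sketch; the vectors and the scalar are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
v1, v2 = rng.normal(size=2), rng.normal(size=2)
w = rng.normal(size=3)
c = 2.5

# (v1 + v2) (x) w = v1 (x) w + v2 (x) w
print(np.allclose(np.kron(v1 + v2, w), np.kron(v1, w) + np.kron(v2, w)))  # True

# c (v (x) w) = (c v) (x) w = v (x) (c w)
print(np.allclose(c * np.kron(v1, w), np.kron(c * v1, w)))  # True
print(np.allclose(np.kron(c * v1, w), np.kron(v1, c * w)))  # True
```

These are precisely the three identifications imposed on the Cartesian product above, realized concretely for coordinate vectors.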
Please stay tuned.</div><div><br /></div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-78499928397017430542017-03-13T00:21:00.000-04:002017-03-13T00:21:36.966-04:00<h2 style="text-align: center;">Category Theory Product</h2><div><br /></div><div>Before we discuss this week's topic, I want to make two remarks about the prior posts' content. First, why do we need natural transformations in algebraic topology? Associating groups to topological spaces (which incidentally describe the hole structure of the space) is done by the use of functors. Different (co)homology theories are basically different functors, and their equivalence is the same as proving the existence of a natural isomorphism. Second, the logic used in category theory is intuitionistic logic where truth is proved constructively. Since this is mapped into computer science by the Curry-Howard isomorphism, the fact that some statements have no constructive proof is equivalent to a computation running forever. In computation theory one encounters <a href="https://en.wikipedia.org/wiki/Halting_problem" target="_blank">the halting problem</a>. If the halting problem were decidable then category theory would have been mapped to ordinary logic instead of intuitionistic logic.</div><div><br /></div><div>Now back to the topic of the day. We are still in the pure math domain and we are looking at mathematical objects from 10,000 feet, disregarding their nature and observing only their existence and their relationships (objects and arrows). The first question one asks is how to construct new categories from existing ones. One way is to simply reverse the direction of all arrows and the resulting category is unsurprisingly called the opposite category (or the dual). Another way is to combine two categories into a new one. Enter the concept of a product of two categories: \(\mathbf{C}\times \mathbf{D}\). 
In set theory this would correspond to the Cartesian product of two sets. However we need to give a definition which is independent of the nature of the elements. Moreover we want to give it in a way which guarantees uniqueness up to isomorphism. </div><div><br /></div><div>The basic idea is that of a projection from the elements of \(\mathbf{C}\times \mathbf{D}\) back to the elements of \(\mathbf{C}\) and \(\mathbf{D}\). So how do we know that those projections and the product are unique up to isomorphism? Suppose that there is another category \(\mathbf{Y}\) with maps \(f_C\) and \(f_D\). Then there is a unique map \(f\) such that the diagram below commutes</div><div><br /></div><div><br /></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-RNLjwZadnjs/WMYT7wKhrFI/AAAAAAAABJE/IhaFCcX3po4pGii2Nm0iFq8cBfKcIt3VwCLcB/s1600/Prod.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="120" src="https://1.bp.blogspot.com/-RNLjwZadnjs/WMYT7wKhrFI/AAAAAAAABJE/IhaFCcX3po4pGii2Nm0iFq8cBfKcIt3VwCLcB/s320/Prod.png" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">This diagram has to commute for all categories \(\mathbf{Y}\) and their maps \(f_C\) and \(f_D\). From this definition, can you prove uniqueness of the product up to isomorphism? It is a simple matter of "diagram reasoning". Just pretend that Y is now the "true incarnation" of the product. You need to find a morphism f from Y to CxD and a morphism g from CxD to Y such that \(f\circ g =1_{C\times D}\), \(g\circ f = 1_Y\). See? 
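The same diagram can be sketched concretely in the category of sets and functions (a toy example of mine; the carriers Y, C, D and the maps are arbitrary choices):

```python
# The product of two sets C and D: the carrier is the set of pairs,
# the projections are fst and snd, and for any set Y with maps
# f_C: Y -> C and f_D: Y -> D there is exactly one mediating map
# f = <f_C, f_D>: Y -> C x D making the diagram commute.

def fst(p):   # projection C x D -> C
    return p[0]

def snd(p):   # projection C x D -> D
    return p[1]

def mediate(f_c, f_d):
    # the unique f: Y -> C x D with fst . f = f_c and snd . f = f_d
    return lambda y: (f_c(y), f_d(y))

# hypothetical example: Y = integers, C = strings, D = booleans
f_c = str
f_d = lambda y: y > 0

f = mediate(f_c, f_d)
y = 7
print(fst(f(y)) == f_c(y) and snd(f(y)) == f_d(y))  # True
```

Uniqueness is visible here too: any map into the set of pairs whose compositions with fst and snd are f_c and f_d must send y to exactly the pair (f_c(y), f_d(y)).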
Category theory is really easy and not harder than linear algebra.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Now what happens if we flip all arrows in the diagram above? We obtain a coproduct category \(\mathbf{C}\oplus \mathbf{D}\) and the projection maps become injection maps. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">OK, time for concrete examples:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"></div><ul><li>sets: product = Cartesian product, coproduct = disjoint union</li><li>partially ordered sets: product = greatest lower bounds (meets), coproduct = least upper bounds (joins)</li></ul><div>So where are we now? The concept of the product is very simple, but we need it as a stepping stone to the concept of tensor product and (symmetric) monoidal category. Why? Because <b>physical systems form a symmetric monoidal category</b>. Using categorical arguments we can derive the complete mathematical properties of any theory of nature describing a symmetric monoidal category. And the answer will turn out to be: quantum mechanics. Please stay tuned.</div>Florin Moldoveanuhttp://www.blogger.com/profile/01087655914212705768noreply@blogger.com0tag:blogger.com,1999:blog-3832136017893749497.post-87634074684798928382017-03-04T23:08:00.000-05:002017-03-05T00:44:53.316-05:00<h2 style="text-align: center;">The Curry–Howard isomorphism</h2><div><br /></div><div>Category theory may seem very abstract and intimidating, but in fact it is extremely easy to understand. In category theory we look at concrete objects from far away without any regard for the internal structure. This is similar to Bohr's position on physics: physics is about what we can say about nature, not about deciding what nature is. 
Surprisingly, a lot of information about the objects in category theory is derivable from how the objects behave in relation to one another, and this is where I am ultimately heading with this series on category theory.</div><div><br /></div><div>Last time I mentioned the origin of category theory as the formalism to clarify when two homology theories are equivalent. But category theory can be approached from two other directions as well, and those alternative viewpoints help provide the intuition needed to navigate the abstractions of category theory. One thread of discussion starts with the idea of computability and the work of <a href="https://en.wikipedia.org/wiki/Alonzo_Church" target="_blank">Alonzo Church</a> and <a href="https://en.wikipedia.org/wiki/Alan_Turing" target="_blank">Alan Turing</a>. Turing was Church's student, and each started an essential line of research: Church's <a href="https://en.wikipedia.org/wiki/Lambda_calculus" target="_blank">lambda calculus</a> and Turing's <a href="https://en.wikipedia.org/wiki/Universal_Turing_machine" target="_blank">universal Turing machines</a>. These later grew, on one hand, into functional languages like JavaScript, and on the other hand into object-oriented languages like C++. What one can do with lambda calculus can be achieved with universal Turing machines, and the other way around. The essential idea of computer programming is to build complex structures out of simpler building blocks. Object-oriented programming starts from the idea of packaging together actions and states. An object is a "black box" containing actions (functions performing computations) and information (the internal state of the object). Functional programming, on the other hand, lacks the concept of an internal state: you deal only with functions which take an input, crunch the numbers, and produce an output. 
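The contrast between the two styles fits in a toy example. Here is a hypothetical Python sketch (not from the post): a counter as an object with hidden state, versus a pure function where the state is threaded explicitly.

```python
# Object-oriented style: actions and internal state packaged together.
class Counter:
    def __init__(self):
        self._state = 0          # hidden internal state

    def increment(self):
        self._state += 1         # an action that mutates the state
        return self._state

# Functional style: no internal state; the output depends only on the input.
def increment(n):
    return n + 1

c = Counter()
c.increment()
c.increment()
print(c.increment())                        # 3: the object remembers its history
print(increment(increment(increment(0))))  # 3: the state is passed along explicitly
```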
The name of one of the earliest languages captures the spirit of this input-to-output translation: FORTRAN, FORmula TRANslation (from higher-level, human-understandable syntax into the zeroes and ones understandable by a machine).<br /><br />The second direction from which one can start category theory is <a href="https://en.wikipedia.org/wiki/Intuitionistic_logic" target="_blank">intuitionistic logic</a> and the foundation of set theory. The problem of naive set theory is that one can create paradoxes like <a href="https://en.wikipedia.org/wiki/Russell's_paradox" target="_blank">Russell's paradox</a>: the set of all sets which are not members of themselves. The solution Russell proposed was <a href="https://en.wikipedia.org/wiki/Type_theory" target="_blank">type theory</a>. Types introduce structure to set theory, preventing self-referential constructions. In computer programming, types are semantic rules which tell us how to understand various sequences of zeros and ones from computer memory as integers, boolean variables, etc.<br /><br />In intuitionistic logic, statements are not true by simply disproving their falsehood; they are true by providing an explicit construction. Truth must be computed, and the parallel with computer programming is obvious. There is a name for this relationship, the <a href="https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspondence" target="_blank">Curry-Howard isomorphism</a>. <b>The mathematical formalism needed to rigorously spell out this correspondence is category theory. </b>At a high level:<br /><ul><li>proofs are programs</li><li>propositions are types</li></ul><div>More importantly, we can attach logical and programming meaning to category theory constructions, which helps dramatically reduce the difficulty of category theory to that of elementary linear algebra. </div><div><br /></div>There are two additional key points I want to make. First, category theory ignores the internal structure of the objects: they can be sets, topological spaces, posets, even physical systems. 
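The slogan "proofs are programs, propositions are types" can be made tangible even in Python's type hints. A sketch, with illustrative names (the correspondence is rigorous in typed lambda calculi, not in Python, which does not enforce these types at runtime):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition: (A and B) implies A.
# Proof: the first projection -- a program of exactly that type.
def and_elim(p: Tuple[A, B]) -> A:
    return p[0]

# Proposition: (A implies B) and (B implies C) together imply (A implies C).
# Proof: function composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

Reading the type signatures as logical formulas, writing a total function of a given type amounts to constructing a proof of the corresponding proposition, which is exactly the intuitionistic demand for an explicit construction.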
As such, <b>uniqueness is relaxed in category theory and things are unique only up to isomorphism. </b>Second, <b>we recover a strengthened form of uniqueness by seeking </b><a href="https://en.wikipedia.org/wiki/Universal_property" style="font-weight: bold;" target="_blank">universal properties</a><b>. </b>This gives category theory its abstract flavour:<b> </b>the generalization of standard mathematical concepts in category theory involves diagrams which must commute. The typical definition reads something like: "if there is an 'impostor' which claims to have the same properties as the concept being defined, then there exists a suitable isomorphism such that a certain diagram commutes, which guarantees that the impostor is nothing but a restatement of the same concept up to isomorphism". Next time I will talk about the first key definition we need from category theory, that of a product, and, by flipping the arrows, that of a coproduct. </div>Florin Moldoveanu
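The commuting-diagram recipe can be checked mechanically for the product of sets: any "impostor" \(Z\) equipped with maps into \(A\) and \(B\) factors through \(A \times B\) via a unique mediating map. A small Python sketch with hypothetical names (not code from the post):

```python
# For any set Z with maps f: Z -> A and g: Z -> B, there is exactly one
# mediating map <f, g>: Z -> A x B making both triangles of the product
# diagram commute.

def pairing(f, g):
    """The mediating map <f, g> : z |-> (f(z), g(z))."""
    return lambda z: (f(z), g(z))

pi1 = lambda p: p[0]   # projection A x B -> A
pi2 = lambda p: p[1]   # projection A x B -> B

Z = {0, 1, 2, 3}
f = lambda z: z % 2                            # f: Z -> A = {0, 1}
g = lambda z: "even" if z % 2 == 0 else "odd"  # g: Z -> B
h = pairing(f, g)

# Both triangles commute: composing h with the projections recovers f and g.
assert all(pi1(h(z)) == f(z) for z in Z)
assert all(pi2(h(z)) == g(z) for z in Z)
print("diagram commutes")
```

Uniqueness is forced by the commutation conditions themselves: any map `h` satisfying both equations must send `z` to `(f(z), g(z))`, so the impostor adds nothing beyond the Cartesian product up to isomorphism.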