It’s Entanglement, Not Complementarity
Issue #59
I find it amazing that 100 years down the line, all universities in all countries in the world still teach quantum physics the same old Copenhagen way. Why haven’t we learnt to do it better? The Copenhagen account of the famous double slit experiment usually goes like this: a quantum particle goes through two slits and produces interference (some would even question whether we can say that it goes through both slits – they would maintain that one would have to measure it in order to be able to say so). But, so the Copenhagen account continues, this is only true if we do not look at which slit the particle goes through. If, on the other hand, we do know which slit the particle goes through, then the interference disappears.

This account is misleading for a number of reasons. First, it gives the wrong impression that we, as observers, make a crucial difference to how the particle behaves. Second, it suggests that looking at superpositions collapses them, and it hints that what matters is our knowledge of how the particle behaves. All of this is, to put it bluntly, wrong.
First of all, observers are not needed to remove quantum interference (or to create it for that matter). In addition, there is no such thing as a collapse of quantum superpositions. Lastly, it does not matter one iota what we know or do not know about the quantum system. Quantum physics is as objective as the classical physics of Newton (warning: not every quantum physicist agrees with this).
You might say, “Oh, well, that’s just your opinion. It’s just your own interpretation of quantum mechanics.” That statement too would be wrong, and now we have a beautiful set of experiments to demonstrate why. I am talking about two recent Physical Review Letters papers announcing results from the groups of Jian-Wei Pan (one of the leading experimentalists in quantum information) and Wolfgang Ketterle (who shared the Nobel Prize in Physics for achieving Bose–Einstein condensation of atoms).
The experiment they performed (independently, but with similar setups) was a variant of one endlessly discussed by Bohr and Einstein. It was one of Einstein’s many attempts to find a contradiction in quantum mechanics. He insisted so often that he “didn’t believe God played dice with the universe” that at some point Bohr lost patience and told him: “Please stop telling God what to do”.
Einstein’s idea was this: when a particle goes through the upper slit, it gives an upward momentum kick to the screen containing the slits, while when the particle goes through the lower slit, it gives a momentum kick in the opposite direction (this is just a consequence of momentum conservation). So, Einstein says, if you measure the momentum of the screen, it tells you about the momentum of the particle, and then you can measure the position of the particle directly. It looks as though you can know a particle’s position and momentum at the same time. Ergo, Heisenberg’s uncertainty principle is violated and, since Einstein used quantum physics to prove that quantum physics is wrong, it must be that quantum physics is inconsistent. In other words, Einstein claimed that you can have interference while (seemingly at the same time) knowing which slit the particle goes through, and this is a clear violation of quantum physics.
It’s an ingenious argument, like all of Einstein’s arguments, but (unlike most of Einstein’s arguments) it is wrong. The way that Bohr replied to these kinds of arguments, however, was always convoluted and in terms of complementarity. The commonest view is that Bohr won the debates; however, it is not always entirely clear how and why. Bohr would say something like this. The wave and particle aspects of a quantum system are complementary and can never be confirmed simultaneously in one and the same experimental setup. So, if we want to see interference, this requires an experiment to be set up that lets the system behave like a wave. If we want to confirm that it’s a particle, then we must set up a detector, but this then prohibits any wavelike behaviour. As I said at the beginning, though, this narrative places too much emphasis on our choices of what to do and whether to measure or not.
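Bohr’s reply to this recoiling-screen argument can in fact be made quantitative without any complementarity talk. The following is my reconstruction of the standard back-of-the-envelope estimate (not a quote from either Bohr or the recent papers), for slits a distance $d$ apart, an observation screen a distance $L$ away, and a particle of wavelength $\lambda$:

```latex
% The transverse momentum kick to the slit-screen differs between
% the two paths by roughly
\Delta p \;\approx\; p\,\frac{d}{L} \;=\; \frac{h}{\lambda}\,\frac{d}{L}
        \;=\; \frac{h}{w},
% where w = \lambda L / d is the fringe spacing.  To tell the two
% kicks apart, the slit-screen's momentum must be known to better
% than \Delta p, so by the uncertainty relation its position is
% uncertain by at least
\delta x \;\gtrsim\; \frac{\hbar}{\Delta p} \;=\; \frac{w}{2\pi},
% a blur on the order of the fringe spacing: the pattern washes out.
```

So knowing which slit the particle went through is bought at exactly the price of the interference pattern; the two pieces of information cannot coexist.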
The experiments by Pan and Ketterle showed exactly what’s at stake (hint: think Schrödinger). They used a single atom that could be prepared in different states of momentum uncertainty: its momentum is in a superposition of different values. Now, a photon can be emitted by this atom, and we would like to see if this photon can interfere. The photon can be emitted in one direction, in which case the atom (like the screen) recoils in the opposite direction. But if the photon is emitted in another direction, then the atom recoils opposite to that direction. This way, the state of the atom becomes entangled with the state of the photon. More precisely, the photon being emitted in two different directions is entangled with two different corresponding recoils of the atom.
Now we are in a position to talk about when the photon interference takes place, and we don’t need wave-particle duality or any other complementarity-related language! If the photon is maximally entangled with the atom, it loses the ability to interfere. This happens when the kick that the photon gives to the atom when emitted is much larger than the atom’s initial momentum uncertainty. Then the kicks in the opposite directions become perfectly distinguishable states, which maximizes the amount of entanglement between the two. If, on the other hand, the kick is small compared to the uncertainty, the two atomic states after the emission remain virtually one and the same state, in which case there is little entanglement between the atom and the photon – and this leads to the possibility of photonic interference. One can say that in the former case, the atom has measured the path of the photon, while in the latter it has not.
Note, however, that all other states between “no” and “maximum” entanglement are possible. In these intermediate cases, the photon can still interfere, but the degree of interference is reduced according to the amount of entanglement. This “in-between scenario” is sometimes referred to as a weak measurement.
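The whole spectrum, from full interference to none, can be seen in a toy calculation. The sketch below is my own illustration (not the published setups): it models the atom’s recoil states as Gaussian momentum wavepackets of spread `sigma` kicked by `±kick/2`, and takes the photon’s fringe visibility to be the overlap of the two recoil states — large kick means distinguishable recoils, maximal entanglement, no interference; small kick means nearly identical recoils and full interference.

```python
# Toy model: fringe visibility = overlap of the two atomic recoil states.
import numpy as np

def visibility(kick, sigma, n=20001, span=50.0):
    """Overlap <psi_+|psi_-> of two Gaussian momentum wavepackets
    displaced by +/- kick/2, each with momentum spread sigma."""
    p = np.linspace(-span * sigma, span * sigma, n)
    norm = (2 * np.pi * sigma**2) ** -0.25
    psi_plus = norm * np.exp(-(p - kick / 2) ** 2 / (4 * sigma**2))
    psi_minus = norm * np.exp(-(p + kick / 2) ** 2 / (4 * sigma**2))
    # Real, positive wavefunctions here, so the overlap is just an integral
    return float(np.sum(psi_plus * psi_minus) * (p[1] - p[0]))

sigma = 1.0
for kick in (0.1, 1.0, 10.0):
    print(f"kick/sigma = {kick:5.1f}  ->  visibility = {visibility(kick, sigma):.4f}")
```

For Gaussians this overlap is exp(-kick²/8σ²): near 1 for a gentle kick, essentially 0 for a hard one, with every weak-measurement value in between.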
In general, the more entangled two systems are, the less quantumness (interference) each can exhibit individually. So the quantumness (entanglement) at the level of the total system is what prevents quantumness at the lower level of subsystems. This is an aspect of a more general principle, namely that quantum dynamics preserves quantumness. That’s all there is to it. No observers, no complementarity, no wave-particle duality. Or, as the philosopher A. J. Ayer might have put it, “boo Copenhagen and hurrah entanglement”.
Take care of yourselves,
Vlatko


I’m delighted to see high-caliber mathematicians and theoretical physicists getting interested in the theory behind deep learning.
One theoretical puzzle is why the type of non-convex optimization that needs to be done when training deep neural nets seems to work reliably. A naive intuition would suggest that optimizing a non-convex function is difficult because we can get trapped in local minima and get slowed down by plateaus and saddle points. While plateaus and saddle points can be a problem, local minima never seem to cause problems. Our intuition is wrong because we picture an energy landscape in low dimension (e.g. 2 or 3). But the objective function of deep neural nets often lives in 100 million dimensions or more. It’s hard to build a box in 100 million dimensions. That’s a lot of walls. There is a body of theoretical work from my NYU lab (look for Anna Choromanska as first author) and from Yoshua Bengio’s lab in this direction. It uses mathematical tools from random matrix theory and statistical mechanics.
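The random-matrix intuition behind that claim is easy to demonstrate numerically. The sketch below is my own toy (not the cited papers): model the Hessian at a random critical point as a random symmetric matrix and count how often all its eigenvalues are positive, i.e. how often the critical point is a local minimum rather than a saddle. The fraction collapses rapidly with dimension.

```python
# Fraction of random symmetric "Hessians" that are positive definite,
# i.e. whose critical point would be a local minimum and not a saddle.
import numpy as np

def minimum_fraction(dim, trials=2000):
    rng = np.random.default_rng(0)   # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        a = rng.standard_normal((dim, dim))
        hessian = (a + a.T) / 2                   # random symmetric matrix
        if np.linalg.eigvalsh(hessian)[0] > 0:    # smallest eigenvalue
            hits += 1
    return hits / trials

for dim in (1, 2, 5, 10):
    print(f"dim={dim:2d}: fraction that are local minima = {minimum_fraction(dim):.3f}")
```

In one dimension about half the critical points are minima; by ten dimensions essentially none are, because all ten eigenvalues would have to come out positive at once. That is the "lot of walls" in miniature.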
Another interesting theoretical question is why multiple layers help. All boolean functions of a finite number of bits can be implemented with 2 layers (using the conjunctive or disjunctive normal form of the function). But the vast majority of boolean functions require an exponential number of minterms in the formulas (i.e., an exponential number of hidden units in a 2-layer neural net). As computer programmers, we all know that many functions become simple if we allow ourselves to run multiple sequential steps to compute the function (multiple layers of computation). That’s a hand-wavy argument for having multiple layers. It’s not clear how to make a more formal argument in the context of neural net-like architectures.
Vlatko Vedral is right to reject the pedagogical mythology of Copenhagen. Interference disappears not because of observers, knowledge, or “looking,” but because which-path information becomes physically encoded in correlations. Entanglement, not complementarity-as-metaphor, does the real work here. On this point, the essay is both correct and necessary.
Where the argument overreaches is in presenting this clarification as a closure. Entanglement explains how local coherence is lost under interaction. It does not explain why particular decompositions into subsystems become physically or epistemically privileged, why some correlations function as records, or why classical appearances stabilize at all. These questions do not reintroduce observers or collapse; they arise precisely after those notions have been removed.
Vedral’s claim that quantum mechanics is “as objective as Newtonian physics” is therefore true only in a restricted dynamical sense. Quantum dynamics is objective, but quantum theory does not itself specify the conditions under which objectivity emerges as a shared, stable structure. Entanglement is necessary for decoherence, but not sufficient for appearance.
In short, replacing complementarity with entanglement is progress. Treating entanglement as the end of the foundational story is not. The problem has not vanished; it has shifted—from dynamics to the epistemic conditions under which dynamics become reality for anyone.
That is where the unfinished work remains.