This posting describes some of the history that many people may be bypassing in their appreciation of Quantum Mechanics.

Until about the 1920s, ‘light’ was largely thought to consist of waves. But a problem with that was how to explain why light can travel through apparently empty space. After all, the light that reaches us from distant stars is not fundamentally different from light that originates on Planet Earth. And so, well into the early 20th century, it was believed that there exists a mysterious “Aether”, which transmitted light through space.

A basic premise of wave propagation, such as in the case of sound waves, is that there must first be some sort of medium to conduct the waves – which, in the case of sound, is air. The need for a medium also explains why there is no sound in space.

But by the early 20th century, the existence of an aether had been disproved. Decisively. (The Michelson–Morley experiment had already failed to detect it as far back as 1887.) And so another explanation was needed of what constitutes light. And the thought seemed more logical that particles can easily travel through empty space – hence, photons. Even though this was not actually the first form in which photons were theorized.

But then obviously, this raises questions about how these particles are supposed to relate to waves, given that waves were at first easier to observe.

I think that the way Quantum Mechanics is presented to many people today is just as “Wave / Particle Duality”. But then what many students believe – and what I once believed myself – is that Quantum Mechanics holds some sort of secret key as to how Matter and Energy might simultaneously consist of particles and waves. In reality, QM holds no such decisive, secret answers. The only real secret which QM may hold is a detail that could be embarrassing to the present way in which QM works.

Quantum Mechanics teaches that particles are a primary phenomenon, and that waves are secondary to the existence of particles. And QM specialists find this to be an embarrassing subject.

According to Richard Feynman – a chief architect of modern QM, and specifically of quantum electrodynamics – photons have complex-numbered probability functions (more precisely, probability amplitudes), even though according to classical thinking, probabilities are only supposed to be real numbers from 0.0 to 1.0! Thus, the double-slit experiment, when applied to light, ‘works’ because the amplitudes of the photons come into and out of phase with each other, and even cancel out, even though all the involved probabilities may be non-zero. Complex numbers lie in a “complex plane”, complete with an absolute value and a phase angle. This was widely adopted, even though according to common sense, it’s absurd.

But then, the phenomenon is confirmed experimentally: an interference pattern will form when light is passed through a double-slit apparatus, even when it gets passed through as single photons that do not interact with each other. This can be observed with a single-photon detector, if need be… The constructive and destructive interference even emerges over time, when one photon at a time passes through *either* slit.
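As a crude numeric illustration of the difference between summing amplitudes and summing classical probabilities, here is a minimal Python sketch. The slit geometry is purely illustrative (my own hypothetical numbers), and real single-photon statistics would require sampling detections one at a time:

```python
import numpy as np

# Hypothetical double-slit geometry (illustrative numbers only):
wavelength = 500e-9        # 500 nm light
slit_separation = 50e-6    # 50 micrometres between the slits
screen_distance = 1.0      # 1 metre from the slits to the target

# Sample positions across the target screen:
x = np.linspace(-0.02, 0.02, 9)

# Path-length difference between the two slits (small-angle approximation):
delta = slit_separation * x / screen_distance

# One complex amplitude per slit; the detection probability is the squared
# magnitude of the SUM of the amplitudes -- this is where fringes come from.
a1 = np.ones_like(x) * np.exp(0j)
a2 = np.exp(2j * np.pi * delta / wavelength)

quantum = np.abs(a1 + a2) ** 2                  # oscillates between 0 and 4
classical = np.abs(a1) ** 2 + np.abs(a2) ** 2   # flat 2.0: no fringes
```

The `quantum` curve dips to zero at some screen positions even though both single-slit probabilities are non-zero there, which is the cancellation described above.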

But now, QM goes further and suggests that all fundamental particles have wave functions, not only photons. Photons are merely the case in which the wavelike property is most evident.

And so Quantum Mechanics essentially solves the riddle of Wave / Particle duality, by getting rid of it.

But doing so has led to many issues of logic and Math, and the subject is still under intense study. It’s not a finished work of art.

(Edit 02/16/2018 : )

There’s an observation I’d like to add about these complex-numbered probabilities, which are also called “Feynman Amplitudes”. Feynman asserted, perhaps accidentally, that the probability with which any photon would strike a region on the target is the sum of the probabilities with which it would pass through each slit and land on that region of the target.

If somebody tosses two coins, the probability that at least one coin lands face-up is not just the sum of the probabilities that each coin lands face-up. Rather, it is that sum, minus the probability that both coins land face-up at the same time, i.e.,

0.5 + 0.5 – 0.25
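That arithmetic is easy to verify by enumerating all four equally likely outcomes, for instance in Python:

```python
from itertools import product

# All four equally likely outcomes of tossing two fair coins:
outcomes = list(product("HT", repeat=2))

# Fraction of outcomes in which at least one coin shows heads:
at_least_one_head = sum("H" in o for o in outcomes) / len(outcomes)

# Inclusion-exclusion: the sum of the individual probabilities,
# minus the probability that both coins land face-up:
inclusion_exclusion = 0.5 + 0.5 - 0.25

print(at_least_one_head, inclusion_exclusion)  # both 0.75
```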

Well, if photons are really particles, then the probability that one could pass through both slits is actually zero. And then, one way or another, the probability that a photon lands on the region of the target is just the sum of the probabilities.

And so it would seem to me that what may have started as an error proved itself to be extremely accurate over decades of testing, and ended up confirming the hypothesis in a way its author did not specifically intend.

Because that description was rather informal, I’m going to follow it up with a more-formal description.

If (A) and (B) denote binary states that are random *and independent of each other*, then let (a) be the real-numbered probability of (A), and let (b) be the real-numbered probability of (B).

It would follow that

P( A AND B ) = ( a b )

P( NOT A ) = 1 – a

But, given the classical, logical OR operator,

P( A OR B ) = 1 – ( 1 – a ) ( 1 – b )

= 1 – ( 1 + ( a b ) -a -b )

= a + b – ( a b )
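This identity can also be spot-checked by simulation. A minimal Python sketch, with arbitrarily chosen probabilities:

```python
import random

random.seed(42)

a, b = 0.3, 0.6          # arbitrary probabilities of independent events
trials = 200_000

# Count the trials in which A occurs, B occurs, or both occur:
hits = 0
for _ in range(trials):
    occurred_a = random.random() < a
    occurred_b = random.random() < b
    if occurred_a or occurred_b:
        hits += 1

estimate = hits / trials
exact = a + b - a * b    # = 1 - (1 - a)(1 - b) = 0.72
```

With 200,000 trials, the Monte-Carlo estimate should land within about a tenth of a percentage point of the exact value.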

Note the -( a b ) term. The absence of this term from the Feynman Amplitude implies that the two slits are not operating independently of each other, insofar as ( a b ) still represents whatever probability there would be of *both* slits passing the photon. The photon passing through one slit somehow negates the possibility that it could be passing through the other slit.

To my mind, this parallels the wave-based description of how a ‘diffractive lens’ can sometimes *focus* light, thereby creating a hot-spot which has greater intensity than the original beam of light.

(Edit 02/19/2018 : )

I suppose that one issue with the simplification I gave above would be that it assumes – as intuitive reasoning would – that the probabilities (a) and (b) are real numbers, while QM theorizes that they are complex numbers. And if one wanted to extend the reasoning to complex probabilities, one would need to redefine ( A AND B ) . This term could be defined as, ‘By how much A and B agree, but stated in their average phase-position.’ By how much complex numbers (a) and (b) agree is easy to define, but has the side-effect that the logical answer will come in the real component, while ( a + b ) might be in some arbitrary phase-position, which would therefore not be compatible with ( a ⋅ b ) . And so this term would need to be adapted somehow to the phase-position of ( a + b ) . I could call the resulting logic operator ( A × B ) , just for now. And something like this might follow:

```
a = ( X_a + Y_a i )
b = ( X_b + Y_b i )

(a) As seen from (b) :
( a ⋅ b )  = ( X_a + Y_a i ) ( X_b - Y_b i )
X( a ⋅ b ) = ( X_a X_b + Y_a Y_b )

If a = -b :
P( A × B ) = -a |a| i

Otherwise,
P( A × B ) = X( a ⋅ b ) ( a + b ) / | a + b |
             (Cubic Term) / (First-Order Term)
```
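A minimal numeric sketch of this tentative operator follows, in Python. The function name `cross_op`, and the tolerance for detecting the a = -b case, are my own ad-hoc choices:

```python
def cross_op(a: complex, b: complex) -> complex:
    """Sketch of the speculative ( A x B ) operator described above."""
    s = a + b
    if abs(s) < 1e-12:
        # Degenerate case a = -b : return ( -a |a| i )
        return -a * abs(a) * 1j
    # X( a . b ) = Xa Xb + Ya Yb, the real part of a times conj(b):
    x_ab = (a * b.conjugate()).real
    # Re-phase the real-valued "agreement" into the direction of ( a + b ):
    return x_ab * s / abs(s)

# Two amplitudes in phase agree fully:
print(cross_op(1 + 0j, 1 + 0j))   # (1+0j)
# Amplitudes at right angles do not agree at all:
print(cross_op(1 + 0j, 1j))       # 0j
```

Note that because only the real part of ( a ⋅ b ) is used, swapping the operands gives the same result, consistent with the last observation below.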

I think that there are four observations about this little thought-experiment which might actually be relevant to the subject of QM:

- While the normalization is questionable, it suggests that frequently, there will be a non-zero result, based on X( a ⋅ b ) frequently being non-zero.
- This expression contains the term ( a + b ) , which the accepted Feynman Amplitudes also contain, and which might later factorize with them.
- In the case where | a + b | = 0 ; a = -b , one might think that the situation corresponds to (-1) in some way, in which there wouldn’t be a dominating phase-position to adapt to. But because ( a + b ) is at right angles to (a) close to where ( a + b ) becomes zero, it might make most sense to suggest that the result should be ( -a |a| i ) . Yet, the main goal in processing this discontinuous point in the domain of ( a + b ) should be that it not interfere when many Amplitudes are summed.
- If one chose to work with the full, complex value of ( a ⋅ b ) , then the order of the operands would affect the result, which it really should not.

But even though a non-zero result can be expected, this value is absent from the accepted Feynman Amplitudes, which experiments nevertheless show to be accurate.

I suppose we’d also need to define ( NOT A ) , and the result of that may not have a closed-form solution; all I know about it is,

```
| P( NOT A ) | = 1 - |a|
```

(Update 09/06/2018, 16h50 : )

I suppose that I should mention the fact that *some* Scientists have chosen to use the semantic convention of stating that the wave function of a particle exists, but without stating that this wave function is also its probability. But the mere existence of this wave function has been confirmed for about half a century by now.

I possess entire textbooks, which go on to mention the wave-function of electrons, without ever stating that that wave-function is also their probability.

By saying that the wave-function is merely a wave-function, some Scientists have also removed an answer that was once suggested, about how the wave-function is related to the particle. In Science, the sort of questions we’d want answers to are of the form:

- Why do all particles have wave-functions – confirmed experimentally whenever the computed de Broglie wavelength is such that they can be measured?
- What aspect of the particle, or what property of the particle, do these wave-functions represent?

The assertion that the waves are the probability of the particle itself, seemed to answer that question, even if it did so in an unnatural way. The absence of this assertion just renders the question unanswered again.

I suppose that I should also mention that because the de Broglie wavelength is entirely a function of the momentum of the particle – being Planck’s constant divided by that momentum – situations arise trivially in which this wavelength is shorter than the particle itself is large. This is of special significance when the thermal agitation in a medium already imparts enough momentum to all its particles for the wavelengths to be shorter than their size.
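For concreteness, the de Broglie wavelength λ = h / p takes only a couple of lines of Python to compute. The chosen speeds are purely illustrative:

```python
H = 6.626e-34   # Planck's constant, in J*s

def de_broglie_wavelength(mass_kg: float, speed_m_s: float) -> float:
    """lambda = h / p, with p = m * v (non-relativistic)."""
    return H / (mass_kg * speed_m_s)

# An electron at 1% of the speed of light: wavelength ~2.4e-10 m,
# comparable to atomic dimensions -- readily measurable.
electron = de_broglie_wavelength(9.109e-31, 3.0e6)

# A thermal ("cold enough") neutron at ~2200 m/s: wavelength ~1.8e-10 m.
neutron = de_broglie_wavelength(1.675e-27, 2200.0)
```

The neutron only reaches a comparable wavelength at a speed thousands of times slower than the electron’s, which is the point made above about more-massive particles needing to be cooled.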

In such situations, the ‘matter waves’ are thought not to be measurable. Hence, electrons, having a low mass, exhibit wave functions easily. But because neutrons are already much more massive than electrons, neutrons need to be cooled down (hence the use of ‘Cold Neutrons’ in certain experiments) before their wavelengths become observable. Even *Buckyballs* have shown interference, though only in carefully prepared, slowed molecular beams.

For this purpose, Protons and Neutrons do not count as fundamental particles. Instead, a proton is thought to consist of two Up Quarks plus one Down Quark, while a neutron is thought to consist of one Up Quark plus two Down Quarks. These Quarks are thought to be connected by Gluons, whose force *increases with distance*. In short, the actual size of a proton or a neutron is thought to be not one of the wavelengths of either, but rather the length of these Gluons, which is variable.

One concept which somebody once suggested to me was that the True Size of a particle is its “Compton Wavelength”. But because that wavelength is really just the wavelength of a photon with the energy-equivalent of the mass of the particle in question, if we were to apply it consistently, we’d obtain the result that more-massive particles are always smaller than less-massive particles. I leave it for the reader to decide whether a proton is actually smaller than an electron.
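To see that consequence numerically, here is a minimal Python check, using standard constants:

```python
H = 6.626e-34    # Planck's constant, J*s
C = 2.998e8      # speed of light, m/s

def compton_wavelength(mass_kg: float) -> float:
    """lambda_C = h / (m c): the wavelength of a photon whose energy
    equals the rest-mass energy of the particle."""
    return H / (mass_kg * C)

electron = compton_wavelength(9.109e-31)   # ~2.4e-12 m
proton = compton_wavelength(1.673e-27)     # ~1.3e-15 m

# Under the "true size" reading, the proton comes out roughly 1836 times
# smaller than the electron -- the mass ratio, inverted:
print(proton < electron)  # True
```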

But the reader should also be aware that when Scientists formulated theories about the Strong Nuclear Force, they probably ‘normalized’ those theories in such a way that the length of Gluons reflects the Compton Wavelength of a Proton.

Dirk
