Locality and Determinism under Bell Tests and Delayed Choice Experiments

Daniel J. Castellano

(2020)

[Full Table of Contents]
10. Implications of Bell’s Inequality Violations
    10.1 Later Formulation of Bell’s Inequality
    10.2 Experimental Confirmation and the Observer Problem
11. Wheeler’s Delayed Choice Experiments
12. ’t Hooft’s Challenge to the Free Choice Postulate
    12.1 Problems with Conventional Interpretations of Quantum Mechanics
    12.2 Bell’s Theorem and the Assumption of Freedom
    12.3 Time Dependence of Operators
    12.4 Evolution of Physical Variables with Unconstrained Initial States
    12.5 Realist Interpretation of Operators
    12.6 Restoration of Determinism?
13. Free Will and the Big Bell Test

10. Implications of Bell’s Inequality Violations

Bell’s theorem had the stunning implication that a seemingly metaphysical dispute about the interpretation of quantum mechanics could actually be subjected to an empirically testable inequality relation. No local deterministic hidden variables theory whatsoever can account for a violation of this inequality. Violations of Bell’s inequality have in fact been verified by a variety of experiments, so it is generally agreed that a purely Einsteinian account of quantum mechanics, upholding both locality and determinism, is impossible.

Some have gone further and suggested that, since there is nothing preventing λ from being a random variable, even non-deterministic systems obeying the principle of locality would be excluded by Bell’s inequality violations. On the contrary, regardless of whether the parameters λ themselves are specified in a deterministic or non-deterministic manner, Bell’s proof supposes that, once specified, they should fully determine (in combination with the wavefunction) the outcomes of each individual measurement in the system under discussion (the singlet state). So we are only testing the supposition that the physical system is (proximately) deterministic.

The basic logic of Bell’s argument may be expressed as follows:

  1. Quantum theory yields anticorrelation of measurements for pairs of particles that were in the singlet state, so σ1·a = −σ2·a for any arbitrary direction a, where σi·a denotes the spin of particle i measured along a.
  2. Suppose Einstein’s principle of locality holds for this system, so that measurement of one particle cannot instantaneously affect the outcome of measuring the other particle some distance away.
  3. (1) and (2) imply that the physical system is deterministic.
  4. The wavefunction does not completely specify the result of each individual measurement.
  5. (3) and (4) imply that there must be some parameters λ which, in combination with the wave function, completely specify the result of each individual measurement in advance.
  6. (1) and (5) and the axioms of probability theory imply Bell’s inequality.

If the inequality is violated, then at least one of the premises must be false. Granting the correctness of premise (1) as experimentally confirmed, Bell concluded that the violation of the inequality implied that any deterministic specification of the system, if it exists, must be non-local, so not Lorentz-invariant. Thus any deterministic account of the EPR paradox would violate special relativity. Bell considered, however, that the anti-correlation in singlet states (premise 1) might hold only in experiments where the instrument settings (orientation toward some a) are fixed far enough in advance that signals at or below the speed of light could reach the other particle. More rigorous experimental conditions were needed to rule out this possibility.

Also implicit in premise (1) is a certain freedom of choice in the direction a, at least to the extent that it is independent of any physical process that affects the other particle. This is critical to our interpretation of premise (2), since what is relevant is the local time at which the orientation of the magnet is decided, if not by free will, at least by a process that is statistically independent of the system under observation.

Lastly, the singlet anti-correlation has the special condition of being stationary at the extreme of -1. This allows us to infer determinism from locality, for, once the first particle’s spin is measured in a certain direction, we can predict with certainty that any future spin measurement (where future is defined with respect to the first measurement) of the other particle in that direction will result in the opposite value.
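
As an illustration of premise (1), the singlet correlations can be computed directly from elementary two-qubit quantum mechanics. The following sketch is my own illustration, not part of Bell’s argument (Python with numpy; measurement directions confined to the x-z plane for simplicity). It exhibits both the perfect anticorrelation along a common axis and the −cos θ dependence for axes separated by angle θ:

    import numpy as np

    # Pauli matrices; a spin direction at angle theta from z, in the x-z plane.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def spin_along(theta):
        return np.cos(theta) * sz + np.sin(theta) * sx

    # Singlet state (|+-> - |-+>)/sqrt(2) in the z basis.
    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def E(theta_a, theta_b):
        # Expectation of the product of the two spin results (in units of hbar/2).
        op = np.kron(spin_along(theta_a), spin_along(theta_b))
        return np.real(singlet.conj() @ op @ singlet)

    print(E(0.0, 0.0))        # -1.0: perfect anticorrelation along the same axis
    print(E(0.0, np.pi / 3))  # -0.5 = -cos(60 deg)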

Violations of Bell’s inequality have been confirmed in various experiments. These have often been interpreted as disproving realism, ignoring that realism in the context of Bell’s theorem entails both locality and determinism. As Bell himself recognized, deterministic accounts such as Bohm’s are still viable, as long as they are non-local. Alternatively, we might accept that quantum mechanics is fundamentally non-deterministic, while still obeying the locality principle as required by relativity. Historically, most physicists adopted the latter option, disbelieving not only in determinism, but also (quite unnecessarily) any underlying realism, while accepting the principle of locality. The apparently non-local anticorrelation was reconciled with locality by arguing that no information about one measurement could be conveyed to the other location, so there was no faster-than-light signalling, or else by repeating Bohr’s denial that the system was resolvable into parts.

10.1 Later Formulation of Bell’s Inequality

Modern discussions of Bell’s inequality use a later formulation in terms of the probabilities of paired measurement outcomes rather than expectation values. This follows a model conceived by E.P. Wigner (distinct from his Wigner’s friend thought experiment), as presented by J.J. Sakurai,[70] a personal friend of Bell who discussed the paradox with him at length.

Suppose it were possible to prepare particles of a type such that, if the z-spin is measured, we certainly get a value of +1, and if the x-spin is measured, we certainly get a value of -1. We denote that particle type as (z+, x-), and so on for other combinations of values +1 and -1. By non-commutativity of spin operators, we can only measure one or the other spin coordinate, and in doing so destroy the uniformity of the other coordinate spin value. Nonetheless, prior to measurement, anti-correlation requires that particles prepared in such types should be opposite-valued, i.e., if particle 1 in an entangled pair is of type (z+, x-), then particle 2 must be of type (z-, x+), and so on for other permutations of + and -.

Einstein’s principle of locality is incorporated by assuming that a measurement of z-spin for a (z+, x-) particle or a (z+, x+) particle will certainly yield +1, regardless of whether we measure x- or z-spin on its distant partner. So we are not testing realism alone, but a specific kind of realism, in combination with locality. The specific kind of realism is that supposed by our ability to prepare these particle types (z±, x±) so that a definite value for each spin coordinate is guaranteed, regardless of which coordinate is chosen for measurement. These hypothetical types also involve a supposition of determinism, as the value of spin, once measured, remains the same for all time, being fully determined by the choice of measurement axis and prior particle type.

We might explain the statistical results of EPR experiments on the supposition that we begin with a mix of particles that are 25% in each of the four possible types (z±, x±), paired with opposite-valued partners. This replication of quantum theory is not possible, however, if we are dealing with three measurement axes that are not necessarily orthogonal. We now have eight possible types: (a±, b±, c±).
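
The two-axis claim is easy to verify by brute force. The sketch below is my own illustration (Python): it enumerates the four types for particle 1, assigns each partner the opposite values, and recovers the quantum pair statistics for measurements along two orthogonal axes z and x:

    from itertools import product

    # (z, x) values for particle 1; each of the four types occurs with frequency 25%.
    types = list(product([+1, -1], repeat=2))

    def pair_prob(axis1, axis2, out1, out2):
        # P(particle 1 gives out1 on axis1, particle 2 gives out2 on axis2).
        idx = {"z": 0, "x": 1}
        hits = 0
        for t in types:
            v1 = t[idx[axis1]]
            v2 = -t[idx[axis2]]   # the partner carries opposite values
            if v1 == out1 and v2 == out2:
                hits += 1
        return hits / len(types)

    print(pair_prob("z", "z", +1, +1))  # 0.0  (QM: 0, perfect anticorrelation)
    print(pair_prob("z", "z", +1, -1))  # 0.5  (QM: 1/2)
    print(pair_prob("z", "x", +1, +1))  # 0.25 (QM: 1/4 for orthogonal axes)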

Suppose we conduct an EPR experiment on a sample of pairs of particles belonging to these eight types, and that the relative frequency of the ith type for particle 1 of the pair is Ni. Let P(a+, b+) be the probability that particle 1 yields a measured a-axis spin of +1 and particle 2 yields a b-axis spin of +1, with similar expressions for the probabilities of other possible paired outcomes of a-, b-, or c-spin measurements. By anticorrelation, P(a+, b+) is simply the relative frequency of pairs whose particle 1 is of type (a+, b-, c+) or (a+, b-, c-); the first of these types is also counted in P(c+, b+) and the second in P(a+, c+). Since all Ni ≥ 0, this inequality relation necessarily follows:

P(a+, b+) ≤ P(a+, c+) + P(c+, b+).

If, for simplicity, we choose a, b, c to all be coplanar, with c bisecting the angle formed by a and b, then the quantum mechanically predicted distribution of outcomes violates the inequality whenever the angle between a and b is greater than 0° and less than 180°, with the largest violation at 120°.
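
The violation is easy to exhibit numerically. Using the standard singlet prediction that the probability of both particles yielding +1 along directions separated by angle θ is ½ sin²(θ/2) (a quantum mechanical result, stated here without derivation), we can scan the bisected geometry (my own illustration, in Python):

    import numpy as np

    def P(theta):
        # Singlet prediction for both measurements yielding +1.
        return 0.5 * np.sin(theta / 2) ** 2

    for deg in (30, 60, 90, 120, 150, 179):
        theta = np.radians(deg)
        lhs = P(theta)           # P(a+, b+)
        rhs = 2 * P(theta / 2)   # P(a+, c+) + P(c+, b+), with c bisecting
        print(deg, round(lhs, 4), round(rhs, 4),
              "violated" if lhs > rhs else "obeyed")
    # Every angle strictly between 0 and 180 degrees yields a violation,
    # with the largest gap at 120 degrees.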

Violations of this inequality imply a failure of one of the premises. We must either deny the principle of locality, or deny any form of realism where particles can be of types with definite predetermined values of a-, b- and c-spin measurements. This realism need not suppose that non-commuting observables can be measured simultaneously, nor even that a definite value is realized prior to measurement, but only that some intrinsic property of the particle determines the outcome of such a measurement with certainty, regardless of which axis is measured. As in the original version of Bell’s theorem, the essential criterion is determinism, so once again we are left with a choice between denying determinism or denying locality.

10.2 Experimental Confirmation and the Observer Problem

Bell’s inequality violations were confirmed experimentally with anticorrelated spins in protons and anticorrelated polarizations in photons.[71] Thus EPR’s version of realism, which entails both determinism and locality, would seem to be decisively refuted. We can make no appeals to ignorance or incompleteness of quantum theory, for Bell’s theorem countenances a highly generalized notion of determining factors λ (which make possible the preparation of the types in Wigner’s version). Thus a violation of Bell’s inequality is incompatible with any deterministic system whatsoever, no matter how abstruse or metaphysical, if we accept the axioms of probability theory for real-valued probabilities. Random sampling error can be ruled out in the better experiments, one of which showed a violation of over nine standard deviations. That means, if the underlying system really did not violate Bell’s inequality, the probability of getting a result at least as extreme as what was observed is around 1 in 10^20!
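
To see how such figures are obtained, one may convert a violation of n standard deviations into a tail probability, under the simplifying assumption of a one-sided Gaussian statistic (the published analyses are more involved; this is only a back-of-envelope check of my own):

    from math import erfc, sqrt

    def tail(n_sigma):
        # One-sided Gaussian tail: probability of a deviation of at least n_sigma.
        return 0.5 * erfc(n_sigma / sqrt(2))

    print(tail(5))   # ~2.9e-07: about 1 in 3.5 million (cf. the Aspect result below)
    print(tail(9))   # ~1.1e-19: on the order of 1 in 10^19, smaller still beyond 9 sigma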

This leaves three logical alternatives: local non-determinism, non-local determinism, and non-local non-determinism. Initially, most physicists adopted the first option, preferring to accept randomness as a fundamental reality rather than contradict relativity with non-locality. Thus most physicists continued to believe in locality long after violations of Bell’s inequalities, choosing instead to disbelieve in realism (i.e., ontological definiteness of properties between observations), or at least to discard determinism, which all but a few holdouts had left behind even before the 1960s. By the 1990s, however, non-local interpretations were openly held by many mainstream physicists, even those who were not trying to save determinism. Alternative ontological interpretations have generally accepted Bell’s dilemma, either preserving determinism while accepting non-locality, or preserving locality while accepting non-determinism. The first alternative was taken by David Bohm, who acknowledged non-locality in the quantum information determining particle trajectories. Robert Griffiths, by contrast, preserved locality in his consistent histories interpretation, while accepting randomness and ontological ambiguity, even arguing that the quantum world is governed by a different kind of logic.

All the early EPR experiments were devised with static setups, so measurement settings could not be changed, i.e., one always measured the same axis for repeated trials. Bell had assumed that the choice of axis for measuring the first particle could not affect the outcome of measuring the other particle when it was some distance away. This assumption is equivalent to the relativistic principle of locality only if the two measurement events are spatially separated in the relativistic sense that there can be no signaling between them at light or sublight speeds. If a light-speed or sublight speed signal between these events is possible, then it is at least conceivable, though physically implausible, that the act of setting the first detector in a certain direction, or the act of measuring the first particle, can somehow convey a signal to the second particle that alters its state and affects its measurement outcome.[72] Bell himself had noted this loophole:

Conceivably [quantum mechanical predictions] might apply only to experiments in which the settings of the instruments are made sufficiently in advance to allow them to reach some mutual rapport by exchange of signals with velocity less than or equal to that of light. In that connection, experiments of the type proposed by Bohm and Aharonov, in which the settings are changed during the flight of the particles, are crucial.

To close this loophole, Alain Aspect and colleagues conducted a polarization experiment where the polarizers changed orientation at frequencies near 50 MHz, so that the time between such changes was short compared with the photon transit time.[73] The detection event for one subsystem and the switch in orientation in the other subsystem were separated by a spacelike interval, so that one could not causally affect the other under relativistic locality. A Bell’s inequality violation was still observed to five standard deviations, which means, if the underlying system did not really violate Bell’s inequality, the probability of obtaining a result this extreme is 1 in 3.5 million.

The experiment by Aspect et al. was less than ideal in that the changes in orientation were quasiperiodic rather than occurring at random time intervals. Conceivably, this regularity might compromise the independence of the detection of one particle from the switching of the other at some deeper causal level, even though it is implausible that the switching of orientations for the two detectors should be causally linked, since they were powered by independent generators operating at different frequencies.

In 1998, Gregor Weihs et al. conducted an experiment with more rigorous enforcement of the Einstein locality condition.[74] The changes in settings were randomized, the measurement stations were at greater physical distance (not that this is strictly necessary to achieve spacelike separation), and data at each station was registered completely independently.

Bohm & Hiley (1993) conjectured that the signal transmission speed could be much greater than the speed of light, in which case the Aspect study would not be sensitive enough to rule this out.[75] This deeper locality, however, would require postulating physics beyond relativity, with a privileged reference frame. The system still would not have Lorentz invariance, so the system would be nonlocal in Einstein’s sense, which is what Aspect et al. set out to test. No experiment of this type could possibly rule out Bohm & Hiley’s objection, since one could always posit the supposed signal transmission speed as arbitrarily high yet finite.

Another speculative loophole is to suggest that the events of changing settings, even when ostensibly random or made by free will, are nonetheless co-determined on some deeper level. This superdeterminism is a specific kind of strong determinism where we suppose that seemingly unrelated causal chains conspire in a way to give us Bell’s inequality violations. Even if the event of setting the axis for detector 2 is spatially separated from the event of measuring particle 1, or from the event of emitting the two particles, we could always go further back into the past histories behind these two events to where their antecedents were not spatially separated. If we are using cosmic radiation as our randomizer for detector 2, we might even have to go back as far as the Big Bang. Some primordial event determined the outcomes of two utterly disparate chains of events, one of which led to the setting of detector 2, and the other leading to the emission of the two particles and measurement of particle 1, in such a way that we will have the observed dependence on choice of measurement axis. If we make a human experimenter the chooser, we could say that his choice was predetermined by some event where his evolutionary or material past combines with the past history of the emission of the two particles. In other words, it is not enough to conjecture that random numbers are not really random, or that free will is not really free, but also that they are predetermined in the remote past in such a way that the two disparate effects, many times removed from their common origin, will always occur in such a way as to give us the observed correlation between choice of measurement axis and measurement outcome for the other particle. This is so staggeringly implausible and devoid of parsimony that it is rightly called a loophole rather than a serious objection. If we admitted such conjectures in scientific methodology, we could never arrive at knowledge of anything.

Nonetheless, the freedom of choice loophole at least highlights that the mode of choosing what to measure is an important condition of Bell’s theorem. Bell himself required only that this choice be a free variable in the mathematical sense, i.e., not dependent on the variables in question. This means that whatever variable (e.g., cosmic radiation incidence or human choice) is used to choose the detector setting must not be a parameter of the system being measured. The superdeterministic objection amounts to arguing, hyper-holistically, that every physical process in the universe is part of the same system. Yet if the universe really were so integrated to such a degree that remotely disparate effects could retain correlations, it ought to be impossible to do science as we know it. Practically all experiments require us to make suppositions about the independence of systems, variables or parameters. Without such suppositions, we could never make any progress, for we could always posit some deeper fatalistic correspondence between any two ostensibly independent processes. The superdeterminist objection posits not merely that everything is deterministic under locality, but that it is so in a conspiratorial way to give us results that, on their face, contradict local determinism. No credible reason is given why such conspiracies should never result in a non-violation of Bell’s inequality, so we can only regard this as an extreme form of special pleading to uphold local determinism.

Due to the symmetry of EPR experiments, we want the setting of detector 2 to be spatially separated from the events of emission or detection of particle 1, rather than in the absolute future. While an event in the absolute future obviously should have no effect on those events, the problem is that we might posit a reverse influence, where the emission or detection of particle 1 transmits a signal to the second system. Only spacelike separation guarantees the impossibility of signal transmission in either direction.

It may seem that Bell’s inequality violations do not merely contradict determinism, but also all locality whatsoever, insofar as the anticorrelation of spins or polarizations, combined with the apparent dependence of statistical outcomes on a spacelike separated event, appear to entail some kind of non-local causation or influence. Sakurai, among others, suggests that locality is not violated as long as no useful information is transmitted superphotonically.

If there is no prior definite type such as (x+, z-), then we have the following conundrum. Before measuring the spin of either particle, the z-spin of particle 2 is indeterminate, and could conceivably yield a positive or negative value when measured. It is only if we choose to measure z-spin for particle 1 that the z-spin of particle 2 will take on a definite value. So how does particle 2 know to align itself oppositely from particle 1 if it cannot know which axis of measurement is used on particle 1, assuming the principle of locality? Aspect’s experiment shows that Bell’s inequality is still violated even when there is not enough time for particle 2 to have received any such signal.

In practice, however, the observers at each detector do not obtain any knowledge of the state of the other particle. This can only be determined afterward by comparing statistical results at both sites. Suppose, Sakurai says, that both observers A and B decide in advance to measure z-spin. When A measures particle 1, he instantly knows what opposite result B is getting at detector 2, even if the measurement events are spatially separated. This is purely suppositional knowledge, however, based on their prior agreement, and does not require superphotonic signaling. Each observer, with repeated trials, observes a random sequence of positive and negative z-spins, and cannot know that each outcome is actually anticorrelated with its partner until after A and B reconvene and compare notes.

Suppose, Sakurai continues, that A suddenly breaks the agreement and decides at some point to measure the x-spin instead at detector 1. B will not be able to discern this change from his measurements of particle 2, which will remain a random sequence of plus and minus. Only afterward, when doing a statistical analysis of measurements for pairs before and after the change of axis by A, can this be known.

The reason there is no signaling is that anticorrelation does not let us know definitely which particle will be positive and which will be negative. If that were predetermined, then there would be definite signaling upon change of axis. We would see a continuous flow of like values suddenly disrupted into a random sequence, or vice versa.
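
The no-signalling claim itself can be verified in the formalism: whatever axis is chosen at detector 1, the marginal statistics at detector 2 remain 50/50. A minimal sketch of my own (Python with numpy), using projectors built from the Pauli matrices:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2)

    singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

    def proj(op, sign):
        # Projector onto the +1 or -1 eigenspace of a spin operator.
        return (I2 + sign * op) / 2

    def prob_B_z_plus(A_op):
        # Probability that detector 2 gets z = +1, summed over detector 1's outcomes.
        total = 0.0
        for sign in (+1, -1):
            joint = np.kron(proj(A_op, sign), proj(sz, +1))
            total += np.real(singlet.conj() @ joint @ singlet)
        return total

    print(prob_B_z_plus(sz))  # 0.5 when the z axis is measured at detector 1
    print(prob_B_z_plus(sx))  # 0.5 when the x axis is measured: no change detectable at B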

Does this suffice to show that locality is upheld? Since when has relativity been restricted to the mere transfer of useful information? Superphotonic causality is impossible because of the structure of spacetime, so any physical effect that is superphotonic is problematic.

It is certainly true that the observers at each detector cannot exploit this anticorrelation to transmit any information or signal. What is in doubt is whether the particles may be considered to receive or transmit information via anticorrelation. The presence of correlation indicates some sort of influence, which has to do with conservation of angular momentum. It is not necessarily that the outcome for one particle causes the outcome for the other, but that both outcomes have a joint cause. We would like to say that this joint cause is at the point of emission, but we have seen that this cause does not determine definite effects unless the choice of measurement is also specified; there are no definite predetermined values upon emission.

We must accept, then, either that the common cause of anticorrelation cannot be situated at some definite point in time, or else that the emission event imparts only its total angular momentum without determining particular values. This is to say that, if we uphold causality, it must be either non-local or non-deterministic. What we are further considering is if, even on the non-deterministic assumption, there must be some non-locality involved, since values do not become definite until after there is a choice of measurement axis.

Perhaps another approach is to regard this anticorrelation as involving a sort of dependence or influence that is categorically different from causality. Then we would say that only (physical) causality is mediated by Einsteinian locality. Denying that the dependence or anticorrelation is causal, however, would still leave it unexplained.

There can be situations where a fundamentally local interaction is approximated as a non-local interaction. For example, the relativistic Dirac equation gives the local interaction between an electron and a Coulomb field, but we may derive from that equation a non-relativistic approximation of the Hamiltonian of the hydrogen atom, a weakly relativistic system (γ ≈ 1.0000266). This non-relativistic Hamiltonian has a non-local interaction in the Darwin term. It is implausible, however, that the non-locality of the EPR experiments might be explained in this way. The original thought problem appeals directly to basic principles of quantum theory and chooses a simple system. The proof of Bell’s theorem seems to preclude absolutely any circumvention by local deterministic action.
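
The quoted Lorentz factor follows from the standard estimate that the ground-state electron in hydrogen moves at roughly αc, where α is the fine-structure constant (a rough check of my own, not a derivation from the Dirac equation):

    # Lorentz factor for hydrogen's ground-state electron, assuming v ~ alpha * c.
    alpha = 1 / 137.035999
    gamma = 1 / (1 - alpha ** 2) ** 0.5
    print(gamma)  # ~1.0000266: only weakly relativistic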

Another possible objection to accepting the experimental confirmations is the so-called sampling loophole. It is impossible to detect absolutely every particle that is emitted in these experiments, as some particles will be lost (due to random collision or deflection). It might be claimed that the losses between emission and detection occur in a way that results in non-representative samples, thus invalidating the statistical results. Again, this is considered a loophole rather than a strong objection, since it is implausible that losses would conveniently occur in a way that will always result in violations of Bell’s inequality. We can close this loophole by keeping both wings of the apparatus close together, thereby minimizing loss, but then this reopens objections about spatial separation.

Absent some more substantive objection, the experimental confirmation of Bell’s inequality violations would seem to close the door on any local deterministic explanation of the EPR phenomenon.

11. Wheeler’s Delayed Choice Experiments

Ascertaining the moment when a choice of measurement is made is crucial to eliminating the possibility of hidden local influences. Even when this loophole is closed, we remain at a loss to account for the apparent dependence of anti-correlated measurements on the choice of measurement. The mode in which choice of measurement may alter physical systems is explored further in Wheeler’s delayed choice experiments.[76] In these thought experiments, some of which have been realized in practice, we make a choice of measurement that ostensibly requires a photon to behave in either a wave-like or particle-like manner, and in the latter case that it has followed one or another path. Remarkably, we may delay our choice of what to measure, and therefore how the photon must behave, until after the photon has begun its interaction. This phenomenon may be interpreted as retroactive alteration of the behavior of the photon, or as John Wheeler himself suggested, as indicating that there is no definite reality to the photon’s behavior prior to measurement.

Let us consider an experimentally realized Wheeler’s delayed choice experiment, namely the interferometer experiment conducted by Jacques et al. in 2006.[77] Photons are sent one at a time through a beam-splitter, typically some semi-reflecting mirror with a dielectric medium, with two possible output paths. Using the model of particle-like behavior, we say there is a 50/50 chance the photon will follow Path 1 or Path 2. Using the model of wave-like behavior, we say the packet splits and takes both paths. Each path, with the aid of a mirror at 45 degrees, forms half of a rectangle. Due to the beam-splitter medium, the paths are slightly unequal in length, causing a phase difference between the wave packets associated with each path. At the end of each path, we have detectors 1 and 2. Regardless of what happens before measurement, it is an empirical fact that we will only detect each photon at one or the other detector, not both.

Suppose, at some time tm in the laboratory frame of reference, we insert a second beam-splitter at the intersection of Paths 1 and 2. Interaction with the beam-splitter will induce wave-like behavior whereby the probability of the photon hitting one or the other detector is affected by the phase difference between wave packets along both paths, resulting in an interference pattern much like in Young’s double-slit experiment. This interference pattern will result even if we only emit one photon at a time. Each photon only strikes one or the other detector. The interference pattern emerges from repeated measurements, seen in how the ratio of photons detected at detectors 1 and 2 varies with the phase shift (which can be varied in magnitude experimentally).

The conventional Copenhagen interpretation of this interference phenomenon is to say that the photon, in its wave-like nature, follows both paths. As discussed in a previous essay, this is not the only possible interpretation, and it makes more sense to regard the wavefunction components (i.e., the Path 1 and Path 2 wave packets) as potentialities that are only resolved upon interaction (with the second beamsplitter, or with the detectors). Copenhagen theorists would deny that the photon splits its energy in half between two paths, or that it multiplies in two, thereby doubling in energy and violating energy conservation, so even they are not altogether clear as to what they mean by passing through both paths, other than to say that the photon behaves in a wave-like manner here.

If we do not insert a second beam-splitter, there is no interference between wavefunction components, so the behavior of the photon can be modeled more simply as particle-like, at least after its path is resolved by the first beam-splitter. Using the Copenhagen interpretation, where both components of a superposition (i.e., the wavefunction in Paths 1 and 2) are treated as physically existential states, one might say that whether a particle actually behaves in a wave-like or particle-like manner, i.e., following both paths or one or the other, depends on the presence or absence of the second beam-splitter.

Wheeler (1984) simply took the implications of the Copenhagen interpretation of this interferometry at face value. He considered the interference phenomenon to be proof that the photon goes through both paths (though each photonic pulse is detected entirely at one or the other detector). Further, he considered that if you were to detect a photon midway along Path 1 or Path 2, this gain of which-way information would cause there to be no longer any interference at the second beamsplitter. This follows Bohr’s interpretation of observation as collapsing the wavefunction. As long as we do not make this prior detection, however, it should be possible to determine retroactively whether the photon is in a both paths superposition or a one or the other path state, depending on whether you insert or remove the second beamsplitter.

Empirically, all we really know is that each photonic pulse is detected entirely at one or the other detector. The both paths interpretation is an inference made from the frequentist probabilities gathered from repeated single-photon measurements, noting that the probabilities are analogous to what we would expect from wave interference between paths. As noted in a previous essay, this can be explained by the mathematically wavelike structure of the wavefunction, considered as a potentiality, without supposing that the photon really travels both paths, only to recollapse mysteriously into one or the other detector.

If we conduct repeated single-photon experiments without the second beamsplitter, i.e., in the open configuration, there will be a simple 50/50 distribution of photons in Detectors 1 and 2. If we conduct repeated single-photon experiments with the second beamsplitter, i.e., in the closed configuration, the probability distribution will reflect an interference pattern depending on phase differential, though each photon will arrive at only one or the other detector.
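
These two distributions can be reproduced with a toy transfer-matrix model of the interferometer: a symmetric 50/50 beam-splitter, a relative phase φ between the paths, and an optional second beam-splitter. This is my own illustration (Python with numpy), not the apparatus of Jacques et al.:

    import numpy as np

    # Symmetric 50/50 beam-splitter matrix acting on the two path amplitudes.
    BS = np.array([[1, 1j], [1j, 1]], dtype=complex) / np.sqrt(2)

    def detector_probs(phi, closed):
        amp = BS @ np.array([1, 0], dtype=complex)   # first beam-splitter
        amp = np.diag([1, np.exp(1j * phi)]) @ amp   # relative phase between the paths
        if closed:
            amp = BS @ amp                           # second beam-splitter inserted
        return np.abs(amp) ** 2                      # probabilities at detectors 1 and 2

    for phi in (0.0, np.pi / 2, np.pi):
        print(round(phi, 2), detector_probs(phi, False), detector_probs(phi, True))
    # Open: always [0.5, 0.5]. Closed: [sin^2(phi/2), cos^2(phi/2)], i.e., interference.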

The wrinkle introduced by the Wheeler gedanken experiment, as realized by Jacques et al., is to alternate between the open and closed configurations unpredictably, not deciding on the configuration until after each photon passes through the first beam-splitter. If we then take all the measurements that were done in the open configuration, we find a 50/50 distribution between detectors. If we take all the measurements done in the closed configuration, we find that phase-dependent interference distribution. We get the same results mixing the order of open and closed configuration as if we did all open measurements first and then all the closed measurements. That is to say, the different wave-like and particle-like behaviors occur even if the choice of configuration is made after the photon encounters the first beam-splitter, where presumably it would have to resolve itself into wave-like both paths or particle-like single-path behavior.

There are several possible interpretations of this result. One is that the choice of measurement retroactively determines the past history of the photon. That this is logically absurd should be evident from its implications of temporal paradox, but physicists have long considered quantum theory to be unbound by logic. Wheeler himself rejected the retrocausal interpretation, if only by denying any definiteness to physical reality prior to measurement, taking a position more radically subjectivist than Bohr’s. We should note that, in the 2006 experiment, the event of passing through the beam splitter was not in the absolute past of the choice of measurement event. Rather, the two events were spacelike separated in the relativistic sense. This suffices to eliminate the possibility of any local causation between events in either direction, but falls short of indicating a change in the past in the strictest sense. Indeed, if it were arranged for the beam-splitter encounter to be in the absolute past of the choice of measurement, we would be open to the loophole that the photon might communicate a signal to the random number generator that chooses the configuration.

Another interpretation is that it is impossible to say anything about whether the photon is in a wave-like or particle-like condition until after measurement. In other words, reality is unresolved before measurement. This interpretation may have its own problems, discussed in a previous essay, but it is at least better than the nonsense of reverse causality.

Instead, we might challenge the supposition that the photon must become either particle-like or wave-like when passing through the first and second beam-splitters. Both the particle-like and wave-like aspects of the photon persist throughout in all cases, but in ontologically distinct manners. The particle-like aspect is the only aspect that ever becomes existentially actual, and it does so always upon interaction with one of the elements of the apparatus. The wave-like aspect is its probabilistic potentiality to realize each of various possible particle states. Passing through one or the other path does not abolish the real propensity indicated by the other possible paths it might have realized.

The significance of Wheeler’s experiment for our purposes is its introduction of the possibility that choice of measurement may at least force us to change our knowledge of the past retroactively, even if it is not proper retroactive causality. In other words, our determination of the past form of the wavefunction is influenced by the presence or absence of the second beam-splitter. This would seem to contradict Bohm’s treatment of the wavefunction as something objectively real. Nonetheless, since Bohmian mechanics is overtly non-local, it would still be consistent with the experimental result, though it would be differently interpreted.

The event of passing through the second beam-splitter must be in the absolute future of passing through the first beam-splitter. The distribution of outcomes is determined (at least in part) by both events. We have only inferential knowledge of what occurs between these two events. The form of the wavefunction is determined by the second event no less than the first. This is problematic only if we regard the wavefunction as an absolute existent, rather than a predictor of how a system will behave under certain conditions. Thus, if anything, it emphasizes the weakness of the conventional both paths interpretation, insofar as this is understood to mean an existential state of the photon in transit, for that would depend on an unknown future event (or at least a spatially separated event).

The stronger claim of retrocausality, apart from its inherent incoherence, is inconsistent with the non-determinism generally espoused by quantum theorists. If we were to accept the retrocausal interpretation, we would have a sort of fatalism determined by randomness! It would be better for physicists to abstain from philosophical interpretation altogether than to proffer such explanations.

We should further note that, under the Copenhagen interpretation, the Wheeler experiment result is significant precisely because one makes the relativistic assumption that no signals can be transmitted between spatially separated events. It is therefore inconsistent to interpret it as implying faster-than-light or reverse-time signal transmission. If such were possible, then we could have hidden variables after all, contrary to the Copenhagen interpretation.

Naturally, interpretations that make no use of hidden variables, but simply recast the ontological status of the wavefunction, may still be consistent with the Wheeler experiment result.

For this type of interferometry, we can only approximate the single photon state, due to practical limitations of beam attenuation, background noise, and detector timing. This is why Jacques et al. refer to coincident detections, showing that the coincident detection rate was sufficiently low as to imply that they were approximating one photon at a time emission.

In both EPR and Wheeler’s Delayed Choice Experiments, the moment of choosing the measurement setting and its ostensible independence from the other measurement are critical for excluding local determinism. How and when we choose to configure our measurement device are critical elements for determining outcomes. Thus the mode by which such choices are made, whether by ostensibly random natural process or by human free will, is subject to scrutiny, in order to examine whether there might be hidden signalling between this choice and the system being observed.

12. ’t Hooft’s Challenge to the Free Choice Postulate

In both the Bell and Wheeler experiments, there is an implied supposition that the choice of apparatus configuration is made independently of the system being observed. This supposition is accepted by nearly all parties, ranging from Einstein to Bohr to Bohm to Bell to Griffiths. Nonetheless, it has been strenuously challenged by a most notable physicist, Gerard ’t Hooft (whose breakthroughs in renormalization made possible an entire generation of theoretical development). Without a free choice postulate, an observed violation of Bell’s inequality would not disprove local determinism. The use of ostensibly random natural processes to make configuration choices is subject to the criticism that these processes can be traced back in time to where they are confluent with the processes anteceding the observed system, so they need not be truly independent, and the time of choice might actually be in the absolute past of the observed system. This leaves changes of configuration made by human choice. In his paper On the Free Will Postulate in Quantum Mechanics (2007), ’t Hooft argues that most discussions of Bell’s inequalities tacitly employ a faulty notion of free will that is incompatible with the supposition of determinism ostensibly being tested.[78] Specifically, when an experimenter changes his choice of which variable to measure, while the particle is still distant from the detector, this supposedly would alter the wavefunction, implying non-local interaction of a sort. Such an interpretation makes assumptions about the timing and causal efficacy of free choice, which ’t Hooft wishes to dispute.

Although this paper preceded the publication of the results of Jacques et al. (2007) by several months, the latter does not seriously affect ’t Hooft’s argument. A delayed-choice experiment introduces no new paradox for ’t Hooft, as we shall see, since he already allows that a choice of measurement may affect the time propagation of a measurement operator into the past. He does not consider this to be a kind of reverse causality, since in no case is the outcome of a past measurement altered by a present event.

It remains to be seen, however, if ’t Hooft’s presentation is itself cogent, not merely in mathematical terms, but also in conceptual rigor and self-consistency. We are not presently interested in his opinion as to whether human free will is reducible to strong determinism, but only in the implications of free choice in the improper sense of changing the determination of which variable to measure. As shown in the experiment by Weihs et al. (1998), the measurement selection can be made by an ostensibly random physical process rather than a human being. What matters is not so much the exact modality by which the determination is made, but when and where that process or its antecedents determine the selection outcome, for these are the essential questions that decide whether there can be local influences between this selection process and the observed system. While ’t Hooft seems to think that human freedom is reducible to some microscopic determinism, this is not essential to arguing for the bare possibility of local determinism, even in systems where Bell’s inequality is violated.

12.1 Problems with Conventional Interpretations of Quantum Mechanics

As I have argued at length in an earlier essay, the conventional or Copenhagen interpretations of quantum mechanics suffer from conceptual incoherence and inconsistency. While most physicists today either ignore such problems or pretend that quantum mechanics is itself a new, scientific philosophy (free from the burdens of logical consistency and metaphysical rigor), earlier physicists took these contradictions seriously, as indications of a substantial defectiveness in our account of reality. Einstein considered that a clear conceptual understanding in terms of physical principles was essential to real advancement in physics. Mere mathematical formalism or quantitative description does not suffice.

Even current researchers are not blind to this problem, and ’t Hooft indicates three classes of perceived flaws in how quantum mechanics is presented. First, some side with Einstein in asserting that a comprehensive account of physics should, at least in principle, be able to predict single events with certainty in ideal conditions. Uncertainties in actual predictions should be entirely reducible to imprecision in our knowledge of initial conditions and the values of natural constants. Yet quantum mechanics, as commonly interpreted, indicates that the outcome of a single event can only be known probabilistically, even in principle.

Second, it is unclear at what point quantum mechanical uncertainty transitions into classical certainty. At what point does the quantum wavefunction, an unobservable but supposedly real entity, become just a classical probability function? At what point do observations become classical certainties instead of mere probabilities? Some might argue that classical determinism is just a limiting approximation for large statistics, or invoke the notions of decoherence or wavefunction collapse. Yet all of these accounts seem to ignore or downplay the supposition that quantum mechanics, at least in principle, should apply even on the macroscopic scale, including the events of data collection and analysis. (’t Hooft, 2007, p.1.) Here ’t Hooft seems to suggest that this has the self-stultifying implication of casting our data and theories into doubt.

Third, the current formulation of quantum mechanics, having only statistical predictive ability, cannot explain reality in terms of rigorous equations of motion. This failure has had practical implications for superstring theory, which is likewise lacking any kind of strong foundations in rigorous logic. Despite this strong criticism, ’t Hooft has high hopes for the superstring approach to quantizing gravity. Yet it cannot succeed at being a theory of everything without filling the gaps left by the current formulation of quantum mechanics.

12.2 Bell’s Theorem and the Assumption of Freedom

Bell’s inequality violations contradict any local deterministic account of the physical system only insofar as they imply a genuine non-locality between the event of choosing which non-commuting observable to measure on one particle and the event of the alteration of the wavefunction of the other particle to become anti-correlated to its partner (either at the time of emission or some time afterward). This choice-dependent anti-correlation is mysterious and non-local only if we assume that the observer, sentient or otherwise, can choose at any time or place which among the non-commuting observables to measure. If there were no freedom in this choice, there would be no mystery, because we would not have to wonder that the distant partner would have been anti-correlated to some other variable had another measurement variable been chosen, since there is really no other choice that could have been made.

In effect, some sort of free will postulate is necessary in order for Bell’s inequality violations to refute local determinism. Any such postulate is contradictory of strong determinism, so strong determinism is not refuted except by circular reasoning. We have noted, however, that strong determinism would be sustainable only if the universe behaved in a highly contrived, conspiratorial manner, a criticism that ’t Hooft will need to address.

Some physicists have inferred from Bell’s inequality violations that local realism is refuted; i.e., it is impossible for each particle to have a definite value for one of the non-commuting variables at all times, or at least that it is impossible for this value to be always independent of the spatially separated event of choosing which variable to measure on its counterpart. In this view, it is possible to uphold locality only if we accept both non-determinism and ontological ambiguity. We must dispense with the notion that particles have some definite value of spin along a given axis before they actually interact with a magnetic gradient. There is nothing wrong with this if we regard spin as merely a potentiality prior to interaction. It is unnecessary to follow Griffiths in arguing that the quantum world is governed by a different kind of logic. Those of us with training in philosophy or mathematical theory understand that classical bivalent logic is foundational to both physics and mathematics, and the formalism of quantum mechanics is by no means exempt. We can only agree with ’t Hooft that there exists only one kind of logic, even if the observed phenomena are difficult to interpret. (p.2.)

’t Hooft takes issue with the implied assumption of Bell’s theorem that the observer has the freedom to choose which variable to measure. If this is understood in the usual sense of free will, then the supposition is contrary to what would exist in a completely deterministic world. From the outset, then, the common interpretation of Bell’s theorem assumes what it supposedly sets out to demonstrate, namely that the world is not deterministic, at least not without abandoning locality.

Instead, let us take a completely deterministic world as our hypothesis. Even in such a world, ’t Hooft argues, we should expect changes in the setup of an apparatus to affect the object measured. If, instead of human brains and equipment, we treated planets as detection devices, this can be made clearer. (It also excuses us from pronouncing on ’t Hooft’s assumption that human free will is strongly determined.) If one planet, while detecting another (i.e., by gravitation), should, by whatever mechanism, move to somewhere else (i.e., reconfiguring itself), then the movement of the other planet will also be perturbed. Moreover, the entire past of the planetary system would have to be modified… one cannot modify the present without also modifying the past (and of course the future). (p.3.)

The striking inference that the past is modified requires some explanation. The supposition that the first planet (Mercury in ’t Hooft’s example) is somewhere else in its orbit would entail that the second planet’s orbital history—past, present and future—should also be altered, since its motion depends on the location of the first. Here we are not necessarily assuming that the first planet instantaneously moves. Rather, we are revising our assumption about where it is, in which case our knowledge of the orbital history of the second planet is accordingly revised. As long as this is a statement about knowledge or logical contingency, there is nothing illogical or bizarre here.

Free will, meaning a modification of our actions without corresponding modifications of our past, is impossible. (p.3.) Again, setting aside the self-stultifying implications of denying human free will, we may at least consider this statement as applied to changes in detector configuration. Regardless of the exact mechanism of this change, the mere fact of the change necessarily entails a corresponding modification of implied past history. This holds even in classical mechanics. Recall that Newtonian equations of motion are time-reversible, having the same form if you replace t with –t. So if you accept that detection imposes a constraint on the position of the object detected, and therefore its future trajectory, its past trajectory should also be modified. This is not reverse causality as long as we consider detection to define a contingency, i.e., a material condition or state of knowledge, but not acting as a cause of physical change.
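
The time-reversibility point can be made concrete with a toy Newtonian system (my own illustration, not ’t Hooft’s example; Python, a harmonic oscillator integrated with the time-symmetric velocity Verlet scheme). Running the same equations with t replaced by -t from the present state reconstructs the past, and a revised present state reconstructs a correspondingly different past, without any event acting backward as a cause:

    # Simple harmonic oscillator, velocity Verlet (a time-symmetric integrator).
    def step(x, v, dt, k=1.0):
        a = -k * x
        x_new = x + v * dt + 0.5 * a * dt * dt
        v_new = v + 0.5 * (a + (-k * x_new)) * dt
        return x_new, v_new

    def evolve(x, v, dt, n):
        for _ in range(n):
            x, v = step(x, v, dt)
        return x, v

    x0, v0 = 1.0, 0.0
    xT, vT = evolve(x0, v0, dt=0.01, n=500)        # forward to the "present"
    print(evolve(xT, vT, dt=-0.01, n=500))         # t -> -t recovers (1.0, 0.0)
    print(evolve(xT + 0.1, vT, dt=-0.01, n=500))   # a revised present implies a revised past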

Other authors are frustratingly unclear about what is meant by freedom, though it generally seems to mean that a decision or choice of realizable options has not yet been made beforehand, whether by the experimenter or the universe. Notably, R. Tumulka finds that the assumption of freedom is necessary to avoid a conspiratorial explanation, i.e., that initial conditions are so contrived that pairs always know in advance which magnetic field the experimenters will choose. (pp.4-5.) Evidently, the concern is that a local deterministic explanation would require an implausibly well-orchestrated set of initial conditions, such that entangled pairs are already disposed to behave according to the remote event of the configuration of the apparatus. This is decidedly unparsimonious. ’t Hooft, however, takes the objection to be toward modification of the past by the choice of measurement variable, and remarks that there is nothing objectionable about this on a purely deterministic supposition.

Assuming the objection is about retroactive changes to the wavefunction, ’t Hooft replies: in a deterministic theory there are no wave functions, in particular no phases of wave functions; the phases we use to describe them are artifacts of our calculational procedures, and they could well be determined by what happened in the past. (p. 4) All the strangeness of quantum mechanics comes from the off-diagonal terms in operators, also expressible as phases in the wavefunction, making different possible measurements mutually dependent. ’t Hooft considers that this aspect of the wavefunction is merely a computational artifact, so there is no difficulty in having it retroactively altered. We may revise our estimation of the phase based on revised knowledge of the past, or we may revise our estimation of the phase propagated back in time based on revised knowledge of the present, i.e., an observation. The phase, after all, being a purely imaginary component, is unobservable even in principle. Changing our estimation of its value retroactively does not retroactively change the outcome of any observation.

Freedom of choice among actions, whatever it may be, does not, according to ’t Hooft, refute that the action chosen is determined by laws of physics. One is free to choose what to do, but this does not mean that his decision would have no roots in the past. (p.4.) It is unclear what is intended here. If it is simply that a physical action, once chosen, must obey the laws of physics, there would seem to be nothing controversial. Recalling, however, the implications of physical determinism in both directions of time, one would expect that the action chosen may be followed backward in time with a deterministic sequence of events. This would seem to make freedom of choice a dead letter, merely giving conscious assent to physical necessity. It is one thing to have roots in the past, but if the choice is fully determined by the past, it is not made freely in any meaningful sense.

While ’t Hooft’s dismissal of free will is problematic, and betrays an astonishing ignorance of the wealth of literature on this subject, we may nonetheless take to heart that there is no need to introduce it into a general theory of physics, at least when we are dealing with the mechanics of inanimate objects. It should not be necessary to rely on such an assumption to interpret violations of Bell’s inequality, especially since the formal notion of quantum measurement does not require a free-willed human observer.

12.3 Time Dependence of Operators

Most philosophical interpretations of quantum mechanics implicitly adopt the Schrödinger picture, in which quantum mechanical states or wavefunctions evolve over time, and can be abruptly altered by the imposition of a measurement, represented by application of a time-independent operator for the observable measured. Yet the Heisenberg picture has equal mathematical validity (and its formal equivalence can be proved). In the Heisenberg picture, time dependence is in the operators or observables, while the state vector is time-independent. By giving due regard for the validity of this picture, the notion of modifying the past becomes more intelligible.

Application of a time-dependent operator to a state vector at some time t, ’t Hooft remarks, yields a different state, in which both the future and the past development of operators look different from what they were in the old state! A state can only be modified if both its past and future are modified as well. (p.4.)

This remark requires some clarification. The state that is changed is not the vector |Ψ⟩, which remains the same after measurement in the Heisenberg picture, being equal to |Ψ(t0)⟩ from the Schrödinger picture, i.e., the initial state vector. The time of measurement, t, is sometime after t0, and what changes at t is the time evolution of the operator. If we take the case where the Hamiltonian is time-independent, the evolution of an operator A can be expressed as:

A(t) = e^{iH(t − t0)/ℏ} A(t0) e^{−iH(t − t0)/ℏ}

After measurement at time t, the time evolution of A becomes:

A(t′) = e^{iH(t′ − t)/ℏ} A(t) e^{−iH(t′ − t)/ℏ}

With the reset at time t as the new starting point, the time evolution of A both forward and backward now looks different.
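
A small numerical check of my own (Python with numpy; a two-level system with H = σz, so that the evolution operator can be written out explicitly) confirms both the conjugation formula and the equivalence of the two pictures:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    H = sz                                    # time-independent Hamiltonian (hbar = 1)
    A = sx                                    # the observable
    psi0 = np.array([1, 1], dtype=complex) / np.sqrt(2)

    t = 0.7
    # U = exp(-iHt), written explicitly since H is diagonal.
    U = np.diag([np.exp(-1j * t), np.exp(1j * t)])

    A_t = U.conj().T @ A @ U                  # Heisenberg picture: the operator evolves
    heis = np.real(psi0.conj() @ A_t @ psi0)  # against the fixed initial state

    psi_t = U @ psi0                          # Schroedinger picture: the state evolves
    schr = np.real(psi_t.conj() @ A @ psi_t)  # against the fixed operator

    print(heis, schr)                         # identical: cos(2t) ~ 0.17 here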

What ’t Hooft’s interpretation conveniently ignores is that, as d’Espagnat (1992) noted, A(t) in the Heisenberg picture does not correspond to any actual state of the system.[79] The form of A at various times does not give us snapshots of the state of the system at each point in time. Instead, it gives a structural or relational reality of the various values the observable may take if a measurement were to be made at various points in time. It does not give us the evolution of values of the observable, as in its classical mechanical analogue. It only gives us the relational form of equations of motion, without implying that there are properties with values over time.

Still, the fact that time dependence may be placed in the operators rather than the state vectors, or even partially in each (in the interaction picture), shows that dynamical physical reality is not contained in either the operators or the state vectors, but in both taken collectively. This should be further evident from the fact that neither the operators nor the state vectors are, in general, gauge-invariant. We only get functions of physically real variables when we combine both, so that expectation values and computed probabilities are gauge-invariant.

12.4 Evolution of Physical Variables with Unconstrained Initial States

’t Hooft has replaced the conventional notion of free will with that of an unconstrained initial state in his interpretation of freedom to choose what to measure. This freedom or lack of constraint is essential to realistic models of physics. Such models should make reference not to observers, the apparatus, the observed system, or measurement results, but to the objective physical variables or physical states. He evidently rejects Heisenberg’s insight that quantum mechanics prevents us from sharply distinguishing the observed system from the act of measurement. What is real for him is the physical variables or observables, for which the Heisenberg picture is best suited, giving us equations of motion akin to those of Lagrangian mechanics. As d’Espagnat observes, however, these equations of motion do not suffice to show that operators in the Heisenberg picture give the values of physical variables at different points in time. The intrinsic dependence of the values of observables on the act of observation is not avoided, since the time evolution of the observable depends on the time t0 when the last measurement was made.

’t Hooft does not really abolish this interdependence, but reinterprets it, dispensing with the notions of system and measurement. He considers that there really are values that physical variables take, and that the possible values that can be taken over time obey certain laws.

The notion of time has to be introduced if only to distinguish cause from effect: cause must always precede effect. If we would not have such a notion of time, we would not know in what order the ‘laws of nature’ that we might have postulated, should be applied. Since laws of nature tend to generate extremely complex behavior, their effects will surely depend on the order at which they are applied, and our notion of time will establish that order. (p.5.)

Due to this complexity, we cannot know the true values of the physical variables, but can only make educated guesses. ’t Hooft clearly has in mind the complexity of quantum electrodynamics and chromodynamics. In quantum field theory, there are infinitely many degrees of freedom, so a realist interpretation of such variables would imply the philosophically problematic notion of infinitely many existents. Even assuming a realist interpretation, we can only approximately determine present values from past values, and conversely, given a present state of affairs, only make similar educated guesses concerning the past that led to this state. In our model, we will only be able to perform such tasks if we possess some notion of the complete class of all possible configurations of our variables. (p.5.) Then, for any determinate configuration, some probabilistic prediction can be made. Even though we may only realize some possible configurations, the model is complete in that it can describe, in principle, any eventuality. The completeness of this model depends on a complete absence of constraint on our ability to specify an initial condition.

Note that this freedom to choose an initial condition has nothing to do with human free will. It means only that there is nothing in principle preventing a particular initial condition from being realized. It does not mean that all possible initial conditions will in fact be realized. Second, revisions of the past really mean revisions of our guesses about the past, based on refined knowledge of the present, once an initial condition (i.e., an observation) is specified. Thus freedom of choice is freedom to choose the initial state, regardless of its past, to check what would happen in the future. ‘Choice’ here should be understood in a formalistic sense, as in the Axiom of Choice in mathematics, without positing a person who makes a judgment.

Although we can specify any initial state, regardless of its past, it does not follow that such specification will not modify its past. In fact, some modification of the past is a necessary assumption. Yet this modification would seem to be purely with respect to our knowledge, not in reality. ’t Hooft’s interpretation founders on a contradiction resulting from taking a realist interpretation of observables in the Heisenberg picture. If we can really change the past retroactively, this would create impossible causal paradoxes. We avoid such paradox only by saying that our knowledge of the past is modified, but then this contradicts the realist interpretation of time-dependent physical observables.

12.5 Realist Interpretation of Operators

’t Hooft considers that the same operator may admit of two different interpretations depending on how it is used. When it refers to an observable, it describes reality and is a beable, to use the term favored by J.S. Bell and adopted by D. Bohm, indicating that the pluralistic reality described is dispositional or potential. Yet the same operator may be used to effect a replacement, analogous to instantly positing that a body has moved to another location. When used in this way, the operator is a changeable. This double interpretation is intelligible when we treat spin as a single three-dimensional operator, rather than as three two-dimensional operators (the Pauli matrices). Using this σ3 operator to measure the z-spin is an example of the operator as a beable, while using it to switch spin, i.e., choosing whether to measure spin in the x or y direction, makes the operator a changeable.

It is not altogether clear whether there really is an ontological distinction between these uses of operators. After all, the assumption that we have not changed the spin when measuring along the z direction presupposes that the spin is aligned with the z-axis, or equivalently, that we have chosen a measurement orientation in alignment with the spin. This is no less an arbitrary choice of basis than when we choose some other direction not in alignment, thereby changing the spin orientation.

’t Hooft argues that the use of operators as changeables does not by itself make quantum mechanics non-deterministic, since we could use similar replacement operators in classical mechanics. Yet the classical example would be a purely formalistic application, changing the supposed position of a planet from one location to another. A planet cannot in fact instantaneously change location, and the proposed analogy implies that spin need not instantaneously change from one orientation to another. ’t Hooft seems to confirm that changeables are merely formalistic applications, when he considers their utility to be establishing rotational invariance in quantum mechanics, and notes that even a classical model can have enhanced symmetry by using such operators.

Beables and changeables are distinguished by commutation relations. A beable does not affect the ontological status of a system, and therefore, by fiat, all beables commute with one another at all times. (p.6.) Operators that do not commute with the beables are changeables. Application of a changeable at a given time, when an observer decides what to measure, implies application at all times in the Heisenberg picture. This attempt to explain away free choice of what to measure has perhaps too broad an application, since it applies to any non-commuting observables whatsoever.
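The commutation criterion is easy to exhibit. As a minimal sketch, take σz as a stand-in beable (an illustrative assumption only; ’t Hooft’s actual beables are, on his own account, unknown and highly complex). It commutes with itself and with any function of itself, while σx fails to commute with it and so would count as a changeable relative to that beable:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def commutator(a, b):
        return a @ b - b @ a

    print(np.allclose(commutator(sz, sz), 0))  # True: a beable commutes with itself
    print(commutator(sx, sz))                  # nonzero (equals -2i*sigma_y): a changeable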

The hypothetical employed by ’t Hooft, though perhaps possible in principle, is impossible in reality. He considers a beam of electrons with spin +1 in the y direction split by a magnetic field gradient in the z direction. Then the +1 and -1 z-spin beams are recombined, yielding a +1 y-spin beam, contrary to the loss of y-spin information we find in actual Stern-Gerlach experiments. Indeed, this hypothetical result is contrary to what we should find with any non-commuting observables. ’t Hooft explains this scenario by regarding neither the y-spin nor the z-spin as beables. Whatever the beables are, they cannot be expressed in terms of these operators, and they are highly complex. (p.7.) This does not seem to contribute anything beyond conventional interpretations, since the beables remain unknowable, except that the usual arguments establishing the non-locality of such hidden variables no longer apply.
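For contrast, the actual Stern-Gerlach behavior is easy to reproduce in density-matrix form. In this minimal sketch (which models the incoherent z-splitting as projective decoherence, the standard textbook assumption), a beam prepared with definite y-spin loses all y-spin information once it is split along z without coherent recombination:

    import numpy as np

    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

    # |+y> eigenstate of sigma_y: (|0> + i|1>)/sqrt(2)
    psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    print(np.trace(rho @ sy).real)           # +1.0: definite y-spin before the split

    P_up = np.diag([1.0, 0.0])               # projector onto the +z beam
    P_dn = np.diag([0.0, 1.0])               # projector onto the -z beam
    rho_split = P_up @ rho @ P_up + P_dn @ rho @ P_dn   # split without recombining phases
    print(np.trace(rho_split @ sy).real)     # 0.0: the y-spin information is gone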

12.6 Restoration of Determinism?

’t Hooft evidently believes that, in reality, nature is governed by deterministic equations, as in classical mechanics, while wave functions are man-made artifacts. The latter are computational devices that use parts of Nature, enabling us to calculate with some degree of accuracy. Even when we have deterministic equations to work with, we face the limits of computational speed, while Nature does her own calculations much faster than any man-made construction. (p.7.) So-called free will need only imply that we cannot compute ahead of time what someone will do, because Nature will have made the computation much faster, i.e., caused the result to happen. Regardless of what we think of this tidy dismissal of free will, we may consider biological complexity as excusing us from requiring true free will in our discussion of choice-of-measurement paradoxes. Instead one need only hold that it is impossible for an experimenter to modify settings without affecting the wave functions. These wave functions do not appear in whatever deterministic equations truly describe reality. They are just computational devices relative to our knowledge, so there is nothing illogical about their alteration being linked to past events, conspiratorially or otherwise. After all, the act of altering the experiment depends on those past events, or at least is conditioned by those past events.

To take a less controversial example: Fixing the gauge by some gauge condition may generate field configurations that depend in a conspiratorial way on the past or the future, but this has no effect on the physically observable event, just because these are gauge-independent. (pp.7-8.) There is no retro-causality, since there is no effect on the outcomes of physically observable events. Choice of gauge is purely formalistic, and the gauge-dependent aspects of fields are likewise non-physical. ’t Hooft posits that the wave functions, like gauges, are man-made artifacts, fundamentally unobservable. It is quite all right for them to conspire with the past, since they are calculating tools for exercises in conditional probability, not physical entities themselves.

Performing a measurement can, without paradox, retroactively change our knowledge of the past, but only if we abandon a realist interpretation of quantum operators as representing the time evolution of physical observables. As ’t Hooft notes, that particular realist interpretation is unsupported in the Heisenberg picture anyway.

Perhaps we could sustain a more moderate realism with respect to quantum operators, where sometimes they relate to physical observables, but at other times they do not, as they instead select which observables we will measure. We distinguish operators by their use as beables or changeables. Beables do not alter a system, for they are truly just measurements of physical observables. Use of operators as changeables, however, does alter the system, albeit in a purely formalistic sense, retroactively changing the past in the aforementioned non-paradoxical sense. This distinction would admit a realist interpretation only for the beables. Yet even the beables presuppose some measurement selection, so it seems that they too ought to be regarded formalistically.

’t Hooft posits some unknowable deterministic system, and then shows that our paradoxical results regarding wavefunctions do not exclude the existence of such a deterministic system. He evidently allows that the wavefunctions may exhibit non-locality, even to the point of being alterable in the past, in apparent contradiction with relativity. Yet this does not preclude a truly local deterministic physics if the wave functions are merely computational devices, and if the application of operators (at least in the sense of changeables) is purely formalistic, a revision in our assumptions about prior conditions. Nothing we do with our revised knowledge about a wavefunction actually changes the outcome of a physical event. These outcomes are determined by nature in highly complex, yet deterministic ways. Our quantum formalism permits us to give only approximate solutions, which is as good as we can do, given the limitations of measurement accuracy.

If such an assertion regarding the possibility of deterministic complexity behind quantum approximations came from anyone else, we might be dismissive, but ’t Hooft surely knows the quantum theoretical formalism as thoroughly as anyone. If even someone such as he can remain unconvinced that the observable-state formalism should be taken as prescriptive of physical reality, we may at least be given pause.

13. Free Will and the Big Bell Test

The Big Bell Test conducted in 2018 confirmed violation of Bell’s inequality even when measurement choices were made by the ostensibly free-willed decisions of 100,000 anonymous contributors. Contrary to media misrepresentations, this experiment did not prove acausality; at best it merely reconfirmed long-accepted physics, i.e., that there are no local hidden variables. Previous Bell tests made measurement choices based on random numbers generated by unpredictable physical processes such as spontaneous emission, thermal fluctuation, or classical chaos. This required the physical assumption that such processes are non-deterministic, leaving open the loophole that they might be influenced by the same hidden variables that determine measurement outcomes. Enlisting humans closes this freedom-of-choice loophole, since it is assumed that human choices, if not absolutely free, are at least independent of each other and not determined by whatever hidden variables might determine measurement outcomes.
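For readers who want the quantity at stake, here is a minimal sketch of the CHSH form of Bell’s inequality that such tests measure, using the standard settings (an illustrative computation, not the Big Bell Test’s actual data pipeline). The singlet correlation E(a, b) = −cos(a − b) yields |S| = 2√2 at the optimal angles, while exhaustively enumerating every local deterministic strategy shows |S| can never exceed 2:

    import itertools
    import numpy as np

    a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4  # standard CHSH angles

    E = lambda x, y: -np.cos(x - y)          # quantum singlet correlation
    S_quantum = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S_quantum))                    # 2.828... = 2*sqrt(2) > 2

    # A local deterministic strategy pre-assigns +/-1 outcomes A(a1), A(a2), B(b1), B(b2)
    S_classical = max(abs(A1 * B1 - A1 * B2 + A2 * B1 + A2 * B2)
                      for A1, A2, B1, B2 in itertools.product((1, -1), repeat=4))
    print(S_classical)                       # 2: the bound for any local hidden variables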

The Big Bell Test does not prove that human choices are free, but instead assumes this as an interpretive principle, and of course it is invalid circular reasoning to prove what you have assumed. As the investigators admit, there is no way to test superdeterminism, i.e., the hypothesis that all measurement outcomes and choices are fully deterministic. If that were the case, both the outcomes and choices could be determined by a common factor in their shared remote past. Although we have closed the locality loophole in the sense that A’s choice cannot affect B’s or vice versa because they are space-like separated, the supposition of superdeterminism always leaves open the possibility that both choices have a common causal factor in their shared past.

So the Big Bell Test cannot refute ’t Hooft’s strong determinism, but do the investigators make the errors he claims regarding the definition of free choice? Following Bell himself, the authors remark that a choice need only be free in the sense of a free variable in mathematics. That is, it must be independent of all the variables that determine the measurement outcomes in question. The choice need not be absolutely free or absolutely non-deterministic. In fact, the experimenters took into account the fact that humans do not choose numbers with true randomness, but have biases toward asymmetric distributions and toward alternation over repetition. They used games to incentivize more uniform distributions, scoring participants’ unpredictability against a computer algorithm. It could be said that this made the measurement choices influenced by the algorithm.

This eliminated the circularity of assuming that physical random number generators are truly random, i.e., that each measurement choice is thoroughly uncorrelated to all other choices, and indeed all other physical processes, particularly those antecedent to the observed system. The result of the Big Bell Test still allows for non-locality and non-deterministic causality. As noted, it did not empirically prove the reality of human free will, but instead the authors assumed freedom of choice for the purpose of interpreting the experiment. Nonetheless, we may say that the observed violation of Bell’s inequality is at least consistent with the assumption that ostensibly free-willed choices are not correlated with one another, at least when each of them is made by a different willing agent.

The authors claim:

Project outcomes include closing the ‘freedom-of-choice loophole’ (the possibility that the setting choices are influenced by ‘hidden variables’ to correlate with the particle properties…

The supposed closure of the loophole depends on the assumption that human choices are truly free, originating within seconds of reaching our consciousness, and not fully determined by antecedent physical processes, at least none that are confluent with the absolute past of the system being observed. These are highly reasonable assumptions, contradicted only by conspiratorial superdeterminism. Yet we have seen that such conspiracy is less implausible if we discard a realist interpretation of wavefunctions and operators, instead regarding them as formal devices for calculating conditional probability. If a change in a time-dependent wavefunction or operator is merely a revision in our knowledge, not a change in physical reality, then we have not truly introduced non-local causality, and there could well be a local deterministic system, albeit one that is unknowable through wavefunction-operator mechanics.

Yet any such local deterministic account must still reckon with Bell’s theorem, no matter how deeply hidden it may be or how far removed from the wavefunction-operator algebra we use in quantum theory. The only way to circumvent this is to propose a highly contrived, conspiratorial superdeterminism. ’t Hooft has ably argued that there would be nothing implausible about such conspiracy on the level of wavefunctions and operators, if they are merely formalistic constructs of our knowledge. Nonetheless, his beables, or whatever local deterministic structure is at the heart of reality, would be subject to the same limitation, and could escape Bell’s theorem only by conspiratorial coordination between system preparation and selection of measurement variable.

’t Hooft’s argument is more successful in defusing any threat to local determinism, or causality in general, posed by delayed-choice experiments. A purely formalistic account of wavefunctions and operators would cogently explain how they can be revised retroactively without altering physical reality in a way that is causally paradoxical.


Notes

[70] Sakurai, J.J. Modern Quantum Mechanics. Reading, Mass.: Addison-Wesley, 1994. pp.227-29.

[71] For a thorough review of the early EPR polarization experiments, see: Clauser, J.F. and Shimony, A. Rep. Prog. Phys. (1978) 41, 1881.

[72] Bell’s locality criterion is somewhat different from that of Einstein, as he requires only that there be no signal from the change in setting of detector 2 to the event of measuring particle 1. Einstein had a stronger criterion, namely that the real physical state of particle 1 prior to measurement could not receive such a signal. This distinction has little practical import, as all the experiments meeting Bell’s locality criterion also met Einstein’s, i.e., the measurement setting event could not convey a signal to any event in the first particle’s history between its emission and its detection.

[73] Aspect, A., Dalibard, J., Roger, G. Experimental Test of Bell’s Inequalities Using Time-Varying Analyzers. Physical Review Letters (1982) 49(25):1804-1807.

[74] Weihs, G., Jennewein, T., Simon, C., Weinfurter, H., Zeilinger, A. Violation of Bell’s Inequality under Strict Einstein Locality Conditions. Phys. Rev. Lett. (1998) 81:5039-5043.

[75] Bohm, D., Hiley, B.J. The Undivided Universe. London: Routledge, 1993. pp. 293-294.

[76] Wheeler, J.A., in Quantum Theory and Measurement, J.A. Wheeler, W.H. Zurek, eds. Princeton, NJ: Princeton Univ. Press, 1984. pp. 182-213.

[77] Jacques, V., Wu, E., Grosshans, F., Treussart, F., Grangier, P., Aspect, A., Roch, J.-F. Experimental Realization of Wheeler’s Delayed-Choice Gedanken Experiment. Science 315 (2007):966-968. Also: arXiv:quant-ph/0610241v1 (28 Oct 2006).

[78] ’t Hooft, G. On the Free-Will Postulate in Quantum Mechanics. arXiv:quant-ph/0701097v1 (15 Jan 2007).

[79] d’Espagnat, B. Heisenberg Picture and Reality. Foundations of Physics (1992) 22, 1495-1504.


© 2020 Daniel J. Castellano. All rights reserved. http://www.arcaneknowledge.org