Coherence vs decoherence: Comparing alternative approaches




Vistas in Astronomy, Vol. 37, pp. 211-216, 1993. Printed in Great Britain. All rights reserved.

0083-6656/93 $24.00 © 1993 Pergamon Press Ltd

COHERENCE VS DECOHERENCE: COMPARING ALTERNATIVE APPROACHES

Saverio Pascazio

Dipartimento di Fisica, Università di Bari and Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari, Italy

The concept of coherence of a quantum system can be given a quantitative definition within the framework of the Many-Hilbert-Spaces (MHS) theory, proposed by Machida and Namiki in 1980 in order to derive the so-called collapse of the wave function in a measurement process. Let us sketch a framework for this approach, emphasizing its characteristics and the main differences from alternative theories. In a typical double-slit experiment, an incoming wave function is split into two states $\psi_1$ and $\psi_2$, corresponding to the two possible trajectories. If a detector D is placed along the second path, the relative wave function is modified according to $\psi_2 \to T\psi_2$, where $T$ is the detector's "transmission coefficient". The wave function is

$$\psi = \psi_1 + T\psi_2, \tag{1}$$

and the intensity after recombination

$$I \propto |\psi|^2 = |\psi_1 + T\psi_2|^2 = |\psi_1|^2 + |T|^2|\psi_2|^2 + 2\,\mathrm{Re}\,(\psi_1^* T\psi_2) = |\psi_1|^2 + t|\psi_2|^2 + 2\sqrt{t}\,\mathrm{Re}\,(\psi_1^* e^{i\alpha}\psi_2), \tag{2}$$

where $T = |T|e^{i\alpha}$, and $t = |T|^2$ is the transmission probability. These are standard quantum-mechanical formulae: Notice that the behaviour of the macroscopic apparatus D has been ignored, and its effect on the wave function has been "summarized" by introducing the "constant" $T$ in eq. (1). On the other hand, according to the MHS theory, the internal structure of macroscopic apparatuses cannot be neglected, and must be taken into account. The idea is the following (Namiki and Pascazio, 1991): Equations (1) and (2) hold for every single incoming particle. Let us therefore label the incoming particle with $j$ ($j = 1, \ldots, N_p$, where $N_p$ is the total number of particles in an experimental run), and rewrite the transmission coefficient as

$$T \to T_j, \qquad j = 1, \ldots, N_p. \tag{3}$$

In this way, every incoming particle is described by the same wave function $\psi = \psi_1 + \psi_2$ immediately before interacting with the detector. But after the interaction, the detector transmission coefficient $T_j$ will depend on the particular detector state at the very instant of the passage of the particle. Since the detector is subject to random fluctuations (which reflect the internal motion of its elementary quantum constituents and its MHS structure), the same macroscopic state of the detector will correspond to many different microscopic states. Consequently, different incoming particles will be affected differently by the interaction with the detector, and will be described by slightly different values of $T$. Accordingly, eq. (2) becomes

$$I^{(j)} \propto |\psi^{(j)}|^2 = |\psi_1 + T_j\psi_2|^2 = |\psi_1|^2 + |T_j|^2|\psi_2|^2 + 2\,\mathrm{Re}\,(\psi_1^* T_j \psi_2). \tag{4}$$
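As an illustration, here is a minimal numerical sketch of eqs. (1)-(4) in Python. The specific fluctuation model (fixed modulus, Gaussian random phase with spread `sigma`) and all parameter values are our own illustrative assumptions, not prescribed by the MHS theory: they merely show how the single-shot intensity $I^{(j)}$ varies from event to event.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_shot_intensity(psi1, psi2, T_j):
    """Single-particle intensity, eq. (4): I^(j) = |psi1 + T_j psi2|^2."""
    return np.abs(psi1 + T_j * psi2) ** 2

# Branch waves at one point of the screen (illustrative values).
psi1, psi2 = 1.0, 1.0 * np.exp(1j * 0.3)

# Detector transmission coefficients: fixed modulus, randomly fluctuating
# phase with spread sigma. This is a hypothetical model of the detector's
# microscopic fluctuations, chosen only for the sake of the example.
sigma = 0.5
N_p = 10
T = 0.8 * np.exp(1j * rng.normal(0.0, sigma, N_p))

for j, T_j in enumerate(T):
    print(f"particle {j}: I = {single_shot_intensity(psi1, psi2, T_j):.4f}")
```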

After many particles have been detected, the average intensity will be given by

$$\bar{I} \propto \frac{1}{N_p}\sum_{j=1}^{N_p} I^{(j)} = |\psi_1|^2 + \overline{|T|^2}\,|\psi_2|^2 + 2\,\mathrm{Re}\,(\psi_1^*\bar{T}\psi_2), \tag{5}$$

where we have defined the average transmission probability and coefficient

$$\overline{|T|^2} = \frac{1}{N_p}\sum_{j=1}^{N_p} |T_j|^2, \qquad \bar{T} = \frac{1}{N_p}\sum_{j=1}^{N_p} T_j. \tag{6}$$

This leads us to define the parameter

$$\epsilon = 1 - \frac{|\bar{T}|^2}{\overline{|T|^2}}, \qquad 0 \le \epsilon \le 1, \tag{7}$$

that gives a quantitative estimate of the (lack of) coherence between the two branch waves: Indeed, from eqs. (5) and (7), a necessary and sufficient condition for observing no interference (wave-function collapse) is $\bar{T} = 0$, or equivalently $\epsilon = 1$. These conditions correspond to a total loss of coherence between the two branch waves. Equation (5) becomes

$$\bar{I} \propto |\psi_1|^2 + \overline{|T|^2}\,|\psi_2|^2 + 2\sqrt{\overline{|T|^2}}\,\sqrt{1-\epsilon}\;\mathrm{Re}\,(\psi_1^* e^{i\bar{\alpha}}\psi_2), \tag{8}$$

where we have written $\bar{T} = |\bar{T}|e^{i\bar{\alpha}}$. The meaning of this approach should be apparent: In eq. (8), the interference term contains the new factor $\sqrt{1-\epsilon}$, which is absent if the MHS structure of the detector is not taken into account, namely, if its statistical fluctuations are neglected. The standard quantum-mechanical formula for the intensity at the screen (eq. (2)) is recovered from eq. (8) in the limit $\epsilon = 0$, provided we redefine the transmission probability in the presence of fluctuations as $t \equiv \overline{|T|^2}$. Notice that $t$ is the experimentally measured value of the transmission probability, and is necessarily a statistically defined quantity. On the other hand, from eq. (8), we find that interference is lost, and hence the wave-function collapse takes place, in the limit $\epsilon = 1$. For this reason, $\epsilon$ was named the decoherence parameter. It is worth stressing that this approach naturally incorporates the possibility of investigating those situations in which coherence is partially lost or, stated differently, the wave function is partially collapsed. These intermediate cases correspond to the values $0 < \epsilon < 1$. The parameter $\epsilon$ is in fact an order parameter for the wave-function collapse.

We can shed new light on the meaning of the above formulae if we formulate an ergodic hypothesis: The average over many particles going through the detector (denoted hitherto with a bar) will be assumed equal to the statistical ensemble average over all the detector's possible microstates. If we denote the latter with $\langle \cdots \rangle$, our assumption reads

$$\overline{\,\cdots\,} = \langle \cdots \rangle. \tag{9}$$
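To make eqs. (5)-(9) concrete, the following sketch (continuing the hypothetical phase-fluctuation model of the previous example) compares the event average of the single-shot intensities with the right-hand side of eq. (8), and computes the decoherence parameter $\epsilon$ of eq. (7); for completely random phases $\epsilon$ approaches $1 - 1/N_p$, anticipating eq. (10) below.

```python
import numpy as np

rng = np.random.default_rng(1)

def decoherence_parameter(T):
    """Eq. (7): eps = 1 - |T_bar|^2 / mean(|T|^2)."""
    return 1.0 - np.abs(T.mean()) ** 2 / np.mean(np.abs(T) ** 2)

psi1, psi2 = 1.0, 1.0 * np.exp(1j * 0.3)
N_p = 100_000   # drawing T_j from the ensemble enacts the ergodic assumption (9)

for sigma in (0.0, 0.5, 2 * np.pi * 10):   # no, mild, and total phase randomness
    T = 0.8 * np.exp(1j * rng.normal(0.0, sigma, N_p))
    eps = decoherence_parameter(T)          # -> ~1 - 1/N_p for random phases

    # Event average of the single-shot intensities, l.h.s. of eq. (5).
    I_bar = np.mean(np.abs(psi1 + T * psi2) ** 2)

    # R.h.s. of eq. (8): interference term damped by sqrt(1 - eps).
    t_bar = np.mean(np.abs(T) ** 2)
    alpha_bar = np.angle(T.mean())
    I_8 = (np.abs(psi1) ** 2 + t_bar * np.abs(psi2) ** 2
           + 2 * np.sqrt(t_bar) * np.sqrt(1 - eps)
           * np.real(np.conj(psi1) * np.exp(1j * alpha_bar) * psi2))

    print(f"sigma={sigma:8.3f}  eps={eps:.6f}  I_bar={I_bar:.4f}  eq.(8)={I_8:.4f}")
```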

Observe that the above ergodic assumption makes sense only if $N_p$, the total number of particles in an experimental run, is very large. In turn, this clearly calls for a statistical interpretation of the wave-function collapse, and is one of the main characteristics of the present approach to the quantum measurement problem: It makes no sense to speak of wave-function collapse for a single particle.

It is possible to show (Machida and Namiki, 1980; Namiki and Pascazio, 1991) that a random sequence of transmission coefficients $T_j$, for which $\bar{T} = 0$, or alternatively $\epsilon = 1$, can be obtained for a detector with a huge number of degrees of freedom. On the other hand, for a completely random sequence of complex numbers, one has $\sum_{j \neq k} T_j T_k^* \simeq 0$, so that, by eqs. (6) and (7), one obtains

$$\epsilon = 1 - \frac{|\bar{T}|^2}{\overline{|T|^2}} \simeq 1 - \frac{1}{N_p}. \tag{10}$$

Therefore, there appear to be two distinct criteria for the complete loss of quantum-mechanical coherence. First, one needs a "good" detector, characterized by a huge number of degrees of freedom. Second, one must perform the experiment many times ($N_p \gg 1$), in order to be able to ascertain whether the quantum-mechanical coherence is lost. If the latter requirement is not satisfied, in general one may not be able to give an operationally meaningful definition of (loss of) coherence, as shown by eq. (10) above.

The analysis we have just sketched is easily extended to the density matrices of the object particle and the detection system. It is very important to realize that the loss of coherence (collapse of the wave function) is derived without taking any partial trace over the D-states or the environment. Indeed, coherence is lost on the average, after the same experiment has been performed many times, within the limits of applicability of the ergodic assumption (9). This constitutes probably the most important difference between our approach and other theories. For example, in the so-called environment theory, put forward by Zurek (1981, 1982), the final density matrix describing the object system, the measuring apparatus and the "environment" is given by

$$\rho_F = \sum_k |c_k|^2\, |\psi_k\rangle\langle\psi_k| \otimes |\phi_k\rangle\langle\phi_k| \otimes |E_k\rangle\langle E_k| + \sum_{i,\,j \neq i} c_i c_j^*\, |\psi_i\rangle\langle\psi_j| \otimes |\phi_i\rangle\langle\phi_j| \otimes |E_i\rangle\langle E_j|, \tag{11}$$

where $\psi_k$, $\phi_k$ and $E_k$ are states of the particle, the detection system and the environment, respectively. The operator $\rho_F$ is nothing but a projection operator representing a pure state with non-vanishing off-diagonal components (the last term on the r.h.s.). This can never yield the wave-function collapse as a physical process. Zurek then proposes to "ignore" the uncontrolled (and unmeasured) states of the environment, under the assumption that the environment states corresponding to different outcomes of the measurement are orthogonal, i.e. $\langle E_i|E_j\rangle = \delta_{ij}$: In practice, this corresponds to tracing over its degrees of freedom. Obviously, the off-diagonal terms of the density matrix disappear, and quantum coherence is lost. It is difficult, for us, to accept this point of view lightheartedly. Tracing over the environment states can be regarded as a useful and convenient mathematical tool, but cannot be taken as a pretext to derive the loss of quantum coherence. If we have to accept new assumptions in order to justify, a posteriori, von Neumann's rules, then we are much more content with the original von Neumann-Wigner approach, which has fewer postulates and should therefore be preferred on the basis of Occam's razor.
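For comparison, here is a toy two-outcome sketch (the dimensions and amplitudes are our own hypothetical choices) that builds the pure-state projector of eq. (11) and shows that tracing over mutually orthogonal environment states deletes its off-diagonal block; this is precisely the formal step discussed above.

```python
import numpy as np

# Toy model: two outcomes; particle, apparatus and environment are all qubits.
c = np.array([0.6, 0.8])          # amplitudes, |c0|^2 + |c1|^2 = 1
basis = np.eye(2)                 # |0>, |1> in each factor

# Total state  sum_k c_k |psi_k>|phi_k>|E_k>  with perfectly correlated triples.
state = sum(c[k] * np.kron(np.kron(basis[k], basis[k]), basis[k]) for k in range(2))
rho_F = np.outer(state, state.conj())      # pure-state projector of eq. (11)

# Partial trace over the environment (last factor): group dims as (4, 2).
rho = rho_F.reshape(4, 2, 4, 2)
rho_SA = np.einsum('ikjk->ij', rho)        # sum over the environment index

print("off-diagonal element of rho_F (nonzero):", np.round(rho_F[0, 7], 3))
print("reduced particle+apparatus matrix:\n", np.round(rho_SA, 3))
# With <E_i|E_j> = delta_ij the reduced matrix is diagonal in the pointer
# basis: the coherence has been discarded by the trace, not destroyed dynamically.
```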

The problem of the "decohering histories" proposed by Gell-Mann and Hartle (1990) is somewhat more subtle. It seems to us that the standard Copenhagen interpretation cannot be applied to their theory. Moreover, in our opinion, the very introduction of the wave function of the universe (Hartle and Hawking, 1983) goes beyond the quantum formalism. If one is willing to accept such physical concepts as the $\Psi$ of the universe within the present theoretical quantum-mechanical framework, one must also be able to construct (or at least conceive) a probability ensemble whose distribution is $|\Psi|^2$. This is hard to imagine, because the universe is unique. The so-called many-worlds interpretation (Everett, 1957) is of no help, in our opinion, because it simply postulates a world splitting upon observation, recasting von Neumann's projection rules into the equivalent assumption that the probability of finding a particular world is given by the standard quantum values. Why should we accept such an explanation? If the price to pay in order to understand the loss of quantum coherence is so high, why should we not be happy with the standard Copenhagen interpretation? Once more, Occam's criterion forces us to look for an alternative solution. Analogous arguments can be put forward to counter the approach proposed by Menski (1979, 1990). In that case, one should even accept ideas like "self-measurement" of the universe in order to explain the loss of quantum coherence. We believe that the very ideas underlying the concept of a single $\Psi$ describing the whole universe, although fascinating, are difficult to justify. At present, no theoretical framework can account for the consequences they entail. Moreover, no experimental proof supports them, and, to the author's knowledge, no experimental test has been conceived to check their validity. Experiments are just what makes physics different from philosophy, and we firmly believe that the issue of quantum measurements should be settled only on experimental grounds.

We conclude by recalling that the MHS approach yields quantitative predictions that seem to be in agreement with some delicate experiments of neutron interferometry recently performed by the Vienna group (Rauch et al., 1990; Summhammer et al., 1987). By generalizing Goldberger's formula for the refraction index in neutron optics (Goldberger and Seitz, 1947), it is possible to prove that the visibility of the interference pattern, when the neutron absorption along one of the two paths is very high, is considerably lower than the standard value (Namiki and Pascazio, 1990; Nabekawa et al., 1992). This effect is amenable to experimental check and constitutes the most important difference between the MHS approach and other theories. To the author's knowledge, it is the first time that a theory can be discriminated experimentally from von Neumann's and other approaches, after many years of philosophical discussions. The problem of measurement in quantum mechanics belongs to physics, and not to philosophy.

REFERENCES

Everett III, H. (1957) Rev. Mod. Phys. 29, 454.
Gell-Mann, M. and Hartle, J.B. (1990) In Complexity, Entropy, and the Physics of Information, SFI Studies in the Sciences of Complexity, Vol. VIII, Ed. Zurek, W.H., Addison-Wesley, Reading; (1990) In Proceedings of the Third International Symposium on the Foundations of Quantum Mechanics, Eds. Kobayashi, S. et al., p. 321, Physical Society of Japan, Tokyo.
Goldberger, M.L. and Seitz, F. (1947) Phys. Rev. 71, 294.
Hartle, J.B. and Hawking, S.W. (1983) Phys. Rev. D28, 2960.
Machida, S. and Namiki, M. (1980) Prog. Theor. Phys. 63, 1457; 1833.
Menski, M.B. (1979) Phys. Rev. D20, 384; (1990) Phys. Lett. 146A, 479.
Nabekawa, Y., Namiki, M. and Pascazio, S. (1992) Phys. Lett. A, in print.
Namiki, M. and Pascazio, S. (1990) Phys. Lett. 147A, 430.
Namiki, M. and Pascazio, S. (1991) Phys. Rev. A44, 39.
Rauch, H., Summhammer, J., Zawisky, M. and Jericha, E. (1990) Phys. Rev. A42, 3726.
Summhammer, J., Rauch, H. and Tuppinger, D. (1987) Phys. Rev. A36, 4447.
von Neumann, J. (1932) Mathematische Grundlagen der Quantenmechanik, Springer Verlag, Berlin.
Zurek, W.H. (1981) Phys. Rev. D24, 1516; (1982) Phys. Rev. D26, 1862.



Question (Y.S. Mashkevich): Where do the many Hilbert spaces appear from? Is this connected with the von Neumann construction of the infinite product of Hilbert spaces?

Answer: The description in terms of many Hilbert spaces arises from the macroscopicity of the measuring apparatus. A macroscopic system, such as a measuring device, is in fact an open system, and therefore has neither definite energy nor a definite number of elementary constituents. Observe that a system made up of a definite number $N$ of elementary constituents is described by the direct-product space $\mathcal{H}^{(N)}$ of the component Hilbert spaces, $\mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \cdots$. The situation we are endeavoring to describe is different: The macroscopicity of the apparatus, which is an open system, forces us to consider a sum over $N$ with an appropriate probabilistic weight. Consequently, we have a direct sum of direct products of component spaces, as sketched below. The $N \to \infty$ limit is delicate and leads to a continuous direct sum of many Hilbert spaces.
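Schematically (the weight symbol $w_N$ is our notation, not the author's), the apparatus space would read

$$\mathcal{H}_{\mathrm{app}} = \bigoplus_N \mathcal{H}^{(N)}, \qquad \mathcal{H}^{(N)} = \mathcal{H}_1 \otimes \mathcal{H}_2 \otimes \cdots \otimes \mathcal{H}_N,$$

with the physical states distributed over the $N$-sectors according to the probabilistic weight $w_N$; in the $N \to \infty$ limit the direct sum becomes continuous.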

Question (V. Braginsky): Please comment on the experimental program of Prof. H. Walther. May we use your calculations for his program?

Answer: You are referring to some experiments recently proposed by Scully and Walther (Phys. Rev. A39 (1989) 5229). In the case they analyzed, an object particle is split into two states, one of which interacts with a maser. The two states are then recombined and interference is observed. Depending on the initial state of the maser, the situation can be drastically different: For instance, under suitable conditions, if the maser is initially in a number state no interference can be observed. On the other hand, if the maser is initially in a coherent state, interference is perfect. The above-mentioned experiment is one of the most interesting proposals of the last few years. The MHS approach can tackle such a situation, and in fact a calculation has been performed (Found. Phys. 22 (1992) 451). The conclusion is the following: Even though interference disappears, the state of the total system (particle + maser) remains pure, and therefore quantum-mechanical coherence is fully preserved. This conclusion holds true in both cases considered by Scully and Walther. Loss of interference does not imply decoherence.
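A minimal numerical illustration of this dichotomy (the Fock-space truncation and the amplitude $\alpha = 5$ are our own hypothetical choices, not taken from the Scully-Walther paper): the interference visibility is governed by the overlap between the two possible final maser states. Depositing a photon in a number state produces an orthogonal state (visibility 0), while depositing one in a large-amplitude coherent state barely changes it.

```python
import numpy as np

dim = 60                                  # Fock-space truncation (hypothetical)
n_vals = np.arange(dim)

# Creation operator a^dagger in the truncated Fock basis: <n+1|a^dag|n> = sqrt(n+1).
a_dag = np.diag(np.sqrt(n_vals[1:].astype(float)), -1)

def visibility(chi):
    """|<chi| a_dag |chi>| / ||a_dag chi||: overlap between the initial maser
    state and the (normalized) state after one photon has been deposited."""
    chi2 = a_dag @ chi
    return np.abs(np.vdot(chi, chi2)) / np.linalg.norm(chi2)

# Number state |n=5>: the deposited photon marks the path unambiguously.
fock = np.zeros(dim)
fock[5] = 1.0
print("number state:  ", visibility(fock))   # -> 0.0, no interference

# Coherent state |alpha>, built recursively: c_{n+1} = c_n * alpha / sqrt(n+1).
alpha = 5.0
coh = np.empty(dim)
coh[0] = np.exp(-alpha**2 / 2)
for n in range(dim - 1):
    coh[n + 1] = coh[n] * alpha / np.sqrt(n + 1)
print("coherent state:", visibility(coh))    # -> ~0.98, interference survives
```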

Question (J. Hartle): Dr. Itano described experiments in which there were time sequences of measurements on a single atom. Presumably the wave packet is reduced at each measurement in the sequence. How are such experiments described in the many-Hilbert-spaces approach? Does the Hilbert space of the apparatus change from moment to moment?

Answer: The results obtained by Itano in his beautiful experiment can be explained without making use of the concept of "reduction of the wave function". A fully quantum-mechanical calculation, based on unitary evolutions of the total system, is perfectly able to explain the experimental results. See Petrosky et al., Phys. Lett. A151 (1990) 109; Ballentine, Phys. Rev. A43 (1990) 5156; Inagaki et al., Phys. Lett. A166 (1992) 5. I can see no need to use the MHS approach in order to describe this experiment. One Hilbert space is enough. Your last question is subtle and leads us to the very core of the problem. My answer is: No, if you consider a single event. Yes, if you consider several events. Notice that, if we exclude the (trivial) case of a measurement performed on a system that is initially in an eigenstate of the observable to be measured, a "measurement" always consists of an accumulation of many events, pertaining to the "same" experimental situation. Since a macroscopic system, such as a measuring device, cannot be put in exactly the same microscopic state every time the "same" experiment is repeated, we are led to (and in fact must) consider many Hilbert spaces in order to describe the measurement process. Incidentally, I would stress that the state of the rest of the Universe is suppressed in the above description.