Computer Physics Communications 40 (1986) 181–188
North-Holland, Amsterdam
QUANTUM FIELD THEORY AND SUPERCOMPUTERS *

K.J.M. MORIARTY
Institute for Computational Studies, Department of Mathematics, Statistics and Computing Science, Dalhousie University, Halifax, Nova Scotia B3H 4H8, Canada

and

C. REBBI
Department of Physics, Brookhaven National Laboratory, Upton, NY 11973, USA

Received 24 October 1985
We discuss the computational requirements for meaningful numerical calculations in the theory of elementary particles. We consider in some detail the problem of determining the spectrum of masses in quantum chromodynamics. We illustrate current calculational techniques and approximations and present the results of a recent calculation of very large scale.
* Talk presented at the New Force in Supercomputers Executive Seminar held at the Beach Plaza Hotel, Monte Carlo, Monaco, 18–20 September 1985.

0010-4655/86/$03.50 © Elsevier Science Publishers B.V. (North-Holland Physics Publishing Division)

1. Introduction and outline of the theory

Numerical simulations of quantum field theories are among the applications which demand the largest computational resources and which can profit most from the vectorized and parallel architectures of modern supercomputers. The big computational requirements follow from the four-dimensional structure of the problems considered and also from the fact that all the degrees of freedom of the field, over a vast range of scales, appear to be relevant for the dynamics of the process. The goal of the calculations is to evaluate quantum mechanical expectation values of several observables. These in turn relate to important physical properties of the particles described by the assumed quantum field theory. Quantum mechanical expectation values are given by averages over all possible four-dimensional space–time evolutions of the field, weighted by the exponential of the negative of the action:

⟨O⟩ = Z⁻¹ ∫ Dφ O(φ) exp{−S(φ)},   (1)

Z = ∫ Dφ exp{−S(φ)}.   (2)

(We are using units where Planck's constant ħ = 1. Otherwise, the weight factor would be written as exp{−S/ħ}. This puts into evidence that in the classical limit ħ → 0 the value of ⟨O⟩ is determined by those evolutions which minimize the action, i.e. solve the classical equations of motion. Also, an analytic continuation from Minkowskian to Euclidean space–time, t → −it, to be undone at the end of the calculations if necessary, has been performed in order to achieve a better defined measure.) In principle, the functional integrals in eqs. (1) and (2) are over all configurations of the fields, including oscillations of arbitrarily high wavenumbers; but this leads to well-known ultraviolet divergences, which must be regularized to give a meaning to the integrals. In numerical calculations the regularization is performed by replacing the continuum of points of space–time with the vertices of a lattice, normally a hypercubical lattice with lattice spacing a [1–3]. In the process of renormalization the lattice spacing a is sent to zero, while correspondingly readjusting the coupling constant which governs the strength of the interaction, so as
to make all physical quantities approach finite continuum limits. In practice, one proceeds to values of a small enough that the lattice gives a sufficiently accurate approximation to the continuum limit. In the theory of strong interactions it has been found that one is reasonably close to the continuum limit when a ≲ 0.1 fm (1 fm = 10⁻¹³ cm), whereas the typical size governing low-energy hadronic phenomena (strongly-interacting particles, i.e. baryons and mesons, are called hadrons) is of the order of 1 fm. Thus, we see that lattices extending for more than 10 sites in each of the four directions of space–time are needed for a sufficiently accurate description of hadrons. A lattice 16³ × 32, such as the one considered in the calculations which will be reported on later, has 131 072 sites and 524 288 links joining the sites.

In quantum chromodynamics (QCD), the gauge theory which is believed to describe strong interactions at a fundamental level, the dynamical variables are 3 × 3 complex, unitary unimodular matrices (elements of the color group SU(3)), which are defined over the oriented links of the lattice: U_{xμ} ∈ SU(3) is defined over the link from x to x + μ̂a. Thus, the lattice mentioned above requires the storage (not necessarily in fast memory) of 524 288 3 × 3 complex matrices, i.e. over 9 400 000 real variables, to record a configuration. There are also fermionic degrees of freedom, fields ψ and ψ̄ transforming like triplets and antitriplets of color SU(3), defined over the sites of the lattice. These fields describe the quarks and antiquarks which, together with the gluons described by U_{xμ}, form the building blocks of QCD. Fermionic fields, which are anticommuting elements of a Grassmann algebra rather than ordinary c-numbers, are not easy to handle by computer. Fortunately, the integrals over the fermionic degrees of freedom can be done explicitly.
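The bookkeeping above is easy to verify; a short sketch (Python purely for illustration, counting 18 real numbers per 3 × 3 complex matrix):

```python
# Counts sites, links and stored real variables for a hypercubic
# lattice with periodic boundaries and one link per site per direction.
def lattice_storage(dims, reals_per_link=18):
    sites = 1
    for n in dims:
        sites *= n
    links = len(dims) * sites        # one link per direction per site
    return sites, links, links * reals_per_link

# The 16^3 x 32 lattice discussed in the text:
sites, links, reals = lattice_storage((16, 16, 16, 32))
print(sites, links, reals)  # 131072 524288 9437184
```

The last figure, 9 437 184 real variables, is the "over 9 400 000" quoted in the text.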
This introduces, however, additional factors in the measure for the gauge degrees of freedom U_{xμ}, which render the subsequent numerical simulation over the U_{xμ} variables computationally much more involved. The motivation for performing numerical calculations is that all the above degrees of freedom of the field appear to take part in the dynamics, with nonlinear couplings which interconnect the
modes of the fields at all scales, from the ultraviolet cutoff a up to the infrared cutoff, given by the overall size of the space–time volume considered. There are instances, in quantum field theories, where the contributions from the various modes of the field can be neatly organized into a perturbative expansion. Then predictions from the theory can be derived by analytical means or, if the expansion is carried to sufficiently high order, again with the help of the computer, but resorting to different techniques. However, there are other problems, especially in the theory of strong interactions, which cannot be treated by perturbative methods. There the numerical integration over all degrees of freedom, which is performed by methods of stochastic sampling, also called Monte Carlo simulations, appears as the only viable alternative for obtaining quantitative results [1–3]. The very large number of degrees of freedom which must be handled by computer, together with the complexity of the operations, explains why such calculations require the power of supercomputers and have become feasible only during the last few years [4].

In an actual calculation one stores all the U_{xμ} matrices in memory. This defines a configuration C of the system. Then, a specially designed algorithm, based on the extraction of pseudorandom numbers, is used to upgrade, one by one, all of the U_{xμ} dynamical variables, U_{xμ} → U′_{xμ}, thus producing a new configuration C′. This procedure, called a Monte Carlo iteration, is then repeated many times, giving origin to a stochastic sequence

C → C′ → C″ → ⋯ .   (3)
The algorithm is such that, in statistical equilibrium, the probability of encountering any configuration C in the sequence in eq. (3) becomes proportional to the measure, exp{−S}, with which it would enter the exact definition of the quantum averages (eqs. (1) and (2)). Then the expectation value of the observable O is approximated by an average taken over several configurations in the sequence:

⟨O⟩ ≈ (1/n) Σ_{i = i₀+1}^{i₀+n} O(C_i).   (4)
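A toy analogue of the procedure in eqs. (3) and (4) can be sketched for the much simpler case of two-dimensional U(1) lattice gauge theory, where the link variables are phases exp(iθ) instead of SU(3) matrices. Everything below (lattice size, β, the update step) is illustrative only, not taken from the paper's code:

```python
import math, random

random.seed(7)
L, beta = 4, 1.0
# theta[x][y][mu]: angle of the link leaving site (x, y) in direction mu.
theta = [[[0.0, 0.0] for _ in range(L)] for _ in range(L)]

def plaq(x, y):
    """Plaquette angle at (x, y), periodic boundaries."""
    return (theta[x][y][0] + theta[(x + 1) % L][y][1]
            - theta[x][(y + 1) % L][0] - theta[x][y][1])

def action():
    return beta * sum(1.0 - math.cos(plaq(x, y))
                      for x in range(L) for y in range(L))

def sweep():
    """One Monte Carlo iteration C -> C': propose a change on every link
    and accept with the Metropolis probability min(1, exp(-dS)).  For
    clarity the full action is recomputed each time; production codes
    use only the few neighboring plaquettes that contain the link."""
    for x in range(L):
        for y in range(L):
            for mu in range(2):
                old, s_old = theta[x][y][mu], action()
                theta[x][y][mu] = old + random.uniform(-1.0, 1.0)
                if random.random() >= math.exp(min(0.0, s_old - action())):
                    theta[x][y][mu] = old        # reject the change

for _ in range(50):                              # equilibration
    sweep()
samples = []
for _ in range(100):                             # measurement, cf. eq. (4)
    sweep()
    samples.append(sum(math.cos(plaq(x, y))
                       for x in range(L) for y in range(L)) / L ** 2)
avg_plaquette = sum(samples) / len(samples)      # estimate of <cos(plaquette)>
```

The structure, propose a change, weigh it by exp(−ΔS), accept or reject, is the same in the real 4D SU(3) simulation; only the cost per link update grows to the few thousand floating point operations quoted below.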
The upgrading procedure is always computationally very intensive. But its degree of complexity depends on whether the dynamics of the fermionic degrees of freedom (quarks) is also taken to influence the measure of the gauge degrees of freedom (U_{xμ}), or whether those dynamical effects from the fermions are neglected as a first approximation. The whole algorithm aims at simulating, within the computer, the effects of quantum fluctuations. In a theory like QCD one expects that many of the most relevant dynamical properties are already a consequence of the quantum fluctuations of the gauge field alone. For example, it has been demonstrated by numerical simulations [5–7] that the interaction with the quantized gauge field determines a linearly increasing confining potential binding quark and antiquark. However, the quantum fluctuations of the fermionic fields complicate the picture, as they can give origin to processes (called virtual processes) where a quark–antiquark pair is temporarily created out of the vacuum, without enough energy to materialize, and then annihilated again.

The simulation of the quantum fluctuations of the gauge field alone can be done by a local algorithm, i.e., an algorithm where the updating of the variable U_{xμ} requires the values only of the U variables associated with the neighboring links. A few thousand floating point operations are required for the upgrade U_{xμ} → U′_{xμ}. The quantum fluctuations of the fermionic field introduce nonlocal factors in the measure. An exact upgrading U_{xμ} → U′_{xμ} would then involve consideration of all the U variables in the lattice, through the calculation of the variation of the determinant of a matrix with a number of rows and columns a few times larger than the number of lattice points. Even though this matrix is very sparse, the calculation of its determinant clearly cannot be done at every upgrading step. Thus an exact simulation of the full theory, including dynamical fermions, cannot be done.
Ways have been found to incorporate the effects of dynamical fermions approximately [8–11]; one of the current research efforts is to understand how good such approximations can be. As stated earlier, there are many physical quantities which are not very sensitive to the inclusion of dynamical fermions and which can be calculated neglecting all the complications that fermions introduce in the upgrading process. The term "quenched calculations" has prevailed to characterize such computations. Quenched calculations are rather well established. The time required to upgrade a full configuration, even with a lattice as large as 16³ × 32 (or larger), with modern supercomputers is measured in seconds, and simulations involving thousands of configurations in the sequence, leading to results with a high degree of statistical accuracy, can be performed. Non-quenched calculations are much more demanding. The number of operations required to upgrade a configuration is a few orders of magnitude larger and, since the inclusion of the dynamical effects of fermions is in any case approximate, consensus has not yet been reached on how accurate the approximation can be. So, non-quenched calculations are at an earlier stage, involving generally smaller lattices, but progress is being made and interesting results are being obtained.

Whatever the type of simulation, quenched or non-quenched, the structure of the calculation is ideally suited for vectorized or parallel computation. The same set of operations has to be performed repeatedly on a large number of variables; the flow of data in and out of memory is very regular and organized. The vector length for vectorization can be taken quite long; sustained performance rates in excess of 50% of peak performance can be obtained (see, for instance, what was reported in ref. [4]). Quenched calculations have been performed to obtain information on the potential binding heavy quarks, on the masses of quantum mechanical excitations of the pure gauge medium, on the temperature at which, because of screening effects due to thermal fluctuations, quarks become deconfined, on the masses of mesons and baryons and on other important physical quantities.
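The "same operations on long vectors of variables" structure can be made concrete with a small sketch, with NumPy arrays standing in for the machine's vector pipelines; the shapes and data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_links = 1024          # the vector length can be taken very long

# One 3x3 complex matrix per link, and the neighboring-link products
# ("staples") each is to be multiplied with (random stand-ins here).
U = rng.standard_normal((n_links, 3, 3)) + 1j * rng.standard_normal((n_links, 3, 3))
S = rng.standard_normal((n_links, 3, 3)) + 1j * rng.standard_normal((n_links, 3, 3))

# Link-by-link (scalar) version:
scalar = np.stack([U[i] @ S[i] for i in range(n_links)])

# Vectorized version: one batched 3x3 matrix multiply over all links,
# which is the regular data flow that vector hardware exploits.
batched = np.einsum('nij,njk->nik', U, S)

assert np.allclose(scalar, batched)
```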
Here we would like to outline in some detail a large-scale computation recently performed [12] to evaluate masses in the hadron spectrum, because it serves well to illustrate the present possibilities and limitations of supercomputer applications to high-energy theory and what we may expect from the future.
2. Hadron mass calculations

The strategy to extract masses from the first principles of the theory by numerical calculations has been established for some time [13], but only more recently has it become possible to perform calculations on lattices sufficiently large as to give trustworthy results for the masses [12,14]. An ingredient of the calculation is the generation of the sequence of configurations C_i (within the quenched approximation or not). The propagation of the quarks over the lattice, in the presence of the gauge field U_{xμ}, is then given by an equation

Σ_{x′c′} [D(U) + m]_{xc,x′c′} G_{x′c′,x₀c₀} = δ_{xx₀} δ_{cc₀},   (5)

where x, x′, x₀ denote coordinates of lattice points, c, c′, c₀, ranging from 1 to 3, denote color indices, spin and other possible indices (flavor) of the quark fields are left implicit, D(U) is a very sparse matrix (it connects only lattice points differing by 1 lattice spacing in any direction) linear in the variables U_{xμ} or U†_{xμ}, and m is the mass of the quarks. D + m is the matrix which transcribes onto the lattice the Dirac operator of the continuum theory. G is the propagator of the quark over the definite configuration specified by U_{xμ} (clearly G coincides with the inverse of the Dirac operator: G = (D + m)⁻¹). From the propagators of the quarks one constructs propagators for the mesons and baryons. For instance, a meson could be created by a bilinear gauge invariant source operator ψ̄_{x₀c₀}ψ_{x₀c₀} and annihilated at a different point on the lattice by Σ_c ψ̄_{xc}ψ_{xc}. The propagation of the meson is then described by the Green's function

G^(m)_{x,x₀} = ⟨Σ_c ψ̄_{xc}ψ_{xc} ψ̄_{x₀c₀}ψ_{x₀c₀}⟩,   (6)

which can be deduced from the propagators of the quarks:

G^(m)_{x,x₀} = Σ_{cc₀} ⟨G_{xc,x₀c₀} G*_{xc,x₀c₀}⟩_{C_i}.   (7)

The sum is over the configurations in the sequence generated by the Monte Carlo simulation. Because the propagation occurs in Euclidean space–time, the Green's functions exhibit an exponential rate of decay, and the coefficient in the exponential relates to the mass of the propagating meson or baryon. This is illustrated in fig. 1 (from ref. [12]), which displays three typical propagators (for the π meson, the ρ meson and the nucleon) evaluated on a periodic 16³ × 32 lattice. (A few clarifying remarks: the exponential rate of decay is apparent in the intermediate range; the final levelling off is determined by the periodic boundary conditions, by which the propagator turns out to be the sum of a decreasing exponential and an increasing one; the plot is not for G_{x,x₀} but rather for the propagator summed over spatial locations, which gives a clearer exponential decay; the oscillations from site to site are a subtlety related to the treatment of spin in the model, which need not concern us further.)

[Fig. 1. Numerical results and fits to three typical propagators (π, ρ and nucleon), plotted against t/a.]

We performed a quenched calculation on a 16³ × 32 lattice, using the CDC-ETA CYBER 205 supercomputer, with 2 Mwords of memory and 2 vector pipelines, at the Institute for Computational Studies, Colorado State University, Fort Collins,
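The fits shown in fig. 1 rest on the exponential decay just described: on a periodic lattice the summed propagator behaves like exp(−mt) + exp(−m(T − t)), and the local logarithmic slope (the "effective mass") plateaus at the particle mass. A sketch with synthetic data standing in for a measured propagator (all numbers illustrative):

```python
import math

m_true, T = 0.7, 16      # hypothetical mass (lattice units), time extent

# Synthetic zero-momentum propagator: a decaying exponential plus the
# "backward" image produced by the periodic boundary conditions.
G = [math.exp(-m_true * t) + math.exp(-m_true * (T - t)) for t in range(T)]

# Local effective mass log(G(t)/G(t+1)): it sits near m_true at small t
# and drops as the backward image flattens the propagator towards T/2.
m_eff = [math.log(G[t] / G[t + 1]) for t in range(T // 2)]
```

In a real calculation one fits the intermediate plateau region, which is exactly what the straight-line fits in fig. 1 do.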
Colorado. The initial lattice configuration had already been equilibrated by a few thousand Monte Carlo iterations in the course of previous investigations [7]. We worked at a value of the coupling parameter (β = 6) where, in the relationship which is established between β and a by the renormalization procedure, the lattice spacing is 0.1 fm. As mentioned earlier, this is where several independent calculations have found the onset of the continuum limit to take place (in the quenched approximation). Our lattice thus extends for about (1.6 fm)³ × (3.2 fm/c) (c being the speed of light). We calculated the propagators for the quarks in 5 different configurations, separated each one from the next by 100 Monte Carlo iterations, i.e., by 100 further configurations in the sequence. Three different, widely separated source points have been used within each configuration. Thus the total number of propagators entering into the averaging in eq. (7) is 15.

A discussion of the Monte Carlo upgrading algorithm and related computational requirements has been presented elsewhere (see e.g. refs. [4,15]). In the calculation under consideration it turns out that the largest amount of computational resources is spent on the calculation of the quark propagators, even if this is performed only on a rather small subset of configurations and with a limited number of source points. For each source point (x₀, c₀) one needs to solve the system of linear equations in eq. (5). It is a system of 6 × 16³ × 32 = 786 432 simultaneous linear equations. The matrix is very sparse, and this allows one to use powerful iterative procedures. We have used the method of conjugate gradients, applied after a suitable similarity transformation of the matrix, rooted in the physical nature of the problem (it is a gauge transformation), which reduces by about 25% the number of nontrivial elements.
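The conjugate gradient iteration needs only a routine that applies the sparse matrix to a vector, which is what makes a system of nearly 800 000 equations tractable. A minimal sketch for a small symmetric positive-definite stand-in (the actual lattice Dirac system is first transformed to a suitable form; the toy matrix below is purely illustrative):

```python
def cg(apply_A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for symmetric positive-definite A, given only a
    routine apply_A(v) returning the matrix-vector product A v."""
    x = [0.0] * len(b)
    r = list(b)                      # residual r = b - A x  (x = 0)
    p = list(r)
    rr = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rr / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rr_new = sum(ri * ri for ri in r)
        if rr_new < tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

# Toy system: a tridiagonal "lattice Laplacian plus mass" matrix,
# applied without ever being stored densely.
n, m2 = 8, 0.5
def apply_A(v):
    return [(2.0 + m2) * v[i]
            - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0) for i in range(n)]

b = [1.0] * n
x = cg(apply_A, b)
residual = max(abs(bi - Avi) for bi, Avi in zip(b, apply_A(x)))
```

The point of the sketch is the access pattern: each iteration is one sparse matrix application plus a few long vector operations, precisely the workload that vectorizes well.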
We found that the algorithm of conjugate gradients, as applied to our problem, vectorizes extremely well, achieving a 130 Mflops sustained rate in 32-bit arithmetic with the 2 Mword 2 vector pipeline CDC CYBER 205 [16]. 32-bit arithmetic turned out to be sufficient, and was therefore used, but the code has been developed and tested with full 64-bit precision.

A major problem we encountered has to do with the required I/O operations. The variables needed for the calculation, i.e. the U_{xμ} matrices, the propagator G and a few auxiliary arrays with similar dimensionality, cannot all fit simultaneously in fast memory and must be repeatedly I/O'ed to disk. The data structure allows one to do this in a very neat fashion. Basically the whole configuration is time-sliced and individual subsets of data, corresponding to a definite time coordinate, are brought in and out of fast memory as needed. Time slicing had to be done also for the upgrading procedure, and there it was found to be no problem. However, in the calculation of the propagators, the number of operations to be performed per variable brought into fast memory is relatively smaller, thus the stream of required I/O operations ends up being the real limiting factor for the speed of the computation. We tried, of course, to reduce the I/O operations to the bare minimum, and a special single-pass version of the conjugate gradient algorithm was also developed for the purpose [16]. Nevertheless the ratio of elapsed versus CP time could not be reduced below a factor of 6 or 7. This puts into evidence the need of very fast access to large mass storage areas, for
[Fig. 2. Numerical result for the mass of the lowest π-like particle as a function of the square root of the quark mass, and interpolation.]
a variety of large-scale calculations, and is a factor which should be kept in mind in the development of future machines.

The results of the calculation are illustrated in figs. 2 and 3 and in table 1. From the calculation the nature of the pion, as the Goldstone boson associated with the spontaneous breaking of chiral symmetry, emerges very clearly from the linear dependence of m_π versus √m_q (fig. 2). Of the other curves for the masses (fig. 3) it is to be noticed that all masses (but the π mass) tend to finite values as m_q → 0, with intercepts reasonably well determined, and that the ratios of the baryon masses to the meson masses change from numbers ≈ 3/2 for heavy quarks (additivity in m_q) to numbers ≈ 1 for light quarks, paralleling experimental observations. The ρ mass can be used to determine the lattice spacing, i.e. to set the scale in the renormalization procedure, obtaining a⁻¹ = (1659 ± 134) MeV at β = 6.

[Fig. 3. Numerical results for the particles with the quantum numbers of the π (△), ρ (×), δ(0⁺⁺) (□), A₁(1⁺) (+), nucleon (○) and N(½⁻), and interpolations.]
Table 1

Particle (J^P)    Mass calculated (MeV)    Experimental (MeV)
ρ(1⁻)             input a)                 769
π(0⁻)             input b)                 137
A₁(1⁺)            1497 ± 162               1275
δ(0⁺)             1063 ± 79                975
p,n(½⁺)           1073 ± 91                939
N(½⁻)             1300 ± 130               1405
K(0⁻)             input c)                 494
K*(1⁻)            890 ± 72 d)              892
φ(1⁻)             1005 ± 82 d)             1020

a) Used to fix the lattice spacing in the renormalization procedure: a⁻¹ (at β = 6) = (1659 ± 134) MeV.
b) Used to fix the average mass of the u and d light quarks, m_q = 2.0 MeV.
c) Used to fix the strange quark mass, m_s = 49.7 MeV.
d) The statistical error on these figures is difficult to estimate because, once the ρ mass has been assumed as input, the masses of the other vector meson states depend on the relative slopes of the curves for the 1⁻ and 0⁻ masses, rather than on the intercepts. The statistical errors quoted, deduced from the errors in the intercepts, should be considered in excess.
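The Goldstone behavior read off from fig. 2 and used for input b) amounts to a straight-line fit of the squared pion mass against the quark mass, extrapolated to m_q → 0. A sketch with synthetic points (the slope and data values are purely illustrative):

```python
def fit_line(xs, ys):
    """Least-squares fit y = a + b x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

m_q = [0.05, 0.10, 0.15, 0.20]            # quark masses (lattice units)
m_pi_sq = [2.0 * m + 0.001 for m in m_q]  # synthetic m_pi^2 values

intercept, slope = fit_line(m_q, m_pi_sq)
# A (near-)vanishing intercept as m_q -> 0 signals the Goldstone pion;
# inverting the fit at the physical pion mass fixes the quark mass.
```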
From the calculations of the potential between heavy quarks [6,7] one would estimate a⁻¹ in the range 1750–1880 MeV, in reasonable agreement.

The calculations are of course affected by errors of various natures. First of all there are the statistical errors inherent in the method. These can be evaluated by standard procedures and can be reduced by enlarging the sample. There are then systematic errors due to the finite lattice spacing, the limited volume of the lattice and the limited extent of propagation in time. All of these might be brought under control by repeating the calculations with different sizes and coupling parameters. Finally, an important possible source of error can be the neglect of the q–q̄ virtual creation and annihilation processes. The effect of such processes is more difficult to estimate, because it requires going beyond the quenched approximation, which is very demanding on computational resources. Calculations with dynamical quarks have been done, with lattices of smaller size, especially for the study of thermodynamical effects [17]. Recently, a calculation of masses on a 12³ × 24 lattice has also been performed [18]. One interesting aspect of this calculation is that it combined the resources of a specially built machine and a supercomputer. The Cosmic Cube processor developed at Caltech was used to upgrade configurations on a 12³ × 8 lattice, incorporating dynamical quark effects by an approximate method known as the method of pseudofermions [8]. The configurations so obtained were then fed into the CYBER 205 at Fort Collins, Colorado, where the quark propagators were computed on 12³ × 24 lattices obtained by iterating three times the original configurations. Unfortunately, the rate at which the homogeneous machine, in its 128-node configuration, could proceed to the upgrade of the lattice was still too slow to produce a rich sample of configurations, so the statistical errors are somewhat larger than in the quenched calculation presented above. Nevertheless, the whole project represents an important illustration of the fact that dedicated processors and commercial supercomputers must not necessarily be seen as antagonists, but that their characteristics and resources can complement each other.

3. The outlook

If we consider future developments, the outlook for calculations such as the one outlined above and, in general, in quantum field theories is extremely promising. We are already at a stage where, albeit within the quenched approximation, several important physical quantities can be calculated from first principles with a precision of the order of 10%. With the much larger memories, with reasonably fast access times, foreseen for the near future, and the dramatic increases in the computational rates to be achieved through faster clock periods and especially concurrent processors, it will be possible to reduce substantially those margins of error and to go beyond the schemes of approximation which still limit the most advanced calculations presently performed.
Acknowledgements

We would like to thank Lloyd M. Thorndyke, Bobby Robertson, L. Kent Steiner and John E. Zelenka of ETA Systems, Inc. for access to the 2 Mword 2 vector pipeline CDC CYBER 205 at Colorado State University at Fort Collins, Colorado, Robert M. Price of Control Data Corporation for his continued interest, support and encouragement, the Control Data Corporation PACER Fellowship (Grant No. 85PCR06) for financial support and the Natural Sciences and Engineering Research Council of Canada (Grant No. NSERC A8420) for further financial support. One of the authors (K.J.M.M.) would like to thank ETA Systems, Inc. for a grant which made his visit to Monte Carlo possible.

References

[1] K.G. Wilson, Phys. Rev. D10 (1974) 2445; Phys. Rep. 23 (1975) 331.
[2] M. Creutz, L. Jacobs and C. Rebbi, Phys. Rep. 95 (1983) 201.
[3] Lattice Gauge Theories and Monte Carlo Simulations, ed. C. Rebbi (World Scientific, Singapore, 1983).
[4] D. Barkai, K.J.M. Moriarty and C. Rebbi, Comput. Phys. Commun. 36 (1985) 241.
[5] M. Creutz, Phys. Rev. D21 (1980) 2308; Phys. Rev. Lett. 45 (1980) 313.
[6] E. Pietarinen, Nucl. Phys. B190 (1981) 349;
M. Creutz and K.J.M. Moriarty, Phys. Rev. D26 (1982) 2166;
R.W.B. Ardill, M. Creutz and K.J.M. Moriarty, Phys. Rev. D27 (1983) 1956;
M. Fukugita, T. Kaneko and A. Ukawa, Phys. Rev. D28 (1983);
F. Gutbrod, P. Hasenfratz, Z. Kunszt and I. Montvay, Phys. Lett. 128B (1983) 415;
G. Parisi, R. Petronzio and F. Rapuano, Phys. Lett. 128B (1983) 418;
D. Barkai, M. Creutz and K.J.M. Moriarty, Phys. Rev. D29;
N.A. Campbell, C. Michael and P. Rakow, Phys. Lett. 139B (1984) 288;
S. Otto and J.D. Stack, Phys. Rev. Lett. 52 (1984) 2328;
A. Hasenfratz, P. Hasenfratz, U. Heller and F. Karsch, Z. Phys. C25 (1984) 191;
M. Flensburg and C. Peterson, Phys. Lett. 153B (1985) 412;
Ph. de Forcrand, G. Schierholz, H. Schneider and M. Teper, Phys. Lett. 160B (1985) 137;
Ph. de Forcrand, in: Proc. Los Alamos Conf. on Frontiers of Quantum Monte Carlo, Los Alamos, New Mexico (September 1985).
[7] D. Barkai, K.J.M. Moriarty and C. Rebbi, Phys. Rev. D30 (1984) 1293, 2201.
[8] F. Fucito, E. Marinari, G. Parisi and C. Rebbi, Nucl. Phys. B180 [FS2] (1981) 369.
[9] D.H. Weingarten and D.N. Petcher, Phys. Lett. 99B (1981) 333;
H.W. Hamber, Phys. Rev. D24 (1981) 951.
[10] D.J. Scalapino and R.L. Sugar, Phys. Rev. Lett. 46 (1981) 519.
[11] J. Polonyi and H.W. Wyld, Phys. Rev. Lett. 51 (1983) 2257.
[12] D. Barkai, K.J.M. Moriarty and C. Rebbi, Phys. Lett. 156B (1985) 385.
[13] H. Hamber and G. Parisi, Phys. Rev. Lett. 47 (1981) 1792;
D. Weingarten, Phys. Lett. 109B (1982) 57;
E. Marinari, G. Parisi and C. Rebbi, Phys. Rev. Lett. 47 (1981) 1795;
A. Hasenfratz, Z. Kunszt, P. Hasenfratz and C.B. Lang, Phys. Lett. 110B (1982) 289.
[14] H. Lipps, G. Martinelli, R. Petronzio and F. Rapuano, Phys. Lett. 126B (1983) 250;
Ph. de Forcrand and C. Roiesnel, Phys. Lett. 143B (1984) 453;
K.C. Bowler, D.L. Chalmers, A. Kenway, K.D. Kenway, G.S. Pawley and D.J. Wallace, Nucl. Phys. B240 [FS12] (1984) 213;
J.P. Gilchrist, G. Schierholz, H. Schneider and M. Teper, Nucl. Phys. B248 (1984) 29;
A. Billoire, E. Marinari and R. Petronzio, Nucl. Phys. B251 [FS13] (1985) 141;
A. König, K.H. Mütter and K. Schilling, Nucl. Phys. B259 (1985) 33;
O. Martin, K.J.M. Moriarty and S. Samuel, Nucl. Phys. B261 (1985) 79.
[15] D. Barkai, K.J.M. Moriarty and C. Rebbi, Comput. Phys. Commun. 32 (1985) 1.
[16] D. Barkai, K.J.M. Moriarty and C. Rebbi, Comput. Phys. Commun. 36 (1985) 1.
[17] F. Fucito, C. Rebbi and S. Solomon, Nucl. Phys. B248 (1984) 615;
J. Polonyi, H.W. Wyld, J.B. Kogut, J. Shigemitsu and D.K. Sinclair, Phys. Rev. Lett. 53 (1984) 644;
R.V. Gavai and F. Karsch, Nucl. Phys. B261 (1985) 273;
F. Fucito and S. Solomon, Phys. Rev. Lett. 55 (1985) 2641.
[18] F. Fucito, K.J.M. Moriarty, C. Rebbi and S. Solomon, Phys. Lett. 172B (1986) in press.