Applications of Fluorescence Correlation Spectroscopy: Polydispersity Measurements

Journal of Colloid and Interface Science 213, 479–487 (1999); Article ID jcis.1999.6128

Konstantin Starchev,*,¹ Jacques Buffle,* and Elías Pérez†

*Department of Inorganic, Analytical and Applied Chemistry, University of Geneva, Sciences II, 30 Quai Ernest Ansermet, 1211 Geneva 4, Switzerland; and †Universität Konstanz, Fakultät für Physik, Fach M 696, D-78457 Konstanz, Germany

Received September 30, 1998; accepted January 28, 1999

The method of histograms is applied to the determination of the polydispersity of particles and molecules in solution from fluorescence correlation spectroscopy (FCS) data. This is an ill-posed problem, which can be overcome by using a common strategy of imposed regularization and constraint conditions. The method developed for evaluating the polydispersity is tested on both computer-generated correlation curves and real FCS data. The results obtained show that FCS measurements can be successfully used for the determination of the polydispersity of suspensions, with an efficiency comparable to that of photon correlation spectroscopy (PCS). The advantage of FCS, however, is its better sensitivity to small particles (size <50 nm) and molecules in dilute solutions, as well as its better selectivity. The usefulness of FCS for environmental chemistry is discussed with regard to the obtained results. © 1999 Academic Press

Key Words: fluorescence correlation spectroscopy; FCS; polydispersity; suspensions; method of histograms; latex particles; Rhodamine 6G; inverse problem; Pareto distribution; Gaussian distribution.

1. INTRODUCTION

Fluorescence correlation spectroscopy (FCS) (1, 2) has been enhanced recently, due mainly to the application of confocal microscopy technology and single-photon detection devices in equipment design. As a result, the sensitivity has been substantially increased and it has become possible to perform single-molecule detection measurements (3, 4). Until now the FCS method has been applied mainly in biochemical and biophysical investigations such as measurement of lateral transport on cell surfaces (5), nucleic acid hybridization (6, 7), and receptor/ligand interaction (8, 9). The method, however, can also be very helpful in environmental studies, both for analytical determination and interaction studies of small particles and trace components in natural water and for investigation of the particle dynamics of model systems. Some additional developments, however, are necessary for environmental applications. These developments are related mainly to the large size, shape, and structure heterogeneity of natural colloids and macromolecules. In a previous publication, the effect of the finite size of fluorescent particles was studied (10). The present paper examines how size polydispersity can be extracted from FCS measurements. Both computer-generated and real FCS data from latex particles are used to demonstrate the application of this method of investigation. The method of histograms (11), previously developed for the determination of polydispersity from photon correlation spectroscopy (PCS) data, is applied. Basically, the diffusion time scale of FCS is divided into a number of time intervals (bars), and the corresponding particle fraction is represented by the bar height. The bar heights are varied to minimize the differences between the calculated and experimental correlation functions. Such an approach, however, is known as an ill-posed problem (12–14); i.e., several different distributions, indistinguishable from each other within the experimental error, may be found from the same data. In addition, these calculations are unstable; i.e., small changes in the correlation curves, for example, in replicate measurements, may give rise to considerable changes in the obtained polydispersity distribution. Such problems are common not only in polydispersity measurements but also, for example, in image reconstruction (15), calculation of the affinity distribution of heterogeneous sorbents (16), and determination of the spatial distribution of fluorescent groups in latex particles (17). A good strategy for solving this problem is to impose additional constraints and regularization conditions, which respectively eliminate the nonphysical solutions and the solutions that are not acceptable on the basis of empirical knowledge of the system. For example, for natural samples a constraint condition is that the concentration of the particles cannot be negative, and a regularization condition is that the size distribution must be smooth, since usually the size of the particles varies continuously. This paper discusses this strategy and its results.


¹ To whom correspondence should be addressed.

Fluorescence correlation spectroscopy is a method whereby the spontaneous number fluctuations of colloidal-size fluorescent particles and/or molecules are measured through the fluctuations of the emitted fluorescent light. In the absence of chemical transformations, fluctuations of the emitted light will be observed when the fluorescent molecules diffuse in a nonuniformly illuminated sample. This illumination is usually accomplished by means of a sharply focused laser beam with an assumed 3D Gaussian intensity profile (18),

I(r) = I₀ exp[−2(x² + y²)/ω₁²] exp[−2z²/ω₂²],   [1]

where ω₁ and ω₂ are the radii of the beam in the x,y plane and the z direction, respectively (ω₁ ≪ ω₂), and I₀ is the maximum intensity. Under these conditions, and with the particle radius taken to be smaller than ω₁ (in practice ω₁ ≈ 250 nm), the intensity correlation function of a polydisperse suspension is given by

g(t) = ∫₀^∞ C(τ) (1 + t/τ)⁻¹ (1 + t/(p²τ))^(−1/2) dτ,   [2]

where t is the delay time, τ is the particle characteristic diffusion time, and p is a known constant named the structure parameter (p = ω₂/ω₁). C(τ) is related to the mean fluorescent particle intensity i(τ) and the number of particles n(τ) in the open illuminated volume defined by the laser beam, named the sample volume (SV):

C(τ) = i²(τ) n(τ) / (∫₀^∞ i(τ) n(τ) dτ)².   [3]

The rotational diffusion (18) of the particles is neglected in Eq. [2]. This approximation is valid for small particles (R < 250 nm), for which the rotational characteristic time is much shorter than τ and thus does not influence the measurements significantly. The cross-correlation terms are also neglected in Eq. [2], as it is assumed that there is no interaction between the different particles. Our purpose is to determine the distribution function C(τ) from an experimentally measured correlation function g(t) by taking into account the normalizing condition

∫₀^∞ C(τ) dτ = m,   [4]

where m = g(0). In mathematics this is known as an inversion problem (19). Note that the function C(τ) differs from the actual particle number distribution (see Eq. [3]). Since C(τ) depends on the square of the intensity of the particles, the obtained distribution will be biased toward the most fluorescent particles. The number distribution can be obtained from C(τ) only when i(τ) is known. The function i(τ) depends mainly on the number of fluorophores per particle or molecule. For example, for a DNA molecule labeled only at the end, i(τ) is a constant. In this simple case C(τ) is proportional to the particle number distribution. For a uniformly labeled polymer molecule, i(τ) ∝ τ^(1/b), where the exponent b is expected to vary between 1.5 and 2 depending on the nature of the solvent; for a particle labeled by fluorescent molecules uniformly distributed on its surface, it is expected that i(τ) ∝ τ². The function i(τ) must be studied in each particular experimental system; this falls outside the scope of the present work. The normalizing condition, Eq. [4], states that the total number of particles in solution is constant. In the simple case where i(τ) is constant, m = 1/N, where N is the total number of fluorescent particles in the SV (N = ∫₀^∞ n(τ) dτ). This normalization is different from that commonly used (11, 16, 17), where the distribution is normalized to unity. However, using m is more appropriate for the FCS method, since m can be simply determined from the intercept of the correlation curve, i.e., m = g(0). Since the maximum sensitivity of FCS occurs when there are few particles in the SV, m is usually of the order of unity, which is convenient for calculation. According to the method of histograms, the diffusion time scale is divided into M intervals with width Δτᵢ. Therefore Eq. [2] can be written as

g(t) = Σᵢ₌₁^M cᵢ (1 + t/τᵢ)⁻¹ (1 + t/(p²τᵢ))^(−1/2),   [5]

with

Σᵢ₌₁^M cᵢ = m   [6]

and cᵢ = C(τᵢ)Δτᵢ, where Δτᵢ is the width of the ith bar. The number M of bars should be optimized, since the distribution is poorly described for small M, but the fit becomes unstable for large M. We find experimentally that M = 33 gives reasonably good results. To obtain the cᵢ we minimize the function R(c) with respect to each cᵢ value simultaneously,

R(c) = [1/(t₂ − t₁)] ∫_{t₁}^{t₂} w(t) [gₑ(t) − g(t)]² dt + λₘ (1 − Σᵢ cᵢ/m)² + λᵣ Σᵢ (cᵢ″)²,   [7]

where gₑ(t) is the experimentally determined correlation function, g(t) is the correlation function calculated by Eq. [5] at each step of the minimization, w(t) is a weighting function related to the experimental error of each data point of the correlation function, m is the total "mass," equal to gₑ(0), cᵢ″ is the second derivative of cᵢ with respect to τ, and λₘ and λᵣ are Lagrange multipliers (λᵣ is also called the regularization parameter). In addition, nonnegativity constraints are imposed:

cᵢ ≥ 0.   [8]
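As a concrete illustration (not code from the paper), the discretized model of Eq. [5] is a one-liner; the diffusion-time grid, the bar weights, and the structure parameter p = 5 below are illustrative values only.

```python
def g_model(t, c, tau, p=5.0):
    """Correlation function of Eq. [5]: histogram weights c_i on a grid
    of diffusion times tau_i, with structure parameter p = w2/w1."""
    return sum(ci / ((1.0 + t / ti) * (1.0 + t / (p * p * ti)) ** 0.5)
               for ci, ti in zip(c, tau))

# At t = 0 both factors equal 1, so g(0) recovers the intercept
# m = sum of the c_i (Eqs. [4] and [6]).
tau = [0.1, 1.0, 10.0]   # illustrative diffusion-time grid
c = [0.25, 0.5, 0.25]    # illustrative bar weights, m = 1
print(g_model(0.0, c, tau))   # 1.0
```

This also makes the "polyhyperbolic" decay discussed later explicit: each bar contributes a slowly decaying hyperbolic term rather than an exponential.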


The first term of Eq. [7] is the χ² value, which compares the experimental correlation function with the function calculated from the recovered distribution (19). The second term represents "mass" conservation; i.e., we impose that during the calculation process the distribution must satisfy Eq. [6]. The third term represents the regularization condition, which biases the solution toward a function with some expected properties. In our case it is expected that the particle size distribution will be smooth; i.e., it does not include any discontinuities. Several other regularization functions could be used (16). In fact, by minimizing Eq. [7] we minimize the χ² value of our problem together with two supplementary conditions. These conditions are weighted by the two Lagrange multipliers, λₘ and λᵣ. The role of the second and third terms in Eq. [7], as well as that of the constraints of Eq. [8], is to eliminate the nonphysical solutions and make the problem well posed. However, the price to pay is that some prior knowledge of the distribution is required, which in some cases can distort the real distribution curve. For example, by giving much weight to the last term of Eq. [7], one can obtain a very smooth and good-looking distribution which, however, may bear little relation to the measured correlation curve, since the weight of gₑ(t) is then made negligible in Eq. [7]. Usually in the literature (13, 16, 19) λᵣ and λₘ are free parameters whose magnitudes are set on the basis of the authors' experience. It is clear from the above considerations that obtaining consistent results on polydispersity is nontrivial, and comparison with different techniques is important.

3. METHODS AND EXPERIMENTS

(a) Minimization of Eq. [7]

To minimize Eq. [7] we set the first derivative with respect to each cⱼ equal to zero:

bⱼ ≡ ∂R(c)/∂cⱼ = 0.   [9]

FIG. 1. (a) FCS correlation curves corresponding to Gaussian distributions. Curves with polydispersity ratios 0.01, 0.02, and 0.03 are indistinguishable from the curve of the monodisperse case and are presented with a solid line. Curves correspond to polydispersity ratios of 0.06 (dotted line), 0.1 (dotted and dashed line), and 0.2 (dashed line). (b) PCS correlation curves corresponding to the same Gaussian distributions as in (a). The monodisperse case is shown by a thick solid line. Polydispersity ratios are indicated on the corresponding curves. (c) Sum of the mean squared differences between a monodisperse correlation function and polydisperse functions with different polydispersity ratios vs the polydispersity ratio, for PCS (squares) and FCS (circles). The noise limit for PCS is given by a horizontal dashed line and that for FCS by a horizontal solid line.
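The functional being minimized, Eq. [7], can be written down directly. The sketch below is a minimal, self-contained illustration, not the authors' implementation: the integral is replaced by a discrete time grid, cᵢ″ by simple second differences, and w(t) = 1 as adopted in the text. The default λᵣ = 5 × 10⁻⁵ anticipates the best value reported under Results and Discussion, while λₘ = 1.0 is an arbitrary placeholder (the paper computes λₘ automatically at each step).

```python
def r_objective(c, t_grid, g_exp, tau, m, p=5.0, lam_m=1.0, lam_r=5e-5, w=None):
    """Discretized R(c) of Eq. [7]: a chi-square term, a "mass"
    conservation term (Eq. [6]), and a smoothness regularization term."""
    if w is None:
        w = [1.0] * len(t_grid)          # w(t) = 1, as adopted in the text
    def g_model(t):                      # the model of Eq. [5]
        return sum(ci / ((1.0 + t / ti) * (1.0 + t / (p * p * ti)) ** 0.5)
                   for ci, ti in zip(c, tau))
    chi2 = sum(wi * (ge - g_model(t)) ** 2
               for wi, ge, t in zip(w, g_exp, t_grid)) / len(t_grid)
    mass = lam_m * (1.0 - sum(c) / m) ** 2
    smooth = lam_r * sum((c[i - 1] - 2.0 * c[i] + c[i + 1]) ** 2
                         for i in range(1, len(c) - 1))
    return chi2 + mass + smooth

# A flat histogram that reproduces the "data" exactly, sums to m, and has
# zero second differences makes every term of R(c) vanish.
tau = [0.5, 1.0, 2.0]                    # illustrative grid
c0 = [0.3, 0.3, 0.3]
t_grid = [0.0, 0.5, 1.0]
g_exp = [sum(ci / ((1.0 + t / ti) * (1.0 + t / (25.0 * ti)) ** 0.5)
             for ci, ti in zip(c0, tau)) for t in t_grid]
print(r_objective(c0, t_grid, g_exp, tau, m=sum(c0)))   # 0.0
```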



FIG. 2. (a) Normal Gaussian distribution (polydispersity ratio 0.1) fitted with different λᵣ values and a sharp initial distribution (polydispersity ratio 0.01). (b) Normal Gaussian distribution with polydispersity ratio 0.1 (solid line) fitted with sharp (dotted line) and broad (dashed line) initial distributions; λᵣ = 0. The polydispersity ratios of the initial distributions are indicated on the figure. (c) Correlation functions corresponding to the distributions shown in (a) and (b). (d) Polydispersity ratio obtained from the fit vs polydispersity ratio of the generated normal Gaussian distributions. The solid line with slope 1 represents ideally correlated data.

Different methods for solving Eq. [9] could be explored. The Newton–Raphson method (19) is an iterative procedure in which the solution is sought by starting from an initial approximation. The method converges well when the initial approximation is good (the method is locally convergent). The Newton–Raphson method allows us to write the following linear set of equations for the elements of the increment vector δcₖ:

Σₖ₌₁^M aⱼₖ δcₖ = bⱼ,   [10]

where the aⱼₖ are the components of a matrix containing the second derivatives of R(c),

aⱼₖ ≡ ∂²R(c)/∂cⱼ∂cₖ.   [11]

The increments of the distribution components are calculated at each step from Eq. [10], and the nth distribution vector is then obtained:

(cₖ)ₙ = (cₖ)ₙ₋₁ + (δcₖ)ₙ₋₁.   [12]
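A deliberately simplified, self-contained stand-in for this scheme can make the iteration concrete. The sketch below is not the authors' Newton–Raphson code: it replaces the Newton step of Eqs. [10]–[12] with projected gradient descent using numerical derivatives, while keeping the nonnegativity clipping of Eq. [8] and a relative-change stopping rule; the step sizes, tolerance, and toy objective are illustrative.

```python
def fit_histogram(r_func, c0, step=1e-2, tol=1e-6, max_iter=500):
    """Minimize r_func(c) subject to c >= 0 by projected gradient descent.

    A simplified stand-in for the Newton-Raphson scheme of Eqs. [9]-[12]:
    negative components are clipped to zero (Eq. [8]), and iteration stops
    when the relative change of R(c) drops below `tol`.
    """
    c = list(c0)
    r_old = r_func(c)
    h = 1e-6                                  # finite-difference step
    for _ in range(max_iter):
        grad = []
        for j in range(len(c)):               # forward-difference gradient
            cp = list(c)
            cp[j] += h
            grad.append((r_func(cp) - r_old) / h)
        trial = [max(0.0, cj - step * gj) for cj, gj in zip(c, grad)]
        r_new = r_func(trial)
        if r_new >= r_old:                    # crude backtracking
            step *= 0.5
            continue
        rel = (r_old - r_new) / max(r_old, 1e-30)
        c, r_old = trial, r_new
        if rel < tol:
            break
    return c, r_old

# Toy usage: minimize (c0 - 1)^2 + (c1 + 0.5)^2 subject to c >= 0;
# the constrained minimum is c = [1, 0] with R = 0.25.
c_fit, r_fit = fit_histogram(lambda c: (c[0] - 1.0) ** 2 + (c[1] + 0.5) ** 2,
                             [0.0, 1.0])
print(c_fit, r_fit)
```

The clipping step mirrors the treatment described next for negative cₖ in the Newton iteration; a production implementation would of course solve the linear system of Eq. [10] instead.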




If the new cₖ is found to be negative, it is set to zero according to the constraints specified in Eq. [8]. The process of iteration is stopped when the relative difference between two consecutive R(c) values is less than a preset value. A limit of 10⁻⁶ gives good results: for larger values the best approximation is not reached, while for smaller values the process does not stop. In addition, if a very small preset value is used, the correlation function can be "overfitted." The parameter λₘ was automatically calculated at each step, as described in more detail in Ref. (17). The weight function w(t) depends on the uncertainties (δg(t))² (the error) in the data points of the measured correlation function:

w(t) = 1/(δg(t))².   [13]

This error depends in a nontrivial way on t (20–22). For this work we have set w(t) = 1, which is reasonable for the computer-generated data because the corresponding signal-to-noise ratios were constant. For the experimental data we have found that the error does not vary much with the signal, so w(t) = 1 is a reasonable first approximation. Nevertheless, the function (δg(t))² should be the subject of further investigation in order to optimize the fitting of the data. Another parameter required for the fitting is m. In the case of the computer-generated data the exact value of m is known. For the experimental measurements m was evaluated from a fit of g(t) assuming the presence of either one or two components, using the equipment software developed by Evotec. The possible error in the calculated m is evaluated to be 10⁻² from replicate measurements of 100-nm latex particles. The uncertainty in the m value is taken into account in the calculation of λₘ (17). In contrast, λᵣ was a free parameter. The calculation was performed with several fixed values of λᵣ, as described under Results and Discussion. The experimentally measured correlation curve also contains an exponential decay component with a characteristic time of several microseconds, which is usually attributed to triplet interactions (23). This component is fitted by the Evotec software and is introduced in Eq. [5] assuming that the triplet interaction is independent of the size of the particles. The reader is referred to Ref. (17) for more details on the application of the inversion method.

(b) Computer-Generated Data

Correlation curves were generated on the basis of Eq. [5] for Gaussian distributions and the Pareto law (24). Gaussian noise with a signal-to-noise ratio equal to 0.004 was added to each curve, which is reasonable from an experimental point of view. As discussed in the previous section and in several publications (20–22), the shape and amplitude of the real noise are more complicated. This can influence our results somewhat, but the main effect occurs in the short-time range (shot noise), which does not play a very important role in the overall fit.

FIG. 3. (a) Fitting of a bimodal distribution with peak positions which differ by a factor of 2. The initial approximation was a broad single-mode distribution with a polydispersity ratio of 0.3. λᵣ values are plotted on the graph. (b) Fitting of a bimodal distribution with peak positions which differ by a factor of 2. The initial approximation was a two-mode distribution with sharp peaks at the right positions. Different λᵣ values are shown on the graph. (c) Fitting of a bimodal distribution with peak positions which differ by a factor of 10. Dotted line, initial approximation a bimodal distribution with sharp peaks at the right positions; dashed line, initial approximation a broad peak in the middle of the scale.

FIG. 4. (a) Correlation function generated from the Pareto law size distribution (given by Eq. [15]) and corresponding fits. (b) Fit of the Pareto distribution. Different λᵣ values are shown on the graph. A log–log plot of the data is shown in the inset.

(c) Experimental

The FCS studies were carried out with a ConfoCor apparatus manufactured by Carl Zeiss and based on confocal microscopy technology. The measurements were controlled by software developed by Evotec. Photon correlation spectroscopy and transmission electron microscopy (TEM) were used to obtain independent determinations of particle distributions. The PCS apparatus was a Malvern PCS4700. TEM observation was done on a Philips EM 410 microscope on Pt metal replicas of the latex particles. The PCS and FCS apparatus are described in more detail in Ref. (10). The Rhodamine 6G (R6G) was purchased from Sigma. Carboxylate-functionalized regular latex microspheres were purchased from Bangs Laboratories. The latex beads were labeled by adsorption of R6G on their surface as described in Ref. (10).

4. RESULTS AND DISCUSSION

(a) Comparison of PCS and FCS

It is interesting to compare the potential capability of FCS for resolving polydispersity problems with that of PCS, since the methods are similar. From a mathematical point of view, the main difference is the polyexponential decay of the PCS correlation function (11) instead of the polyhyperbolic decay (see Eq. [5]) in FCS. Since an exponential dependence on time is stronger than a hyperbolic one, it is expected that the sensitivity of PCS to polydispersity will be stronger than that of FCS. To some extent this is actually the case, as illustrated in Figs. 1a–1c. Normal Gaussian distribution curves with different polydispersity ratios σ²/τ̄² were generated according to

c(τ) = [1/(σ√(2π))] exp[−(τ − τ̄)²/(2σ²)],   [14]

where σ is the standard deviation (second moment) of the distribution and τ̄ is the mean value. The corresponding FCS and PCS correlation curves were generated without adding noise and are shown in Figs. 1a and 1b, respectively. For a monodisperse system, the PCS correlation function is linear on the semilog scale used in Fig. 1b. Small deviations from this linearity indicate some degree of polydispersity. This property is the basis of the method of cumulants (25) for evaluating polydispersity from PCS data. The application of this method to FCS, however, is not evident, since the FCS correlation function cannot be converted to a form linear in the parameters even in the monodisperse case. Figures 1a and 1b suggest that the correlation curves are more sensitive to the polydispersity ratio in PCS than in FCS. This can be compared quantitatively in Fig. 1c, which shows the sum of the mean-squared differences between the correlation functions of polydisperse and monodisperse systems as a function of the polydispersity ratio. This sum is indeed larger for the PCS than for the FCS data. However, the two curves are not very different. More important, both dependences are linear, which means that there is no functional difference between the two methods in the solution of polydispersity problems. The sum of the squared differences between a noisy correlation



function and the smooth "best" fit is the sensitivity limit of each method. We have measured this sum experimentally with 100-nm latex particles. It is shown by horizontal solid and dashed lines for FCS and PCS, respectively, in Fig. 1c. For the points below these lines the polydispersity is indistinguishable from statistical errors. In other words, FCS does not allow one to determine size distributions with polydispersity ratio <0.03. This is also seen from Fig. 1a, since the correlation curves are practically the same for ratios 0.03, 0.02, and 0.01 and for a monodisperse system. For PCS this limit is lower (≈0.01), but it must be emphasized that the signal-to-noise ratio in FCS can be enhanced by using longer accumulation times. Therefore, both methods can be considered to have comparable capabilities. FCS, however, could be more useful for measurements of small particles and molecules (size <50 nm), which scatter light weakly; in this case the signal-to-noise ratio in FCS is much better than that in PCS. In the following measurements, however, we have used 100-nm latex particles to enable easier comparison of the FCS results with those of PCS as well as TEM.

(b) Evaluation of the Polydispersity from Computer-Generated Data

The correlation functions corresponding to Gaussian distributions were generated with noise simulated as described in Section 3b. The fit of these curves was performed as described in Sections 2 and 3. The parameter λᵣ was varied from 0 to 10⁻⁴ in order to obtain the best fitting of the polydispersity distribution curves. For zero or small values of λᵣ the fit was unstable and the distribution curves obtained showed peculiarities (Fig. 2a). In addition, the distributions were strongly dependent on the initial approximate distribution (Fig. 2b). However, as λᵣ increases, the stability of the fit increases. The best results were obtained with λᵣ = 5 × 10⁻⁵. The corresponding correlation curves are plotted in Fig. 2c. Note that all fits are close to each other and to the real curve. The results show that the deviation of the fit from the experimental data (the χ² criterion) is not very sensitive to λᵣ. For very large λᵣ (≥5 × 10⁻⁴) the χ² value increases. This is in agreement with the theory, since the regularization term is independent of the squared differences of the g(t) function, as discussed in Section 2. It must be noted that a minimum value of χ² does not necessarily correspond to the optimum evaluation of the real distribution, since the correlation curve could be overfitted; i.e., the noise could also be fitted. Clearly the best distribution is not easily discriminated on the basis of the χ² criterion only. A possible solution to this problem is to fit the data with several λᵣ values and take the distribution (i) which is most likely to be the expected one and (ii) for which the χ² criterion is still minimal. Other useful criteria for optimizing λᵣ are discussed in Ref. (16). The real polydispersity ratio is plotted against that obtained by fitting in Fig. 2d. The correlation of the data is excellent, but only for polydispersity ratios >0.03. This is in agreement with the conclusions of Fig. 1c, which shows that for distributions with ratio <0.03 the differences in correlation curves cannot be discriminated from the noise.

FIG. 5. (a) Polydispersity distributions of latex particles obtained with TEM (dotted line), PCS (solid line), and FCS (dashed line). (b) FCS correlation functions of 100-nm latex particles (solid line) and the corresponding fit (dashed line).

Bimodal distributions are relevant in FCS studies, because working with solutions containing fluorescent species of two different sizes is common. This is the case, for example, when particles are labeled by adsorption of a fluorescent dye in equilibrium. It is generally accepted that two monodisperse components can be distinguished if their diffusion times differ by at least a factor of 2. It is therefore of interest to study this case, but for two polydisperse components. Figures 3a and 3b show such a bimodal distribution. Both modes are Gaussian with polydispersity ratio equal to 0.06 and mean values that differ by a factor of 2. Different fits from different initial distributions and λᵣ values are also given. All of them are significantly different from the real distribution, even though all correlation functions are fitted relatively well and the χ² values are rather low (data not shown). In addition, the shape of the final distribution was strongly dependent on the initial shape. For example, by starting from a single-mode distribution, a bimodal distribution cannot be obtained (Fig. 3a). Starting from a bimodal distribution, a bimodal distribution is obtained, but with peak heights and widths different from those of the real one (Fig. 3b). In such a case, it is not possible to find the shape of the real distribution (one or two modes) on the basis of FCS data alone. The situation is different, however, when the modes are far apart, as shown in Fig. 3c, where two Gaussian distributions were generated with mean values differing by a factor of 10 and the same polydispersity ratio (0.06). The fit gives reasonably good results for the final distribution (Fig. 3c) starting from either a distribution with two sharp peaks at the expected positions (dotted line) or one broad peak in the middle of the scale (dashed line). However, when the initial distribution is very far from the real one, for example, one very strong peak in the middle of the scale, the fit is incorrect (data not shown). This is due to the local convergence of the Newton–Raphson method. The power distribution, also known as the Pareto law, is important in the environmental sciences (24, 26). Power laws in nature are often related to fractal geometries and to situations where the size of the particles varies greatly or their shape is irregular. In particular, a power law distribution is obtained from the fractal aggregation of particles known as reaction-limited cluster aggregation (27), which is common in natural water systems (28). It is now recognized that in most cases the size distribution of particles in natural waters follows a power law with an exponent of about 4.0 (24, 26). Such a strong dependence is difficult to study with FCS, since the maximum sensitivity occurs when the number of particles in the SV is between 1 and 10.
This gives only a factor of ⁴√10 ≈ 1.8 variation of the size scale (for an exponent of 4), which is not sufficient for evaluation of the power function. Nevertheless, by the use of sophisticated equipment (3) with low background light intensities, concentrations can be measured over several orders of magnitude; i.e., even low concentrations of large particles can be measured. To demonstrate the capability of the FCS method in this case, a Pareto distribution was generated according to

dc = A τ⁻⁴ dτ,   [15]


where A is a constant set to unity. The correlation function corresponding to this distribution was generated with a signal-to-noise ratio of 0.02. Although the fit is not perfect (Fig. 4b), the general shape of the Pareto law is recovered (inset to Fig. 4b). In this case, the initial distribution was a broad bell-shaped curve and λᵣ was varied between 0 and 0.01. The best results were obtained for λᵣ = 0.001. Data generated with a signal-to-noise ratio of 0.004 give similar results. Considering the very large asymmetry of the Pareto law distribution and the difficulty of fitting such a distribution, the results obtained are rather good.
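Generating histogram weights from Eq. [15] is straightforward, since the power law integrates in closed form over each bar. The sketch below uses a logarithmic τ grid over one decade as an illustrative choice (the grid and τ range are not from the paper; A = 1 as in the text).

```python
def pareto_weights(tau_edges, a=1.0):
    """Histogram weights c_i for dc = A * tau^-4 dtau (Eq. [15]),
    integrated exactly over each bar: c_i = (A/3)(tau_lo^-3 - tau_hi^-3)."""
    return [a / 3.0 * (lo ** -3 - hi ** -3)
            for lo, hi in zip(tau_edges, tau_edges[1:])]

# Logarithmic grid over one decade of diffusion times (illustrative).
edges = [0.1 * 10 ** (k / 10.0) for k in range(11)]
c = pareto_weights(edges)

# The weights fall very steeply with tau, as expected for an exponent of 4,
# which is what makes this distribution hard to fit.
print(c[0] > c[-1])   # True
```

Feeding these weights into the model of Eq. [5] would reproduce a correlation curve of the kind shown in Fig. 4a; note that the weights are not normalized to m here.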

(c) Experimental Measurements

The method was also tested by means of experimental FCS measurements with latex particles (Figs. 5a and 5b). According to the manufacturer, the particle size was 100 nm with a sharp distribution (polydispersity ratio 0.01 as measured by TEM). This was checked by TEM measurements in our laboratory by counting 358 single particles. A mean diameter of 92.6 nm and a polydispersity ratio of 0.005 were obtained. Both the PCS and FCS methods gave values significantly greater than the TEM values (Fig. 5a). The average size and polydispersity ratio were respectively 103.6 nm and 0.13 for PCS and 104.1 nm and 0.15 for FCS. The fit of the FCS correlation curve is shown in Fig. 5b and is satisfactory. The distributions obtained by FCS were weakly dependent on the quantity of R6G used for labeling the particles, on the value of λᵣ used for the fitting, and on the initial distribution, but the polydispersity ratio was always in the range 0.10 to 0.17, and the average size did not vary by more than 10%. The reproducibility of the PCS measurements was better. Our conclusion is that the PCS and FCS measurements are consistent, and that the differences from the TEM measurements are due to aggregates, which are always present in the system. Such aggregates are also seen in the TEM pictures but are not taken into account in the TEM distribution, because it is not clear which aggregates exist in solution and which are due to aggregation during preparation of the TEM grids.

ACKNOWLEDGMENTS

The cooperation of Evotec and Carl Zeiss is acknowledged. The work was supported by the Swiss National Foundation (Project 2000-050629.97/1).

REFERENCES

1. Elson, E., and Magde, D., Biopolymers 13, 1 (1974).
2. Magde, D., Elson, E. L., and Webb, W., Biopolymers 13, 29 (1974).
3. Rigler, R., Mets, U., Widengren, J., and Kask, P., Eur. Biophys. J. 22, 169 (1993).
4. Edman, L., Mets, U., and Rigler, R., Proc. Natl. Acad. Sci. U.S.A. 93, 6710 (1996).
5. Elson, E. L., Schlessinger, J., Koppel, D. E., Axelrod, D., and Webb, W. W., Prog. Clin. Biol. Res. 9, 137 (1976).
6. Kinjo, M., and Rigler, R., Nucleic Acids Res. 23, 1795 (1995).
7. Schwille, P., Oehlenschläger, F., and Walter, N. G., Biochemistry 35, 10182 (1996).
8. St-Pierre, P. R., and Petersen, N. O., Biophys. J. 58, 503 (1990).
9. Klingler, J., and Friedrich, T., Biophys. J. 73, 2195 (1997).
10. Starchev, K., Zhang, J., and Buffle, J., J. Colloid Interface Sci. 203, 189 (1998).
11. Gulari, E., Gulari, E., Tsunashima, Y., and Chu, B., J. Chem. Phys. 70, 3965 (1979).
12. Tikhonov, A., and Arsénine, V., "Méthodes de résolution de problèmes mal posés." Editions Mir, Moscow, 1976.
13. Provencher, S., Comput. Phys. Commun. 27, 213 (1982).
14. Baveda, V., and Morazov, V., "Problèmes incorrectement posés." Masson, Paris, 1991.
15. Smith, R. C., and Grandy, W. T., Eds., "Maximum Entropy and Bayesian Methods in Inverse Problems." Reidel, Dordrecht, 1985.
16. Cernik, M., Borkovec, M., and Westall, J., Environ. Sci. Technol. 29, 413 (1995).
17. Pérez, E., and Lang, J., J. Phys. Chem., in press.
18. Aragon, S. R., and Pecora, R., J. Chem. Phys. 64, 1791 (1975).
19. Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P., "Numerical Recipes in C," 2nd ed., Chap. 18. Cambridge Univ. Press, Cambridge, UK, 1992.
20. Koppel, D., Phys. Rev. A 10, 1938 (1974).
21. Qian, H., Biophys. Chem. 38, 49 (1990).
22. Kask, P., Gunther, R., and Axhausen, P., Eur. Biophys. J. 25, 163 (1997).
23. Widengren, J., Mets, U., and Rigler, R., J. Phys. Chem. 99, 13368 (1995).
24. Lerman, A., "Geochemical Processes, Water and Sediment Environments," Chap. 5. Wiley, New York, 1979.
25. Koppel, D., J. Chem. Phys. 57, 4814 (1972).
26. Buffle, J., "Complexation Reactions in Aquatic Systems," Chap. 2. Ellis Horwood, Chichester, 1990.
27. Meakin, Adv. Colloid Interface Sci. 4, 249 (1988).
28. Filella, M., and Buffle, J., Colloids Surf. A 73, 225 (1993).