Extensivity of Rényi entropy for the Laplace–de Finetti distribution

H. Bergeron (a), E.M.F. Curado (b,c,*), J.P. Gazeau (b,d), Ligia M.C.S. Rodrigues (b)

(a) Univ Paris-Sud, ISMO, UMR 8214, 91405 Orsay, France
(b) Centro Brasileiro de Pesquisas Fisicas, Brazil
(c) Instituto Nacional de Ciência e Tecnologia-Sistemas Complexos, Rua Xavier Sigaud 150, 22290-180 Rio de Janeiro, RJ, Brazil
(d) APC, UMR 7164, Univ Paris Diderot, Sorbonne Paris Cité, 75205 Paris, France

Highlights

• de Finetti distributions have extensive Rényi entropy.
• Lower and upper bounds for the Boltzmann–Gibbs entropy for de Finetti distributions.
• Lower and upper bounds for the Rényi entropy for de Finetti distributions.

Article history: Received 24 March 2015; received in revised form 30 July 2015; available online 1 September 2015.

Keywords: Entropy; Rényi; Extensivity; Binomial distribution; Laplace–de Finetti representation

Abstract

The Boltzmann–Gibbs entropy is known to be asymptotically extensive for the Laplace–de Finetti distribution. We prove here that the same result holds in the case of the Rényi entropy. We also show some interesting lower and upper bounds for the asymptotic limit of these entropies.

1. Introduction

Since the seminal works by Clausius and Lord Kelvin introducing the concept of entropy, S, which culminated in the analytical formulation given by Clausius in 1865 in Eq. (59) of his fundamental work [1], this quantity has been written as $dS = \delta Q/T$, where $\delta Q$ is the heat exchanged in a thermodynamical transformation and $T$ is the Kelvin temperature. The fact that this thermodynamical entropy is an extensive quantity can be seen immediately from the fundamental relation of thermodynamics, which for a simple system can be written as $U = TS - pV + \mu N$. As the energy $U$ is extensive, the entropy $S$ has to be extensive, since the temperature $T$ is an intensive variable.

From the statistical mechanics point of view we have to work with a microscopic definition of entropy. This quantity is defined in terms of the probability density of the microscopic states of the system, whether classical or quantum. This route was initiated by L. Boltzmann, with his work of 1872 [2], and put in clearer form in his work of 1877 [3]. In both papers he essentially uses the microscopic quantity





$$H = \int dx\, f(x,t)\,\log f(x,t),$$

where $f(x,t)$ is a probability density on the microscopic states and $x$ is the energy. The density $f(x,t)$ tends to a time-invariant distribution when time tends to infinity, and the microscopic-based quantity $H$ tends to a steady-state value, which can be associated with the thermodynamical entropy. This steady-state quantity $H$ is extensive and associated with the equilibrium state. Since then, the nowadays so-called Boltzmann–Gibbs entropy ($S_{BG}$), related to $H$, or its quantum version, the von Neumann entropy, has been accepted by the scientific community as the microscopic version of the Clausius thermodynamical entropy.

In 1948 C. Shannon formulated his "A Mathematical Theory of Communication" [4] where, based on a set of three axioms, he obtained a quantity that measures information, $H = -K\sum_{i=1}^{n} p_i \log p_i$, depending on the probabilities $\{p_i\}$ of the messages. He also called this quantity $H$ "entropy", in spite of the fact that it was measuring a different object than the Boltzmann–Gibbs entropy. In 1957 E.T. Jaynes [5] established the connection between information theory and statistical mechanics, assuming that, in statistical mechanics, the $\{p_i\}$ represent the probabilities of the microscopic states of a given physical system. If these probabilities of microscopic states are the equilibrium probabilities, the quantity $H$, which is extensive, gives the Clausius thermodynamical entropy. In information theory, after the work of Shannon, many quantities other than $H$ that measure information were in fact proposed, but these other information measures were always considered as belonging to the field of information theory and were not related to the thermodynamical entropy, a role reserved for the Boltzmann–Gibbs, or von Neumann, entropy.

The association of the Boltzmann–Gibbs entropy with the thermodynamical entropy has been questioned in the last twenty-five years for systems with long-range interactions or long-time memory, which present a huge contraction of the accessible phase space of the system [6]. In those cases there are examples suggesting that the entropic form which should be extensive is no longer the Boltzmann–Gibbs entropy but, for example, the Tsallis entropy. These results suggest that, for those cases, the microscopic entropic form that should be related to the thermodynamical entropy is the Tsallis one and not the BG one, the extensivity of the entropic form when the size of the system goes to infinity being one of the main criteria for choosing the "correct" microscopic entropic form. However, although some of these results concerning systems with long-range interactions are disputed, it is widely believed among scientists that the only entropic form, depending on the probabilities of the microscopic states, which is extensive for any independent or weakly correlated systems is the Boltzmann–Gibbs–Shannon entropy. In fact, extensivity is to be expected from the well-known additivity property of the BG entropy, i.e., $S_{BG}(E_1 \cup E_2) = S_{BG}(E_1) + S_{BG}(E_2)$ where $E_1$ and $E_2$ are independent. Therefore, as it is well known that the Rényi entropy is additive [7], its extensivity for any independent systems should also be expected. Simple physical examples are chains of independent spins and ideal gases.
In this work we revisit these basic concepts by examining in detail the extensivity properties of the Boltzmann–Gibbs (BG) and Rényi entropies for the binomial distribution, which is appropriate for uncorrelated events, and for the Laplace–de Finetti representation [8], which deals with binary correlated systems. For an interesting discussion about the meaning of this representation, see Jaynes [9]. In recent years, applications of the de Finetti representation theorem, in its more abstract versions, have been gaining more and more importance both in classical and quantum physics [10,11]; for more references, see Ref. [12]. For the binomial distribution we prove that both entropies are extensive, i.e. proportional to the number $n$ of events; we also prove that both entropies are bounded from above by $n\log 2$, which is the value that both assume when win and loss are equiprobable. The same upper bound $n\log 2$ holds in the Laplace–de Finetti case, and we prove that this is the value asymptotically assumed at large $n$, which means that both the BG and Rényi entropies are asymptotically extensive. We thus obtain new results about the extensivity of the Rényi entropy and prove previously unknown inequalities concerning both entropies, BG and Rényi.

In Section 2 we examine and prove the extensivity properties of both the Boltzmann–Gibbs and Rényi entropies for the binomial distribution. The same study is carried out in Section 3 for the Laplace–de Finetti distribution. Our results are discussed in Section 4, and the Appendix is devoted to the study of the asymptotic behavior of both distributions. In writing the paper, we have opted for a pedagogical presentation of our results. Although the binomial case is one of the most familiar distributions, we recall elementary facts and proofs in order to make the paper self-contained.



2. Entropies for the binomial case

Let us consider a sequence of $n$ binary events, "win" or "loss", $x_1, x_2, \ldots, x_n$. The $n$-point joint probability of this sequence, $p_n(x_1, x_2, \ldots, x_n)$, is called exchangeable if $p_n$ remains symmetric in its arguments, for all $n$. Also, it is always possible to obtain $p_n$ by summing over $x_{n+1}$, through $p_n(x_1, x_2, \ldots, x_n) = \sum_{x_{n+1}} p_{n+1}(x_1, x_2, \ldots, x_n, x_{n+1})$. Therefore, the probability of a given sequence of $n$ trials does not depend on the specific order of the binary events but only on the number of sequence elements that are in one of the states, "win" or "loss". If we have $k$ "wins" and $n-k$ "losses", we have

$$\binom{n}{k} = \frac{n!}{k!\,(n-k)!} \qquad (1)$$

possible sequences, and the probability of getting them is

$$P_k^{(n)} = \binom{n}{k}\,\varpi_k^{(n)}, \qquad (2)$$

where $\varpi_k^{(n)}$ is the probability of one of these sequences and

$$\sum_{k=0}^{n} P_k^{(n)} = 1. \qquad (3)$$

The property of exchangeability for all $n$ is sufficient to lead to the relation $\varpi_k^{(n-1)} = \varpi_k^{(n)} + \varpi_{k+1}^{(n)}$, which is known as the Leibniz triangle rule. For a sequence of uncorrelated events (which are always exchangeable), where the probability of a "win" is $\eta$ and of a "loss" is $1-\eta$, $\eta \in [0,1]$, we have the well-known binomial distribution

$$P_k^{(n)}(\eta) = \binom{n}{k}\,\eta^k (1-\eta)^{n-k}, \qquad 0 \le \eta \le 1. \qquad (4)$$
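Both properties are easy to check numerically. The following minimal Python sketch (our illustration, not part of the paper; the values of η and n are arbitrary) verifies the normalization (3) and the Leibniz triangle rule for the binomial sequence probabilities $\varpi_k^{(n)} = \eta^k(1-\eta)^{n-k}$, given explicitly in Eq. (5) below:

```python
from math import comb

eta, n = 0.3, 12  # illustrative values (our choice)

def varpi(n, k, eta):
    """Probability of one specific sequence with k wins in n trials, Eq. (5) below."""
    return eta**k * (1 - eta)**(n - k)

# Normalization, Eq. (3): sum_k C(n,k) * varpi_k^(n) = 1
assert abs(sum(comb(n, k) * varpi(n, k, eta) for k in range(n + 1)) - 1) < 1e-12

# Leibniz triangle rule: varpi_k^(n-1) = varpi_k^(n) + varpi_{k+1}^(n)
for k in range(n):
    assert abs(varpi(n - 1, k, eta) - (varpi(n, k, eta) + varpi(n, k + 1, eta))) < 1e-12

print("normalization and Leibniz rule hold")
```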

Let us remark that in this paper we discuss only exchangeable sequences, but non-exchangeable sequences are also easy to find. A time series of the stock market value of a company, or the sequence of temperatures in Goiânia during a year, are examples of non-exchangeable sequences, because the ordering is important. We intend to study such sequences in future works.

2.1. Extensivity of Boltzmann–Gibbs entropy

Let us first consider the probability term of the binomial distribution, without the binomial coefficient, since we do not take into account the multiplicity of sequences,

$$\varpi_k^{(n)} = \frac{P_k^{(n)}}{\binom{n}{k}} = \eta^k (1-\eta)^{n-k}. \qquad (5)$$

The Boltzmann–Gibbs entropy for the binomial case is then defined as

$$S_{BG}(\eta) = -\sum_{k=0}^{n}\binom{n}{k}\,\varpi_k^{(n)}\log\left(\varpi_k^{(n)}\right) = -\sum_{k=0}^{n} P_k^{(n)}\log\left(\varpi_k^{(n)}\right). \qquad (6)$$

Applying $p\,\frac{d}{dp}$ to the relation

$$\sum_{k=0}^{n}\binom{n}{k}\, p^k q^{n-k} = (p+q)^n, \qquad (7)$$

we obtain

$$\sum_{k=0}^{n}\binom{n}{k}\, k\, p^k q^{n-k} = n\,p\,(p+q)^{n-1}. \qquad (8)$$

Therefore, setting $p = \eta$ and $q = 1-\eta$,

$$\langle k\rangle = \sum_{k=0}^{n}\binom{n}{k}\, k\,\varpi_k^{(n)}(\eta) = n\eta. \qquad (9)$$

From Eq. (6) we obtain

$$S_{BG}(\eta) = -\left(\langle k\rangle\log\eta + \langle n-k\rangle\log(1-\eta)\right), \qquad (10)$$

or, explicitly in terms of the parameters,

$$S_{BG}(\eta) = -n\left(\eta\log\eta + (1-\eta)\log(1-\eta)\right) \le S_{BG}(1/2) = n\log 2, \qquad (11)$$

the inequality resulting from the concavity of the logarithm. Therefore $S_{BG}$ is extensive for any $n$, with upper bound equal to $n\log 2$.

2.2. Extensivity of Rényi entropy

The Rényi entropy for the binomial case is defined as

$$S_R^{(q)}(\eta) = \frac{1}{1-q}\,\log\left(\sum_{k=0}^{n}\binom{n}{k}\left(\varpi_k^{(n)}\right)^q\right). \qquad (12)$$


It is actually a $q$-dependent family of entropies, which tends to the BG entropy as $q \to 1$. Now, we trivially have

$$\sum_{k=0}^{n}\binom{n}{k}\left(\varpi_k^{(n)}(\eta)\right)^q = \sum_{k=0}^{n}\binom{n}{k}\,\eta^{qk}\,(1-\eta)^{q(n-k)} = \left(\eta^q + (1-\eta)^q\right)^n. \qquad (13)$$

It follows that

$$S_R^{(q)}(\eta) = \frac{n}{1-q}\,\log\left(\eta^q + (1-\eta)^q\right) \le S_R^{(q)}(1/2) = n\log 2, \qquad (14)$$

where the inequality holds for $0 < q < 1$. Therefore $S_R^{(q)}$ is also extensive for any $n$, the constant factor of $n$ being in general $q$-dependent. It is remarkable that this $q$-dependence is removed only for $\eta = 1/2$, where the factor is $\log 2$. The expression $n\log 2$ is an upper bound, the same as the one obtained for the BG entropy. This result, unexpected in the sense that we could not find such a simple statement in the literature, shows that for the binomial case, which describes $n$ uncorrelated binary events, at least two entropies, namely the Boltzmann–Gibbs one and the Rényi one for any $0 < q < 1$, are extensive with $n$.
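To make these extensivity statements concrete, here is a small Python sketch (ours; the choices of η and q are arbitrary) that evaluates both entropies directly from their definitions (6) and (12) and checks the linear growth with the slopes predicted by Eqs. (11) and (14):

```python
from math import comb, log

def S_BG(n, eta):
    """Boltzmann-Gibbs entropy of the binomial case, direct sum of Eq. (6)."""
    return -sum(comb(n, k) * eta**k * (1 - eta)**(n - k)
                * (k * log(eta) + (n - k) * log(1 - eta))
                for k in range(n + 1))

def S_R(n, eta, q):
    """Renyi entropy of the binomial case, direct sum of Eq. (12)."""
    Z = sum(comb(n, k) * (eta**k * (1 - eta)**(n - k))**q for k in range(n + 1))
    return log(Z) / (1 - q)

eta, q = 0.3, 0.5
for n in (50, 100, 200):
    # Eq. (11) predicts S_BG/n = -(eta log eta + (1-eta) log(1-eta)) ~ 0.6109
    # Eq. (14) predicts S_R/n  = log(eta^q + (1-eta)^q)/(1-q)        ~ 0.6506
    print(n, S_BG(n, eta) / n, S_R(n, eta, q) / n)
# Both ratios are n-independent and bounded by log 2 ~ 0.693.
```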

3. Entropies for the Laplace–de Finetti distribution

3.1. Laplace–de Finetti representation

One of the first discussions of deformations of the binomial law was made by Laplace (1774). In 1937, de Finetti, analyzing exchangeable sequences (correlated or not), proved in his original theorem [8] that, considering a subset of $n$ trials from an exchangeable sequence of "wins" or "losses", the probability that it has precisely $k$ "wins" is given by the expression

$$P_k^{(n)} = \binom{n}{k}\int_0^1 dy\; y^k (1-y)^{n-k}\, g(y), \qquad (15)$$

1

where g (y) ≥ 0 and 0 g (y)dy = 1. This actually represents a superposition of binomial factors weighted by a nonnegative g (y). In fact, it was in this intuitive sense that the representation given by Eq. (15) was introduced by Laplace in 1774. Let us remark here that an exchangeable sequence can be correlated or not, but an uncorrelated sequence is always exchangeable. (n) The Laplace–de Finetti representation leads to a deformation of the binomial law where the ϖk term, given by Eq. (5), changes into a sort of beta transform averaging the ‘‘pure’’ ηk (1 − η)n−k , namely

ϖ k(n) :=

1



1



dy g (y) yk (1 − y)n−k

dy g (y) = 1.

where

0

(16)

0

Clearly, the sum rule, Eq. (3), is trivially satisfied and when g (y) = δ(y − η) we recover the binomial case, Eq. (5). This modification represents exchangeable processes and therefore satisfies the Leibniz triangle rule.   (n) (kn) := n ϖ k tends to the We show in Appendix (see Eq. (47)) that in the limit n tending to infinity the distribution P k (n)

same function g (x) used in the definition (16) of the modified ϖ k . This allows us to have a limit distribution that is not Gaussian, unlike the binomial case. In fact this limit was already found by Hanel, Thurner and Tsallis [13] who analyzed the modification of the binomial law above. They also demonstrated that the Boltzmann–Gibbs entropy becomes extensive at large n. These results mean that in spite of the fact that the limit distribution can be changed if we choose properly the function g (x), the extensive entropy is still the Boltzmann–Gibbs one. 3.2. Lower and upper bounds for BG and Rényi entropies In the remainder we define η¯ as
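As a sanity check of the deformed distribution itself, the following Python sketch (ours; the weight $g$ is an arbitrary normalized choice, not one used in the paper) evaluates Eq. (16) by simple midpoint quadrature and confirms that the sum rule (3) and the Leibniz triangle rule are preserved:

```python
from math import comb

def varpi_tilde(n, k, g, m=20000):
    """Midpoint-rule evaluation of Eq. (16): int_0^1 g(y) y^k (1-y)^(n-k) dy."""
    total = 0.0
    for j in range(m):
        y = (j + 0.5) / m
        total += g(y) * y**k * (1 - y)**(n - k)
    return total / m

g = lambda y: 6.0 * y * (1.0 - y)   # an arbitrary normalized weight (our choice)

n = 10
w = {(m_, k): varpi_tilde(m_, k, g) for m_ in (n - 1, n) for k in range(m_ + 1)}
print(sum(comb(n, k) * w[n, k] for k in range(n + 1)))                  # ~1, sum rule (3)
print(max(abs(w[n - 1, k] - w[n, k] - w[n, k + 1]) for k in range(n)))  # ~0, Leibniz rule
```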

3.2. Lower and upper bounds for BG and Rényi entropies

In the remainder we define $\bar\eta$ as

$$\bar\eta = \int_0^1 dy\, g(y)\, y. \qquad (17)$$

From Eq. (9), with the definition above, it is straightforward to obtain

$$\sum_{k=0}^{n}\binom{n}{k}\, k\,\tilde\varpi_k^{(n)}[g] = n\bar\eta. \qquad (18)$$

Furthermore, using the concavity of $x \mapsto \log x$, we have

$$\int_0^1 dy\, g(y)\,\log\left(y^k (1-y)^{n-k}\right) \le \log\left(\int_0^1 dy\, g(y)\, y^k (1-y)^{n-k}\right). \qquad (19)$$

With the same reasoning applied to $x \mapsto x^q$ in the case $0 < q < 1$,

$$\int_0^1 dy\, g(y)\left(y^k (1-y)^{n-k}\right)^q \le \left(\int_0^1 dy\, g(y)\, y^k (1-y)^{n-k}\right)^q. \qquad (20)$$


3.2.1. Boltzmann–Gibbs entropy $\tilde S_{BG}[g]$

Using Eq. (19) we deduce that

$$\tilde S_{BG}[g] \le -\int_0^1 dy\, g(y)\sum_{k=0}^{n}\binom{n}{k}\,\tilde\varpi_k^{(n)}[g]\,\log\left(y^k (1-y)^{n-k}\right). \qquad (21)$$

Then, from Eq. (18),

$$\tilde S_{BG}[g] \le -n\int_0^1 dy\, g(y)\left(\bar\eta\,\log y + (1-\bar\eta)\,\log(1-y)\right). \qquad (22)$$

We prove below, from the limit analysis of the Rényi entropy, that the upper bound $n\log 2$ is unchanged.

3.2.2. Rényi entropy $\tilde S_R^{(q)}[g]$ (for $0 < q < 1$)

From Eqs. (13) and (20) we infer that

$$\int_0^1 dy\, g(y)\left(y^q + (1-y)^q\right)^n \le \sum_{k=0}^{n}\binom{n}{k}\left(\tilde\varpi_k^{(n)}[g]\right)^q. \qquad (23)$$

Now, using the concavity of $x \mapsto \log x$, we have

$$\int_0^1 dy\, g(y)\,\log\left(\left(y^q + (1-y)^q\right)^n\right) \le \log\left(\int_0^1 dy\, g(y)\left(y^q + (1-y)^q\right)^n\right). \qquad (24)$$

0

Hence we get the lower bound

$$\frac{n}{1-q}\int_0^1 dy\, g(y)\,\log\left(y^q + (1-y)^q\right) \le \tilde S_R^{(q)}[g]. \qquad (25)$$

In other words,

$$\int_0^1 dy\, g(y)\, S_R^{(q)}(y) \le \tilde S_R^{(q)}[g]. \qquad (26)$$

Furthermore, using the Hölder inequality with $q' = 1/q > 1$ and $p' = 1/(1-q)$, we have

$$\sum_{k=0}^{n}\binom{n}{k}\left(\tilde\varpi_k^{(n)}[g]\right)^q \le \left(\sum_{k=0}^{n}\binom{n}{k}\left(\tilde\varpi_k^{(n)}[g]\right)^{qq'}\right)^{1/q'}\left(\sum_{k=0}^{n}\binom{n}{k}\, 1^{p'}\right)^{1/p'}. \qquad (27)$$

Then, from the normalization of the Laplace–de Finetti probability law we deduce that

$$\sum_{k=0}^{n}\binom{n}{k}\left(\tilde\varpi_k^{(n)}[g]\right)^q \le \left(\sum_{k=0}^{n}\binom{n}{k}\right)^{1-q} = 2^{n(1-q)}. \qquad (28)$$

Hence the upper bound

$$\tilde S_R^{(q)}[g] \le n\log 2. \qquad (29)$$

3.2.3. Back to $\tilde S_{BG}[g]$

We know that as $q \to 1$ we have $S_R \to S_{BG}$. From this fact and from Eq. (26) we obtain a second inequality for $\tilde S_{BG}[g]$:

$$-n\int_0^1 dy\, g(y)\left(y\log y + (1-y)\log(1-y)\right) \le \tilde S_{BG}[g]. \qquad (30)$$

From Eqs. (22) and (30) we deduce that

$$\int_0^1 dy\, g(y)\, S_{BG}(y) \le \tilde S_{BG}[g] \le -n\int_0^1 dy\, g(y)\left(\bar\eta\,\log y + (1-\bar\eta)\,\log(1-y)\right). \qquad (31)$$

Furthermore, from the upper bound $\tilde S_R^{(q)}[g] \le n\log 2$, taking the limit $q \to 1$, we deduce that $\tilde S_{BG}[g] \le n\log 2$. We note that the asymptotic limit of the Boltzmann–Gibbs entropy for the Laplace–de Finetti representation can be calculated analytically, giving

$$\tilde S_{BG}[g] \sim -n\int_0^1 dx\, g(x)\left[x\log x + (1-x)\log(1-x)\right]. \qquad (32)$$


The numerical value of the dominant term in the asymptotic expression above depends on the specific form of the function $g(x)$. For example, for $g(x) = \frac{8}{\pi}\sqrt{x(1-x)}$ it is $\tilde S_{BG}[g] \sim \left(2\log 2 - \frac{5}{6}\right) n \approx 0.553\, n$.
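This asymptotic slope can be observed numerically. The sketch below (ours, not part of the paper) evaluates $\tilde S_{BG}[g]$ exactly for this weight, using the closed Gamma-function form of $\tilde\varpi_k^{(n)}$ given in Section 3.3 below and working in log-space with `lgamma` for numerical stability:

```python
from math import lgamma, log, exp, pi

def log_varpi_tilde(n, k):
    """log of Eq. (16) for g(y) = (8/pi) sqrt(y(1-y)):
    varpi_tilde = 8 Gamma(k+3/2) Gamma(n-k+3/2) / (pi Gamma(n+3)), see Section 3.3."""
    return log(8 / pi) + lgamma(k + 1.5) + lgamma(n - k + 1.5) - lgamma(n + 3)

def S_BG_tilde(n):
    """Exact Boltzmann-Gibbs entropy -sum_k C(n,k) w_k log w_k for this weight."""
    s = 0.0
    for k in range(n + 1):
        log_binom = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
        lw = log_varpi_tilde(n, k)
        s -= exp(log_binom + lw) * lw
    return s

for n in (100, 1000, 10000):
    print(n, S_BG_tilde(n) / n)   # approaches 2 log 2 - 5/6 ~ 0.5530
```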

3.3. Asymptotic analysis of the extensivity properties of the Rényi entropy

Let us now analyze the asymptotic behavior of the Rényi entropy

$$\tilde S_R^{(q)}[g] = \frac{1}{1-q}\,\log\left(\sum_{k=0}^{n}\binom{n}{k}\left(\tilde\varpi_k^{(n)}\right)^q\right) \qquad (33)$$

for the Laplace–de Finetti modification of the binomial law. The asymptotic behavior of the binomial coefficient is given by Eq. (37) in the Appendix, and the asymptotic behavior of $\tilde\varpi_k^{(n)}$ can be obtained from Eq. (16) by setting $k = nx$:

$$\tilde\varpi_k^{(n)} = \int_0^1 dy\, \exp\left[n\left(x\log y + (1-x)\log(1-y)\right)\right] g(y) := \int_0^1 dy\, \exp\left[n f_L(x,y)\right] g(y), \qquad (34)$$

where we can use the Laplace approximation since $n$ is very large. The maximum of the function $f_L(x,y)$ occurs at $y = x$. Expanding $f_L(x,y)$ for $y$ around $x$ up to second order, we have

$$\tilde\varpi_k^{(n)} \sim \sqrt{\frac{2\pi x(1-x)}{n}}\,\exp\left[n\left(x\log x + (1-x)\log(1-x)\right)\right] g(x). \qquad (35)$$

Using Eqs. (37) and (35) in Eq. (33) with $k = nx$, $\sum_k \to n\int_0^1 dx$, we have

$$\tilde S_R^{(q)}[g] \sim \frac{1}{1-q}\,\log\left[\left(\frac{n}{2\pi}\right)^{\frac{1-q}{2}}\int_0^1 dx\,\frac{g(x)^q}{\left(x(1-x)\right)^{1-q}}\,\exp\left[n(1-q)\, f_{RL}(x)\right]\right],$$

where $f_{RL}(x) = -x\log x - (1-x)\log(1-x)$. Again, we can use the Laplace approximation when $n$ is very large and $0 < q < 1$. The maximum of the function $f_{RL}(x)$ occurs at $x = 1/2$. We have $f_{RL}(1/2) = \log 2$ and, for the second derivative, $f''_{RL}(1/2) = -4$. Expanding the function $f_{RL}(x)$ up to second order and integrating over $(-\infty,\infty)$ we get

$$\tilde S_R^{(q)}[g] \sim \frac{1}{1-q}\,\log\left[\left(\frac{n}{2\pi}\right)^{\frac{1-q}{2}}\sqrt{\frac{\pi}{2n(1-q)}}\; 2^{2(1-q)}\, g(1/2)^q\,\exp\left[n(1-q)\log 2\right]\right],$$

showing that asymptotically, for large values of $n$, the Rényi entropy behaves as

$$\tilde S_R^{(q)}[g] \sim n\log 2, \qquad (36)$$

regardless of the value of $q$ in $(0,1)$ (exactly the range of values of $q$ where the Rényi entropy is concave). This result is in agreement with Eq. (29). It means that the Rényi entropy for the Laplace–de Finetti distribution tends to its upper bound at large $n$. In Fig. 1 we show a numerical computation (up to $n = 10^3$) of the Rényi entropy for $q = 1/2$ and $g(y) = \frac{8}{\pi}\sqrt{y(1-y)}$ (displaced Wigner distribution). With this function, the integral $\tilde\varpi_k^{(n)}$ takes the analytic form

$$\tilde\varpi_k^{(n)} = \frac{8\,\Gamma(k+3/2)\,\Gamma(n-k+3/2)}{\pi\,\Gamma(n+3)}.$$

Fig. 1. Rényi entropy for the Laplace–de Finetti representation with $q = 1/2$, $g(y) = \frac{8}{\pi}\sqrt{y(1-y)}$, and $n$ up to $10^3$.

Thus, even for the Laplace–de Finetti modification of the binomial law, where correlations are present, at least the Boltzmann–Gibbs entropy (as shown in Ref. [13]) and the Rényi entropy are extensive for large values of $n$. Besides, it is worth noting that both entropies are not only extensive but also additive.

4. Conclusions

Thermodynamics requires that all the thermodynamic potentials and the thermodynamic entropy be extensive when the number $n$ of microscopic variables is very large. Therefore, any microscopic description of these quantities, including the entropy, has to be extensive at large $n$ in order to reproduce the thermodynamical results. In this paper we show analytically that more than one microscopic entropic form is extensive for the same probability distribution on the macroscopic states. These analytical results lead us to two considerations. First, the requirement of extensivity as a criterion for selecting a microscopic entropy as the thermodynamical entropy seems not to be sufficient. Second, one is led to ask how to choose, among these entropic forms, which depend only on the probabilities of the microstates, the one that should lead to the


thermodynamic entropy at a macroscopic level, when the number of particles (or events) is very large. One possibility for making this choice would be to go back to the Clausius definition, $dS = \delta Q/T$, which must be satisfied along any quasi-static transformation. Probably, only one of the microscopically extensive entropic forms will satisfy this requirement. Certainly, the study of the extensivity of entropies for deformations of the binomial law more complex than the Laplace–de Finetti representation, like those studied in Refs. [14–17], is expected to give interesting insights into these questions. In particular, in Ref. [18] an example is presented where the Rényi entropy is extensive and the Boltzmann–Gibbs one is not. This shows that these two entropies are not necessarily equally extensive in all cases; on the other hand, we do not know any example of a probability distribution for which the BG entropy is asymptotically extensive and the Rényi entropy is not.

Acknowledgments

J.P. Gazeau thanks CBPF (PCI) and CNPq (grant 301135/2014-2) for financial support and CBPF for hospitality. E.M.F. Curado acknowledges CNPq (grant 308655/2014-1) and FAPERJ (INCT-SC) for financial support.

Appendix. Asymptotic behavior of binomial and de Finetti distributions for large number of events

In order to analyze the asymptotic behavior of the distribution $P_k^{(n)} = \binom{n}{k}\,\eta^k(1-\eta)^{n-k}$ when the number $n$ of events is very large, we calculate the asymptotic behavior of the binomial coefficient, Eq. (1), and of the probability term $\eta^k(1-\eta)^{n-k}$. First, defining $x = k/n$, we note that when $n$ is very large $x$ can be considered as a continuous variable belonging to $[0,1]$, and the sum over $k$, as for example in Eq. (3), turns into an integral in the Riemann–Stieltjes sense, with measure $n\,dx$. Using Stirling's approximation we can write the asymptotic behavior of the binomial coefficient, for $n$ very large and $k = nx$, as

$$\binom{n}{nx} \sim \frac{1}{\sqrt{2\pi n x(1-x)}}\,\exp\left\{n\left[-x\log x - (1-x)\log(1-x)\right]\right\}, \qquad (37)$$

and the probability term as

$$\eta^{nx}\,(1-\eta)^{n(1-x)} \sim \exp\left\{n\left[x\log\eta + (1-x)\log(1-\eta)\right]\right\}. \qquad (38)$$

Putting the two terms together, we have for the asymptotic behavior of the binomial distribution

$$P_k^{(n)} \sim \frac{1}{\sqrt{2\pi n x(1-x)}}\,\exp\left[n f_\eta(x)\right], \qquad (39)$$

where

$$f_\eta(x) = -x\log\left(\frac{x}{\eta}\right) - (1-x)\log\left(\frac{1-x}{1-\eta}\right), \qquad (40)$$

and Eq. (3) can be written, using $\sum_{k=0}^{n} \to n\int_0^1 dx$, as

$$\sum_{k=0}^{n} P_k^{(n)} \sim n\int_0^1 dx\,\frac{\exp\left[n f_\eta(x)\right]}{\sqrt{2\pi n x(1-x)}}. \qquad (41)$$


As $n$ is very large, this integral is dominated by the maximum of the function $f_\eta(x)$, allowing us to use the Laplace approximation, expanding $f_\eta(x)$ up to second order around the maximum. The maximum satisfies $f'_\eta(x^\star) = 0$, implying $x^\star = \eta$. As $f_\eta(x^\star = \eta) = 0$ and $f''_\eta(x^\star = \eta) = -1/(\eta(1-\eta))$, substituting the expansion of Eq. (39) into Eq. (41) gives, for large $n$ and up to second order,

$$\sum_{k=0}^{n} P_k^{(n)} \sim \frac{1}{\sqrt{2\pi n\eta(1-\eta)}}\int_{-\infty}^{\infty} n\,dx\,\exp\left[-\frac{n(x-\eta)^2}{2\eta(1-\eta)}\right] = 1, \qquad (42)$$

where the limits of integration have been extended to $(-\infty,\infty)$ since the Gaussian becomes Dirac-like at large $n$. This shows that the limit of the binomial distribution, for large $n$, is the Gaussian distribution

$$P_{nx}^{(n)} \sim \frac{1}{\sqrt{2\pi n\eta(1-\eta)}}\,\exp\left[-\frac{n}{2\eta(1-\eta)}\,(x-\eta)^2\right]. \qquad (43)$$

Defining $y = nx$, $\bar y = n\eta$ and $\sigma^2 = n\eta(1-\eta)$, Eq. (43) can be rewritten as

$$P_y^{(n)} \sim \frac{1}{\sqrt{2\pi n\eta(1-\eta)}}\,\exp\left[-\frac{1}{2n\eta(1-\eta)}\,(y-\bar y)^2\right], \qquad (44)$$

which is the well-known Gaussian limit of the binomial distribution when $n$ is very large.

Let us now show that the Laplace–de Finetti distribution $\tilde P_k^{(n)} := \binom{n}{k}\,\tilde\varpi_k^{(n)}$ tends to the function $g(x)$ used in the definition (16). First, using Eq. (37) with $k = nx$, we replace $\binom{n}{k}\, y^k(1-y)^{n-k}$ in the integral defining $\tilde P_k^{(n)}$ by

$$\frac{1}{\sqrt{2\pi n x(1-x)}}\,\exp\left[n\,\psi_x(y)\right],$$

with $\psi_x(y) = f_y(x) = x\log y + (1-x)\log(1-y) - \left(x\log x + (1-x)\log(1-x)\right)$. Then we can write

$$\tilde P_{k=nx}^{(n)} \sim \int_0^1 dy\, g(y)\,\frac{\exp\left[n\,\psi_x(y)\right]}{\sqrt{2\pi n x(1-x)}}. \qquad (45)$$

We now apply the same reasoning as above. At large $n$, this integral is dominated by the maximum of the function $\psi_x(y)$. Since $\frac{d}{dy}\psi_x(y) = \frac{x}{y} - \frac{1-x}{1-y} = 0$ for $y = x$, $\psi_x(x) = 0$ and $\frac{d^2}{dy^2}\psi_x(y) = -\frac{x}{y^2} - \frac{1-x}{(1-y)^2} = -\frac{1}{x(1-x)}$ for $y = x$, up to second order Eq. (45) can be written, for large $n$, as

$$\tilde P_{k=nx}^{(n)} \sim \int_0^1 dy\,\frac{g(y)}{\sqrt{2\pi n x(1-x)}}\,\exp\left[-\frac{n(y-x)^2}{2x(1-x)}\right]. \qquad (46)$$

With the change of variable $t = \sqrt{n}\,(y-x)/\sqrt{2x(1-x)}$ this integral becomes

$$\frac{1}{n}\,\frac{1}{\sqrt{\pi}}\int_{-\sqrt{\frac{nx}{2(1-x)}}}^{\sqrt{\frac{n(1-x)}{2x}}} dt\; g\!\left(\sqrt{\frac{2x(1-x)}{n}}\, t + x\right) e^{-t^2}.$$

Dropping the factor $1/n$ in order to keep the normalization of $\tilde P_{k=nx}^{(n)}$ as a probability distribution on the interval $(0,1)$ with the measure $dx$, and going to the limit $n \to \infty$, we obtain

$$n\,\tilde P_{k=nx}^{(n)} \xrightarrow[n\to\infty]{} g(x). \qquad (47)$$

Of course, this result is valid under (mild) restrictions on the weight $g(y)$.
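The convergence (47) is easy to observe numerically. A minimal sketch (ours, not from the paper), again for the displaced Wigner weight through its Gamma-function form of $\tilde\varpi_k^{(n)}$ from Section 3.3:

```python
from math import lgamma, log, exp, pi, sqrt

def nP_tilde(n, x):
    """n * P_tilde_{k=nx}, with varpi_tilde in the Gamma form of Section 3.3."""
    k = round(n * x)
    log_binom = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    lw = log(8 / pi) + lgamma(k + 1.5) + lgamma(n - k + 1.5) - lgamma(n + 3)
    return n * exp(log_binom + lw)

g = lambda x: (8 / pi) * sqrt(x * (1 - x))
for x in (0.1, 0.3, 0.5):
    print(x, g(x), [round(nP_tilde(n, x), 4) for n in (10, 100, 1000, 10000)])
# Each row converges toward g(x), illustrating Eq. (47).
```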

References

[1] R. Clausius, Ueber verschiedene für die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wärmetheorie, Ann. Phys. Chem. 125 (1865) 23, no. 7; The Mechanical Theory of Heat, translated by T. Archer Hirst, van Voorst, 1867, p. 28.
[2] L. Boltzmann, Weitere Studien über das Wärmegleichgewicht unter Gasmolekülen, Wiener Berichte 66 (1872) 275–370; Bemerkungen über einige Probleme der mechanischen Wärmetheorie, Wiener Berichte 75, 62–100.
[3] L. Boltzmann, Über die Beziehung zwischen dem zweiten Hauptsatze der mechanischen Wärmetheorie und der Wahrscheinlichkeitsrechnung respektive den Sätzen über das Wärmegleichgewicht, Wiener Berichte 76 (1877) 373–435.
[4] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27 (1948) 379–423 & 623–656.
[5] E.T. Jaynes, Information theory and statistical mechanics, Phys. Rev. 106 (1957) 620; Phys. Rev. 108 (1957) 171.
[6] C. Tsallis, Introduction to Nonextensive Statistical Mechanics: Approaching a Complex World, Springer, New York, 2009, ISBN 978-0-387-85358-1.
[7] A. Rényi, On measures of entropy and information, in: Proc. 4th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, 1960, pp. 547–561.
[8] B. de Finetti, La prévision: ses lois logiques, ses sources subjectives, Ann. Inst. H. Poincaré 7 (1937) 1–68. [English translation in: H.E. Kyburg and H.E. Smokler (Eds.), Studies in Subjective Probability, Krieger, Malabar, FL, 1980, pp. 53–118.]
[9] E.T. Jaynes, Some applications and extensions of the de Finetti representation theorem, in: Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, Studies in Bayesian Econometrics and Statistics, vol. 6, North-Holland, 1986.

[10] G. Chiribella, On quantum estimation, quantum cloning and finite quantum de Finetti theorems, in: Theory of Quantum Computation, Communication, and Cryptography, Lecture Notes in Computer Science, vol. 6519, Springer, 2011.
[11] M. Lewin, P.T. Nam, N. Rougerie, Remarks on the quantum de Finetti theorem for bosonic systems, hal-00870911v3.
[12] N. Rougerie, Théorèmes de de Finetti, limites de champ moyen et condensation de Bose–Einstein, Cours Peccot au Collège de France, February–March 2014.
[13] R. Hanel, S. Thurner, C. Tsallis, Eur. Phys. J. B 72 (2009) 263–268.
[14] E.M.F. Curado, J.P. Gazeau, L.M.C.S. Rodrigues, J. Stat. Phys. 146 (2012) 264–280.
[15] H. Bergeron, E.M.F. Curado, J.P. Gazeau, L.M.C.S. Rodrigues, J. Math. Phys. 53 (2012) 103304.
[16] H. Bergeron, E.M.F. Curado, J.P. Gazeau, L.M.C.S. Rodrigues, in: J.P. Gazeau, M. Ge, C. Bai (Eds.), Group 29: Physical and Mathematical Aspects of Symmetries, Proceedings of the XXIXth International Colloquium on Group Theoretical Methods in Physics, Tianjin, China, 19–25 August 2012, World Scientific, 2013, pp. 265–270.
[17] H. Bergeron, E.M.F. Curado, J.P. Gazeau, L.M.C.S. Rodrigues, J. Math. Phys. 54 (2013); arXiv:1308.4863 [math-ph].
[18] H. Bergeron, E.M.F. Curado, J.P. Gazeau, L.M.C.S. Rodrigues, arXiv:1412.0581 [cond-mat.stat-mech].