Statistical Mechanics: Principles and Selected Applications
Ebook · 756 pages · 4 hours


About this ebook

"Excellent … a welcome addition to the literature on the subject." — Science
Before the publication of this standard, oft-cited book, there were few if any statistical-mechanics texts that incorporated reviews of both fundamental principles and recent developments in the field.
In this volume, Professor Hill offers just such a dual presentation — a useful account of basic theory and of its applications, made accessible in a comprehensive format. The book opens with concise, unusually clear introductory chapters on classical statistical mechanics, quantum statistical mechanics and the relation of statistical mechanics to thermodynamics. Then follows a wide-ranging, detailed examination of various applications. Chapter 4 deals with fluctuations. The fifth chapter treats the theory of imperfect gases and condensation, largely following Mayer's theory but also giving some new, alternative derivations and discussing in the final section Yang and Lee's theory. The sixth chapter is devoted to a discussion of distribution functions and the theory of the liquid state. Chapter 7 deals with nearest-neighbor (Ising) lattice statistics, while the last chapter discusses free-volume and hole theories of liquids and solids.
Written primarily for graduate students and researchers in chemistry, physics and biology who already have some acquaintance with statistical mechanics, the book lends itself to use as a text for a second course in statistical mechanics, as a supplement to a first course or for self-study or reference. The level is neither introductory nor highly sophisticated; the author has generally emphasized material that is not available in other books. In addition, selected bibliographic references at the end of each chapter suggest supplementary reading.

Language: English
Release date: Apr 26, 2013
ISBN: 9780486318561
    Book preview

    Statistical Mechanics - Terrell L. Hill

    CHAPTER 1

    PRINCIPLES OF CLASSICAL STATISTICAL MECHANICS

    1. Statistical Mechanics and Thermodynamics

    Thermodynamics is concerned with the relationships between certain macroscopic properties (the thermodynamic variables or functions) of a system in equilibrium. We must turn to statistical mechanics if our curiosity goes beyond these formal interrelationships and we wish to understand the connection between the observed values of a thermodynamic function and the properties of the molecules making up the thermodynamic system. That is, statistical mechanics provides the molecular theory of the macroscopic properties of a thermodynamic system. In current research, both thermodynamics and statistical mechanics are in the process of extension to systems departing from equilibrium, but these developments will not be included here.

    In this chapter we summarize the foundations of classical statistical mechanics. The corresponding quantum-mechanical discussion is given in Chap. 2. The relationships between classical statistical mechanics, quantum statistical mechanics, and thermodynamics will then be outlined in Chap. 3.

    Consider a thermodynamic system which contains N1 molecules of component 1, N2 molecules of component 2, … , and Nr molecules of component r, where r is the number of independent components. Also, let there be s external variables x1,x2, … , xs (e.g., volume). Then extensive thermodynamic properties may be considered as functions of r + s + 1 independent variables (for example, T, x1, x2,… , xs, N1,N2, … , Nr). Similarly, intensive properties are functions of r + s independent variables (for example, T, x1, x2, … , xs, N2/N1, N3/N1, … , Nr/N1). Thus, the thermodynamic state of a system (including its extent) is completely specified by the assignment of values to r + s + 1 variables. In contrast to this, the determination of the dynamical state of the same system¹ requires the specification of a very large number of variables, of the order of the total number of molecules N1 + N2 + … + Nr (roughly 10²³ in typical cases). To be specific, if the system has n degrees of freedom, one usually chooses the 2n variables q1, q2, … , qn and p1, p2, … , pn, where the q’s are (generalized) coordinates and the p’s are the associated conjugate momenta defined by

        pi = ∂L/∂q̇i        (i = 1, 2, … , n)

    where L = K − U is the Lagrangian, K is the kinetic energy of the system, and U(q1, … , qn) is the potential energy of the system. For example, in a one-component system containing N particles with no internal degrees of freedom (rotation, vibration, etc.), 2n = 6N since there are three translational degrees of freedom per particle.

    From the above it is clear that complete specification of the thermodynamic state leaves the dynamical state undefined. That is to say, there are very many (actually infinitely many in classical mechanics) dynamical (microscopic) states consistent with a given thermodynamic (macroscopic) state. The central problem of statistical mechanics, as a molecular theory, is therefore to establish the way in which averages of properties over dynamical states (consistent with the particular thermodynamic state of interest) should be taken in order that they may be put into correspondence with the thermodynamic functions of the system.

    2. Phase Space

    As suggested by Gibbs, the dynamical state of a system may be specified by locating a point in a 2n dimensional space, the coordinates of the point being the values of the n coordinates and n momenta which specify the state. The space is referred to as phase space and the point is called a phase point or representative point. It should be noted that with the forces of the system given, assignment of the position of a phase point in phase space at time t actually completely determines the future (and past) trajectory or path of the point as it moves through phase space in accordance with the laws of mechanics. The equations of motion of the phase point are in fact (in Hamiltonian form)

        q̇i = ∂H/∂pi ,        ṗi = −∂H/∂qi        (i = 1, 2, … , n)        (2.1)

    where H, the Hamiltonian function, is an expression for the energy of the system as a function of the p’s and q’s. That is,

        H = H(p,q)

    where we have introduced the usual convention that p and q mean p1, p2,… , pn and q1, q2, … , qn, respectively. In principle, this system of 2n first-order differential equations may be integrated to give p(t) and q(t). The 2n constants of integration would be fixed by knowing the location of the phase point at some time t (i.e., by knowing the coordinates and components of momenta of all molecules at t).
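As a numerical aside (not part of Hill's text), this system of 2n first-order equations can be integrated directly for a toy case. The sketch below assumes a one-dimensional harmonic oscillator with unit mass and force constant, chosen purely for concreteness; the leapfrog integrator is used because it is symplectic and so respects the phase-space structure discussed below.

```python
import math

# Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq, integrated by leapfrog
# for H = p^2/2m + k q^2/2.  Unit mass and force constant are illustrative
# assumptions, not taken from the text.
def hamilton_step(q, p, dt, m=1.0, k=1.0):
    p_half = p - 0.5 * dt * k * q      # dp/dt = -dH/dq = -k q
    q_new = q + dt * p_half / m        # dq/dt =  dH/dp = p/m
    p_new = p_half - 0.5 * dt * k * q_new
    return q_new, p_new

def trajectory(q0, p0, dt=1e-3, n_steps=10000):
    qs, ps = [q0], [p0]
    for _ in range(n_steps):
        q0, p0 = hamilton_step(q0, p0, dt)
        qs.append(q0)
        ps.append(p0)
    return qs, ps

qs, ps = trajectory(1.0, 0.0)
H_initial = 0.5 * ps[0] ** 2 + 0.5 * qs[0] ** 2
H_final = 0.5 * ps[-1] ** 2 + 0.5 * qs[-1] ** 2
```

For this conservative system the phase point stays on the surface H = const, so H_final agrees with H_initial to within the integrator's small error.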

    Now the value of a thermodynamic property² G is determined by the dynamical state of the system; thus we may write G = G(p,q). Let us consider first the simplest case, that of a perfectly isolated system. At time t0, let the momenta and coordinates have the values p⁰, q⁰. The experimental measurement of G, beginning at t = t0, to give Gobs, actually involves observation of G(p,q) over a period of time τ (the magnitude of τ is discussed in Sec. 4) with

        Gobs = (1/τ) ∫_{t0}^{t0+τ} G(t) dt        (2.3)

    That is, Gobs is a time average. If Eqs. (2.1) are solved using the initial conditions p(t0) = p⁰ and q(t0) = q⁰, then G(t) in Eq. (2.3) is given by G(t) = G[p(t),q(t)], and Gobs may be computed. Thus, the time average indicated in Eq. (2.3) is that average over dynamical states which should in principle be put into correspondence with the value of the thermodynamic function G. Unfortunately, such a purely mechanical computation is, of course, quite impossible (even for this perfectly isolated system) owing to the complexity of the problem (n ~ 10²³ degrees of freedom), and it is for this reason that we must turn (Sec. 4) to another, less direct, method, due to Gibbs, of averaging over dynamical states. The relation between the two methods is considered briefly in Sec. 7.
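The time average of Eq. (2.3) can be made concrete with the same toy oscillator (again an illustrative assumption, not Hill's example): with the exact solution q(t) = cos t, the time average of q² over whole periods is 1/2.

```python
import math

# G_obs as a time average, Eq. (2.3): G_obs = (1/tau) * integral of G(t) dt.
# Here G = q^2 along the exactly known oscillator trajectory q(t) = cos t.
def time_average(G, tau, n=200_000):
    dt = tau / n
    return sum(G(i * dt) for i in range(n)) * dt / tau

q2_avg = time_average(lambda t: math.cos(t) ** 2, tau=20 * math.pi)
```

Over an integral number of periods the average of cos² t is exactly 1/2; for a macroscopic system the analogous smoothing occurs because τ is long compared with microscopic fluctuation times.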

    The above remarks refer to a perfectly isolated system. For a system not in perfect isolation, for example, a system in thermal equilibrium with a heat bath, Gobs is still a time average as in Eq. (2.3), but p(t) and q(t) are no longer associated with a single trajectory of a conservative system in phase space. Thus, the mechanical calculation of Gobs is even more difficult here; in fact, it can only be carried out, even in principle, by treating the system and surroundings together, as a new, but larger, perfectly isolated system.

    3. Ensembles

    An ensemble is a mental collection of a very large number 𝒩 of perfectly isolated and therefore independent systems in a variety of dynamical states, but all with the same values of N1, … , Nr and x1, … , xs; ultimately we let 𝒩 → ∞.

    An operational method of constructing such an ensemble would be to assemble a large number of real systems in this same macroscopic state. At some time t′ the systems are instantaneously removed from their surroundings (if any) and each put in perfect isolation. The result is an ensemble which at t′ may certainly be said to be representative of the macroscopic state of the real system. We shall actually be interested primarily in ensembles which we attempt to construct theoretically in such a way that the distribution of systems of the ensemble over dynamical states is representative of the macroscopic state of some particular real system, in the same sense that the operational ensemble above is representative of such a state at time t′.

    In the remainder of this section we discuss general properties of ensembles which hold whether or not the ensemble is supposed to be a representative ensemble in the above sense, and if it is a representative ensemble, whether or not the macroscopic state of interest is an equilibrium state.

    Since the dynamical state of each system of the ensemble is represented by one of 𝒩 phase points, the ensemble itself will appear as a cloud of 𝒩 phase points in the phase space.³ As time passes, each phase point of the cloud pursues its own independent trajectory in phase space.

    The number of systems 𝒩 in the ensemble is always taken large enough so that the concept of a continuous density of phase points can be introduced. In fact, it is usually convenient to normalize the density in such a way that it becomes a probability density. To do this, we define a function f(p1, … , pn, q1, … , qn; t) so that f(p1, … , pn, q1, … , qn; t) dp1 … dpn dq1 … dqn or, for brevity, f(p,q;t) dp dq, is the fraction of the 𝒩 phase points which lie, at time t, in the element of volume dp1 … dpn dq1 … dqn about the point p,q. The number of phase points in this element is then 𝒩f(p,q;t) dp dq, and the density of phase points at p,q is 𝒩f(p,q;t). The quantity f(p,q;t) itself is called the probability density (or distribution function), for if a system is chosen at random from the ensemble at time t, the probability that the phase point representative of its dynamical state is in dp dq about the point p,q is f dp dq. The probability density integrates to unity,

        ∫ f(p,q;t) dp dq = 1        (3.1)

    where the single integral sign means integration over all the p’s and q’s.

    The equations of motion, Eqs. (2.1), determine the trajectory of each phase point, given its location in phase space at some initial time. These equations therefore also determine completely the distribution function f(p,q;t) at any time if the dependence of f on p and q is known at the initial time. It must always be understood, therefore, that the time dependence of f is in accord with the laws of mechanics, and is not arbitrary. This time dependence is discussed below in connection with Liouville’s theorem.

    The ensemble average of any function φ(p,q) of the dynamical state of the system is defined as

        φ̄ = ∫ φ(p,q) f(p,q;t) dp dq        (3.2)

    In view of the fact that the direct computation of the value Gobs of a thermodynamic property G from Eq. (2.3) cannot be carried out, Gibbs’ alternative suggestion, mentioned at the end of Sec. 2, was that an ensemble average be used in place of a time average. This of course accounts for our interest in ensembles. The particular way in which ensemble averages are employed for this purpose will be discussed in Secs. 4 and 5.

    Liouville’s Theorem. We derive here a result which is necessary to the further development of our general argument. Suppose that the distribution function f(p,q;t), with its time dependence, is known. Now consider the change df in the value of f at p, q, t as a result of arbitrary infinitesimal changes in p, q, and t:

        df = Σi [(∂f/∂pi) dpi + (∂f/∂qi) dqi] + (∂f/∂t) dt        (3.3)

    That is, f + df is the value of f at a point p + dp, q + dq near p, q and at a time t + dt. Now, instead of using arbitrary quantities dp, dq, let us choose the particular neighboring point p + dp, q + dq which the trajectory of the phase point originally at p, q (time t) passes through at t + dt. In this case dp and dq cannot be independent variations since along the trajectory p = p(t) and q = q(t). Then, using Eq. (3.3), the change of f with time in the neighborhood of a phase point traveling along its trajectory is

        df/dt = Σi [(∂f/∂pi) ṗi + (∂f/∂qi) q̇i] + ∂f/∂t        (3.4)

    In contrast to df/dt, ∂f/∂t gives the change of f with time at a fixed location p, q in phase space. Now Liouville’s theorem states that df/dt = 0.

    To prove the theorem, consider an arbitrary but fixed volume V in phase space. Then the number 𝒩V of phase points in V at t is

        𝒩V(t) = 𝒩 ∫V f(p,q;t) dp dq        (3.5)

    so that

        d𝒩V/dt = 𝒩 ∫V (∂f/∂t) dp dq        (3.6)

    The net flow of phase points out of V through its bounding surface S in unit time gives another expression for d𝒩V/dt, namely,

        d𝒩V/dt = −𝒩 ∫S fu · n dS = −𝒩 ∫V ∇ · (fu) dp dq        (3.7)

    where n is the usual outward unit normal vector, and the vectors u and fu have components in phase space

        u = (ṗ1, … , ṗn, q̇1, … , q̇n)        and        fu = (fṗ1, … , fṗn, fq̇1, … , fq̇n)

    respectively. Because V is arbitrary, on comparing Eqs. (3.6) and (3.7) we obtain

        ∂f/∂t = −∇ · (fu)        (3.8)

    or, writing out the divergence,

        ∂f/∂t = −Σi [ṗi (∂f/∂pi) + q̇i (∂f/∂qi)] − f Σi [(∂ṗi/∂pi) + (∂q̇i/∂qi)]        (3.9)

    Now, from Eq. (2.1),

        ∂q̇i/∂qi = ∂²H/∂qi∂pi = −∂ṗi/∂pi

    so the second sum in Eq. (3.9) is zero. We therefore have the required result

        ∂f/∂t + Σi [ṗi (∂f/∂pi) + q̇i (∂f/∂qi)] = 0        (3.10)

    which is equivalent to [see Eq. (3.4)]

        df/dt = 0        (3.11)

    or to [see Eq. (2.1)]

        ∂f/∂t = Σi [(∂H/∂qi)(∂f/∂pi) − (∂H/∂pi)(∂f/∂qi)]        (3.12)

    The sum on the right-hand side has the form of a so-called Poisson bracket (of f and H, in this case).

    According to Eq. (3.11), in the neighborhood of a phase point following its trajectory, the probability density f remains constant, or, in other words, the probability fluid occupying phase space is incompressible. An equivalent statement is that if p, q are the coordinates of a phase point at time t0 + s which at t0 were p⁰, q⁰, then Liouville’s theorem states that

        f(p,q; t0 + s) = f(p⁰,q⁰; t0)        (3.13)

    With the aid of the equations of motion of the phase point, p and q in Eq. (3.13) should be considered functions of the initial conditions p⁰, q⁰ and of the time. That is,

        p = p(p⁰,q⁰; s),        q = q(p⁰,q⁰; s)        (3.14)

    Now, let us select a small element of volume at p⁰, q⁰ and time t0. At time t0 + s the phase points originally (t0) on the surface of this element of volume will have formed a new surface enclosing an element of volume of different shape at p, q. The element of volume at p, q must contain the same number of phase points as the original element of volume at p⁰, q⁰. This follows because a phase point outside or inside the element of volume can never cross the surface as the element moves through phase space, for otherwise there would be two different trajectories through the same point in phase space. But this is impossible in view of the uniqueness of the solution of the equations of motion of a phase point. As both the density and number of phase points in the element of volume are the same at p⁰, q⁰ and p, q, we conclude that although the shape of an element of volume is altered as it moves through phase space, its volume remains constant. This fact is expressed mathematically by saying that the Jacobian of the transformation in Eq. (3.14), from p⁰, q⁰ to p, q, is unity:⁴

        ∂(p1, … , pn, q1, … , qn)/∂(p1⁰, … , pn⁰, q1⁰, … , qn⁰) = 1        (3.15)

    Equation (3.12) shows explicitly how the value of the distribution function at any fixed point p, q changes with time. In particular, the masses and forces associated with the molecules of the system determine H(p,q) and hence the ∂H/∂pi and ∂H/∂qi; the initially chosen f at t0 determines the ∂f/∂qi and ∂f/∂pi at t0. Thus Eq. (3.12) can in principle be integrated starting at t0 to obtain the value of f at any p, q, and t, given H(p,q) and the form of f at t0.
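A quick numerical check of the unit-Jacobian property (my illustration, not Hill's): for the unit harmonic oscillator the time-s flow map of phase space is an explicit rotation, and the determinant of its Jacobian, estimated by central finite differences, is 1.

```python
import math

# For H = (p^2 + q^2)/2 the flow (q0, p0) -> (q(s), p(s)) is a rotation of
# phase space (an illustrative special case, not the general proof).
def flow(q0, p0, s):
    return (q0 * math.cos(s) + p0 * math.sin(s),
            -q0 * math.sin(s) + p0 * math.cos(s))

def jacobian_det(q0, p0, s, h=1e-6):
    # central differences for the 2x2 Jacobian matrix of the flow map
    dq_dq0 = (flow(q0 + h, p0, s)[0] - flow(q0 - h, p0, s)[0]) / (2 * h)
    dq_dp0 = (flow(q0, p0 + h, s)[0] - flow(q0, p0 - h, s)[0]) / (2 * h)
    dp_dq0 = (flow(q0 + h, p0, s)[1] - flow(q0 - h, p0, s)[1]) / (2 * h)
    dp_dp0 = (flow(q0, p0 + h, s)[1] - flow(q0, p0 - h, s)[1]) / (2 * h)
    return dq_dq0 * dp_dp0 - dq_dp0 * dp_dq0

det = jacobian_det(0.3, -1.2, s=2.7)
```

An element of phase volume is thus carried into an element of equal volume, which is Liouville's theorem in its Jacobian form.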

    4. Postulate on the Use of Ensemble Averages

    In this section and the following one we shall indicate the correspondence used in statistical mechanics between time averages [for example, Eq. (2.3)] and ensemble averages. No completely general and rigorous proof of the validity of this correspondence is available; therefore we shall adopt a postulatory approach. The correctness of the postulates chosen may then be tested by comparing statistical mechanical predictions with experiment. As a matter of fact, many such comparisons have been made with complete success, so that one is justified a posteriori in having virtually complete confidence in the postulates chosen.

    As in any logical system based on postulates, there are alternative but equivalent choices as to which ideas are to be considered fundamental. In the present section we introduce Postulate A, which is concerned with correlating time and ensemble averages. This postulate will then be employed to eliminate many possible specific choices of the form of the distribution function to be used in computing ensemble averages. In particular, for a system in equilibrium, we shall find that f(p,q;t) must be independent of t. Postulate B will then complete the specification of the appropriate form of f by indicating its dependence on p and q.

    Consider an experimental system in a given thermodynamic state. We measure the average value Gobs of a thermodynamic property G(p,q) of this system, from time t0 to t0 + τ, as indicated by Eq. (2.3), with the understanding, however, that in general the state is not one of perfect isolation and hence p(t) and q(t) in Eq. (2.3) are not associated with a single trajectory of a conservative system in phase space. Next, we construct at t′ (t′ ≤ t0) an ensemble of systems with the same N1, … , Nr and x1, … , xs as the experimental system, and with a theoretically chosen distribution function which we attempt to make representative at t′ of the state of the experimental system.

    Now for each (perfectly isolated) system of the ensemble we perform a measurement of the time average of G(p,q) from t0 to t0 + τ, giving

        Ḡτ(p⁰,q⁰) = (1/τ) ∫_0^τ G(p,q) ds        (4.1)

    where p, q is the dynamical state at time t0 + s of the system whose state was p⁰, q⁰ at time t0. The value of Ḡτ depends of course on the initial state p⁰, q⁰, but we select τ of sufficient magnitude (microscopically long) so that Ḡτ becomes independent of τ through the smoothing out of microscopic fluctuations. For example, instruments designed for the measurement of the pressure automatically record a time average over an interval of the order of magnitude of, say, a microsecond to a second. If the area of the surface on which the normal force is recorded is sufficiently small and the time resolution of the instrument sufficiently fine, fluctuations will of course be observed which are independent of the characteristics of the instrument. With sufficiently fine resolution, the force acting on unit area, therefore, ceases to be a macroscopic property. Thus, when we speak of pressure as a macroscopic property, we mean the normal force per unit area averaged over a macroscopic interval of time of such a magnitude that the recorded average is not sensibly dependent on the magnitude of the interval.
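The smoothing-out of fluctuations by time averaging can be illustrated numerically (a synthetic signal, not a real pressure record): averaging a noisy instantaneous "force per unit area" over longer windows leaves smaller and smaller fluctuations about the macroscopic value.

```python
import random

random.seed(1)

# Synthetic instantaneous signal: macroscopic mean 1.0 with superposed noise
# (both values are illustrative assumptions).
signal = [1.0 + random.gauss(0.0, 0.5) for _ in range(100_000)]

def window_std(xs, w):
    """Standard deviation of non-overlapping window averages of width w."""
    means = [sum(xs[i:i + w]) / w for i in range(0, len(xs) - w + 1, w)]
    mu = sum(means) / len(means)
    return (sum((m - mu) ** 2 for m in means) / len(means)) ** 0.5

short_window = window_std(signal, 10)     # fine time resolution: large scatter
long_window = window_std(signal, 1000)    # macroscopic average: small scatter
```

The scatter falls roughly as 1/√w, so a sufficiently long averaging interval yields a recorded value that is not sensibly dependent on the interval, just as the text requires of a macroscopic pressure.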

    On the theoretical side, a rather general analysis of the proper magnitude of τ can be given⁵ in both equilibrium and nonequilibrium cases. However, we merely remark here, as an illustration, that in the case of gases of such low density that only binary collisions are important, it is rather obvious that τ must be long relative to the duration of a representative collision.

    We can now state Postulate A: Gobs is identified with ⟨Ḡτ⟩, the ensemble average of Ḡτ. Explicitly,

        Gobs = ⟨Ḡτ⟩ = ∫ Ḡτ(p⁰,q⁰) f(p⁰,q⁰;t0) dp⁰ dq⁰        (4.2)

    where f(p⁰,q⁰;t0) is the distribution function at time t0. We note immediately that it follows from this postulate that, since Gobs must be independent of time (i.e., we are concerned with an equilibrium state), ⟨Ḡτ⟩ must also be independent of time. But Eq. (4.2) shows that in general ⟨Ḡτ⟩ depends on the time t0 at which the time averaging is commenced and that the necessary and sufficient condition for ⟨Ḡτ⟩ to be independent of time is that the distribution function f be independent of time. That is, ∂f/∂t = 0 (for equilibrium). Hence to be representative of any system in equilibrium, the distribution function originally chosen at time t′ must be of such form that ∂f/∂t = 0 [for example, it must satisfy Eq. (3.12) with ∂f/∂t = 0]. Also, if the dependence of f on p and q at t′ is representative of the state of the experimental system, this dependence will also be representative at all later times, since it does not change with time.

    The validity of the identification of Gobs with ⟨Ḡτ⟩ depends on the fluctuation in the value of Ḡτ found from different systems in the ensemble. This is obvious in the (limiting) case of an experimental system which is perfectly isolated, for the experimental system itself may then be regarded as typical of the (perfectly isolated) members of the ensemble. Then if these fluctuations are very small, the probability of any particular Ḡτ, for example Gobs, deviating appreciably from the mean, ⟨Ḡτ⟩, is also very small. If the experimental system is not perfectly isolated, it passes, between t0 and t0 + τ, through dynamical states occupied by different members (𝒩 → ∞) of the ensemble at the same time. Thus, Gobs is a composite of parts of various Ḡτ’s. But again if the fluctuations of Ḡτ from ⟨Ḡτ⟩ are very small, the probability of a composite of Ḡτ’s deviating appreciably from ⟨Ḡτ⟩ is also very small, unless the ensemble is very far from representative of the experimental system.

    The validity of the identification of Gobs with ⟨Ḡτ⟩ is thus reduced to the question of whether the fluctuation of Ḡτ from ⟨Ḡτ⟩ would indeed be small. We would expect this to be the case, especially because of the smoothing effect of time averaging. This conclusion is also dependent of course on the ensemble being reasonably representative of the experimental system.

    The above argument does not pretend to be rigorous. In fact it is clear from Eq. (4.2) that the value of ⟨Ḡτ⟩ to be identified with Gobs depends on the particular form of the distribution function. But the preceding paragraphs do lead us to expect that ⟨Ḡτ⟩ would actually be quite insensitive to the form of f within certain limits. We shall see later (Chap. 4) that this is indeed the case: average values of thermodynamic properties are somewhat insensitive to f, but there is not this arbitrariness if we are interested in fluctuations in these properties.

    Because of the at least formal dependence of ⟨Ḡτ⟩ on f just mentioned, it is perhaps preferable to consider Postulates A and B (on the form of f) as two parts of a single fundamental postulate.

    We return now to the requirement ∂f/∂t = 0, which, when combined with Liouville’s theorem, Eq. (3.13), gives us

        f(p,q) = f(p⁰,q⁰)        (4.3)

    That is, if we indicate the trajectory of a phase point by a curve in phase space, at a given time f has the same value at all points on the curve [not required by Eq. (3.13) alone] and this value does not change with time. In other words, f has the same constant value in the neighborhood of all phase points moving along the same trajectory. Thus, f depends only on the trajectory and is therefore a constant of the motion.

    Equation (4.3) makes possible a simplification in Eq. (4.2). From Eqs. (4.1) and (4.2) we have

        Gobs = ∫ [(1/τ) ∫_0^τ G(p,q) ds] f(p⁰,q⁰;t0) dp⁰ dq⁰

    But we can write f(p⁰,q⁰;t0) = f(p,q). Also, let us change variables of integration from p⁰, q⁰ to p,q. The Jacobian of this transformation is unity [Eq. (3.15)], so that dp⁰ dq⁰ = dp dq. Then

        Gobs = ∫ [(1/τ) ∫_0^τ G(p,q) ds] f(p,q) dp dq

    Now interchanging the order of integration,

        Gobs = (1/τ) ∫_0^τ [∫ G(p,q) f(p,q) dp dq] ds = (1/τ) ∫_0^τ Ḡ ds = Ḡ

    since the ensemble average of G, Ḡ, is independent of time in this case (f independent of t). Hence we may replace ⟨Ḡτ⟩ by Ḡ in Postulate A, if desired. Indeed, this identification of Gobs with Ḡ is the standard procedure. Use has been made here of ⟨Ḡτ⟩ instead of Ḡ because the connection between Gobs and ⟨Ḡτ⟩ is closer, both involving time averages. Also, it is possible to use a similar approach in nonequilibrium statistical mechanics.
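The conclusion of this section, that for a stationary f the time average of G may be replaced by the ensemble average of G, can be illustrated with the unit oscillator (my toy example, not Hill's): a trajectory of energy E covers its energy circle uniformly in the phase angle, so the time average of q² equals the average of q² over an ensemble of phase points spread uniformly on that circle.

```python
import math
import random

random.seed(0)

E = 0.5  # illustrative energy; on the energy circle q = sqrt(2E) cos(theta)

# Time average of q^2 along the trajectory q(t) = sqrt(2E) cos t.
n = 100_000
tau = 20 * math.pi
dt = tau / n
time_avg = sum(2 * E * math.cos(i * dt) ** 2 for i in range(n)) * dt / tau

# Ensemble average of q^2 over phase points uniform in the angle theta.
samples = 200_000
ens_avg = sum(2 * E * math.cos(random.uniform(0, 2 * math.pi)) ** 2
              for _ in range(samples)) / samples
```

Both averages come out to E (here 0.5), so the two routes to Gobs agree for this simple system.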

    5. Postulate on the Form of the Distribution Function

    In order to complete the postulatory basis of classical statistical mechanics, particular choices of the distribution function f must be indicated. We have already found in the preceding section that the introduction of ensemble averages in Postulate A together with the stationary value of Gobs in an equilibrium state require that f be independent of time [using, for example, Eq. (3.12) with ∂f/∂t = 0 as a criterion]. We now supplement this restriction in possible forms of f by an argument that very strongly suggests further that f(p,q) should be chosen as some function of H(p,q), that is, f = f[H(p,q)].

    Since f is a constant of the motion, it may be considered a function of the dynamical invariants, the usual dynamical constants (or integrals) of the motion. Further, for f(p,q) to be single-valued and continuous, it can depend only on the uniform integrals of the motion, those which are single-valued, continuous functions of p, q. Such integrals are usually very few, merely energy and perhaps certain momenta or angular momenta. For example, if we have a gas in a circular cylindrical container with smooth walls, then the component of angular momentum about the axis of the cylinder is an integral of the motion, for the external forces (the forces associated with the wall) do not destroy this integral (i.e., this component of momentum is conserved in a collision of a molecule with the wall). On the other hand, if the wall is rough or the vessel lacks such symmetry, then each collision with the wall changes the angular momentum of the system about a given axis, so that it is no longer an integral of the motion. Thus, in actual systems confined in containing vessels, the energy is usually the only uniform integral of the motion. We shall therefore postulate, following Gibbs, that f depends only on H in an equilibrium ensemble.

    There are many different ways of expressing the state of a thermodynamic system, depending on the particular choice of r + s + 1 thermodynamic variables (see Sec. 1). With the type of ensemble (called a petit ensemble by Gibbs) we have discussed so far, it is required that the r + s variables N1, … , Nr, x1, … , xs be chosen (see Sec. 3). There remains the selection of the last independent variable, and for each choice a particular form of f is appropriate. We shall consider two cases here.⁸

    We now state Postulate B and follow this with a discussion of the postulate.

    Postulate B. (a) Microcanonical ensemble. For a closed, isolated thermodynamic system, i.e., a system with assigned values for the independent variables E, N1, … , Nr, x1, … , xs,

        f(p,q) = constant        for E ≤ H(p,q) ≤ E + δE
        f(p,q) = 0        otherwise        (5.1)

    where δE is a very small range in E.

    (b) Canonical ensemble. For a closed, isothermal thermodynamic system, i.e., a system with assigned values for the independent variables T, N1, … , Nr, x1, … , xs,

        f(p,q) = constant × e^{−βH(p,q)}        (5.2)

    where β is a constant.

    Equation (5.1), representing the microcanonical ensemble, recognizes the fact that it is impossible for an actual system to be perfectly isolated and hence that the energy can be specified only within some very small range⁹ δE. However, in an idealized conservative system in classical mechanics, there is no difficulty in principle in letting δE → 0. The situation with regard to this limit is rather different in quantum mechanics (see Chap. 2).

    It will be noted that Eq. (5.1) is really the only possible choice for an ensemble each system of which has the energy E, if we accept the argument above leading to the conclusion that in general f = f(H). However, although this argument is rather convincing, it is not rigorous; therefore Eq. (5.1) has the status of a postulate the validity of which remains to be demonstrated.

    Although Eq. (5.1), as we have seen, rather obviously represents a closed, isolated system, it is not clear offhand that Eq. (5.2) is appropriate for a closed, isothermal system. However, if we accept Eq. (5.1) as correct, it is then possible to deduce Eq. (5.2). That is, if Eq. (5.1) describes the proper representative ensemble for a closed, isolated system, then Eq. (5.2) describes the representative ensemble for a closed, isothermal system. This deduction is essentially the same in both classical and quantum theory, but is more natural in quantum language. For this reason, and also because classical mechanics can in any case be regarded as a limiting form of quantum mechanics, we defer the derivation of the canonical ensemble from the microcanonical ensemble until Chap. 2. The significance of the constant β will be deduced in Chap. 3. For the present, Eq. (5.2) is simply a postulate, the consequences of which are to be compared with experiment.

    The constant in Eq. (5.2) may be evaluated using Eq. (3.1), with the result

        f(p,q) = e^{−βH(p,q)}/Q,        Q = ∫ e^{−βH(p,q)} dp dq        (5.3)

    where Q(β, x1, … , xs, N1, … , Nr) is usually called the phase integral or partition function.
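The phase integral can be evaluated numerically for a toy Hamiltonian (my example, with unit mass and frequency assumed): for H = (p² + q²)/2 the integral of e^{−βH} over phase space factorizes into two Gaussian integrals and equals 2π/β.

```python
import math

# Phase integral Q = integral of exp(-beta * H) over phase space, for the
# illustrative Hamiltonian H = (p^2 + q^2)/2 (unit mass and frequency assumed).
# Q factorizes: Q = (int dq e^{-beta q^2 / 2}) * (int dp e^{-beta p^2 / 2}).
def gaussian_integral(beta, L=12.0, n=2001):
    # simple Riemann sum on [-L, L]; the integrand is negligible at the cutoff
    h = 2 * L / (n - 1)
    return sum(math.exp(-beta * (-L + i * h) ** 2 / 2) for i in range(n)) * h

beta = 1.3
Q_numeric = gaussian_integral(beta) ** 2
Q_exact = 2 * math.pi / beta
```

Here β is just a positive constant; its thermodynamic significance (it will turn out to involve the temperature) is deferred, as the text says, to Chap. 3.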

    6. Grand Ensembles

    Let us adopt the notation x = x1, … , xs and N = N1, … , Nr (analogous to our p, q notation). In a petit ensemble N and x are the same in all systems. We may introduce generalized ensembles by including systems with different values of x, N or both.¹⁰ It will clearly be necessary to use a generalized ensemble to represent any thermodynamic state specified by the assignment of r + s + 1 thermodynamic variables which do not include both x and N.

    A generalized ensemble may be regarded as a collection of petit ensembles (i.e., all systems in the generalized ensemble with the same x and N may be grouped together to form a petit ensemble). It is clear that a single phase space cannot be used for a generalized ensemble but rather that a separate phase space is needed for each of its petit ensembles. Within each such phase space all the properties of phase points discussed in Sec. 3 will still obtain.

    We define a grand ensemble as a generalized ensemble in which all systems have the same x, but not N. As will be seen, a grand ensemble is useful as the representation of an open thermodynamic system. A related type of ensemble, not usually discussed, is one in which some but not all of the variables N1, … , Nr have different values in different systems of the ensemble with x the same in all systems. This kind of ensemble would represent a thermodynamic system with semipermeable walls.

    We discuss further only grand ensembles; the extension to other generalized ensembles is considered in Chaps. 2 and 3. The treatment will be condensed as many arguments carry over with minor change from the earlier discussion.

    The experimental system consists of a certain definite region of space into and out of which molecules of the different components are free to pass. For example, the experimental system might be but is not necessarily a region in space which is simply part of a much larger similar system. The theoretically constructed grand ensemble, which we attempt to make representative of the state of the experimental system, is then a very large collection of closed and perfectly isolated systems in a variety of microscopic states and with different N’s, but all with the same x. Let f(G)(p1, … , pn, q1, … , qn; N1, … , Nr; t) dp1 … dpn dq1 … dqn or, for brevity, f(G)(p,q;N;t) dp dq, be the fraction of the systems of the ensemble at time t which have the composition N and are also in the region dp dq of the appropriate (to N) phase space. We may note that the value of n depends on N. Then

        ΣN ∫ f(G)(p,q;N;t) dp dq = 1        (6.1)

    where the summation is over all values of each of the N1, … , Nr from zero to infinity. For our purposes, the fraction of systems in the ensemble with composition N, given by

        ∫ f(G)(p,q;N;t) dp dq        (6.2)

    is independent of time. That is: the ensemble is constructed with a certain distribution in N [see (6.2)]; as time passes, no systems are added to or removed from the ensemble; each system of the ensemble is closed (and perfectly isolated); hence it follows that (6.2) is independent of time. There is, however, no reason why a more general point of view cannot be adopted (for example, in nonequilibrium statistical mechanics).

    The ensemble average of a macroscopic property φ(p,q) is defined as

        φ̄ = ΣN ∫ φ(p,q) f(G)(p,q;N;t) dp dq        (6.3)

    where it must be understood that φ(p,q) implies a dependence on N because of the meaning of p and q as abbreviations.

    As each system of the ensemble has a fixed N, we can obtain Ḡτ for each system as in Eq. (4.1). We then adopt Postulate A: Gobs is identified with ⟨Ḡτ⟩, the ensemble average of Ḡτ. That is,

        Gobs = ⟨Ḡτ⟩ = ΣN ∫ Ḡτ(p⁰,q⁰;N) f(G)(p⁰,q⁰;N;t0) dp⁰ dq⁰        (6.4)

    The remarks in Sec. 4 concerning the identification of Gobs and ⟨Ḡτ⟩ for an experimental system not perfectly isolated apply here if extended slightly. Gobs from Eq. (2.3) will involve not only contributions from different trajectories with the same N and x (as in Sec. 4) but also from trajectories with different N, on account of the fluctuations in N of the experimental open system. That is, Gobs may properly be regarded as a composite of Ḡτ’s from a grand ensemble.

    As before, we see that f(G) must be independent of time in an equilibrium ensemble, and therefore [compare Eq. (4.3)]

        f(G)(p,q;N) = f(G)(p⁰,q⁰;N)        (6.5)

    Thus, for a given N, f(G) is a constant of the motion. From Eqs. (6.4) and (6.5) we then find that ⟨Ḡτ⟩ = Ḡ. Finally, we may conclude with considerable assurance, using the same argument as before, that to represent a system in equilibrium f(G) should be a function of H(p,q) and N only.

    As will become clear in Chap. 3, the use of a grand ensemble implies the choice of the r + s independent thermodynamic variables μ1, … , μr, x1, … , xs, where the μ’s are chemical potentials. There remains the selection of a last variable. We include the only important case¹¹ in the following statement of Postulate B. Grand canonical ensemble. For an open, isothermal thermodynamic system, i.e., a system with assigned values for the independent variables T, μ1, … , μr, x1, … , xs,

        f(G)(p,q;N) = constant × e^{γ1N1 + … + γrNr} e^{−βH(p,q;N)}        (6.6)

    where β and the γ’s are constants.
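A concrete consequence (my illustration for an ideal one-component gas, not something derived in this chapter): with weights proportional to e^{γN} times Q1^N/N!, where Q1 is the one-particle phase integral, the number of molecules N is Poisson-distributed, so its mean equals its variance. This is the simplest example of the composition fluctuations a grand ensemble is built to describe. The values of γ and Q1 below are arbitrary illustrative assumptions.

```python
import math

# Illustrative assumption: an ideal one-component gas, for which the N-particle
# phase integral is Q1**N / N!.  With grand-canonical weights
# exp(gamma * N) * Q1**N / N!, the distribution of N is Poisson with mean
# lam = exp(gamma) * Q1.
gamma, Q1 = 0.4, 3.0
lam = math.exp(gamma) * Q1

weights = [math.exp(gamma * n) * Q1 ** n / math.factorial(n) for n in range(60)]
Z = sum(weights)                      # grand partition function (truncated)
probs = [w / Z for w in weights]

mean_N = sum(n * p for n, p in enumerate(probs))
var_N = sum(n * n * p for n, p in enumerate(probs)) - mean_N ** 2
```

Both mean_N and var_N equal λ = e^γ Q1 to high accuracy, the signature of a Poisson distribution of molecule number in an open region of an ideal gas.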

    The appropriateness of Eq. (6.6), like that of Eq. (5.2), is not obvious. For the present, it is merely stated as a postulate, the consequences of which are to be compared with experiment.
