Time's Arrow: The Origins of Thermodynamic Behavior
by Michael C. Mackey

Ebook · 348 pages · 2 hours

About this ebook

Written by a well-known professor of physiology at McGill University, this text presents an informative exploration of the basis of the Second Law of Thermodynamics, detailing the fundamental dynamic properties behind the construction of statistical mechanics.
Topics include maximal entropy principles; invertible and noninvertible systems; ergodicity and unique equilibria; asymptotic periodicity and entropy evolution; and open discrete and continuous time systems. The author demonstrates that the black body radiation law can be deduced from maximal entropy principles; discusses sufficient conditions for the existence of at least one state of thermodynamic equilibrium; describes the behavior of entropy in asymptotically periodic systems and the necessary and sufficient condition for the evolution of entropy to a global maximum; and presents the three main types of ergodic theorems and their proofs. He also explores the roles of incomplete knowledge of dynamical variables, measurement imprecision, and noise in the increase of entropy.
Geared toward physicists and applied mathematicians with an interest in the foundations of statistical mechanics, this text is suitable for advanced undergraduate and graduate courses.

Language: English
Release date: Nov 30, 2011
ISBN: 9780486152257

    CHAPTER 1

    STARTERS

    In this chapter we introduce some elementary concepts from measure theory as a start in our examination of the dynamical origin of increasing entropy. Section A draws a connection between thermodynamic systems and measure spaces. Section B briefly considers dynamics and phase spaces, while Section C introduces the notion of a density and postulates that the state of a thermodynamic system is characterized by a density. In Section D we introduce the Boltzmann-Gibbs entropy, and prove that it is the unique (up to a multiplicative constant) entropy definition that satisfies the physical requirement (additivity) that entropy be an extensive quantity.

    A. THERMODYNAMIC SYSTEMS.

    In defining a thermodynamic system we need some terms and concepts that come from measure theory.

    We first start with a set X. Measure theorists often like to keep X pretty abstract, but for us X is going to be the phase space (more about this in the next section) on which all of our dynamics operates. Sometimes X will be a closed finite interval like [0, 1], sometimes it may be infinite in extent like R+, R⁶, or even Rd, and sometimes X is a function space. In any event, whatever X is we are going to assume that it does not have any pathological properties.

    Given a phase space X we turn to a definition of what measure theorists call a σ algebra. Let 𝒜 be a collection of subsets (subspaces) of X. Then 𝒜 is a σ algebra if:

    (1) when A ∈ 𝒜, then the complement X \ A ∈ 𝒜;

    (2) given a sequence (infinite or not) {Ak} of sets with every Ak ∈ 𝒜, the union ∪k Ak ∈ 𝒜; and

    (3) X ∈ 𝒜.

    [Note that (1) and (3) together imply that the empty set ∅ ∈ 𝒜.]
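
    Although X will usually be an interval or a region of Rd, the defining conditions are easy to check by brute force when X is finite. A minimal sketch of such a check (our illustration, not the text's; on a finite set, countable unions reduce to finite ones):

        from itertools import chain, combinations

        def power_set(X):
            """All subsets of X, as frozensets."""
            return {frozenset(s) for s in chain.from_iterable(
                combinations(X, r) for r in range(len(X) + 1))}

        def is_sigma_algebra(A, X):
            """Check conditions (1)-(3) for a family A of subsets of X."""
            X = frozenset(X)
            if X not in A:                       # condition (3)
                return False
            for a in A:
                if X - a not in A:               # condition (1)
                    return False
                for b in A:
                    if a | b not in A:           # condition (2), finite case
                        return False
            return True

        X = {1, 2, 3}
        assert is_sigma_algebra(power_set(X), X)                       # the largest example
        assert not is_sigma_algebra({frozenset(), frozenset({1})}, X)  # X itself is missing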

    The final notion we need for the definition of a thermodynamic system is that of a measure. Any real valued function μ defined on a σ algebra 𝒜 is a measure if:

    (1) μ(∅) = 0;

    (2) μ(A) ≥ 0 for every A ∈ 𝒜; and

    (3) μ(∪k Ak) = Σk μ(Ak) whenever {Ak} is a sequence of pairwise disjoint sets in 𝒜.
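
    In the same finite setting, the counting measure μ(A) = #A gives the simplest concrete example, and the conditions can again be checked directly (our sketch; countable additivity reduces to pairwise additivity on a finite σ algebra):

        from itertools import chain, combinations

        X = {1, 2, 3}
        A = {frozenset(s) for s in chain.from_iterable(
            combinations(X, r) for r in range(len(X) + 1))}   # the power set of X

        def is_measure(mu, A):
            """Check mu(empty) = 0, mu >= 0, and additivity on disjoint pairs."""
            if mu(frozenset()) != 0:
                return False
            for a in A:
                if mu(a) < 0:
                    return False
                for b in A:
                    if not (a & b) and mu(a | b) != mu(a) + mu(b):
                        return False
            return True

        assert is_measure(len, A)    # counting measure: mu(A) = number of points in A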

    With the three concepts of a set X, a σ algebra 𝒜, and a measure μ we call the triple (X, 𝒜, μ) a measure space. If, in addition, we can find a sequence {Ak} of sets in the σ algebra 𝒜 such that

    X = ∪k Ak with μ(Ak) < ∞ for every k,

    then we say that the measure space (X, 𝒜, μ) is σ-finite. All of the measure spaces we consider will be σ-finite.

    Example 1.1. If we were considering a phase space like X = [0, 1] or X = R, then a reasonable σ algebra would be the smallest σ algebra containing all of the closed intervals of the form [a, b]. These intervals have Lebesgue measure μL([a, b]) = b − a.

    Throughout, we will associate a thermodynamic system with a measure space through the following postulate.

    POSTULATE A. A thermodynamic system is equivalent to a measure space.

    Thus, every time we use the term thermodynamic system we are referring to the triple consisting of a phase space X, a σ algebra 𝒜, and a measure μ.

    B. DYNAMICS.

    We next consider a thermodynamic system operating in a phase space X. On this phase space the temporal evolution of our system is described by a dynamical law St that maps points in the phase space X into new points, i.e., St : X → X, as time t changes. In general, X may be a d-dimensional phase space, either finite or not, and therefore x is a d-dimensional vector. Time t may be either continuous (t ∈ R) as, e.g., it would be for a system whose dynamics were governed by a set of differential equations, or discrete (integer valued, t ∈ Z) if the dynamics are determined by discrete time maps.

    We only consider autonomous processes in which the dynamics St are not an explicit function of the time t, so it is always the case that St(St′(x)) = St+t′(x). Thus, the dynamics governing the evolution of the system are the same on the intervals [0, t′] and [t, t + t′]. This is not a serious restriction, since any nonautonomous system can always be reformulated as an autonomous one by the definition of new dependent variables.
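
    As a concrete illustration (our example, not the text's), the flow St(x) = x eᵗ of the simple differential equation dx/dt = x is autonomous, and the composition rule St(St′(x)) = St+t′(x) can be checked numerically:

        import math

        def S(t, x):
            """Flow of dx/dt = x on X = R: S_t(x) = x * exp(t)."""
            return x * math.exp(t)

        x0, t1, t2 = 0.7, 1.3, 2.1
        # autonomy: composing the flow adds the elapsed times
        assert math.isclose(S(t2, S(t1, x0)), S(t1 + t2, x0))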

    Two types of dynamics will be important in our considerations, and some preliminary discussion will be helpful. Consider a phase space X and a dynamics St : X → X. For every initial point x⁰, the sequence of successive points St(x⁰), considered as a function of time t, is called a trajectory. In the phase space X, if the trajectory St(x⁰) is nonintersecting with itself, or intersecting but periodic, then at any given final time tf such that xf = Stf(x⁰) we could change the sign of time by replacing t by −t, and run the trajectory backward using xf as a new initial point in X. Then our new trajectory S−t(xf) would arrive precisely back at x⁰ after a time tf had elapsed: x⁰ = S−tf(xf). Thus in this case we have a dynamics that may be reversed in time completely unambiguously. Dynamics with this character are known variously as time reversal invariant (Sachs, 1987) or reversible (Reichenbach, 1957) in the physics literature, and as invertible in the mathematics literature.

    We formalize this by introducing the concept of a dynamical system {St}t∈R (or, alternately, {St}t∈Z for discrete time systems) on a phase space X, which is simply any group of transformations St : X → X having the two properties:

    (1) S0(x) = x; and

    (2) St(St′(x)) = St+t′(x) for t, t′ ∈ R (or Z).

    Since, from the definition, for any t ∈ R we have

    St(S−t(x)) = S−t(St(x)) = S0(x) = x,

    it is clear that dynamical systems are invertible in the sense discussed above, since they may be run either forward or backward in time. Systems of ordinary differential equations are examples of dynamical systems, as are invertible maps. All of the equations of classical and quantum physics are invertible.
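
    A minimal sketch of this invertibility, using the same flow St(x) = x eᵗ as above (our example): running forward for a time tf and then backward for the same time returns the initial point exactly.

        import math

        S = lambda t, x: x * math.exp(t)     # the flow of dx/dt = x again

        x0, tf = 0.7, 1.3
        xf = S(tf, x0)                       # run forward for time tf
        assert math.isclose(S(-tf, xf), x0)  # run backward: x0 is recovered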

    To illustrate the second type of dynamics, consider a trajectory that intersects itself but is not periodic. Now starting from an initial point x⁰ we find that the trajectory {St(x⁰)} eventually crosses itself. There is then a problem in picking xf = Stf(x⁰) and reversing the sign of time to run the trajectory backward from xf, because at the self-intersection the dynamics give us no clue about which way to go! Situations like this are called irreversible in the physics literature, while mathematicians call them noninvertible.

    Therefore, the second type of dynamics that is important to distinguish is that of semidynamical systems {St}t≥0, where {St}t≥0 is any semigroup of transformations St : X → X, i.e.,

    (1) S0(x) = x; and

    (2) St(St′(x)) = St+t′(x) for t, t′ ∈ R+ (or N).

    The difference between the definition of dynamical and semidynamical systems lies solely in the restriction of t and t′ to values drawn from the positive real numbers, or the positive integers, for the semidynamical systems. Thus, in sharp contrast to dynamical systems, semidynamical systems are noninvertible and may not be run backward in time in an unambiguous fashion. Examples of semidynamical systems are given by noninvertible maps, delay differential equations, and some partial differential equations.
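
    A minimal noninvertible example (ours, not the book's) is the quadratic map S(x) = 4x(1 − x) on X = [0, 1], a discrete time semidynamical system: almost every point has two distinct preimages, so there is no unambiguous way to step backward in time.

        import math

        def S(x):
            """Quadratic map on [0, 1]: a noninvertible discrete time dynamics."""
            return 4.0 * x * (1.0 - x)

        def preimages(y):
            """Both solutions of 4x(1 - x) = y for 0 <= y <= 1."""
            r = math.sqrt(1.0 - y) / 2.0
            return (0.5 - r, 0.5 + r)

        y = S(0.2)                 # one forward step from x = 0.2
        lo, hi = preimages(y)      # stepping backward gives two candidates...
        assert math.isclose(lo, 0.2) and math.isclose(hi, 0.8)
        # ...and the dynamics give no clue which one to choose.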

    Often there is a certain confusion in the literature when the terms reversible and irreversible are used, and to avoid this we will always use the adjectives invertible and noninvertible. In spite of the enormous significance that the distinction between dynamical and semidynamical systems will assume later, at this point no assumption is made concerning the invertibility or noninvertibility of the system dynamics.

    C. THERMODYNAMIC STATES.

    The usual way of examining the dynamics of systems is by studying the properties of individual trajectories, but in keeping with the ergodic theory approach adopted here we opt instead to study the way in which the system dynamics operate on an infinite number of initial points.

    More specifically, we will examine the way in which the dynamics alter densities. What do we mean by a density? If f is an L¹ function on the space X, i.e., if

    ∫X |f(x)| dx < ∞,

    then f is a density if f(x) ≥ 0 and ‖f‖ = 1, where ‖f‖ denotes the L¹ norm of the function f,

    ‖f‖ = ∫X |f(x)| dx.
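
    Numerically, any nonnegative integrable function can be turned into a density by dividing by its L¹ norm. A quick check (our illustration, assuming SciPy is available):

        import math
        from scipy.integrate import quad

        g = lambda x: math.exp(-x * x)            # nonnegative and integrable on R
        norm, _ = quad(g, -math.inf, math.inf)    # the L1 norm of g (= sqrt(pi))
        f = lambda x: g(x) / norm                 # f is now a density

        total, _ = quad(f, -math.inf, math.inf)
        assert math.isclose(total, 1.0, rel_tol=1e-8)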

    The examination of the evolution of densities by system dynamics is equivalent to examining the behavior of an infinite number of trajectories. This apparently simple assumption concerning the way in which systems operate on densities is so fundamental and important to the understanding of the foundations of thermodynamics that it is given a special status.

    POSTULATE B. A thermodynamic system has, at any given time, a state characterized by a density f(x), not necessarily independent of time.

    Given a density f, the f measure μf(A) of a set A in the phase space X is defined by

    μf(A) = ∫A f(x) dx,

    and f is called the density of the measure μf. The usual Lebesgue measure of a set A is denoted by μL(A), and the density of the Lebesgue measure is the uniform density, f(x) = 1/μL(X) for all points x in the phase space X. We always write μL(dx) = dx.

    The Lebesgue measure of the entire phase space, denoted by μL(X), may be either finite or infinite. If it is finite, we often take it to be normalized so μL(X) = 1. It is important to realize that the measure of a set can be quite different depending on the density f. Thus, for example, the Lebesgue measure of the positive real line R+ is infinite, whereas the measure of R+ with respect to the density f(x) = k e⁻ᵏˣ is just

    μf(R+) = ∫₀∞ k e⁻ᵏˣ dx = 1.
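
    A numerical version of this contrast (our sketch, assuming SciPy is available): the f measure gives R+ total measure 1 and weights intervals of equal Lebesgue measure very unequally.

        import math
        from scipy.integrate import quad

        k = 2.0
        f = lambda x: k * math.exp(-k * x)        # the density on X = R+

        mu_f = lambda a, b: quad(f, a, b)[0]      # the f measure of [a, b]
        assert math.isclose(mu_f(0.0, math.inf), 1.0, rel_tol=1e-8)

        # [0, 1] and [1, 2] have equal Lebesgue measure but very different f measure:
        print(mu_f(0.0, 1.0), mu_f(1.0, 2.0))     # ~0.865 versus ~0.117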

    It is instructive to compare the approach used here with that of Boltzmann and Gibbs in their treatments of statistical mechanics. Both started from the assumption that they were dealing with systems of dimension d = 2s whose dynamics were described by s position variables xi and s momentum variables pi. Boltzmann considered the phase space to be a 2s-dimensional space which is usually called μ space. He then considered the evolution of a large number N of identical particles, each with the same dynamics, in μ space. N is large, typically on the order of Avogadro's number, 6 × 10²³. The limiting case of N → ∞ is the thermodynamic limit, in which case the Boltzmann approach is equivalent to studying the evolution of a density in μ space. Gibbs also considered N identical particles operating with these 2s-dimensional dynamics in a phase space (commonly called the Γ space) of dimension 2sN. He then considered an infinite number of copies of this original system, and gave this construct the name ensemble. Thus Gibbs studied the evolution of the ensemble density, and the Γ space approach has proved to be the most useful in statistical mechanics.

    This book is devoted to the study of systems through the evolution of densities: how system properties (dynamics) determine the character of the density evolution, and how this is translated into entropy behavior. Later it will become clear what types of systems may be described by the evolution of densities. However, if for now we accept Postulate B that such systems exist, then it will be easy to examine the consequences of this postulate.

    D. BOLTZMANN-GIBBS ENTROPY.

    Having postulated that a thermodynamic system has a state characterized by a density f, we are now in a position to develop the physically useful concept of entropy as both Boltzmann and Gibbs introduced the term.

    First we define an observable 𝒪 : X → R to be a real valued function on the phase space X. The expectation (or phase space average) of the observable 𝒪 with respect to the state density f is obtained by multiplying 𝒪(x) with the system state density f(x) and integrating over the entire phase space:

    ⟨𝒪⟩ = ∫X 𝒪(x) f(x) dx.
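
    For instance (our illustration, not the text's), with the exponential density f(x) = k e⁻ᵏˣ on R+ and the observable 𝒪(x) = x², the expectation can be checked against the exact value 2/k²:

        import math
        from scipy.integrate import quad

        k = 2.0
        f = lambda x: k * math.exp(-k * x)        # the system state density on R+
        O = lambda x: x * x                       # an observable on X = R+

        expectation, _ = quad(lambda x: O(x) * f(x), 0.0, math.inf)
        assert math.isclose(expectation, 2.0 / k**2, rel_tol=1e-8)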

    In his celebrated work Gibbs, assuming the existence of a system state density f on the phase space X, introduced the concept of the index of probability given by log f(x), where log denotes the natural logarithm. Though Gibbs identified −log f with entropy, it is now customary to introduce a quantity H(f) which is the negative of the phase space average of the index of probability weighted by the density f, i.e.,

    H(f) = −∫X f(x) log f(x) dx.

    This is now known as the Boltzmann-Gibbs entropy of a density f since precisely the same expression appears in Boltzmann’s work (with the opposite sign) but the phase space is different for Boltzmann (μ space) and for Gibbs (Γ space). Clearly, the Boltzmann-Gibbs entropy is just the expectation of the observable defined by the negative of the index of probability.
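
    As a quick worked example (ours, not the book's): for the uniform density f(x) = 1/μL(X) on a phase space of finite Lebesgue measure, the Boltzmann-Gibbs entropy is just the logarithm of the phase space volume,

        H(f) = -\int_X \frac{1}{\mu_L(X)} \log\frac{1}{\mu_L(X)} \, dx = \log \mu_L(X),

    so that H(f) = 0 for the uniform density on X = [0, 1].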

    As it stands, the definition of the Boltzmann-Gibbs entropy may seem a bit obscure, and some motivation illustrates why it is the only reasonable candidate for a mathematical analog of the empirical thermodynamic entropy. It is easily shown that the only observable which is a function of a thermodynamic state that gives the requisite additive property to make the entropy an extensive quantity is the logarithmic function, and that it is unique up to a multiplicative constant (Khinchin, 1949; Skagerstam, 1974).

    To be more specific, consider two systems A and B operating in the phase spaces XA and XB, respectively, and each having the densities of states fA and fB. We now combine the two systems to form a new system C operating in the product space XC = XA × XB, so system C will have a density of states fC(x, y) = fA(x) fB(y) if A and B do not interact. Experimentally we expect that when the two systems are combined into a larger system C, then the entropy of system C should equal the sum of the individual entropies of A and B, since entropy is generally held to be an extensive system property. We wish to show that the Gibbs choice for the index of probability is the only choice (up to a multiplicative constant) that will ensure this.
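
    The computation behind this expectation is short (a standard verification using the normalization of fA and fB): with the logarithmic choice the entropy of the product density splits exactly,

        H(f_C) = -\int_{X_A} \int_{X_B} f_A(x) f_B(y) \left[ \log f_A(x) + \log f_B(y) \right] dy \, dx = H(f_A) + H(f_B),

    because in each cross term the other density simply integrates to 1.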

    To make the argument concrete, suppose that the entropy associated with a density f has the form

    H(f) = ∫X f(x) h(f(x)) dx

    for some as yet unknown function h. Since fC = fA fB, if h satisfies h(fA fB) = h(fA) + h(fB), then the relation H(fA) + H(fB) = H(fC) follows immediately. Is h(ω) = d log ω, where d is a constant, the only function with the requisite property?

    To see that it is, suppose h is any function such that

    h(ab) = h(a) + h(b) for all a, b > 0.

    Define two new functions υA(a) and υB(b) through

    υA(a) = h(eᵃ) and υB(b) = h(eᵇ).

    Then we have

    υA(a) + υB(b) = h(eᵃeᵇ) = h(eᵃ⁺ᵇ),

    or with h(eᵃ) = υ(a), so that υA = υB = υ, this becomes

    υ(a + b) = υ(a) + υ(b).

    This, however, is just the famous Cauchy functional equation that has the unique (measurable) solution

    υ(a) = da, and therefore h(ω) = d log ω.
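
    Substituting this solution back into the assumed form of the entropy (our summary of where the argument lands) shows that additivity determines the entropy up to the multiplicative constant d,

        H(f) = \int_X f(x) \, h(f(x)) \, dx = d \int_X f(x) \log f(x) \, dx,

    and the choice d = −1 recovers the Boltzmann-Gibbs entropy.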

