Other Useful Information and Some Simple Models
We won't try to derive most of the following; we will just state the results without proof. Many of the statements you can derive yourself from what has been given on the previous page.
1. When we are dealing with a composite system, i.e., a system composed of two or more (approximately) independent subsystems, the energy states of the whole system are approximately combinations of the states of the individual subsystems. Consider subsystems A and B and their combination AB. We can write, to good approximation,

EAB(i,j) ≈ EA(i) + EB(j),

where EA(i) is the energy of subsystem A in state i and EB(j) is the energy of subsystem B in state j.
(What we are really saying is that the interaction energy between system A and system B is much smaller than either of their energies alone, so we neglect the interaction energy. This is not exact, but it is a good approximation.) In this case the partition function for the combined system QAB factors into a product of the individual partition functions QA and QB,

QAB = QA QB.
(You can show this is true by starting with the definition of QAB,

QAB = Σi Σj exp{−β[EA(i) + EB(j)]},
and showing that this factors into two independent summations, one over i and one over j.)
This result has important consequences in statistical thermodynamics. Since A = −kT lnQ,

AAB = −kT lnQAB = −kT ln(QA QB) = −kT lnQA − kT lnQB = AA + AB.
In other words, the Helmholtz free energy for the composite system is just the sum of the Helmholtz free energies of the individual systems. The Helmholtz free energy is additive. I will leave it to you to show that this is also true of the internal energy, the entropy, the heat capacity, and any other property of the system.
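This additivity is easy to verify numerically. The sketch below uses made-up energy levels for the two subsystems (purely illustrative values) and works in units where kT = 1, so β = 1:

```python
import math

# Two independent subsystems with made-up energy levels (units of kT).
E_A = [0.0, 1.0, 2.5]
E_B = [0.0, 0.7]
beta = 1.0  # units where kT = 1

# Individual partition functions.
Q_A = sum(math.exp(-beta * e) for e in E_A)
Q_B = sum(math.exp(-beta * e) for e in E_B)

# Combined system: sum over all pairs of states (i, j),
# with E_AB(i, j) = E_A(i) + E_B(j).
Q_AB = sum(math.exp(-beta * (ea + eb)) for ea in E_A for eb in E_B)

# The double sum factors, so Q_AB = Q_A * Q_B, and A = -kT ln Q is additive.
A_A = -math.log(Q_A)
A_B = -math.log(Q_B)
A_AB = -math.log(Q_AB)
print(Q_AB - Q_A * Q_B, A_AB - (A_A + A_B))  # both essentially zero
```

The same double loop, run with any other level lists, shows the factorization is a property of the sum itself, not of the particular energies chosen.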
2. It is customary to give the partition function for one molecule a special symbol. We call

q = Σi exp(−βεi)

the molecular partition function, where the sum runs over the energy states εi of a single molecule.
3. A natural extension of these two ideas is that a gas of N molecules (where the energy states of one molecule do not alter the energy states of another molecule, i.e., the molecules do not interact with each other) will have a partition function

Q = q^N.
(This is almost correct. To account properly for the fact that molecules are indistinguishable, i.e., they can't be labeled, one should write instead

Q = q^N/N!.)    (45)
4. A further extension of the above ideas can be made when the translational energy, the rotational energy, the vibrational energy, and so on, are all independent of each other. That means, for example, that the rotational motion is unaffected by how the molecule is vibrating or by its translational motion through space. (Strictly speaking, this is never true, but it is almost always approximately true.) In this case one can write partition functions for each individual type of motion. For example, qtrans is the partition function for the translational motion of one molecule, qrot is the partition function for the rotational motion of one molecule, and so on. In this case (which will be the usual case for us) we can write

q = qtrans qrot qvib qetc,    (46)
where qetc is the partition function for any other motions you want to include in your model.
Combining this result with Equation 45 above gives, for a gas of N molecules,

Q = (qtrans^N/N!) qrot^N qvib^N qetc^N.    (47)
Notice that the q's in the last two equations are multiplied together and not added. (There is, for some reason, a tendency for students to want to add these partition functions rather than multiply them.) In Equation 47 I have purposely associated the N! with the translational part of the partition function. This is because the labeling problem (i.e., our inability to actually label the molecules, which is why we put the N! there in the first place) only arises when we move the particles around, and moving them around is translational motion.
Equation 47 also has important consequences. You can write the thermodynamic functions as a sum of contributions from the different types of motion: translation, rotation, vibration, and so on. That is,

A = Atrans + Arot + Avib + Aetc.
(I'll let you prove this one.) For example, the rotational contribution to the internal energy or the entropy or any other function can be obtained from

Qrot = qrot^N    (49)
and so on. Notice that there is no N! in Equation 49. A similar equation would work for the vibrational contribution, the electronic contribution, and so on. But for the translational contribution you would have to include the N!.
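As a numerical illustration of extracting a single contribution, the sketch below gets the rotational internal energy per molecule from U = kT² d(lnQ)/dT, using the high-temperature form qrot = T/(σΘR) that appears later in these notes; ΘR = 2.9 K and σ = 2 are assumed illustrative values:

```python
import math

# Rotational contribution to the internal energy from U = k*T**2 * d(ln Q)/dT
# with Q_rot = q_rot**N (no N! here).  In the high-temperature limit
# q_rot = T/(sigma*Theta_R), so d(ln q_rot)/dT = 1/T and U_rot = k*T per
# molecule.  Checked with a numerical (central-difference) derivative.
# Theta_R = 2.9 K and sigma = 2 are assumed illustrative values.
theta_R, sigma = 2.9, 2
kB = 1.0                 # work in units where k = 1

def ln_q_rot(T):
    return math.log(T / (sigma * theta_R))

T, dT = 300.0, 1e-3
U_rot = kB * T**2 * (ln_q_rot(T + dT) - ln_q_rot(T - dT)) / (2 * dT)
print(U_rot, kB * T)     # the two agree: U_rot per molecule is kT
```

Note that the symmetry number σ drops out of the derivative entirely, which is why it never affects the internal energy.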
Sometimes it is useful, when you have to include the N!, to use Stirling's approximation, which can be written

ln N! ≈ N lnN − N.
This approximation is useful when N is very large, like around Avogadro's number.
Using Stirling's approximation makes the translational part of the partition function look like

Qtrans = qtrans^N/N! ≈ (e qtrans/N)^N,
which is a little easier to handle than the corresponding one with the N! in it.
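To see how good Stirling's approximation is, compare it with an exact ln N! (via the log-gamma function) for a large N; N = 10⁶ here is an arbitrary choice:

```python
import math

# Stirling's approximation: ln N! ≈ N ln N − N, accurate for large N.
def stirling_ln_factorial(n):
    return n * math.log(n) - n

N = 10**6
exact = math.lgamma(N + 1)          # ln N! computed from the gamma function
approx = stirling_ln_factorial(N)

rel_err = abs(exact - approx) / exact
print(rel_err)  # the relative error shrinks as N grows
```

For N near Avogadro's number the relative error is utterly negligible, which is why the approximation is standard in statistical thermodynamics.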
Now for some specific models.
5. Translational Motion In One Dimension
Usually we model the translational part of the motion of a molecule by particle-in-a-box states. The energy of a particle in a one-dimensional box of length l depends on one quantum number, n, which can be 1, 2, 3, . . . up to infinity. The equation for the quantized energy is

En = n²h²/(8ml²),
where h is Planck's constant and m is the mass of one molecule.
The partition function for this system is

qtrans = Σn exp[−βn²h²/(8ml²)],  n = 1, 2, 3, . . . .
The summation cannot be performed in closed form, but it can be approximated to high accuracy by an integral,

qtrans ≈ ∫ exp[−βn²h²/(8ml²)] dn,  with n running from 0 to ∞.
This integral can be evaluated and gives

qtrans = (2πmkT)^(1/2) l/h.
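You can check the integral approximation against the direct sum. For a real molecule at room temperature the quantity βh²/(8ml²) is around 10⁻²⁰, far too small to sum term by term, so the sketch below uses an artificially larger value that is still well inside the many-accessible-states regime:

```python
import math

# Particle-in-a-box partition function: direct sum vs integral approximation.
# Writing a = beta*h**2/(8*m*l**2), the sum is over exp(-a*n**2) and the
# integral approximation gives q ≈ sqrt(pi/(4*a)), which is the same thing as
# l*sqrt(2*pi*m*k*T)/h.  a = 1e-4 is an artificial illustrative value chosen
# so the sum converges in a few thousand terms.
a = 1e-4

q_sum = sum(math.exp(-a * n * n) for n in range(1, 5000))
q_integral = math.sqrt(math.pi / (4 * a))

print(q_sum, q_integral)  # agree to well under 1%
```

The small residual difference (about one half, from the n = 0 endpoint of the integral) becomes completely negligible at realistic values of a.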
6. Translational Motion In Three Dimensions
Translational motion in three dimensions is just as simple, except that now there are three quantum numbers, one for each direction, nx, ny, and nz, and the energy is

E = (nx² + ny² + nz²) h²/(8ml²).
The translational partition function in three dimensions is a threefold summation,

qtrans = Σnx Σny Σnz exp[−β(nx² + ny² + nz²)h²/(8ml²)].
I will let you show that this can be written as

qtrans = {Σn exp[−βn²h²/(8ml²)]}³.    (59)
Since we've already approximated the summation in Equation 59 by an integral, we can immediately write

qtrans = (2πmkT)^(3/2) l³/h³ = (2πmkT)^(3/2) V/h³,
where we have let l³ = V. (You don't have to use a cubical box, but it is simpler. You could even use a spherical box, but that is much more fun.)
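To get a feel for the size of qtrans, the sketch below evaluates the three-dimensional result for argon at 300 K in a one-liter box (the gas, temperature, and volume are assumed illustrative choices):

```python
import math

# Order of magnitude of the 3-D translational partition function,
# q_trans = (2*pi*m*k*T)**1.5 * V / h**3, for argon at 300 K in a 1 L box.
# Argon, 300 K, and 1 L are assumed illustrative choices.
h = 6.62607e-34          # Planck's constant, J s
kB = 1.38065e-23         # Boltzmann's constant, J/K
m = 39.95 * 1.66054e-27  # mass of one Ar atom, kg
T = 300.0                # K
V = 1.0e-3               # m^3 (one liter)

q_trans = (2 * math.pi * m * kB * T) ** 1.5 * V / h ** 3
print(q_trans)  # an enormous number of thermally accessible states
```

The answer is around 10²⁹, which is why the integral approximation to the level sum, and later the division by N!, are so safe for translation.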
7. Rotational Motion
The rotational energy of a linear molecule (neglecting such things as centrifugal distortion) is given by BJ(J+1), where B is the rotational constant and J = 0, 1, 2, . . . , and each J level is (2J+1)-fold degenerate. The rotational partition function is easy to write:

qrot = ΣJ (2J+1) exp[−βBJ(J+1)].    (61)
(You must be careful to make sure the units in βB cancel. I usually do this by using Boltzmann's constant in cm−1: k = 0.69503 cm−1 K−1.) If βB << 1 (this is the high-temperature limit) the summation can be approximated by an integral (which I will let you do, because it's easy) to give

qrot = 1/(βB) = kT/B.
This is almost right. However, we have missed something, and that something is related to the same problem that led us to divide by N! in the translational partition function. The idea is somewhat like this: the partition function is a sum over states. For a heteronuclear diatomic molecule (or an unsymmetrical one, like HCN) you have to rotate the molecule all the way around, 360°, to bring it back to the same "state." For a homonuclear diatomic molecule (or a symmetrical one, such as CO2) it comes back to the same "state" after only a 180° rotation. So somehow an asymmetric molecule, in going around 360°, has passed through only one "state," while a symmetric molecule has passed through two "states" in a 360° rotation. If we try to use Equation 61 for a symmetric molecule we have somehow overcounted the "states" by a factor of two. So to fix things we write

qrot = kT/(σB),    (63)
where σ is called the symmetry number. σ is the number of ways the molecule can be oriented which are indistinguishable from each other. For HCl, σ = 1; and for Cl2, σ = 2 (as long as both Cl atoms are the same isotope). We will see the symmetry number again below.
It is customary to define the "characteristic rotational temperature," ΘR, as

ΘR = B/k,

so that the rotational partition function can be written

qrot = T/(σΘR).    (65)
Defining ΘR also allows us to say what we mean by high and low temperatures. If T >> ΘR we say that T is a high temperature, and we can use Equation 63 or 65 as the rotational partition function. If T ≈ ΘR or T < ΘR, then we say T is a low temperature and we must use the summation formula, Equation 61 (divided by the appropriate σ). For reasonably sized molecules ΘR is usually only a few kelvin. For light molecules it can be higher (for H2, ΘR = 87.57 K).
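A quick numerical check of the high-temperature formula against the summation formula, taking B ≈ 2.0 cm⁻¹ (roughly the rotational constant of N2, used here only as an illustrative value) and σ = 2:

```python
import math

# Rotational partition function of a linear molecule: direct sum over J
# versus the high-temperature formula T/(sigma*Theta_R).
# B ≈ 2.0 cm^-1 (roughly N2) and sigma = 2 are assumed illustrative values.
k_cm = 0.69503           # Boltzmann's constant in cm^-1 K^-1, as in the text
B = 2.0                  # rotational constant, cm^-1 (assumed)
sigma = 2
T = 300.0                # K

theta_R = B / k_cm       # characteristic rotational temperature, K (~2.9 K)

q_sum = sum((2 * J + 1) * math.exp(-theta_R * J * (J + 1) / T)
            for J in range(200)) / sigma
q_highT = T / (sigma * theta_R)

print(q_sum, q_highT)    # agree to well under 1% since T >> Theta_R
```

Lowering T toward ΘR in this script makes the two answers diverge, which is exactly the low-temperature regime where only the summation formula is trustworthy.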
Nonlinear molecules have three moments of inertia and three rotational constants (and, hence, three ΘR's). If we call the three rotational constants A, B, and C, the rotational partition function (at high temperatures) is

qrot = (π^(1/2)/σ) [(kT)³/(ABC)]^(1/2).
Again, σ is the symmetry number and it is the number of orientations of the molecule which are indistinguishable from each other (for benzene σ =12, for ammonia σ =3, etc).
8. Vibrational motion
Vibrational energies for one mode of vibration are

Ev = (v + 1/2)hν,
where v = 0, 1, 2, 3, . . . , and ν (the Greek letter nu) is the characteristic frequency of the oscillator. The vibrational partition function is

qvib = Σv exp[−β(v + 1/2)hν],
which can be summed in closed form to give

qvib = exp(−βhν/2)/[1 − exp(−βhν)].    (69)
(To see that this is true, recall that

1 + x + x² + x³ + · · · = 1/(1 − x) for |x| < 1,

and let x = exp(−βhν).)
Sometimes a characteristic vibrational temperature, Θv, is defined by

Θv = hν/k,

and the partition function is written in terms of Θv instead of hν/k. Characteristic vibrational temperatures are usually several thousand kelvin, except for very "soft" or low-frequency vibrational modes.
Polyatomic molecules have more than one vibrational mode. For polyatomic molecules one needs a factor like Equation 69 for each mode (all multiplied together, not added!). For a molecule like Buckminsterfullerene, C60, that would be a lot of modes and a lot of qvib factors!
The high-temperature limit of qvib (ignoring the zero-point energy contribution) is T/Θv (I will let you prove it). This is sometimes called the "classical" limit because it is the result that is obtained from developing statistical thermodynamics from classical mechanics instead of quantum mechanics. (It is also the reason why the Rayleigh-Jeans law for black-body radiation didn't work, but that is another story.)
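A quick check of this classical limit, with Θv = 1000 K (an assumed illustrative value) and a temperature far above it:

```python
import math

# High-temperature ("classical") limit of the vibrational partition function.
# Ignoring the zero-point contribution, q_vib = 1/(1 - exp(-Theta_v/T)),
# which approaches T/Theta_v when T >> Theta_v.
# Theta_v = 1000 K is an assumed illustrative value.
theta_v = 1000.0         # K
T = 100000.0             # K, chosen much larger than Theta_v

q_exact = 1.0 / (1.0 - math.exp(-theta_v / T))
q_classical = T / theta_v

print(q_exact, q_classical)  # agree to about half a percent here
```

At ordinary temperatures, where Θv >> T, the two expressions disagree badly, which is precisely the quantum regime where most vibrations are frozen out.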
9. Electronic Energy
Electronic excited-state energies are usually (but not always) much higher than kT, so they don't contribute to thermodynamic properties except at extremely high temperatures. When they do contribute, we have to write out the partition function term by term:

qelec = Σi gi exp(−βεi) = g0 + g1 exp(−βε1) + g2 exp(−βε2) + · · · .
Here gi is the degeneracy of the ith level, and we have selected the ground electronic state as the zero (or origin) of energy.
10. Et Cetera
Most of what has been given above will work just fine until you try to apply it to chemical reactions, as when we want to calculate an equilibrium constant. When you have the possibility of molecules (or atoms) being taken apart and put back together, possibly differently, you must be careful to make sure the molecular energies are always referred to the same zero of energy. For atoms involved in ionic reactions this is accomplished by selecting the zero of energy as the ion and its electron(s), separated and at rest. That means the ground state of the neutral atom is actually below the zero of energy by an amount of energy equal to the ionization potential, Ip. (So the ground electronic state actually has energy −Ip.) Accordingly we set qetc = exp(+βIp). Note that (−β)(−Ip) = +βIp in the exponent.
For molecules we select the zero of energy as the separated atoms at rest. This means that all the internal energy states of the molecule have been measured from an energy which is below the zero of energy by an amount equal to the dissociation energy, De. To adjust for this shift of energy origin we use qetc = exp(+βDe). Again the plus sign comes from (−β)(−De) = +βDe in the exponent.
The subject of the origin (or zero) of energy brings up another issue. Energy measurements or calculations are always relative (and I don't mean in the sense of the theory of relativity). The energy you calculate or measure depends on what you defined to be the zero of energy. For example, we usually think of the gravitational potential energy of our bodies as being zero as long as we are on the ground. However, if we measured our potential energy on the same scale that planetary astronomers use to discuss the earth-moon system (zero energy is the objects separated at infinity and at rest), we would find that our potential energy, standing on the ground, would be large and negative. As a matter of fact, on the planetary scale our total energy is negative. If it weren't, we would be "unbound" and off floating in space. None of this makes any difference to how much energy it takes to climb the stairs from one floor to another, nor to the change in your potential energy upon doing so.
By the same token, when you specify the energy of a molecule there is always an implied statement about what you called zero energy, i.e., where you measured from. The zero we pick shouldn't and doesn't have any effect on thermodynamic properties. If we have a set of energy levels Ej we will get a partition function

Q = Σj exp(−βEj).
If we shift each level by the same constant amount, say call the levels Ej + c, we get what looks like a new partition function,

Q' = Σj exp[−β(Ej + c)].
But this factors to give

Q' = exp(−βc) Σj exp(−βEj) = exp(−βc) Q.
When we take lnQ' we will see that it differs from lnQ only by an additive term −βc. This term will contribute a constant additive term to A, U, H, and G, but it will not contribute to the entropy or the heat capacities, nor will it contribute to quantities like ΔA, etc. Some writers like to remind us continually that we are carrying around an implied definition of the zero of energy by writing, for example,

A − A(0) = −kT lnQ,
and they continue in the same manner with U, G, and so on. Since we always know that there is an arbitrary zero of energy in our equations, I prefer to stay with our Equation 32,

A = −kT lnQ,    (32)
and save a lot of extra writing.
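The cancellation of the energy-zero constant is easy to demonstrate numerically; the levels and the shift c below are arbitrary illustrative numbers, with β = 1:

```python
import math

# Shifting every level E_j by the same constant c multiplies Q by exp(-beta*c),
# so ln Q' = ln Q - beta*c.  The shift adds a constant to A but cancels in
# differences like delta A and in the entropy.
# The levels and c are arbitrary illustrative numbers; beta = 1.
levels = [0.0, 1.0, 2.0, 3.5]
c = 10.0
beta = 1.0

Q = sum(math.exp(-beta * e) for e in levels)
Q_shifted = sum(math.exp(-beta * (e + c)) for e in levels)

lnQ = math.log(Q)
lnQ_shifted = math.log(Q_shifted)
print(lnQ_shifted, lnQ - beta * c)  # equal
```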
11. Equilibrium Constants
You can calculate equilibrium constants (in terms of concentration in molecules/m³) from partition functions. The expression for this concentration equilibrium constant, in terms of the material given above, for a hypothetical reaction

a A + b B → c C + d D,

is

K = [(qC/V)^c (qD/V)^d] / [(qA/V)^a (qB/V)^b].    (76)
In Equation 76 the volume dividing each of the molecular partition functions just serves to cancel the volume occurring in the translational part of the partition function, so that there is no explicit volume dependence in K. This equilibrium constant will be in units of molecules/m³ to some power (the c + d − a − b power, actually). It can be converted to mol/L or to pressure units by standard methods. Recall that N/V = p/kT.
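As a sketch of how an expression like Equation 76 is assembled in practice (the q/V values below are placeholder numbers, not real molecular partition functions, and the reaction is hypothetical):

```python
# Sketch of a concentration equilibrium constant built from molecular
# partition functions: K = product over species of (q/V)**nu, with nu
# positive for products and negative for reactants.  With real molecules,
# each q would also carry its common energy-zero factor, e.g. exp(+beta*De).
def K_c(q_over_V, stoich):
    """q_over_V and stoich are dicts keyed by species name; stoich holds
    the signed stoichiometric coefficients."""
    K = 1.0
    for species, nu in stoich.items():
        K *= q_over_V[species] ** nu
    return K

# Hypothetical reaction A + B -> 2 C, with placeholder q/V values (per m^3).
q = {"A": 1.0e30, "B": 2.0e30, "C": 5.0e29}
K = K_c(q, {"A": -1, "B": -1, "C": 2})
print(K)
```

With one molecule of each reactant making two of product, the stoichiometric powers here sum to zero, so this particular K happens to be dimensionless.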
Copyright 2004, W. R. Salzman