\chapter{Physics: Statistical Physics}\label{ch:statphys}
One regime of interest for lattice QCD calculations is the application
of lattice formalism to hot, dense nuclear systems; this is the realm
of thermodynamics\index{thermodynamics} and statistical physics. In this chapter I try to
offer some reminders of statistical physics, especially those that
are directly relevant to modern lattice investigations of the QCD
phase diagram. Some references I drew from include
Ref.~\cite{tahir-kheli_general_2012} and \cite{kardar_statistical_2007}.
\section{A brief history of early thermodynamics}
As it turns out, much of thermodynamics was discovered in order to develop
and increase the efficiency of engines. This presentation is based
on Ref.~\cite{wiki:thermo}. The first engine progenitor that I'm
aware of would be the vacuum pump, which was built and designed by Otto von
Guericke in 1650. Shortly thereafter, Boyle and Hooke developed an air pump.
Playing around with this air pump helped reveal {\it Boyle's law}\index{Boyle's
law}
\begin{equation}
P\propto\frac{1}{V},
\end{equation}
a relationship between the pressure $P$ and volume $V$ of a gas.
In 1697, Denis Papin developed a steam digester, a closed vessel which trapped steam
until a high pressure was generated, with a valve to release pressure to keep it
from exploding. He noticed the value would rhythmically move up and down and was
inspired to create a piston and cylinder engine; however, he did not pursue this design.
In 1712, Thomas Newcomen built the first engine based on Papin's concept.
This\index{engine!Newcomen} {\it Newcomen engine} was quite inefficient; by
1781, major improvements were made by James Watt, including making the condenser a separate
entity to avoid energy loss repeatedly cooling and re-heating the cylinder and
implementing rotary motion, allowing the\index{engine!Watt} {\it Watt engine} to
be more broadly applicable.
Such early engines are examples of {\it external combustion engines}, which
burn\index{engine!external combustion}
fuel, using the heat to increase the temperature of some liquid whose
pressurized vapor is used to move something.
The development of thermodynamics as a modern science arguably begins
with Sadi Carnot, who in 1824 published {\it Reflections on the Motive Power of
Fire}, a text on heat, power, energy and engine efficiency.
His book outlined energetic relations between the\index{Carnot!engine}
Carnot engine, the\index{Carnot!cycle} Carnot cycle, and motive power.
In 1850, Clausius published a paper titled ``On the Moving Force of Heat'', which
first stated the basic principles of the second law of thermodynamics.
He introduced the concept of entropy\index{entropy} in 1865.
In parallel to the development of thermodynamics as a discipline was the
constant improvement of commercial engine efficiency. Of particular note for
modern transportation is the constant improvement of the {\it internal
combustion engine}\index{engine!internal combustion}, where the products of fuel
combustion themselves directly provide the pressure used to produce mechanical
motion. These improvements were due to many scientists and engineers and
happened over the course of many years~\cite{wiki:internalCE}.
A few notable engineers, some of whom are household names, are listed in
chronological order of their contributions:
In 1872, George Brayton invented the first commercial liquid-fueled
internal combustion engine. In 1876, Nicolaus Otto, in cooperation with Gottlieb
Daimler and Wilhelm Maybach, patented the four-stroke cycle engine with a
compressed charge. Three years later, in 1879, Karl Benz patented a
dependable two-stroke gas engine. Lastly, in 1892 Rudolf Diesel\footnote{Diesel
engines tend to be more efficient for transportation
than gas engines, in the sense that they tend
to deliver more distance per unit fuel. One of the reasons for this is that the
combustion is triggered by high pressure instead of a spark, which requires more
compression. When the gas then expands, it therefore pushes the piston a larger
distance. Diesel died under mysterious circumstances--one likely possibility is that he
was assassinated.} invented the
first compressed charge, compression ignition engine.
\section{The laws of thermodynamics}
In thermodynamics, one usually divides the universe into a system
(or collection of systems) under
consideration and its (their) surroundings.
The {\it zeroth law of thermodynamics} is just the statement that
equilibrium is transitive.\index{thermodynamics!zeroth law}
\begin{theorem}{Zeroth law of thermodynamics}{}
If two systems $A$ and $B$ are each in equilibrium with a system $C$, then
$A$ and $B$ are also in equilibrium with each other.
\end{theorem}
Let $U$ be the energy of any system,
$Q$ the heat added to it, and
$W$ the work done on it. The {\it first law of thermodynamics}
is the statement of energy conservation.\index{thermodynamics!first law}
\begin{theorem}{First law of thermodynamics}{}
$$\dd U=Q+W$$
\end{theorem}
%In other words, heat and work are the only ways to change a system's energy.
These two laws are relatively straightforward to understand. But to understand
the second law, one needs the concept of entropy. First we will proceed
using discoveries by Carnot and Clausius, which I think is valuable
because it helps one understand why $Q=T\dd{S}$. Then we will take a look at an
information-oriented understanding of entropy. Both of these follow the
presentations in Ref.~\cite{kardar_statistical_2007} closely.
\begin{figure}
\includegraphics[width=\linewidth]{figs/engineFrige.pdf}
\caption{Schematic diagrams for an ideal engine (left) and refrigerator (right).}
\label{fig:engineFrige}
\end{figure}
\subsection{Entropy from engines}
A {\it heat engine}\index{engine!heat} is any idealized system that takes in
heat $\Qin$ from a source, converts some of that into work $W$, and loses some heat to its
surroundings $\Qlost$. A reasonable measure of its efficiency\footnote{It is not
clear to me at this stage that this is the ``best'' definition of efficiency or a
unique one. In particular, one could raise this ratio of heat to an arbitrary
(positive) power. In a sense, using 1 as the power is the simplest possible definition.
Additionally, when we derive
the Carnot engine efficiency, we will see that it is related to the ratio
of two temperatures raised to some power. There also we have the freedom to
pick a power, and again the simplest decision is 1. Picking the same power for
the heat and the temperature ratios is important so that the entropy derived from
engines matches the microcanonical (or information) entropy, which we will derive
later.} is
\begin{equation}\label{eq:engineEff}
\text{efficiency}=\frac{W}{\Qin}=\frac{\Qin-\Qlost}{\Qin}\leq1,
\end{equation}
where the inequality is due to the first law of
thermodynamics\footnote{Incidentally, this rules out any (perpetual motion)
machine that can produce more energy than you put in.}.
A {\it refrigerator}\index{refrigerator} is any idealized system that does the
opposite; i.e. it takes in some work $W$ to suck some heat $\Qsuck$ out of
some source, and dumps that heat $\Qdump$ into its environment.
Similarly, a reasonable measure of the refrigerator's efficiency is
\begin{equation}\label{eq:refrigeratorEff}
\text{efficiency}=\frac{\Qsuck}{W}=\frac{\Qsuck}{\Qdump-\Qsuck}.
\end{equation}
Schematics of an ideal engine and refrigerator are given in
\figref{fig:engineFrige}.
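These bookkeeping definitions are easy to sanity check numerically. Below is a minimal Python sketch (the function names are my own, not from any library), using the first law $W=\Qin-\Qlost$ for the engine and $W=\Qdump-\Qsuck$ for the refrigerator:

```python
def engine_efficiency(q_in, q_lost):
    """Engine: takes in heat q_in, loses q_lost, does work W = q_in - q_lost.
    Efficiency = W / q_in <= 1 by the first law."""
    return (q_in - q_lost) / q_in

def refrigerator_efficiency(q_suck, q_dump):
    """Refrigerator: uses work W = q_dump - q_suck to move heat q_suck
    out of a source. Efficiency = q_suck / W."""
    return q_suck / (q_dump - q_suck)

# An engine drawing 100 J and dumping 60 J converts 40% of the heat to work.
assert engine_efficiency(100.0, 60.0) == 0.4
# A refrigerator moving 60 J using 40 J of work has efficiency 1.5.
assert refrigerator_efficiency(60.0, 100.0) == 1.5
```

Note that the refrigerator efficiency, unlike the engine efficiency, can exceed 1.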
With this terminology, we give two equivalent statements of the second law,
which as far as I can tell, are just things that were empirically observed
when developing engines:\index{thermodynamics!second law}
\begin{enumerate}
\item (Kelvin) There is no process whose sole result is the conversion of heat
into work. In other words the inequality in \equatref{eq:engineEff}
should be strict.
\item (Clausius) There is no process whose sole result is the transfer of
heat from a colder system to a warmer one. In other words, the
efficiency \eqref{eq:refrigeratorEff} is finite.
\end{enumerate}
It is not too difficult to show using ideal engines and refrigerators
that these two formulations are equivalent~\cite{kardar_statistical_2007}.
Later, we will formulate these in terms of a new quantity, entropy. The
formulation in terms of entropy can be proven in the framework of statistical
physics.
In order to examine systems in detail, we need to be in equilibrium so that all
thermodynamic coordinates are well defined.
If nothing changes, a system remains in equilibrium; it stands to reason that
if a process happens sufficiently slowly, it is more or less still in
equilibrium. Such processes are called\index{quasistatic} {\it quasistatic}.
A process is {\it adiabatic}\index{adiabatic} if $Q=0$ throughout that process.
Next we introduce the idea of a\index{engine!Carnot} {\it Carnot engine},
which should theoretically be the most efficient engine possible.
It turns out that the most important property of such an engine is that it
is\index{reversible} {\it reversible}, i.e. that you can reverse all its inputs
and outputs with the result that it works like the forward engine running
backwards in time. In particular a Carnot engine is reversible,
runs\index{cycle} in a {\it cycle}, i.e. it returns to its original state
at the end of its process, and its heat exchanges occur at
temperatures\footnote{A Carnot cycle generically consists of two adiabats
and two isotherms. Specifying the heat exchanges at these temperatures allows us
to pick two isotherms. One can theoretically find adiabats using, e.g. an ideal
gas, but in principle Carnot engines with other materials are possible.}
$T_H$ and $T_C$, i.e. it draws and dumps heat without changing the
temperatures of its surroundings.
Looking at \figref{fig:engineFrige}, we see that a Carnot
engine run backward in time is just a Carnot refrigerator, and vice versa.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figs/carnotThm.pdf}
\caption{An idealized machine used to prove Carnot's theorem.}
\label{fig:carnot}
\end{figure}
To see why this reversibility matters, consider \figref{fig:carnot}. In this
system, we use any engine to draw heat from a hot reservoir fixed at
$T_H$ that loses heat to a cold reservoir $T_C$ with $T_H>T_C$. We reverse a Carnot engine,
and use the output work from the left engine as input work to the reversed
Carnot engine. This combined system works as a process whose sole result
transfers heat between the top and bottom reservoirs. By Clausius's statement
of the second law of thermodynamics, it must be that $\Qin\geq\Qdump$ and
$\Qlost\geq\Qsuck$. It follows that
\begin{equation}
\text{efficiency}=\frac{W}{\Qin}\leq\frac{W}{\Qdump}=\text{Carnot engine
efficiency}.
\end{equation}
\index{Carnot's theorem}
\begin{theorem}{Carnot}{}
No engine is more efficient than a Carnot engine.
\end{theorem}
\begin{corollary}{Carnot}{carnot}
All Carnot engines have the same efficiency $\eta(T_H,T_C).$
\end{corollary}
\begin{figure}
\centering
\includegraphics[width=0.4\linewidth]{figs/carnotSeries.pdf}
\caption{A series of Carnot engines.}
\label{fig:carnotSeries}
\end{figure}
Next, we are going to derive a relationship between heat flow and reservoir
temperature for the Carnot engine. Once we have this relationship, we will
creatively apply our knowledge of Carnot engines to say something about heat
flow and temperature generally. This will ultimately lead us to conclude that
entropy exists, and deliver a definition of it.
To derive the aforementioned relationship, consider a series of Carnot engines
as shown\footnote{The Carnot engines are identical and reversible, so it
must be that the heat exchanges on either side of $T_2$ are the same.}
in \figref{fig:carnotSeries} with $T_3>T_2>T_1$. The total effect of this
Carnot series is to take in heat $Q_3$, lose heat $Q_1$, and do work
$W_{31}=W_{32}+W_{21}.$ Using \corref{cor:carnot}
we find that the heat exchanges must be related by
\begin{equation}\begin{aligned}
Q_2 &= Q_3 - W_{32} &&= Q_3\left(1-\eta(T_3,T_2)\right) \\
Q_1 &= Q_2 - W_{21} &&= Q_2\left(1-\eta(T_2,T_1)\right)
= Q_3\left(1-\eta(T_3,T_2)\right)\left(1-\eta(T_2,T_1)\right) \\
Q_1 &= Q_3 - W_{31} &&= Q_3\left(1-\eta(T_3,T_1)\right).
\end{aligned}\end{equation}
From the last two equations, it follows
\begin{equation}
\left(1-\eta(T_3,T_2)\right)\left(1-\eta(T_2,T_1)\right) =
1-\eta(T_3,T_1).
\end{equation}
Whatever the functional form of $1-\eta$ is, the middle temperature must be
cancelled through a multiplication. Hence $1-\eta$ must be a ratio of its input
temperatures, raised to some power. The simplest choice\footnote{Again, this
choice also will help our definition of entropy match that from the
microcanonical ensemble.} that guarantees $0\leq\eta<1$ is
\begin{equation}\label{eq:carnotEfficiency}
\frac{Q_1}{Q_2}=1-\eta(T_2,T_1)\equiv\frac{T_1}{T_2}.
\end{equation}
It is nice to have this expression for the Carnot engine efficiency.
But again, from my perspective, it's more interesting to have this relation between
heat and temperature at the input and output heat vents for the engine.
Indeed, we can now potentially learn something about heat delivered
to an arbitrary system at some temperature, and we will leverage this trick to
prove a theorem due to Clausius.
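As a quick numerical sanity check of \equatref{eq:carnotEfficiency}, the following Python sketch (function names are my own) verifies that the choice $1-\eta=T_1/T_2$ indeed satisfies the composition property derived above:

```python
def carnot_efficiency(t_hot, t_cold):
    """Carnot efficiency eta(T_H, T_C) = 1 - T_C / T_H, temperatures in kelvin."""
    return 1.0 - t_cold / t_hot

t3, t2, t1 = 600.0, 450.0, 300.0  # T_3 > T_2 > T_1
# Composition property: (1 - eta(T3, T2)) (1 - eta(T2, T1)) = 1 - eta(T3, T1)
lhs = (1.0 - carnot_efficiency(t3, t2)) * (1.0 - carnot_efficiency(t2, t1))
rhs = 1.0 - carnot_efficiency(t3, t1)
assert abs(lhs - rhs) < 1e-12
# The choice also guarantees 0 <= eta < 1 whenever 0 < T_C < T_H.
assert 0.0 <= carnot_efficiency(t3, t1) < 1.0
```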
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/clausius.pdf}
\caption{A convenient setup used to prove Clausius's theorem.}
\label{fig:clausius}
\end{figure}
\index{Clausius's theorem}
\begin{theorem}{Clausius}{}
For any cyclic transformation,
$$ \oint\frac{\delta Q}{T}\leq0, $$
where $\delta Q$ is a small amount of heat supplied at temperature $T$ to
a system during part of the cycle.
\begin{proof}
Consider the setup of \figref{fig:clausius}. The directions of heat and
work can be chosen as given WLOG. We begin with a system that
has a small amount of heat $\delta Q$ delivered to it at temperature $T$; by
energy conservation, some work $\delta W$ will leave the system.
To leverage \equatref{eq:carnotEfficiency}, we direct $\delta Q$ to the output port
of a Carnot engine, which generically takes in heat $\delta Q_R$ from a
reservoir at temperature $T_R$ and does some work $\delta W_E$.
From \equatref{eq:carnotEfficiency} we have
\begin{equation*}
\delta Q_R=T_R\frac{\delta Q}{T}.
\end{equation*}
After returning to their original states, the system and Carnot engine have the
combined effect of taking in heat $Q_R=\oint\delta Q_R$ and doing work $W$.
By energy conservation $Q_R=W$. By Kelvin's statement of the second law, the
combined cycle cannot have as its sole result the conversion of heat into
work; given our arrow conventions, we conclude $W=Q_R\leq0$. Hence
\begin{equation*}
T_R\oint\frac{\delta Q}{T}\leq0.
\end{equation*}
That $T_R$ is non-negative proves the theorem.
\end{proof}
\end{theorem}
We are finally in a position to define\index{entropy} entropy.
In particular, if we further specify that the cycle is reversible,
we will find that $\delta Q\to-\delta Q$ when we switch the direction of the
cycle. It follows that for a reversible cycle
\begin{equation}
\oint\frac{\delta Q}{T}=0.
\end{equation}
This tells us that the process is path-independent, which means we can define a
new function $S$ with
\begin{equation}
S(B)-S(A)=\int_A^B\frac{\delta Q}{T}.
\end{equation}
Hence we learn for reversible\footnote{I've seen examples of people defining
reversible\index{reversible} systems to be those that have $Q=T\dd{S}$.},
quasistatic\footnote{Lattice calculations are always done under the assumption
of equilibrium, which means it's quasistatic by definition.} changes
\begin{equation}
Q=T\dd{S}.
\end{equation}
It follows that reversible, adiabatic processes are\index{isentropic} isentropic.
We also get this statement of the first law:\index{thermodynamics!first law}
\begin{equation}
\dd U = T\dd{S}-P\dd{V}.
\end{equation}
Finally, consider a possibly irreversible path from $A$ to $B$ that is closed
with a reversible one from $B$ to $A$. From Clausius's theorem, we get
\begin{equation}
\int_A^B\frac{\delta Q}{T}+\int_B^A\frac{\delta Q_{\text{rev}}}{T}\leq0
\end{equation}
or
\begin{equation}
\int_A^B\frac{\delta Q}{T}\leq S(B)-S(A).
\end{equation}
It follows that $\delta S\geq\delta Q/T$ for any process. In particular consider
two systems that are originally isolated from each other and separately in
equilibrium, then allow them to exchange heat. Since there is no work, we must
have $\delta Q_{\text{total}}=0$ by the first law, and hence $\delta
S_{\text{total}}\geq0$. This is the more familiar statement of
the {\it second law of thermodynamics}.\index{thermodynamics!second law}
\begin{theorem}{Second law of thermodynamics}{}
If two or more adiabatically isolated systems are brought into thermal contact,
their combined entropy never decreases.
\end{theorem}
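A minimal numerical illustration of the above (Python, my own sketch): when heat $Q$ leaves a reservoir fixed at $T_H$ and enters one fixed at $T_C$, the total entropy changes by $\delta S = Q/T_C - Q/T_H$, which is positive whenever $T_H > T_C$:

```python
def total_entropy_change(q, t_hot, t_cold):
    """Total entropy change when heat q flows from a reservoir fixed at
    t_hot to a reservoir fixed at t_cold: dS = q/t_cold - q/t_hot."""
    return q / t_cold - q / t_hot

ds = total_entropy_change(100.0, 400.0, 300.0)  # q in joules, T in kelvin
assert ds > 0.0  # heat flowing from hot to cold increases total entropy
```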
In the context of statistical physics, this statement will follow from
probabilistic considerations. There we will see that systems in equilibrium have
maximum entropy. What about statements of minimum entropy? This is covered by
the {\it third law of thermodynamics}, which is an observation.\index{thermodynamics!third law}
\begin{theorem}{Third law of thermodynamics}{}
$$
\lim_{T\to0}S(T)=0
$$
\end{theorem}
%\subsection{Entropy from information} (can connect to probability chapter)
\section{Extensive and intensive variables}\label{sec:intensiveextensive}
\index{intensive}\index{extensive}
Consider for a moment an ideal gas of $N$ particles trapped in a box of volume
$V$ and temperature $T$. Now imagine that you create an exact copy of that system,
concatenating it with the original system. This creates a new system.
It is instructive to ask what must happen to some of the thermodynamic variables
during this process.
The new system must have volume $2V$ and $2N$ particles. On the other hand, we
know that if we take the two original systems and put them into thermal contact,
since the two systems had the same temperature, they are already in equilibrium,
and thus the concatenated system will also have temperature $T$. We could
generalize this process to concatenating, say, half a copy, or by concatenating
$\pi$ copies. The volume of the combined system will be $1.5V$ or $(1+\pi)V$,
and $N$ would scale similarly. Since $V$ and $N$ behave this way, we identify
them as our first two examples of {\it extensive} variables.
The temperature $T$, which is independent of this sort of scaling, is identified
as our first example of an {\it intensive} variable.
Moving forward, let us say that $V$ and $N$ are extensive by definition and that
$T$ is intensive by definition. Now consider a set of $M$ extensive variables
$\{X_i\}$ and a set of $R$ intensive variables $\{Y_j\}$. Then we further designate
a function $F$ of these variables as extensive if and only if $\Forall\lambda\in\R$
\begin{equation}\label{eq:extensive}
F(\lambda X_1,\dots,\lambda X_M,Y_1,\dots,Y_R)=\lambda F(X_1,\dots,X_M,Y_1,\dots,Y_R).
\end{equation}
A function of the above form is said to be {\it homogeneous of degree
one}\index{homogeneous} w.r.t. its arguments, so extensive quantities are
homogeneous of degree one w.r.t. other extensive quantities. If $F$ instead obeys
\begin{equation}\label{eq:intensive}
F(\lambda X_1,\dots,\lambda X_M,Y_1,\dots,Y_R)= F(X_1,\dots,X_M,Y_1,\dots,Y_R),
\end{equation}
then it is intensive. These rules together are usually succinctly summarized in
sentences like ``extensive quantities scale linearly with the system size and
intensive quantities are independent of system size''.
Since $V$ is by definition extensive, one can construct an intensive quantity
from an extensive one by turning the extensive quantity into a density. For
example the {\it particle number density}\index{number density}
\begin{equation}
n\equiv \frac{N}{V}
\end{equation}
is clearly intensive. Whether a variable is intensive or extensive can also be
surmised from known relationships between it and other thermodynamic variables.
For example the equipartition theorem tells us that a classical system of $N$
particles, each with $f$ degrees of freedom, maintained at temperature $T$, has
internal energy
\begin{equation}
U=\frac{Nf}{2}k_B T,
\end{equation}
from which it follows that $U$ is extensive.
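The scaling definitions above are easy to verify for the equipartition energy. In this Python sketch (function and constant names my own), scaling the extensive variable $N$ by $\lambda$ scales $U$ by $\lambda$, while the intensive $T$ is untouched:

```python
K_B = 1.380649e-23  # Boltzmann constant in J/K

def internal_energy(n_particles, dof, temperature):
    """Equipartition theorem: U = (N f / 2) k_B T."""
    return 0.5 * n_particles * dof * K_B * temperature

u = internal_energy(1e23, 3, 300.0)
# Doubling the extensive variable N doubles U; T is held fixed throughout.
assert abs(internal_energy(2e23, 3, 300.0) - 2.0 * u) < 1e-9 * u
```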
\section{Equations of state}\label{sec:EoS}
Probably the equation of state that you are most familiar with is the
ideal gas law\index{ideal gas law},
\begin{equation}
PV=Nk_BT.
\end{equation}
Based on the discussion in \secref{sec:intensiveextensive}, we see that $P$ is
an intensive quantity.
There is an equation of state for each intensive variable required for the
description of thermodynamic states. For example from the
first law of thermodynamics,\index{thermodynamics!first law}
\begin{equation}\label{eq:fslaw}
\dd{U}=T\dd{S}-P\dd{V}+\sum_i\mu_i\dd{N}_i,
\end{equation}
we know that\footnote{It can be quite confusing to track which variables
are held fixed and allowed to vary doing these partial derivatives.
This matters because sometimes different choices of control parameters
are related to each other through a constraint.
You can get a hint for example by looking at equations like
\equatref{eq:fslaw}, which is a reminder we are thinking of
$U$ as a function of the set \{$S$, $V$, $\vec{N}$\}. In this context,
partial derivatives with respect to one of those variables must
guarantee the others in that set are fixed.}
\begin{equation}
T=\pdv{U}{S}.
\end{equation}
This intensive variable $T$ depends only on the extensive variables;
generally we could write
\begin{equation}
T=T(S,V,N).
\end{equation}
This is what we call an {\it equation of state}\index{equation of state}
(EoS). The arguments are sometimes referred to as
{\it control parameters}\index{control parameter}. Knowing every equation
of state is enough to reconstruct the fundamental equation, and therefore
enough to determine the physics of the system.
In the ideal gas EoS, one makes two assumptions, namely:
\begin{enumerate}
\item The particles themselves have zero volume; or alternatively,
their volume can be neglected.
\item There is no interaction between the particles.
\end{enumerate}
Van der Waals was the first to attempt to correct for these
assumptions~\cite{van_der_waals_over_1873}. The Van der Waals EoS looks like
\begin{equation}\label{eq:vanderwaals}
P=\frac{NkT}{V-b}-\frac{A}{V^2}.
\end{equation}
We see two corrections. First, the parameter $b$, which is sometimes
called the {\it excluded volume}\index{excluded volume}, represents the total
volume occupied by the particles themselves. Hence $b\propto NV_0$, where $V_0$
indicates the volume of one particle of the gas species. The second term
represents a pairwise interaction between particles whose strength
is parameterized by $A$. When $A>0$, this interaction is attractive.
That the interaction term is scaled by $1/V^2$ can be interpreted as follows:
The density of particles should increase the importance of this term, giving
rise to one factor of $1/V$. Since the interaction is pairwise, it will be scaled
by the fraction of the time particle $i$ interacts non-negligibly with particle
$j$, which is also proportional to the particle density, leading to another
factor of $1/V$. In the limit $A,b\to0$, or alternatively $V\to\infty$,
one recovers the ideal gas EoS.
The Van der Waals pressure cannot be neatly classified as an intensive quantity,
except in the thermodynamic limit.
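A small Python sketch of \equatref{eq:vanderwaals} (function names mine) makes the ideal-gas limit explicit, and shows that an attractive interaction $A>0$ lowers the pressure:

```python
K_B = 1.380649e-23  # Boltzmann constant in J/K

def pressure_vdw(n, t, v, a=0.0, b=0.0):
    """Van der Waals pressure P = N k_B T / (V - b) - a / V**2, with b the
    excluded volume and a the pairwise-interaction strength."""
    return n * K_B * t / (v - b) - a / v**2

def pressure_ideal(n, t, v):
    """Ideal gas law P = N k_B T / V."""
    return n * K_B * t / v

n, t, v = 1e20, 300.0, 1e-3  # particles, kelvin, cubic meters
# With a = b = 0 the Van der Waals EoS reduces to the ideal gas law.
assert pressure_vdw(n, t, v) == pressure_ideal(n, t, v)
# Attractive interactions (a > 0) reduce the pressure.
assert pressure_vdw(n, t, v, a=1e-7) < pressure_ideal(n, t, v)
```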
\section{Rules for differentials}\label{sec:thermdiff}
Especially in statistical physics one encounters many involved combinations
of partial derivatives, so it is worthwhile to remind yourself what the
rules are. Here we list some useful results, which one can also
find in e.g. Ref.~\cite{huang_introduction_2001}.
When first learning about partial derivatives, one considers functions
of independent variables, for example when $x$, $y$, and $z$ represent
spatial directions. And so when you think of some
regular\footnote{That is, analytic or differentiable.} \index{function!regular}
function $f(x,y,z)$
and calculate its partial derivative,
\begin{equation}
\pdv{f}{x},
\end{equation}
it means that all other variables, in this case $y$ and $z$, are treated as
constants.
A subtle issue occurs when we consider $f$ to be a constraint on $x$, $y$,
and $z$; i.e. when $f(x,y,z)=0$. In this case, we have two independent
quantities. If we like, we can think of this constraint as a two-dimensional
manifold, and when taking partial derivatives, we think of moving some direction
along that manifold. In such a case, it is important to specify which quantity
is being held fixed. For example if we are interested in $\partial_x y$, we
could hold $z$ fixed, but we more generally hold some function
$w(x,y,z)$ fixed. In this case we obtain
\begin{equation}\label{eq:chainw}
\left(\pdv{x}{y}\right)_w
\left(\pdv{y}{z}\right)_w
=\left(\pdv{x}{z}\right)_w
\end{equation}
and
\begin{equation}\label{eq:inv}
\left(\pdv{x}{y}\right)_w
=\left(\pdv{y}{x}\right)_w^{-1}.
\end{equation}
Note that \equatref{eq:chainw} and \eqref{eq:inv} always have the same
fixed quantity $w$. This is to be distinguished from the following
situation.
\begin{proposition}{The constrained chain rule}{chainconstrain}
$$
\left(\pdv{x}{y}\right)_z
\left(\pdv{y}{z}\right)_x
\left(\pdv{z}{x}\right)_y
=-1
$$
\begin{proof}
From the constraint, we have
\begin{equation*}
0=\dd f=
\left(\pdv{f}{x}\right)_{y,z}\dd x
+\left(\pdv{f}{y}\right)_{x,z}\dd y
+\left(\pdv{f}{z}\right)_{x,y}\dd z.
\end{equation*}
If we hold e.g. $z$ fixed, then $\dd z=0$, and therefore
\begin{equation*}
\left(\pdv{x}{y}\right)_z=-\left(\pdv{f}{y}\right)_{x,z}\left(\pdv{f}{x}\right)_{y,z}^{-1}.
\end{equation*}
Similar equations hold for the other two derivatives, and multiplying them
together yields the desired result.
\end{proof}
\end{proposition}
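The constrained chain rule above can be checked on the ideal gas constraint $f(P,V,T)=PV-Nk_BT=0$, whose three partial derivatives are known analytically. A minimal Python sketch (my own naming conventions):

```python
# Verify (dP/dV)_T (dV/dT)_P (dT/dP)_V = -1 on the ideal-gas
# constraint f(P, V, T) = P V - N k_B T = 0.
K_B = 1.380649e-23  # Boltzmann constant in J/K
N = 1e22            # particle number

def dP_dV_at_T(v, t):   # from P = N k_B T / V
    return -N * K_B * t / v**2

def dV_dT_at_P(p):      # from V = N k_B T / P
    return N * K_B / p

def dT_dP_at_V(v):      # from T = P V / (N k_B)
    return v / (N * K_B)

v, t = 1e-3, 300.0
p = N * K_B * t / v  # point on the constraint surface
product = dP_dV_at_T(v, t) * dV_dT_at_P(p) * dT_dP_at_V(v)
assert abs(product + 1.0) < 1e-12
```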
Often in statistical physics it is useful to be able to switch between different
sets of control parameters. For instance one may be interested in a system where
one can control the temperature $T$, along with some set of $N$
potentials\footnote{For instance chemical potentials.},
which we represent here as a vector $\vec\alpha$. On the other hand, it may only
be possible in an experiment to fix some other $N$ control parameters that in
general change with $T$ and $\vec\alpha$; call them $\vec{x}(T,\vec\alpha)$. We
would like to conveniently switch between $\vec\alpha$ and $\vec{x}$.
\begin{proposition}{}{}
\begin{equation*}
\left( \pdv{f}{T} \right)_{\vec{x}} = \left(\pdv{f}{T}\right)_{\vec\alpha}
+\sum_\alpha
\left(\pdv{\alpha}{T}\right)_{\vec{x}}
\left(\pdv{f}{\alpha}\right)_{T,~\vec{\beta},~\beta_i\neq\alpha}
\end{equation*}
\begin{proof}
Writing $f$ as a function of $(T,\vec\alpha)$, we can write
\begin{equation*}
\dd f=\dd{T} \left( \pdv{f}{T} \right)_{\vec{\alpha}}
+ \sum_\alpha\dd\alpha\left(\pdv{f}{\alpha}\right)_{T,~\vec{\beta},~\beta_i\neq\alpha},
\end{equation*}
where in the sum we indicate that we hold all control parameters $\beta_i$ of
$\vec{\beta}$ except for $\alpha$ fixed. Similarly,
\begin{equation*}
\dd\alpha=\dd{T} \left( \pdv{\alpha}{T} \right)_{\vec{x}}
+ \sum_x\dd x\left(\pdv{\alpha}{x}\right)_{T,~\vec{y},~y_i\neq x}.
\end{equation*}
Plugging $\dd \alpha$ into $\dd f$ completes the proof.
\end{proof}
\end{proposition}
Next we derive a useful fact concerning derivatives of intensive variables
w.r.t. $V$. Let $G=Vg$ be extensive with $g$ intensive.
Consider the extensive function $F$ of $G$ and $V$.
From \equatref{eq:extensive} we know that $\Forall\lambda\in\R$
\begin{equation}
F(\lambda G,\lambda V)=\lambda F(G,V).
\end{equation}
If we now choose $\lambda = 1/V$ and let $f=F/V$, we find
\begin{equation}\label{eq:fintensive}
f(g,1) V=F(G,V).
\end{equation}
Since $F$ was extensive, we know $f$ must be intensive. This
rescaling makes clear that $f$ has no remaining explicit volume dependence.
It follows that
\begin{equation}\label{eq:dfdVintensive}
\atFixed{\pdv{f}{V}}{g}=0.
\end{equation}
Note that if we had instead held $G$ fixed, $g$ would be allowed to compensate
changes in $V$, and we are no longer guaranteed zero when taking the partial
derivative. On the other hand, the argument leading to
\equatref{eq:dfdVintensive} does not depend on the number of
intensive quantities held fixed, and hence generalizes:
\begin{proposition}{}{}
\begin{equation*}
\atFixed{\pdv{f}{V}}{{\rm all~intensive~variables}}=0.
\end{equation*}
\end{proposition}
\section{Legendre transforms}\index{Legendre transformation}
Equation~\eqref{eq:fslaw} tells us that we can think of the internal energy
$U$ of a system in equilibrium at $(T,P,\mu)$ as a function of $S$, $V$, and
$N$. However, one of the control parameters, such as $S$, may be
difficult or impossible to measure, and therefore we would rather
think in terms of the more accessible quantity $T$, which
is the derivative of $U$ with respect to $S$.
Hence we want
\begin{enumerate}
\item to look at $U$ in terms of a derivative with respect to $S$ rather
than $S$ itself; moreover
\item we do not want to lose any information we had before, i.e. we want this
process to be invertible.
\end{enumerate}
These are the purposes of a Legendre transformation. The second point
may seem too obvious to state, but it's worth emphasizing here because
what makes thermodynamic potentials such as $U$ special is that you are
supposed to be able to determine state variables like $T$ from them.
Since no information is lost, Legendre transforms guarantee that
thermodynamic potentials get transformed to other thermodynamic potentials.
\index{thermodynamic!potential}\index{potential!thermodynamic}
Before we define a Legendre transformation, let us look at an example
due to Markus Deserno~\cite{deserno} where a naive transformation can go
wrong and information can be lost.
\begin{example*}{}
We consider a function $y(x)$ and define a new variable
\begin{equation}\label{eq:xlegendre}
p\equiv y'(x).
\end{equation}
In order to accomplish goal (1) above, one might naively solve
\equatref{eq:xlegendre} for $x$, obtaining the function $x(p)$
and then plug this back into $y(x)$ to obtain
\begin{equation}
Y(p)=y\left(y'^{-1}(p)\right).
\end{equation}
To see that this procedure destroys information, consider the example
\begin{equation}\label{eq:badlegendre}
y(x)=\frac{1}{2}(x-x_0)^2.
\end{equation}
The derivative is $p=y'(x)=x-x_0$, and hence $x=p+x_0$. Plugging this
into \equatref{eq:badlegendre} we get the function
\begin{equation}
Y(p)=y(x(p))=\frac{1}{2}\left(x(p)-x_0\right)^2=\frac{1}{2}p^2.
\end{equation}
Evidently all functions of the form~\eqref{eq:badlegendre} get transformed
to the same function regardless of the value of $x_0$; therefore there is
no way starting from $Y(p)$ to figure out what $x_0$ was. Hence
information was destroyed.
\end{example*}
Now on to the definition. Recall that a function $f$ is {\it convex}
\index{function!convex} in a region if the graph of the function lies
below the line segment connecting any two points in that region. It is
{\it concave}\index{function!concave} if $-f$ is convex. Consider
a function $y:\R\to\R$. Then the
{\it Legendre transform} is defined by
\begin{equation}
Y(p)=\begin{cases}
\min_x [y(x)-xp] & \text{if $y$ is convex}\\
\max_x [y(x)-xp] & \text{if $y$ is concave.}
\end{cases}
\end{equation}
This definition makes sense in view of goal (1) at the beginning of this
section, assuming $y$ is differentiable. To see this, note that the maximum
or the minimum corresponds to a critical point, and so
\begin{equation}\label{eq:legendremin}
0=\frac{d}{dx}\left[y(x)-xp\right]=y'(x)-p.
\end{equation}
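To see that this definition also achieves goal (2), we can revisit the quadratic example~\eqref{eq:badlegendre}. Since $y$ is convex, the minimum occurs at $y'(x)=x-x_0=p$, i.e.\ at $x=p+x_0$, so
\begin{equation*}
Y(p)=y(p+x_0)-(p+x_0)p=\frac{1}{2}p^2-(p+x_0)p=-\frac{1}{2}p^2-x_0p.
\end{equation*}
In contrast to the naive transform, $Y$ retains the parameter $x_0$, which can be recovered as $x_0=-Y'(0)$.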
We now state two useful facts about Legendre transforms. These can
be relatively straightforwardly proven. We emphasize that Legendre
transformations are only defined for concave or convex functions, and
that the concavity or convexity is important to prove the second point,
because it guarantees that the derivative is monotonic.
\begin{proposition}{}{}
\begin{enumerate}
\item The Legendre transformation of a convex function is concave
and vice versa.
\item The Legendre transformation is its own inverse.
\end{enumerate}
\end{proposition}
Now that we have some intuition for why one might want to do a
Legendre transform, we are going to see how some other useful
thermodynamic potentials arise from carrying them out.
\subsection{Helmholtz free energy}
\index{free energy!Helmholtz}
Let us now return to our issue of dropping $S$ in favor
of $T$. According to the previous section, if we Legendre transform
$U$ with respect to $S$, the extremization guarantees that the new
independent variable is $T=\pdv{U}{S}$. Calling
this new function $F$, we obtain
\begin{equation}\label{eq:helmholtz}
F(V,T,N)=U-TS.
\end{equation}
This new function is guaranteed to be a thermodynamic potential, and
we give it a special name: the {\it Helmholtz free energy}.
From \equatref{eq:helmholtz} and the first law, we get
\begin{equation}\label{eq:helmholtz1st}
\dd F = -P\dd{V} -S\dd{T} +\sum_i\mu_i\dd{N}_i.
\end{equation}
We can derive some useful thermodynamic relations from these. For
instance\footnote{In \secref{sec:thermdiff} we outlined the importance of
specifying which quantities are held fixed. Sometimes it is clear from context
what is held fixed. For example \equatref{eq:helmholtz1st} makes clear
that $F(V,T,\vec{N})$. In that context, the partial derivative in
\equatref{eq:dfdt} w.r.t. $T$ must be carried out at fixed $V$ and $\vec{N}$.
Of course it is good practice to write what is fixed explicitly, but some
authors including me will drop that notation if they deem it clear, not
relevant, or if they are being lazy.}
\begin{equation}\label{eq:dfdt}
-T^2\pdv{(F/T)}{T}=U=\pdv{(\beta F)}{\beta},
\end{equation}
where $\beta=1/T$ in natural units.
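To see the first equality in \equatref{eq:dfdt} explicitly, apply the quotient rule and use $\atFixed{\pdv{F}{T}}{V,\vec{N}}=-S$, which follows from \equatref{eq:helmholtz1st}:
\begin{equation*}
-T^2\pdv{(F/T)}{T}=-T^2\left(\frac{1}{T}\pdv{F}{T}-\frac{F}{T^2}\right)
=F-T\pdv{F}{T}=F+TS=U.
\end{equation*}
The second equality follows analogously, differentiating $\beta F$ with respect to $\beta=1/T$.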
\subsection{Enthalpy}\index{enthalpy}
The {\it enthalpy} is given by the Legendre transformation
\begin{equation}\label{eq:enthalpy}
H(S,P,N)=U+PV.
\end{equation}
From \equatref{eq:enthalpy} and the first law we obtain
\begin{equation}
\dd H = T\dd{S} +V\dd{P} +\sum_i\mu_i\dd N_i.
\end{equation}
\section{Ideal quantum gas}
\subsection{Canonical formulation}
\index{gas!bose}\index{gas!fermi}
Throughout, we treat Bose and Fermi statistics in parallel by means of the sign
\begin{equation}
\eta=
\begin{cases}
+1 & \text{bosonic gas} \\
-1 & \text{fermionic gas}.
\end{cases}
\end{equation}
\subsection{Grand canonical formulation}
% keep discussion general to mu_i N_i
\index{fugacity}
The {\it fugacity} is defined as
\begin{equation}\label{eq:fugacity}
z=e^{\mu/T}.
\end{equation}
The {\it grand potential} is\index{grand potential}
\begin{equation}\label{eq:grand}
\Phi\left(V,T,\vec{\mu}\right)=U-TS-\sum_i\mu_i N_i=-T\log\grandZ
\end{equation}
from which one gets
\begin{equation}\label{eq:1stlawgrand}
\dd\Phi=-P\dd{V}-S\dd{T}-\sum_i N_i\dd\mu_i.
\end{equation}
Since $\log\grandZ$ is extensive, it must be that
\begin{equation}
-T\log\grandZ=kV
\end{equation}
for some volume-independent constant $k$. That $k$ is volume independent usually
holds in the thermodynamic limit\footnote{Compare with, e.g., the Van der
Waals pressure \eqref{eq:vanderwaals}.}. Hence\footnote{Sometimes it is convenient to work
with $\muh\equiv\mu/T$ and $T$ instead of $\mu$ and $T$. This equation
holds for both sets of control variables since both $\mu$ and $T$ are
held fixed.} from \equatref{eq:1stlawgrand}
\begin{equation}
P=\lim_{V\to\infty}-\pdv{\Phi}{V}=-k,
\end{equation}
which means we can identify\footnote{Often the thermodynamic limit is taken as
implicit and not always directly written.}
\begin{equation}\label{eq:pgrand}
P=\lim_{V\to\infty}\frac{T}{V}\log\grandZ.
\end{equation}
Reorganizing \equatref{eq:grand} using \equatref{eq:pgrand}
yields the {\it Gibbs-Duhem relation}
\index{Gibbs-Duhem relation}
\begin{equation}
s=\frac{\epsilon}{T}+\frac{P}{T}-\sum_i\frac{\mu_i}{T}n_i,
\end{equation}
where $s\equiv S/V$ is the entropy density, and $\epsilon$ and $n_i$
are the energy and number densities.
Taking a derivative of \equatref{eq:pgrand} w.r.t. $\mu_i$ and
using \equatref{eq:1stlawgrand} yields
another often useful relation,
\begin{equation}
\frac{N_i}{V}=\pdv{P}{\mu_i}.
\end{equation}
\section{Relativistic quantum gases}
Relativistic quantum gases are especially interesting for us
to consider because of their application to the low-temperature
region of the QCD phase diagram, where the medium
behaves approximately like a relativistic gas of hadronic bound states.
This allows for a cross check against lattice QCD at low temperatures
and at finite densities. Following some lecture notes by F. Karsch,
we are going to derive the
pressure. From this we can derive other thermodynamic quantities
using standard thermodynamic relations.
We use units $\hbar=c=k_B=1$ and consider a gas of a single species
with spin $s$ and rest mass $m$. Working in the rest frame of the gas,
we have
\begin{equation}\label{eq:dispersion}
E=\sqrt{p^2+m^2}.
\end{equation}
From the grand canonical formulation,
doing the momentum integral in spherical coordinates, the pressure is
\begin{equation}
\frac{P}{T}=-\frac{4\pi\eta g}{(2\pi)^3}\int_0^\infty
\dd{p}p^2\log\left(1-\eta z e^{-E/T}\right),
\end{equation}
where $g=2s+1$ is the {\it degeneracy factor}. \index{degeneracy factor}
Solving the dispersion relation
for $p$ and substituting it into the pressure gives
\begin{equation}
\frac{P}{T}=-\frac{4\pi\eta g}{(2\pi)^3}\int_m^\infty
\dd{E}E\left(E^2-m^2\right)^{1/2}\log\left(1-\eta z e^{-E/T}\right).
\end{equation}
Expanding the logarithm yields
\begin{equation}\label{eq:pressE}
\frac{P}{T}=\frac{4\pi g}{(2\pi)^3}\sum_{k=1}^\infty\eta^{k+1}\frac{z^k}{k}\int_m^\infty
\dd{E}E\left(E^2-m^2\right)^{1/2}e^{-kE/T},
\end{equation}
which is of course only valid for $z e^{-E/T}<1$.
Let us now evaluate the integral in \equatref{eq:pressE}. We introduce
$x\equiv E/m$. Performing this variable change and integrating by parts gives
\begin{equation}\label{eq:pressint}
\int_m^\infty\dd{E}E\left(E^2-m^2\right)^{1/2}e^{-kE/T}
=\frac{km^4}{3T}\int_1^\infty\dd x\left(x^2-1\right)^{3/2}e^{-mkx/T}.
\end{equation}
This integral can now be expressed in terms of the
{\it modified Bessel functions of the second kind}\index{Bessel
function!modified, second kind}
\begin{equation}
K_\nu(a)=\frac{\pi^{1/2}(a/2)^\nu}{\Gamma(\nu+1/2)}
\int_1^\infty\dd x(x^2-1)^{\nu-1/2}e^{-ax}.
\end{equation}
In particular we see upon comparison with \equatref{eq:pressint}
that our integral contains $K_2(mk/T)$. Combining everything,
using $\Gamma(5/2)=3\pi^{1/2}/4$, we obtain
\begin{equation}\label{eq:pressrelQM}
\frac{P}{T}=\frac{m^2gT}{2\pi^2}\sum_{k=1}^\infty\frac{\eta^{k+1}z^k}{k^2}
K_2\left(\frac{mk}{T}\right),
\end{equation}
which is a form that is well suited for computer calculations\footnote{What
one does is keep some number of terms of the above sum, usually depending
on the mass $m$; in particular $K_2$ falls off sharply for large $m$.
For very large $m$, one may choose to keep only the $k=1$ contribution,
which is the\index{Boltzmann approximation} {\it Boltzmann approximation}.}.
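To illustrate such a computer calculation, here is a minimal sketch in Python with SciPy (the parameter values, function names, and the SciPy dependency are illustrative choices, not part of the text). It evaluates the truncated series for the pressure, compares it against the $k=1$ Boltzmann approximation for $m\gg T$, and checks $n=\pdv*{P}{\mu}$ against a finite difference:

```python
import numpy as np
from scipy.special import kv  # modified Bessel functions of the second kind

def pressure(T, m, mu, g=1.0, eta=-1, kmax=20):
    """Truncated series: P = (g m^2 T^2 / 2 pi^2)
    * sum_k eta^(k+1) z^k / k^2 * K_2(m k / T), with fugacity z = exp(mu/T)."""
    k = np.arange(1, kmax + 1)
    z = np.exp(mu / T)
    terms = eta**(k + 1) * z**k / k**2 * kv(2, m * k / T)
    return g * m * m * T * T / (2 * np.pi**2) * terms.sum()

def density(T, m, mu, g=1.0, eta=-1, kmax=20):
    """Number density n = dP/dmu at fixed T, differentiated term by term."""
    k = np.arange(1, kmax + 1)
    z = np.exp(mu / T)
    terms = eta**(k + 1) * z**k / k * kv(2, m * k / T)
    return g * m * m * T / (2 * np.pi**2) * terms.sum()

# Arbitrary example values in natural units, with m >> T so that the
# k = 1 (Boltzmann) term dominates.
m, T, mu = 1.0, 0.15, 0.05

p_full  = pressure(T, m, mu)           # 20 terms
p_boltz = pressure(T, m, mu, kmax=1)   # Boltzmann approximation

# Check n = dP/dmu against a central finite difference.
h = 1e-6
n_fd = (pressure(T, m, mu + h) - pressure(T, m, mu - h)) / (2 * h)

print(p_full, p_boltz, n_fd, density(T, m, mu))
```

For these parameters the Boltzmann approximation already agrees with the 20-term sum to well below the percent level, since each additional term is suppressed by roughly $z\,e^{-m/T}$.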
Other thermodynamic quantities can be derived straightforwardly
from the pressure in this form. In this context it is useful to know
the following relation for derivatives of the $K_\nu$:
\begin{equation}
\pdv{K_\nu(z)}{z}=-K_{\nu-1}(z)-\frac{\nu}{z}K_\nu(z).
\end{equation}
Using~\equatref{eq:dfdt} one then gets for the energy density
\begin{equation}\label{eq:edensityQM}
\epsilon=\frac{m^2gT}{2\pi^2}\sum_{k=1}^\infty\frac{\eta^{k+1}z^k}{k^2}
\left( mk K_1\left(\frac{mk}{T}\right)
+ 3TK_2\left(\frac{mk}{T}\right) \right).
\end{equation}
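A quick numerical consistency check of \equatref{eq:edensityQM} is to compare it against $\epsilon=-\pdv*{(\beta P)}{\beta}$ evaluated by finite differences at fixed fugacity. The following sketch (Python with SciPy; parameter values are arbitrary) does exactly that:

```python
import numpy as np
from scipy.special import kv  # modified Bessel functions of the second kind

def pressure(T, m, mu, g=1.0, eta=-1, kmax=20):
    """Truncated pressure series, with fugacity z = exp(mu/T)."""
    k = np.arange(1, kmax + 1)
    z = np.exp(mu / T)
    terms = eta**(k + 1) * z**k / k**2 * kv(2, m * k / T)
    return g * m * m * T * T / (2 * np.pi**2) * terms.sum()

def energy_density(T, m, mu, g=1.0, eta=-1, kmax=20):
    """Truncated energy-density series (eq. edensityQM)."""
    k = np.arange(1, kmax + 1)
    z = np.exp(mu / T)
    terms = (eta**(k + 1) * z**k / k**2
             * (m * k * kv(1, m * k / T) + 3 * T * kv(2, m * k / T)))
    return g * m * m * T / (2 * np.pi**2) * terms.sum()

# Arbitrary example parameters in natural units.
m, T, muhat = 1.0, 0.15, 0.2   # muhat = mu / T is held fixed below

# epsilon = -d(beta P)/d(beta) at fixed fugacity, i.e. fixed muhat.
beta, h = 1.0 / T, 1e-4

def betaP(b):
    return b * pressure(1.0 / b, m, muhat / b)

eps_fd = -(betaP(beta + h) - betaP(beta - h)) / (2 * h)
print(eps_fd, energy_density(T, m, muhat * T))
```

Holding $\muh$ fixed in the finite difference is exactly the "fugacity blind to $T$ derivatives" bookkeeping discussed below.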
In the context of lattice calculations, which we will discuss in more
detail in \chref{ch:realPhys}, it is often more convenient to switch
from the variable $\mu$ to $\muh\equiv\mu/T$. Besides the fact that
all quantities directly going into and coming out of lattice calculations
must be dimensionless, this also has the advantage of making the
fugacity blind to partial derivatives w.r.t. $T$, which are then
interpreted as being taken at fixed $\muh$.
\subsection{Hadron resonance gas}\label{sec:HRG}\index{gas!hadron resonance}
Knowing the pressure \equatref{eq:pressrelQM}, we are now ready to write down
some expressions for the {\it hadron resonance gas} (HRG) model,
following Ref.~\cite{karsch_probing_2011}. This model provides a useful
benchmark against which lattice results can be compared below $\Tpc$.
The HRG model is a low-temperature model in the sense that we imagine working
in a phase where quarks are confined, so that the only degrees of freedom are
hadronic bound states. To a good approximation, this means we need only include
stable mesons, baryons, and their antiparticles.
\subsection{Ideal Fermi gas}\label{sec:fermi}\index{gas!ideal Fermi}
At high temperatures $T\gg\Tpc$, all the hadrons are melted, and the system is
asymptotically free. Therefore a reasonable model is a gas of non-interacting
quarks and anti-quarks. This can also be obtained from \equatref{eq:pressrelQM}
by setting $\eta=-1$. For a single species of quark, one adds together
contributions in the integrand for both particles and antiparticles.
After performing an integration by parts, you get a simple expression for the
pressure~\cite{hegde_lattice_2008}.
This can be used to compare against lattice results at high $T$.
\section{Bulk properties of matter}
One of the experimental goals of statistical physics is to predict some
properties of thermodynamic systems. Historically one considered, e.g.,
air, which is a gas of particles. In these research notes, the focus
is rather on the bulk thermodynamics of strongly interacting matter.
A commonly used procedure is to calculate the pressure, which gives
the equation of state (EoS), and to derive other thermodynamic quantities from it.
In this section we will discuss some of these quantities.
Altogether, they are often referred to as {\it material
parameters}\index{material parameter}.
The {\it speed of sound}\index{speed of sound} at fixed control parameter $X$
is given by
\begin{equation}
c_X^2=\atFixed{\pdv{P}{\epsilon}}{X}.
\end{equation}
This is not the only expression one might encounter for a sound speed.
Qualitatively, the speed of sound should capture something like
\begin{equation}
c_X^2=\frac{{\rm elasticity}}{{\rm inertia}}.
\end{equation}
If a material is quick to return to its original shape after deformation, a sound
wave in the material, which is caused by these kinds of movements, will have a
higher speed. Conversely, if it is harder to move the constituents of the
material, i.e. if it has a higher inertia, the sound speed will decrease.
The {\it compressibility}\index{compressibility} at fixed $X$ is
\begin{equation}
\kappa_X=-\frac{1}{V}\atFixed{\pdv{V}{P}}{X},
\end{equation}
which captures how much the volume changes under a change of pressure.
Since the volume decreases when the pressure increases, the minus sign
ensures that $\kappa_X$ is positive. Substances that take a lot of
pressure to change their volume are not very compressible.
The {\it thermal expansion coefficient}\index{thermal expansion coefficient}
is
\begin{equation}
\alpha=\frac{1}{V}\atFixed{\pdv{V}{T}}{P},
\end{equation}
and transparently characterizes how much the volume changes under temperature
changes.
The {\it specific heat} at constant volume\index{specific heat} is
\begin{equation}\label{eq:CV}
C_V=\atFixed{\pdv{U}{T}}{V},
\end{equation}
while the specific heat at constant pressure is
\begin{equation}
C_P=\atFixed{\pdv{H}{T}}{P}.
\end{equation}
The idea of a specific heat in general is to characterize how much heat
it takes to change the temperature. This is hidden in each of
these formulae; for instance, looking at \equatref{eq:CV}, we notice
that according to the first law, at fixed $V$ (and fixed $\vec{N}$),
$\dd U=T\dd{S}$, which is precisely the heat. The same is true of the specific heat
at constant pressure: here the Legendre transform, which adds $PV$,
switches the dependence from $V$ to $P$, and hence this constrained
partial derivative once again captures the change in heat.
We note that the above expressions for material parameters were
written for a thermodynamic system of only two variables. These expressions
can be generalized to systems with, e.g., multiple particle species.
A common choice in that case is to hold fixed the particle numbers.
\subsection{Maxwell relations}\index{Maxwell relation}
Each of the above material parameters is essentially a derivative of the form
\begin{equation}
\atFixed{\pdv{X}{Y}}{Z,W,...},
\end{equation}
where variables $Z$, $W$, and so on are held fixed. In principle, the
number of such derivatives is large, but in practice, the fact
that partial derivatives of the same function at the same fixed
control parameters commute implies there is a lot of redundancy.
For example:
\begin{equation}
-\atFixed{\pdv{P}{S}}{V,N}
=\atFixed{\frac{\partial^2 U}{\partial S\partial V}}{N}
=\atFixed{\frac{\partial^2 U}{\partial V\partial S}}{N}
=\atFixed{\pdv{T}{V}}{S,N}.
\end{equation}
The equality relating partial derivatives like the ones on the far left
and far right is an example of a {\it Maxwell relation}.
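Maxwell relations are straightforward to verify for any explicit potential. Here is a symbolic check (Python with SymPy) of the relation above, using the toy internal energy $U(S,V)=V^{-2/3}e^{S}$, chosen only because it is simple to differentiate (particle number is suppressed):

```python
import sympy as sp

S, V = sp.symbols('S V', positive=True)

# Toy internal energy, chosen only for illustration.
U = V**sp.Rational(-2, 3) * sp.exp(S)

T = sp.diff(U, S)    # T = (dU/dS)_V
P = -sp.diff(U, V)   # P = -(dU/dV)_S

# Maxwell relation: -(dP/dS)_V = (dT/dV)_S
lhs = -sp.diff(P, S)
rhs = sp.diff(T, V)
print(sp.simplify(lhs - rhs))  # prints 0
```

Both mixed second derivatives of $U$ equal $-\tfrac{2}{3}V^{-5/3}e^{S}$, as the commutation of partials guarantees.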
Leveraging Maxwell relations reveals several useful relationships
among material parameters. Since at heart they stem from the
commutation of partial derivatives with respect to two specific variables,
with all other control variables held fixed, they hold regardless of
the number of particle species. One can show:
\begin{equation}\label{eq:matparamconstraints}
\frac{\kappa_T}{\kappa_S}=\frac{C_P}{C_V},\qquad
C_P-C_V=\frac{\alpha^2 T}{\kappa_T},\qquad
\kappa_T-\kappa_S=\frac{\alpha^2 T}{n C_P}.