K. G. DENBIGH: Discussion - Comment on Barrett and Sober
In their Introduction the authors pose the question: Why do we
know more about the past than about the future? They are critical of previous
answers, such as those concerned with Schlick's familiar example of the
footprint in the sand. They go on to claim that they have established a
novel connection between the asymmetric knowledge we have of past and future,
on the one hand, and entropy change, on the other.
In my view it is necessary first to be clear about the meaning of 'entropy'.
There are many entropies and not all of them are unambiguously related
to the Second Law of thermodynamics. As a consequence there is much confusion
in the literature, and it has not been entirely avoided in Barrett and Sober's paper.
Although thermodynamic entropy had been clearly enough defined, in terms
of heat and temperature, by Clausius and Kelvin, Boltzmann proceeded to
give it a tentative interpretation as a measure of disorder. Rather similarly,
Willard Gibbs toyed with the idea that it might be a measure of 'mixed-up-ness'.
More recently entropy has been taken as a 'lack of information', or as
a measure of uncertainty. It is true that examples can be found to support
all of these interpretations. But to all of them there are important counter-examples
as was shown in Denbigh and Denbigh [1985] and Denbigh [1989]. And of course even a single counter-example
is sufficient to disprove any claim of identity with the original thermodynamic entropy.
There are various useful entropies for which no such claim is made,
and which are quite legitimate when they are named distinctively - e.g.,
as 'statistical mechanical entropy', 'Kolmogorov-Sinai entropy', 'Shannon
entropy', and so on. Even among the familiar statistical mechanical entropies
those named after Boltzmann/Planck and Gibbs respectively are not necessarily
equal to each other, and neither do they exhibit quite the same behaviour
as the thermodynamic entropy (Penrose [1970]).
Further confusion has arisen through a disregarding of the fact that
the significance and value of the expression $-\sum_i p_i \log p_i$
depends on what sort of probabilities the $p_i$ stand for
(Popper [1974]). Whereas in statistical thermodynamics the $p_i$
refer to the probabilities of molecular states, in the context of information
theory they usually refer to the probabilities of certain macrostates,
such as are given by the frequencies of occurrence of symbols in a message.
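To make this point concrete, here is a minimal sketch (in Python, with invented toy numbers) showing that the same formula yields quite different quantities according to what the $p_i$ are taken to refer to:

```python
import math

def shannon_entropy(probs, base=2):
    """Evaluate -sum_i p_i log p_i for a normalized distribution."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# (1) Information-theoretic reading: the p_i are frequencies of
#     occurrence of symbols in a message (toy figures).
symbol_freqs = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(symbol_freqs))       # 1.75 bits per symbol

# (2) Statistical-mechanical reading: the p_i are probabilities of
#     molecular states; here a toy two-level system.
microstate_probs = [0.9, 0.1]
print(shannon_entropy(microstate_probs))   # ~0.469 bits

# The arithmetic is identical; the physical significance of the
# result depends entirely on what the p_i stand for.
```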
Barrett and Sober are primarily interested in retrodiction and prediction
in biological situations - for instance concerning an ancestor/descendant
lineage of organisms. They ask: What can be said concerning an ancestor's
character state n generations in the past, or a descendant's character
state n generations in the future, on the basis of an actual observation
of that character state in the present?
For this purpose they take the temporal development to be a Markov chain
with fixed transition probabilities. They use the expression $-\sum_i p_i \log p_i$
and apply it to a population. The generalization they draw from their calculations
is that 'When entropy increases from past to present to future, observing
the present can be expected to provide more information about the past
than about the future'. 'This result does not depend on whether the system
is closed or open.'
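Their generalization is easily illustrated numerically. The following is a minimal sketch (Python, with an invented two-state transition matrix, not taken from Barrett and Sober) of a Markov chain whose entropy increases from past to present to future; the expected uncertainty about the ancestor, given the present observation, comes out smaller than that about the descendant:

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability vector or joint array."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() + 0.0)   # + 0.0 turns -0.0 into 0.0

# Fixed transition probabilities (rows: current state, columns: next).
M = np.array([[0.9, 0.1],
              [0.2, 0.8]])

p_past = np.array([1.0, 0.0])   # the ancestor's state is sharply known
p_now = p_past @ M              # one generation later
p_next = p_now @ M              # one generation further on

# Entropy increases from past to present to future:
print([round(H(p), 3) for p in (p_past, p_now, p_next)])  # [0.0, 0.469, 0.658]

# Joint distributions P(past, present) and P(present, future).
joint_back = p_past[:, None] * M   # P(X0 = i, X1 = j)
joint_fwd = p_now[:, None] * M     # P(X1 = j, X2 = k)

# Expected remaining uncertainty after observing the present:
H_past_given_now = H(joint_back) - H(p_now)    # H(X0 | X1) = 0.0
H_future_given_now = H(joint_fwd) - H(p_now)   # H(X2 | X1) ~ 0.494
print(round(H_past_given_now, 3), round(H_future_given_now, 3))

# Observing the present here tells us more about the past than about
# the future, as Barrett and Sober's generalization asserts.
```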
However, where these authors refer to 'entropy' the context makes it
clear that it is the Shannon measure they are concerned with. 'Entropy',
they say (p. 156), 'is a measure of uncertainty'. This is not how a thermodynamicist
understands the term, and in Denbigh and Denbigh [1985] it was shown that the uncertainty
measure and the thermodynamic entropy bear no fixed relationship to each other.
Turning to retrodiction and prediction, it is just not generally true
that, during a period in an isolated system when its thermodynamic entropy
rises monotonically to a maximum, the past of that system is better known
than is its future. The reverse is the case, and this is demonstrated by
many physico-chemical examples involving such well-understood processes
as heat transfer, diffusion and chemical and nuclear reactions.
An example is provided by two blocks of material which are in contact,
and are isolated from the environment apart from whatever small apertures
are needed for the measuring of their temperatures (Denbigh [1981], pp.
127-8). If at t = 0 the blocks are found to be at different temperatures,
it can be reliably predicted that at any later time the temperature difference
will be smaller. Or again, if at t = 0 the temperatures are equal,
it can be predicted that at all later times they will still be equal.
However, as regards knowing about the past, an observer who arrives
at t = 0 is helpless! If he finds the blocks to be at different
temperatures, he has no means of saying whether someone had just a moment
earlier assembled the system in that state, or whether the blocks had been
for some time at even greater temperature differences. And if he finds
them at equal temperatures, he cannot know whether they had previously
been at unequal temperatures, and, if so, which one of the two blocks had
been the hotter.
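The helplessness of the retrodictor is easy to exhibit numerically. Here is a minimal sketch (Python, with invented temperatures and an invented exchange coefficient, as a discrete caricature of heat conduction): prediction succeeds because the temperature difference decays monotonically, while retrodiction fails because widely different pasts converge on the same present state.

```python
def evolve(T1, T2, k=0.1, steps=50):
    """Discrete-time heat exchange between two equal blocks: at each
    step a fraction k of the temperature difference flows across."""
    for _ in range(steps):
        q = k * (T1 - T2)
        T1, T2 = T1 - q, T2 + q
    return T1, T2

# Prediction: from any initial state the difference shrinks reliably.
# Retrodiction: very different initial states become indistinguishable.
for start in [(80.0, 20.0), (60.0, 40.0), (50.0, 50.0)]:
    T1, T2 = evolve(*start)
    print(start, '->', (round(T1, 4), round(T2, 4)))
# (80.0, 20.0) -> (50.0004, 49.9996)
# (60.0, 40.0) -> (50.0001, 49.9999)
# (50.0, 50.0) -> (50.0, 50.0)
# An observer arriving now, finding (almost exactly) equal temperatures,
# has no means of recovering which of these histories actually occurred.
```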
As has been said, there are many similar physico-chemical examples.
To quote just one of them: if I find two gases in a state of being well
mixed, I can predict that they will remain well mixed, but I cannot retrodict
that they were so in the past.
Now it is true, of course, that people generally know more about the
past than about the future - mainly due, I think, to their ability to recognize
records and traces, and to their powers of reasoning about the correct
temporal order in causal sequences. Perhaps it is to be expected that an
information measure, such as is used by Barrett and Sober, is able to mirror
these human abilities.
But why, we must ask, does entropy (i.e. thermodynamic entropy) work
better in the opposite direction, as has been seen in the foregoing examples?
I think the answer must lie in what is at the root of the Second Law -
it is the tendency, to put it crudely, for the inanimate world to 'run
down'. In any isolated system the potentiality for further change gradually
becomes exhausted, with the consequence that the system in question approaches
a terminus, an equilibrium state, characterized by a maximum of thermodynamic entropy.
Furthermore, from a given initial state, as specified by macroscopic
physico-chemical variables, there is just one equilibrium state
which can be arrived at, under conditions of isolation. Theory can often
be used to predict what that equilibrium state will be. What is of greater
significance in the present context is that the one equilibrium
state, as macroscopically specified, can arise from any number of
initial states. Therefore a retrodiction from the later equilibrium state
to any particular initial state is necessarily far less reliable (in the
absence of traces or records) than is a prediction from the initial state
forward to the equilibrium state, or to any intermediate state which is
known to be en route to the equilibrium state.
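This asymmetry admits a compact information-theoretic statement; the following formalization is a minimal sketch in my own notation, not Denbigh's. Let $F$ be the deterministic forward evolution taking a macroscopically specified initial state $M_0$ of an isolated system to its equilibrium state $M_{eq} = F(M_0)$. Then

\[
H\bigl(M_{eq} \mid M_0\bigr) = 0,
\qquad\text{whereas}\qquad
H\bigl(M_0 \mid M_{eq}\bigr) > 0
\]

whenever two or more initial states of positive probability share the same equilibrium state. Prediction of the terminus carries no residual uncertainty; retrodiction from the terminus remains uncertain over the whole preimage $F^{-1}(M_{eq})$, unless traces or records single out one of its members.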
In general, I think it is not so much entropy itself as the more elementary
facts about causal sequences, about the A- and B-theories of time, and
about the human faculty of 'knowing', which are most relevant to retrodiction and prediction.
King's College, London
- Barrett, M. and Sober, E. [1992]: 'Is Entropy Relevant to the Asymmetry Between Retrodiction and Prediction?', The British Journal for the Philosophy of Science, 43, pp. 141-60.
- Denbigh, K. G. [1981]: Three Concepts of Time. New York: Springer-Verlag.
- Denbigh, K. G. and Denbigh, J. S. [1985]: Entropy in Relation to Incomplete Knowledge. Cambridge: Cambridge University Press.
- Denbigh, K. G. [1989]: 'Note on Entropy, Disorder and Disorganization', The British Journal for the Philosophy of Science, 40, pp. 323-32.
- Penrose, O. [1970]: Foundations of Statistical Mechanics. London: Pergamon Press.
- Popper, K. R. [1974]: in The Philosophy of Karl Popper, (ed.) P. A. Schilpp, vol. 1, p. 130. La Salle, IL: Open Court Publishing Company.