Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
Second Law, "Hard" and "Soft" Irreversibility, and NKS
In our NKS Online guestbook we received a reasonable question that I thought others might be interested in, along with my response. It concerns the section at the start of chapter 9 that discusses the second law of thermodynamics. Here is the way the question was posed -
"I'm not sure I agree that the second law of thermodynamics is disproved or somehow not supported by any of these examples. The second law states the total entropy of the universe either increases or stays the same as a result of a physical process. None of Dr. Wolfram's examples show evidence of a spontaneous decrease in entropy which would be required to deny the universality of the Second Law. I would look forward to a discussion of any evidence to the contrary that I may have missed".
The relevant section of the book starts on page 435 and continues to the end of page 457. The relevant notes run from page 1017 to the bottom of the first column on page 1022.
To start with, NKS does not say that the second law is disproven, and indeed the note on page 1020 titled "My explanation of the Second Law" explicitly states that what NKS says about the second law is not incompatible with past understandings of it. It claims only to have clarified some points about it, and especially about the puzzle created by the contrast between the apparent reversibility of all known laws of physics in the small, and the definite direction of evolution provided by the second law in the large (frequently called "the arrow of time"). Since the behavior described by the second law is supposed to be an emergent consequence of the (apparently reversible) micro-laws, there is a puzzle to explain.
NKS discusses the issue in the context of reversibility. One needs to understand at the outset that there are two distinct ways something like the second law might be taken, which I will call hard irreversibility and soft irreversibility. By hard irreversibility, I mean any rule or transformation that takes multiple prior states to the same subsequent state. Such a mapping is a contraction, in a state-space sense. Rule 254 is the classic example - it takes any initial condition to all black in a few steps. It is easy to see how an emergent measure of rule 254 would show irreversibility, a constant increase in black cells, etc. If the micro-laws of physics were irreversible in a hard sense, then the second law or things like it would be expected.
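A minimal sketch of that contraction, assuming a cyclic lattice (the helper names here are mine, not from the book): under rule 254 a cell becomes black whenever any cell in its neighborhood is black, so many distinct initial conditions are quickly merged into the same all-black state.

```python
def step_rule254(cells):
    """One step of elementary CA rule 254 on a cyclic lattice.

    Rule 254 outputs white only for the all-white neighborhood, so a
    cell is black next step iff any of its three neighbors is black.
    """
    n = len(cells)
    return [1 if (cells[(i - 1) % n] or cells[i] or cells[(i + 1) % n]) else 0
            for i in range(n)]

def evolve(cells, steps):
    for _ in range(steps):
        cells = step_rule254(cells)
    return cells

# Distinct initial states collapse onto the same all-black attractor:
a = [0, 0, 0, 1, 0, 0, 0]
b = [1, 0, 0, 0, 0, 1, 0]
print(evolve(a, 4))  # [1, 1, 1, 1, 1, 1, 1]
print(evolve(b, 4))  # [1, 1, 1, 1, 1, 1, 1]
```

Since two different pasts map to the same future, no measure computed from the current state can recover which past occurred - that is hard irreversibility.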
But the micro-laws we actually see, in great detail e.g. in particle physics, appear to be completely reversible in every respect. They are not contractions. From a local state (including perhaps some local rates of change to be sure, but that is just a local state in a phase-space sense), the past can be projected as readily as the future, according to the same laws. (In QM some of these projections are probabilistic, but that is again true in both directions. In QM, a given current state could have come from a blurred variety of priors).
So there is no obvious reason from the micro-laws themselves, to expect irreversibility up at a coarse-grained level. Coarse graining might be lossy - it might lump many micro-configurations into the same "bin" (of temperature e.g.). But the micro-states themselves need not evolve from more to less ordered, just considering the micro-laws.
Wolfram deals with a few of the classic attempts to explain this in passing, in the note on page 1020 titled "Current thinking on the Second Law". Ergodic explanations, for example, suggest that systems visit all of their states, and that their typical average behavior can be found just by portioning out time among the possibilities. (See also the previous note, discussing Gibbs's contributions to this subject). As Wolfram points out, however, only some systems are ergodic to begin with, and the number of possible (real, physical) states for systems with large numbers of components is so astronomical they cannot possibly actually visit them all. More components does not help in that respect, because the number of configurations always grows much more rapidly than the number of components.
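A back-of-the-envelope check of that counting point, with deliberately over-generous made-up rates: even a modest system of 300 two-state cells has far more configurations than it could visit in the lifetime of the universe.

```python
# Configuration counts outrun any conceivable visiting schedule.
# The rates below are invented and deliberately over-generous.
n_cells = 300
n_configs = 2 ** n_cells                # ~2 x 10^90 configurations

visits_per_second = 10 ** 20            # absurdly fast state sampler (assumed)
seconds_available = 10 ** 18            # roughly the age of the universe
max_visits = visits_per_second * seconds_available   # at most 10^38 states seen

# Configurations per state the system could ever visit:
print(n_configs // max_visits)          # ~2 x 10^52
```

So "the system averages over all its states" cannot be literally true for any real system of macroscopic size; the ergodic picture has to be an idealization.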
So we have systems visiting only tiny portions of their state space according to reversible laws, not populating all of them, nor (it appears) evolving in a contractive way. Yet the observed macro behavior is still irreversible. And the question is why.
Wolfram notices that for any reversible rule, one can find definite micro-configurations that would evolve toward increasing order, rather than the opposite. To see this, take such a system (one that also produces complexity, not just simple behavior) in some ordered state, call it B, and evolve it backward in time. You will reach a state, call it A, that has a high entropy measure in all conventional senses - the same as if you had evolved it forward to C, a necessary consequence of the reversibility of the micro-rule. Entropy measured at A or C is high, and at B is low.
Now, imagine starting the system from A and running forward, only to B. You have a reversible, deterministic system capable of generating complex behavior that shows a decrease in entropy measure over the course of its evolution - that portion of it, anyway. There is no "hard" increase in entropy going on. Nor is the entropy constant - it is in fact decreasing. But it decreases from a highly special initial condition, albeit one with a high conventional entropy measure. The system state at A is a disordered jumble that happens to be configured so that it evolves to an ordered state at B. There is no hard impossibility of this.
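One can watch this happen in a toy system. The sketch below uses a second-order reversible CA in the style of the book's "R" rules (rule 122R here; the helper names and the disorder proxy are my own illustrative choices). The state is a pair of rows; running forward from an ordered state B produces apparent disorder, and swapping the last two rows and running forward again is exactly the backward evolution, so the disordered ancestor A evolves back down to the ordered B.

```python
def step(cells, rule=122):
    """One step of an elementary CA (rule 122 by default) on a cyclic lattice."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1 for i in range(n)]

def step2R(prev, cur, rule=122):
    """Second-order reversible step: next row = f(current) XOR previous."""
    return cur, [a ^ b for a, b in zip(step(cur, rule), prev)]

def walls(cells):
    """Crude disorder proxy: number of 0/1 boundaries around the ring."""
    return sum(cells[i] != cells[(i + 1) % len(cells)] for i in range(len(cells)))

n, T = 31, 40
blank, seed = [0] * n, [0] * n
seed[n // 2] = 1                      # ordered state B: a single black cell

prev, cur = blank, seed               # run forward from B ...
for _ in range(T):
    prev, cur = step2R(prev, cur)
print(walls(seed), "->", walls(cur))  # disorder (by this proxy) typically grows

prev, cur = cur, prev                 # swapping rows IS the backward evolution
for _ in range(T):
    prev, cur = step2R(prev, cur)
print(prev == seed and cur == blank)  # True: back to the ordered state B
```

The last line prints True for any rule number and any T, purely because the second-order construction is reversible: the state reached after the first loop is a concrete example of a "disordered" A whose forward evolution decreases the disorder measure.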
But it depends on a rare initial state, A. Of all possible configurations of the system's elements at the time of A, only some tiny portion will evolve to ordered configurations at B, or at other nearby times B' or B''. Most possible configurations at A will go from disorder to disorder. A decrease in entropy is then possible, but improbable, if we select initial conditions at time A in some arbitrary manner, instead of constructing them by evolving backward from a known ordered state at B.
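The rarity is easy to quantify. Because the rule is reversible, each ordered state at B has exactly one ancestor at A, so ancestors of ordered Bs are exactly as rare as ordered states themselves. Counting cyclic binary configurations by their number of 0/1 domain boundaries (a rough "orderedness" criterion I am using purely for illustration):

```python
from math import comb

def ordered_fraction(n, max_walls):
    """Fraction of length-n cyclic binary configurations having at most
    max_walls boundaries between 0-runs and 1-runs.

    On a ring the number of boundaries is always even, and an even set
    of boundary positions determines the configuration up to the color
    of cell 0, so exactly 2*C(n, w) configurations have w walls.
    """
    count = sum(2 * comb(n, w) for w in range(0, max_walls + 1, 2))
    return count / 2 ** n

# For 100 cells, states with at most 4 domain walls are vanishingly rare:
print(ordered_fraction(100, 4))   # ~6.2e-24
```

Picking a state at A uniformly at random, the chance of landing on an ancestor of such an ordered B is this same vanishing fraction - which is all the "statistical" second law needs.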
This understanding of the second law therefore predicts that there will be local and temporary deviations from it, on some scale, damping out as the special conditions around A that are ancestor states of an ordered B die out in a sea of states at time A that are ancestors of disordered states at B. The second law is a statistical law about ordered areas of configuration space, not a hardwired requirement from step to step.
The analog of the Maxwell's demon question then becomes: can one find ancestors of ordered Bs by some method more efficient than actually evolving the system to see what it does? And there Wolfram invokes computational irreducibility, to explain why it will in general be as hard to know what the ancestors of ordered Bs are as it is to overcome the entropy of the system in any other way. The inability to short-cut the backward calculation - "what microstates will lead to order at B?" - sets a limit on the sort of initial conditions that can be considered. Otherwise put, a large information (negative entropy) "investment" would be needed to pick out exactly the prior configurations A - themselves apparently disordered - that happen to evolve to order at B.
For any system that intrinsically generates randomness, the above is the whole story. It shows the kind of thing the second law is about, and how and why it should be expected to hold. If such systems are the most common forms of complexity, formally and in the real world, then again we expect the second law to hold wherever we look, with only local, temporary, small-scale deviations from it, of the kind regularly covered in basic statistics (e.g. in the theory of sampling with small sample sizes).
But Wolfram looked for and found (purely formally) some reversible rules that do not seem to always produce increasing disorder in this sense from typical initial conditions, without being simple in their behavior all of the time. Andrew Wuensche noticed a similar phenomenon when he looked for a classifier to distinguish class 4 from class 3 complexity, by looking at a 2D graph of entropy vs. change in entropy. The 3s rapidly get to maximum entropy and stay there - entropy high, delta entropy low. But 4s show fluctuations in both. 4s have local particles on simple or periodic backgrounds, which are in effect quite ordered states compared to seething class 3 randomness. But they can also produce regions of disorder that look essentially like class 3. As the portion of a system in one or the other "phase" changes, an entropy measure will move, and in either direction.
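A Wuensche-style measurement can be sketched in a few lines, assuming a simple sliding-block entropy (the rule numbers, lattice size, and block length below are illustrative choices of mine, not Wuensche's actual parameters): track the entropy of each row and its step-to-step change, and look at how much both fluctuate.

```python
import random
from collections import Counter
from math import log2

def step(cells, rule):
    """One step of an elementary CA on a cyclic lattice."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1 for i in range(n)]

def block_entropy(cells, k=3):
    """Shannon entropy (bits) of the distribution of length-k blocks."""
    n = len(cells)
    counts = Counter(tuple(cells[(i + j) % n] for j in range(k)) for i in range(n))
    return -sum(c / n * log2(c / n) for c in counts.values())

random.seed(0)
cells = [random.randint(0, 1) for _ in range(200)]
traj = []
for _ in range(100):
    traj.append(block_entropy(cells))
    cells = step(cells, 30)   # compare rule 30 (class 3) with rule 110 (class 4)

deltas = [b - a for a, b in zip(traj, traj[1:])]
print(round(traj[-1], 3), round(max(abs(d) for d in deltas), 3))
```

The expectation, following Wuensche's observation, is that a class 3 rule sits near maximum entropy with small deltas, while a class 4 rule, alternating between particle-bearing ordered regions and class-3-like patches, wanders in both coordinates.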
What Wolfram found in his rule 37R example was essentially this phenomenon in a reversible rule (as opposed to general rules, where it also happens with 4s, most of which are not reversible). And the idea is that if one selects a state at random back at A for such a system, there is no reason to expect - even statistically, let alone by any step-by-step necessity - that the entropy measure of the system at B will be higher than it was at A.
The second law is correct for lots of systems (formal as well as empirical), because intrinsic generation of randomness is everywhere. It can be expected in a hard way if the underlying rules are contractions (irreversible themselves). It can be expected in a statistical way if the underlying rule is reversible but class 3, randomizing. But if the underlying rule of a system is both reversible and class 4, local-particle-supporting, there is no good reason to expect the second law to apply to that system. Overall the law is certainly true - that is confirmed by experiment, and to be expected, purely formally, from the rarity of special initials, the preponderance of 3s over 4s, etc. But from formal considerations alone, we should expect there might be (limited measure, rare, etc.) exceptions.
I hope this helps.