Wolfram Science Group
Phoenix, AZ USA
Registered: Aug 2003
The statement that math is tautology is a typical bit of word play. It rests on two fundamentally mistaken ideas in epistemology: that reducing a problem to one of logic is tantamount to solving it, and that the only real theoretical difficulty is to get outside our heads. The first stemmed from too exclusive a focus on trivial bits of logic. The second stemmed from a skeptical hang-up over induction.
Logic is not simple if there is enough of it, in involved enough form. If Goldbach's conjecture is true then it follows from the axioms of arithmetic, and is therefore a tautology. But that does not mean anybody has actually proved it. An encyclopedia of finite groups contains only math, therefore only tautology, but it involves rather more real effort and thought than A = A.
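The point that verifying instances of Goldbach's conjecture is real work, distinct from any proof, can be made concrete with a minimal sketch (the function names are illustrative, not from any standard library):

```python
# Empirically check Goldbach's conjecture -- every even n > 2 is a sum of
# two primes -- for small n. This gathers evidence; it proves nothing.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to even n > 2, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Any even number for which this returns None would refute the conjecture.
assert all(goldbach_pair(n) is not None for n in range(4, 1001, 2))
```

However far such a search runs, the conjecture remains unproved; the gap between "follows from the axioms if true" and "actually demonstrated" is exactly the gap the text describes.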
As for the idea that the epistemological problem is to get outside our heads, it was based on a too-skeptical distrust of induction, contrasted only with naive verificationist ideas about it coming from the empiricist tradition in philosophy. Idealists thought they were being very clever when they conceded points to Hume yet continued with phenomenal induction. But the points did not need to be conceded to Hume; Hume was simply wrong about induction. He read half the relation but not the other half. We can be certain some of our incorrect inductions are incorrect. And that is all that is required.
The positivist program failed completely. In different fields this was noticed at different times, and the failure cropped up in different ways. They thought they would reduce everything to direct sense experience plus logic, call the rest nonsense, and end up with a purely understandable world, with a little Humean distrust of the direct-experience bits, ironically handled in a neo-Kantian manner by talking only about how things looked to us instead of how they are.
It doesn't work. Induction is fine; it proceeds as guesswork and correction, something Peirce already explained and Popper refined to a skeptic's standards. Logical foundationalism fails even within mathematics, as the failure of Hilbert's program demonstrated, although plenty of confusion ensued about just how and why it had failed.
What really happened is Turing discovered universality. Which showed that non-trivial things go on even in purely formal systems, well before higher ordinals, and even without distracting issues like self reference. Allow even variables within logic and you arrive at universality. Axiom systems that are universal are not adequately described as systems of tautology; as Churchill said in another context, it is "true but not exhaustive". If this were something rare and artificial we might downplay its importance. After NKS it should be clear it happens all over the place, in systems simple enough to occur naturally.
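How simple a universal system can be is easy to show directly. Rule 110, an elementary cellular automaton whose entire definition is one eight-bit lookup table, was proved computation-universal by Matthew Cook; a minimal sketch of evolving it:

```python
# Evolve elementary cellular automaton Rule 110 (proved universal by
# Matthew Cook) from a single black cell on a cyclic row of cells.
# The whole "axiom system" is the single number 110: bit k of 110 gives
# the new cell value for the 3-cell neighborhood encoding k.

RULE = 110

def step(cells):
    """One synchronous update of the whole row (wrap-around boundaries)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Twenty lines of code, and the system they describe can in principle emulate any computation; calling its consequences "tautology" is true but not exhaustive in exactly the sense above.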
The issue of matches between our formal models and external relationships was already adequately understood in dogmatic philosophy. All the intervening sound and fury put things in various sorts of brackets for skeptics, operationalists, pragmatists, and idealists of various stripes, but did not change the essential feature. Both sides are now sometimes complex; fine.
Some select portion of external relations is isomorphic to the relations within some formal system, or it is not. If it is, we can experiment or deduce within the formal system in place of the external one. Ashby gave a particularly clear exposition of this essential feature of modeling in his Intro to Cybernetics, but hardly owns the concept, which is really just an (operationalist or pragmatic) cleaned-up version of the venerable old correspondence theory of truth. The "if" above in "if it is" goes within Peircean guesswork or Popperian falsifiability, with suitable "doubt brackets" around every statement for fastidious skeptics, operationalists et al.
The interesting stuff already arises in the purely formal, and is repeated in just as interesting a fashion outside in real relations. We can build a universal axiom system and we can build a general purpose computer. The areas of undecidability left by either are of the same character. Nature computes; we did not invent it. We can't tell what a sufficiently complicated computation will produce without doing about as much work, i.e. watching and seeing what happens, externally or by experimenting within a model.
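The everyday face of "watch and see what happens" can be sketched with the Collatz iteration, where no known shortcut predicts the stopping time other than running the computation itself:

```python
# Collatz iteration: halve even numbers, map odd n to 3n + 1, count steps
# until reaching 1. No known closed form predicts the step count; the only
# known general method is to run the computation and watch.

def collatz_steps(n):
    """Number of iterations for n to reach 1 (assumes the trajectory terminates)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111: far longer than neighboring starting values
```

Nearby starting values give wildly different answers (6 takes 8 steps, 27 takes 111), which is the point: for involved enough computations, prediction collapses into execution.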
The old division of labor, in which experiment mattered only for externals while deduction was only for formal internals, the first always isolated data points in need of theoretical unification and the latter always simple tautologies, has been comprehensively exploded. Now we have to experiment even with formal systems, if they are involved enough. And we hold ourselves free to guess about externals, and to ascribe reality to our guesses until the data explode them. Conjecture and experiment in mathematics, not just proof; computation, not just mathematics, as the generalization on the formal side; and sticking one's neck out in scientific theories: these are how we scout unknown terrain, formal or empirical.
You call it the importance of imagination. Imagination is helpful, but it is not the whole story, nor in my opinion the real story. Yes, we need to be freer about speculation than the positivist program allowed. We also need to experiment systematically to explore possibility spaces, do so in exact enough ways that the results are reproducible and verifiable, and in conjecture and modeling try to be adequate to the particular phenomena we are trying to capture, rather than fixating on imagined remote consequences of preferred schemes or being dazzled by the generality of this or that encoding scheme, which could easily happen to imagination alone.