Sunday, August 05, 2007

Negative Type Theory

Stephen Simpson repeatedly talks about certain subsystems of second-order arithmetic as 'natural' or as 'arising naturally' (SOAS, e.g. pp. 33, 43, etc.). In a similar context, John Burgess contrasts 'artificial examples' with 'theories that it is natural to consider' (Fixing Frege, p. 54). But what idea of naturalness is at work here? There's more than one sort of naturalness that can be in play when we talk about mathematical theories. That's a trite point, but one which bears some discussion. In fact, there are a number of things to be said. But here's a nice case which is perhaps relatively unfamiliar but which very vividly illustrates one basic, preliminary, distinction we need to draw.

As background, recall the structure of a simple theory of types. The entities in its universe are divided into levels. At level 0, there are individuals. At level 1, there are sets of individuals. Then at level 2, sets of level 1 sets; at level 3, sets of level 2 sets; and so on up an unending but non-cumulative hierarchy. The two key principles structuring this hierarchy are an extensionality principle for sets, and a comprehension principle to the effect that given any condition satisfied by zero or more level n − 1 entities, there exists a level n set containing just those entities.
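Schematically (a standard textbook-style formulation, not tied to any particular author's notation), with superscripts marking levels:

```latex
% Extensionality, for each level n > 0: level-n sets with the same
% level-(n-1) members are identical.
\forall x^{n}\,\forall y^{n}\,
  \bigl(\forall z^{n-1}(z \in x \leftrightarrow z \in y)
        \rightarrow x = y\bigr)

% Comprehension, for each level n > 0 and each formula \varphi(z)
% in which y does not occur free: some level-n set contains exactly
% the level-(n-1) entities satisfying \varphi.
\exists y^{n}\,\forall z^{n-1}
  \bigl(z \in y \leftrightarrow \varphi(z)\bigr)
```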

Now, to get an interesting system in which we can actually do some mathematics, we'll also need to add a third independent principle governing the hierarchical universe, an axiom of infinity which tells us that there is an infinite set. And why do we need a substantive axiom of infinity? Because, to labour the obvious point, there are only a finite number of predecessor levels below any given level in the hierarchy. So if there are only finitely many individuals at ground level, then – although the population at each succeeding level grows exponentially – when we reach the given level the population need still only be finite.
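The arithmetic behind that laboured point can be made vivid with a toy calculation (my own illustration, nothing in the texts under discussion): if level 0 has finitely many individuals, the population of level n + 1 is just 2 raised to the population of level n, which is always finite.

```python
def level_population(individuals: int, level: int) -> int:
    """Population of a level in a simple type hierarchy with finitely
    many individuals: level 0 holds the individuals, and level n + 1
    holds all subsets of level n, hence 2**(population of level n)
    entities."""
    pop = individuals
    for _ in range(level):
        pop = 2 ** pop
    return pop

# With two individuals the populations grow fast but stay finite:
# 2, 4, 16, 65536, ...
populations = [level_population(2, n) for n in range(4)]
```

However many levels we climb, the answer is a huge but still finite number; which is why a substantive axiom of infinity has to be added by hand.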

So far, all so very familiar. But the alert mathematician might pick up on the point I've just laboured, and wonder whether we can't after all get a type theory which guarantees infinitely populated levels by changing our structural assumptions about the levels, and imagining the hierarchy continuing infinitely downwards as well as upwards.

So now think of the denizens of level 0, and the lower levels, as more sets. It is easy to see that comprehension alone will guarantee an infinite population at every level. For example, comprehension guarantees that there is an empty set ∅ₙ at each level n. And so, at level zero, again by comprehension, there are all the following sets: ∅₀, {∅₋₁}, {{∅₋₂}}, {{{∅₋₃}}}, . . . , which by extensionality are all distinct. So we've conjured infinite populations just out of the structure of the hierarchy, without making any additional substantive assumption. We've shown how to get more out of less.
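The little argument can be spelt out like this (just a sketch of the two comprehension instances involved):

```latex
% The condition z \neq z yields, at every level n, an empty set:
\exists y^{n}\,\forall z^{n-1}(z \in y \leftrightarrow z \neq z)
% -- call it \varnothing_{n}.

% The condition z = \varnothing_{n-1} then yields the level-n
% singleton \{\varnothing_{n-1}\}; iterating downwards, level 0
% contains
\varnothing_{0},\quad \{\varnothing_{-1}\},\quad
\{\{\varnothing_{-2}\}\},\quad \{\{\{\varnothing_{-3}\}\}\},\ \dots
% By extensionality these are pairwise distinct: the first is empty,
% the second has an empty member, the third has a member with an
% empty member, and so on.
```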

Now, that is indeed very cute! And, after all, if we are considering a typed universe, with the levels indexed by N, what – in one good sense – is more natural than to explore by way of contrast, on the one hand, a restricted universe with only a finite number of levels and, on the other hand, a richer universe with the levels indexed by Z? These formal variants on a standard simple theory of types immediately suggest themselves for mathematical investigation. But while the idea of a 'negative type theory' is a formally natural one – and there is indeed some fun to be had exploring the resulting theory, which can readily be proved consistent – it is of course conceptually highly artificial. For the original idea of a type hierarchy, after all, involved the conception of building from the ground up by repeatedly applying an operation of set-formation. Remove the distinguished ground level and it seems that we can make no conceptual sense of what is supposed to be going on in the resulting hierarchy. And we can make even less sense of the theory when we find e.g. that, by a little theorem of Thomas Forster (1989), all its models are nonstandard, in the sense that the population of level n + 1 is not always of the expected standard size for a type theory, i.e. the size of the power set of level n.

This lack of a clear conceptual motivation is presumably why Hao Wang, who first noted the possibility of a negative type theory in his (1952), calls it 'a kind of curiosity'. So let's take this as a cautionary tale, to illustrate the contrast we surely do need to draw between what we might call formal naturalness and conceptual naturalness. Or, as we might be tempted to put it in some cases, between purely mathematical and philosophical naturalness. Though note, talking the second way might suggest that we are presupposing that there's a place for a 'first philosophy', critically assessing mathematics from outside. But that, I'd argue, is the wrong picture: conceptual naturalness is itself already one kind of mathematical virtue, well recognized in the actual practice of mathematicians.

It seems rather easy to be seduced by a Bourbakiste fantasy, and to fall into thinking that mathematics – real mathematics – is, or ought to be, an entirely formal game, with theories evolving purely by linear deduction from definitions and axioms. But as Imre Lakatos, for one, reminds us, that isn't how mathematics in fact proceeds. There is, rather, more of a to-and-fro between intuitive ideas, analogies, pattern-recognitions, informal proofs, and the development of sharpened concepts and the more rigorous proofs which deploy them (though sufficient unto the day is the rigour thereof). The conceptually unnatural theories are the ones where, in the process of formalizing, we lose touch with, or distort too far, the informal mathematical ideas that were originally supposed to be guiding the theory-building. More on all this anon.


Aatu Koskensilta said...

An interesting example indeed! The canonical example of a formally natural but conceptually entirely unmotivated theory is of course Quine's NF. As a formal development of type theory it does arise naturally -- in type theory most theorems are "typically ambiguous" and so on -- but, unlike ZFC (and other set theories of that breed), it has no natural motivating picture from which the axioms naturally flow. It is, from a conceptual point of view, just a random bunch of formulas, a purely formal device.

On a related note, in your Isaacson's thesis paper you mention the results of Harvey Friedman that certain natural -- for some values of "natural" -- combinatorial theorems are equivalent to (1-)consistency of ZFC + certain large cardinal axioms. Are we entitled to draw from this the conclusion that "higher order" or "infinitary" considerations must necessarily enter into any proof of such theorems? In order to do that we must surely first argue that there is no theory in which the theorem is provable that is naturally motivated by "finitary" ideas. That certainly seems very plausible, but surely a close look at the hierarchy of natural theories, and the concept of naturalness, is called for before any definite conclusions are drawn. After all, Yesenin-Volpin has promised us an ultra-intuitionistic consistency proof of ZFC with inaccessibles thrown in for good measure...

This is talk from cloud-cuckoo land, of course, but a more down-to-earth example can be provided. Isaacson, if I recall correctly, and you both mention Goodstein's theorem as an example of a statement unprovable in PA. It is surely not a very convincing argument for "hidden higher-order concepts" to note that the natural proof of this statement involves induction up to epsilon-0. For the natural proof of a statement of the form "the function so-and-so at level omega^omega^omega^...^omega in a hierarchy of fast-growing recursive functions terminates on every input" is also by transfinite induction. The only reason to regard this sort of transfinite induction as "finitary" but induction up to epsilon-0 as involving higher-order concepts seems to be the unprovability of induction up to epsilon-0 in PA. But that is a convincing argument only if one already knows PA comprises exactly the finitarily provable theorems.
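For reference, Goodstein's theorem concerns the sequences computed below: write m in hereditary base-2 notation, bump every 2 to a 3 and subtract 1, then repeat with ever larger bases. The theorem says the sequence always reaches 0, but PA cannot prove this. A minimal sketch (the function names are my own, purely for illustration):

```python
def bump_base(n: int, b: int) -> int:
    """Write n in hereditary base-b notation (exponents also written in
    base b, recursively), replace every occurrence of b by b + 1, and
    evaluate the result."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % b
        n //= b
        if digit:
            result += digit * (b + 1) ** bump_base(power, b)
        power += 1
    return result

def goodstein_prefix(m: int, steps: int) -> list[int]:
    """The first `steps` terms of the Goodstein sequence starting at m:
    repeatedly bump the base and subtract 1, stopping if 0 is reached."""
    seq, base = [m], 2
    for _ in range(steps - 1):
        if m == 0:
            break
        m = bump_base(m, base) - 1
        base += 1
        seq.append(m)
    return seq
```

Starting from 3 the sequence dies quickly (3, 3, 3, 2, 1, 0); starting from 4 it climbs to 26 at once and takes an astronomically long, though finite, time to reach 0.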

It seems to me that the answer to the question of which formal theory exactly captures those theorems which are provable on the basis of our understanding of the naturals is ambiguous; there are at least two ways of conceiving "basic grasp of the naturals" and two ways of understanding "provable on the basis of". We can include in our basic understanding of the naturals the realisation that certain basic principles characterise our understanding, or we can leave this reflective understanding out. Also, we can include in our conception of the naturals the idea that they (and other finitary inductively defined classes) form a "determined totality", or we can leave that out. The different ways, combined, lead to four different answers, in increasing order of proof theoretic strength: PRA, PRA + {TI^qf_alpha | alpha < epsilon-0}, PA and PA + {TI_alpha | alpha < Gamma_0}.

That was somewhat rambling once again. I hope to return to the issue in e-mail when I have some time.

Peter Smith said...

I agree entirely about NF of course -- though it is nice to have a different example. The interesting philosophical question is: what counts as a "natural motivating picture"? It seems to me that this is a rather central idea that mathematicians work with, but one that philosophers of maths have said next to nothing about.

As to the rest of your post, Aatu, this is very interesting and relates to things I want to be thinking about. I'll try to return to these matters in due course when I have something useful to say!

Aatu Koskensilta said...

What counts as a "natural motivating picture" is a philosophical question of considerable interest, at least to me, and exploring the notion of naturalness seems a promising avenue of research.

A related question concerns the notion of mathematical substantiality: in any text there is a depressing amount of scaffolding, that is, definitions, theorems, lemmas, and so on, that are not in themselves of any mathematical interest, but are necessary to frame and prove the important theorems. It is obviously not of central importance to recursion theory whether a Turing machine is a 4-tuple or a 5-tuple, for example, and it is easy to see when two theorems, stated using different definitions, are "really the same". But can we say something more specific and interesting about this?

As to the possibility that the proof of this or that theorem of finite combinatorics must necessarily involve large cardinals, infinitary ideas or whatnot, there's a possible approach I've been thinking about. If we could find some necessary conditions for a theory to be considered "natural" it might be possible to prove something like

"If a theory T meets the minimal criteria necessary for naturalness, and proves a theorem P, then this-or-that fragment or extension of set theory is naturally interpretable in T"

where P is a theorem of finite combinatorics, thus establishing that any proof of P from "natural" principles must involve these-or-those infinitary principles.

We would of course also need to isolate some criteria for naturalness of interpretations; ZFC + billions of Woodin cardinals is interpretable in PA + Con(ZFC + billions of Woodin cardinals), after all.