Saturday, December 29, 2007

"The best thing out there"

Cue sound of blowing one's own trumpet. The first "Customer Reviews" on Amazon USA for the Gödel book have just appeared. And there is a quite terrific one by Jon Cogburn of LSU at Baton Rouge. Very cheering indeed! (Though it is also another spur to get going with developing the online sets of exercises that I've been meaning to put together.)

Logical Options, 1

As I noted in my last post, I'm going to be working through Bell, DeVidi and Solomon's Logical Options in a seminar with some second year undergrads this coming term. To answer Richard Zach's question, I might have used Ted Sider's draft book as the main text if I'd known about it before plans were made in November -- though Logical Options perhaps still dovetails more neatly with our tree-based first year course.

Because the seminar starts on the first teaching day of term, I can't presume much reading. So we'll have to make a slow start. Here though are the reading notes for the first session, in case anyone else is interested. (It goes without saying that corrections and/or suggestions are always welcome!)

Thursday, December 27, 2007

Back to the grindstone

Hmmm, the next ten weeks or so are going to be busy.

For a start, I must find time to finish reading the papers in the Absolute Generality collection, and I'll no doubt continue to try to say something about them here. Not that the topic thrills me anywhere near as much as the number of posts might suggest, but it does remain puzzling, I have promised to write a BSL review, and blogging about the papers is as good a way as any of making myself do the reading moderately carefully.

Then at the beginning of term there is a Graduate Conference on the Philosophy of Logic and Mathematics in the faculty which promises well, and I'm down to comment on the first paper on vagueness (a topic on which I've got pretty rusty).

During term, I'm now giving some seminars to second-year students on the Bell/DeVidi/Solomon text, Logical Options. That book isn't at all ideal, on a closer look, but I can't think of anything better to cover the sort of ground the students need to cover, though that will certainly involve writing some supplementary notes. I'll link to the notes here as they get done.

And then there's going to be all the homework for the grad seminar I'm running with Thomas Forster on model theory, working through the shorter Hodges.

Oh, and I've promised to talk to the Jowett Society in Oxford in February. Gulp.

Well, I've only myself to blame, given that those are all works beyond the call of duty. Still, it should be tolerable fun (on the logician's rather stretched understanding of 'fun'). Better make a start tomorrow, though ...

Monday, December 24, 2007

Absolute Generality 19: Lavine on McGee's argument

There are still over twenty pages of Lavine's paper remaining. Since, to be frank, Lavine doesn't write with a light touch or Lewisian clarity, these are unnecessarily hard going. But having got this far, I suppose we might as well press on to the bitter end. And, as I've just indicated in the previous post in the series, I do have a bit of a vested interest in making better sense of his talk of schematic generalizations.

There are four sections, 8 -- 11, ahead of us. First, in Sec. 8, Lavine argues that even with schematic generalizations as he understands them in play, we still can't get a good version of McGee's argument that the quantifier rules suffice to determine that we are ultimately quantifying over a unique domain of absolutely everything, and so McGee's attempt to respond to the Skolemite argument fails. I think I do agree that the rules, even if interpreted schematically, don't fix a unique domain: but I'm still finding Lavine's talk about schematic generalizations pretty murky, so I'm not sure whether that is right. Not that I particularly want to join Lavine in defending the Skolemite argument: but I am happy to agree that McGee's way with the argument isn't the way to go. So let's not delay now over this.

In Sec. 9, Lavine discusses Williamson's arguments in his 2003 paper 'Everything' and claims that everything Williamson wants to do with absolutely unrestricted quantification can be done with schematic generalizations. Is that right? Well, patience! For I guess I really ought now to pause here to (re)read Williamson's paper, which I've been meaning to do anyway, and then return to Lavine's discussion in the hope that, in setting his position up against Williamson, more light will be thrown on the notion of schematic generality in play. But Williamson's paper is itself another fifty-page monster ... So I think -- just a little wearily -- that maybe this is the point at which to take that needed holiday break from absolute generality and Absolute Generality.

Back to it, with renewed vigour let's hope, in 2008!

Sunday, December 23, 2007

Simple things are best.

I did fiddle around a bit today trying to get a hack to work for splitting long posts into an initial para or two, with the rest to be revealed by hitting a "Read more" link (if you want to know how to do it in Blogger, see here). But in the end, I decided I didn't like the result. It's not as if even my longest posts are more than about half-a-dozen moderate sized paragraphs (and it is a good discipline to keep it like that): so it is in any case easy enough to scan to the end of one post to jump on to the next. I'll stick to the current simple format.

The Daughter recommended that I try OmniFocus 'task management' software which implements Getting Things Done type lists. Well, it's not that I haven't tasks to do, and the GTD idea really does work. But, having played about a bit with the beta version you can download, I reckon my life isn't so cluttered that carrying on using NoteBook and iCal won't work for me. Again, I think I'll stick to the simpler thing.

Absolute Generality 18: More on schematic generality

In a subsection entitled 'Schemes are not reducible to quantification', Lavine writes

Schematic letters and quantifiable variables have different inferential roles. If n is a schematic letter then one can infer S0 ≠ 0 from Sn ≠ 0, but that is not so if n is a quantifiable variable -- in that case the inference is valid only if n did not occur free in any of the premisses of the argument.

But, in so far as that is true, how does it establish the non-reducibility claim?

Of course, one familiar way of using schemes is e.g. as in Sec. 8.1 of my Gödel book where I am describing a quantifier-free arithmetic I call Baby Arithmetic, and say "any sentence that you get from the scheme Sζ ≠ 0 by substituting a standard numeral for the place-holder 'ζ' is an axiom". And to be sure, the role of the metalinguistic scheme Sζ ≠ 0 is different from that of the object language Sx ≠ 0. Still, it would be misleading to talk of inferring an instance like S0 ≠ 0 from the schema. And here the generality, signalled by 'any', can -- at least pending further, independent, argument -- be thought of as unproblematically quantificational (though not quantifying over numbers of course). So this sort of apparently anodyne use of numerical schemes doesn't make Lavine's point, unless he can offer some additional considerations. So what does he have in mind?
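To fix ideas about that anodyne use (this is only my gloss on the contrast just drawn, not Lavine's formulation):

\[
\text{Metalinguistic schema } S\zeta \neq 0: \text{ its numeral instances } S0 \neq 0,\ SS0 \neq 0,\ SSS0 \neq 0,\ \ldots \text{ are each axioms of Baby Arithmetic.}
\]
\[
\text{Object-language formula } Sx \neq 0: \text{ here } x \text{ is a quantifiable variable, apt to be bound as in } \forall x\,(Sx \neq 0).
\]

The 'any' in the first case generalizes over expressions (numerals); a quantifier binding the variable in the second generalizes over numbers.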

Lavine's discussion is not wonderfully clear. But I think the important thought comes out here:
One who doubts that the natural numbers form an actually infinite class will not take the scheme φ(n) → φ(Sn) to have a well-circumscribed class of instances and hence will not be willing to infer φ(x) → φ(Sx) from it; for the latter formula involves a quantifiable variable with the actually infinite class of all numbers as its domain or the actually infinite class of all numerals included in its substitution class.

We seemingly get a related thought e.g. in Dummett's paper 'What is mathematics about?', where he argues that understanding quantification over some class of abstract objects requires that we should 'grasp' the domain, that is, the totality of objects of that class -- which seems to imply that if there is no totality to be grasped, then here there can be no universal quantification properly understood.

But do note two things about this. First, a generalization's failing to have a well-circumscribed class of instances because we are talking in a rough and ready way and haven't bothered to be precise because we don't need to be, and its failing because we can't circumscribe the class because there is no relevant completed infinity (e.g. because of considerations about indefinite extensibility), are surely quite different cases. Lavine's moving from an initial example of the first kind when he talked about arm-waving generalizations we make in introductory logic lectures to his later consideration of cases of the second kind suggests an unwarranted slide. Second, I can see no reason at all to suppose that sophisticated schematic talk to avoid being committed to actual infinities is "more primitive" than quantificational generality. On the contrary.

Still, with those caveats, I guess I am sympathetic to Lavine's core claim that there is room for issuing schematic generalizations which don't commit us to a clear conception of a complete(able) domain. In fact, I'd better be sympathetic, because I actually use the same idea myself here (where I talk about ACA0's quantifications over subsets of numbers, and argue that the core motivation for ACA0 in fact only warrants a weaker schematic version of the theory). So, even though I don't think he really makes the case in his Sec. 7, I'm going to grant that there is something in Lavine's idea here, and move on next to consider what he does with the idea in the rest of the paper.

Saturday, December 22, 2007

Three cheers for the Stanford Encyclopedia

Richard Zach, one of the subject editors, has noted on his blog that there's a new entry on the Stanford Encyclopedia by Herb Enderton on Second-order and Higher-order Logic. The SEP is really developing quite terrifically, and it seems to me that the average standard of the entries on this freely accessible resource is distinctly better than e.g. on the expensive Routledge Encyclopedia. (I did write one entry for the latter, on C.D. Broad, still one of my favourite early 20th century philosophers: but I was given just half the space of his entry in the old Edwards encyclopedia -- and that seems to be indicative of the shortcomings of the Routledge Encyclopedia: it covers too much too thinly.)

Friday, December 21, 2007

Absolute Generality 17: Schematic generality

In Sec. 7 of his paper, Lavine argues that there is a distinct way of expressing generality, using "schemes" to declare that 'any instance [has a certain property], where "any" is to be sharply distinguished from "every"' (compare Russell's 1908 view). In fact, Lavine goes further, talking about the kind of generality involved here as 'more primitive than quantificational generality'.

We are supposed to be softened up for this idea by the thought that in fact distinctively schematic generalization is actually quite familiar to us:

When, early on in an introductory logic course, before a formal language has been introduced, one says that NOT(P AND NOT P) is valid, and gives natural language examples, the letter 'P' is being used as a full schematic letter. The students are not supposed to take it as having any particular domain -- there has as yet been no discussion of what the appropriate domain might be -- and it is, in the setting described, usually the case that it is not 'NOT(P AND NOT P)' that is being described as valid, but the natural-language examples that are instances of it.1

Here, talk about a full schematic variable is to indicate that 'what counts as an acceptable substitution instance is open ended and automatically expands as the language in use expands.'

But Lavine's motivating example doesn't impress. Sure, in an early lecture, I may say that any proposition of the form NOT(P AND NOT P) is logically true in virtue of the meanings of 'NOT' and 'AND'. But to get anywhere, I of course have to gloss this a bit (for a start, the very idea of a 'substitution instance' of that form needs quite a bit of explanation, since plugging in a declarative English sentence won't even yield a well-formed sentence). And, glossing such principles as non-contradiction and excluded middle, I for one certainly remark e.g. that we are setting aside issues about vagueness ('it is kinda raining and not raining, you know'), and issues about weird cases (liar sentences), and issues about sentences with empty names, and I may sometimes mention more possible exceptions. But yes, I -- like Lavine -- will leave things in a sense pretty 'open-ended' at this stage. Does that mean, though, that I'm engaged in something other than 'quantificational generality'? Does it mean that I haven't at least gestured at some roughly delimited appropriate domain? Isn't it rather that -- as quite often -- my quantifications are cheerfully a bit rough and ready?

'Ah, but you are forgetting the key point that 'what counts as an acceptable substitution instance is ... expands as the language in use expands.' But again, more needs to be said about the significance of this before we get a difference between schematic and quantificational generalizations. After all, what counts as an instance of 'All the rabbits at the bottom of the garden are white' changes as the population of rabbits expands. Does that make that claim not quantificational?

A general methodological point, famously emphasized by Kripke in his discussion of a supposed semantic ambiguity in the use of definite descriptions: we shouldn't multiply semantic interpretations beyond necessity, when we can explain variations in usage by using general principles of discourse in a broadly Gricean way. We shouldn't, in the present case, bifurcate interpretations of expressions of generality into the schematic and the genuinely quantificational cases if the apparent differences in usage here can be explained by the fact that we speak in ways which are only as precise and circumscribed as is needed for the various purposes at hand. And it seems that the 'open-ended' usage in the quoted motivating example can be treated as just a case of loose talk sufficient for rough introductory purposes.

So has Lavine some stronger arguments for insisting on a serious schematic/quantification distinction here?


1. Quite beside the present point, of course, but surely it isn't a great idea -- when you are trying to drill into beginners the idea that truth is the dimension of assessment for propositions and validity is the dimension of assessment for inferences -- to turn round and mess up a clean distinction by calling logically necessary propositions 'valid'. I know quite a few logic books do this, but why follow them in this bad practice?

Wednesday, December 19, 2007

Absolute Generality 16: Lavine on the problems, continued

(3) "The third objection to everything is technical and a bit difficult to state, and in addition it is relatively easily countered," so Lavine is brief. I will be too. Start with the thought that there can be subject areas in which for every true (∃x)Fx -- with the quantifier taken as restricted to such an area -- there is a name c such that Fc. There is then an issue whether to treat those restricted quantifiers referentially or substitutionally, yet supposedly no fact of the matter can decide the issue. So then it is indeterminate whether to treat c as having a denotation which needs to be in the domain of an unrestricted "everything". And so "everything" is indeterminate.

Lavine himself comments, "the argument ... works only if the only data that can be used to distinguish substitutional from referential quantification are the truth values of sentences about the subject matter at issue". And there is no conclusive reason to accept that Quinean doctrine. Relatedly: the argument only works if we can have no prior reason to suppose that c is operating as a name with a referent in Fc (prior to issues about quantifications involving F). And there is no good reason to accept that either -- read Evans on The Varieties of Reference. So argument (3) looks a non-starter.

(4) Which takes us to the fourth "objection to everything" that Lavine considers, which is the Skolemite argument again. Or to use his label, the Hollywood objection. Why that label?

Hollywood routinely produces the appearance of large cities, huge crowds, entire alien worlds, and so forth, in movies ... the trick is only to produce those portions of the cities, crowds, and worlds at which the camera points, and even to produce only those parts the camera can see -- not barns, but barn façades. One can produce appearances indistinguishable from those of cities, crowds, and worlds using only a minuscule part of those cities, crowds, and worlds. Skolem, using pretty much the Hollywood technique, showed that ... for every interpreted language with an infinite domain there is a small (countable) infinite substructure in which exactly the same sentences are true. Here, instead of just producing what the camera sees, one just keeps what the language "sees" or asserts to exist, one just takes out of the original structure one witness to every true existential sentence, etc.

That's really a rather nice, memorable, analogy (one that will stick in the mind for lectures!). And the headline news is that Lavine aims to rebut the objections offered by McGee to the Skolemite argument against the determinacy of supposedly absolutely unrestricted quantification.
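For the record, the result being glossed is the downward Löwenheim-Skolem theorem, which in one standard formulation (my wording, not Lavine's) runs:

\[
\text{If } \mathcal{M} \text{ is a structure for a countable first-order language } L \text{ with an infinite domain, then } \mathcal{M} \text{ has a countable elementary substructure } \mathcal{N} \preceq \mathcal{M},
\]
\[
\text{so that in particular, for every } L\text{-sentence } \varphi,\quad \mathcal{N} \models \varphi \text{ if and only if } \mathcal{M} \models \varphi.
\]

That last clause is the 'façade' effect: the countable substructure is linguistically indistinguishable from the original.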

One of McGee's arguments, as we noted, appeals to considerations about learnability. I didn't follow the argument and it turns out that Lavine too is unsure what is supposed to be going on. He offers an interpretation and readily shows that on that interpretation McGee's argument cuts little ice. I can't do better on McGee's behalf (not that I feel much inclined to try).

McGee's other main argument, we noted, is that "[t]he recognition that the rules of logical inference need to be open-ended ... frustrates Skolemite skepticism." Lavine's riposte is long and actually its thrust isn't that easy to follow. But he seems, inter alia, to make two points that I did in my comments on McGee. First, talking about possible extensions of languages won't help since we can Skolemize on languages that are already expanded to contain terms "for any object for which a term can be added, in any suitable modal sense of 'can'" (though neither Lavine nor I am clear enough about those suitable modal senses -- there is work to be done there). And second, Lavine agrees with McGee that the rules of inference for the quantifiers fix (given an appropriate background semantic framework) the semantic values of the quantifiers. But while fixing semantic values -- fixing the function that maps the semantic values of quantified predicates to truth-values -- tells us how domains feature in fixing the truth-values of quantified sentences, that just doesn't tell us what the domain is. And Skolemite considerations aside, it doesn't tell us whether or not the widest domain available in a given context (what then counts as "absolutely everything") can vary with context as the anti-absolutist view would have it.

So where does all this leave us, twenty pages into Lavine's long paper? Pretty much where we were. Considerations of indefinite extensibility have been shelved for later treatment. And the Skolemite argument is still in play (though nothing has yet been said that really shakes me out of the view that -- as I said before -- issues about the Skolemite argument are in fact orthogonal to the interestingly distinctive issues, the special problems, about absolute generality). However, there is a lot more to come ...

Tuesday, December 18, 2007

More introductions to forcing

I was going to post a follow-up to that link a few days ago to Tim Chow's "Beginner's guide to forcing", but Richard Zach at LogBlog has beaten me to it.

Three cheers for Radio 3

People grumble that it isn't what it was, but the output on BBC Radio 3 remains pretty amazing, and these days there is the great bonus of being able to "listen again" via the web for up to a week. While driving earlier today, I caught a few minutes of what seemed a strikingly good performance of the Beethoven op. 132 quartet. It turned out to be the Skampa Quartet. I've just "listened again" to the whole performance: I'd say it is extremely well worth catching here (click on "Afternoon on 3" for Tuesday, the Beethoven quartet starts about 19 minutes in). Terrific stuff.

Monday, December 17, 2007

Absolute Generality 15: Lavine on the problems

Shaughan Lavine's is one of two fifty-page papers in Absolute Generality (I'm not sure that the editors' relaxed attitude to overlong papers does either the authors or the readers a great service, but there it is). In fact, the paper divides into two parts. The first six sections review four anti-absolutist arguments, and criticize McGee's response on behalf of the absolutist to (in particular) the Skolemite argument. The last five sections are much more interesting, arguing that we can in fact do without absolute unrestricted generality -- rather, "full schematic generality will suffice", where Lavine is going to explain at some length what such schematic generality comes to.

But first things first. What are the four anti-absolutist arguments that Lavine considers? (1) First, there's the familiar argument from the paradoxes that suggests that certain concepts (set, ordinal) are indefinitely extensible and that it is not possible for a quantifier to have all sets or all ordinals in its domain. Now, unlike McGee, Lavine recognizes that the "objection from paradox" raises serious issues. However, he evidently thinks that any direct engagement with the objection just leads to a stand-off between the absolutist and anti-absolutist sides, "each finds the other paradoxical", so he initially sets the argument aside.

(2) The second argument is the "framework objection" that Hellman also discusses in his contribution. "Different metaphysical frameworks differ on what there is ... If the answers to [questions like are there any mathematical entities? is space composed of points?] are not matters of facts, but of choice of framework, ... [then there is only] quantification over everything there is according to the framework [and not] absolutely unrestricted quantification." Well, as I noted before, if two frameworks differ just in what they take to be ontologically basic and what they take to be (in some broad sense) constructions out of the basics, then that is beside the present point. We can still quantify over all the same things in the different frameworks -- for quantifying over everything isn't to be thought of as restricted quantification over whatever is putatively basic. So to make trouble here, the idea would have to be that there can be equally good rival frameworks, with only a conventional choice to be made between them, where Xs exist according to one framework, and cannot even be constructed according to the other. If there are such cases, then there may be an argument to be had: but that is a pretty big "if", and Lavine doesn't give us any reason to suppose that the condition can be made good, so let's pass on.

[To be continued.]

Sunday, December 16, 2007

Blog roll ... and thanks to Monica Vitti

I've added a link alongside to Tim Gowers's blog (thanks to Carrie Jenkins for recommending it), and I've removed links to a couple of seemingly dormant blogs.

Monica Vitti
I'm a bit staggered to find that the number of visitors to this blog has doubled over recent weeks, to over a thousand this week. A sudden enthusiasm out there for musings about absolute generality or recommendations for logicky reading? Somehow I doubt it! Perhaps unsurprisingly, the tracker shows that this geeky post still gets a number of hits a day, even though it just links to someone else's neat solution to a Leopard irritation. Rather more surprisingly, there is a steady if small stream of people who've ended up here because they are searching for photos of Monica Vitti (so as not to disappoint, here is another screen-capture from L'Eclisse). But I wonder why they are searching ... Other one-time film buffs, perhaps moved by Antonioni's death earlier in the year to revisit the haunting films first seen so long ago? Or is it that the films, and Vitti's iconic presence, are being rediscovered? I find it so very difficult to imagine how they would now seem, if being seen for the first time, against such an utterly different cinematic background to that of the early 60s.

Absolute Generality 14: A rule for 'everything'

The final section of McGee's paper is called 'A rule for "everything"'. He argues that "the semantic values of the quantifiers are fixed by the rules of inference". The claim rests on noting that (i) two universal quantifiers governed by e.g. the same UE and UI rules will be interderivable [McGee credits "a remarkable theorem of J.H. Harris", but it is an easy result, which is surely a familiar observation in the Gentzen/Prawitz/Dummett tradition]. McGee then claims that, assuming the quantifier rules don't misfire completely [like the tonk rules?], this implies that (ii) they determine a uniquely optimal candidate for their semantic value. And further, (iii) "the Harris theorem ... gives us reason to anticipate that, when we develop a semantic theory, it will favor unambiguously unrestricted quantification."
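Since McGee calls the theorem remarkable and I call it easy, perhaps it is worth setting down the derivation in one direction (my reconstruction of the familiar argument, not McGee's own presentation). Suppose ∀₁ and ∀₂ are both governed by the standard elimination and introduction rules. Then:

\begin{align*}
1.\quad & \forall_1 x\, Fx && \text{premise}\\
2.\quad & Fa && \text{from 1 by } \forall_1\text{-elimination}\\
3.\quad & \forall_2 x\, Fx && \text{from 2 by } \forall_2\text{-introduction, legitimate since the parameter } a \text{ does not occur in the premise}
\end{align*}

Swap the subscripts and the same two steps give the converse, so the two quantifiers are interderivable.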

The step from (i) to (ii) needs some heavy-duty assumptions -- after all, the intuitionist, for example, doesn't differ from the classical logician about the correct quantifier rules, but does have different things to say about semantic values. But McGee seems to be assuming a two-valued classical background; so let that pass. More seriously in the present context, the step on to (iii) is just question-begging, if it is supposed to be a defence of an absolutist reading of unrestricted quantification. Consider a non-absolutist like Glanzberg. He could cheerfully accept that the rules governing the use of unrestricted universal quantifiers fix that they run over the whole background domain, whatever that is (and that a pair of quantifiers governed by the same rules would both run over that same domain): but that leaves it entirely open whether the background domain available to us at any point is itself contextually fixed and can be subject to indefinite expansion.

Of course, says the anti-absolutist, there is no God's eye viewpoint from which we can squint sideways at our current practice and comment that right now "(absolutely) everything" on our lips doesn't really run over all that exists. "Everything" always means everything. What else? But that isn't what the anti-absolutist denies, and so it seems that McGee fails to really engage with the position.

Saturday, December 15, 2007

Logic tests

A few weeks back, I was asked to put together a twenty minute logicky multiple choice test for undergraduates applying to Cambridge to read philosophy. I had better not say much here at all, but I think I managed to produce a fairly tricky paper to do in the time. I'm told that scores spread out from worse-than-chance to almost-perfect, so I guess it worked to filter out the not-so-hot reasoners. I did, though, find it surprisingly difficult to put together a template that I was happy with and which we could weave variations around. I think next year we should instead use this splendid alternative.

Absolute Generality 13: Skepticism about the quantifiers in particular

In Sec. 2 of his paper, McGee reviews a number of grounds that might be offered for skepticism about absolutely unrestricted quantification. But he doesn't take the classic indefinite extensibility argument very seriously -- indeed he doesn't even mention Dummett, but rather offers a paragraph commenting on the disparity between what Russell and Whitehead say they are doing in PM to avoid vicious circles, and what they end up doing with the Axiom of Reducibility. Given the actual state of play in the debates, just ignoring the Dummettian version of the argument seems pretty odd to me.

But be that as it may, "the bothersome worry," according to McGee, "is not that our domain of quantification is always assuredly restricted [because of indefinite extensibility] but that the domain is never assuredly unrestricted [because of Skolemite arguments]". Here I am, trying to quantify all-inclusively in some canonical first-order formulation of my story of the world, and by the LS theorem there is a countable elementary model of that story. So what can make it the case that I'm not talking about that instead?

OK, it is a good question how we should best respond to the Skolemite argument, and McGee offers some thoughts. He suggests two main responses. The first appeals very briefly to considerations about learnability. I just don't follow the argument (but I note that Lavine is going to discuss it, so let's hang fire on this argument for the moment). The second is that "[t]he recognition that the rules of logical inference need to be open-ended ... frustrates Skolemite skepticism." Why?

The LS construction requires that every individual that's named in the language be an element of the countable subdomain S. If the individual constant c named something outside the domain S, then if '(∀x)' is taken to mean 'for every member of S', the principle of universal instantiation [when c is added to the language] would not be truth-preserving. Following Skolem's recipe gets us a countable set S with the property that interpreting the quantifiers as ranging over S makes the classical modes of inference truth-preserving, but when we expand the language by adding new constants, truth preservation is not maintained. The hypothesis that the quantified variables range over S cannot explain the inferential practices of people whose acceptance of universal instantiation is open-ended.

But this line of response by itself surely won't faze the subtle Skolemite. After all, there is a finite limit to the constants that a finite being like me can add to his language with any comprehension of what he is doing. So start with my actual language L. Construct the ideal language L+ by expanding L with all those constants I could add (and add to my theory of the world such sentences involving the new constants that I would then accept). Now Skolemize on that, and we are back with trouble that McGee's response, by construction, doesn't touch.

Actually, it seems to me that issues about the Skolemite argument are orthogonal to the distinctive issues, the special problems, about absolute generality. Suppose we do have a satisfactory response to Skolemite worries when applied e.g. to talk about "all real numbers" (supposing here that "real number" doesn't indefinitely extend): that still leaves the Dummettian worries about "all sets", "all ordinals" and the like in place just as they were. Suppose on the other hand we struggle to find a response to the Skolemite skeptic. Then it isn't just quantifications that aim to be absolutely general that are in trouble, but even some seemingly tame highly restricted ones, like generalizations about all the reals. Given this, I'm all for trying to separate out the distinctive issues about absolute generality and focussing on those, and then treating quite separately the entirely general Skolemite arguments which apply to (some) restricted and unrestricted quantifications alike.

Friday, December 14, 2007

Absolute Generality 12: McGee on semantic scepticism

I was intending to look at the papers in Absolute Generality in the order in which they are printed. But Glanzberg's piece is followed by a long one by Shaughan Lavine which is in significant part a discussion of Vann McGee's views, including those expressed in the latter's contribution to this book. So it seems sensible to discuss McGee's paper first.

The first section of this paper addresses "semantic skepticism in general". McGee writes

The prevalent skeptical view, which is sometimes called deflationism or minimalism, allows that a speaker can say things that are true, but denies that her ability to do so depends on the linguistic practices of herself and her community. ... [D]isquotationalism doesn't connect truth-conditions with patterns of usage. ... The (T)-sentences for own language are, for the deflationist, an inexplicable brute fact.

And there is more in the same vein. Well, I thought I was a kind of deflationist, but certainly I don't take myself to be wedded to the idea that truth-conditions aren't connected to patterns of usage. Au contraire. I'd say that it is precisely because of facts about the way I use "snow is white" that I am interpretable as using it to say that snow is white. And because "snow is white" is used by me to say that snow is white, then indeed "snow is white" on my lips is true just in case snow is white. So, I'd certainly say that the truth of such a (T)-sentence isn't inexplicable, it isn't an ungrounded brute fact. But the core deflationist thought -- that there is in the end, bells and whistles apart, no more to the content of the truth predicate than is given in such (T)-sentences (the notion, so to speak, lacks metaphysical weight) -- is surely quite consistent with that.

Still, let's not fuss about who gets to choose which position counts as properly "deflationist". My point is merely that the sort of extreme position which McGee seems to be talking about (though he is far from ideally clear) is remote from plausible versions of deflationism, is therefore to my mind not especially interesting, and in any case -- the key point here -- hasn't anything particularly to do with issues about absolute generality. So exactly why does he think it is going to be illuminating to come at the topic this way? I'm rather stumped. So I propose just to pass over his first section with a rather puzzled shrug.

Meanwhile, in Iraq, ...

the situation of many women much improves. Of course. What does one say?

Wednesday, December 12, 2007

Absolute Generality 11: Indefinitely expanding?

I mentioned that Glanzberg's paper focuses on Williamson's version of Russell's paradox for interpretations. I can't say that I find that version very illuminating, but there it is. But it does shape Glanzberg's discussion, and he tells the story about background domain expansion in terms of someone's reflecting directly about the interpretation of their own language. But I don't think that this is of the essence, nor the clearest way to present a discussion about what Dummett would call indefinite extensibility.

What is central to the discussion is Glanzberg's reflections on how far we should iterate the expansion of our domain of "absolutely everything", once we grasp the (supposed) Dummettian imperative to start on the process. Dummett's talk about indefinite extensibility suggests that he thinks that there is no determinate limit point (which, I take it, isn't to say that the expansion definitely goes on for ever, but that there is no point where we have a clear reason to stop). Glanzberg, by contrast, thinks there might here be reason just to embrace iteration up to the first non-recursive ordinal, or up to the first α + 1 ordinal, or up to the first α+ ordinal. He then says

In considering multiple options, I do not want to suggest that there is nothing to distinguish among them ... Rather, I think the moral to be drawn is that we do not yet know enough to be certain just how far iteration really does go.

But, once we play the Dummettian game, I just don't see why we should think that there will be a determinate answer here, and certainly Glanzberg gives us no clear reason to suppose otherwise.

Mathematical Knowledge

Another day, another new book to mention. My current and recent colleagues Mary Leng, Alexander Paseau, and Michael Potter have edited revised versions of some of the papers from the 2004 conference held here in Cambridge on Mathematical Knowledge. It is quite a slim, expensive, hardback; but it should certainly at least be in your university library, if only for the three papers by the editors.

I wonder why they thought that Durer's 'Melancholia' was an apt illustration for the cover ...?

Tuesday, December 11, 2007

Absolute Generality 10: Expanding background domains

Having tried twice and miserably failed to explain to a sullenly sceptical Hungarian girl that you don't make an espresso macchiato by filling up the cup to the top with hot milk, I was perhaps not in the optimal mood to sit in a cafe and wrestle with the rest of Glanzberg's paper again. Still, I tried my best.

What I was expecting, after the first three sections, was a story about how "background domains" get contextually set, a story drawing on the thoughts about what goes on in more common-or-garden contextual settings of restricted domains. Though I confess I was sceptical about how this could be pulled off -- given that the business of restricting quantifications to some subdomain of everything available to be quantified over in a context and the business of (so to speak) reaching out to everything would seem to be intuitively quite different. But in fact, what Glanzberg gives us is not the whole story about background-domain-fixing ("very little has been said here about how an initial background domain is set" p. 71), but rather a story about how we might go about domain-expansion when we are (supposedly) brought to acknowledge new objects like the Russell class which cannot (on pain of paradox) already be in the domain of objects that we are currently countenancing as all that there are.

Now, Glanzberg says (p. 62) that although domain-expansion isn't a case of setting a restricted domain, "it is still the setting of a domain of quantification ... [so] it should be governed by the principles" discussed in Sec. 3 of the paper. But in fact, as we'll see, at most one of the principles to do with common-or-garden domain fixing arguably features in the discussion here about domain-expansion (and I'd say not even that one).

What Glanzberg does discuss is the following line of thought (I'll use a more familiar example, though he prefers to work with Williamson's variant Russell paradox about interpretations). Suppose I'm cheerfully quantifying over everything, including sets. You then -- how? invoking an All in One principle? -- get me to countenance that domain as itself a something, an object, and then show me that it is one which on pain of paradox can't be in the domain I started off with. Ok, so now this new object is a "topic of discourse", and what is now covered by "(absolutely) everything" should now include that new object -- and I suppose we could see this as an application of the same principle about domains including current topics which we mentioned as governing ordinary domain setting. (But equally, as I said before, it in fact isn't at all clear how we should handle that supposed general principle in the case of ordinary domains. And the thought in the present case just needn't invoke any wobbly notion of 'topic' but comes down to the following more basic point: if we are brought explicitly to acknowledge the existence of an object outside the previous domain of what we counted as "(absolutely) everything", then that forces an expansion of what we must now -- in our new situation -- include in "(absolutely) everything".)

So, to repeat, suppose you bring me to acknowledge e.g. a set-like object beyond those currently covered by "all sets". I expand my domain of quantification to contain that too. But not just that too. I'll also need to add ... well, what? At least, presumably, all the other objects that I can define in terms of it, using notions that I already have. And so then what? Thinking of all these objects together with the old ones, I can -- by the same move as before -- take all those together as a domain, and now we have a new object again. And off we go, iterating the procedure. It is, of course, not for nothing that Dummett called this sort of expansion indefinitely extensible!

We are now in very familiar territory, but territory unconnected with the early sections of the paper. We can now ask: just how far along the ordinals should we iterate? Maybe -- Glanzberg seems to be saying -- it isn't a matter of indefinite extensibility, but there is a natural limit. But I found the discussion here to be not very clear.

Where does all this leave us then? As I say, the early sections about common-or-garden domain setting in fact drop out as pretty irrelevant. If the paper does have anything interesting to say, it is in the later sections, particularly in Sec. 6.2 about -- so to speak -- how indefinitely extensible indefinitely extensible concepts are. So, OK, I'll return to have another bash at that. (Though I will grumpily add that I think the editors could have taken a firmer line in getting Glanzberg to make his arguments more accessible.)

Monday, December 10, 2007

Hartley Slater goes off-piste

B. Hartley Slater has a book out, The De-Mathematisation of Logic, collecting together some of his papers. You can in fact download it free by filling in the form here. As you'll see just from reading the first few pages, he certainly thinks that most of us are skiing too fast down the conventional well-worn slopes, ending up in tangles of confusion. Methinks he protests too much; but still, you might find some thought-provoking episodes here.

Sunday, December 09, 2007

L'Eclisse


Monica Vitti
It is bad luck to return from blue skies in Milan to a miserably wet and cold Cambridge (it is one of those times when those notices in the Botanical Gardens classifying us as falling into a 'semi-arid' region seem a mockery). And term has ended so the faculty is almost deserted, so that's not very cheering either. It all gives added attraction to the idea of spending a lot more of the year in Italy when I retire.

In an Italian mood, we've just watched L'Eclisse for the first time in very many years. It does remain quite astonishing. And what is remarkable is just how many of the images seem so very familiar, having been burnt into the memory by perhaps three viewings in the cinema decades ago. For a tiny example, there is a moment when Monica Vitti in longshot is unhappily walking home after a bad night alongside a grassy bank, carrying her handbag and perhaps a silk wrap or scarf, and she suddenly -- as one might -- swishes the scarf through some plants by the road. Why on earth should one remember that? Yet both of us watching did.

Absolute Generality 9: Restricting quantifiers

Section 3 of Glanzberg's paper gives an overview of the ways in which explicit and common-or-garden-contextual restrictions on quantifiers work (as background to a discussion in later sections about how "background domains" are fixed). This section isn't intended to be more than just a quick set of reminders about familiar stuff, so we can be equally speedy here.

Going along with Glanzberg for the moment, suppose the "background domain" in a context is M (the absolutist says there is ultimately one fixed background domain containing everything: the contextualist says that M can vary from context to context). More or less explicit restrictions on quantifiers plus common-or-garden-contextual restrictions carve out from this background a subdomain D (so that Every A is B is interpreted as true just when the As in D are B, and so on). How?

Explicit restrictions are relatively unproblematic. But how is the contextual carving done? There are cases and cases. For example, there is carving by 'anaphora on predicates from the context', as in

1. Susan found most books which Bill needs, but few were important,

where 'few' is naturally heard as restricted to the books that Susan found and Bill needs. Then second, there is 'accommodation', where we rely perhaps on some Gricean mechanisms to read quantifiers so that claims made are sensible contributions to the conversational context. For example, as we are about to leave for the airport, I reassuringly say that, yes, I'm sure,

2. Everything is packed

when maybe some salient things (the passports, say) are in plain view in my hand and my keys are jingling in my pocket. Here, my claim is heard, and is intended to be heard, as generalizing over those things that it was appropriate to pack, or some such.

There's a third, rather different way in which context can constrain domain selection, that isn't a matter of domain restriction but rather a matter of how, when an object is already featuring prominently enough as a focal topic of discourse at a particular point, "we will expect contextually set quantifier domains to include it". (Though I guess that this point has to be handled with a bit of care. The taxi for the airport arrives very early: we comment on it. "But," I say, "we might as well leave in the taxi now. Everything is packed." The quantifier of course doesn't now include the taxi, even though it is the current topic of discourse.)

Well, so far, let's suppose, so good. But how are these reminders about common-or-garden contextual settings of domains going to help us with understanding what is going on in fixing 'background' domains? The story continues ...

Saturday, December 08, 2007

Twaddle about religion and science

The Guardian's Review of books on a Saturday is always worth reading, and the lead articles can be magnificent. This week, for example, it prints Doris Lessing's Nobel prize acceptance speech. Still, the reviews do occasionally get me spluttering into my coffee.

For a shaming display of sheer intellectual incompetence, how about one Colin Tudge, reviewing today a book on the conflict or otherwise between science and religion. Here's what he says about science:

Scientists study only those aspects of the universe that it is within their gift to study: what is observable; what is measurable and amenable to statistical analysis; and, indeed, what they can afford to study within the means and time available. Science thus emerges as a giant tautology, a "closed system".

That is simply fatuous. First, science isn't and can't be a tautology -- science makes contentful checkable predictions, and no mere tautology (even in a stretched sense of "tautology") can do that. Second, the fact that scientists are limited by their endowments (both cognitive and financial) in no way implies that science is a "closed system" on any sensible understanding of that phrase -- at least in our early stage in the game, novel conjectures and new experimental techniques to test them are always possible.

But it gets worse:
Religion, by contrast, accepts the limitations of our senses and brains and posits at least the possibility that there is more going on than meets the eye - a meta-dimension that might be called transcendental.

First, the "by contrast" is utterly inept. Anyone of a naturalistic disposition accepts the limitations of our senses and brains; science indeed already tells us that there is a lot more going on than meets the eye; and it is a pretty good bet, on scientific grounds, that there is a more going than we will ever be able to get our brains around. So what? Accepting such limitations has nothing whatever to do with "meta-dimensions" (ye gods, what on earth could that even mean?); and it would be just a horribly bad pun to slide from the thought that there might be aspects of the natural world that transcend our ability to get our heads around them to the thought that a properly modest view of our own cognitive limitations means countenancing murky religious claims about the "transcendental". Yet, we are told,

[A]theism - when you boil it down - is little more than dogma: simple denial, a refusal to take seriously the proposition that there could be more to the universe than meets the eye.

So, according to Tudge, no atheist takes a sensibly modest view about our cognitive limitations. What a daft thing to say. It is plainly entirely consistent -- I don't here say correct, but at least consistent -- to hold that (a) our cognitive capacities are limited, but (b) among the things we do have very good reason to believe are that Zeus, Thor, Baal and the like are fictions, and that the Gods of contemporary believers are in the same boat. And that entirely consistent position is, for a start, the one held by the atheist philosophers I know (in fact, by most of the philosophers I know, full stop).

There's more that is as bad in Tudge's piece, as you can read for yourself. If this kind of utterly shabby thinking is the best that can be done on behalf of religion, then it does indeed deserve everything that Dawkins and co. throw at it.

Added later, in response to comments elsewhere: If I wrote with a little heat, I was just echoing Colin Tudge, who says of Dennett and Dawkins:

On matters of theology their arguments are a disgrace: assertive without substance; demanding evidence while offering none; staggeringly unscholarly.

I think it is appropriate to point out, in similar words, that -- whatever the merits or demerits of Dennett and Dawkins -- Tudge's arguments are a disgrace, assertive without substance, and staggeringly unscholarly.

Absolute Generality 8: Glanzberg on contextualism

According to Michael Glanzberg's "Context and Unrestricted Quantification", quantifiers always have to be understood as ranging over some contextually given domain; and paradoxes like Russell's show that, 'for any given context, there is a distinct context which provides a wider domain of quantification'. So he is defending 'a contextualist version of the view that there is no absolutely unrestricted quantification'.

The aim of this paper, however, isn't to directly defend the contextualist thesis as the best response to the paradoxes (Glanzberg has argued the case elsewhere), so much as to explore more closely how best to articulate the thesis, and in particular to explore how the idea that quantifiers always have to be understood in terms of a background domain which is set contextually relates to more common-or-garden cases of quantifier domain restriction.

Consider, for example,

1. Every graduate student turned up to the party, and some undergraduates did too.
2. Everyone left before midnight.

In the first case there are explicitly restricted quantifiers. But we of course don't mean every graduate student in the world turned up: there is also a contextually supplied restriction to e.g. students in the Cambridge philosophy department. In the second case, context does all the restricting -- e.g. to the people at the party.

So far, so familiar. But what about
3. Absolutely everything that exists, without exception, is self-identical?

Here there is no explicit restriction to a subclass of what exists; nor need there be any common-or-garden-contextual restriction of the ordinary kind. Still, Glanzberg wants to say, in any given context there is a background domain ('the widest domain of quantification available' in that context). This is the domain over which quantifications as in an occurrence of (3) range, when there is no explicit restriction and no common-or-garden-contextual restriction. And, the argument goes, there is a kind of contextual relativity in fixing this background domain (so, in a sense, the likes of (3) involve contextually relative although unrestricted quantifiers):

Whereas the absolutist holds there is one fixed background domain, which is simply 'absolutely everything', the contextualist holds that different contexts can have different background domains.

But of course the contextualist needs to say more than that: it isn't just that different contexts might give different extensions to 'absolutely everything', it is also the case that there is no way of setting up a 'maximal' context in which our quantifiers do succeed in being maximally, unexpandably, all-embracing. For example, the contextualist must say that, even given a context of sophisticated philosophical reflection, when -- in full awareness of the issues -- we essay a claim like

4. Absolutely everything that might fall under our quantifiers in any context whatsoever is self-identical,

we still somehow must fall short of our aim, because the context can be changed in a way that will expand what counts as everything. But how plausible is this? Well, we'll have to see how the explanations develop over the rest of the paper.

Friday, December 07, 2007

Absolute Generality again

A while ago I made a start here on blogviewing Absolute Generality, edited by Agustín Rayo and Gabriel Uzquiano (OUP, 2006): but I only had a chance to comment on two papers before the chaos of term and other commitments brought things to a halt. I'm reviewing the book for the Bulletin of Symbolic Logic, so I really need to get back down to business, so I can write the 'proper' review over the next few weeks. Next up here, then, will be comments on the contribution by Michael Glanzberg: I'm planning to go through the pieces in roughly the order they are printed. If you want to know what I thought about the contributions by Kit Fine and Geoffrey Hellman, then -- rather than trawl back through the blog -- you can find what I wrote all in one place here.

Wednesday, December 05, 2007

Forcing myself back to logic!

Tim Chow has posted a draft "beginner's guide to forcing". I very much like these opening remarks:

All mathematicians are familiar with the concept of an open research problem. I propose the less familiar concept of an open exposition problem. Solving an open exposition problem means explaining a mathematical subject in a way that renders it totally perspicuous. Every step should be motivated and clear; ideally, students should feel that they could have arrived at the results themselves. The proofs should be 'natural' in Donald Newman's sense: "This term ... is introduced to mean not having any ad hoc constructions or brilliancies. A 'natural' proof, then, is one which proves itself, one available to the 'common mathematician in the streets'." I believe that it is an open exposition problem to explain forcing. Current treatments allow readers to verify the truth of the basic theorems, and to progress fairly rapidly to the point where they can use forcing to prove their own independence results .... However, in all treatments that I know of, one is left feeling that only a genius with fantastic intuition or technical virtuosity could have found the road to the final result.

Leaving aside the question of how well Tim Chow brings off his expository task -- though it looks a very interesting attempt to my inexpert eyes, and I'm off to read it more carefully -- I absolutely agree with him about the importance of such expository projects, giving "natural" proofs of key results in various levels of detail: these things are really difficult to do well yet are hugely worth attempting for the illumination that they bring.

Also, Philosophy of Mathematics: 5 Questions (which I've mentioned before as forthcoming) is now out. This is a rather different kind of exercise in standing back and trying to give an overview, with 28 philosophers and logicians giving their takes on the current state of play in philosophy of mathematics (the authors range from Jeremy Avigad, Steve Awodey and John L. Bell through to Philip Welch, Crispin Wright and Edward N. Zalta). The five questions are

  1. Why were you initially drawn to the foundations of mathematics and/or the philosophy of mathematics?
  2. What example(s) from your work (or the work of others) illustrates the use of mathematics for philosophy?
  3. What is the proper role of philosophy of mathematics in relation to logic, foundations of mathematics, the traditional core areas of mathematics, and science?
  4. What do you consider the most neglected topics and/or contributions in late 20th century philosophy of mathematics?
  5. What are the most important open problems in the philosophy of mathematics and what are the prospects for progress?
The answers should be very illuminating.

Tuesday, December 04, 2007

Postcard from Milan #3

Continuing to wax lyrical about the food here would quickly get very boring, so I won't -- I'll just say that if you get a chance to eat at the Trattoria dei Cacciatori, something of a Milanese institution, in old farm buildings on the outskirts of the city, then do take it!

But enough already. It has been very good being here again, in all sorts of ways (not just gastronomic!). And in the gaps between other things, I'm having the welcome chance to idly turn over in my mind what my next work project(s) should be. For the first time in far too many years, I find myself a completely free agent. No journal to edit, no necessity to write anything to get brownie points for this or that purpose, no new courses I need to work up. The feeling of freedom is quite a novelty. Slightly disconcerting, but I'm rather enjoying it. And some possible ideas are already beginning to take shape ...

Sunday, December 02, 2007

Postcard from Milan #2

Two of life's mysteries. Why is it more or less impossible to get a decent cappuccino in England when any Autogrill stop on an Italian motorway can do a brilliant one? And just why is tagliatelle in butter with white truffle shaved over it to die for?

Saturday, December 01, 2007

Postcard from Milan #1

I'm taking an overdue weekend away from the delights of Cambridge and from thinking/teaching about matters logical. Milan isn't at all my favourite Italian city (too big, too flat, too nineteenth century, too North European): but The Daughter is here, and the place has its moments. The front of the Duomo is almost revealed again after restoration and looks amazing. The shops in their way look equally wonderful (though it is mostly quite mad of course). L'Artigiano in Fiera is on at the moment, a huge fair, much of it showing off local products from all over Italy, including -- inevitably -- unending stalls of hams and sausages and salami and lardo and other delights of more variety than could last a lifetime. We were bowled over by the pride of the producers. Four people came back laden with porcine goodies.

And then there are the restaurants. Last night to La Riscacca Blu for the best seafood meal ever. Carpaccio of tuna and yellowtail and swordfish and anchovies. A salad of scampi and onions and tomatoes awash in oil. Then a very light fritto misto of scampi and octopus and squid and strips of courgettes. A pause. Pasta with red mullet. Then a whole turbot between four, simply baked but wonderful. Fine chardonnay from Alto Adige. Like many good Italian restaurants, the place looks nothing special. But it seems just impossible to eat like this in England, even if you spend an absolute fortune -- and we didn't. (Our neighbours at the next table were a tough-looking quartet of heavy-set guys, tucking in with great relish to a distinctly adventurous menu. It dawned on us that they must have been bodyguards for the minister of defence a couple of tables down.) So, when you are next in Milan ...


Wednesday, November 28, 2007

Boolos, Burgess and Jeffrey, 5th edn.

It seems only yesterday that the fourth edition came out: but -- as I found in the CUP bookshop today -- there's now a fifth edition of Computability and Logic. The preface announces that the main revision in this edition is a "simplification of the treatment of the representability of recursive functions". And the material on Robinson Arithmetic has been rewritten, and there's a more explicit discussion of the two uses we can make of Church's Thesis (essential and eliminable).

I confess to still perhaps preferring the more spartan elegance of the early editions. And I think that the book has always been, then and now, rather harder for students than the authors intended (which was one reason that I imagined that there was room for my Gödel book, even though it criss-crosses over quite a bit of the same territory). But credit where a great deal of credit is due: this is still a lovely book, full of good things and with some terrific explanations of tricky stuff. So hasten to your bookshop ...

Tuesday, November 27, 2007

Logic lives?

Well, this is moderately cheering (about logic matters, at any rate). First, about fifteen people have said that they are interested in a reading group working through the shorter Hodges on model theory. And now, about a fifth [update: over a quarter] of our second year undergrads have already said they are interested in another reading group working through Bell, DeVidi and Solomon (I've warned them it isn't a doddle). So despite my despondency about the state of logic in the UK more generally, interest does seem to live on here (bright students will see that there's perhaps more serious sustenance in logic than in e.g. worrying about how they know there's a coffee cup in front of them, a game which palls for most of us after about twelve minutes). It's just that we don't have enough people to sustain a proper logic teaching programme.

Losses

To Edinburgh yesterday, for the funeral of John Davidson, an old friend going back to early Aberystwyth days, who died far too young. The funeral was done with great aptness to the man. The poet Jeffrey Wainwright read 'Stoic', which he had written some ten years since for and about John. "The brook's lullaby" from Die Schöne Müllerin to conclude (for John found endless solace in Schubert's songs). Then later glasses were raised at The Scotch Malt Whisky Society members' rooms in Leith, and old acquaintances seen whom, with a last link gone, I suppose we are not likely to come across again. A hard day.

And on getting back to Cambridge I was shocked and saddened to hear of the sudden and quite unexpected death of Peter Lipton here.

Sunday, November 25, 2007

ACA0, #7: The last word

At last. If you go to my Gödel book's website and click on the link under "Latest Additions", you'll no longer find a rambling, unfinished essay but a much crisper extended version of the talk I gave in Oxford a week ago. I argue that ACA0 strictly speaking overgenerates, and that the official conceptual motivation for the theory in fact favours a weaker theory (one that doesn't threaten to inductively inflate, yet which is just as competent at generating proxies for theorems of classical analysis). And I make a similar claim about a different family of extensions of first-order PA, i.e. extensions by adding truth-theories: here too I suggest that a familiar weak theory overgenerates. (I suggest that this matters if we are concerned to evaluate and perhaps defend Dan Isaacson's Thesis that first-order PA sets a limit to what can be established from purely arithmetical considerations plus logic alone.)
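(A quick gloss on 'proxies', for anyone not steeped in the reverse mathematics literature: theorems of classical analysis get re-expressed in the language of second-order arithmetic by coding the objects of analysis as sets or sequences of naturals. For instance -- and this is just the standard textbook coding, quoted from memory, nothing special to my paper -- a real number is represented by a quickly converging Cauchy sequence of rationals, i.e. by a sequence q_0, q_1, q_2, ... such that

    ∀k∀i(|q_k − q_{k+i}| ≤ 2^{-k}),

the rationals having themselves already been coded by pairs of naturals. Then e.g. the monotone convergence theorem becomes a claim about such codes -- one which, famously, is provable in ACA0 and indeed equivalent to arithmetical comprehension over a weak base theory.)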

Friday, November 23, 2007

I'm with Turgenev

Figes tells us that Turgenev wrote a famous bit of verse about the critic Stasov: the penultimate lines are ...

Argue even with a fool:
You will not gain glory
But sometimes it is fun.

I confess I've been spending a few minutes here and there blasting off again at a few fools on sci.logic. The high-minded justification is that, given the thousands of people who do visit that group each day, someone ought occasionally to say "enough is enough" when faced with streams of garbage. But let's be honest, Turgenev is right: sometimes it is just fun.

Thursday, November 22, 2007

Looking on the bright side, logically speaking

Heck, advancing years are a terrible thing. Apart from the obvious drawbacks -- and let's not go into those -- you start forgetting stuff that once upon a time you kinda knew a bit about. Ok, the things that you've regularly lectured on/written about sort of stay in place, more or less. It's the more peripheral stuff that you find has been dragged to trash by the passing of time. Humppphh. So there I was, sitting like an idiot in the math logic seminar, not having had time to do enough homework, and just not able to recover what I once knew(?) from Bell and Slomson's Models and Ultraproducts. Duh. But then I suppose I can look on the bright side. There will be the fun of rediscovery rereading the book (which I recall as very good) before the math logic seminar gets a bit serious next term. Though I'm not sure that is much compensation.

Something very different I'm currently devouring for the first time as my late night reading -- something I wanted to read when it came out, but I've only just got round to -- is Orlando Figes' Natasha's Dance. I'm only about a quarter through but it is simply fantastic. He writes with a novelist's flair, building up a multilayered picture that throws so much light on what is going on in Russian literature. It's wonderfully readable. I'm bowled over in admiration. So back to it ...

Tuesday, November 20, 2007

Back from Oxford

I'm back from Oxford where I was giving a talk at Dan Isaacson's philosophy of maths seminar. I was a bit anxious about the occasion in advance as I knew that Michael Dummett still tends to go. In the event he did come, and nodded and smiled encouragingly from time to time, and said he enjoyed the talk as he left at the end of the discussion. Which has certainly made my week. The discussion was helpful (Dan was pressing me hard to be clearer), but the basic line of argument seemed to survive. Phew. So I'll write up a version of this stuff over the next week and put it on my website. Watch for a link.

I'd not been to Oxford since The Daughter left about eight years ago: and I'm always struck anew how wonderful the place looks (even on a rainy late-autumn night). Cambridge really hasn't anything to compare with Radcliffe Square, or indeed the streets around and about.

I spent more than I should have done in Blackwell's. I've made a start since on James Ladyman and Don Ross's Every Thing Must Go: Metaphysics Naturalized which I bought despite the outrageous price, gripped by reading the first twenty pages or so in the shop. Having now finished the first long chapter, I find myself in considerable agreement with their basic line. Indeed, I'd say they rather pull their punches in criticizing arm-chair metaphysics based -- if based on science at all -- on a grasp of science that stops round about A-level chemistry (as they rudely but fairly put it); and some of the discussion so far is a bit oracular. Still, there's a lot of the book to come, so we'll see how well they sharpen their points and hammer them home. So far, so rather good!

"We segwayed"

An article in the Times today: "We segwayed in to a conversation about the au pair ...". Truly, the barbarians are already inside the gate.

Saturday, November 17, 2007

Ahem .... Leopard stacks icons

It is seriously sad to care about these things. I know. But for those who are equally miffed by the Leopard update's failure to sort the stack icon flaw -- ok, it's not exactly a bug but everyone thinks it is a quite daft design choice -- I've just found a very elegant solution here. (Apologies to all those who haven't the foggiest what I'm talking about!)

Friday, November 16, 2007

Homeopathy

A wonderful piece in today's Guardian, reprinted here, by the estimable Ben Goldacre, on quack science and the moral bankruptcy of professional defenders of homeopathy. Read it.

CMS at twilight

I was coming out of CMS this evening and was struck, not for the first time, how terrific it looks in twilight or at night. In daylight, the buildings look relatively sober, though very handsomely done. But later it can look almost magical.

The small image here links to a slide show of very atmospheric photos by Stefan Meinel, a grad student in DPMMS. They seem to me to hit off the place excellently.

(I bumped into Thomas Forster there, and the plans for a serious model theory reading group for philosophers/mathmos/compscis next term indeed look to be a runner. Excellent news.)

Thursday, November 15, 2007

Logical excitements ...

A day that didn't quite go to plan in a couple of ways, but still, three good logical things happened.

First, I'd been given the task of updating and expanding the logicky part of the entrance test done by undergraduates applying to read philosophy at Cambridge. It would be giving the game away to say very much about this. But I did have quite a bit more fun than I was expecting trawling around to see how others -- like US grad schools -- manage these things, and putting together some suitably testing questions. (I'm not sure I'd have ever got into Cambridge with all these newfangled tests, and interviews to find out if you are a well-rounded human being: in my day I just did a lot of nasty problems in projective geometry very fast, and bingo ...)

Second, it seems that -- thanks to Thomas Forster -- my plan to learn some more model theory by running a reading group with some hard-core mathmos joining in will indeed come off next term: that should be terrific. The default suggestion is that we do the shorter Hodges (though the Marker book looks a possibility too).

And third, not least, it was Logic Seminar day -- the highpoint of the academic week really. At the moment, at Michael Potter's instigation, we are thinking about later Dummettian arguments against realism in mathematics. Or rather Michael and some of our grads are thinking, and I'm trying to keep up. Today we were looking at Peter Sullivan's contribution to the new "Schilpp" volume on Dummett, where he seeks to locate in Frege: Philosophy of Mathematics a really rather simple argument for anti-realism. Applied to arithmetic, we have Premise One: the logicist claim that arithmetic is broadly analytic, its true claims being made true by the character of our concepts. Premise Two: what is given in our concepts does not suffice to fix the truth-value of every claim in the language of arithmetic. So there is nothing to make it the case that every such claim is determinately true or false.

This is an argument against realism in mathematics which (a) doesn't threaten to sprawl, in ways that are difficult to handle, into a general anti-realism, and (b) doesn't depend on problematic claims about indefinite extensibility. Peter Sullivan certainly seems to show that the argument is indeed a thread running through the later Dummett, and his exploration is very illuminating (even if I think I lost the plot a bit towards the end of this long paper). I learnt a lot from the discussion. Great stuff.

Monday, November 12, 2007

Who needs an iPhone? ...

... but I sure hope that this rumour is true! That would be so very useful to have.

Soothing the troubled brow

I needed a bit of distraction after another faculty meeting. Inevitably, in my experience, as soon as departments start talking about root-and-branch reorganization of degree structures, it quickly becomes clear that the existing tolerance, more or less, of current arrangements in fact hides really quite deep disagreements about what we should really be doing, and why. The more you talk about possible changes, the less chance of coming to a happy consensus. Still, twenty minutes playing around making a few desktop pictures helped soothe the troubled brow afterwards. This is the view from San Gusmè near Siena, very early on an August morning this summer.


Now back to ACA0 ...

Sunday, November 11, 2007

No names, no pack-drill

I've just been reading four or five assorted recent articles (as it happens, a bit away from my usual logical/phil. maths stamping ground: but sometimes I feel masochistic). They were tedious, unexciting, ephemeral stuff, on issues that are difficult to take very seriously, and all were about three times the necessary length. They were also all written by evidently very clever people, and were probably more or less right -- or at the very least, the pieces made sane-seeming moves in the scholastic game -- and they will give their respective authors brownie points for promotion. But I just couldn't see the point. Maybe I'm not cut out for this philosophy malarkey. But I'd rather say: they just illustrate how philosophy loses its way when it stops engaging with serious foundational issues in logic, mathematics and science. I've quoted Steve Stich here before, but it's worth repeating what he wrote: "The idea that philosophy could be kept apart from the sciences would have been dismissed out of hand by most of the great philosophers of the 17th and 18th centuries. But many contemporary philosophers believe they can practice their craft without knowing what is going on in the natural and social sciences. ... The results of philosophy done in this way are typically sterile and often silly." Indeed.

ACA0 again

I'm coming round to see things like this. Despite its centrality in the reverse mathematics programme, ACA0 is in fact (as it were) conceptually unstable. For take any open sentence F(x) of the language of second-order arithmetic which may embed set-quantifiers. Then ACA0 proves ∀x(Fx ∨ ¬Fx). But if, from ACA0's viewpoint, each number indeed determinately falls under F or doesn't, then what good reason can someone who accepts ACA0 have for not allowing F also to appear in instances of induction? But following that reasoning inflates ACA0 to the much stronger full ACA.
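(To display the contrast that this inflation argument trades on -- and this is just the standard textbook way of setting things up, as in Simpson's Subsystems of Second Order Arithmetic, not anything peculiar to my own line: ACA0 has the arithmetical comprehension scheme

    ∃X∀n(n ∈ X ↔ φ(n)),   for φ with no bound set variables (free set parameters allowed, X not free in φ),

plus induction in the form of a single axiom about sets,

    ∀X((0 ∈ X ∧ ∀n(n ∈ X → n+1 ∈ X)) → ∀n n ∈ X).

Full ACA instead has the induction scheme

    (φ(0) ∧ ∀n(φ(n) → φ(n+1))) → ∀n φ(n)

for every formula φ of the second-order language, set-quantifiers and all. The worry is that someone who grants that any such φ determinately sorts the numbers has been given no principled reason for stopping at the first package rather than the second.)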

So in fact, I want to argue that the usual conceptual grounding for ACA0 shouldn't take you quite as far as that theory: roughly, it takes you to a more restricted theory with parametric (rather than quantified) versions of induction and comprehension and lacking formulae with embedded set quantifiers, though you can allow prenex existentials. That still -- or so it seems to me at the moment -- gives you all the usual constructions for proving proxies of theorems of analysis: it's just that we can lose the unwanted stuff like ∀x(Fx ∨ ¬Fx) when F embeds set-quantifiers.

I'm going to talk about this and related matters in Oxford in about a week's time. If the suggestion survives its outing, I'll put a souped-up version of this stuff on my webpage. Fingers crossed.

Friday, November 09, 2007

Such a rarity, this blogging lark!

The Guardian today estimates that there are four million bloggers in Britain alone. Yes, four million -- i.e. about 15% of the twenty-six million web-active people in the country. Ye gods. So much for that famed British reserve ...

Wednesday, November 07, 2007

So, not Manzano ... but who?

(For new readers: in our math logic reading group, we are working through Maria Manzano's Model Theory, as a warm-up exercise to tackling Hodges's Shorter Model Theory.)

Writing logic books involves all kinds of compromises and trade-offs between approachability and absolute precision, between breadth and depth; and all kinds of decisions have to be made about coverage, the amount of more philosophical commentary you give, and so on and so forth. It is, as I found writing my two logic books, a horribly difficult business making sensible decisions -- which is not to say that there is just one way of getting them right. So I'm now pretty reluctant to get too critical (am I mellowing with age?). Still, I have to say that I've come to think that Manzano's book really isn't terribly good. I certainly wouldn't recommend anyone following our example and using the book. True, Manzano is not at all well served by her translator, but that's only a small part of it: key ideas are far too often just not sufficiently well motivated and clearly explained. Which is a pity, because there are certainly some nice sections. But it is all too uneven in execution to be the helpful introductory guide we were looking for. Though it is certainly not obvious who, at this sort of level, we should have been reading instead.

I hereby bag the title Model Theory without Tears: A Philosophical Introduction for the book which my counterpart in some more or less close possible world is beginning to wonder about writing before the equally not-yet-started Ordinals, Cuts and Consistency. Choices, choices.

Callas, Tosca ... and YouTube

There are some astonishing finds on YouTube, amongst all the trivial dross. Someone -- no doubt breaking all sorts of copyright law, but in what a cause! -- has posted the whole of the astonishing 1964 television film that was made of Maria Callas and Tito Gobbi in the second act of Tosca at Covent Garden, in seven parts. The first two parts 1, 2, are followed by the torture scene 3, 4, then 5, followed by Vissi d'arte, and finally the last ten minutes of the act.

The style is indeed from a different era ... yet the visceral impact remains however many times you see these legendary performances (past their vocal prime though, by all accounts, Callas and Gobbi were). Words fail me. Except to say that, if you have never seen this film, you must and now can. (Start with Vissi d'arte, then backtrack to watch the whole in sequence.)

Sunday, November 04, 2007

Oh no .... more geekery

I've just discovered -- about eight months after everyone else -- that Wordpress (the cool alternative to Blogger for hosting/managing a blog) can handle LaTeX code, and so can do nice-looking logic stuff. I was already trying to resist following The Daughter's lead and migrating there, but this might tip the balance. Watch this space ...
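(For anyone else as late to the party as I am: as I understand it -- I haven't actually tried this myself yet, so this is the advertised recipe rather than a tested report -- on a Wordpress-hosted blog you just wrap ordinary LaTeX in an inline tag, along the lines of

    $latex \forall x(Fx \vee \neg Fx)$

and the published post then shows a properly typeset formula. Which is exactly the sort of thing I can't straightforwardly do here.)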

More books -- free this time

In his nice blog, Words and Other Things, Shawn Standefer helpfully notes that CSLI Publications have digitized their backlist, and put the PDFs online for free here. The list includes e.g. Peter Aczel's Non-well-founded Sets, Jon Barwise and Lawrence Moss's Vicious Circles: On the Mathematics of Non-wellfounded Phenomena, and other very good things.

Friday, November 02, 2007

Leopard, second impressions

I'm still dosed up to the eyeballs and so finding it annoyingly difficult to concentrate on work. I am supposed to be thinking more about issues related to ACA0 -- especially as I'm due to talk about this sort of stuff at Dan Isaacson's seminar in Oxford in a couple of weeks. I certainly hope that normal brain functioning is restored sooner rather than later. Meanwhile, apologies for the consequent lack of any very interesting content here recently!

Still, I've been able to play with Leopard more than perhaps I'd have otherwise been able to. I'm still rather impressed. As well as the obvious things, there are lots of neat little improvements. To mention just one, being able to write yourself notes and handle reminders inside Mail turns out to be just very useful (more so than I'd have ever predicted). Of course, it is all very sad to get excited about things like that. But there it is. I'm sure I'll get better soon.

Spread the word!

Ah well, sales for the first three months of the Gödel book are not exactly on a Harry Potter scale. But my editor does seem reasonably pleased. Still, sales in North America are (proportionally) nowhere near as good as those in the UK and Europe. So c'mon, people, some serious spreading the word is needed out there! The Gödel book is terrific: every grad student should have a copy, and every library should have three .... You know it makes sense.

Ray Gravell

I was sad to read that Ray Gravell, a stalwart of the great Welsh team of the 1970s, has died. Reading some of the obituaries brought back memories of those glory days. Here are J.P.R. Williams, Gerald Davies, Ray Gravell and Steve Fenwick. What heroes!

Monday, October 29, 2007

Leopard, first impressions

Leopard arrived today, in its rather cool box. And I installed it by the book -- the book in question being the useful, confidence-inspiring, and very inexpensive e-book Take Control of Upgrading to Leopard. So I used SuperDuper! to clone my MacBook Pro's hard disk onto a bootable external drive (well, two different ones actually), did an "erase and install" of Leopard, and then used the set-up assistant to migrate all my files back from one of the external drives. Doing things the longest way round like this, the whole business, after the initial cloning, took about two and a half hours. But much better safe than sorry. (The only hiccup was that the installer initially took a long time to recognize the presence of my laptop's hard disk, which would have been distinctly alarming had I not seen talk of the phenomenon on MacRumors. And apart from losing the Cisco VPN Client -- the university computer services say they'll have a Leopard-compatible replacement available in a day or so -- everything seems to be working again just fine.)

What strikes you first, of course, is some of the eye-candy -- e.g. the new dock, semi-transparent menu bar and menus, the folder icons. Quite a few mac reviewers have hated all these (e.g. see the ars technica review, which is very informative about the under-the-bonnet improvements). Well, I'm all for the dock and I quite like the semi-transparent menu bar; the transparent menus are, I think, far too transparent; and the "recycled cardboard" folder icons seem quite out of keeping with the space-age look of the rest of the UI. Or at least, that's my two-pennyworth. And it is only worth about that much fuss (especially as my bet is that these things will be subtly adjusted in an early update for Leopard). Otherwise, the cleaned-up look of the windows across the system is all pretty neat, and the new Finder windows are that bit more useful. On the whole, things look terrific.

Still, looks aren't everything: here are four things I instantly really like about Leopard, and which even just by themselves make the upgrade worthwhile:

  1. The whole system is consistently quite a bit snappier (e.g. one bounce and iCal with seven calendars is open, similarly for Mail).
  2. Cover Flow and Quick Look are amazingly useful, as well as very pretty. For example, I have a folder into which I park PDFs and other documents as I download them. I can now just instantly browse through the folder to see what is in the various documents without opening the relevant application(s), and can eventually file them away or trash them as appropriate.
  3. Spaces is a very nicely implemented way of getting much cleaner work-spaces. I'm an immediate convert. (One space for Safari, Mail, etc.; another space for TeXShop windows; etc. Very uncluttered.)
  4. Time Machine is wonderful. I was already pretty good about cloning my hard disk to a pair of external drives, one at home and one at work. But inevitably you do foul up and accidentally delete stuff. So I've set up a new big external drive to be an automatic Time Machine archive whenever I'm in my little study at home (drives have become so cheap, there's no reason at all not to err on the side of caution -- it would just be too painful now to lose everything): and I will still carry on cloning onto the other drives. Feels very virtuous!
Worth waiting for (and it can only get better).

Sunday, October 28, 2007

G. C. Lichtenberg

Fellow local blogger James Warren recently posted a seemingly depressing list of the "top books" named on Cambridge students' Facebook pages. But I'd not be too downhearted. Probably the moral is: don't believe all you read in Facebook entries! I know that when I was still a college fellow and "director of studies", and so able to get to know a few students very well over their three years here, I'd repeatedly be surprised when they eventually opened up about the books that they really loved and which meant something to them. I learnt a lot that way, about the students themselves, and about books too.

Just a couple of weeks ago -- I can't at all recall how it came up -- one of our graduate students warmly recommended to me the Hollingdale translation of excerpts from Georg Christoph Lichtenberg's The Waste Books. This version was quite new to me, and is a real delight. I had a much shorter collection of excerpts translated by Franz Mautner and Henry Hatfield almost forty years ago; and I first came across the aphorisms and their author in a favourite book that I had when a student, J. P. Stern's Lichtenberg: A Doctrine of Scattered Occasions. But the pleasure of re-discovery after a good few years is enormous.

I was moved too -- having found a number of enthusiastic reviews -- to send off for another book, Gert Hofmann's novel Lichtenberg and the Little Flower Girl (this indeed was what was in the packet that should have contained Leopard!). But I found this really rather disappointing.

The novel is based on Lichtenberg's relationship with Maria Stechard, the thirteen-year-old girl he met selling flowers (he, a hunchback, was by then in his mid-thirties); she became his housekeeper, then his lover, and died shortly after her seventeenth birthday, to his intense distress (the story, though, is Beauty and the Beast, not Lolita). But the Lichtenberg of Hofmann's tale is just too far from the Lichtenberg I thought I knew from the Waste Books -- he seems a diminished and much less substantial figure than the highly successful and popular teacher, the science professor at Göttingen, who inhabits Stern's pages (he lives in too distant an alternative possible world). And die kleine Stechardin remains almost as blank in Hofmann's novel as she does in Lichtenberg's handful of references to her.

Saturday, October 27, 2007

The art of ordinal analysis

I've just come across Michael Rathjen's 2006 paper The art of ordinal analysis. A useful and pretty clear survey.

Friday, October 26, 2007

Not Leopard

Sigh. This was going to be the post where I gave you my calm, judicious, balanced, critical appraisal of the truly awesome OS X Leopard. So imagine my frustration -- or at least, fellow macheads will be able to imagine my frustration -- to find that (i) the announced package in my pigeonhole, which I've at last been able to go in to pick up, wasn't the expected one from Apple, and (ii) the carrier did in fact try to deliver the genuine article today, but there happened to be no-one in or around the Faculty Office to sign for it just at that time. How was that possible? Delivery rescheduled for Monday. It's not been my week.

Ah well. I'm on the mend from the earlier unpleasantness. Life goes on. And patience is a virtue. They say.

Tuesday, October 23, 2007

Mocking the pomos

I was amused to (re)discover Communications from Elsewhere's wonderful Postmodern Generator. Just refresh the link and you are served up each time with a new generous helping of a randomly generated postmodernist-style word-salad: very clever and very funny. And yes, intellectual rubbish should be mocked and parodied and satirized wherever we find it.

Much more seriously, but of course not unrelatedly, you are also served up with a constant link to Alan Sokal's page on the "Social Text Affair": if you don't know what that was, then exploring Sokal's page will be a very instructive treat. (Oh, and I was delighted to discover that Sokal has another forthcoming book announced for next year.)