Cue sound of blowing one's own trumpet. The first "Customer Reviews" on Amazon USA for the Gödel book have just appeared. And there is a quite terrific one by Jon Cogburn of LSU at Baton Rouge. Very cheering indeed! (Though it is also another spur to get going with developing the online sets of exercises that I've been meaning to put together.)
Saturday, December 29, 2007
As I noted in my last post, I'm going to be working through Bell, DeVidi and Solomon's Logical Options in a seminar with some second year undergrads this coming term. To answer Richard Zach's question, I might have used Ted Sider's draft book as the main text if I'd known about it before plans were made in November -- though Logical Options does still perhaps more neatly dovetail with our tree-based first year course.
Because the seminar starts on the first teaching day of term, I can't presume much reading. So we'll have to make a slow start. Here though are the reading notes for the first session, in case anyone else is interested. (It goes without saying that corrections and/or suggestions are always welcome!)
Posted by Peter Smith at 12:07 AM
Thursday, December 27, 2007
Hmmm, the next ten weeks or so are going to be busy.
For a start, I must find time to finish reading the papers in the Absolute Generality collection, and I'll no doubt continue to try to say something about them here. Not that the topic thrills me anywhere near as much as the number of posts might suggest, but it does remain puzzling, I have promised to write a BSL review, and blogging about the papers is as good a way as any of making myself do the reading moderately carefully.
Then at the beginning of term there is a Graduate Conference on the Philosophy of Logic and Mathematics in the faculty which promises well, and I'm down to comment on the first paper on vagueness (a topic on which I've got pretty rusty).
During term, I'm now giving some seminars to second-year students on the Bell/DeVidi/Solomon text, Logical Options. That book isn't at all ideal, on a closer look, but I can't think of anything better to cover the sort of ground the students need to cover, though that will certainly involve writing some supplementary notes. I'll link to the notes here as they get done.
And then there's going to be all the homework for the grad seminar I'm running with Thomas Forster on model theory, working through the shorter Hodges.
Oh, and I've promised to talk to the Jowett Society in Oxford in February. Gulp.
Well, I've only myself to blame, given that those are all works beyond the call of duty. Still, it should be tolerable fun (on the logician's rather stretched understanding of 'fun'). Better make a start tomorrow, though ...
Posted by Peter Smith at 9:09 PM
Monday, December 24, 2007
There are still over twenty pages of Lavine's paper remaining. Since, to be frank, Lavine doesn't write with a light touch or Lewisian clarity, these are unnecessarily hard going. But having got this far, I suppose we might as well press on to the bitter end. And, as I've just indicated in the previous post in the series, I do have a bit of a vested interest in making better sense of his talk of schematic generalizations.
There are four sections, 8 -- 11, ahead of us. First, in Sec. 8, Lavine argues that even with schematic generalizations as he understands them in play, we still can't get a good version of McGee's argument that the quantifier rules suffice to determine that we are ultimately quantifying over a unique domain of absolutely everything, and so McGee's attempt to respond to the Skolemite argument fails. I think I do agree that the rules, even if interpreted schematically, don't fix a unique domain: but I'm still finding Lavine's talk about schematic generalizations pretty murky, so I'm not sure whether that is right. Not that I particularly want to join Lavine in defending the Skolemite argument: but I am happy to agree that McGee's way with the argument isn't the way to go. So let's not delay now over this.
In Sec. 9, Lavine discusses Williamson's arguments in his 2003 paper 'Everything' and claims that everything Williamson wants to do with absolutely unrestricted quantification can be done with schematic generalizations. Is that right? Well, patience! For I guess I really ought now to pause here to (re)read Williamson's paper, which I've been meaning to do anyway, and then return to Lavine's discussion in the hope that, in setting his position up against Williamson, more light will be thrown on the notion of schematic generality in play. But Williamson's paper is itself another fifty-page monster ... So I think -- just a little wearily -- that maybe this is the point at which to take that needed holiday break from absolute generality and Absolute Generality.
Back to it, with renewed vigour let's hope, in 2008!
Sunday, December 23, 2007
I did fiddle around a bit today trying to get a hack to work for splitting long posts into an initial para or two, with the rest to be revealed by hitting a "Read more" link (if you want to know how to do it in Blogger, see here). But in the end, I decided I didn't like the result. It's not as if even my longest posts are more than about half-a-dozen moderate sized paragraphs (and it is a good discipline to keep it like that): so it is in any case easy enough to scan to the end of one post to jump on to the next. I'll stick to the current simple format.
The Daughter recommended that I try OmniFocus 'task management' software which implements Getting Things Done type lists. Well, it's not that I haven't tasks to do, and the GTD idea really does work. But, having played about a bit with the beta version you can download, I reckon my life isn't so cluttered that carrying on using NoteBook and iCal won't work for me. Again, I think I'll stick to the simpler thing.
In a subsection entitled 'Schemes are not reducible to quantification', Lavine writes
Schematic letters and quantifiable variables have different inferential roles. If n is a schematic letter then one can infer S0 ≠ 0 from Sn ≠ 0, but that is not so if n is a quantifiable variable -- in that case the inference is valid only if n did not occur free in any of the premisses of the argument.
But, in so far as that is true, how does it establish the non-reducibility claim?
Of course, one familiar way of using schemes is e.g. as in Sec. 8.1 of my Gödel book where I am describing a quantifier-free arithmetic I call Baby Arithmetic, and say "any sentence that you get from the scheme Sζ ≠ 0 by substituting a standard numeral for the place-holder 'ζ' is an axiom". And to be sure, the role of the metalinguistic scheme Sζ ≠ 0 is different from that of the object language Sx ≠ 0. Still, it would be misleading to talk of inferring an instance like S0 ≠ 0 from the schema. And here the generality, signalled by 'any', can -- at least pending further, independent, argument -- be thought of as unproblematically quantificational (though not quantifying over numbers of course). So this sort of apparently anodyne use of numerical schemes doesn't make Lavine's point, unless he can offer some additional considerations. So what does he have in mind?
One who doubts that the natural numbers form an actually infinite class will not take the scheme φ(n) → φ(Sn) to have a well-circumscribed class of instances and hence will not be willing to infer φ(x) → φ(Sx) from it; for the latter formula involves a quantifiable variable with the actually infinite class of all numbers as its domain or the actually infinite class of all numerals included in its substitution class.
We seemingly get a related thought e.g. in Dummett's paper 'What is mathematics about?', where he argues that understanding quantification over some class of abstract objects requires that we should 'grasp' the domain, that is, the totality of objects of that class -- which seems to imply that if there is no totality to be grasped, then here there can be no universal quantification properly understood.
But do note two things about this. First, a generalization's failing to have a well-circumscribed class of instances because we are talking in a rough and ready way and haven't bothered to be precise because we don't need to be, and its failing because we can't circumscribe the class because there is no relevant completed infinity (e.g. because of considerations about indefinite extensibility), are surely quite different cases. Lavine's moving from an initial example of the first kind when he talked about arm-waving generalizations we make in introductory logic lectures to his later consideration of cases of the second kind suggests an unwarranted slide. Second, I can see no reason at all to suppose that sophisticated schematic talk to avoid being committed to actual infinities is "more primitive" than quantificational generality. On the contrary.
Still, with those caveats, I guess I am sympathetic to Lavine's core claim that there is room for issuing schematic generalizations which don't commit us to a clear conception of a complete(able) domain. In fact, I'd better be sympathetic, because I actually use the same idea myself here (where I talk about ACA0's quantifications over subsets of numbers, and argue that the core motivation for ACA0 in fact only warrants a weaker schematic version of the theory). So, even though I don't think he really makes the case in his Sec. 7, I'm going to grant that there is something in Lavine's idea here, and move on next to consider what he does with the idea in the rest of the paper.
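As a throwaway illustration of the anodyne metalinguistic reading of the Baby Arithmetic example (this sketch is mine, purely illustrative, and no part of Lavine's paper): the 'any' in "any sentence that you get from the scheme Sζ ≠ 0 by substituting a standard numeral for 'ζ' is an axiom" is an ordinary universal quantification over numerals, i.e. over expressions, which we can mechanically spell out:

```python
# Purely illustrative: the Baby Arithmetic scheme read metalinguistically.
# The generality is ordinary quantification over *numerals* (expressions),
# not over numbers.

def numeral(n):
    """The standard numeral for n: 0, S0, SS0, ..."""
    return "S" * n + "0"

def axiom_instance(k):
    """The instance of the scheme 'S(zeta) != 0' with the numeral for k plugged in."""
    return f"S{numeral(k)} != 0"

# 'Any sentence you get from the scheme ... is an axiom': a plain universal
# quantification over (here, just the first few) numerals.
axioms = [axiom_instance(k) for k in range(3)]
```

No inferring of instances *from* the schema anywhere in sight: instances are generated by substitution, and the generality lives in the metalanguage.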
Saturday, December 22, 2007
Richard Zach, one of the subject editors, has noted on his blog that there's a new entry on the Stanford Encyclopedia by Herb Enderton on Second-order and Higher-order Logic. The SEP is really developing quite terrifically, and it seems to me that the average standard of the entries on this freely accessible resource is distinctly better than e.g. on the expensive Routledge Encyclopedia. (I did write one entry for the latter, on C.D. Broad, still one of my favourite early 20th century philosophers: but I was given just half the space of his entry in the old Edwards encyclopedia -- and that seems to be indicative of the shortcomings of the Routledge Encyclopedia: it covers too much too thinly.)
Friday, December 21, 2007
In Sec. 7 of his paper, Lavine argues that there is a distinct way of expressing generality, using "schemes" to declare that 'any instance [has a certain property], where "any" is to be sharply distinguished from "every"' (compare Russell's 1908 view). In fact, Lavine goes further, talking about the kind of generality involved here as 'more primitive than quantificational generality'.
We are supposed to be softened up for this idea by the thought that in fact distinctively schematic generalization is actually quite familiar to us:
When, early on in an introductory logic course, before a formal language has been introduced, one says that NOT(P AND NOT P) is valid, and gives natural language examples, the letter 'P' is being used as a full schematic letter. The students are not supposed to take it as having any particular domain -- there has as yet been no discussion of what the appropriate domain might be -- and it is, in the setting described, usually the case that it is not 'NOT(P AND NOT P)' that is being described as valid, but the natural-language examples that are instances of it.1
1. Quite beside the present point, of course, but surely it isn't a great idea -- when you are trying to drill into beginners the idea that truth is the dimension of assessment for propositions and validity is the dimension of assessment for inferences -- to turn round and mess up a clean distinction by calling logically necessary propositions 'valid'. I know quite a few logic books do this, but why follow them in this bad practice?
Here, talk about a full schematic variable is to indicate that 'what counts as an acceptable substitution instance is open ended and automatically expands as the language in use expands.'
But Lavine's motivating example doesn't impress. Sure, in an early lecture, I may say that any proposition of the form NOT(P AND NOT P) is logically true in virtue of the meanings of 'NOT' and 'AND'. But to get anywhere, I of course have to gloss this a bit (for a start, the very idea of a 'substitution instance' of that form needs quite a bit of explanation, since plugging in a declarative English sentence won't even yield a well-formed sentence). And, glossing such principles as non-contradiction and excluded middle, I for one certainly remark e.g. that we are setting aside issues about vagueness ('it is kinda raining and not raining, you know'), and issues about weird cases (liar sentences), and issues about sentences with empty names, and I may sometimes mention more possible exceptions. But yes, I -- like Lavine -- will leave things in a sense pretty 'open-ended' at this stage. Does that mean, though, that I'm engaged in something other than 'quantificational generality'? Does it mean that I haven't at least gestured at some roughly delimited appropriate domain? Isn't it rather that -- as quite often -- my quantifications are cheerfully a bit rough and ready?
'Ah, but you are forgetting the key point that 'what counts as an acceptable substitution instance is ... expands as the language in use expands.' But again, more needs to be said about the significance of this before we get a difference between schematic and quantificational generalizations. After all, what counts as an instance of 'All the rabbits at the bottom of the garden are white' changes as the population of rabbits expands. Does that make that claim not quantificational?
A general methodological point, famously emphasized by Kripke in his discussion of a supposed semantic ambiguity in the use of definite descriptions: we shouldn't multiply semantic interpretations beyond necessity, when we can explain variations in usage by using general principles of discourse in a broadly Gricean way. We shouldn't, in the present case, bifurcate interpretations of expressions of generality into the schematic and the genuinely quantificational cases if the apparent differences in usage here can be explained by the fact that we speak in ways which are only as more or less precise and circumscribed as are needed for the various purposes at hand. And it seems that the 'open-ended' usage in the quoted motivating example can be treated as just a case of loose talk sufficient for rough introductory purposes.
So has Lavine some stronger arguments for insisting on a serious schematic/quantification distinction here?
Wednesday, December 19, 2007
(3) "The third objection to everything is technical and a bit difficult to state, and in addition it is relatively easily countered," so Lavine is brief. I will be too. Start with the thought that there can be subject areas in which for every true (∃x)Fx -- with the quantifier taken as restricted to such an area -- there is a name c such that Fc. There is then an issue whether to treat those restricted quantifiers referentially or substitutionally, yet supposedly no fact of the matter can decide the issue. So then it is indeterminate whether to treat c as having a denotation which needs to be in the domain of an unrestricted "everything". And so "everything" is indeterminate.
Lavine himself comments, "the argument ... works only if the only data that can be used to distinguish substitutional from referential quantification are the truth values of sentences about the subject matter at issue". And there is no conclusive reason to accept that Quinean doctrine. Relatedly: the argument only works if we can have no prior reason to suppose that c is operating as a name with a referent in Fc (prior to issues about quantifications involving F). And there is no good reason to accept that either -- read Evans on The Varieties of Reference. So argument (3) looks a non-starter.
(4) Which takes us to the fourth "objection to everything" that Lavine considers, which is the Skolemite argument again. Or to use his label, the Hollywood objection. Why that label?
Hollywood routinely produces the appearance of large cities, huge crowds, entire alien worlds, and so forth, in movies ... the trick is only to produce those portions of the cities, crowds, and worlds at which the camera points, and even to produce only those parts the camera can see -- not barns, but barn façades. One can produce appearances indistinguishable from those of cities, crowds, and worlds using only a minuscule part of those cities, crowds, and worlds. Skolem, using pretty much the Hollywood technique, showed that ... for every interpreted language with an infinite domain there is a small (countable) infinite substructure in which exactly the same sentences are true. Here, instead of just producing what the camera sees, one just keeps what the language "sees" or asserts to exist, one just takes out of the original structure one witness to every true existential sentence, etc.
That's really a rather nice, memorable, analogy (one that will stick in the mind for lectures!). And the headline news is that Lavine aims to rebut the objections offered by McGee to the Skolemite argument against the determinacy of supposedly absolutely unrestricted quantification.
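For what it's worth, the first step of the 'Hollywood' construction is easy to toy-model. The following sketch is mine, not Lavine's, and handles only unary existential sentences; the real Löwenheim-Skolem construction of course has to iterate the witness-collecting to cope with nested quantifiers, and then take the union of the stages.

```python
# A toy 'Hollywood' move: keep just one witness for each true existential
# sentence, discarding the rest of the large original structure.

predicates = {                     # a tiny 'language' of unary predicates
    "even":   lambda x: x % 2 == 0,
    "odd":    lambda x: x % 2 == 1,
    "square": lambda x: int(x ** 0.5) ** 2 == x,
}

big_domain = range(10_000)         # stand-in for the large original domain

# One witness per true existential sentence (Ex)P(x):
hull = set()
for name, pred in predicates.items():
    witness = next((x for x in big_domain if pred(x)), None)
    if witness is not None:
        hull.add(witness)

# The tiny substructure makes exactly the same existential sentences true
# as the original domain did:
assert all(any(pred(x) for x in hull) == any(pred(x) for x in big_domain)
           for pred in predicates.values())
```

Barn façades all the way down: the language can't tell the two-element hull from the ten-thousand-element original, so far as these existential claims go.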
One of McGee's arguments, as we noted, appeals to considerations about learnability. I didn't follow the argument and it turns out that Lavine too is unsure what is supposed to be going on. He offers an interpretation and readily shows that on that interpretation McGee's argument cuts little ice. I can't do better on McGee's behalf (not that I feel much inclined to try).
McGee's other main argument, we noted, is that "[t]he recognition that the rules of logical inference need to be open-ended ... frustrates Skolemite skepticism." Lavine's riposte is long and actually its thrust isn't that easy to follow. But he seems, inter alia, to make two points that I did in my comments on McGee. First, talking about possible extensions of languages won't help since we can Skolemize on languages that are already expanded to contain terms "for any object for which a term can be added, in any suitable modal sense of 'can'" (though neither Lavine nor I am clear enough about those suitable modal senses -- there is work to be done there). And second, Lavine agrees with McGee that the rules of inference for the quantifiers fix (given an appropriate background semantic framework) the semantic values of the quantifiers. But while fixing semantic values -- fixing the function that maps the semantic values of quantified predicates to truth-values -- tells us how domains feature in fixing the truth-values of quantified sentences, that just doesn't tell us what the domain is. And Skolemite considerations aside, it doesn't tell us whether or not the widest domain available in a given context (what then counts as "absolutely everything") can vary with context as the anti-absolutist view would have it.
So where does all this leave us, twenty pages into Lavine's long paper? Pretty much where we were. Considerations of indefinite extensibility have been shelved for later treatment. And the Skolemite argument is still in play (though nothing has yet been said that really shakes me out of the view that -- as I said before -- issues about the Skolemite argument are in fact orthogonal to the interestingly distinctive issues, the special problems, about absolute generality). However, there is a lot more to come ...
Tuesday, December 18, 2007
People grumble that it isn't what it was, but the output on BBC Radio 3 remains pretty amazing, and these days there is the great bonus of being able to "listen again" via the web for up to a week. While driving earlier today, I caught a few minutes of what seemed a strikingly good performance of the Beethoven op. 132 quartet. It turned out to be the Skampa Quartet. I've just "listened again" to the whole performance: I'd say it is extremely well worth catching here (click on "Afternoon on 3" for Tuesday, the Beethoven quartet starts about 19 minutes in). Terrific stuff.
Posted by Peter Smith at 10:21 PM
Monday, December 17, 2007
Shaughan Lavine's is one of two fifty-page papers in Absolute Generality (I'm not sure that the editors' relaxed attitude to overlong papers does either the authors or the readers a great service, but there it is). In fact, the paper divides into two parts. The first six sections review four anti-absolutist arguments, and criticize McGee's response on behalf of the absolutist to (in particular) the Skolemite argument. The last five sections are much more interesting, arguing that we can in fact do without absolutely unrestricted generality -- rather, "full schematic generality will suffice", where Lavine is going to explain at some length what such schematic generality comes to.
But first things first. What are the four anti-absolutist arguments that Lavine considers? (1) First, there's the familiar argument from the paradoxes that suggests that certain concepts (set, ordinal) are indefinitely extensible and that it is not possible for a quantifier to have all sets or all ordinals in its domain. Now, unlike McGee, Lavine recognizes that the "objection from paradox" raises serious issues. However, he evidently thinks that any direct engagement with the objection just leads to a stand-off between the absolutist and anti-absolutist sides, "each finds the other paradoxical", so he initially sets the argument aside.
(2) The second argument is the "framework objection" that Hellman also discusses in his contribution. "Different metaphysical frameworks differ on what there is ... If the answers to [questions like are there any mathematical entities? is space composed of points?] are not matters of facts, but of choice of framework, ... [then there is only] quantification over everything there is according to the framework [and not] absolutely unrestricted quantification." Well, as I noted before, if two frameworks differ just in what they take to be ontologically basic and what they take to be (in some broad sense) constructions out of the basics, then that is beside the present point. We can still quantify over all the same things in the different frameworks -- for quantifying over everything isn't to be thought of as restricted quantification over whatever is putatively basic. So to make trouble here, the idea would have to be that there can be equally good rival frameworks, with only a conventional choice to be made between them, where Xs exist according to one framework, and cannot even be constructed according to the other. If there are such cases, then there may be an argument to be had: but that is a pretty big "if", and Lavine doesn't give us any reason to suppose that the condition can be made good, so let's pass on.
[To be continued.]
Sunday, December 16, 2007
I've added a link alongside to Tim Gowers's blog (thanks to Carrie Jenkins for recommending it), and I've removed links to a couple of seemingly dormant blogs.
I'm a bit staggered to find that the number of visitors to this blog has doubled over recent weeks, to over a thousand this week. A sudden enthusiasm out there for musings about absolute generality or recommendations for logicky reading? Somehow I doubt it! Perhaps unsurprisingly, the tracker shows that this geeky post still gets a number of hits a day, even though it just links to someone else's neat solution to a Leopard irritation. Rather more surprisingly, there is a steady if small stream of people who've ended up here because they are searching for photos of Monica Vitti (so as not to disappoint, here is another screen-capture from L'Eclisse). But I wonder why they are searching ... Other one-time film buffs, perhaps moved by Antonioni's death earlier in the year to revisit the haunting films first seen so long ago? Or is it that the films, and Vitti's iconic presence, are being rediscovered? I find it so very difficult to imagine how they would now seem, if being seen for the first time, against such an utterly different cinematic background to that of the early 60s.
Posted by Peter Smith at 11:08 PM
The final section of McGee's paper is called 'A rule for "everything"'. He argues that "the semantic values of the quantifiers are fixed by the rules of inference". The claim rests on noting that (i) two universal quantifiers governed by e.g. the same UE and UI rules will be interderivable [McGee credits "a remarkable theorem of J.H. Harris", but it is an easy result, which is surely a familiar observation in the Gentzen/Prawitz/Dummett tradition]. McGee then claims that, assuming the quantifier rules don't misfire completely [like the tonk rules?], this implies that (ii) they determine a uniquely optimal candidate for their semantic value. And further, (iii) "the Harris theorem ... gives us reason to anticipate that, when we develop a semantic theory, it will favor unambiguously unrestricted quantification."
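Quite how 'remarkable' the result behind (i) is can be judged from the familiar two-line reconstruction (mine, in the obvious natural-deduction style, not McGee's own presentation). Suppose the two quantifiers $\forall_1$ and $\forall_2$ are each governed by the standard instantiation (UE) and generalization (UI) rules. Then, for a fresh parameter $a$:

```latex
% Interderivability of two universal quantifiers governed by the same UE/UI rules:
\begin{array}{lll}
1. & \forall_1 x\, Fx & \text{premiss}\\
2. & Fa               & \text{from 1 by UE for } \forall_1\\
3. & \forall_2 x\, Fx & \text{from 2 by UI for } \forall_2\text{, since } a \text{ is not free in premiss 1}
\end{array}
```

And symmetrically $\forall_2 x\,Fx \vdash \forall_1 x\,Fx$. Note that nothing in the derivation so much as mentions what the shared domain is, which is just the point pressed below against the step to (iii).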
The step from (i) to (ii) needs some heavy-duty assumptions -- after all, the intuitionist, for example, doesn't differ from the classical logician about the correct quantifier rules, but does have different things to say about semantic values. But McGee seems to be assuming a two-valued classical background; so let that pass. More seriously in the present context, the step on to (iii) is just question-begging, if it is supposed to be a defence of an absolutist reading of unrestricted quantification. Consider a non-absolutist like Glanzberg. He could cheerfully accept that the rules governing the use of unrestricted universal quantifiers fix that they run over the whole background domain, whatever that is (and that a pair of quantifiers governed by the same rules would both run over that same domain): but that leaves it entirely open whether the background domain available to us at any point is itself contextually fixed and can be subject to indefinite expansion.
Of course, says the anti-absolutist, there is no God's eye viewpoint from which we can squint sideways at our current practice and comment that right now "(absolutely) everything" on our lips doesn't really run over all that exists. "Everything" always means everything. What else? But that isn't what the anti-absolutist denies, and so it seems that McGee fails to really engage with the position.
Saturday, December 15, 2007
A few weeks back, I was asked to put together a twenty minute logicky multiple choice test for undergraduates applying to Cambridge to read philosophy. I had better not say much here at all, but I think I managed to produce a fairly tricky paper to do in the time. I'm told that scores spread out from worse-than-chance to almost-perfect, so I guess it worked to filter out the not-so-hot reasoners. I did, though, find it surprisingly difficult to put together a template that I was happy with and which we could weave variations around. I think next year we should instead use this splendid alternative.
In Sec. 2 of his paper, McGee reviews a number of grounds that might be offered for skepticism about absolutely unrestricted quantification. But he doesn't take the classic indefinite extensibility argument very seriously -- indeed he doesn't even mention Dummett, but rather offers a paragraph commenting on the disparity between what Russell and Whitehead say they are doing in PM to avoid vicious circles, and what they end up doing with the Axiom of Reducibility. Given the actual state of play in the debates, just ignoring the Dummettian version of the argument seems pretty odd to me.
But be that as it may, "the bothersome worry," according to McGee, "is not that our domain of quantification is always assuredly restricted [because of indefinite extensibility] but that the domain is never assuredly unrestricted [because of Skolemite arguments]". Here I am, trying to quantify all-inclusively in some canonical first-order formulation of my story of the world, and by the LS theorem there is a countable elementary model of my story. So what can make it the case that I'm not talking about that instead?
OK, it is a good question how we should best respond to the Skolemite argument, and McGee offers some thoughts. He suggests two main responses. The first appeals very briefly to considerations about learnability. I just don't follow the argument (but I note that Lavine is going to discuss it, so let's hang fire on this argument for the moment). The second is that "[t]he recognition that the rules of logical inference need to be open-ended ... frustrates Skolemite skepticism." Why?
The LS construction requires that every individual that's named in the language be an element of the countable subdomain S. If the individual constant c named something outside the domain S, then if '(∀x)' is taken to mean 'for every member of S', the principle of universal instantiation [when c is added to the language] would not be truth-preserving. Following Skolem's recipe gets us a countable set S with the property that interpreting the quantifiers as ranging over S makes the classical modes of inference truth-preserving, but when we expand the language by adding new constants, truth preservation is not maintained. The hypothesis that the quantified variables range over S cannot explain the inferential practices of people whose acceptance of universal instantiation is open-ended.
But this line of response by itself surely won't faze the subtle Skolemite. After all, there is a finite limit to the constants that a finite being like me can add to his language with any comprehension of what he is doing. So start with my actual language L. Construct the ideal language L+ by expanding L with all those constants I could add (and add to my theory of the world such sentences involving the new constants that I would then accept). Now Skolemize on that, and we are back with trouble that McGee's response, by construction, doesn't touch.
Actually, it seems to me that issues about the Skolemite argument are orthogonal to the distinctive issues, the special problems, about absolute generality. Suppose we do have a satisfactory response to Skolemite worries when applied e.g. to talk about "all real numbers" (supposing here that "real number" doesn't indefinitely extend): that still leaves the Dummettian worries about "all sets", "all ordinals" and the like in place just as they were. Suppose on the other hand we struggle to find a response to the Skolemite skeptic. Then it isn't just quantifications that aim to be absolutely general that are in trouble, but even some seemingly tame highly restricted ones, like generalizations about all the reals. Given this, I'm all for trying to separate out the distinctive issues about absolute generality and focussing on those, and then treating quite separately the entirely general Skolemite arguments which apply to (some) restricted and unrestricted quantifications alike.
Friday, December 14, 2007
I was intending to look at the papers in Absolute Generality in the order in which they are printed. But Glanzberg's piece is followed by a long one by Shaughan Lavine which is in significant part a discussion of Vann McGee's views, including those expressed in the latter's contribution to this book. So it seems sensible to discuss McGee's paper first.
The first section of this paper addresses "semantic skepticism in general". McGee writes
The prevalent skeptical view, which is sometimes called deflationism or minimalism, allows that a speaker can say things that are true, but denies that her ability to do so depends on the linguistic practices of herself and her community. ... [D]isquotationalism doesn't connect truth-conditions with patterns of usage. ... The (T)-sentences for one's own language are, for the deflationist, an inexplicable brute fact.
And there is more in the same vein. Well, I thought I was a kind of deflationist, but certainly I don't take myself to be wedded to the idea that truth-conditions aren't connected to patterns of usage. Au contraire. I'd say that it is precisely because of facts about the way I use "snow is white" that I am interpretable as using it to say that snow is white. And because "snow is white" is used by me to say that snow is white, then indeed "snow is white" on my lips is true just in case snow is white. So, I'd certainly say that the truth of such a (T)-sentence isn't inexplicable, it isn't an ungrounded brute fact. But the core deflationist thought -- that there is in the end, bells and whistles apart, no more to the content of the truth predicate than is given in such (T)-sentences (the notion, so to speak, lacks metaphysical weight) -- is surely quite consistent with that.
Still, let's not fuss about who gets to choose which position counts as properly "deflationist". My point is merely that the sort of extreme position which McGee seems to be talking about (though he is far from ideally clear) is remote from plausible versions of deflationism, is therefore to my mind not especially interesting, and in any case -- the key point here -- hasn't anything particularly to do with issues about absolute generality. So exactly why does he think it is going to be illuminating to come at the topic this way? I'm rather stumped. So I propose just to pass over his first section with a rather puzzled shrug.
Wednesday, December 12, 2007
I mentioned that Glanzberg's paper focuses on Williamson's version of Russell's paradox for interpretations. I can't say that I find that version very illuminating, but there it is. But it does shape Glanzberg's discussion, and he tells the story about background domain expansion in terms of someone's reflecting directly about the interpretation of their own language. But I don't think that this is of the essence, nor the clearest way to present a discussion about what Dummett would call indefinite extensibility.
What is central to the discussion is Glanzberg's reflections on how far we should iterate the expansion of our domain of "absolutely everything", once we grasp the (supposed) Dummettian imperative to start on the process. Dummett's talk about indefinite extensibility suggests that he thinks that there is no determinate limit point (which, I take it, isn't to say that the expansion definitely goes on for ever, but that there is no point where we have a clear reason to stop). Glanzberg, by contrast, thinks there might here be reason just to embrace iteration up to the first non-recursive ordinal, or up to the first α + 1 ordinal, or up to the first α+ ordinal. He then says
In considering multiple options, I do not want to suggest that there is nothing to distinguish among them ... Rather, I think the moral to be drawn is that we do not yet know enough to be certain just how far iteration really does go.
But, once we play the Dummettian game, I just don't see why we should think that there will be a determinate answer here, and certainly Glanzberg gives us no clear reason to suppose otherwise.
Another day, another new book to mention. My current and recent colleagues Mary Leng, Alexander Paseau, and Michael Potter have edited revised versions of some of the papers from the 2004 conference held here in Cambridge on Mathematical Knowledge. It is quite a slim, expensive, hardback; but it should certainly at least be in your university library, if only for the three papers by the editors.
I wonder why they thought that Durer's 'Melancholia' was an apt illustration for the cover ...?
Tuesday, December 11, 2007
Having tried twice and miserably failed to explain to a sullenly sceptical Hungarian girl that you don't make an espresso macchiato by filling up the cup to the top with hot milk, I was perhaps not in the optimal mood to sit in a cafe and wrestle with the rest of Glanzberg's paper again. Still, I tried my best.
What I was expecting, after the first three sections, was a story about how "background domains" get contextually set, a story drawing on the thoughts about what goes on in more common-or-garden contextual settings of restricted domains. Though I confess I was sceptical about how this could be pulled off -- given that the business of restricting quantifications to some subdomain of everything available to be quantified over in a context and the business of (so to speak) reaching out to everything would seem to be intuitively quite different. But in fact, what Glanzberg gives us is not the whole story about background-domain-fixing ("very little has been said here about how an initial background domain is set" p. 71), but rather a story about how we might go about domain-expansion when we are (supposedly) brought to acknowledge new objects like the Russell class which cannot (on pain of paradox) already be in the domain of objects that we are currently countenancing as all that there are.
Now, Glanzberg says (p. 62) that although domain-expansion isn't a case of setting a restricted domain, "it is still the setting of a domain of quantification ... [so] it should be governed by the principles" discussed in Sec. 3 of the paper. But in fact, as we'll see, at most one of the principles to do with common-or-garden domain fixing arguably features in the discussion here about domain-expansion (and I'd say not even that one).
What Glanzberg does discuss is the following line of thought (I'll use a more familiar example, though he prefers to work with Williamson's variant Russell paradox about interpretations). Suppose I'm cheerfully quantifying over everything, including sets. You then -- how? invoking an All in One principle? -- get me to countenance that domain as itself a something, an object, and then show me that it is one which on pain of paradox can't be in the domain I started off with. Ok, so now this new object is a "topic of discourse", and what is now covered by "(absolutely) everything" should now include that new object -- and I suppose we could see this as an application of the same principle about domains including current topics which we mentioned as governing ordinary domain setting. (But equally, as I said before, it in fact isn't at all clear how we should handle that supposed general principle in the case of ordinary domains. And the thought in the present case just needn't invoke any wobbly notion of 'topic' but comes down to the following more basic point: if we are brought explicitly to acknowledge the existence of an object outside the previous domain of what we counted as "(absolutely) everything", then that forces an expansion of what we must now -- in our new situation -- include in "(absolutely) everything".)
So, to repeat, suppose you bring me to acknowledge e.g. a set-like object beyond those currently covered by "all sets". I expand my domain of quantification to contain that too. But not just that too. I'll also need to add ... well, what? At least, presumably, all the other objects that I can define in terms of it, using notions that I already have. And so then what? Thinking of all these objects together with the old ones, I can -- by the same move as before -- take all those together as a domain, and now we have a new object again. And off we go, iterating the procedure. It is, of course, not for nothing that Dummett called this sort of expansion indefinitely extensible!
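The engine of that iteration can be sketched in miniature. This is a deliberately crude finite model, with invented names: the "collect the current domain together as an object" move is played here by forming a frozenset, and of course a real transfinite iteration is not being simulated, only the shape of the step.

```python
# Toy sketch of domain expansion: at each stage, treat the current
# domain as itself an object and add it in, so that no stage can be
# a stable domain of 'absolutely everything'. Names are invented.

def expand(domain):
    # The 'All in One' step: the old domain, plus the old domain
    # collected together as a single new object.
    return frozenset(domain) | {frozenset(domain)}

d = frozenset()          # start from nothing, for simplicity
stages = [d]
for _ in range(5):
    d = expand(d)
    stages.append(d)

# Each stage strictly extends the one before: no fixed point appears,
# and by construction none ever could.
for earlier, later in zip(stages, stages[1:]):
    assert earlier < later
```

The serious question in the text -- how far along the *ordinals* the real iteration should run -- is exactly what this finite caricature cannot settle; it only exhibits why each stage forces a successor.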
We are now in very familiar territory, but territory unconnected with the early sections of the paper. We can now ask: just how far along the ordinals should we iterate? Maybe -- Glanzberg seems to be saying -- it isn't a matter of indefinite extensibility, but there is a natural limit. But I found the discussion here to be not very clear.
Where does all this leave us then? As I say, the early sections about common-or-garden domain setting in fact drop out as pretty irrelevant. If the paper does have anything interesting to say, it is in the later sections, particularly in Sec. 6.2 about -- so to speak -- how indefinitely extensible indefinitely extensible concepts are. So, OK, I'll return to have another bash at that. (Though I will grumpily add that I think the editors could have taken a firmer line in getting Glanzberg to make his arguments more accessible.)
Monday, December 10, 2007
B. Hartley Slater has a book out, The De-Mathematisation of Logic, collecting together some of his papers. You can in fact download it free by filling in the form here. As you'll see just from reading the first few pages, he certainly thinks that most of us are skiing too fast down the conventional well-worn slopes, ending up in tangles of confusion. Methinks he protests too much; but still, you might find some thought-provoking episodes here.
Sunday, December 09, 2007
It is bad luck to return from blue skies in Milan to a miserably wet and cold Cambridge (it is one of those times when those notices in the Botanical Gardens classifying us as falling into a 'semi-arid' region seem a mockery). And term has ended so the faculty is almost deserted, so that's not very cheering either. It all gives added attraction to the idea of spending a lot more of the year in Italy when I retire.
In an Italian mood, we've just watched L'Eclisse for the first time in very many years. It does remain quite astonishing. And what is remarkable is just how many of the images seem so very familiar, having been burnt into the memory by perhaps three viewings in the cinema decades ago. For a tiny example, there is a moment when Monica Vitti in longshot is unhappily walking home after a bad night alongside a grassy bank, carrying her handbag and perhaps a silk wrap or scarf, and she suddenly -- as one might -- swishes the scarf though some plants by the road. Why on earth should one remember that? Yet both of us watching did.
Section 3 of Glanzberg's paper gives an overview of the ways in which explicit and common-or-garden-contextual restrictions on quantifiers work (as background to a discussion in later sections about how "background domains" are fixed). This section isn't intended to be more than just a quick set of reminders about familiar stuff, so we can be equally speedy here.
Going along with Glanzberg for the moment, suppose the "background domain" in a context is M (the absolutist says there is ultimately one fixed background domain containing everything: the contextualist says that M can vary from context to context). More or less explicit restrictions on quantifiers plus common-or-garden-contextual restrictions carve out from this background a subdomain D (so that Every A is B is interpreted as true just when the As in D are B, and so on). How?
Explicit restrictions are relatively unproblematic. But how is the contextual carving done? There are cases and cases. For example, there is carving by 'anaphora on predicates from the context', as in
1. Susan found most books which Bill needs, but few were important,
where 'few' is naturally heard as restricted to the books that Susan found and Bill needs. Then second, there is 'accommodation', where we rely perhaps on some Gricean mechanisms to read quantifiers so that claims made are sensible contributions to the conversational context. For example, as we are about to leave for the airport, I reassuringly say that, yes, I'm sure,
2. Everything is packed
when maybe some salient things (the passports, say) are in plain view in my hand and my keys are jingling in my pocket. Here, my claim is heard, and is intended to be heard, as generalizing over those things that it was appropriate to pack, or some such.
There's a third, rather different, way in which context can constrain domain selection, one that isn't a matter of domain restriction but rather a matter of how, when an object is already featuring prominently enough as a focal topic of discourse at a particular point, "we will expect contextually set quantifier domains to include it". (Though I guess that this point has to be handled with a bit of care. The taxi for the airport arrives very early: we comment on it. "But," I say, "we might as well leave in the taxi now. Everything is packed." The quantifier of course doesn't now include the taxi, even though it is the current topic of discourse.)
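The overall two-step picture here -- a background domain M fixed by context, with explicit and contextual restrictions then carving out a subdomain D -- can be sketched in a few lines. Everything below is an invented toy scenario (the names, the predicates, the "Cambridge" restriction), not anything from Glanzberg's own formalism.

```python
# Minimal sketch of contextual quantifier-domain restriction:
# 'Every A is B' is evaluated over the As in D, where D is carved
# out of the background domain M by a contextual restriction.
# All names and predicates are invented for the illustration.

def every(A, B, background, context=None):
    """True iff every A in the (contextually restricted) domain is B."""
    D = [x for x in background if context is None or context(x)]
    return all(B(x) for x in D if A(x))

# Background domain for the conversation: a toy stand-in for M.
M = {"cam_grad1", "cam_grad2", "cam_ugrad1", "oxford_grad1"}

is_grad      = lambda x: "grad" in x and "ugrad" not in x
at_party     = lambda x: x in {"cam_grad1", "cam_grad2", "cam_ugrad1"}
in_cambridge = lambda x: x.startswith("cam_")   # the contextual restriction

# Unrestricted over M, 'Every graduate student turned up' is false...
assert not every(is_grad, at_party, M)
# ...but restricted to the Cambridge department, it comes out true.
assert every(is_grad, at_party, M, in_cambridge)
```

The contextualist's further claim, which this sketch does not touch, is that the background M itself -- not just the carved-out D -- varies with context.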
Well, so far, let's suppose, so good. But how are these reminders about common-or-garden contextual settings of domains going to help us with understanding what is going on in fixing 'background' domains? The story continues ...
Posted by Peter Smith at 9:08 AM
Saturday, December 08, 2007
The Guardian's Review of books on a Saturday is always worth reading, and the lead articles can be magnificent. This week, for example, it prints Doris Lessing's Nobel prize acceptance speech. Still, the reviews do occasionally get me spluttering into my coffee.
For a shaming display of sheer intellectual incompetence, how about one Colin Tudge, reviewing today a book on the conflict or otherwise between science and religion. Here's what he says about science:
Scientists study only those aspects of the universe that it is within their gift to study: what is observable; what is measurable and amenable to statistical analysis; and, indeed, what they can afford to study within the means and time available. Science thus emerges as a giant tautology, a "closed system".
That is simply fatuous. First, science isn't and can't be a tautology -- science makes contentful, checkable predictions, and no mere tautology (even in a stretched sense of "tautology") can do that. Second, the fact that scientists are limited by their endowments (both cognitive and financial) in no way implies that science is a "closed system" on any sensible understanding of that phrase -- at least at this early stage in the game, novel conjectures and new experimental techniques to test them are always possible. But it gets worse:
Religion, by contrast, accepts the limitations of our senses and brains and posits at least the possibility that there is more going on than meets the eye - a meta-dimension that might be called transcendental.
First, the "by contrast" is utterly inept. Anyone of a naturalistic disposition accepts the limitations of our senses and brains; science indeed already tells us that there is a lot more going on than meets the eye; and it is a pretty good bet, on scientific grounds, that there is a more going than we will ever be able to get our brains around. So what? Accepting such limitations has nothing whatever to do with "meta-dimensions" (ye gods, what on earth could that even mean?); and it would be just a horribly bad pun to slide from the thought that there might be aspects of the natural world that transcend our ability to get our heads around them to the thought that a properly modest view of our own cognitive limitations means countenancing murky religious claims about the "transcendental". Yet, we are told,
[A]theism - when you boil it down - is little more than dogma: simple denial, a refusal to take seriously the proposition that there could be more to the universe than meets the eye.
So, according to Tudge, no atheist takes a sensibly modest view about our cognitive limitations. What a daft thing to say. It is plainly entirely consistent -- I don't here say correct, but at least consistent -- to hold that (a) our cognitive capacities are limited, but (b) among the things we do have very good reason to believe are that Zeus, Thor, Baal and the like are fictions, and that the Gods of contemporary believers are in the same boat. And that entirely consistent position is, for a start, the one held by the atheist philosophers I know (in fact, by most of the philosophers I know, full stop).
There's more that is as bad in Tudge's piece, as you can read for yourself. If this kind of utterly shabby thinking is the best that can be done on behalf of religion, then it does indeed deserve everything that Dawkins and co. throw at it.
Added later, in response to comments elsewhere: If I wrote with a little heat, I was just echoing Colin Tudge, who says of Dennett and Dawkins:
On matters of theology their arguments are a disgrace: assertive without substance; demanding evidence while offering none; staggeringly unscholarly.
I think it is appropriate to point out, in similar words, that -- whatever the merits or demerits of Dennett and Dawkins -- Tudge's arguments are a disgrace, assertive without substance, and staggeringly unscholarly.
According to Michael Glanzberg's "Context and Unrestricted Quantification", quantifiers always have to be understood as ranging over some contextually given domain; and paradoxes like Russell's show that, 'for any given context, there is a distinct context which provides a wider domain of quantification'. So he is defending 'a contextualist version of the view that there is no absolutely unrestricted quantification'.
The aim of this paper, however, isn't to directly defend the contextualist thesis as the best response to the paradoxes (Glanzberg has argued the case elsewhere), so much as to explore more closely how best to articulate the thesis, and in particular to explore how the idea that quantifiers always have to be understood in terms of a background domain which is set contextually relates to more common-or-garden cases of quantifier domain restriction.
Consider, for example,
1. Every graduate student turned up to the party, and some undergraduates did too.
2. Everyone left before midnight.
In the first case there are explicitly restricted quantifiers. But we of course don't mean every graduate student in the world turned up: there is also a contextually supplied restriction to e.g. students in the Cambridge philosophy department. In the second case, context does all the restricting -- e.g. to the people at the party. So far, so familiar. But what about
3. Absolutely everything that exists, without exception, is self-identical?
Here there is no explicit restriction to a subclass of what exists; nor need there be any common-or-garden-contextual restriction of the ordinary kind. Still, Glanzberg wants to say, in any given context there is a background domain ('the widest domain of quantification available' in that context). This is the domain over which quantifications as in an occurrence of (3) range, when there is no explicit restriction and no common-or-garden-contextual restriction. And, the argument goes, there is a kind of contextual relativity in fixing this background domain (so, in a sense, the likes of (3) involve contextually relative although unrestricted quantifiers):
Whereas the absolutist holds there is one fixed background domain, which is simply 'absolutely everything', the contextualist holds that different contexts can have different background domains.
But of course the contextualist needs to say more than that: it isn't just that different contexts might give different extensions to 'absolutely everything', it is also the case that there is no way of setting up a 'maximal' context in which our quantifiers do succeed in being maximally, unexpandably, all-embracing. For example, the contextualist must say that, even given a context of sophisticated philosophical reflection, when -- in full awareness of the issues -- we essay a claim like
4. Absolutely everything that might fall under our quantifiers in any context whatsoever is self-identical,
we still somehow must fall short of our aim, because the context can be changed in a way that will expand what counts as everything. But how plausible is this? Well, we'll have to see how the explanations develop over the rest of the paper.
Friday, December 07, 2007
A while ago I made a start here on blogviewing Absolute Generality, edited by Agustín Rayo and Gabriel Uzquiano (OUP, 2006): but I only had a chance to comment on two papers before the chaos of term and other commitments brought things to a halt. I'm reviewing the book for the Bulletin of Symbolic Logic, so I really need to get back down to business in order to write the 'proper' review over the next few weeks. Next up here, then, will be comments on the contribution by Michael Glanzberg: I'm planning to go through the pieces in roughly the order they are printed. If you want to know what I thought about the contributions by Kit Fine and Geoffrey Hellman, then -- rather than trawl back through the blog -- you can find what I wrote all in one place here.
Wednesday, December 05, 2007
Tim Chow has posted a draft "beginner's guide to forcing". I very much like these opening remarks:
All mathematicians are familiar with the concept of an open research problem. I propose the less familiar concept of an open exposition problem. Solving an open exposition problem means explaining a mathematical subject in a way that renders it totally perspicuous. Every step should be motivated and clear; ideally, students should feel that they could have arrived at the results themselves. The proofs should be 'natural' in Donald Newman's sense: "This term ... is introduced to mean not having any ad hoc constructions or brilliancies. A 'natural' proof, then, is one which proves itself, one available to the 'common mathematician in the streets'." I believe that it is an open exposition problem to explain forcing. Current treatments allow readers to verify the truth of the basic theorems, and to progress fairly rapidly to the point where they can use forcing to prove their own independence results .... However, in all treatments that I know of, one is left feeling that only a genius with fantastic intuition or technical virtuosity could have found the road to the final result.
Leaving aside the question of how well Tim Chow brings off his expository task -- though it looks a very interesting attempt to my inexpert eyes, and I'm off to read it more carefully -- I absolutely agree with him about the importance of such expository projects, giving "natural" proofs of key results in various levels of detail: these things are really difficult to do well yet are hugely worth attempting for the illumination that they bring.
Also, Philosophy of Mathematics: 5 Questions (which I've mentioned before as forthcoming) is now out. This is a rather different kind of exercise in standing back and trying to give an overview, with 28 philosophers and logicians giving their takes on the current state of play in philosophy of mathematics (the authors range from Jeremy Avigad, Steve Awodey and John L. Bell through to Philip Welch, Crispin Wright and Edward N. Zalta). The five questions are
- Why were you initially drawn to the foundations of mathematics and/or the philosophy of mathematics?
- What example(s) from your work (or the work of others) illustrates the use of mathematics for philosophy?
- What is the proper role of philosophy of mathematics in relation to logic, foundations of mathematics, the traditional core areas of mathematics, and science?
- What do you consider the most neglected topics and/or contributions in late 20th century philosophy of mathematics?
- What are the most important open problems in the philosophy of mathematics and what are the prospects for progress?
Tuesday, December 04, 2007
Continuing to wax lyrical about the food here would quickly get very boring, so I won't -- I'll just say that if you get a chance to eat at the Trattoria dei Cacciatori, something of a Milanese institution, in old farm buildings on the outskirts of the city, then do take it!
But enough already. It has been very good being here again, in all sorts of ways (not just gastronomic!). And in the gaps between other things, I'm having the welcome chance to idly turn over in my mind what my next work project(s) should be. For the first time in far too many years, I find myself a completely free agent. No journal to edit, no necessity to write anything to get brownie points for this or that purpose, no new courses I need to work up. The feeling of freedom is quite a novelty. Slightly disconcerting, but I'm rather enjoying it. And some possible ideas are already beginning to take shape ...
Sunday, December 02, 2007
Two of life's mysteries. Why is it more or less impossible to get a decent cappuccino in England when any Autogrill stop on an Italian motorway can do a brilliant one? And just why is tagliatelle in butter with white truffle shaved over it to die for?
Saturday, December 01, 2007
I'm taking an overdue weekend away from the delights of Cambridge and from thinking/teaching about matters logical. Milan isn't at all my favourite Italian city (too big, too flat, too nineteenth century, too North European): but The Daughter is here, and the place has its moments. The front of the Duomo is almost revealed again after restoration and looks amazing. The shops in their way look equally wonderful (though it is mostly quite mad of course). L'Artigiano in Fiera is on at the moment, a huge fair, much of it showing off local products from all over Italy, including -- inevitably -- unending stalls of hams and sausages and salami and lardo and other delights of more variety than could last a lifetime. We were bowled over by the pride of the producers. Four people came back laden with porcine goodies.
And then there are the restaurants. Last night to La Riscacca Blu for the best sea food meal ever. Carpaccio of tuna and yellowtail and swordfish and anchovies. A salad of scampi and onions and tomatoes awash in oil. Then a very light fritto misto of scampi and octopus and squid and strips of courgettes. A pause. Pasta with red mullet. Then a whole turbot between four, simply baked but wonderful. Fine chardonnay from Alto Adige. Like many good Italian restaurants, the place looks nothing special. But it seems just impossible to eat like this in England, even if you spend an absolute fortune (and we didn't). (Our neighbours at the next table were a tough looking quartet of heavy-set guys, tucking in with great relish to a distinctly adventurous menu. It dawned on us that they must have been bodyguards for the minister of defence a couple of tables down.) So, when you are next in Milan ...