Tuesday, January 29, 2008

Three cheers for Jonathan Bennett

I rather doubt that it is worth teaching history of philosophy to undergraduates very early in their careers (ok, they might profitably read small gobbets, ripped from their contexts, but that's not the same, is it?). To my mind, the proper way of initially honouring the Great Dead Philosophers is to take some of their problems seriously -- but it seems to me pretty unlikely that the best way to take their problems seriously (in the company of near beginners) is to start where they did.

But be that as it may: if you are going to teach sizeable chunks of e.g. Locke, Berkeley and Hume to beginning undergraduates, the least you can do is make them accessible by translating them into modern English first. After all, we are supposed to be teaching philosophy, not teaching how to read 17th/18th century texts in order to excavate the arguments. So three cheers to Jonathan Bennett for taking on that task and doing it all so splendidly (and it is difficult to think of anyone you'd trust to do a better job).

I can imagine that lots of his colleagues in the history-of-philosophy trade will be dismissive of the enterprise, or will think it a waste of Bennett's great talents. But not me -- his early modern texts site is terrific. Check it out if you haven't already! You might even find yourself, like me, reading chunks of the Great Dead Philosophers with some unaccustomed enjoyment.

Sunday, January 27, 2008

Logical Options, 2: Reprise

Having given the seminar, and also found out what Michael Potter was saying about related stuff in lectures, I've redone/expanded the stuff on natural deduction in my reading notes on Logical Options Sec. 1.5.

Each round of tinkering makes the notes a bit more stand-alone: so if a student who has been introduced to logic by trees wants to get a handle on how axiomatic systems, natural deduction systems, and sequent calculi work for propositional logic, then this stuff should be useful even if they never open Logical Options (if only because it is more expansive than Bell, DeVidi and Solomon, and also because it shuffles the techno stuff about the soundness and completeness proofs for an axiomatic system to the end, rather than letting it clutter up the exposition of the key features of the various approaches).

I should say, though, that these notes are dashed off at great speed between a lot of other commitments, so there are no doubt typos/thinkos (which I'd of course be grateful to hear about!). Enjoy ...

Saturday, January 26, 2008

Another day, another logic lecture

These days, I use a data projector for all my intro logic lectures. Mostly that works really well (I mean, compared with old-school chalk-on-the-blackboard, or fiddling about with transparencies on an OHP). Still, it can lead to new kinds of foul-ups. For a start, there can be those annoyingly distracting formal typo/thinko errors on the slides -- correcting LaTeX-generated slides on the fly in lectures isn't exactly as easy as reaching for the board rubber to smudge out the offending chalk, or writing over transparencies! But worse, of course, you can suddenly realize that the slides just don't explain clearly what you wanted to explain, and/or don't tackle things in a sensible order, and your presentation falls to pieces in a rather conspicuous way! I managed to foul up both ways in yesterday's first-year lecture. Grrrrrr. So annoying.

Oh well. But on the other hand I don't have the once-common experience, e.g. when I was teaching a lot of philosophy of mind, of having a headful of objections and counter-objections to more or less anything I was saying -- something I never really got comfortable with. Now, at least, I believe what I am saying in lectures ...

Thursday, January 24, 2008

Logical Options, 3: Reading Dummett

I've just been putting together the third instalment of notes on Logical Options. In part these briefly comment on some things in the first half of Bell, DeVidi and Solomon's Chapter 2, and review some key definitions (preparatory to doing the official hard-core semantics next time). But we are also reading Chapter 2 of Dummett's Frege: Philosophy of Language.

Now, Dummett is often regarded as a particularly difficult-to-read writer. But not so. Or at least, certainly not in this chapter, which is extremely clearly written. He does, though, have one really rather annoying general habit that promotes the reputation for difficulty. He will cheerfully give us even a forty-page chapter (or longer) without a single section break. That can make everything seem unnecessarily daunting -- and students can easily lose their bearings. So I thought that perhaps the most useful thing I could do was to suggest a way of chunking up the chapter we are reading into bite-sized sections (in the way that an interventionist editor might have insisted on Dummett's doing in the first place). My suggested divisions start along the following lines:

  1. Introduction (the problem of multiple generality) [pp. 9-10]
  2. The fundamental insight -- 'sentences are constructed in a series of stages' [pp. 10-12]. In natural languages, the sequence of stages by which a sentence is constructed is not always transparently revealed by the linear order of the resulting sentence.
  3. The quantifier-variable notation as a way of reflecting the fundamental insight [pp. 12-15]. 'The point of the new notation was to enable the constructional history of any sentence to be determined unambiguously.'
And so on and so forth through the chapter (for an expansion and continuation, see my notes; a stock example of the kind of thing at stake in (1)-(3) follows below).
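
That stock example -- mine, not necessarily Dummett's own -- goes like this: a sentence such as 'Every boy loves some girl' can be built up in two different orders, and the quantifier-variable notation wears the difference in constructional history on its sleeve:

    \[
    \forall x\,\bigl(\mathrm{Boy}(x) \to \exists y\,(\mathrm{Girl}(y) \land \mathrm{Loves}(x,y))\bigr)
    \qquad\text{versus}\qquad
    \exists y\,\bigl(\mathrm{Girl}(y) \land \forall x\,(\mathrm{Boy}(x) \to \mathrm{Loves}(x,y))\bigr).
    \]

In the first, the step of generalizing over boys comes after 'loves some girl' has been attached; in the second, the order of construction is reversed. The surface word order of the English is the same either way; the constructional histories, and with them the truth conditions, are not.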

Now, is doing this sort of thing 'spoon-feeding'? I think not. Or at least, not in a bad way. We surely do want a near-beginner at logic to grasp Dummett's ideas on the depth and significance of the quantifier/variable notation (after all, these aren't just technical tricks which we are teaching). And anything that can be done to make the ideas more available by removing some merely surface difficulties in navigating through Dummett's text is going to be worth doing.

Kilvert's Diary

The Guardian last weekend had a piece on Kilvert's Diary which recounted the distressing history of the wanton destruction of most of the originals, with only a small fraction published. It also claimed 'The one-volume abridgement, published by Penguin, and subsequently by Pimlico, has fallen out of print, while Plomer's three-volume edition has long been unavailable'. Well, a bit of idle googling quickly showed that at least that's happily wrong. The abridged version is apparently in print from Read Books (see here), and the three-volume version is certainly in print too (see here). You can readily pick up inexpensive copies of e.g. the Folio Society abridged version from abebooks too. As for the Diary itself: in part it is a wonderful evocation of the landscape around Clyro, in part a touching record of a lost and simpler world, in part a moving reminder of the pretty dire lot of the rural poor, especially in age or infirmity. Kilvert is a humane observer who can write wonderfully. Read it!

Wednesday, January 23, 2008

Theories, models and Galois Connections

Our second meeting of the Model Theory Reading Group, and Nathan Bowler -- a category theory PhD student -- gave a terrific overview talk taking us through the rest of the second chapter of Hodges. Moreover, he briefly introduced us to Martin Hyland's idea that it is illuminating to think of the set of all theories and the power set of the set of all structures -- linked by the 'semantics' function taking a theory to its set of models, and the 'syntax' function taking a set of structures back to the theory comprising the sentences true in every structure in that set -- as together forming a Galois connection. A fascinating glimpse (resolution: I must get more on top of this sort of category-theoretic stuff).
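
For what it's worth, here is the standard way of spelling that out (my gloss, not Nathan's or Hyland's exact formulation). Order theories and sets of structures both by inclusion, and define

    \[
    \mathrm{Mod}(T) = \{\mathcal{A} : \mathcal{A} \models \varphi \text{ for every } \varphi \in T\},
    \qquad
    \mathrm{Th}(K) = \{\varphi : \mathcal{A} \models \varphi \text{ for every } \mathcal{A} \in K\},
    \]
    \[
    K \subseteq \mathrm{Mod}(T) \iff T \subseteq \mathrm{Th}(K).
    \]

Both maps are order-reversing, and the displayed biconditional is exactly the defining condition for an (antitone) Galois connection; the induced closure operations send a theory to the set of its semantic consequences, and a set of structures to the class of all models of its theory.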

Despite occasional light touches, Hodges's second chapter is pretty relentless stuff (a few more vivid examples of various notions, and neat illustrations of various distinctions would surely have been very welcome). So Nathan's talk turned a session that promised to be very hard going into an enjoyable and illuminating occasion.

Sunday, January 20, 2008

An Even Shorter Model Theory

Justin Bledin, a grad student at Berkeley who gave a paper today at the conference here, has a useful-seeming 35-page set of notes on material drawn from Chang and Keisler, and from the shorter Hodges, outlining some of the highlights from relatively elementary model theory.

Saturday, January 19, 2008

Superdegree theories of vagueness

The first day of the first Cambridge Graduate Conference on the Philosophy of Logic and Mathematics. It seems to be going quite well. I was responding briefly to the first paper, by Elia Zardini, on a kind of degree theory of vagueness. Since he hasn't published the paper, I won't discuss it in detail here. But here are some rather general worries about certain kinds of logical theory of the family Elia seems to like.

Suppose then -- just suppose -- you like the general idea of a degree-theory of vagueness, according to which you assign propositions values belonging to some many-membered class of values. And it will do no harm for present purposes to simplify and suppose that the class of values is linearly ordered. The minimal proposal is that propositions attributing baldness, say, to borderline cases get appropriately arranged values between the maximum and minimum. There are lots of immediate problems about how on earth we are to interpret these values, but let that pass just for a moment. Let's instead note that there are of course going to be various ways of mapping values of propositions to unqualifiedly correct utterances of those propositions. And there are going to be various ways of mapping formal relations defined over values onto unqualifiedly correct inferences among evaluated propositions.

One way to go on the first issue would be to be strict. Unqualifiedly correct propositions are to get the maximum value: any other value corresponds to a less than unqualified assertoric success. Alternatively, we could be more relaxed. We could so assign values that any proposition getting a value above some threshold is quite properly assertible outright, but perhaps we give different such propositions different values, on the basis of some principle or other. (For a possible model, suppose, just suppose, you think of values as projections of rational credences; well, we can and do quite properly assert outright propositions for which our credence is less than the absolute maximum, and we can give principles for fine-tuning assignments of credences.)

Suppose then we take the relaxed view: we play the values game so that there's a threshold such that propositions which get a value above the threshold -- get a designated value, in the jargon -- are taken as good enough for assertion. Now, this generous view about values for assertion can be combined with various views about what makes for unqualifiedly correct inferences. The familiar line, perhaps, is to take the view that correct inferences must take us from premisses with designated values to a conclusion with a designated value. But there are certainly other ways to play the game. We could, for example (to take us to something in the vicinity of Elia's recommendation), again be more relaxed and say that acceptable inferences -- inferences good enough to be endorsed without hesitation as unqualifiedly good -- are those that take us from premisses with good, designated, values to a conclusion which is, at worst, a pretty near miss. We could tolerate a small drop in value from premisses to conclusion.
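
Just to fix ideas, here is one schematic way of making that relaxed proposal precise (this is my formulation, for expository purposes only, not a report of Elia's own definitions). Take values in the unit interval, fix a designation threshold d and a tolerance t ≥ 0, and say that

    \[
    \Gamma \vDash_{d,t} \varphi
    \quad\text{iff}\quad
    \text{for every admissible valuation } v\colon
    \text{ if } v(\gamma) \ge d \text{ for all } \gamma \in \Gamma,
    \text{ then } v(\varphi) \ge d - t.
    \]

Setting t = 0 gives back the familiar preservation-of-designation account; setting t > 0 gives the relaxed, tolerant account just described, on which the conclusion of a good inference may be at worst a near miss.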

Well, ok, suppose, just suppose, we play the game in that relaxed mode. Then we should be able to sprinkle values over a long sorites chain so that the initial premiss is designated (is unqualifiedly assertible): the first man is bald. Each conditional in the sorites series is designated (so they are all assertible too): if the n-th man is bald, so is the (n+1)-th. Each little inference in the sorites is good enough (the value can't drop too much from premisses to conclusion). But still the value of 'man n is bald' can eventually drop below the threshold for being designated.
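
Here, with numbers invented purely for illustration (nothing in what follows is being reported from Elia's paper), is a toy sketch of how such a sprinkling of values might go, using a designation threshold of 0.95 and a tolerated drop of 0.02 per inference step:

    # Toy numbers only: a 100-man sorites chain with a crude assignment of degrees.
    # This just illustrates the shape of the 'relaxed' story sketched above.

    D = 0.95   # designation threshold: values >= D count as assertible outright
    T = 0.02   # tolerated drop in value from premisses to conclusion
    N = 100    # length of the sorites chain

    # bald[n] is the value assigned to 'man n+1 is bald'; the conditionals
    # 'if man n is bald, so is man n+1' each get the designated value 1 - T.
    bald = [max(1.0 - T * n, 0.0) for n in range(N)]
    conditionals = [1.0 - T] * (N - 1)

    def step_ok(premisses, conclusion, d=D, t=T):
        """A step is 'good enough' if, whenever every premiss is designated,
        the conclusion falls short of designation by at most t."""
        if all(v >= d for v in premisses):
            return conclusion >= d - t
        return True   # the criterion is silent once some premiss is undesignated

    assert bald[0] >= D                                  # initial premiss designated
    assert all(c >= D for c in conditionals)             # every conditional designated
    assert all(step_ok([bald[n], conditionals[n]], bald[n + 1]) for n in range(N - 1))

    first = next(n for n, v in enumerate(bald) if v < D)
    print("first undesignated 'man n is bald': man", first + 1, "with value", round(bald[first], 3))
    print("value at the end of the chain:", bald[-1])

On these particular toy figures, the initial premiss and every conditional are designated and every single step passes the tolerance test, yet 'man n is bald' has already slipped below the designation threshold by the fourth man, and has drifted all the way down to zero by the end of the chain.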

Terrific. Or it would be terrific if only we had some principled reason to suppose that the sprinkling of values made any kind of semantic sense. But do we? As we all know, degree theories are beset with really nasty problems. There are the problems we shelved about how to interpret the values in the first place. And when we get down to cases, there are all sorts of issues of detail. Just for a start, how many values are there? Too few (like three) and -- for example -- the tolerance story is difficult to take seriously, and in any case the many-valued theory tends to collapse into a different mode of presentation of a value-gap theory. Too many values and we seem to be faced with daft questions: what could make it the case that 'Smith is bald' gets the value 0.147 in the unit interval of values, as against the value 0.148? (Well, maybe talk about 'what makes it the case' is just too realist; maybe the degree theorist's plan is to tell some story about projected credences: but note the seriously anti-realist tendencies of such a theory.) And to return to issues of arbitrariness, even if we settle on some scale of values, what fixes the range of designated values? Why set the threshold at 0.950 rather than 0.951 in the unit interval, say? And what fixes the degree of tolerance we allow in acceptable inferences taking us from designated premisses to near-miss conclusions?

Well, there is a prima facie response at least to the issues about arbitrariness, and it is the one that Elia likes. Don't fix on any one generous degree theory or any one version of the relaxedly tolerant story about inferences. Rather, generalize over such theories and go for a logic of inferences that is correct however we set the numerical details. Then, the story goes, we can diagnose what is happening in the sorites without committing ourselves to any particular assignments of values.

It would be misleading to call this a species of supervaluationism -- but there's a family likeness. For the supervaluationist, any one choice of acceptable boundary sharpening is arbitrary; so what we should accept as correct is what comes out robustly, irrespective of sharpenings. Similarly for what we might call the relaxed superdegree theorist: it can be conceded that any one assignment of values to propositions and degree of tolerance in inferences is arbitrary; the claims we are to take seriously about the logic of vagueness are those that come out robustly, irrespective of the detailed assignments.

Well, as they say, it is a view. Here is just one question about it. And it presses on an apparently significant difference of principle between supervaluationism and the relaxed superdegree theory. On the familiar supervaluationist story, faced with a vague predicate F, we imagine various ways of drawing an exact line between the Fs and the non-Fs. Now, that will be arbitrary up to a point, so long as we respect the uncontentious clear cases of Fs and non-Fs. Still, once we've drawn the boundary, and got ourselves a refined sharp predicate F*, we can understand perfectly well what we've done, and understand what it means for something to be F* or not F*. The supervaluation base -- the various sharpenings F* of F -- can at least in principle be perfectly well understood. On the other hand, the relaxed superdegree theory is generalizing over a spectrum of many-valued assignments of degrees of truth (or whatever) to propositions. It's not clear what the constraints on allowed assignments would be. But there's a more basic problem. Take any one assignment. Take the 1742-value theory with the top 37 values designated and inferential tolerance set at a drop of 2 degrees. Well, I've said the words, but do you really understand what that theory could possibly come to? What could constitute there being 1742 different truth-values? I haven't the foggiest, and nor have you. We just wouldn't understand the supposed semantic content of such a theory. So, given that we don't begin to understand almost any particular degree theory, what (I wonder) can be so great about generalizing over all of them? To put the point bluntly: can abstracting and generalizing away from the details of a lot of specific theories that we don't understand give us a supertheory we do understand and which is semantically satisfactory?

Friday, January 18, 2008

Stewart Shapiro, “Computability, Proof, and Open-Texture”

It was my turn briefly to introduce the Logic Seminar yesterday, and we were looking at Stewart Shapiro's “Computability, Proof, and Open-Texture” (which is published in Church’s Thesis After 70 Years). I've blogged about this before, and although I didn't look back at what I said then when rereading the paper, I seem to have come to much the same view of it. Here's a combination of some of what I said yesterday and what I said before. Though let me say straight away that it is a very nice paper, written with Stewart's characteristic clarity and good sense.

Leaving aside all considerations about physical computability, there are at least three ideas in play in the vicinity of the Church-Turing Thesis. Or rather, there is first a cluster of inchoate, informal, open-ended, vaguely circumscribed ideas of computability, shaped by some paradigm examples of everyday computational exercises; then second there is the semi-technical idea of effective computability (with quite a carefully circumscribed though still informal definition); and then thirdly there is the idea of Turing computability (and along with that, of course, the other provably equivalent characterizations of computability as recursiveness, etc.).

It should be agreed on all sides that our original inchoate, informal, open-ended ideas could and can be sharpened up in various ways. The notion of effective computability takes some strands in the inchoate notion and refines and radically idealizes them in certain ways. But there are other notions, e.g. of feasible/practicable computability, that can be distilled out. It isn't that the notion of effective computability is -- so to speak -- the only clear concept waiting to be revealed as the initial fog clears.

Now, I think that Shapiro's rather Lakatosian comments in his paper about how concepts get refined and developed and sharpened in mathematical practice are all well-taken, as comments about how we get from our initial inchoate preformal ideas to the semi-technical notion of effective computability. And yes, I agree, it is important to emphasize that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than there are particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision -- guided of course by the desire for interesting, fruitful concepts -- in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept (or concept-cluster) and arriving at something more sharply bounded.
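
(Just to have the stock example on the table -- this is the usual two-argument Ackermann-Péter definition, nothing special to Shapiro's paper -- here it is as a program every step of which is utterly mechanical, though running it is hopeless in practice for even tiny arguments:)

    def ack(m, n):
        """The two-argument Ackermann-Peter function. Each step of the recursion is
        entirely mechanical, so the function is 'effectively' computable in the
        idealized sense; but ack(4, 2) already has 19729 decimal digits, so feasible
        computability is quite another matter."""
        if m == 0:
            return n + 1
        if n == 0:
            return ack(m - 1, 1)
        return ack(m - 1, ack(m, n - 1))

    print(ack(2, 3))   # 9
    print(ack(3, 3))   # 61 -- but don't even think of asking for ack(4, 2) this way

Whether we count such a function as 'computable' plainly depends on which idealizing decisions we make about resources -- which is just the sort of decision-point the pre-processing involves.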

Where a question does arise is over the relation between the semi-technical notion of effective computability and the notion of Turing computability. Shapiro writes as if the move onwards from the semi-technical notion is (as it were) just more of the same: the same Lakatosian dynamic (rational conceptual development under the pressure of proof-development) is at work first in getting from the original inchoate notion of computability to the notion of effective computability, and then again in eventually refining out the notion of Turing computability. Well, that's one picture. But an alternative picture is that once we have got as far as the notion of effectively computable functions, we do have a notion which, though informal, is subject to sufficient constraints to ensure that it does indeed have a determinate extension (the class of Turing-computable functions). For some exploration of the latter view, see for example Robert Black's 2000 Philosophia Mathematica paper.

The key question here is: which picture is right? Looking at Shapiro's paper, it is in fact difficult to discern any argument for supposing that things go his way. He is good and clear about how the notion of effective computability gets developed. But he seems to assume, rather than argue, that we need more of the same kind of conceptual development before we settle on the idea of Turing computability as a canonically privileged concept of computability. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

Wednesday, January 16, 2008

Starting the shorter Hodges

First model theory seminar today. We were just limbering up, reading the first chapter and a bit of the second. It fell to me to try to say something to introduce the reading -- difficult, as nothing very exciting is going on yet: the interesting stuff starts next week. So I offered a few arm-waving thoughts about the way Hodges defines 'structure' and about his (and other model theorists') stretched use of 'language'. Here is what I said.

[Later: In the light of comments, I've slightly revised the intro piece to make what I was saying clearer at a few points. And of course, I was struggling a bit to find anything thought-provoking to say about the very introductory opening pages of Hodges's book! -- so I'm no doubt slightly over-stating the points too.]

Monday, January 14, 2008

Logical Options, 2

Here are the reading notes for the second session on Logical Options. These are focussed on the discussion in Section 1.5 of styles of proof for propositional logic (other than trees/semantic tableaux), namely axiomatic, natural deduction, and sequent proofs. I've ended up writing rather more than Bell, DeVidi and Solomon do, because I found their discussion oddly patchy (for example, why introduce the idea of natural deduction proofs without at least outlining how they are usually set out, either as Fitch-style indented proofs or classically Gentzen-style?). I'll probably get round to editing these notes, energy and time permitting, once we've had the class and I get some feedback. But meanwhile, if you are involved with a similar course (as teacher or student) you might find these notes useful. Comments welcome.

OK, that's another small beginning of term task done. Next up, I've got to put together some thoughts to introduce the first seminar (for a very different bunch!) on the shorter Hodges. Gulp.

Wednesday, January 09, 2008

Philosophy of Mathematics: Five Questions

I've mentioned before the newly published Philosophy of Mathematics: Five Questions, in which some twenty-eight philosophers, logicians and mathematicians respond to a bunch of questions related to how they see the current state of the philosophy of mathematics. My copy arrived today. (Some of the contributions are on the authors' web-sites: Jeremy Avigad, Mark Colyvan, Solomon Feferman, Edward Zalta. There are other pieces worth reading by e.g. Geoffrey Hellman, Stewart Shapiro, Alan Weir, and Crispin Wright.)

My first impressions are that (i) it is worth ordering for your university's library (I imagine that some of the pieces would be quite useful and interesting orientation for students), but (ii) it is a very mixed bag (thus, Feferman offers twenty informative pages, while Thomas Jech and Penelope Maddy provide barely three pages between them), and overall (iii) I guess it is rather disappointing, with too many remarks too brisk and allusive to be very useful to anyone. It is no surprise, for example, that Steve Awodey's answer to the question "What do you consider the most neglected topics and/or contributions in late 20th century philosophy of mathematics?" is "Category theory". But he doesn't tell us why.

Maybe the editors should have been more directive.

Monday, January 07, 2008

Lists

Over the holiday season, the reviews pages were full of lists of books of the year (inexplicably, no-one thought to do a list of "best books on Gödel's theorems in 2007"). I've read embarrassingly few of the listed books, though Orlando Figes's The Whisperers is on the table, still waiting for me to quite finish his wonderful Natasha's Dance.

The latest was in The Times over the weekend, a rather bizarre list of the fifty Greatest British Writers Since 1945. They also published a further list of also-rans. One very odd omission who didn't even make their long list was Jonathan Raban. His Old Glory and Passage to Juneau (for example) are wonderful books. Here's part of what Douglas Kennedy said about the latter in the Independent:

Raban is, for my money, one of the key writers of the past three decades - not only for his immense stylistic showmanship, but also for the way he has taken that amorphous genre called "travel writing" and utterly redefined its frontiers ... Passage to Juneau is his finest achievement to date. Ostensibly an account of a voyage Raban took from his new home in Seattle to the Alaskan capital through that labyrinthine sea route called the Inside Passage, it is, in essence, a book about the nature of loss ... You close this extraordinary book marvelling at this most distressing but commonplace of ironies. He's home, but he's lost. Just like the rest of us.

But I don't think "stylistic showmanship" is quite right. The prose is faultless, "as beautiful and clear as the blue ocean on a crisp morning", as another reviewer put it, but not showy, and never inviting you to admire its cleverness. Raban, for my money, is worth a dozen Martin Amises.

[Later: I hadn't noticed that, as it happens, Raban wrote an illuminating piece on Obama in last Saturday's Guardian.]

Thursday, January 03, 2008

Godard ... and model theory


Two more books from the CUP sale. One is David Wills's Jean-Luc Godard: Pierrot le Fou in the Cambridge Film Handbooks series, which should be a fun read. Shame that there are no colour stills suggesting the amazingly vibrant look of the thing: but since I only paid £3 I really can't complain. I'm sure that I never understood even a quarter of the film, and I've not seen it for many years: but I've always remembered it as terrific.

The other purchase is Joyal and Moerdijk's Algebraic Set Theory. Frankly, I bought that more in ambitious hope than in any firm belief that I'll get my head around it, as my category theory is fragmentary and fragile. But I'll give it a shot.

Not soon, though, as this term is model-theory term: as I've mentioned before, Thomas Forster and I are going to be running a reading group for a mixed bunch, to work through Hodges's Shorter Model Theory. I'm going to be doing some more much needed background homework/revision over the next week, and at the moment I'm working through some of Chang and Keisler. Incidentally, that's surely another candidate for a Dover reprint: even though in some ways it isn't fantastically well written, and so the authors don't always make the reader's job a comfortable one, I guess it is still a rightly classic treatment.

Wednesday, January 02, 2008

Three cheers for the CUP bookshop sale

Ah, it is that time of year again: so off to gather up some absolute bargains at the CUP bookshop sale. Notionally, they are flogging off 'damaged' books: but the Press has a remarkably idealized view of what counts as damaged (indeed, in many cases, the only perceptible damage is produced by a large red "DAMAGED" stamped on the title page). My best buy today: I picked up a copy of Aczel/Simmons/Wainer's Proof Theory for a tenth of the list price -- ok, a paperback is in fact due next month, but that's still an amazing saving.

I also got the volume on Paul Churchland in the 'Contemporary Philosophy in Focus' series, which looks fun. And just to show that I'm not merely a scientistically minded logician, I bought a volume of Alasdair MacIntyre's essays which look, erm, uplifting (well, for a mere £3, I thought I could do with some enlightenment). If the threatened snow holds off, I'll return tomorrow, as they keep putting out more sale books to tempt one back. The fact that I haven't yet got round to reading the purchases from last year's sales of course doesn't dampen my enthusiasm one jot ...