Friday, June 29, 2007

Forthcoming attractions ...

Well, having perhaps rather foolishly said I was thinking about blogging on the Absolute Generality collection edited by Agustin Rayo and Gabriel Uzquiano, I’ve been asked to review it for the Bulletin of Symbolic Logic. So that decides the matter: it is unrestricted quantification, indefinite extensibility, and similar attractions next! Oh what fun ...

You can get a good idea of what is in the collection by reading the introduction here. And I’ll start commenting paper by paper next week -- starting with Kit Fine’s paper -- as a kind of warm-up to reviewing the book "properly" for BSL. All comments as we go along will of course be very welcome!

Thursday, June 28, 2007

Church’s Thesis 15: Three last papers

The next paper in the Olszewski collection is Wilfried Sieg’s “Step by recursive step: Church’s analysis of effective calculability”. And if that title seems familiar, that’s because the paper was first published ten years(!) ago in the Bulletin of Symbolic Logic. I’ve just reread the paper, and the historical fine detail is indeed interesting, but not (I think) particularly exciting if your concerns are about the status of Church’s Thesis now that the dust has settled. So, given that the piece is familiar, I don’t feel moved to comment on it further here.

Sieg’s contribution is disappointing because it is old news; the last two papers are disappointing because neither says anything much about Church’s Thesis (properly understood as a claim about the coextensiveness of the notions of effective computability and recursiveness). Karl Svozil, in “Physics and Metaphysics Look at Computation”, instead writes about what physical processes can compute, and in particular says something about quantum computing (and says it too quickly to be other than fairly mystifying). And David Turner’s “Church’s Thesis and Functional Programming” really ought to be called “Church’s Lambda Calculus and Functional Programming”.

Which brings us to the end of the collection. A very disappointing (at times, rather depressing) read, I’m afraid. My blunt summary suggestion: read the papers by Copeland, Shagrir, and Shapiro and you can really give the other nineteen a miss ...

Tuesday, June 26, 2007

Church’s Thesis 14: Open texture and computability

Back at last to my blogview of the papers in Church’s Thesis After 70 Years (new readers can start here!) -- and we've reached a very nice paper by Stewart Shapiro, “Computability, Proof, and Open-Texture”, written with his characteristic clarity and good sense. One of the few ‘must read’ papers in the collection.

But I suspect that Shapiro somewhat misdescribes the basic logical geography of the issues in this area: so while I like many of the points he makes in his paper, I don’t think they support quite the conclusion that he draws. Let me explain.

There are three concepts hereabouts that need to be considered. First, there is the inchoate notion of what is computable, pinned down -- in so far as it is pinned down -- by examples of paradigm computations. Second, there is the idealized though still informal notion of effective computability. Third, there is the notion of Turing computability (or alternatively, recursive computability).

Church’s Thesis is standardly taken -- and I’ve been taking it -- to be a claim about the relation between the second and third concepts: they are co-extensive. And the point to emphasize is that we do indeed need to do some significant pre-processing of our initial inchoate notion of computability before we arrive at a notion, effective computability, that can reasonably be asserted to be co-extensive with Turing computability. After all, ‘computable’ means, roughly, ‘can be computed’: but ‘can’ relative to what constraints? Is the Ackermann function computable (even though for small arguments its value has more digits than particles in the known universe)? Our agreed judgements about elementary examples of common-or-garden computation don’t settle the answer to exotic questions like that. And there is an element of decision -- guided of course by the desire for interesting, fruitful concepts -- in the way we refine the inchoate notion of computability to arrive at the idea of effective computability (e.g. we abstract entirely away from consideration of the number of steps needed to execute an effective step-by-step computation, while insisting that we keep a low bound on the intelligence required to execute each particular step). Shapiro writes well about this kind of exercise of reducing the amount of ‘open texture’ in an inchoate informal concept and arriving at something more sharply bounded.
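
To make the Ackermann example vivid, here is a minimal sketch in Python (my illustration, nothing from Shapiro’s paper; this is the standard two-argument Ackermann-Peter function): effectively computable by the most idealized standard, since each step is trivial bookkeeping, yet hopeless in practice almost at once.

    def ackermann(m, n):
        # Each step is a trivial rewrite, so the function is effectively
        # computable in the idealized sense -- but its values explode.
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    # ackermann(2, 3) == 9 and ackermann(3, 3) == 61 come back at once;
    # ackermann(4, 2) has 19,729 digits, and this naive recursion would
    # blow the stack long before delivering it.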

However, the question that has lately been the subject of some debate in the literature -- the question whether we can give an informal proof of Church’s Thesis -- is a question that arises after an initial exercise of conceptual refinement has been done, and we have arrived at the idea of effective computability. Is the next move from the idea of effective computability to the idea of Turing computability (or some equivalent) another move like the initial move from the notion of computability to the idea of effective computability? In other words, does this just involve a further reduction in open texture, guided by more considerations ultimately of the same kind as are involved in the initial reduction of open texture in the inchoate concept of computability (so that the move is rendered attractive for certain purposes but is not uniquely compulsory)? Or could it be that once we have got as far as the notion of effective computability -- informal though that notion is -- we have in fact imposed sufficient constraints to force the effectively computable functions to be none other than the Turing computable functions?

Sieg, for example, has explored the second line, and I offer arguments for it in my Gödel book. And of course the viability of this line is not in the slightest bit affected by agreeing that the move from the initial notion of computability to the notion of effective computability involves a number of non-compulsory decisions in reducing open texture. Shapiro segues rather too smoothly from discussion of the conceptual move from the inchoate notion of computability to the notion of effective computability to discussion of the move from effective computability to Turing computability. But supposing that these are moves of the same kind is in fact exactly the point at issue in some recent debates. And that point, to my mind, isn’t sufficiently directly addressed by Shapiro in his last couple of pages to make his discussion of these matters entirely convincing.

But read his paper and judge for yourself!

Friday, June 22, 2007

Normal service will soon be resumed ...

The last tripos meeting today, and as always justice was of course done. I wish. (Oh, roll on the day when we stop having to place students in artificially bounded classes -- first, upper second, etc. -- and just rank-order on each paper and give transcripts ...). As chair of the examining boards for Parts IB and II, I've been much distracted this week, but I hope to get back to finishing my blogview of the Olszewski collection over the weekend, and then I’m thinking of turning to the papers in the recent collection on Absolute Generality edited by Agustin Rayo and Gabriel Uzquiano.

A delight in the post today (a belated birthday present). Four Haydn CDs to fill random holes in my otherwise almost complete collection of the Lindsays' recordings (almost, because I've never been moved to get the remainder of their Tippett discs). One of the terrific things about living in Sheffield in the nineties was that we overlapped with the Lindsays at their peak. They lived in the city, and often played at the Crucible Studio Theatre. That was a wonderful small space: they would sit facing each other in a square with three hundred or so listeners closely packed around in tiers, and in that intimate atmosphere play with an unmatched intensity and directness, talking to the audience between pieces. Nothing has come close since.

Sunday, June 17, 2007

It's tough being a philosophy student ...

The last tripos scripts marked! And by happy chance, the answers I marked today were among the best of the season, and cheered me up a lot. (I'm Chair of the Examining Board for Parts IB and II this year so I've meetings to organize this coming week, and still have to examine three M.Phil. dissertations: but at least I've read all the relevant undergraduate scripts for the year. And I'm on leave next summer term, so that's it for two years. Terrific.)

It strikes me again while marking that it's quite tough being a philosophy student these days: the disputes you are supposed to get your head around have become so sophisticated, the to and fro of the dialectic often so intricate. An example. When I first started teaching, Donnellan's paper on 'Reference and Definite Descriptions' had quite recently been published -- it was state of the art. An undergraduate who could talk some sense about his distinction between referential and attributive uses of descriptions was thought to be doing really well. Just think what we'd expect a first class student to know about the Theory of Descriptions nowadays (for a start, Kripke's response to Donnellan, problems with Kripke's Gricean manoeuvres, etc.). True there are textbooks, Stanford Encyclopedia articles, and the like to help the student through: but still, the level of sophistication we now expect from our best undergraduates is daunting.

Saturday, June 16, 2007

Quackery and the philosophy of science

I’ve just added a link to Prof. David Colquhoun’s blog on quackery, now renamed Improbable Science (after a bit of a legal kerfuffle, recounted in today's Guardian). Great stuff.

And indeed, there are some really terrific internet resources out there, doing their best to counteract the tide of unreason that sometimes seems to be lapping at our doors. A question though. You’d have thought that philosophers of science would have a strong interest in joining in enthusiastically with the public defence of scientific reason, and doing their bit to chip away at the complete bollocks of creationism, intelligent design, and all the rest. Where are they all? Maybe it is just incompetent googling on my part, but I haven’t noticed any vigorous internet interventions. But I’d be very happy to be proved wrong ... any pointers to blogs for me to link to?

Thursday, June 14, 2007

Church’s Thesis 13: Gödel on Turing

Phew! At last, I can warmly recommend another paper in Church’s Thesis After 70 Years ...

Although initially Gödel was hesitant, by about 1938 he is praising Turing’s work as establishing the “correct definition” of computability. Yet in 1972 he writes a short paper on undecidability, which includes a section headed “A philosophical error in Turing’s work”, in which Gödel takes issue with Turing’s implicit assumption of the finitude of the human mind. Question: is this a change of view on Gödel's part (from seeing Turing’s work as successful to seeing it as fatally flawed)?

Surely not. What Gödel was praising in 1938 was Turing’s analysis of a finite step-by-step computational procedure. (Recall the context: Turing was originally fired up by the Entscheidungsproblem, which is precisely the question whether there is a finitistic procedure that can be mechanically applied to decide whether a sentence is a first-order logical theorem. So it is analysis of such procedures that is called for, and that was of concern to Gödel too.) What the later Gödel was resisting in 1972 was Turing’s further thought that, in the final analysis, human mental procedures cannot go beyond such finite mechanical procedures (he was inclined to think that, in the words of his Gibbs Lecture, the human mind “infinitely surpasses the powers of any finite machine”). There is no change of mind, just a change of topic.

That’s at any rate what I have previously taken to be the natural view, without doing any careful homework to support it (but it is of course the view that fits with the important distinction I’ve been stressing in these comments, between CT as a claim about effective computability, and bolder theses about what can be computed by machines and/or minds). And so it is very good to see that now Oron Shagrir has given a careful and convincing defence of this view, with lots of detailed references, in his (freely downloadable!) paper “Gödel on Turing on computability”. Good stuff!

Tuesday, June 12, 2007

Church’s Thesis 12: Kreisel, Church

I’m going to pass over the next three papers in the Olszewski collection pretty quickly, for various reasons.

First, there’s a piece on “Analog computation and Church’s Thesis” by Jerzy Mycka: I just don’t know anything about this stuff, and am certainly not in a position to comment on this piece, though it looks decent enough.

Then there is a paper by Piergiorgio Odifreddi on “Kreisel’s Church”. Now, I find Kreisel’s writings intriguing and irritating in roughly equal measure (all sorts of interesting things seem to be going on, but he just can’t or won’t write flat-footed technical exposition and direct, transparent, plain-spoken philosophical commentary). So I’d hoped that Odifreddi might take up one or two of Kreisel’s themes from his discussions of CT and give them a careful development and exploration -- for Odifreddi's wonderful book on Recursion Theory shows how clear and judicious he can be. But regrettably, he does something quite different: he takes us on a lightning survey of Kreisel’s many relevant articles, with a lot of quotations and some references to related later work. But if you were puzzled by Kreisel before, you’ll stay pretty puzzled -- though you’ll have a longer bibliography!

Next, Adam Olszewski contributes a short piece on “Church’s Thesis as interpreted by Church”. Here Olszewski does at least pick up on the point that, as I noted, Murawski and Wolenski slurred over without comment. CT is nowadays usually taken to be a claim about the co-extensiveness of the two notions of effective computability and recursiveness; but the Founding Fathers were wont to talk of the notions being identical, or of one notion being a definition of the other. Church in 1936 himself uses both identity talk and definition talk. But actually, it isn’t too likely that -- at that early date -- Church had a clearly worked-out position in mind on the status of the correlation he was proposing between computability and recursiveness, and I don’t think we can read too much into his choice of words here. The headline is that Church, at least originally, seemed to take the modest view of the correlation as having something like the status of a Carnapian explication, while Turing thought, more boldly, that it could be forced on us by an analysis of the very notion of a computation. This is familiar stuff, however, and Olszewski’s brief remarks don’t in any way change the familiar picture.

Monday, June 11, 2007

Diversions: Stripes, Bach, cats and soup

The hundreds -- nay, countless thousands -- who regularly look at this blog for its sheer entertainment value may be feeling just a bit short changed, what with all those heavy duty posts on Church’s Thesis interspersed with moans about tripos marking. What kind of way is that to treat a devoted readership? So here are some assorted diversions (the Monday Colour Section, if you like).

  1. First, you might have missed this about causation, late on BBC2 a few nights ago: and there’s a new CD out in a few days!
  2. Not to your taste? Then here's an even better reason to be cheerful -- dates for a world tour announced.
  3. Then there is the silly website of the day.
  4. Recipe of the day (from my all-time favourite restaurant -- worth a detour!)
  5. And announced today, more about Leopard ... looks truly amazing.

Church’s Thesis 11: Its status, once more

As you can imagine, it is going to be pretty difficult, by this stage in the game, to say something both novel and illuminating about ‘The status of Church’s Thesis’. And certainly, Roman Murawski and Jan Wolenski don’t really succeed in their paper of that title.

They don't help their cause by seeming to vacillate from the start about how to understand Church’s Thesis itself. Their official story is clear enough: CT is the claim that a function is effectively computable if and only if it is (partial) recursive. Fine: by my lights, that’s the right thing to say. But note, this makes CT a doctrine in the realm of reference. But then Murawski and Wolenski immediately offer, without any comment on the mismatch, quotations from Kalmar and Kleene which talk of CT as stating ‘the identity of two notions’ or as supplying ‘previously intuitive terms ... with a certain precise meaning’ -- talk which is only really appropriate if CT is taken as a doctrine in the realm of sense. Of course, our belief that the extensions of ‘effectively computable function’ and ‘recursive function’ are the same may well be grounded in beliefs about the relation between the senses of those expressions: but we surely ought to be clear here when we are talking about sense, and when we are talking about reference.

Anyway, Murawski and Wolenski proceed to distinguish four lines that have been taken about the status of CT, of increasing plausibility according to them: (1) it is an empirical hypothesis, (2) it is an axiom or theorem, (3) it is a definition, (4) it is an explication.

But they seem just confused in supposing, as regards (1), that anyone has taken CT -- as they have officially defined it -- to be an empirical hypothesis. Rather, it is claims like “every humanly computable function is recursive”, or “every function computable by a physically realizable machine is recursive”, that have been thought to be empirical. And as I've stressed before in this series of postings, CT is to be distinguished from those quite different claims. So put (1) aside.

Now, as Mendelson famously stressed, the easy half of CT looks to have a perfectly good informal proof (it is an informal theorem, if you like). Running the argument for total functions: the initial functions are plainly effectively computable; definitions by composition and recursion preserve effective computability; and definition by regular minimization preserves effective computability too. But all total recursive functions are definable from initial functions by composition, recursion and/or regular minimization. So all total recursive functions are effectively computable. Mutatis mutandis for partial recursive/partial computable functions. QED. So if we are to reject (2), we need a good reason to suppose that it is impossible to run a comparable argument for the difficult half of CT. But Murawski and Wolenski change the subject twice. First, they talk about what would be needed for a formal proof of CT and the problem of showing that “the adopted axioms for computability do in fact reflect exactly the properties of computability (intuitively understood)” -- which looks too much like the problem of establishing CT. But whatever the force of those remarks about formal provability, they don’t in themselves sabotage the possibility of an argument for the hard half of CT which, while informal, is as cogent as the familiar argument for the easy half. Then secondly, Murawski and Wolenski go off at a tangent talking about the role of CT in proofs: but that’s irrelevant to (2).
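
As a gloss on that easy-half argument, here is a toy sketch in Python (my illustration, nothing from the paper; all names are mine). The visible point is just that each closure operation manifestly turns effectively computable ingredients into an effectively computable result.

    # Initial functions: plainly effectively computable.
    def zero(*args): return 0
    def succ(x): return x + 1
    def proj(i): return lambda *args: args[i]

    # Composition preserves effective computability: run the g's, then f.
    def compose(f, *gs):
        return lambda *args: f(*(g(*args) for g in gs))

    # Primitive recursion preserves it too: h(x, 0) = f(x) and
    # h(x, n+1) = g(x, n, h(x, n)) -- a finite loop of effective steps.
    def prim_rec(f, g):
        def h(x, n):
            acc = f(x)
            for i in range(n):
                acc = g(x, i, acc)
            return acc
        return h

    # Regular minimization: search for the least n with f(..., n) = 0;
    # 'regular' means a witness always exists, so the search halts.
    def mu(f):
        def h(*args):
            n = 0
            while f(*args, n) != 0:
                n += 1
            return n
        return h

    # E.g. addition, by primitive recursion from the initial functions:
    add = prim_rec(lambda x: x, lambda x, i, acc: succ(acc))
    assert add(3, 4) == 7

Nothing here is deep -- which is exactly why the easy half has a perfectly good informal proof.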

We can skip over (3), since such remarks about CT as a definition as we find in the Founding Fathers don’t really seem to distinguish that thought from (4). The real options do indeed seem to be (2) -- understood as a claim about there being an informal proof for CT -- versus (4). Murawski and Wolenski worry a bit about the kind of explication CT could provide, given that they say “it replaces a modal property of functions by one that is non-modal”. But while there may be worries about the notion of explication here, that isn’t one of them. For “f is effectively computable” is in the current context equivalent to “there is an algorithm which computes f”. So there isn’t anything excitingly modal going on.

Friday, June 08, 2007

Hemlock all round?

I remember Geoffrey Hunter (the author of the rather good Metalogic) telling me years ago that at the beginning of his intro logic course, having explained the idea of a valid argument, he gave out a sheet of examples to see which arguments beginners naively judged to be valid and which not. Then, at the end of the course, he gave out the same example sheet, asked which arguments were valid ... and people on average did worse.

Well, you can see why. Students learn some shiny new tools and are then tempted to apply the tools mindlessly, so e.g. faced with inferences involving conditionals, despite all your warnings, they crank a truth-table, and out comes a silly answer.
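
The tool itself is innocent enough, of course. Here is a minimal truth-table validity checker in Python (my sketch, not Hunter's; all names are mine): it simply brute-forces every valuation, and cranking it mindlessly on ordinary 'if's yields exactly the sort of verdict that trips students up.

    from itertools import product

    # An argument is truth-functionally valid iff no valuation makes all
    # the premises true and the conclusion false. Premises and conclusion
    # are functions from a valuation (a dict of atoms) to True/False.
    def valid(premises, conclusion, atoms):
        for values in product([True, False], repeat=len(atoms)):
            v = dict(zip(atoms, values))
            if all(p(v) for p in premises) and not conclusion(v):
                return False  # counterexample valuation found
        return True

    # Material conditional: false only when antecedent true, consequent false.
    imp = lambda p, q: (not p) or q

    # 'If P then Q; so if Q then P' -- rightly comes out invalid:
    print(valid([lambda v: imp(v['P'], v['Q'])],
                lambda v: imp(v['Q'], v['P']), ['P', 'Q']))  # False

    # 'Not P; so if P then Q' -- comes out valid, however odd it sounds
    # as a verdict on an ordinary conditional:
    print(valid([lambda v: not v['P']],
                lambda v: imp(v['P'], v['Q']), ['P', 'Q']))  # True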

Likewise, students uncorrupted by philosophy could of course reel off a list of scientific theories that have been seriously proposed in the past but which -- they'd agree -- have been shown to be wrong (the phlogiston theory of combustion, the plum pudding model of the atom, and so on and so forth). But teach students some philosophy of science and ask them if you can falsify a theory and they now firmly tell you that it can't be done (merrily arguing from the flaws of Popper's falsificationism to the impossibility of showing any theory is false). Sigh. Of course, the same students will -- in another answer -- also tell you that scientific realism is in deep trouble because of the pessimistic induction from all those false past theories ...

We try not to addle our students' brains, but I fear we do.

Thursday, June 07, 2007

Church’s Thesis 10: Precision and pretension

So, we're halfway through my blogview of Church’s Thesis After 70 Years edited by Adam Olszewski, Jan Wolenski and Robert Janusz: I've commented on eleven papers, and there are eleven to go. I'm afraid that so far I've not been wildly enthusiastic: the average level of the papers is not great. But there are cheering names yet to come: Odifreddi, Shapiro, Sieg. So I live in hope ...

But the next paper -- Charles McCarty's 'Thesis and Variation' -- doesn't exactly raise my spirits either. The first couple of sections are pretentiously written, ill-focused remarks about 'physical machines' and 'logical machines' (alluding to Wittgenstein). The remainder of the paper is unclear, badly expounded stuff about modal formulations of CT (in the same ball park, then, as Horsten). Surely, in this of all areas of philosophy, we can and should demand direct straight talking and absolute transparency: and I've not the patience to wade through authors who can't be bothered to make themselves totally clear.

At least the next piece, five brisk sides by Elliott Mendelson, is clear. He returns to the topic of his well known 1990 paper, and explains again one of his key points:

I do not believe that the distinction between “precise” and “imprecise” serves to distinguish “partial recursive function” from “effectively computable function”.

To be sure, we offer more articulated definitions of the first notion: but, Mendelson insists, we only understand them insofar as we have an intuitive understanding of the notions that occur in the definition. Definitions give out at some point where we are (for the purposes at hand) content to rest: and in the end, that holds as much for “partial recursive function” as for “effectively computable function”.

Mendelson's point then is that the possibility of establishing the 'hard' direction of CT can't be blocked just by saying that the idea of a partial recursive function is precise while the idea of an effectively computable function isn't, so that there is some sort of categorial mismatch. (Actually, though I take Mendelson's point, I'd want to stress a somewhat different angle on it. For CT is a doctrine about the co-extensiveness of two concepts. And there is nothing to stop one concept having the same extension as another, even if the first is in some good sense relatively 'imprecise' and the second is 'precise' -- any more than there is anything to stop an 'imprecise' designator like “those guys over there” in the circumstances picking out exactly the same people as “Kurt, Stephen, and Alonzo”.)

As to the question whether the hard direction can actually be proved, Mendelson picks out Robert Black's “Proving Church’s Thesis”, Philosophia Mathematica 2000, as the best recent discussion. I warmly agree, and I take up Robert's story in the last chapter of my book.

Wednesday, June 06, 2007

Damned squiggles

I discovered earlier today that some symbols weren't showing correctly in a Windows browser. Sorry about that. I think I've sorted it and revised a couple of posts accordingly. But I won't be able to check on a Windows machine again for a couple of days.

Church’s Thesis 9: Epistemic arithmetic, algorithms, and an argument of Kleene’s

Ok, yes, yes, I should be marking tripos papers. But I need to take a break before plunging enthusiastically back to reading yet more about gruesome ravens ... So, let’s return to Horsten’s paper on ‘Formalizing Church’s Thesis’.

After talking about ‘formalizing’ CT in an intuitionistic framework, Horsten goes on to discuss briefly an analogous shot at formalizing CT in the context of so-called epistemic arithmetic. Now, I didn’t find this very satisfactory as a stand-alone section: but if in fact you first read another paper by Horsten, his 1998 Synthese paper ‘In defense of epistemic arithmetic’, then things do fall into place just a bit better. EA, due originally to Shapiro, is what you get by adding to first-order PA an S4-ish modal operator L: and the thought is that this gives a framework in which the classicist can explicitly model something of what the intuitionist is after. So Horsten very briefly explores the following possible analogue of ICT in the framework of EA:

L∀x∃yLAxy → ∃e∀x∃m∃n[T(e, x, m) ∧ U(m, n) ∧ A(x, n)]
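
(Read this, on my own rough gloss: if it is provable that for every x there is a y which provably stands in the relation A to x, then some single recursive function computes such a witness for each x.)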

But frankly, that supposed analogue seems to me to have very little going for it (for a start, there look to be real problems understanding the modality when it governs an open formula yet is read as being something to do with knowability/informal provability). Horsten’s own discussion seems thoroughly inconclusive too. So I fear that this looks to be an exercise in pretend precision where nothing very useful is going on.

Horsten’s next section on ‘Intensional aspects of Church’s Thesis’ is unsatisfactory in another way. He talks muddily about ‘an ambiguity at the heart of the notion of algorithm’. But there is no such ambiguity. There is the notion of an algorithmic procedure (intensional if you like); and there is the notion of a function in fact being computable by an algorithmic procedure (an extensional notion, in that if f has that property, it has it however f is presented). The distinction is clear and unambiguous.

Let’s move on, then, to the next paper in the Olszewski collection, ‘Remarks on Church’s Thesis and Gödel’s Theorem’ by Stanisław Krajewski. This is too short to be very helpful, so I won’t comment on it -- except to say that he mentions the usually overlooked argument from Kleene’s Normal Form Theorem and (a dispensable use of) Church’s Thesis to Gödel's Theorem that I trailed here (roughly sketched below). The argument, as Krajewski points out, is due to Kleene himself, a fact which I now realize I didn’t footnote in my book. Oops! I think the book is being printed round about now, so it’s too late to do anything about it. I had better start a corrections page on the web site ...
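
(The shape of that argument, from memory and so only roughly: KNFT yields a universal machine, and diagonalization then yields an effectively enumerable set which is not recursive; if a consistent, effectively axiomatized arithmetic decided every membership claim about that set, proof-search would give an effective decision procedure for it, which by CT would make the set recursive after all. So any such theory must be negation-incomplete.)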

Gruesome ravens

Marking a stack of tripos essays about ravens and white shoes, grue and green, theory and observation, Kuhn and rationality, laws and explanation, and the like, makes me realize (not for the first time) how out of sympathy I have become with philosophy done at that level of arm-waving generality. Kicking up the dust and complaining that we can’t see might amuse, but in the end, what’s the point?

My one serious foray into the philosophy of science was my book Explaining Chaos. And though of course I’d do things differently if rewriting the book now, I still think it exemplifies the right approach if we want to do something interesting: start from conceptual/foundational issues actually generated from within a particular field of science, and try to sort out what’s going on locally. That’s hard enough to do well, and at least we might learn something useful about the science in the process.

Sunday, June 03, 2007

Church’s Thesis 8: Formalizing the thesis

On, then, to the tenth paper in Church’s Thesis After 70 Years, Leon Horsten’s ‘Formalizing Church’s Thesis’.

Horsten explores two frameworks in which we can write down what look to be, in some sense, partial ‘formalizations’ of CT. Take first an intuitionistic framework (I’ll comment on the framework of so-called epistemic arithmetic in another post). And consider a claim of the form ∀x∃yAxy. Intuitionistically, Horsten suggests, this “means that there is a method for finding for all x at least one y which can be shown to stand in the relation A to x. And this seems close to asserting that an algorithm exists for computing A.” But then, Church’s Thesis tells us that there should be a recursive function that, given x, will do the job of finding a y such that Axy. So by an appeal to Kleene’s Normal Form Theorem, it might seem that in the intuitionistic framework Church’s Thesis implies this, call it ICT:

∀x∃yAxy → ∃e∀x∃m∃n[T(e, x, m) ∧ U(m, n) ∧ A(x, n)]

where T and U express the familiar p.r. Kleene functions.
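
(For orientation: in its more familiar functional dress, Kleene’s Normal Form Theorem says that for every partial recursive f there is an index e such that

f(x) ≃ U(μm T(e, x, m))

-- search for the least m coding a halting computation of machine e on input x, then extract the output. Horsten’s relational T and U repackage just this, with the minimization traded in for existential quantifiers.)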

Now, Horsten has seemingly sane things to say about why, however, ICT doesn’t really capture CT -- and so the common intuitionistic rejection of ICT as too strong isn’t really a strike against CT. But that isn’t a novel thought. What was news to me is a theorem that Horsten credits to Anne Troelstra. Take Heyting Arithmetic plus ICT; then if this proves δ(φ), where δ(φ) is the double-negation translation of φ, then Peano Arithmetic proves φ. Which is kind of cute. But what is the significance of that? Horsten’s comment is: “This conservativity phenomenon can be seen as a weak form of evidence for the thesis that CT is conservative over classical mathematics.” Why so?

OK, we often argue to some formal conclusion informally, using the assumption that a certain function is computable, and appeal to CT to avoid (i) bothering to prove the function in question is recursive and then (ii) turning our sketched argument into a stricter proof of the same formal conclusion. The assumption is that we can always do away with such a labour-saving appeal to CT: call that, if you like, the assumption that CT is conservative over classical mathematics. But it is equally, of course, the assumption that CT is true. For if CT weren’t conservative, there would be a disconnect between claims about computability and claims about recursiveness, and CT would be false. And conversely, if CT were false (in the only way it can be false, by there being a computable but not recursive numerical function) then it wouldn’t be conservative: e.g., trivially, if f were such a function, then an appeal to CT followed by KNFT would deliver an index e for which f(x) = y would be falsely equivalent to ∃m[T(e, x, m) ∧ U(m, y)]. So Horsten’s claim comes to this: Troelstra’s theorem is evidence that CT is true.

But is it? After all, suppose the theorem had failed: would that have dented our confidence in CT? Surely not: we’d just have concluded that ICT -- which even intuitionists reject as too strong -- really is far too strong (compare other cases where wildly optimistic constructivist assumptions can prove classically unacceptable claims, e.g. about the continuity of functions). But if failure of the theorem wouldn’t have dented the classical mathematician’s confidence in CT, in what sense does its success give any kind of evidential boost for CT?

Friday, June 01, 2007

Good heavens, a decent meal in Cambridge

This won’t be of much interest to my enormous world-wide readership (well, putting a tracker on this blog certainly is a good corrective for any great delusions on that score!), but after the last logic seminar of the academic year a few of us went out to the Rice Boat, a relatively new Indian restaurant very near the faculty, specializing in Kerala cuisine. We all agreed it was simply terrific. Highly recommended.