The first day of the first Cambridge Graduate Conference on the Philosophy of Logic and Mathematics. It seems to be going quite well. I was responding briefly to the first paper, by Elia Zardini, on a kind of degree theory of vagueness. Since he hasn't published the paper, I won't discuss it in detail here. But here are some rather general worries about certain kinds of logical theory of the family Elia seems to like.

Suppose then -- just suppose -- you like the general idea of a degree-theory of vagueness, according to which you assign propositions values belonging to some many-membered class of values. And it will do no harm for present purposes to simplify and suppose that the class of values is linearly ordered. The minimal proposal is that propositions attributing baldness, say, to borderline cases get appropriately arranged values between the maximum and minimum. There are lots of immediate problems about how on earth we are to interpret these values, but let that pass just for a moment. Let's instead note that there are of course going to be various ways of mapping values of propositions to unqualifiedly correct utterances of those propositions. And there are going to be various ways of mapping formal relations defined over values onto unqualifiedly correct inferences among evaluated propositions.

One way to go on the first issue would be to be strict. Unqualifiedly correct propositions are to get the maximum value: any other value corresponds to a less than unqualified assertoric success. Alternatively, we could be more relaxed. We could so assign values that any proposition getting a value above some threshold is quite properly assertible outright, but perhaps we give different such propositions different values, on the basis of some principle or other. (For a possible model, suppose, just suppose, you think of values as projections of rational credences; well, we can and do quite properly assert outright propositions for which our credence is less than the absolute maximum, and we can give principles for fine-tuning assignments of credences.)

Suppose then we take the relaxed view: we play the values game so that there's a threshold such that propositions which get a value above the threshold -- get a designated value, in the jargon -- are taken as good enough for assertion. Now, this generous view about values for assertion can be combined with various views about what makes for unqualifiedly correct inferences. The familiar line, perhaps, is to take the view that correct inferences must take us from premisses with designated values to a conclusion with a designated value. But there are certainly other ways to play the game. We could, for example (to take us to something in the vicinity of Elia's recommendation), again be more relaxed and say that acceptable inferences -- inferences good enough to be endorsed without hesitation as unqualifiedly good -- are those that take us from premisses with good, designated, values to a conclusion which is, at worst, a pretty near miss. We could tolerate a small drop in value from premisses to conclusion.

Well, ok, suppose, just suppose, we play the game in that relaxed mode. Then we should be able to sprinkle values over a long sorites chain so that the initial premiss is designated (is unqualifiedly assertible): the first man is bald. Each conditional in the sorites series is designated, so they are all assertible too: if the n-th man is bald, so is the n+1-th. Each little inference in the sorites is good enough (the value can't drop too much from premisses to conclusion). But still the value of 'man n is bald' can eventually drop below the threshold for being designated.
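The relaxed story can be put into toy numbers. Here is a minimal sketch; every specific in it -- the thousand-man series, the Lukasiewicz rule for valuing conditionals, the 0.95 threshold, the 0.02 tolerance, and the use of integer thousandths in place of the unit interval -- is my own illustrative choice, not anything in the paper. The point is just that all the premisses come out designated, every step is tolerated, and yet the conclusion eventually falls below the threshold.

```python
# Toy model of the relaxed degree theory. All specifics are illustrative
# assumptions, not from the paper. To dodge floating-point noise, values are
# integers in 0..1000, standing in for the unit interval in thousandths.
# A proposition is designated (assertible outright) iff its value is >= 950;
# an inference is tolerated iff the conclusion drops at most 20 below the
# worst premiss; conditionals get Lukasiewicz values:
#   v(A -> B) = min(TOP, TOP - v(A) + v(B)).

TOP = 1000        # the maximum value ('wholly true')
N = 1000          # length of the sorites series
THRESHOLD = 950   # designated iff value >= THRESHOLD
TOLERANCE = 20    # maximum tolerated drop from premisses to conclusion

def v_bald(n):
    """Value of 'man n is bald': drops by one thousandth per man."""
    return max(0, TOP - n)

def v_conditional(n):
    """Lukasiewicz value of 'if man n is bald, so is man n+1'."""
    return min(TOP, TOP - v_bald(n) + v_bald(n + 1))

# The initial premiss is designated: the first man is bald.
assert v_bald(0) >= THRESHOLD

# Every conditional premiss is designated (each has value 999).
assert all(v_conditional(n) >= THRESHOLD for n in range(N))

# Every little modus ponens is tolerated: the conclusion sits at most
# TOLERANCE below the worst of its premisses.
assert all(min(v_bald(n), v_conditional(n)) - v_bald(n + 1) <= TOLERANCE
           for n in range(N))

# Yet far enough along, 'man n is bald' drops below the threshold.
first_undesignated = next(n for n in range(N) if v_bald(n) < THRESHOLD)
print(first_undesignated)  # -> 51
```

On these (entirely arbitrary) numbers the first undesignated baldness claim is man 51's, though nothing in the diagnosis hangs on where exactly that happens -- which is, of course, just where the worry about arbitrariness bites.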

Terrific. Or it would be terrific if only we had some principled reason to suppose that the sprinkling of values made any kind of semantic sense. But do we? As we all know, degree theories are beset with really nasty problems. There are the problems we shelved about how to interpret the values in the first place. And when we get down to cases, there are all sorts of issues of detail. Just for a start, how many values are there? Too few (like three) and -- for example -- the tolerance story is difficult to take seriously, and in any case the many-valued theory tends to collapse into a different mode of presentation of a value-gap theory. Too many values and we seem to be faced with daft questions: what could make it the case that 'Smith is bald' gets the value 0.147 in the unit interval of values, as against the value 0.148? (Well, maybe talk about 'what makes it the case' is just too realist; maybe the degree theorist's plan is to tell some story about projected credences: but note the seriously anti-realist tendencies of such a theory.) And to return to issues of arbitrariness, even if we settle on some scale of values, what fixes the range of designated values? Why set the threshold at 0.950 rather than 0.951 in the unit interval, say? And what fixes the degree of tolerance that we allow in acceptable inference in taking us from designated premisses to near-miss conclusions?

Well, there is a prima facie response at least to the issues about arbitrariness, and it is the one that Elia likes. Don't fix on any one generous degree theory or any one version of the relaxedly tolerant story about inferences. Rather, generalize over such theories and go for a logic of inferences that is correct however we set the numerical details. Then, the story goes, we can diagnose what is happening in the sorites without committing ourselves to any particular assignments of values.

It would be misleading to call this a species of supervaluationism -- but there's a family likeness. For the supervaluationist, any one choice of acceptable boundary sharpening is arbitrary; so what we should accept as correct is what comes out robustly, irrespective of sharpenings. Similarly for what we might call the relaxed superdegree theorist: it can be conceded that any one assignment of values to propositions and degree of tolerance in inferences is arbitrary; the claims we are to take seriously about the logic of vagueness are those that come out robustly, irrespective of the detailed assignments.

Well, as they say, it is a view. Here is just one question about it. And it presses on an apparently significant difference of principle between supervaluationism and the relaxed superdegree theory. On the familiar supervaluationist story, faced with a vague predicate F, we imagine various ways of drawing an exact line between the Fs and the non-Fs. Now, that will be arbitrary up to a point, so long as we respect the uncontentious clear cases of Fs and non-Fs. Still, once we've drawn the boundary, and got ourselves a refined sharp predicate F*, we can understand perfectly well what we've done, and understand what it means for something to be F* or not F*. The supervaluation base, the various sharpenings F* of F, can at least in principle be perfectly well understood. On the other hand, the relaxed superdegree theory is generalizing over a spectrum of many-valued assignments of degrees of truth (or whatever) to propositions. It's not clear what the constraints on allowed assignments would be. But there's a more basic problem. Take any one assignment. Take the 1742-value theory with the top 37 values designated and inferential tolerance set at a drop of 2 degrees. Well, I've said the words, but do you really understand what that theory could possibly come to? What could constitute there being 1742 different truth-values? I haven't the foggiest, and nor have you. We just wouldn't understand the supposed semantic content of such a theory. So, given that we don't begin to understand almost any particular degree theory, what (I wonder) can be so great about generalizing all over them? To put the point bluntly: can abstracting and generalizing away from the details of a lot of specific theories that we don't understand give us a supertheory we do understand and which is semantically satisfactory?

## Saturday, January 19, 2008

### Superdegree theories of vagueness


## 1 comment:

Hi Peter,

thanks so much for your penetrating thoughts about my paper/talk, and apologies for the belatedness of this comment (I’ve been snowed under with work since I came back from Cambridge). I’d like to make two points on your comments, a minor one and a more substantial one, which I hope will clarify a bit more my understanding of the technical apparatus of the paper (which I had space to expand on neither in the paper nor in the talk).

The minor point—merely on behalf of someone who’s attracted by the kind of supervaluationist theory you sketch (which I’m not)—is that, in this dialectic, you seem to be willing to grant that sense can be made of the truth-value structure of at least the most well-behaved and “natural” many-valued logics, like e.g. Lukasiewicz logic when interpreted on the real interval [0,1]. Well, then the natural reply to your complaint that you don’t understand sharpenings like the one with 37 designated values is simply to forget about those that you don’t understand and just supervaluate on those that you do understand (which, to repeat, it seems you’re willing to grant exist).

The more substantial point is that your presentation of my view as a supervaluation on many-valued models, helpful as it might be as an introduction to some of the technical details of the view, is in certain important respects misleading re the underlying philosophical picture. I fully agree that a structure with sharp boundaries as to its main properties (the set of designated values, the set of tolerated values etc.) has more than one element of arbitrariness given the real semantic features of a vague natural language. But I don’t think that we can solve this problem by supervaluating. This is so because, according to my naïve, tolerance-accepting view, any such structure not only has an element of arbitrariness (i.e. decides things left undecided in natural language), but also an element of distortion (i.e. decides things in a way opposite to that in which they have been decided in natural language).

Let me explain. The semantics is crafted to track what I claim are intuitive judgements concerning valid and invalid patterns of reasoning with a vague language (prominent amongst them is of course the judgement that sorites arguments are slippery-slope fallacies of a certain peculiar kind). The structure on the values is justified only insofar as it enables us to represent in a mathematically precise way these intuitive validity/invalidity judgements and to explore their consequences. For example, such judgements arguably presuppose a distinction between pieces of information so good as to be apt to be used reliably as fresh premises in further reasoning (“very good” pieces of information) and pieces of information that are only good enough to be, as it were, a terminal point of acceptance, but no longer reliable if used as fresh premises in further reasoning (“merely good enough” pieces of information). The distinction is well-known from general probabilistic reasoning, and it should be uncontroversial (what should be controversial is rather my application of it to model deductive consequence). So the distinction should be represented in the mathematical model (and is in fact represented by the distinction between “designated” and “merely tolerated” values). But many other features of the model will be arbitrary, and some of them indeed distorting, like the fact that—given the classicality of the metalanguage—there is a sharp boundary between designated and merely tolerated values (the “representans”). This fact is distorting because the distinction between very good and merely good enough values (the “representanda”) is arguably vague, and so, at least on my naïve view, such that it lacks a sharp boundary.
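The designated/merely-tolerated distinction can be put into a toy calculation. All the numbers below are my own illustrative assumptions, not Zardini's: the point is only that a merely tolerated conclusion is fine as a terminal point of acceptance, but reusing it as a fresh premiss for a further tolerant step loses the guarantee of acceptability.

```python
# Illustrative sketch of the designated/merely-tolerated distinction; every
# number here is an assumption of mine, not Zardini's. Values are integers
# in 0..100: 'designated' ("very good") means >= 95, 'tolerated' ("merely
# good enough") means >= 93, and a tolerant inference may drop a value by
# at most 2.
DESIGNATED = 95
TOLERATED = 93
MAX_DROP = 2

premiss = 95                      # designated: safe as a fresh premiss
conclusion = premiss - MAX_DROP   # merely tolerated: a terminal point
assert TOLERATED <= conclusion < DESIGNATED

# Reusing the merely tolerated conclusion as a fresh premiss can push the
# next conclusion outside even the tolerated band.
reused = conclusion - MAX_DROP    # no longer acceptable at all
assert reused < TOLERATED
print(conclusion, reused)  # -> 93 91
```

This is just the familiar probabilistic point about error accumulating along chained inferences, rendered in the designated/tolerated vocabulary.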

What I’m appealing to here is the simple fact that not every property of a serviceable mathematical model of a phenomenon is justified by the phenomenon itself and rather accrues to the model in virtue of aspects of the background mathematical theory being used—as people say, it’s an artefact of it (see the work of Stewart Shapiro, Roy Cook and Dorothy Edgington on the representor/artefact distinction as applied to logics). For many phenomena there’s no absolute match between them and the best maths we have in order to represent, study and understand them, and the hope for such an absolute match is particularly forlorn (or so I would have thought!) in the case of vague languages.

Given this minimal and slightly instrumentalist understanding of the many-valuedness of the models, I would strongly oppose any interpretation of it in terms of degrees of truth. I’m no degree theorist. In my view, (a large part of) the truth about truth is that ‘P’ is true iff P, and such a concept is clearly too simple and coarse-grained to be substantially used in a faithful representation of the fine details of non-classical reasoning, which often require drawing distinctions between sentences that may be both true (or both false).

To sum up, my reply to the arbitrariness worry is to concede that some properties of the models are arbitrary (and some indeed distorting), but to explain that this is just to be expected from what is used simply as a useful mathematical model of the phenomenon. I would not accept the supervaluationist strategy that you propose, because I think it’s obvious that distortion cannot be eliminated by combining many distortions (contrast with arbitrariness, which might be eliminated by combining many of them). I’m so little inclined to attribute philosophical significance to the move of supervaluating that one of my favourite tolerant logics is gotten by restricting the admissible models to those isomorphic with a certain structure that has 4 designated values (it’s the simplest model to allow, among other nice things, the consistency of the law of non-contradiction together with the existence of positive and negative cases and the non-existence of a sharp boundary between them—terrific). Do I understand such models? Given the foregoing about what their function is supposed to be and what it isn’t supposed to be (e.g. they’re not supposed to represent different degrees in which a sentence can be true), I don’t see what I fail to understand. I understand them mathematically, and I understand what they are supposed to represent and how they do this, and moreover it turns out that they represent it very well, matching intuitive validity/invalidity judgements and giving guidance on more complicated cases where intuitions are harder to come by.

Of course, my worry with your friendly supervaluationist proposal is determined by my naïve assumption that there’s no sharp boundary between positive and negative cases (which leads me to impute distortion over and above arbitrariness). Someone who doesn’t share this assumption and is attracted by degree theories of vagueness (which I’m not) may well find good use for some version of the “superdegree” machinery against at least some of the objections usually raised against more simple-minded degree theories.
