Quine on logical truth, again

In a previous post, I asked about the relationship between Quine's definition of logical truth and the now-standard (model-theoretic) one. Here's the second installment, which I decided to finally post after sitting on it for a while, since Kenny just posted a nice set of thoughts on the very closely related topic of logical consequence.

The standard definition is:
(SLT) Sentence S is a logical truth of language L = S is true in all models of L.

Quine's definition is:
(QLT) S is a logical truth = "we get only truths when we substitute sentences for [the] simple [=atomic] sentences" of S (Philosophy of Logic, 50).
(Quine counts open formulas, e.g. 'x burns', as sentences; specifically, he calls them 'open sentences.' I'll follow his usage here.)
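Quine's substitutional test is not mechanically checkable for full first-order schemas, but in the propositional fragment it reduces to something familiar: assuming the object language contains at least one true and one false sentence, a schema comes out true under every substitution of sentences for its sentence letters exactly when it is a tautology. Here is a minimal sketch of that miniature version (the function name and the encoding of schemas as Python callables are my own, not anything in Quine):

```python
from itertools import product

def is_substitutional_logical_truth(schema, letters):
    """Propositional stand-in for (QLT): the schema is a logical truth
    iff every substitution of truths/falsehoods for its simple sentence
    letters makes it come out true -- i.e. iff it is a tautology.
    `schema` maps a dict of letter -> bool to a bool."""
    return all(schema(dict(zip(letters, values)))
               for values in product([True, False], repeat=len(letters)))

# 'p or not p' comes out true under every substitution.
excluded_middle = lambda v: v['p'] or not v['p']
print(is_substitutional_logical_truth(excluded_middle, ['p']))    # True

# 'p or q' fails under the substitution making both letters false.
disjunction = lambda v: v['p'] or v['q']
print(is_substitutional_logical_truth(disjunction, ['p', 'q']))   # False
```

Note what is absent: nothing model-theoretic appears. We quantify over substitutions (with truth values standing in for the substituted sentences), not over a universe of set-theoretic models, which is exactly the ontological economy Quine is after.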

So what does Quine think is the relationship between the standard characterization of logical truth and his own? He argues for the following equivalence claim:
(EQ) If “our object language is… rich enough for elementary number theory,” then “[a]ny schema that comes out true under all substitutions of sentences, in such a language, will also be satisfied by all models, and conversely” (53).

In a nutshell, Quine argues for the 'only if' direction via the Löwenheim-Skolem theorem (hence the requirement of elementary number theory within the object language), and for the ‘if’ direction by appeal to the completeness of first-order logic. I'll spell out his reasoning in a bit more detail below, but I can sum up my worry about it here: in order to overcome Tarski’s objection to Quine’s substitutional version of logical truth, Quine appeals to the Löwenheim-Skolem theorem. However, for that appeal to work, Quine has to require the object language to be rich enough to fall afoul of the incompleteness results, thereby depriving him of one direction of the purported equivalence between the model-based and sentence-substitution-based notions of logical truth.

The 'only if' direction
Quine presents an extension of the Löwenheim-Skolem Theorem due to Hilbert and Bernays:
“If a [GFA: first-order] schema is satisfied by a model at all, it becomes true under some substitution of sentences of elementary number theory for its simple schemata” (54).

A little logic chopping (apply the Hilbert-Bernays result to the negation of A, then contrapose) will get us to
If all substitutions of sentences from elementary number theory make A true, then A is satisfied in all models.
-- which is what we wanted to show. (I think this argument is OK.)
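For the record, the chopping can be written out schematically. Writing Sat(A) for 'A is satisfied by some model' and Tr_σ(A) for 'A comes out true under the substitution σ of number-theoretic sentences' (my notation, not Quine's), the Hilbert-Bernays result applied to ¬A gives:

```latex
\mathrm{Sat}(\neg A) \;\rightarrow\; \exists \sigma\, \mathrm{Tr}_{\sigma}(\neg A)
```

Since a substitution makes ¬A true just in case it makes A false, contraposition yields:

```latex
\forall \sigma\, \mathrm{Tr}_{\sigma}(A)
\;\rightarrow\; \neg\,\mathrm{Sat}(\neg A)
\;\rightarrow\; A \text{ is satisfied in all models.}
```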

The 'if' direction
Quine takes as his starting premise the completeness result for first-order logic:
(CT) “If a schema is satisfied by every model, it can be proved” (54).

(Quine then argues that if a schema can be proved within a given proof calculus whose inference rules are “visibly sound,” i.e., “visibly such as to generate only schemata that come out true under all substitutions” (54), then such a schema will of course ‘come out true under all substitutions of sentences,’ Q.E.D.) This theorem is true for first-order logic; however, Quine has imposed the demand that our object language contain the resources of elementary number theory (explicitly including both plus and times, so that Presburger arithmetic, which is complete, is not in play). And once our object language is that rich, Gödel’s first incompleteness theorem comes into play. Specifically, in any consistent proof calculus rich enough for elementary number theory, there will be sentences [and their associated schemata] that are true, i.e., satisfied by every model, yet cannot be proved -- providing a counterexample to (CT). So the dialectic, as I see it, is as follows: to answer Tarski's objection to the substitutional version of logical truth, Quine requires the language to be rich enough to capture number theory. But once Quine has made that move, the crucial premise (viz., CT) for the other direction of his equivalence claim no longer holds.

I think I must be missing something -- first, Quine is orders of magnitude smarter than I am, and second, while Quine is fallible, this does not seem like the kind of mistake he's likely to make. So perhaps someone in the blogosphere can set me straight.

And I have one more complaint. Quine claims that his "definition of logical truth agrees with the alternative definition in terms of models, as long as the object language is not too weak for the modest idioms of elementary number theory. In the contrary case we can as well blame any discrepancies on the weakness of the language as on the definition of logical truth" (55). This defense strikes me as implausible: why would we only have (or demand) a well-defined notion of logical truth once we reach number theory? Don't we want a definition to cover all cases, including the simple ones? If the model-theoretic definition captures all the intuitive cases, and Quine's only when the language is sufficiently rich, isn't that a good argument against Quine's characterization of logical truth?


(And for those wondering what Quine thinks is the advantage of his characterization of logical truth over the model-theoretic one, the answer is: Quine's uses much less set theory. "The evident philosophical advantage of resting with this substitutional definition, and not broaching model theory, is that we save on ontology. Sentences suffice,… instead of a universe of sets specifiable and unspecifiable. … [W]e have progressed a step whenever we find a way of cutting the ontological costs of some particular development" (55). Quine recognizes that his characterization is not completely free of set theory, given the proof of the LS theorem, so he says his "retreat" from the model-based notion of logical truth "renders the notions of validity and logical truth independent of all but a modest bit of set theory; independent of the higher flights" (56).)


Knowledge via public ignorance

Yesterday Rohit Parikh gave a very interesting talk at Carnegie Mellon on a kind of modal epistemic logic he has been working on recently with several collaborators, cleverly called topologic, because it carries interesting topological properties. The first thing Parikh said was "I like formalisms, but I like examples more." In that spirit, I wanted to describe here one simple example he showed us yesterday, without digging into the technicalia, because it generates a potentially philosophically interesting situation: someone can (under suitable circumstances) gain knowledge merely via other people's declarations of ignorance.

Imagine two people play the following game: a natural number n>0 (1, 2, ...) is selected. Then one of the players has n written on their forehead, and the other has n+1 written on theirs. Each player can see what is written on the other's forehead, but cannot see what is written on their own. The game allows only two "moves": you can either say "I don't know what number is on my forehead" or state what you think the number on your forehead is.

So, for example, if I play the game, and I see that the other person has a 2 written on her forehead, I know that the number on my own forehead is either a 1 or a 3, but I do not know which. But here is the interesting part: if my fellow game-player wears a 2, and on her first move says "I don't know what my number is," then I know what my number is -- at least, if my fellow game-player is reasonably intelligent. Why? If I were wearing a 1, then my interlocutor would say, on her first move, "I know my own number is a 2" -- because (1, 2) is the first allowable pair in the game. Thus, if she says "I don't know what my own number is" on her first move, then I know my number can't be 1, so it must be 3. This same process of reasoning can be extended: by playing enough rounds of "I don't know" moves, we can eventually reach any pair of natural numbers, no matter how high. We just have to keep track of how many rounds have been played. (This may remind the mathematically-inclined in the audience of the Mr. Sum-Mr. Product dialogue.)
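The round-by-round elimination just described can be simulated directly. The sketch below is my own reconstruction, not Parikh's formalism (the player indices, turn order, and world-elimination bookkeeping are all assumptions): it models the situation as a set of possible worlds, pairs (n, n+1) in either order, and treats each "I don't know" as a public announcement that deletes every world in which the speaker would already have known their number.

```python
def play(a, b, max_rounds=50):
    """Simulate the forehead game for an adjacent pair (a, b).
    Player 0 wears a and sees b; player 1 wears b and sees a.
    Players alternate, and each announcement of ignorance publicly
    eliminates the worlds it rules out."""
    assert abs(a - b) == 1 and min(a, b) >= 1
    # Possible worlds: all adjacent pairs, capped well above anything the
    # simulation can reach so the artificial ceiling never interferes.
    top = max(a, b) + 2 * max_rounds
    worlds = ({(x, x + 1) for x in range(1, top)}
              | {(x + 1, x) for x in range(1, top)})

    def knows(world, p):
        # In `world`, player p knows their own number iff, among the worlds
        # still considered possible, only one value of their number is
        # compatible with what they see on the other forehead.
        seen = world[1 - p]
        return len({w[p] for w in worlds if w[1 - p] == seen}) == 1

    log = []
    for _ in range(max_rounds):
        for p in (0, 1):
            if knows((a, b), p):
                seen = (a, b)[1 - p]
                own = next(iter({w[p] for w in worlds if w[1 - p] == seen}))
                log.append((p, f"I know: my number is {own}"))
                return log
            log.append((p, "I don't know my number"))
            # Public announcement of ignorance: discard every world in
            # which this player would already have known.
            worlds = {w for w in worlds if not knows(w, p)}
    return log

# The example from the post: player 0 wears 2, player 1 wears 3.
for who, what in play(2, 3):
    print(f"Player {who}: {what}")
# Player 0: I don't know my number
# Player 1: I know: my number is 3
```

The count of "I don't know" announcements is doing the work the post describes: with the pair (3, 2), for instance, both players pass once before player 0 can conclude that their own number is 3.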

What is interesting to me about this is that the two players in such a game (and the other examples Prof. Parikh described) can eventually come to have knowledge about the world simply via declarations of ignorance. These cases prompt two questions for me:
(1) Is this type of justification for a belief different in kind from the others philosophers busy themselves with? Or is this just a completely normal/standard/etc. way of gathering knowledge, which differs only superficially from other cases? (I'm not qualified to answer this, since I'm not an epistemologist.)
(2) Are there any interesting real-world examples where we achieve knowledge via collective ignorance in (roughly) this way? (Prof. Parikh suggested that there might be a vague analogy between what happens in these sorts of games and game-theoretic treatments of evolution, but didn't have much further to say.)