Modal logic workshop at CMU

I spent yesterday at a workshop devoted to modal logic at Carnegie Mellon University. Rather than rehearse everything that happened, I'll simply point interested parties to the workshop webpage, which has abstracts for the talks. Mostly local folks presented their work, but Johan van Benthem from Amsterdam and Stanford was here, along with Rohit Parikh, who'll be presenting on Monday as well.

I certainly learned a lot, and even though my brain was hurting afterwards, I enjoyed myself too.


Mancosu and mathematical explanations

Last Friday, Paolo Mancosu was in Pittsburgh to give a talk on explanation in mathematics. His visit gives me the opportunity to correct an oversight in my last post -- Paolo helped me improve my dissertation substantially: he read an early partial draft very carefully, and brought his learned insight to bear on it. He is the only person in the universe who has written on the specific topic of my dissertation, and his comments were extremely helpful.

The basic claim of Paolo's talk was that Philip Kitcher's account of mathematical explanation falls afoul of certain apparently widely-shared intuitions about which proofs are explanatory and which are not. But I was intrigued by something else mentioned in the talk, which came to the fore more in the Q&A and dinner afterwards. When working on explanation in the natural sciences, there seems to be much more widespread agreement about what counts as a good explanation than in the mathematical/ formal sciences. That is, whereas most practitioners of a natural science can mostly agree on which purported explanations are good and which not, two mathematicians are much less likely to agree on whether a given proof counts as explanatory.

So I am wondering what accounts for this difference between the natural and formal sciences. Might this be due (in part) to mathematics lacking the 'onion structure' of the empirical sciences? For example, the claims of fundamental physics are not explained via results in chemistry, and observation reports (or whatever you want to call claims at the phenomenological level [in the physicist's sense]) are not used to explain any theoretical claim, and so on. My intuitions about mathematics are not as well-tutored, but I have the sense that the different branches of mathematics do not have such a clear direction of explanation. (Of course, there is no globally-defined univocal direction of explanation in the natural sciences [the cognoscenti can think of van Fraassen's flagpole-and-the-shadow story here], but there is nonetheless an appreciable difference between math and empirical sciences on this score.) At least in some cases, this clearer direction of explanation probably results from empirical sciences' explaining wholes in terms of their parts -- whereas mathematics lacks that clear part-whole structure. Often two bits of mathematics can each be embedded in one another, but we tend not to find this in the empirical sciences. (The concepts of thermodynamics [temperature, entropy] can be defined using the concepts of statistical mechanics [kinetic energy], while the converse is clearly out of the question.)

Pointing to the onion structure/ clearer direction of explanation in science might be just a re-statement of the original question; I'm not sure. Or maybe it's not relevant. In any case, I have to bury myself beneath a mountain's worth of student essays on the scientific revolution...


That's Doctor Obscure and Confused, to you

Yesterday I defended my dissertation. I wanted to thank, in this very small but nonetheless public space, all the folks here who helped me along with the dissertation: John Earman and Nuel Belnap, who served on my committee; Tom Ricketts, who unfortunately joined the Pittsburgh department a little too late to be on my official committee, but still read dissertation chapters with a fine-toothed comb, and talked to me about them -- and job-market issues -- for hours; Steve Awodey, who not only discussed Carnap with me innumerable times, but also helped me in the process of professionalization and acculturation into the circle of philosophers who study Carnap seriously; and finally my advisor, Laura Ruetsche, who not only gave me searching, insightful, and helpful comments at myriad points in the development of the dissertation project, but also helped me through the occasionally-panic-inducing job search process.

So, thanks to all -- I feel very lucky to have had the final years of my grad school career shaped by you. I hope other PhD students are as fortunate as I have been.


Semantics and necessary truth

I have recently been reading, with great profit, Jason Stanley's draft manuscript Philosophy of Language in the Twentieth Century, forthcoming in the Routledge Guide to Twentieth Century Philosophy. The paper's synoptic scope matches the ambitious title.

I'm curious about one relatively small claim Jason makes. On MS p.17, he says:
intuitively an instance of Tarski's schema T [GF-A: '...' is a true sentence if and only if ...] such as (7) is not a necessary truth at all:
(7) "Bill Clinton is smart" is a true sentence if and only if Bill Clinton is smart.
(7) is not a necessary truth, because "Bill Clinton is smart" could have meant something other than it does. For example, "is smart" could have expressed the property of being from Mars, in which case (7) would be false.
This certainly has (as Stanley says) an "intuitive" ring. But now I'm not sure it's correct.

Here's my worry: as a preliminary, recall (as Tarski taught us) that semantic vocabulary should always be indexed to a particular language -- e.g., we must say 'is a true sentence of English' or 'x refers to y in Farsi' etc. in the full statement of sentences like (7). But then I am not so sure that such sentences are not true in all possible worlds. Is it really the case that, in English, "is smart" could have expressed the property of being from Mars? We specify a particular language (in part) by specifying the semantic values of the words of that language (at least, if we are not proceeding purely formally/ proof-theoretically). Wouldn't we be speaking another language at that point, that was similar to English, but not the same?
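The indexing point can be made concrete with a toy model (entirely my own construction -- the 'world' and the two mini-languages below are illustrative stand-ins, not a serious semantics). If a language is individuated partly by the semantic values it assigns to its words, then the truth predicate is relativized to a language, and reinterpreting "is smart" gives us a different language rather than a different truth value for a sentence of English:

```python
# A toy world: which properties each individual actually has.
world = {"Bill Clinton": {"smart"}}  # Clinton is smart; he is not from Mars

# A 'language' here is just an assignment of properties to predicate words.
english = {"is smart": "smart"}      # English-like interpretation
variant = {"is smart": "from Mars"}  # the imagined variant of English

def true_in(language, subject, predicate_word, world):
    """'<subject> <predicate_word>' is a true sentence OF language L iff the
    subject has the property that L assigns to the predicate word."""
    return language[predicate_word] in world.get(subject, set())

# The T-sentence, indexed to English:
# '"Bill Clinton is smart" is true-in-English iff Bill Clinton is smart'
lhs = true_in(english, "Bill Clinton", "is smart", world)
rhs = "smart" in world["Bill Clinton"]
print(lhs == rhs)  # True: the indexed T-sentence holds

# The very same string evaluated in the variant language:
print(true_in(variant, "Bill Clinton", "is smart", world))  # False
```

On this picture, the variant interpretation is simply a different language, so the truth of (7), once indexed to English, looks untouched by the possibility that the string "is smart" could have meant something else.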

My intuitions lean towards saying that this would not be English, but those intuitions aren't firm. I think the question boils down to: "Is 'English' a rigid designator (i.e., does 'English' refer to the same thing(s) in all possible worlds)?", but I'm not sure about that, either. Which way do your intuitions run?


Tarski, Quine, and logical truth

The following must have been addressed already in the literature, but I'm going to mention it anyway -- perhaps a better-informed reader can point me in the direction of the relevant research. W.V.O. Quine offers the following characterization of logical truth:
The logical truths are those true sentences which involve only logical words essentially. What this means is that any other words, though they may also occur in a logical truth (as witness 'Brutus,' 'kill,' and 'Caesar' in 'Brutus killed or did not kill Caesar'), can be varied at will without engendering falsity. ("Carnap and Logical Truth," §2)
All I wanted to mention here is that Alfred Tarski had already shown, in 1936's "On the Concept of Logical Consequence," that Quine's characterization is a necessary condition for a sentence to be a logical truth, but not a sufficient one. For example, if one is using an impoverished language that (i) only has proper names for things over five feet tall, and (ii) only has predicates that are true of everything over five feet tall, then the sentence 'George W. Bush is over five feet tall' will be a logical truth -- because no matter what name from this impoverished language we substitute for 'George W. Bush' or what predicate we substitute for 'is over five feet tall' in this sentence, the resulting sentence will be true.
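Tarski's objection can be sketched computationally (a toy model of my own devising; the particular names, heights, and predicates are illustrative assumptions, not anyone's official example). Quine's substitutional test quantifies only over the expressions the language happens to contain, whereas Tarski's 'truth in all models' test quantifies over reinterpretations of those expressions across arbitrary domains:

```python
# The impoverished language: every name denotes something over 5 ft (60 in.),
# and every predicate is true of everything over 5 ft.
HEIGHTS_IN_INCHES = {
    "George W. Bush": 72,
    "Shaquille O'Neal": 85,
    "the Eiffel Tower": 12996,
}
PREDICATES = {
    "is over five feet tall": lambda h: h > 60,
    "is over four feet tall": lambda h: h > 48,
    "is over one inch tall":  lambda h: h > 1,
}

def passes_quines_test() -> bool:
    """Quine's criterion: 'a is F' counts as logically true if every
    substitution of a name for 'a' and a predicate for 'F' yields truth."""
    return all(
        pred(HEIGHTS_IN_INCHES[name])
        for name in HEIGHTS_IN_INCHES
        for pred in PREDICATES.values()
    )

def true_in_all_models() -> bool:
    """Tarski's criterion: reinterpret the name over a richer domain; a model
    may assign 'George W. Bush' a 10-inch object, falsifying the sentence."""
    richer_domain = [72, 85, 12996, 10]
    return all(h > 60 for h in richer_domain)

print(passes_quines_test())   # True: passes the substitutional test
print(true_in_all_models())   # False: yet not true in all models
```

The gap between the two answers is exactly Tarski's point: substitutional invariance within an expressively poor language is too weak a standard for logical truth.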

Now some historical questions: did Quine think his condition was sufficient, or just necessary? (I quickly checked "Truth by Convention," and I didn't find any conclusive evidence that he considered it sufficient.) If Quine does consider this a proper definition of logical truth, how does he/ would he answer Tarski's objection? -- and/or why doesn't Quine simply adopt Tarski's definition of logical truth, viz. 'truth in all models'?

(You might think Tarski's objection shouldn't count for much, since I used a very contrived language to make the point against Quine. In Tarski's defense, however, (a) assuming that every object has a name in our language also seems somewhat artificial, and (b) Tarski proved (elsewhere) that a single language cannot contain names for all (sets of) real numbers (See "On Definable Sets of Real Numbers," reprinted in Logic, Semantics, Metamathematics).)