In a previous post, I asked about the relationship between Quine's definition of logical truth and the now-standard (model-theoretic) one. Here's the second installment, which I decided to finally post after sitting on it for a while, since Kenny just posted a nice set of thoughts on the very closely related topic of logical consequence.
The standard definition is:
(SLT) Sentence S is a logical truth of language L = S is true in all models of L.
Quine's definition is:
(QLT) S is a logical truth = "we get only truths when we substitute sentences for [the] simple [=atomic] sentences" of S (Philosophy of Logic, 50).
(Quine counts open formulas, e.g. 'x burns', as sentences; specifically, he calls them 'open sentences.' I'll follow his usage here.)
So what does Quine think is the relationship between the standard characterization of logical truth and his own? He argues for the following equivalence claim:
(EQ) If “our object language is… rich enough for elementary number theory,” then “[a]ny schema that comes out true under all substitutions of sentences, in such a language, will also be satisfied by all models, and conversely” (53).
In a nutshell, Quine argues for the 'only if' direction via the Löwenheim-Skolem theorem (hence the requirement of elementary number theory within the object language), and for the ‘if’ direction by appeal to the completeness of first-order logic. I'll spell out his reasoning in a bit more detail below, but I can sum up my worry about it here: in order to overcome Tarski’s objection to Quine’s substitutional version of logical truth, Quine appeals to the Löwenheim-Skolem theorem. However, for that appeal to work, Quine has to require the object language to be rich enough to fall afoul of the incompleteness results, thereby depriving him of one direction of the purported equivalence between the model-based and sentence-substitution-based notions of logical truth.
The 'only if' direction
Quine presents an extension of the Löwenheim-Skolem Theorem due to Hilbert and Bernays:
“If a [GFA: first-order] schema is satisfied by a model at all, it becomes true under some substitution of sentences of elementary number theory for its simple schemata” (54).
A little logic chopping will get us to
If all substitutions of sentences from elementary number theory make A true, then A is satisfied in all models.
-- which is what we wanted to show. (I think this argument is OK.)
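Spelled out (this is my reconstruction of the 'chopping,' not Quine's own wording), the step is just the Hilbert-Bernays result applied to the negation of the schema, followed by contraposition:

    \begin{align*}
    \text{(HB, for } \lnot A\text{):}\quad & \exists M\,(M \models \lnot A) \;\Rightarrow\; \exists\sigma\,(\sigma(\lnot A)\ \text{is true})\\
    \text{(contraposing):}\quad & \forall\sigma\,(\sigma(A)\ \text{is true}) \;\Rightarrow\; \forall M\,(M \models A)
    \end{align*}

Here \sigma ranges over substitutions of sentences of elementary number theory for A's simple schemata, and \sigma(\lnot A) is just \lnot\sigma(A), so 'no substitution makes \lnot A true' amounts to 'every substitution makes A true.'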
The 'if' direction
Quine takes as his starting premise the completeness result for first-order logic:
(CT) “If a schema is satisfied by every model, it can be proved” (54).
(Quine then argues that if a schema can be proved within a given proof calculus whose inference rules are “visibly sound,” i.e., “visibly such as to generate only schemata that come out true under all substitutions” (54), then such a schema will of course ‘come out true under all substitutions of sentences,’ Q.E.D.) This theorem is of course true for first-order logic; however, Quine has imposed the demand that our object language contain the resources of elementary number theory (explicitly including both plus and times, so that Presburger arithmetic—which is complete—is not in play). And once our object language is that rich, Gödel’s first incompleteness theorem comes into play. Specifically, in any consistent proof-calculus rich enough for elementary number theory, there will be sentences [and their associated schemata] that are true, i.e., satisfied by every model, yet cannot be proved -- providing a counterexample to (CT). So the dialectic, as I see it, is as follows: to answer Tarski's objection to the substitutional version of logical truth, Quine requires the language to be rich enough to capture number theory. But once Quine has made that move, the crucial premise (viz., CT) for the other direction of his equivalence claim no longer holds.
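For reference, the incompleteness result being invoked, stated in its usual modern form (my gloss, not Quine's text):

    \text{If } T \text{ is consistent, recursively axiomatizable, and extends elementary number theory,}\\
    \text{then there is a sentence } G_T \text{ with } T \nvdash G_T \text{ and } T \nvdash \lnot G_T.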
I think I must be missing something -- first, Quine is orders of magnitude smarter than I am, and second, while Quine is fallible, this does not seem like the kind of mistake he's likely to make. So perhaps someone in the blogosphere can set me straight.
And I have one more complaint. Quine claims that his "definition of logical truth agrees with the alternative definition in terms of models, as long as the object language is not too weak for the modest idioms of elementary number theory. In the contrary case we can as well blame any discrepancies on the weakness of the language as on the definition of logical truth" (55). This defense strikes me as implausible: why would we only have (or demand) a well-defined notion of logical truth once we reach number theory? Don't we want a definition to cover all cases, including the simple ones? If the model-theoretic definition captures all the intuitive cases, and Quine's only when the language is sufficiently rich, isn't that a good argument against Quine's characterization of logical truth?
------
(And for those wondering what Quine thinks is the advantage of his characterization of logical truth over the model-theoretic one, the answer is: Quine's uses much less set theory. "The evident philosophical advantage of resting with this substitutional definition, and not broaching model theory, is that we save on ontology. Sentences suffice,… instead of a universe of sets specifiable and unspecifiable. … [W]e have progressed a step whenever we find a way of cutting the ontological costs of some particular development" (55). Quine recognizes that his characterization is not completely free of set theory, given the proof of the LS theorem, so he says his "retreat" from the model-based notion of logical truth "renders the notions of validity and logical truth independent of all but a modest bit of set theory; independent of the higher flights" (56).)
5/02/2006
Knowledge via public ignorance
Yesterday Rohit Parikh gave a very interesting talk at Carnegie Mellon on a kind of modal epistemic logic he has been working on recently with several collaborators, cleverly called topologic, because it carries interesting topological properties. The first thing Parikh said was "I like formalisms, but I like examples more." In that spirit, I wanted to describe here one simple example he showed us yesterday, without digging into the technicalia, because it generates a potentially philosophically interesting situation: someone can (under suitable circumstances) gain knowledge merely via other people's declarations of ignorance.
Imagine two people play the following game: a natural number n>0 (1, 2, ...) is selected. Then one of the players has n written on their forehead, and the other player has n+1 written on theirs. Each player can see what is written on the other's forehead, but cannot see what is written on their own. The game allows only two "moves": you can either say "I don't know what number is on my forehead" or state what you think the number on your forehead is.
So, for example, if I play the game and I see that the other person has a 2 written on her forehead, I know that the number on my own forehead is either a 1 or a 3, but I do not know which. But here is the interesting part: if my fellow game-player wears a 2, and on her first move says "I don't know what my number is," then I know what my number is -- at least, if my fellow game-player is reasonably intelligent. Why? If I were wearing a 1, then my interlocutor would say, on her first move, "I know my own number is a 2" -- because (1, 2) is the first allowable pair in the game. Thus, if she says "I don't know what my own number is" on her first move, then I know my number can't be 1, so it must be 3. This same process of reasoning can be extended: with enough rounds of "I don't know" moves, the players can eventually work out any pair of natural numbers, no matter how large; we just have to keep track of how many rounds have been played. (This may remind the mathematically-inclined in the audience of the Mr. Sum-Mr. Product dialogue.)
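For the formally inclined, here is a minimal simulation of the game -- my own sketch, not Parikh's formalism; the function names and the bound N are artifacts of the sketch. Each player's information is a set of possible worlds, and each "I don't know" acts as a public announcement that deletes the worlds in which the speaker would have known.

    N = 20  # bound on the numbers -- an artifact of the sketch; keep it well above the actual pair

    # A world is a pair (x, y): player 0 wears x, player 1 wears y.
    # Legal worlds pair consecutive positive integers.
    worlds = {(x, x + 1) for x in range(1, N)} | {(x + 1, x) for x in range(1, N)}

    def knows_own(ws, w, i):
        # Player i knows their own number at world w iff every world in ws that
        # looks the same to i (same number on the OTHER forehead) agrees on i's number.
        return len({v[i] for v in ws if v[1 - i] == w[1 - i]}) == 1

    def play(actual, max_rounds=20):
        ws = set(worlds)
        for t in range(max_rounds):
            i = t % 2  # players alternate; player 0 moves first
            if knows_own(ws, actual, i):
                print("move %d: player %d announces: my number is %d" % (t + 1, i, actual[i]))
                return
            print("move %d: player %d says 'I don't know'" % (t + 1, i))
            # "I don't know" is public: delete every world where player i WOULD have known.
            ws = {v for v in ws if not knows_own(ws, v, i)}

    play((2, 3))  # she wears 2 and moves first; I wear 3

Running play((2, 3)) reproduces the story above -- her first "I don't know" eliminates the world in which I wear a 1, and I announce 3 on the next move; larger pairs just take more rounds before the base case propagates up.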
What is interesting to me about this is that the two players in such a game (and the other examples Prof. Parikh described) can eventually come to have knowledge about the world simply via declarations of ignorance. These cases prompt two questions for me:
(1) Is this type of justification for a belief different in kind from the others philosophers busy themselves with? Or is this just a completely normal/ standard/ etc. way of gathering knowledge, which differs only superficially from other cases? (I'm not qualified to answer this, since I'm not an epistemologist.)
(2) Are there any interesting real-world examples where we achieve knowledge via collective ignorance in (roughly) this way? (Prof. Parikh suggested that there might be a vague analogy between what happens in these sorts of games and game-theoretic treatments of evolution, but didn't have much further to say.)
4/29/2006
Modal logic workshop at CMU
I spent yesterday at a workshop devoted to modal logic at Carnegie Mellon University. Rather than rehearse everything that happened, I'll simply point interested parties to the workshop webpage, which has abstracts for the talks. Mostly local folks presented their work, but Johan van Benthem from Amsterdam and Stanford was here, along with Rohit Parikh, who'll be presenting on Monday as well.
I certainly learned a lot, and even though my brain was hurting afterwards, I enjoyed myself too.
4/25/2006
Mancosu and mathematical explanations
Last Friday, Paolo Mancosu was in Pittsburgh to give a talk on explanation in mathematics. His visit gives me the opportunity to correct an oversight in my last post -- Paolo helped me improve my dissertation substantially: he read an early partial draft very carefully, and brought his learned insight to bear on it. He is the only person in the universe who has written on the specific topic of my dissertation, and his comments were extremely helpful.
The basic claim of Paolo's talk was that Philip Kitcher's account of mathematical explanation falls afoul of certain apparently widely-shared intuitions about which proofs are explanatory and which are not. But I was intrigued by something else mentioned in the talk, which came to the fore more in the Q&A and dinner afterwards. When working on explanation in the natural sciences, there seems to be much more widespread agreement about what counts as a good explanation than in the mathematical/ formal sciences. That is, whereas most practitioners of a natural science can mostly agree on which purported explanations are good and which not, two mathematicians are much less likely to agree on whether a given proof counts as explanatory.
So I am wondering what accounts for this difference between the natural and formal sciences. Might this be due (in part) to mathematics lacking the 'onion structure' of the empirical sciences? For example, the claims of fundamental physics are not explained via results in chemistry, and observation reports (or whatever you want to call claims at the phenomenological level [in the physicist's sense]) are not used to explain any theoretical claim, and so on. My intuitions about mathematics are not as well-tutored, but I have the sense that the different branches of mathematics do not have such a clear direction of explanation. (Of course, there is no globally-defined univocal direction of explanation in the natural sciences [the cognoscenti can think of van Fraassen's flagpole-and-the-shadow story here], but there is nonetheless an appreciable difference between math and empirical sciences on this score.) At least in some cases, this clearer direction of explanation probably results from empirical sciences' explaining wholes in terms of their parts -- whereas mathematics lacks that clear part-whole structure. Often two bits of mathematics can each be embedded in one another, but we tend not to find this in the empirical sciences. (The concepts of thermodynamics [temperature, entropy] can be defined using the concepts of statistical mechanics [kinetic energy], while the converse is clearly out of the question.)
Pointing to the onion structure/ clearer direction of explanation in science might be just a re-statement of the original question; I'm not sure. Or maybe it's not relevant. In any case, I have to bury myself beneath a mountain's worth of student essays on the scientific revolution...
4/18/2006
That's Doctor Obscure and Confused, to you
Yesterday I defended my dissertation. I wanted to thank, in this very small but nonetheless public space, all the folks here who helped me along with the dissertation: John Earman and Nuel Belnap, who served on my committee; Tom Ricketts, who unfortunately joined the Pittsburgh department a little too late to be on my official committee, but still read dissertation chapters with a fine-toothed comb, and talked to me about them -- and job-market issues -- for hours; Steve Awodey, who not only discussed Carnap with me innumerable times, but also helped me in the process of professionalization and acculturation into the circle of philosophers who study Carnap seriously; and finally my advisor, Laura Ruetsche, who not only gave me searching, insightful, and helpful comments at myriad points in the development of the dissertation project, but also helped me through the occasionally-panic-inducing job search process.
So, thanks to all -- I feel very lucky to have had the final years of my grad school career shaped by you. I hope other PhD students are as fortunate as I have been.
4/04/2006
Semantics and necessary truth
I have recently been reading, with great profit, Jason Stanley's draft manuscript Philosophy of Language in the Twentieth Century, forthcoming in the Routledge Guide to Twentieth Century Philosophy. The paper's synoptic scope matches the ambitious title.
I'm curious about one relatively small claim Jason makes. On MS p.17, he says:
intuitively an instance of Tarski's schema T [GF-A: '...' is a true sentence if and only if ...] such as (7) is not a necessary truth at all:
(7) "Bill Clinton is smart" is a true sentence if and only if Bill Clinton is smart.
(7) is not a necessary truth, because "Bill Clinton is smart" could have meant something other than it does. For example, "is smart" could have expressed the property of being from Mars, in which case (7) would be false.
This certainly has (as Stanley says) an "intuitive" ring. But now I'm not sure it's correct.
Here's my worry: as a preliminary, recall (as Tarski taught us) that semantic vocabulary should always be indexed to a particular language -- e.g., we must say 'is a true sentence of English' or 'x refers to y in Farsi' etc. in the full statement of sentences like (7). But then I am not so sure that such sentences are not true in all possible worlds. Is it really the case that, in English, "is smart" could have expressed the property of being from Mars? We specify a particular language (in part) by specifying the semantic values of the words of that language (at least, if we are not proceeding purely formally/ proof-theoretically). Wouldn't we be speaking another language at that point, that was similar to English, but not the same?
My intuitions lean towards saying that this would not be English, but those intuitions aren't firm. I think the question boils down to: "Is 'English' a rigid designator (i.e., does 'English' refer to the same thing(s) in all possible worlds)?", but I'm not sure about that, either. Which way do your intuitions run?
4/03/2006
Tarski, Quine, and logical truth
The following must have been addressed already in the literature, but I'm going to mention it anyway--perhaps a better-informed reader can point me in the direction of the relevant research. W.V.O. Quine offers the following characterization of logical truth:
The logical truths are those true sentences which involve only logical words essentially. What this means is that any other words, though they may also occur in a logical truth (as witness 'Brutus,' 'kill,' and 'Caesar' in 'Brutus killed or did not kill Caesar'), can be varied at will without engendering falsity. ("Carnap and Logical Truth," §2)
All I wanted to mention here is that Alfred Tarski had already shown, in 1936's "On the Concept of Logical Consequence," that Quine's characterization is a necessary condition for a sentence to be a logical truth, but not a sufficient one. For example, if one is using an impoverished language that (i) has proper names only for things over five feet tall, and (ii) has predicates applying only to things over five feet tall, then the sentence 'George W. Bush is over five feet tall' will be a logical truth -- because no matter what name from this impoverished language we substitute for 'George W. Bush' or what predicate we substitute for 'over five feet tall' in this sentence, the resulting sentence will be true.
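To see the counterexample mechanically, here is a toy rendering in Python -- the heights and the second predicate are invented for illustration. Because every name of the impoverished language denotes something over five feet tall, and every predicate applies only to such things, Quine's substitution test can never produce a false instance:

    # Toy version of Tarski's counterexample. Every name of the impoverished
    # language denotes something over five feet (60 inches) tall, and every
    # predicate applies only to such things. (Heights are invented.)
    heights = {"George W. Bush": 72, "Shaquille O'Neal": 85, "LeBron James": 81}

    names = list(heights)
    predicates = [
        lambda h: h > 60,  # 'is over five feet tall'
        lambda h: h > 48,  # 'is over four feet tall'
    ]

    # Quine's test: 'n is P' is a logical truth iff every substitution of the
    # language's names and predicates yields a truth.
    def passes_substitution_test():
        return all(p(heights[n]) for n in names for p in predicates)

    # 'George W. Bush is over five feet tall' passes the test in this language,
    # though it is plainly not a logical truth.
    print(passes_substitution_test())  # True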
Now some historical questions: did Quine think his condition was sufficient, or just necessary? (I quickly checked "Truth by Convention," and I didn't find any conclusive evidence that he considered it sufficient.) If Quine does consider this a proper definition of logical truth, how does he/ would he answer Tarski's objection? -- and/or why doesn't Quine simply adopt Tarski's definition of logical truth, viz. 'truth in all models'?
(You might think Tarski's objection shouldn't count for much, since I used a very contrived language to make the point against Quine. In Tarski's defense, however, (a) assuming that every object has a name in our language also seems somewhat artificial, and (b) Tarski proved (elsewhere) that a single language cannot contain names for all (sets of) real numbers (See "On Definable Sets of Real Numbers," reprinted in Logic, Semantics, Metamathematics).)
Labels:
history of analytic,
logic,
Quine,
Tarski
3/29/2006
Should a naturalist be a realist or not?
Unsurprisingly, the answer to the question in the title of this post depends on the details of what one takes 'naturalism' about science to mean. The shared conception of naturalism is something like 'There is no first philosophy' (I think Penny Maddy explicitly calls this her version of naturalism) -- that is, philosophy does not stand above or outside the sciences. "As in science, so in philosophy" is (one of) Bas van Fraassen's formulations.
Each of the following two quotes comes from a naturalist, but the first appeals to naturalism to justify realism (about mathematics), while the second appeals to naturalism in support of anti-realism (about science).
In his review of Charles Chihara's A Structuralist Account of Mathematics in Philosophia Mathematica 13 (2005), John Burgess writes:
"If you can't think how we could come justifiably to believe anything implyingCompare van Fraassen, in The Empirical Stance (2002):
(1) There are numbers.
then 'Don't think, look!' Look at how mathematicians come to accept
(2) There are numbers greater than 10^10 that are prime.
That's how one can come justifiably to believe something implying (1)." (p.87)
But [empiricism's] admiring attitude [towards science] is not directed so much to the content of the sciences as to their forms and practices of inquiry. Science is a paradigm of rational inquiry. ... But one may take it so while showing little deference to the content of any science per se. (p.63)
Both Burgess and van Fraassen are naturalists about their respective disciplines (mathematics and the empirical sciences) -- but they disagree about what the properly scientific reaction is to questions like "Are there numbers?" and "Does science aim at truth or merely empirical adequacy?"
The mathematician deals in proof. And proof is (at least a large part of) the source of mathematics' epistemic force. The number theorist (e.g.) assumes the existence of the integers and proves things about them; that's what she does qua mathematician. People with the proclivities of Burgess and van Fraassen would agree thus far, I think. But they part ways when we reach the question "Are there integers?" A Burgessite (if not John B. himself) could say "If you're really going to defer to number theorists and their practice, they clearly take for granted the existence of the integers." A van-Fraassen-ite could instead say: "What gives mathematics its epistemic force and evidential weight is proof, and the number theorist has no proof of the existence of integers (or the set theorist of sets, etc.). Since there is no proof of the integers' existence forthcoming, asserting the existence of the integers (in some sense) goes beyond the evidential force of mathematics. Thus, a naturalist about mathematics should remain agnostic about the existence of numbers (unless there are other arguments forthcoming, not directly based on naturalism)."
Is there any way to decide between these forms of naturalism -- one which defers (for the most part) to the form and content of the sciences, and the other which defers only to the form? (Note: van Fraassen's Empirical Stance takes up this question, but this post is too long already to dig into his suggestions.)
Labels:
naturalism,
philosophy of science,
realism
3/15/2006
Pessimistic induction + incommensurability = instrumentalism?
One popular formulation of Scientific Realism is: Mature scientific theories are (approximately) true.
One of the two main arguments against this claim is the so-called 'Pessimistic (Meta-)induction,' which is a very simple inductive argument from the history of science: Most (or even almost all) previously accepted, (apparently) mature scientific theories have been false -- even ones that were very predictively successful. Ptolemy's theory yielded very good predictions, but I think most people would shy away from saying 'It is approximately true that the Sun, other planets, and stars all rotate around a completely stationary Earth.' So, since most previous scientific theories over the past centuries turned out to be false, our current theories will also probably turn out to be false. (There are many more niceties which I won't delve into here; an updated and sophisticated version of this kind of argument has been run by P. Kyle Stanford over the last few years.)
The kind of anti-realism suggested by the above argument is that the fundamental laws and claims of our scientific theories are (probably) false. But we could conceivably read the history of science differently. Many fundamental or revolutionary changes in science generate what Kuhn calls 'incommensurability': the fundamental worldview of the new theory is in some sense incompatible with that of the older theory -- the changes from the classical theories of space, time, and matter to the relativistic and quantum theories are supposed to be examples of such changes. Communication breaks down (in places) across the two worldviews, so that each side cannot fully understand what the other claims.
Cases of incommensurability (if any exist) could result in each side thinking the other is speaking incomprehensibly (or something like it), not merely that what the other side is saying is false in an ordinary, everyday way. An example from the transition from Newtonian to (special) relativistic mechanics may illustrate this: Suppose a Newtonian says 'The absolute velocity of particle p is 100 meters per second.' The relativistic physicist would (if she is a Strawsonian instead of a Russellian about definite descriptions) say such a sentence is neither true nor false -- because there is no such thing as absolute velocity. [A Russellian would judge it false.] If she merely said "That's false," the Newtonian physicist would (presumably) interpret that comment as 'Oh, she thinks p has some other absolute velocity besides 100 m/s; perhaps I should go back and re-measure.' To put the point in philosophical jargon: presuppositions differ between pre- and post-revolutionary science, and so the later science will view some claims of the earlier group as exhibiting presupposition failure, and therefore as lacking a truth-value, like the absolute velocity claim above. (Def.: A presupposes B = If B is false, then A is neither true nor false)
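The Strawson/Russell contrast can be put in a few lines of Python -- a toy encoding of my own, with None standing in for 'neither true nor false':

    # Toy contrast between the two treatments of a failed presupposition.
    def strawson(presupposition_holds, assertion_true):
        if not presupposition_holds:
            return None  # presupposition failure: no truth-value at all
        return assertion_true

    def russell(presupposition_holds, assertion_true):
        # the presupposition is part of what is asserted, so failure = falsity
        return presupposition_holds and assertion_true

    # 'The absolute velocity of p is 100 m/s', evaluated relativistically:
    # the presupposition (that p has an absolute velocity) fails.
    print(strawson(False, True))  # None -- truth-valueless
    print(russell(False, True))   # False -- just plain false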
This leads us to a different kind of pessimistic induction: (many of) the fundamental theoretical claims of our current sciences probably lack a truth-value altogether, since past theories (such as Newtonian mechanics) have that feature. (If you want to call claims lacking truth-values 'meaningless,' feel free, but it is not necessary.) This is hard-core instrumentalism, a very unpopular view today; most modern anti-realists, following van Fraassen, think that all our scientific discourse is truth-valued (though we should be agnostic about the truth-value of claims about unobservable entities and processes). But this instrumentalism seems to be a natural outcome of (1) taking the graveyard of historical scientific theories seriously, (2) believing there is something like Kuhnian incommensurability, and (3) holding that incommensurability involves presupposition failure. And none of those three strike me as crazy.
Disclaimer: This argument has probably been suggested before, but I cannot recall seeing it anywhere.
3/10/2006
Want a job? Come to Pitt HPS
As some of you know, I am in the last stages (throes?) of my PhD program at the University of Pittsburgh, in the History and Philosophy of Science (HPS) department. For those who are not familiar with it, the department is relatively small: there are usually about 8-9 faculty whose primary appointment is in HPS, and about 30 or so graduate students.
In an average year, 1 to 3 people from my department go on the job market, and the department has had a very good placement record since I've been here: everyone who graduated from the program has gotten a tenure-track job, either straight out of grad school or after a 1-2 year post-doc. But this year, we had ten people go on the market, eight of them (myself included) for the first time. There was a lot of hand-wringing and worry, by students and faculty alike, about having so many HPS people on the market at one time. The demand for folks like us, who prove theorems in the foundations of quantum gravity or trace out the technical development of Galileo's kinematic theory, is just not as high as for people who work in ethics or epistemology.
I am now happy to report that all ten people have found good positions: eight are beginning tenure-track jobs, and the other two are taking enviable post-doc positions (including filling fellow-blogger Gillian Russell's old spot as Killam fellow). I won't give the list of where everyone is headed, since I haven't asked their permission to broadcast that information to the three people who read this blog. But I grant myself permission to announce that I will be starting next fall as an assistant professor at UNLV (the University of Nevada-Las Vegas). It's a great position for me, in a department full of smart, sensible, and funny people. I'll probably blog in more detail soon about why I'm so excited about it -- but for starters, it was 70 degrees when I visited in January!
UPDATE: The Killam Fellow mentioned above will have a tenure-track position, though it is not yet determined where he will be. So 9 of 10 will start tenure-track jobs.