11/14/2008

From the disunity of science to anti-realism

The two most popular arguments against scientific realism today are (1) underdetermination arguments and (2) the pessimistic induction. There is a third kind of anti-realist argument besides these two, which does not get the attention that (1) and (2) receive. (Note: I would appreciate it if any readers could point me to someone working today who has developed anything akin to this third line of argument in detail.)

I don't yet have a clean formulation of this third kind of argument for anti-realism that I find satisfactory. All I have so far is an analogy. We see an instance of this third kind of anti-realist argument in Andreas Osiander's (much-maligned) Preface to Copernicus' On the Revolutions of the Heavenly Spheres. Osiander argues that Ptolemy's astronomy is not intended to be the literal truth about the structure of the solar system on the grounds that, if the epicycles Ptolemy ascribed to Venus literally described Venus's movement, then Venus's apparent size for an Earthly observer should vary 16-fold. But Venus's apparent size does not vary in this way. Thus, even though Ptolemy's astronomy predicts planetary positions within the margin of observational error, it cannot be the literal truth: if it were, it would get apparent planetary sizes right as well.
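(A quick aside on the arithmetic behind that 16-fold figure, on one natural reading of 'apparent size' as apparent area: apparent diameter falls off inversely with distance, so apparent area falls off with the square of distance. Here is a tiny Python sketch of mine; the 4:1 variation in Venus's Earth-distance is purely illustrative, not an exact Ptolemaic parameter.)

    # Illustrative sketch only: apparent area scales as the inverse square of distance.
    def apparent_area_ratio(nearest_distance, farthest_distance):
        """Ratio of apparent (angular) area at nearest vs. farthest approach."""
        return (farthest_distance / nearest_distance) ** 2

    print(apparent_area_ratio(1.0, 4.0))  # -> 16.0: a fourfold change in distance gives a 16-fold change in apparent area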

What does this have to do with present-day scientific anti-realism? This is sketchy, but perhaps the increasing sympathy for 'the disunity of science' and the related notion of 'pluralism in science' could be used to argue for some sort of anti-realism. Now, what exactly those two expressions mean varies greatly from one person to another (I think; I would be happy for better-informed readers to enlighten me), but I think I can illustrate my point by imagining what a 16th Century pluralist/ 'disunifier' would say about Osiander's point above.

Such a person could say: Different scientific theories explain different phenomena. Ptolemy's theory only aims to account for apparent planetary positions, not their apparent sizes (and/or brightnesses). To demand that Ptolemy's theory also account for apparent sizes/ brightnesses is to impose an ideal of unified science upon Ptolemaic astronomy.

However, we today (and Osiander 450 years ago) think that this problem with apparent sizes is good evidence against the truth of Ptolemaic astronomy. That is, the 16th century pluralist/ disunifier seems to 'let a bad theory off the hook': we have evidence the theory is not true, but that evidence is discounted or ignored on the grounds that the theory's target domain of explanation does not include the countervailing evidence.

Now of course the question for modern-day antirealism is: how good is this analogy? In particular, is the kind of pluralism or disunity that people are championing (or at least accepting) today relevantly similar to the Ptolemaic pluralism/ disunity described above?

10/31/2008

philosophy for children

I just learned about something that makes me very happy: the Northwest Center for Philosophy for Children. The director also has a great blog about teaching philosophy to K-12 students. Things like this center and the radio show Philosophy Talk give me hope for philosophy's prospects.

10/29/2008

Doris Lessing and idealization

In her 1993 Preface to The Golden Notebook, Nobel laureate Doris Lessing writes:
"Currently I am writing volume one of my autobiography, and ... I have to conclude that fiction is better at 'the truth' than a factual record. Why this should be so is a very large subject and one I don't begin to understand." (p. ix)

I cannot say that I understand it either. However, I wanted to explore the notion that it might be related to idealization and abstraction, which have interested philosophers of science over the last few decades.

Roughly, an idealization is something that is strictly speaking false. For example, in usual derivations of the ideal gas law (PV=nRT) from statistical mechanics, the gas molecules are assumed to be point particles that occupy no volume and exert no forces on one another. But everyone agrees that, actually, gas molecules do take up some volume and do interact. This reveals something: the size of the constituent particles and the forces between them do not play a (significant) role in the gas's macro-level properties (like pressure and temperature).
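To see the point in miniature, here is the equation of state as a one-line function (a toy sketch of mine, not anything from the philosophical literature): notice that its inputs include nothing about the molecules themselves -- no size, no interaction strength, not even a mass.

    # Minimal sketch: the ideal gas equation of state, P = nRT/V.
    # Note what the signature leaves out: no parameter for molecular size,
    # mass, or intermolecular forces.
    R = 8.314  # gas constant, J/(mol*K)

    def ideal_gas_pressure(n_moles, temperature_K, volume_m3):
        return n_moles * R * temperature_K / volume_m3

    print(ideal_gas_pressure(1.0, 273.15, 0.0224))  # roughly 101 kPa: 1 mol at 0 C in 22.4 L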

Roughly, an abstraction leaves out something (as opposed to putting in something false); for example, when doing classical mechanics, we need to know the mass and velocity of bodies, but we do not need to know their colors. And in many cases, we don't need to factor in their shapes. This parallels the idealization case: it reveals that the color of a body -- and, in many cases, its shape -- is irrelevant to its behavior after an elastic collision.
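To make that concrete, here is a quick Python sketch of my own, assuming a head-on (one-dimensional) elastic collision: the standard formulas take only masses and incoming velocities as inputs, so shape and color never get a chance to matter.

    # Head-on (1-D) elastic collision between two bodies.
    # Outgoing velocities depend only on masses and incoming velocities;
    # nothing about color or shape enters the calculation.
    def elastic_collision_1d(m1, v1, m2, v2):
        v1_out = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        v2_out = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return v1_out, v2_out

    print(elastic_collision_1d(1.0, 2.0, 1.0, 0.0))  # equal masses swap velocities: (0.0, 2.0)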

Now, fiction is also full of both straightforward falsehoods (there's no such person as Harry Potter) and abstractions (How many hairs, exactly, are on Harry Potter's head?). Perhaps we can 'begin to understand' the fact that Lessing finds so perplexing by saying that these two elements of fiction play a role similar to idealization and abstraction in science. That is, by stripping away the infinitely many traits that concrete, actual things and events have, fiction makes manifest the explanatory (perhaps causal?) core of the people and circumstances it depicts, leaving out (supposed) irrelevancies. So I would not say, as Lessing does, that fiction is better at 'the truth' than a factual record, which sounds paradoxical, but rather that fiction can provide a better explanation than a detailed diary -- and it does so for the same reason that idealization and abstraction in science provide better explanations than would a complete causal history of the universe.

10/16/2008

Carnap party!

Friday, November 7th, at the PSA. Thanks to Richard Zach for the pointer.

9/29/2008

Help

I've looked at something so long that I have confused myself, and am now hoping to get a little help. Paul Boghossian makes the following charge against analyticity (and smart people quote this approvingly):

What could it possibly mean to say that the truth of a statement is fixed exclusively by its meaning and not by the facts? Isn't it in general true--indeed, isn't it a truism--that for any statement S

S is true iff for some p, S means that p and p?

How could the mere fact that S means that p make it the case that S is true? Doesn't it also have to be the case that p? (Nous 1996, p.364)

Now my question: Is it fair to impute to Boghossian the view that there are no S and p such that 'S means that p' is a sufficient condition for 'S is true'?

(The upshot: if this is fair, then I think any case where S expresses a logical truth p is a counterexample. I still think the 'truism' is true; I just don't think it establishes the claim I'm imputing to Boghossian.)

9/26/2008

jurors do the impossible

I went in for jury duty yesterday for the first time in my life. I learned a lot about the nitty-gritty mechanics of a trial. I was put into the jury box for voir dire, where the judge and the lawyers ask you a bunch of questions to determine whether they want to leave you on the jury or get a replacement. (I was tossed out.)

One thing that struck me during this voir dire questioning is that the judge asks you to do two things that, I think, are impossible. First, he asks you if you can be completely and totally impartial. I'm not a psychologist, but I've seen plenty of studies showing that basically no one is thoroughly impartial; unconscious biases run through our thinking.

Second, the judge asks you if you can refrain from coming to a belief about the accused's guilt or innocence until you enter the jury deliberation room -- that is, after you have heard all the evidence, followed by the judge's specific instructions about the law. If you think that belief is involuntary (as many do), then this is impossible.

I recognize that these 2 things are ideals to strive for, and most likely the judge and lawyers recognize that they cannot be perfectly achieved. By re-interpreting these 2 demands to myself as ideal goals, I felt OK about agreeing to them. But I still felt weird asserting that I could do two things that, if understood literally, I think are impossible.

9/22/2008

a new job description

I just got home from the University of Utah in Salt Lake City, where I gave my talk arguing for the following slogan:
pessimistic induction + common accounts of reference = semantic anti-realism.
For the blogpost version, with a decent comment thread, see here.

Salt Lake is physically beautiful, and socially it seems to be a strange combination of hippies and Mormons. The Utah department was really great -- plus, Jim Tabery was a model host. The only thing I want to post, though, is Ron Mallon's description of my work on Carnap et al.: he characterized me as a 'boutique historian.' I don't know whether the expression is original to him, but I definitely plan to steal it to describe myself in the future.

9/05/2008

Chmess in the pages of Science

Dennett bad-mouths studying chmess, but it looks like chmessology has appeared in the latest issue of the esteemed journal Science.

Also, in case you haven't seen it, Greg Lavers wrote a good review of the Cambridge Companion to Carnap for NDPR, worth checking out.

9/01/2008

Are scientific theories full of truth-value gaps?

I'm almost done writing a paper that argues that if one accepts the pessimistic induction over the history of science, then one should be a semantic anti-realist about current science -- or should hold unorthodox views in philosophy of language. Here's the argument:

(P1) Certain claims that (a) contain terms with defective reference, or (b) exhibit presupposition failure, are neither true nor false.

(P2) Some fundamental theoretical claims of earlier science exhibit the type of (a) defective reference (e.g. 'phlogiston,' 'absolute velocity,' 'Vulcan') or (b) presupposition failure (e.g. 'Events A and B are simultaneous') described in (P1).

(C1) Therefore, some fundamental theoretical claims of earlier science are neither true nor false.

(P3) Present science probably resembles past science. [That's the step borrowed from the pessimistic induction]

(C2) Therefore, some fundamental theoretical claims of present science are neither true nor false.

The main objections, I think, are:
1. Sentences that contain non-referring terms or exhibit presupposition failure are false, not truth-valueless.
2. Specifically, natural kind terms that fail to pick out a kind/ property (e.g. 'phlogiston') do not generate truth-value gaps (even supposing proper names that fail to pick out an individual do generate truth-value gaps).

I offer replies to these objections, but none of my replies is absolutely decisive. So, my more tentative conclusion is that a proponent of the pessimistic induction either has to accept my original conclusion OR accept a currently unpopular position in philosophy of language -- e.g. one could justify objection 2 by appealing to a descriptivist account of natural kind terms, but that would fall afoul of the widely endorsed Kripke-Putnam arguments against such descriptivism.

I'll be giving this material as a talk at the University of Utah in a couple of weeks, so any feedback before then would be especially appreciated.

8/10/2008

Am I in a fictional character's address book?

I can't figure out whether the following worries about fictional characters are interesting or silly. They are certainly amateurish, because I don't know the literature about fiction well.

I recently got a Nintendo Wii. You can receive emails on your Wii; you also get various sorts of system updates there. In my Wii inbox yesterday, there was an email from a character in one of the games I'm playing. (Not that it matters, but for sake of concreteness, the email was from a character called the 'mailtoad.')

Now I think there's something strange about the sentence
'Greg got an email from the mailtoad.'*
Why? Because the mailtoad is fictional, and I would've thought it was impossible for me to receive communications from fictional characters. I would've thought that fictional characters can only send letters to other fictional characters, and I'm pretty sure I'm not a fictional character.

Often, if there is a sentence about a fictional character that intuitively feels true, that's because in the fiction the sentence is literally true. That is, while 'Sherlock Holmes lives on Baker Street' is not literally true, nonetheless 'In the stories written by Conan Doyle, Holmes lives on Baker Street' is literally true. And that accounts for why there is some intuitive pull towards calling it true.

But this won't really work in the present case of my email from the mailtoad: I received an email -- in the actual world. But I am not a fictional character, and I do not appear in the fiction, as e.g. Napoleon appears in War & Peace. This looks analogous to the real-life Napoleon finding a letter from one of the fictional characters of War & Peace in his mailbox.

It seems like the least crazy thing to say is that the mailtoad did not send me a letter, but rather some combination of hardware and software -- which are non-fictional things -- sent me the email. But then what is the relationship between the hardware + software combination and the fictional character? I don't have a well-posed worry here, but it does seem pretty different from the way words on a page are related to Sherlock Holmes.

-----
* Actually, the email was addressed to Mario, the character I'm playing in the game. If you think this dissolves the problem, then for the rest of this post, just counterfactually suppose the word "Mario" was not in the first line of the email. But I think it is still weird to intercept a communique from one fictional character to another in my inbox... maybe not as weird, though.

8/08/2008

The advantages of theft over an honest post

I'll have a real post up soon, I hope, but in the meantime I wanted to let readers know about a new blog by Chris Pincock, Honest Toil. (Or maybe I'm just the last one to notice it.) His areas of interest are pretty similar to what you find on Obscure and Confused Ideas, and he's posting good stuff at an impressive pace.

7/17/2008

Are offprints obsolete?

In my department mailbox, I just found a big package, shipped over from Germany, containing 50 copies of a recent article of mine. This struck me as a waste of paper and postage, especially given the unbelievable subscription prices for some for-profit academic journals. I reckon that cutting out offprint services wouldn't save that much money, but lowering the prices even a little might increase access somewhat for institutions that are smaller or in developing countries.

And I personally have never sent anyone an offprint, and never received one. Every paper I've exchanged long-distance has been via email.

Perhaps things are different in other academic fields, but given my limited experience, offprints seem pretty obsolete. (That said, I do like receiving a physical paper copy of the journal issue where my article appears, but I think this is primarily because it somehow feeds my pridefulness and vanity.)

Are there reasons in favor of continuing the status quo offprint practices? If so, I'd like to hear them.

7/10/2008

Are truth-value gluts necessary to avoid explosion?

Before I get to the question in the title of the post, let me give a quick rehearsal of some uncontroversial material, for readers innocent of this particular topic. In classical logic, anything follows from a contradiction:
A & ~A, ∴ B
is a valid argument. This argument form is known as ex falso quodlibet (EFQ); Graham Priest calls it 'explosion.' This clearly runs counter to our intuitions about what follows from what: it just doesn't seem like '2+2=200' follows from 'Grass is green and grass is not green.' Yet it does in classical logic.

Because this seems counterintuitive, people have devised logics in which EFQ is not a valid argument form. The most prominent is the family of relevance logics. So relevance logics score a point because they fit our intuitions about EFQ.

How do relevance logics avoid EFQ? Semantically/ model-theoretically, they allow 'truth-value gluts'; that is, a sentence can be both true and false. Now we can see why (A & ~A), ∴ B is invalid in logics allowing gluts: assign A the value both-true-and-false, and assign B false (and only false). Then the premise is (at least) true and the conclusion is not true.
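Here's a minimal Python sketch of how the countermodel works. (Caveat: this ignores the further machinery real relevance logics use -- it is really just the simple glut-tolerant logic LP, presented with truth values as sets of classical values -- but it shows how a glut blocks explosion.)

    # Glut-tolerant truth tables in the style of LP: a value is a set of
    # classical values; a sentence is 'at least true' if 't' is in its value.
    T, F, BOTH = {'t'}, {'f'}, {'t', 'f'}

    def neg(v):
        return {{'t': 'f', 'f': 't'}[x] for x in v}

    def conj(v, w):
        out = set()
        if 't' in v and 't' in w: out.add('t')
        if 'f' in v or 'f' in w:  out.add('f')
        return out

    def designated(v):           # "at least true"
        return 't' in v

    # Countermodel to explosion: A is a glut, B is plain false.
    A, B = BOTH, F
    premise = conj(A, neg(A))    # A & ~A
    print(designated(premise))   # True  -- the premise is at least true
    print(designated(B))         # False -- the conclusion is not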

That was all set-up. Now the question: suppose it turns out that there are no truth-value gluts, i.e., no sentence (or proposition or whatever) is both true and false. Would (some) defenders of relevance logics then accept EFQ? Well, perhaps all the glut people need is that gluts are possible, and not that actually some sentence is both true and false. Then my question would be: would EFQ-deniers accept EFQ if gluts were impossible? From my limited exposure to the literature (Priest, C. Mortensen NDJFL 1983), it seemed like the answer might be yes, because they say things like 'Disjunctive Syllogism (which is valid classically but not relevantly) is valid in all consistent reasoning-contexts.' I would've hoped that we could discard EFQ without taking on such a contentious idea as truth-value gluts...

And a further question just out of ignorance: does anyone characterize logical validity in such a way that it (i) avoids EFQ and (ii) does not require truth-preservation? I don't see any other way besides gluts to declare EFQ invalid, if we stick to the standard characterization of validity.

7/07/2008

In Praise of Graham Priest's Intro to Non-Classical Logic

This summer, I am directing an independent study on non-classical logics. In part because of Ole's glowing recommendation, the primary text has been Graham Priest's Introduction to Non-Classical Logic. The book has been fantastic; I can recommend it without qualification. It is pitched at just the right level for a philosophy student with maybe one logic course under their belt -- neither too slow nor too quick. The end-of-chapter exercises are also just right: neither too difficult nor too easy. And each chapter closes with a couple of pages dealing with how the technical material presented there connects up with overtly philosophical questions, keeping up motivation for people whose primary interest is not in the formal/ mathematical side of things.

Another aspect of the book that appealed to me was that (partial) soundness and completeness proofs were given at the end of each chapter, separated from the main course of discussion as optional material. Such proofs are of course incredibly important to practicing logicians, but I sometimes think that the amount of time and effort needed for them is better spent elsewhere given the limitations of a classroom, and the fact that most philosophy students in logic classes won't go on to be practicing logicians. The nice thing about Priest's presentation is that if you think soundness and completeness proofs are essential, you can cover them, or if (like me) you'd rather spend that time covering a wider array of logics, you can easily skip over them without loss or inconvenience.

Last but not least, I certainly have learned a thing or two (or ten), even though it is labeled as an introductory textbook. I will definitely use this book again in future classes.

7/01/2008

committing armchairs to the flames

I have an avid spectator's interest in experimental philosophy, and do not pretend to expertise. I've recently seen, on email lists and blogposts, announcements for an experimental philosophy workshop called Armchair in Flames? It looks like an interesting conference; but a thought popped into my head about the title, which refers to the X-Phi anthem (performed here).

Experimental philosophers take themselves to be committing the philosophical armchair to the flames, because (in part) they do surveys of the person-in-the-street's view on various philosophical topics. They thus test whether what professional philosophers say is commonsensical really is common sense, or rather some sort of idiosyncrasy or professional deformation.

But I just wanted to remark that this is only one way of committing the armchair to the flames. Another, which has been the dominant outlook in philosophy of science for the last few decades, is to discount heavily or even completely any deliverances of so-called common sense or intuition, and instead lean heavily or even completely upon the deliverances of mature sciences in formulating philosophical positions. This outlook was unequivocally in full effect in the department where I got my PhD, among faculty and students alike.

So in short, the experimental philosophers don't have a monopoly on casting armchairs into the flames -- the philosophers of science have been stoking that fire for a while already.

6/28/2008

HOPOS 2008

The 8th HOPOS (History of Philosophy of Science) conference finished last week. Unfortunately, because of teaching obligations, I didn't arrive until the end of the second day, so I missed some sessions that I really wanted to see.

I learned a lot, got to catch up with several folks I haven't seen in a while, and met some new interesting and smart people. And I got very helpful feedback on the material I presented (on Quine's trajectory from "Truth by Convention" to "Two Dogmas"). So it was a very good conference for me.

There was one thing that's been bothering me, however. This may just be the result of missing some of the presentations on logical empiricism that I would've liked to see, but I'm wondering whether -- speaking very generally -- scholarship on logical empiricism is in danger of losing a clear and coherent direction. Why? For the last 20 or so years, work on logical empiricism has improved our understanding of Carnap, Neurath, Schlick et al. by leaps and bounds. This is in large part because the old, received view of the logical empiricists was extremely inaccurate. Marginal returns on investment were quite high in the beginning; it is only natural that they come down as our picture of the logical empiricists becomes more and more refined. But my worry is that without the fairly well-defined project of locating and then refuting various caricatures of logical empiricism, the field might begin to drift.

It is not uncommon for a group that is very successful when fighting a common enemy to have trouble thriving in peacetime. I hope the same thing is not happening to logical empiricism studies, now that its enemy -- the Received View of logical empiricism -- has been in large part defeated (though we are still waiting for news of that victory to reach the ears of everyone in philosophy). We know what we are against; but what are we for? -- that is, can we identify and rally around some set of interesting and fruitful further research questions about the logical empiricists?

As I said above, I hope this was just a sampling error on my part: I think I missed some really good presentations, which are truly representative of the state of the art in the field. So I'm not worried yet.

6/23/2008

in lieu of a real post

In this week's Science:

"The first scientific conference held in Azeroth, the online universe of the role-playing game World of Warcraft, went off virtually without a hitch. Although the participants all died during the final day's social event — a massive raid on an enemy fort — they agree that this event is a glimpse at the future of scientific exchange."

I may have a real post up soon: I just got back from Vancouver, where I attended the 8th History of Philosophy of Science conference, and I may file a brief report from the field.

5/29/2008

Experimental Philosophy of mathematical intuition

So this is straight psychological research, not done by 'experimental philosophers,' but it does seem highly relevant to any philosophers who appeal to the notion of mathematical intuition:

Log or Linear? Distinct Intuitions of the Number Scale in Western and Amazonian Indigene Cultures, Stanislas Dehaene et al. in Science May 30 2008.
We probed number-space mappings in the Mundurucu, an Amazonian indigene group with a reduced numerical lexicon and little or no formal education. At all ages, the Mundurucu mapped symbolic and nonsymbolic numbers onto a logarithmic scale, whereas Western adults used linear mapping with small or symbolic numbers and logarithmic mapping when numbers were presented nonsymbolically under conditions that discouraged counting. This indicates that the mapping of numbers onto space is a universal intuition and that this initial intuition of number is logarithmic. The concept of a linear number line appears to be a cultural invention that fails to develop in the absence of formal education.
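(Just to make the contrast vivid, here is a toy illustration of my own -- not from the paper -- of where a number lands on a 1-to-100 number line under the two mappings.)

    # Linear vs. logarithmic placement of 10 on a line running from 1 to 100.
    import math

    def linear_position(n, lo=1, hi=100):
        return (n - lo) / (hi - lo)

    def log_position(n, lo=1, hi=100):
        return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

    print(linear_position(10))  # ~0.09 -- close to the left end
    print(log_position(10))     # 0.5   -- exactly halfway, since 10 is the geometric mean of 1 and 100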

5/27/2008

Scientific Realism via the internets

I recently found out that Philosophy of Science has conditionally accepted an article I wrote on the no-miracles argument. This is a stroke of good luck, and it's also a testament to the philosophical blogosphere: basic ideas in this paper were hashed out on this blog (see especially here), and honed by readers' astute criticism. Perhaps the paper wouldn't have been good enough for acceptance otherwise.

I would greatly appreciate further help on the paper before I send away the final version; the current draft (in rich text format) is here. Here's an abbreviated abstract:
1. Scientists (usually) do not accept explanations that explain only one type of already accepted fact.
2. Scientific realism (as it appears in the no-miracles argument, or NMA) explains only one type of already accepted fact.
3. Psillos, Boyd, and other proponents of the NMA explicitly adopt a naturalism that forbids philosophy of science from using any methods not employed by science itself.
Therefore, such naturalistic philosophers of science should not accept the version of scientific realism that appears in the NMA.
And as long as I am singing the praises of the blogosphere and begging for readers, P.D. Magnus (of the excellent Footnotes on Epicycles blog) and I have a draft of a paper on another aspect of the scientific realism debate (in pdf format) here. We ask, and give a partial answer to, the question: When should two empirically equivalent theories be regarded as variants of one and the same theory? Comments large and small are appreciated!

5/13/2008

Against negative free logic

'Free logic' is an abbreviation for 'logic whose terms are free of existential assumptions, both singular and general.' Free logics attempt to deal with languages containing singular terms that do not denote anything, such as 'Pegasus'.

Free logics come in 3 basic flavors, which differ over what truth-values should be assigned to (atomic) sentences containing non-denoting names.
- Negative free logics declare all such sentences false;
- Neutral free logics declare all such sentences neither true nor false; and
- Positive free logics declare at least some such sentences true (in particular, 'Pegasus=Pegasus').

Tyler Burge argued for negative free logic over its rivals in "Truth and Singular Terms," Nous (1975). I came up with a little argument against negative free logic; but I do not know the argumentative landscape for these 3 options particularly well, so this may be extant already. (Note: if any readers have references for arguments pro and con negative free logic, I'd be very interested. I've found a couple of nice articles by James Tomberlin, and a short response by Richard Grandy to Burge's piece, but not much else.)

According to the negative free logician, all atomic sentences containing non-denoting names are false. Some people reject this because calling 'Pegasus=Pegasus' false seems wrong; here's another problematic type of case. Consider the following three sentences (and assume for the sake of argument that 'Atlantis' is a non-denoting name):
(1) Atlantis is West of London.
(2) Atlantis is East of London.
(3) Atlantis and London have the same longitude.
In negative free logic, all three of these must be false. But for the three predicates 'is west of,' 'is east of,' and 'has the same longitude as,' any one of the three can be defined in terms of the other two using only negation and conjunction. E.g.:
'x is west of y' means 'x is not east of y, and x does not have the same longitude as y.'
But now we've got a problem: If 'Atlantis is west of London' is false (as the negative free logician says), then at least one of 'Atlantis is east of London' or 'Atlantis and London have the same longitude' has to be true -- but that contradicts the earlier assumption (of the negative free logician) that all of (1)-(3) are false.
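To make the clash explicit, here is a tiny Python sketch, with 'is west of' treated as defined from the other two predicates by negation and conjunction, as above:

    # 'west' defined from 'east' and 'same longitude' by negation and conjunction.
    def west(east, same_longitude):
        return (not east) and (not same_longitude)

    # The negative free logician wants all three atomic sentences about Atlantis
    # to come out false:
    east_val, same_val, west_val = False, False, False

    # But the definition forces 'west' to be true when 'east' and 'same longitude'
    # are both false:
    print(west(east_val, same_val))              # True
    print(west(east_val, same_val) == west_val)  # False -- the assignment is inconsistent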

And this same problem will crop up in general when we have a set of predicates that are definable in terms of one another and negation; in the simplest case, P = ~Q. And this is not that rare: {'before', 'after', 'simultaneous'} is another example. The negative free logician could save her position by maintaining that two of the predicates are somehow really basic, and the other really derivative. But at least in these two cases, it doesn't look legitimate to hold that 'west' is somehow fundamental and 'east' merely derivative.

Does anyone see a good response to this objection on behalf of the proponent of negative free logic?

4/17/2008

HOPOS 2008 program posted

The International Society for the History of Philosophy of Science (a.k.a. HOPOS) has just posted the program for its 2008 conference in Vancouver, taking place June 18-21. (According to the program, I'm speaking about someone named 'Quien.')

Hope to see you there! If you are going, and want to meet up, send me an email.

4/14/2008

Are there empty natural kind terms? (The 2nd in a series)

There are empty names, like 'Santa' and '(Planet) Vulcan'; there is a fairly large literature dealing with them in both the philosophy of language and in logic (called 'free logic'). But there has not been any discussion of empty natural kind terms -- which prompts the question: are there any such terms? I ask because it seems to me that 'phlogiston,' 'caloric,' and other central terms of now-discarded scientific theories may qualify as empty natural kind terms.

This question is complicated by the fact that there is not widespread agreement on what natural kind terms are. The two candidates are (I) predicates and (II) names. (Predicates are the leading contender, so you can skip the final paragraphs if your patience for this kind of thing is limited.)

(I - Predicates) This takes us back to the subject of the previous post: are there any empty natural kind predicates? As noted in the last post on this subject, if 'empty' just means 'has the null set as its extension, and the whole domain of discourse as its anti-extension,' then the answer is obviously yes. But then it's uninteresting -- empty names are interesting because (both for direct reference theorists and Frege) sentences containing them will lack truth-value, unless the direct reference theorist proposes an ad hoc fix (cf. David Braun). If empty predicates behave classically/ nicely, they won't generate truth-value gaps.

So here's an argument that natural kind predicates like 'phlogiston' are empty in the same way that 'Vulcan' is, i.e. they can fail to express semantic content sufficient to determine truth-value. How? On a Kripkean account, natural kind terms express properties that objects have essentially. To determine what property is expressed by a natural kind term, we take samples of the stuff that that term refers to in the actual world, and determine what is essential to them, i.e. what property (or combination of properties) those samples must have in order to be that kind of stuff. So the natural kind term 'water' expresses the property of being H2O, because in the actual world, having that inner constitution suffices for something to be water. But what is the essential, inner constitution of all the stuff we call 'phlogiston' in the actual world? Nothing -- since there is no such thing as phlogiston, there is no essential property or inner constitution to discover. (Note: James Woodbridge helped me a lot with this argument; if you like it, you should attribute it to him, not me.)

So if phlogiston has no inner constitution, then 'phlogiston' lacks semantic content. And thus (atomic) sentences containing the term will be semantically defective, and thus (presumably) will lack a truth-value. But is this argument any good?

(II. - names) One might think natural kind terms are names, because they appear in subject position:
'Water is wet'
But if they are names, what are they names of? Scott Soames (Beyond Rigidity) gives two 'obvious candidates':
(i) the mereological sum of all the water everywhere, or
(ii) an "abstract type" that is instantiated by the stuff that comes out of our faucets etc.

If it's (i), then there are clearly empty natural kind terms, and 'phlogiston' and 'caloric' are examples. However, Soames gives two arguments against (i): first, if (i) were correct, then 'Water weighs more than 1 million pounds' should be true and felicitous, but it seems clearly not so. Second, if (i) were right, then 'water' would not be (anywhere close to) a rigid designator, and there is a widespread intuition (or Kripkean dogma?) that it is.

If it's (ii), it's not clear to me that there are empty natural kind terms; I don't know how one shows a type does not exist (or, for that matter, how one shows a type does exist). As James Woodbridge and Seyed (in the comments on the first installment of this series) pointed out to me, it seems reasonable to say that an abstract type that is somehow contradictory does not exist, but we'll have nothing like the set-theoretic or semantic paradoxes when it comes to natural kind terms. But I am not all that worried about (ii), because it's not clear (again following Soames in Beyond Rigidity) that natural kind terms are names at all -- they seem to be predicates first and foremost. As Soames points out, 'Whales are mammals' is naturally understood as 'Anything that is a whale is a mammal.' So natural kind terms are not names.

4/07/2008

Indeterminism in developmental biology

There's a review article in this week's Science (v.320, April 4 2008, 65-68) that is potentially of philosophical interest, "Stochasticity and Cell Fate". The bumper sticker version: although a cell's transformation into a specialized subtype is deterministic in most cases, "[i]n some cases, however, and in organisms ranging from bacteria to humans, cells choose one or another pathway of differentiation stochastically, without apparent regard to environment or history."

Discussions of indeterminism in biology have usually been restricted to the 'random' mutations that drive evolutionary change. This, if it holds up, looks to be a quite different kind. And interestingly, the authors point out reasons why a certain degree of indeterminism may confer selective advantage upon organisms whose development contains stochastic elements.

4/03/2008

Are there empty predicates?

Empty names are names that fail to refer, like 'Santa,' 'Pegasus,' and 'Planet Vulcan.' 'Santa Claus' fails to refer because (on most semantics for empty names) there is no entity that is assigned to 'Santa' as its referent. This is clearly distinct from another view (e.g. Frege's) that 'Santa' should be assigned e.g. the empty set as its referent. That is, there is a difference between having no referent and referring to the empty set -- for my cat has no referent, but '∅' refers to the empty set.

So are there empty predicates? That is, are there predicates that do not signify properties (or extensions, kinds, intensions (= functions from possible worlds to extensions), or whatever your preferred semantic value for predicates is)? There are of course predicates whose extension is the empty set (e.g. 'is not identical with itself') -- these predicates signify uninstantiated properties (assuming you think predicates signify properties). But they still signify a property.

There is a fairly massive literature on empty names. (I can recommend Ben Caplan's 2002 dissertation as a nice survey of the empty names landscape.) But there is no talk of empty predicates -- is this because somehow every predicate, unlike names, automatically refers?

Related issue: Philosophers of science often say things like 'phlogiston' and 'caloric' fail to refer. Often, in explaining their claim "The word 'phlogiston' does not refer", these philosophers will say things like "The extension of the predicate 'is phlogiston' (or 'contains phlogiston') is empty." But having the empty set for your extension is different from failing to refer. So when we say that 'contains phlogiston' fails to refer, it seems like we should be saying that it has no (determinate?) extension, not that its extension is empty.
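One way to picture the distinction I'm drawing -- this is just a toy model of my own, not anyone's official semantics -- is to think of an interpretation as a lookup table from predicates to extensions. Having the empty set as your extension means the table assigns you something; failing to refer means you are not in the table at all, and evaluation then yields no truth value.

    # A toy interpretation mapping predicates to extensions (sets of objects).
    interpretation = {
        'is a round square': set(),   # empty extension: the predicate still has a semantic value
        # 'contains phlogiston' is deliberately absent: no extension assigned at all
    }

    def evaluate(predicate, obj):
        """True/False if the predicate has an extension; None (a gap) if it has none."""
        if predicate not in interpretation:
            return None
        return obj in interpretation[predicate]

    print(evaluate('is a round square', 'this pebble'))    # False -- empty extension, but truth-valued
    print(evaluate('contains phlogiston', 'this pebble'))  # None  -- no extension, so no truth value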

So are there any empty predicates? Are such things even possible? And can the usage of the philosophers of science be defended?

3/28/2008

stuff seen in Science

There's been a few items in Science the last two weeks that are potentially philosophically interesting:

[This week]
1. Rats can learn rules and then generalize them to new, different situations to a degree that people previously thought was confined to humans or at least primates.

2. Two smells that are initially indiscriminable to a human can, with painful conditioning, be made discriminable. (The initial indiscriminability was not just in terms of verbal reports of conscious states; the scientists did fMRIs on the patients too.)

[Last week]
3. After your basic needs are met, having more money does not make you much happier (that's been known for a while); however, giving that money away instead of spending it on yourself does have a significant effect on your (self-reported) happiness.

4. One of my favorite biologists, Günter Wagner, argues that pleiotropy (one gene having several effects) is actually not as significant as once thought (philosophers of biology have used pleiotropy to argue for various points).

Of course, this is just the bumper-sticker version of each of these claims; the actual positions will be more sophisticated, and require various caveats. Nonetheless, it seems like each of these studies could merit philosophical attention.

3/14/2008

Logical Pluralism, take 2

This post supersedes the previous one on logical pluralism; it's the post I would've written, had I bothered to do a bit of research before posting. I apologize in advance for how long it is...

Beall and Restall say we should be pluralists about logical consequence, because there are multiple acceptable ways of spelling out the notion of case in the standard criterion of validity:
(V) C is a logical consequence of P1 ... Pn iff in every case where all of P1...Pn are true, C is true also.
Different specifications of case, say B&R, yield different consequence relations.

My impulse is to address this issue 'from above'; that is, in general, when is any sort of pluralism an acceptable stance? Well, one case (though probably not the only one) in which it can be acceptable is when ambiguity is present. E.g., there are multiple, equally acceptable ways of spelling out 'Aaron is at the bank'. And Beall and Restall, on the 3rd page of "Defending Logical Pluralism," state that they think logical consequence is ambiguous; presumably the ambiguity is traced back to the word 'case' in (V) (what else in (V) could it be?). So if 'case' really is ambiguous, and (V) captures the core notion of consequence correctly, then we should be logical pluralists.

But I'm not sure 'case' is ambiguous -- prima facie, it doesn't feel like 'bank' or 'duck'. It might just be 'general in sense', i.e. lacking in specificity: for example, 'sibling' is general in sense between 'brother' and 'sister', and the same goes for 'parent' with respect to 'mother' and 'father.' And nobody wants to be a 'sibling-pluralist' or 'parent-pluralist.' (Note: 'thing', which seems to me closer to 'case', does not seem ambiguous either.)

Linguists, fortunately, have devised a test to distinguish ambiguous expressions from ones that are general in sense: the 'conjunction reduction' test. Suppose my friend Pat is making a monetary deposit, and my friend Tracy is sitting next to a river. Then the two sentences
'Pat is at the bank' and
'Tracy is at the bank'
are both true. However, these truths cannot be expressed in a 'conjunction reduced' sentence
'Pat and Tracy are at the bank,'
for this can only mean that Pat and Tracy are both next to a body of water, or both at a financial institution. No 'crossed reading' is possible. The impossibility of crossed readings in the reduced sentence suffices for the ambiguity of 'bank' (Jay D. Atlas, Philosophy without Ambiguity, OUP 1989, p.40). From the little I've seen, almost all linguists and linguistically-inclined philosophers accept this as an ambiguity test. And most also accept: if crossed readings are possible, then there is no ambiguity. E.g., 'Pat and Tracy are parents' can be read as saying that Pat and Tracy are both fathers, both mothers, or one of each -- and the possibility of that 'one of each' reading is what makes 'parent' non-ambiguous, but rather just general in sense.

With a test for ambiguity in hand, we can see if 'case' and 'consequence' are ambiguous (so pluralism is the right stance) or merely general in sense (in which case pluralism is not obviously the right stance). Let C0 be a construction (of the sort used in semantics for intuitionistic logic), and let S0 be a situation (of the sort used in semantics for relevance logic). (Cf. previous pluralism post if that doesn't make sense.) Then, I think, Beall & Restall would say
'C0 is a case'
is true, and so is
'S0 is a case'.
(If they don't, then 'case' is not ambiguous, but something else.)
Now the question is: Are 'crossed readings' possible for
'C0 and S0 are cases'?
That is: can that last sentence express that C0 is a construction and S0 a situation, or can it only be read as saying that C0 and S0 are both constructions or both situations? I lean towards the possibility of crossed readings, but I'm just not sure.

We might try the conjunction-reduction test with logical consequence as well:
not-not-A (relevantly) implies A [but not intuitionistically]
not-not-A (intuitionistically) implies 'If B, then B' [but not relevantly]
The conjunction reduced sentence is:
not-not-A implies A and 'If B, then B'
Here, a crossed reading DOES seem impossible, to me at least. So that makes it look like 'implies' is ambiguous. (Note: I'm not certain I've formed the correct conjunction reduction sentence.)

Now I'm stuck: it looks like 'implies' is ambiguous, while 'case' is not. If this is right, then we could deny (V) -- but what could we put in its place?

Final speculative thought: 'implies' is like 'before' in relativistic physics. If two events are spacelike related (= no signal can be passed between them), then 'e1 is before e2' is neither true nor false until a frame of reference is specified. In one frame of reference e1 will be before e2, and in another frame, e2 will be before e1. But once the frame (and a simultaneity convention) is chosen, then it becomes fully determinate which precedes the other. BUT no one frame is the 'right' one; each frame 'has equal rights'.
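For readers who want the physics spelled out: under a Lorentz boost the time separation between two events goes to Δt' = γ(Δt − vΔx/c²), and when the separation is spacelike the sign of Δt' depends on the frame. Here is a quick Python sketch in units where c = 1; the particular numbers are just illustrative.

    # Time separation between two events in a frame boosted by velocity v (c = 1).
    import math

    def boosted_time_separation(dt, dx, v):
        gamma = 1.0 / math.sqrt(1.0 - v**2)
        return gamma * (dt - v * dx)

    dt, dx = 1.0, 5.0   # spacelike: |dx| > |dt|, so no signal connects the events
    print(boosted_time_separation(dt, dx, 0.1))  # > 0: e1 before e2 in this frame
    print(boosted_time_separation(dt, dx, 0.5))  # < 0: e2 before e1 in this frame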

The analogy: picking a particular frame of reference is like picking situations over constructions or models etc. As there is no one right frame, so too there is no one right truth-maker. So 'A implies B' is then NOT like 'Aaron is at the bank', for I can use that second sentence meaningfully (= truth-valued-ly) without supplying some further information -- unlike 'e1 is before e2' for spacelike-related events. But once one picks a frame, all instances of 'e1 is before e2' become truth-valued; analogously, once one picks a specification of 'case', every instance of 'Does C follow from {P1, ... Pn}?' has a determinate answer.

In sum: ambiguity may not be the best way of thinking about the different consequence relations; rather, perhaps we should see how well we can make out this analogy with relativistic physics.

2/28/2008

Logical pluralism and brother-in-law pluralism

In my philosophy of logic class this week, we discussed JC Beall and Greg Restall's version of logical pluralism. Our text was their 2000 Australasian Journal of Philosophy article, available on Restall's website here. I've been flipping through their fantastic book-length treatment (OUP, 2006) as well.

Here's their basic idea. The basic, accepted notion of logical consequence is adequately captured in the following:

(V) A conclusion C is a logical consequence of premises P1, ..., Pn = in every case in which all of P1, ..., Pn are true, C is also true.

Beall and Restall further hold that the notion of case admits of a number of "precisifications" (2006, 88), that is, it can be 'spelled out' or 'fleshed out' in more than one way. [Note: I can't find the quotation now, but I think Beall and Restall said that 'case' is neither ambiguous nor vague (in the sense of having borderline examples).] [CORRECTION (3/2/08): In their "Defending Logical Pluralism," Beall and Restall explicitly say that they think the concept of deductive consequence is ambiguous (p.3). This more or less vitiates the main point of this post. I say 'more or less' because there is a test accepted by linguists for distinguishing ambiguity from lack of specificity, and it's not clear that B&R's concept of 'case' passes the test; see my comment #6 in the comment thread.] Different spellings-out of 'case' give rise to different consequence relations (and thus different logics); as examples of cases, they give:
(i) Classical Tarskian models, (ii) possible worlds, (iii) constructions (which yield intuitionistic logic), and (iv) situations (which yield relevant logic).
Finally, because there are multiple ways of spelling out 'case', there is not one correct notion of consequence: different consequence relations correspond to different ways of specifying the content of (V).

So if someone asks: "Does an arbitrary sentence p follow from a contradiction 'q and not-q'?", the pluralist answer is "Yes and No -- yes, it follows classically (when we take Tarskian models as cases), but no, it does not follow relevantly (when situations are the cases)." Similarly, the pluralist answers the question "Is 'p or not-p' a logical truth?" with "Yes and No -- yes, it is a classical logical truth (since it is true in all Tarskian models), but no, it is not an intuitionistic logical truth (since it is not true in all constructions)".

I find Beall and Restall's position attractive. But while thinking about it, I wondered about when, in general, pluralism is the right (or at least a reasonable) position to take. B&R appeal to the fact that 'case' can be precisified in more than one way -- the meaning of 'case' is somehow underspecified or indeterminate -- to justify being pluralists about 'case' and thereby, via (V), about consequence. However, I wonder whether, if this rationale were accepted across the board, pluralism would be almost everywhere, and the appropriate answer to many, many questions would be "Yes and no".

Here's an example of what I mean. The meaning of the phrase 'my brother-in-law' is not completely specific; it is indeterminate between the brother of my spouse and the male spouse of my sibling. However, nobody is a "brother-in-law pluralist": When someone asks me "Is Leon your brother-in-law?", I shouldn't reply "Yes and No -- yes, he is the brother of my spouse, but no, he's not the male spouse of my sibling." And what holds for 'brother-in-law' holds for many, many other terms: lack of specificity is everywhere.

Hopefully the analogy is clear: 'case' and 'brother-in-law' can both be made (more) determinate in different ways. But if this underspecification in the notion of 'case' is all that is required to justify pluralism about consequence, then we should also be pluralists about 'brother-in-law', since there is underspecification there too.

How might someone sympathetic to logical pluralism (e.g. me) respond to this challenge? Well, we could find an example where pluralism seems like the right (or at least reasonable) attitude, and try to argue that 'case' is (more) like that example. For example, I think pluralism about the concept of 'thing' is reasonable: if someone holds out a deck of cards, and asks me "Are there 52 things here?", the right (or reasonable) answer should be "Yes and No -- yes, there are 52 cards, but no, there are far more than 52 molecules".

The question is then: What makes 'thing' different from 'brother-in-law'? And is 'case' (in Beall and Restall's use) more like 'thing' or 'brother-in-law'? The pluralist wants 'case' to be more like 'thing', but I haven't yet figured out how to draw a sharp line. Any ideas?

2/22/2008

Which came first: logical truth or consequence?

This term, I am teaching a philosophy of logic class. We've twice run across the following sentiment:

(CPT) Logical consequence is prior to logical truth.

This sentiment is also expressed as 'The real subject matter of logic is the notion of consequence, not a special body of truths.' (References: We've seen this in Stephen Read's Thinking about Logic Ch.2, and in John Etchemendy's work (1988, p.74) too.)

Seeing (CPT) surprised me, since logical truth is (in most cases -- see below) definable in terms of logical consequence, and vice versa: If C is a consequence of P1 ... Pn, then 'If P1 and ... and Pn, then C' is a logical truth. And if T is a logical truth, then T is a consequence of the null set of premises. This is well-known: Beall and Restall, in their recent Logical Pluralism, make exactly this point.

So, in light of the interdefinability of logical truth and consequence, what would prompt someone to say consequence is somehow prior to logical truth? Stephen Read appeals to valid arguments that ineliminably use infinitely many premisses: A(0), A(1), ... Therefore, ∀x Ax. We can't turn this into a logical truth ('If A(0) and A(1) and ..., then ∀x Ax') in standard languages, because standard languages don't allow for infinitely long sentences. This seems like a fair point in favor of (CPT), but it does assume that (i) you accept arguments with infinitely many premises, and (ii) reject languages with infinitely long expressions. [Edit: as Shawn correctly notes in the comments, these two assumptions are fairly widely held. But I have always been a bit suspicious (perhaps for no good reason) about the idea of an argument with infinitely many premises.]

Here's another argument for (CPT), from extremely weak languages. Imagine we have a propositional language with sentence letters p, q, ..., and only two sentential connectives, 'and' and 'or', specified in the usual way. In this language there are no logical truths: we don't have 'If... then...' or anything equivalent, and indeed any sentence built from 'and' and 'or' alone comes out false under the valuation that makes all its sentence letters false, so no sentence is true in every case. But there are still logical consequences: A is still a logical consequence of 'A and B', and 'A or B' is a logical consequence of A. So here is a case where we have logical consequence without logical truth.
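If you like brute force, here's a little Python check (my own) of both halves of the claim, for formulas built from 'and' and 'or' over two sentence letters:

    # In a language with only 'and' and 'or', there are no tautologies,
    # but there are still logical consequences.
    from itertools import product

    def evaluate(formula, valuation):
        if isinstance(formula, str):                 # a sentence letter
            return valuation[formula]
        op, left, right = formula
        if op == 'and':
            return evaluate(left, valuation) and evaluate(right, valuation)
        return evaluate(left, valuation) or evaluate(right, valuation)

    letters = ['p', 'q']
    valuations = [dict(zip(letters, vals)) for vals in product([True, False], repeat=2)]

    # Build all formulas of small depth using only 'and' and 'or'.
    formulas = list(letters)
    for _ in range(2):
        formulas += [(op, a, b) for op in ('and', 'or') for a in formulas for b in formulas]

    print(any(all(evaluate(f, v) for v in valuations) for f in formulas))
    # False: none of these formulas is a logical truth
    # (each is false when p and q are both false)

    # But consequence survives: every valuation making p true makes (p or q) true.
    print(all(evaluate(('or', 'p', 'q'), v) for v in valuations if v['p']))  # True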

But both of these arguments (Read's and mine) rely on somewhat unusual cases. Are there other reasons to accept (CPT) that do not appeal to unusual circumstances? Is there a big literature out there that I don't know about? And does anything really hinge upon whether we think logical truth is prior to consequence, vice versa, or neither?

1/13/2008

Boghossian on ('metaphysical') analyticity

I've been thinking recently about an objection that Paul Boghossian (and many others) have made against the Tractarian/ Carnapian conception of an analytic truth, viz. a sentence that is true solely in virtue of the meaning of the sentence. (Boghossian calls this kind of analyticity 'metaphysical analyticity,' which I think is potentially misleading, given the staunch anti-metaphysical tastes of the logical empiricists. Oh well.)

Boghossian considers the notion of metaphysical analyticity untenable. Why? He asks a rhetorical question:
"Isn't it in general true---indeed, isn't it a truism---that for any statement S,

S is true iff for some p, S means that p and p?

How could the mere fact that S means that p make it the case that S is true?" (Boghossian 1996, "Analyticity Reconsidered," Nous, p. 364)


Boghossian is not alone in this view: the basic idea can be found in Quine's "Carnap and Logical Truth," and is developed by Gilbert Harman, Elliott Sober, and Margolis & Laurence. How should we interpret this rhetorical question? Boghossian appears to be claiming that the truth of a sentence of the form 'S means that p' is never a sufficient condition for the truth of a sentence of the form 'S is true' -- that appears to be the intended force of the rhetorical question in the quotation immediately above. And that is certainly one reasonable way of cashing out the notion of the truth of a sentence being 'fixed exclusively by its meaning.'

If we do understand Boghossian's view in this way, then I think his claim is either misleading or incorrect. Consider a standard material biconditional of the form

(1) p iff q

If such a biconditional is true, we usually say that q is a necessary and sufficient condition for p. But as we teach undergraduates in Introduction to Logic classes, if this biconditional is true, then (within the classical propositional calculus) so is

(2) p iff [q and (r only if r)]

(Any other logical truth of the classical propositional calculus could be substituted for r only if r.) If we simply read off the surface structure of sentence-schema (2), one might think that q was no longer sufficient for the truth of p--because there appears to be a second condition that has to be met in order for p to be the case, namely that r only if r. Of course, strictly speaking, this is true: every sentence of the propositional calculus presupposes the truth of all the logical truths of the propositional calculus. However, it seems seriously misleading to me to say that the truth of q is not a sufficient condition for the truth of p in our original biconditional--for that is not the way we standardly understand sufficient conditions.
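(For the suspicious, here is a quick truth-table check -- my own, just confirming the standard classical point -- that (1) and (2) agree under every assignment, so the extra conjunct 'r only if r' never makes a difference.)

    # (1) p iff q   and   (2) p iff [q and (r only if r)]   agree under every assignment.
    from itertools import product

    def iff(a, b):
        return a == b

    def only_if(a, b):          # 'a only if b' is the material conditional a -> b
        return (not a) or b

    agree = all(
        iff(p, q) == iff(p, q and only_if(r, r))
        for p, q, r in product([True, False], repeat=3)
    )
    print(agree)  # True: the added conjunct is a logical truth, so it changes nothing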

Hopefully the direct parallel with Boghossian's claim is clear. I certainly agree that his 'truism' quoted above is true. However, when a logical truth--which, as Carnap and Quine agree, is a paradigmatic case of analytic truth (if there are any)--is substituted for p in his schema, then that instance of the truism will have (almost) exactly the form of the second biconditional (2). Then, in the usual sense of 'sufficient condition,' we will have a case in which (contra Boghossian) an instance of 'S means that p' is sufficient for 'S is true.' To say otherwise, we would have to give up either classical logic (specifically, the idea that (2) follows from (1)) or the usual understanding of sufficient conditions.

However, one could object that neither classical logic nor our standard view of sufficient conditions is sacrosanct. I think there are reasonable replies to these objections (telegraphically: for whatever non-classical logic you choose, you can substitute some other logical truth for 'r only if r' in (2) above, and the point carries); but I'll leave matters here since this post is too long already.

Comments and criticism from any angle are very welcome, but what I personally go back and forth on with the above argument is whether it's a 'cheap point' or not... superficial logic-chopping, or genuine insight?