Newton's God

The following three claims from the General Scholium of Newton's Principia have never made sense to me:
God "is not duration or space, but he endures and is present. He endures forever, and is everywhere present; and by existing always and everywhere, he constitutes duration and space. ... He is utterly void of all body and bodily figure."
What I cannot understand is how anything could exist always and everywhere, yet be neither body nor space nor time. Is there some further option for Newton? (Additionally, what does it mean that God 'constitutes' space and time?)

One of my students cleverly noted that Newton introduced the notion of a mutually attractive force between every two bodies in the universe; perhaps this can serve as an analogy for Newton's God, giving me the 'further option' I'm hoping for -- since such a force is neither body nor space nor time? Unfortunately, in that same Scholium, we find an important disanalogy between the universal attractive force and God:
"In him are all things contained and moved, yet neither affects the other: God suffers nothing from the motion of bodies; bodies find no resistance from the omnipresence of God."
Presumably, any supposed force that affects nothing at all is not a force at all.

Are there any other possibilities for helping Newton out?

p.s. I found the following attempt to save Newton, from my old teacher Ted McGuire, in his "Existence, Actuality and Necessity: Newton on Space and Time," Annals of Science 1978 (how did people do research before the internets?):
"We can plausibly reconstruct the following argument. To say that God is everywhere with respect to space, is to say that one and the same individual exists at each place in extended space. To make such a claim is a contradiction only with respect to the manner in which extended things exist spatially. For they cannot as complete beings exist at once in each "part" of the place they occupy. But there is no contradiction in holding that an essentially non-extended being is capable of so existing. ... God is therefore omnipresent just in the sense that he remains numerically and unalterably the same individual at all places whatsoever. The conception appears paradoxical. However, Newton would claim that it only seems so, if we persist in imagining God's presence in space as we think of bodies individually occupying their determinate places." (p. 506)

But we can't allow ourselves the premise that Newton's God is "an essentially non-extended being," since in the General Scholium, Newton (apparently) says exactly the opposite. (Ted is right that Newton's God is incorporeal, but that isn't the same as non-extended -- think of Newton's absolute space.)


Newton and Wittgenstein -- long lost cousins?

As I was preparing for my History of Scientific Thought class tomorrow, I noticed the following line in Newton's Principia: "the meaning of words is to be determined by their use." (It's in the 14th paragraph of the Scholium to the Definitions, if anyone cares.) This is a revolution in Wittgenstein historiography! We obviously need to re-write all the history textbooks: Wittgenstein should no longer be seen as making a radical break with tradition, but rather advocating a return to orthodox Newtonianism...

But seriously: does anyone else know of other clean expressions of the Wittgensteinian 'meaning is use' slogan in the centuries before Wittgenstein?


Intelligibility in 17th and 20th century scientific philosophy

Should scientific theories be understandable -- or at least, aim at understandability? Of course, the term 'understandable' is vague and/or ambiguous, perhaps so much so that this question is some combination of pointless and unanswerable. But let's suppose the concept of understandability (or, equivalently, intelligibility) is sufficiently determinate.

One might think that understandability is not a standard scientific claims must meet. Sure, it's nice when we can have it; but as certain physicists say, 'Nobody really understands quantum mechanics.' On this view, evidence can justify a theory to a sufficient degree to motivate acceptance of that theory, even if that theory is (in some sense) not completely intelligible.

I don't know what the answer to this question is. But I was recently struck by a similarity between the mechanical philosophers of the middle 17th C and the logical empiricists of the early 20th C. In The World, Descartes says he will describe a universe that only has properties that everyone, even the dense, can fully understand (viz., the shape, size and motion of matter). In "On the Grounds and Excellency of the Mechanical Hypothesis," Robert Boyle claims that one of the advantages of the mechanical philosophy over that of the Peripatetics and the Paracelsian chemists is that the mechanical philosophy only uses terms understood clearly by everybody. The idea is that the word 'cube' will call up the same idea for everyone, whereas 'substantial form' (Peripatetics) or 'active principle' (Chemists) will not. Just about everyone can agree on whether a particular thing is cubical or not -- we will enjoy less agreement about whether a particular substance contains an active principle or not... or about what an active principle is, exactly.

What I now recognize is that certain aspects of logical empiricism have very much the same spirit. For Carnap, Neurath, and other logical empiricists, one of the primary aims of the language reforms they proposed was to guarantee the intelligibility or understandability of sentences that aim to state facts. Neurath proposed the adoption of a 'Universal Jargon' that was supposed to be a refinement of everyday language; Carnap proposed various types of languages at various points in his career -- the language of sense perception was the most fully developed in the Aufbau, and the language of middle-sized dry goods was preferred after that. Why these languages? Because, in each case, such sentences were supposed to be the most paradigmatically or obviously meaningful sentences; if (e.g.) everyday language is not meaningful, then nothing is. Furthermore, these sentences mean the same thing (or close) to all competent speakers; Philipp Frank, for instance, embraced such language-reform projects in order to counteract the Tower of Babel-esque proliferation of jargons in various scientific sub-disciplines.

I think further parallels between the 17th and early 20th C philosophers can be tied into this issue: both groups want to unify science (though in different ways, I think) and to eliminate the excessive metaphysical speculations of their respective times. I think the impetus toward intelligibility, in both cases, drives (at least in part) these other programs.


On the Darwinian explanation of the success of science

I really don't have time to post now, but I'm going to anyway. Van Fraassen writes: "I claim that the success of current scientific theories is no miracle. It is not even surprising to the scientific (Darwinian) mind. For any scientific theory is born into a life of fierce competition, a jungle red in tooth and claw. Only the successful theories survive--the ones which in fact latched on to actual regularities in nature." (Scientific Image, p.40)

James Robert Brown, in "Explaining the Success of Science" (Ratio, 1985) agrees that this Darwinian explanation can account for the first two aspects of success, but not the third:
(1) The sciences "are able to organize and unify a great variety of known phenomena.
(2) This ability to systematize the empirical data is more extensive now than it was for previous theories.
(3) A statistically significant number of novel predictions pan out; that is, our theories get more predictions right than mere guessing would allow." Brown says of (3): "Here the Darwinian analogy breaks down since most species could not survive a radical change of environment, the analogue of a novel prediction."

First a small point: I don't think a novel prediction needs to be analogized to a radical change in environment -- perhaps some should be, but it's not necessary. If an organism can handle living and reproducing in any new environment, i.e., one for which its various features were not historically adapted, then that seems a decent enough analogy to a novel prediction (novelty being what distinguishes a prediction from the cases the theory was originally designed to handle). A 'radical' change in environment might precipitate a scientific revolution -- i.e., the science (like the organism) might not survive.

Now, a more substantive point, and one which perhaps pushes the analogy farther than is fair. The paleobiologist David Jablonski has shown that genera that are more geographically widespread are more likely to survive mass extinction events (such as the meteor that killed off lots of the dinosaurs). The analogy would be, I suppose, to groups of related theories that 'organize and unify' a greater variety of phenomena -- which are precisely the groups of theories that we (including van Fraassen) count as most successful. So it appears that a van Fraassenite Darwinian has a nice answer to J.R. Brown: viz., the more successful groups of theories will be more likely to deliver novel predictions.

But unfortunately for the van Fraassenite, the biological story doesn't end there. What is strange about Jablonski's results is that a species' being geographically widespread has no statistical correlation with its probability of surviving a mass extinction event. The correlation only appears at the level of genera. (Side note: For the philosophers and biologists who think about group selection, this looks like an instance of it.) So, the analogy would go, the more unifying particular theories do not enjoy any advantage in novel prediction over the less unifying, but the more unifying groups of theories would. Hopefully you can see why I suggested that this may be pushing the analogy too far: I'm not sure there's anything in the domain of science that would correspond nicely to the concepts of genus and species in the evolutionary domain. Although (and now I'm really stretching), if one could be made out, perhaps the structural realists could cash out their notion of structure at the level of the genus, and thereby capture why particular theories come and go, but the structure tends to survive through revolutions.

p.s. -- Can anyone recommend a good article completely devoted to arguing for or against this Darwinian explanation of science's success? I've seen several parts of book chapters or parts of papers dealing with it, but I can't recall seeing a fine-tooth-comb analysis of it.


Aristotle's natural motion and modern inertial motion

Last week, in my history of science class, we finished the unit on ancient science and medicine. At the end of the unit, I asked my students the (ill-posed) question: so is any of this stuff we've been reading science or not? (We read Plato's Timaeus, Aristotle's Physics II and On the Heavens, Ptolemy's Almagest, and bits of Epicurus as well as Hippocratic writers.)

One sentiment that came to the fore was that the ancients were (on average) more willing to countenance teleological explanations in natural sciences than we are. I think this is definitely right on the whole. But I did want to ask a question about an example sometimes forwarded in defense of this claim. Aristotle says that part of what makes the element earth earth is its tendency to move towards the center of the universe (since Aristotle thought the planet Earth was at rest in the center of the universe). Air and fire move away from the center of the universe, and the celestial matter moves in a circle around the center. These 'natural motions' are taken to appeal to final causes in a way that modern science does not: water and earth have a 'goal' or 'end' (Greek telos), viz., the center of the universe. And (so the story goes) matter from the Early Modern period onwards, starting with Descartes at the latest, is not like that at all.

I, unfortunately, cannot make out a substantive difference here -- we can describe Aristotle and the moderns in the same terms: we can make Aristotle sound more modern, or make Newton et al. sound more teleological. In the modern dynamical picture, we have inertial motion: a body in motion will maintain that motion (speed and direction), unless acted upon by an outside force. If the telos of an Aristotelian hunk of earth is the center of the universe, the telos of a modern bit of matter is (something like) self-preservation. Its goal is resistance to change (of direction and speed).

Alternatively, we can characterize Aristotle as more modern: Aristotle is describing what happens to a bit of fire or water if it's just "left alone," i.e., what does a body do when it is free of any interference? Aristotle clearly disagrees with the moderns about what a body does when "left alone"... but Einstein disagreed with Newtonians over the same issue. In other words, we can think of the difference (on this topic) between Aristotle and the moderns as a disagreement over which trajectories are the ones bodies will follow when no external forces act upon them. Teleology doesn't appear here at all.

Of course, I could be overlooking something obvious in the Aristotelian text. Hopefully any real Aristotle scholars out there reading this will tell me if I have. If you want to check the text of On the Heavens for yourself, it’s online here.


You might be living in barn facade county if...

Something happened to me last week that was weird enough that I felt like I was living out a philosopher's thought-experiment. I can't figure out what philosophical thesis this supports, so I figured I'd throw it out to the blogosphere, and let sharper people appropriate it if possible. Here goes...

Before I was married, my name was Greg Frost. I did many things while my last name was Frost, including buying a car. The title of the car was in the name of Greg Frost, and I've been too lazy and cheap to get it switched in the interim. But since we just moved out to Las Vegas, I wanted to get the title switched from a Pennsylvania title to a Nevada one. Now, when you switch titles, what you have to do is sell your car to yourself (for free) -- don't ask me why, I don't know. But at the Nevada DMV, I ran into a problem: I had to be Greg Frost as the seller, and Greg Frost-Arnold as the buyer. But Greg Frost hasn't existed for the last 2.5 years... yet he had to be "brought back" for this transaction. [This is sloppy and metaphorical, but you get the idea.]

There has to be a new theory of reference somewhere in here...


Two improvements to the internets

For those of you not on the HOPOS distribution list: the Royal Society is making every single one of the journals it's ever published freely available online for 2 months -- all the way back to the first issue of The Philosophical Transactions of the Royal Society in 1665. These are some of my favorite documents in the history of science to look through, because (like many texts in the history of science) they seem so foreign, yet traces of current practices are nonetheless there.

Also, the fearless leader of my department, Todd Jones, has just started a new blog, Anthrophilosophy. He has formal training in philosophy, anthropology, and cognitive science, so he brings a new perspective on many issues, different from the way most philosophers look at them. His first post is an interesting anthropological explanation of recent election results -- if that's the sort of thing that floats your boat, please check out his blog and leave a comment, if you are so inclined.

Hopefully there will be a real post soon, but I haven't had any good blog-sized thoughts lately...


Doctrine of double effect & the Knobe effect

On Monday, Carl Ficarrotta gave a colloquium talk here on the Principle (or Doctrine) of double effect (PDE), focusing especially on applications of that principle in military targeting. The principle is (quoting from the Stanford Online Encyclopedia, which quotes from Joseph Mangan (1949)):
"A person may licitly perform an action that he foresees will produce a good effect and a bad effect provided that four conditions are verified at one and the same time:
(1) that the action in itself from its very object be good or at least indifferent;
(2) that the good effect and not the evil effect be intended;
(3) that the good effect be not produced by means of the evil effect;
(4) that there be a proportionately grave reason for permitting the evil effect” (1949, p. 43).
Ficarrotta suggested that the justification or grounds for (3) -- something in the neighborhood of Kantian respect for persons -- might make the majority of cases of 'collateral damage' morally impermissible -- even though the PDE is often invoked to justify such military actions.

I'm interested in something else about the PDE; specifically, it seems like the Knobe effect shows that (2) is untenable. Roughly, the Knobe effect is: people judge bad side-effects to be intentionally caused, though people do not judge good side-effects to be intentional. The problem for (2) is obvious: if there's an evil side-effect of an action, then that side-effect will be judged to be intended. (Ficarrotta's own formulation of (2) perhaps makes the problem even more perspicuous: "Evil consequences are foreseen, but not intended." If the folk's attributions of intentionality are accepted, then there won't be any foreseen evil consequences that are not intentional.)

The New Catholic Encyclopedia version of (2) (again, this is in the Stanford Online Encyclopedia entry) actually reflects the Knobe idea: such actions are called 'indirectly voluntary', i.e., intentional, but somehow in a second-class sort of way.

And finally, a google search reveals that the PDE and the Knobe effect are briefly dealt with in a footnote to this paper by Jen Wright and John Bergson, which was recently posted to, and discussed on, the Experimental Philosophy blog. They make the good point that there is a further experimental question to ask the folk: do ascriptions of intentionality track blameworthiness or responsibility? Someone in a PDE situation appears to be responsible for the evil side-effect, even though she is not to be considered blameworthy on that account (at least, if the PDE is right).


Bioethicists take note

One of the bigger areas in bioethics is the cluster of questions involving the end of life, such as, Is euthanasia morally permissible? Some folks are attracted to the idea that, if a person is in a vegetative state, then they may lack conscious awareness, and as a result, euthanasia may be permitted for certain vegetative patients.

A new article in Science looks like it provides evidence that undermines one of the above premises, namely, that patients in vegetative states are not conscious. Here's the abstract:
We used functional magnetic resonance imaging to demonstrate preserved conscious awareness in a patient fulfilling the criteria for a diagnosis of vegetative state. When asked to imagine playing tennis or moving around her home, the patient activated predicted cortical areas in a manner indistinguishable from that of healthy volunteers.
The authors of course do not claim that every (or even many) vegetative patients maintain this level of consciousness. Furthermore, they recognize that some people may not consider even this amount of brain activity to amount to conscious thought. But their next step is an encouraging one: developing a battery of fMRI tests to determine whether a particular patient has this degree of consciousness or not -- so we no longer have to guess at whether someone who's been unresponsive for 5 months (as their test subject had) is conscious or not.

UPDATE: I now see that Brains not only beat me to this story, but Pete Mandik has much more insightful commentary on it too. Oh well.


Two bits of good news

First, the current issue of Nature contains an article providing evidence that the new pope is about to decline to endorse intelligent design, just as John Paul did before him. Since it's hidden behind a subscription wall, I'll reproduce the first 1.5 sentences:
Religion is religion, science is science, and good fences make good neighbours. That seems likely to be the thrust of an expected clarification by the Roman Catholic Church of its position on biological evolution.

Second, the program for the Philosophy of Science Association 2006 meeting is up, and it looks very interesting. I'm especially glad to see that the program committee selected Rick Creath's paper on the historical trajectory of Quine's use of the concept of simplicity in various arguments ("The Career of Simplicity in Quine's Philosophy of Science"). Rick has been a leader in the revival of Carnap (and logical empiricism more generally) as a legitimate object of study for philosophers of science (his "Was Carnap a Complete Verificationist in the Aufbau?" was part of the PSA all the way back in 1982). For my own sake, I'm glad to see that Quine is now on the radar as a figure who needs to be understood historically, instead of merely as an interlocutor or colleague.


Helpful site: Philosophy Conferences in Europe

A new member of my department, Marion Ledwig, alerted me to this very nice list of conferences in Europe for 2006-07. Speaking of which, I am very sorry to be missing GAP.6 [Gesellschaft fur Analytische Philosophie] two weeks from now, and especially their Carnap workshop organized by my man Steve Awodey.

In other news, I now feel a bit less guilty/ fraudulent about claiming philosophical logic as one of my Areas of Specialization on my CV: I just heard back from the Journal of Philosophical Logic that they'll be publishing a paper of mine. (The paper is the same one I presented at the 2005 Eastern APA, on formal semantics for languages containing confused/ ambiguous terms; if you want to look at it and help me improve it before the final submission, it's on my webpage.)


Israel, Lebanon, and the Knobe Effect

Despite the title, this post is not about politics. The Knobe Effect is roughly the following: people consider foreseen side effects to be (more) intentional (or on purpose) if those side effects are bad than if they are good. That is, if you do something that has a beneficial foreseen side-effect, you won't be seen as bringing about that side-effect on purpose, but you would if the side effect was harmful or bad. This result has been shown to be experimentally robust in several groups of subjects.

Disputes concerning the Knobe effect arise in the interpretation of this experimental finding. Knobe himself takes these results to show that our concept of intentional action is essentially tied to our moral sensibilities -- somewhat surprising, since we don't usually think of intention and morality as closely linked. Other philosophers have suggested more 'deflationary' readings of the experimental results; for example, we want to blame someone for bringing about a foreseen, bad side effect of their actions -- and as a general rule of thumb, we only legitimately blame people for things they do on purpose. So on this interpretation, the Knobe effect is seen as a sort of confabulation or rationalization for our practices of praising and blaming -- not as bearing on the very concept of intention itself. Several papers by Knobe and co-authors are available on Knobe's webpage, along with papers responding to his work. If you prefer your philosophy in blog form, there has been a great deal of discussion of this work over at Experimental Philosophy.

Recent events in Lebanon provide an example of the type of situation in which the Knobe Effect appears. Israel intends to destroy Hezbollah's military capabilities, and has used various forms of military force as a means to that end. Since many of Hezbollah's forces are located in places with high civilian population density, one foreseen side effect of Israel's use of force to disarm Hezbollah is a tragically high number of civilian casualties.

On NPR, I heard a high-ranking Israeli military official justify his country's military action by saying, in effect: We Israelis are not aiming to hurt any civilians -- our goal is only to stop Hezbollah from launching strikes into Israeli territory. There are two things I wanted to say about this:
(1) If only this high-ranking Israeli official had read the work of Knobe et al., he would have known that this excuse would not carry much water, if any at all -- we are blamed for foreseen bad side-effects, even if they are unintentional.
(2) My reaction/ intuition in this case is against Knobe's stronger interpretation of the experimental results, and with the deflationists': I think the defense official has a perfectly good grasp of the concept of purpose or intentional action, even when he says "We're not harming Lebanese civilians on purpose." This doesn't sound like "John is a married bachelor" or "This is a square circle" to me.


Now broadcasting from the desert

I have not posted in a long time -- I've been busy with my move from Pittsburgh to Las Vegas, and with all the craziness that attends moving cross-country and starting your life over. But we are starting to settle in, so blogging may pick up again soon.

Several months ago, Doug Patterson asked me if I could contribute something to a volume he's editing for OUP called Alfred Tarski: Philosophical Background, Development, and Influence. I was honored, since many of the other contributors breathe rarified logico-philosophical air, so I cobbled an article together out of various bits of my dissertation. I now have a draft of the paper, boringly entitled "Tarski's Nominalism," and I would greatly appreciate any and all feedback from interested readers. To help you determine whether you are an 'interested reader,' I've cut-and-pasted a bit of the intro:
"This essay aims to answer three related questions about Tarski's self-described 'nominalism with a materialistic taint' through an examination of Carnap’s 1941 dictation notes. First, what is Tarski’s view? Second, what are the rationales for his view? Finally, how does Tarski attempt to reconcile his nominalist philosophical scruples with mathematics, since mathematics deals with paradigmatically abstract objects, such as numbers and sets, whose rejection is a standard sine qua non of modern nominalism?"

As a brand-new Pitt HPS alumnus, I wanted to sing the praises of a couple members of the incoming class. First, Jonah Schupbach, of Berkeley, Bacon, & Bird blog-fame, just had a paper published in Philosophy of Science on one of my favorite topics, the evidential and explanatory role of unification in science. And Jason Byron (nee Baker) has written an article, forthcoming in BJPS, that argues for a point that I became convinced of while doing work for my dissertation. Just to get a sense for what the logical empiricists were thinking about in 1940, I flipped through a few journals from the 1930s that they were reading and publishing in, especially Erkenntnis and Synthese. I was very surprised to find that there were lots of articles on philosophical details related to biology -- some of them dealing rather closely with the science. This was surprising because the usual story among current philosophers of biology is that philosophy of science completely (or almost completely) ignored biology until the late 60s. Jason has now done the detailed spade-work needed to substantiate that impression I had.


Something else about Mary the neuroscientist

After writing my recent post about Sue Barry, the real-life example of Frank Jackson's Mary the neuroscientist, I received a very interesting email from... Sue Barry. Prof. Barry's email was generous and insightful, and I felt fortunate that she took the time to write me. I just wanted to mention a couple other striking things (for philosophers, at least) about her case, which came up in both her email to me and the NPR story about her that (at least here in Pittsburgh) was broadcast a day or two after my original post.

1. Sue had studied descriptions of what stereoscopic vision was like before she acquired it herself. She thought, back then, that she could imagine (at least roughly) what having stereoscopic vision was like. After getting such vision, though, she discovered that what she imagined it was like was completely different from the actual experience.

2. In describing the difference between her previous and current visual perceptions, Sue says that she can now perceive space, whereas she (now realizes that she) couldn't before. This strikes me as interesting, because we (and here I mean both philosophers working on particular problems in epistemology and philosophy of mind, as well as non-philosophers) usually think of the objects of perception as things, or attributes of things, or the like. (Consider: 'What do you see?' Compare "I see an apple" with "I see space".)

Finally, one thing that comes out clearly in the New Yorker article, the NPR story, and the email is that Prof. Barry is a generous, magnanimous person -- and that she is now getting a great deal of pleasure from an aspect of her perceptual system that most of us take for granted.


Happy birthday

This blog started one year ago today. Blogging has actually been a more rewarding experience than I expected, mostly because it's put me in contact with smart and interesting folks (in both cyberspace and meatspace) that I otherwise would not have met. It's repeatedly been extremely helpful to hear other people's reactions to what's bouncing around inside my head.

I'm not sure what the future holds for Obscure and Confused Ideas. On the one hand, I start my first academic job in just under 2 months, and everyone tells me that the first year is pretty brutal -- so if anything is squeezed out by time pressures, it might be blogging. On the other hand, it'll be the first time in seven years that I won't be surrounded by 30 or so other people interested in history and philosophy of science, so I may have to bounce my ideas off the online community instead of my current offline community. We'll see.

Just so this post contains something other than insufferable self-absorption, I'm linking to a Colloquium Bingo game card, produced by the grad students at Johns Hopkins's Program in the History of Science, Medicine, and Technology. And one other thing: I was looking over the Experimental Philosophy blog again recently, and was filled with a mixture of admiration and envy -- for it looks to an outsider like me that they really have a genuine online research community there. People post new papers-in-progress, which are given careful and serious feedback by several people, and the authors engage in a substantive conversation about their work. It appears to be a model of what people wish the web would do -- link up people, in a real and almost intimate way, separated by thousands of miles. I wonder why Experimental Philosophy has succeeded here, while other blogs with the same basic idea (for example, Philosophy of Biology, which has apparently disappeared) have not done as well.


Analyticity in model-theoretic languages

Part of why I am drawn to philosophy of science and logic is that I like to operate with clean and neat formulations of apparently messy concepts -- and these two sub-disciplines of philosophy embrace such tastes more than other sub-fields. Of course, I am not claiming that ethicists and metaphysicians are muddle-headed; most think about their sub-discipline's topics with far more clarity and rigor than I can. I am merely expressing a personal preference for studying deontic logic instead of the most recent form of consequentialism.

Enough autobiography -- I mention it only to explain my motivation for this post. And my point is this: if we adopt the usual formalization of an interpreted language (viz., the model-theoretic one), then we apparently cannot capture the notion of analyticity -- at least in the way Carnap, who is widely recognized as the champion of analyticity, conceives of it.

Conceiving of a language in model-theoretic terms is one widely-used way of introducing precision into a philosophical endeavor. Most readers probably can recite the definition of a model-theoretically understood language by heart, but for the innocent:
A language L consists of an ordered triple < L, M, r >, where
- L carries grammatical information: which symbols belong to the language, which strings of symbols count as sentences, which grammatical category each symbol belongs to, etc.;
- M is a model = < D, f >, where the domain of discourse D is a set of individuals, and f is an interpretation function, which assigns an individual in D to each proper name in L, subsets of D to one-place predicates, sets of ordered pairs drawn from D to two-place predicates, and so on; and
- r specifies the truth-values of certain compound sentences, given the truth-values of their components -- in other words, r basically specifies the truth-tables.
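The triple just described can be sketched in a few lines of Python. (A minimal sketch: the domain, the names 'a' and 'b', and the predicates 'F' and 'G' are all invented for illustration; sentences are encoded as nested tuples.)

```python
# A minimal sketch of a model-theoretically specified language.
# The domain, names, and predicates below are invented for illustration.

# The model M = <D, f>: a domain D and an interpretation function f.
D = {"alice", "bob", "carol"}
f = {
    "a": "alice",            # proper names denote individuals in D
    "b": "bob",
    "F": {"alice", "bob"},   # one-place predicates denote subsets of D
    "G": {"carol"},
}

def true_atomic(pred, name):
    """An atomic sentence 'Fb' is true iff f(b) is an element of f(F)."""
    return f[name] in f[pred]

def true_sentence(s):
    """r: the truth-tables for 'not' and 'and', plus the atomic clause."""
    op = s[0]
    if op == "not":
        return not true_sentence(s[1])
    if op == "and":
        return true_sentence(s[1]) and true_sentence(s[2])
    return true_atomic(*s)   # atomic sentences look like ("F", "b")

print(true_sentence(("F", "b")))                                 # True
print(true_sentence(("and", ("F", "b"), ("not", ("F", "b")))))   # False
```

Note that once D, f, and the truth-tables are written down, nothing further is needed to settle any sentence's truth-value.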

So much for the model-theoretic conception of language; what about analyticity? Carnap, throughout his career, identifies the analytic truths as those sentences that are true merely in virtue of the language one speaks. That is, if we specify that I am speaking a particular language, in the course of that specification, I might present enough information that the truth-values of certain sentences within that language are fixed. (For an obvious example: if I specify what 'and' and 'not' mean in my language via the usual truth tables for those words, any sentence of the form 'p and not-p' comes out false merely in virtue of the rules governing the language I am using.)

Now, after all that rehearsal of material most readers probably know well, I can get to my point. In a model-theoretically characterized language, the truth-values of ALL sentences are determined by the specification of that language. For example, an atomic sentence such as 'Fb' is true iff the individual named by 'b' is in the extension associated with 'F' (i.e., 'Fb' is true iff f(b) is an element of the set f(F)). And Carnap certainly never wanted every sentence of a (non-contradictory) language to be analytic.

The problem then is: one of my favorite tools for 'precisification' in philosophy -- model-theoretic languages -- apparently affords no way to characterize one of the concepts I'm most interested in: analyticity. What to make of this? The first, obvious thing to say is: "Of course there couldn't be any explication of analyticity in such languages, because such languages are extensional, and Carnap and Quine (who represent opposing positions in debates over analyticity) both basically agree that analyticity is an intensional notion."

This is right, but I think there is something further to note: in a straightforward sense, every sentence in a (classical) model-theoretic language has its truth-value determined by the specification of the language. That is, by specifying the language, we fix the truth-values of all the sentences in such a language. That seems odd -- the model-theoretic way of specifying a language has proved very useful in certain situations, but it likely cannot be a fundamental and/or universally applicable one.

One further point: Carnap, Quine, and the other primary antagonists in battles over analyticity all agree that if there is any such thing as analytic truth, then the (so-called) logical truths are paradigm instances of analytic truths, i.e., truth in virtue of meaning (if you are thinking of "Two Dogmas" and don't believe me, look at Word & Object, sec. 14, fn.3, p.65). But the model-theoretic conception of language characterizes the logical truths as a class of sentences that are true across a set of related languages. That is, to know whether a sentence is a logical truth in one model-theoretic language, you have to check whether that sentence is true in a bunch of other model-theoretic languages that share certain features with the first one.
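In the propositional case, this "check a whole class of languages" procedure can be made completely concrete: the related languages differ only in the truth-values they assign to the atomic sentences, so checking all of them is just running through the truth-table. (A sketch; the tuple encoding of schemata is my own.)

```python
from itertools import product

# A schema is a logical truth iff it comes out true in every "related
# language," i.e., under every reinterpretation of its non-logical (atomic)
# vocabulary.  Propositionally, that is just every truth-value assignment.

def eval_schema(s, v):
    op = s[0]
    if op == "atom":
        return v[s[1]]
    if op == "not":
        return not eval_schema(s[1], v)
    if op == "or":
        return eval_schema(s[1], v) or eval_schema(s[2], v)

def logical_truth(s, atoms):
    return all(eval_schema(s, dict(zip(atoms, vals)))
               for vals in product([True, False], repeat=len(atoms)))

p = ("atom", "p")
excluded_middle = ("or", p, ("not", p))
print(logical_truth(excluded_middle, ["p"]))   # True: holds in every reinterpretation
print(logical_truth(p, ["p"]))                 # False: fails in some reinterpretation
```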

So, one might think that the way to cash out analyticity in the idiom of the philosophical logician is to use something like Kripkean possible world semantics (which are used, with some variations, in modal, deontic, epistemic, and temporal logics). But these are usually not given linguistic interpretations, and it's not clear to me that it's possible to give a decent one... though I'd love to be wrong. Any thoughts?


Sue Barry, a real-life Mary the neuroscientist

I'm sure this will be noted all over the philosophical regions of the blogosphere, but in the latest issue of the New Yorker, there is an example of a real person who basically fits Frank Jackson's famous example of Mary the neuroscientist -- though in this case, it is not color vision, but stereoscopic vision, that the person gains. The person is named Sue Barry, and she actually is a neurobiologist. Unfortunately, the article is not online.

(For those unfamiliar with Frank Jackson's thought-experiment, Mary is a neuroscientist who knows all the neuroscientific theories associated with color vision (even those theories that have not yet been discovered and formulated) -- but she is raised in a completely monochrome/ black-and-white environment. If Mary suddenly sees colors one day, does she have a fundamentally new experience? Does she learn anything? A recent book, There's Something about Mary (publisher's page, review in NDPR), is entirely devoted to issues involving this thought-experiment.)


Quine on logical truth, again

In a previous post, I asked about the relationship between Quine's definition of logical truth and the now-standard (model-theoretic) one. Here's the second installment, which I decided to finally post after sitting on it for a while, since Kenny just posted a nice set of thoughts on the very closely related topic of logical consequence.

The standard definition is:
(SLT) Sentence S is a logical truth of language L = S is true in all models of L.

Quine's definition is:
(QLT) S is a logical truth = "we get only truths when we substitute sentences for [the] simple [=atomic] sentences" of S (Philosophy of Logic, 50).
(Quine counts open formulas, e.g. 'x burns', as sentences; specifically, he calls them 'open sentences.' I'll follow his usage here.)

So what does Quine think is the relationship between the standard characterization of logical truth and his own? He argues for the following equivalence claim:
(EQ) If “our object language is… rich enough for elementary number theory,” then “[a]ny schema that comes out true under all substitutions of sentences, in such a language, will also be satisfied by all models, and conversely” (53).

In a nutshell, Quine argues for the 'only if' direction via the Löwenheim-Skolem theorem (hence the requirement of elementary number theory within the object language), and for the ‘if’ direction by appeal to the completeness of first-order logic. I'll spell out his reasoning in a bit more detail below, but I can sum up my worry about it here: in order to overcome Tarski’s objection to Quine’s substitutional version of logical truth, Quine appeals to the Löwenheim-Skolem theorem. However, for that appeal to work, Quine has to require the object language to be rich enough to fall afoul of the incompleteness results, thereby depriving him of one direction of the purported equivalence between the model-based and sentence-substitution-based notions of logical truth.

The 'only if' direction
Quine presents an extension of the Löwenheim-Skolem Theorem due to Hilbert and Bernays:
“If a [GFA: first-order] schema is satisfied by a model at all, it becomes true under some substitution of sentences of elementary number theory for its simple schemata” (54).

Applying this to the negation of a schema A and taking the contrapositive, a little logic chopping will get us to
If all substitutions of sentences from elementary number theory make A true, then A is satisfied in all models.
-- which is what we wanted to show. (I think this argument is OK.)

The 'if' direction
Quine takes as his starting premise the completeness result for first-order logic:
(CT) “If a schema is satisfied by every model, it can be proved” (54).

(Quine then argues that if a schema can be proved within a given proof calculus whose inference rules are “visibly sound,” i.e., “visibly such as to generate only schemata that come out true under all substitutions” (54), then such a schema will of course ‘come out true under all substitutions of sentences,’ Q.E.D.) This theorem is of course true for first-order logic; however, Quine has imposed the demand that our object language contain the resources of elementary number theory (explicitly including both plus and times, so that Presburger arithmetic—which is complete—is not in play). And once our object language is that rich, then Gödel’s first incompleteness theorem comes into play. Specifically, in any consistent proof-calculus rich enough for elementary number theory, there will be sentences [and their associated schema] that are true, i.e., satisfied by every model, yet cannot be proved -- providing a counterexample to (CT). So the dialectic, as I see it, is as follows: to answer Tarski's objection to the substitutional version of logical truth, Quine requires the language to be rich enough to capture number theory. But once Quine has made that move, the crucial premise (viz., CT) for the other direction of his equivalence claim no longer holds.

I think I must be missing something -- first, Quine is orders of magnitude smarter than I am, and second, while Quine is fallible, this does not seem like the kind of mistake he's likely to make. So perhaps someone in the blogosphere can set me straight.

And I have one more complaint. Quine claims that his "definition of logical truth agrees with the alternative definition in terms of models, as long as the object language is not too weak for the modest idioms of elementary number theory. In the contrary case we can as well blame any discrepancies on the weakness of the language as on the definition of logical truth" (55). This defense strikes me as implausible: why would we only have (or demand) a well-defined notion of logical truth once we reach number theory? Don't we want a definition to cover all cases, including the simple ones? If the model-theoretic definition captures all the intuitive cases, and Quine's only when the language is sufficiently rich, isn't that a good argument against Quine's characterization of logical truth?


(And for those wondering what Quine thinks is the advantage of his characterization of logical truth over the model-theoretic one, the answer is: Quine's uses much less set theory. "The evident philosophical advantage of resting with this substitutional definition, and not broaching model theory, is that we save on ontology. Sentences suffice,… instead of a universe of sets specifiable and unspecifiable. … [W]e have progressed a step whenever we find a way of cutting the ontological costs of some particular development" (55). Quine recognizes that his characterization is not completely free of set theory, given the proof of the LS theorem, so he says his "retreat" from the model-based notion of logical truth "renders the notions of validity and logical truth independent of all but a modest bit of set theory; independent of the higher flights" (56).)


Knowledge via public ignorance

Yesterday Rohit Parikh gave a very interesting talk at Carnegie Mellon on a kind of modal epistemic logic he has been working on recently with several collaborators, cleverly called topologic, because it carries interesting topological properties. The first thing Parikh said was "I like formalisms, but I like examples more." In that spirit, I wanted to describe here one simple example he showed us yesterday, without digging into the technicalia, because it generates a potentially philosophically interesting situation: someone can (under suitable circumstances) gain knowledge merely via other people's declarations of ignorance.

Imagine two people play the following game: a natural number n>0 (1, 2, ...) is selected. Then one player has n written on her forehead, and the other has n+1 written on hers. Each player can see the number on the other's forehead, but not the number on her own. The game allows only two "moves": you can either say "I don't know what number is on my forehead" or state what you think the number on your forehead is.

So, for example, if I play the game, and I see that the other person has a 2 written on her forehead, I know that the number on my own forehead is either a 1 or a 3, but I do not know which. But here is the interesting part: if my fellow game-player wears a 2, and on her first move says "I don't know what my number is," then I know what my number is -- at least, if my fellow game-player is reasonably intelligent. Why? If I were wearing a 1, then my interlocutor would say, on her first move, "I know my own number is a 2" -- because (1, 2) is the first allowable pair in the game. Thus, if she says "I don't know what my own number is" on her first move, then I know my number can't be 1, so it must be 3. This same process of reasoning can be extended: by playing enough rounds of "I don't know" moves, we can eventually reach any pair of natural numbers, no matter how high. We just have to keep track of how many rounds have been played. (This may remind the mathematically-inclined in the audience of the Mr. Sum-Mr. Product dialogue.)
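The round-by-round reasoning can be simulated with a possible-worlds model, in the spirit of topologic though not using Parikh's formalism. (A sketch of my own: a world is a pair of forehead numbers, players alternate moves, and each "I don't know" eliminates every world in which the speaker would have known. The world set is truncated at a bound large enough that edge effects cannot reach the actual pair in the allotted rounds.)

```python
def simulate(pair):
    """pair = (number on player 0's forehead, number on player 1's forehead).
    Returns (round, announcer) at which someone first says 'I know'."""
    max_rounds = 2 * max(pair) + 4
    bound = max(pair) + max_rounds + 2      # keeps truncation artifacts away
    # worlds: all pairs of consecutive naturals >= 1, in either order
    worlds = {(a, a + 1) for a in range(1, bound)} | \
             {(a + 1, a) for a in range(1, bound)}
    for rnd in range(max_rounds):
        speaker, other = rnd % 2, 1 - rnd % 2

        def knows(w):
            # candidates for the speaker's own number, given what she sees
            cands = {v[speaker] for v in worlds if v[other] == w[other]}
            return len(cands) == 1

        if knows(pair):
            return rnd, speaker
        # 'I don't know' eliminates every world where the speaker would know
        worlds = {w for w in worlds if not knows(w)}
    return None

print(simulate((3, 2)))   # player 0, wearing 3 and seeing 2, announces on round 2
```

Running it on the example above: with the pair (3, 2), player 0 cannot announce until player 1's "I don't know" has eliminated the world (1, 2).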

What is interesting to me about this is that the two players in such a game (and the other examples Prof. Parikh described) can eventually come to have knowledge about the world simply via declarations of ignorance. These cases prompt two questions for me:
(1) Is this type of justification for a belief different in kind from the others philosophers busy themselves with? Or is this just a completely normal/ standard/ etc. way of gathering knowledge, which differs only superficially from other cases? (I'm not qualified to answer this, since I'm not an epistemologist.)
(2) Are there any interesting real-world examples where we achieve knowledge via collective ignorance in (roughly) this way? (Prof. Parikh suggested that there might be a vague analogy between what happens in these sorts of games and game-theoretic treatments of evolution, but didn't have much further to say.)


Modal logic workshop at CMU

I spent yesterday at a workshop devoted to modal logic at Carnegie Mellon University. Rather than rehearse everything that happened, I'll simply point interested parties to the workshop webpage, which has abstracts for the talks. Mostly local folks presented their work, but Johan van Benthem from Amsterdam and Stanford was here, along with Rohit Parikh, who'll be presenting on Monday as well.

I certainly learned a lot, and even though my brain was hurting afterwards, I enjoyed myself too.


Mancosu and mathematical explanations

Last Friday, Paolo Mancosu was in Pittsburgh to give a talk on explanation in mathematics. His visit gives me the opportunity to correct an oversight in my last post -- Paolo helped me improve my dissertation substantially: he read an early partial draft very carefully, and brought his learned insight to bear on it. He is the only person in the universe who has written on the specific topic of my dissertation, and his comments were extremely helpful.

The basic claim of Paolo's talk was that Philip Kitcher's account of mathematical explanation falls afoul of certain apparently widely-shared intuitions about which proofs are explanatory and which are not. But I was intrigued by something else mentioned in the talk, which came to the fore more in the Q&A and dinner afterwards. When working on explanation in the natural sciences, there seems to be much more widespread agreement about what counts as a good explanation than in the mathematical/ formal sciences. That is, whereas most practitioners of a natural science can mostly agree on which purported explanations are good and which not, two mathematicians are much less likely to agree on whether a given proof counts as explanatory.

So I am wondering what accounts for this difference between the natural and formal sciences. Might this be due (in part) to mathematics lacking the 'onion structure' of the empirical sciences? For example, the claims of fundamental physics are not explained via results in chemistry, and observation reports (or whatever you want to call claims at the phenomenological level [in the physicist's sense]) are not used to explain any theoretical claim, and so on. My intuitions about mathematics are not as well-tutored, but I have the sense that the different branches of mathematics do not have such a clear direction of explanation. (Of course, there is no globally-defined univocal direction of explanation in the natural sciences [the cognoscenti can think of van Fraassen's flagpole-and-the-shadow story here], but there is nonetheless an appreciable difference between math and empirical sciences on this score.) At least in some cases, this clearer direction of explanation probably results from empirical sciences' explaining wholes in terms of their parts -- whereas mathematics lacks that clear part-whole structure. Often two bits of mathematics can each be embedded in one another, but we tend not to find this in the empirical sciences. (The concepts of thermodynamics [temperature, entropy] can be defined using the concepts of statistical mechanics [kinetic energy], while the converse is clearly out of the question.)

Pointing to the onion structure/ clearer direction of explanation in science might be just a re-statement of the original question; I'm not sure. Or maybe it's not relevant. In any case, I have to bury myself beneath a mountain's worth of student essays on the scientific revolution...


That's Doctor Obscure and Confused, to you

Yesterday I defended my dissertation. I wanted to thank, in this very small but nonetheless public space, all the folks here who helped me along with the dissertation: John Earman and Nuel Belnap, who served on my committee; Tom Ricketts, who unfortunately joined the Pittsburgh department a little too late to be on my official committee, but still read dissertation chapters with a fine-toothed comb, and talked to me about them -- and job-market issues -- for hours; Steve Awodey, who not only discussed Carnap with me innumerable times, but also helped me in the process of professionalization and acculturation into the circle of philosophers who study Carnap seriously; and finally my advisor, Laura Ruetsche, who not only gave me searching, insightful, and helpful comments at myriad points in the development of the dissertation project, but also helped me through the occasionally-panic-inducing job search process.

So, thanks to all -- I feel very lucky to have had the final years of my grad school career shaped by you. I hope other PhD students are as fortunate as I have been.


Semantics and necessary truth

I have recently been reading, with great profit, Jason Stanley's draft manuscript Philosophy of Language in the Twentieth Century, forthcoming in the Routledge Guide to Twentieth Century Philosophy. The paper's synoptic scope matches the ambitious title.

I'm curious about one relatively small claim Jason makes. On MS p.17, he says:
intuitively an instance of Tarski's schema T [GF-A: '...' is a true sentence if and only if ...] such as (7) is not a necessary truth at all:
(7) "Bill Clinton is smart" is a true sentence if and only if Bill Clinton is smart.
(7) is not a necessary truth, because "Bill Clinton is smart" could have meant something other than it does. For example, "is smart" could have expressed the property of being from Mars, in which case (7) would be false.
This certainly has (as Stanley says) an "intuitive" ring. But now I'm not sure it's correct.

Here's my worry: as a preliminary, recall (as Tarski taught us) that semantic vocabulary should always be indexed to a particular language -- e.g., we must say 'is a true sentence of English' or 'x refers to y in Farsi' etc. in the full statement of sentences like (7). But then I am not so sure that such sentences are not true in all possible worlds. Is it really the case that, in English, "is smart" could have expressed the property of being from Mars? We specify a particular language (in part) by specifying the semantic values of the words of that language (at least, if we are not proceeding purely formally/ proof-theoretically). Wouldn't we be speaking another language at that point, that was similar to English, but not the same?

My intuitions lean towards saying that this would not be English, but those intuitions aren't firm. I think the question boils down to: "Is 'English' a rigid designator (i.e., does 'English' refer to the same thing(s) in all possible worlds)?", but I'm not sure about that, either. Which way do your intuitions run?


Tarski, Quine, and logical truth

The following must have been addressed already in the literature, but I'm going to mention it anyway--perhaps a better-informed reader can point me in the direction of the relevant research. W.V.O. Quine offers the following characterization of logical truth:
The logical truths are those true sentences which involve only logical words essentially. What this means is that any other words, though they may also occur in a logical truth (as witness 'Brutus,' 'kill,' and 'Caesar' in 'Brutus killed or did not kill Caesar'), can be varied at will without engendering falsity. ("Carnap and Logical Truth," §2)
All I wanted to mention here is that Alfred Tarski had already shown, in 1936's "On the Concept of Logical Consequence," that Quine's characterization is a necessary condition for a sentence to be a logical truth, but not a sufficient one. For example, if one is using an impoverished language that (i) only has proper names for things over five feet tall, and (ii) only has predicates that are true of everything over five feet tall, then the sentence 'George W. Bush is over five feet tall' will be a logical truth -- because no matter what name from this impoverished language we substitute for 'George W. Bush' or what predicate we substitute for 'over five feet tall' in this sentence, the resulting sentence will be true.

Now some historical questions: did Quine think his condition was sufficient, or just necessary? (I quickly checked "Truth by Convention," and I didn't find any conclusive evidence that he considered it sufficient.) If Quine does consider this a proper definition of logical truth, how does he/ would he answer Tarski's objection? -- and/or why doesn't Quine simply adopt Tarski's definition of logical truth, viz. 'truth in all models'?

(You might think Tarski's objection shouldn't count for much, since I used a very contrived language to make the point against Quine. In Tarski's defense, however, (a) assuming that every object has a name in our language also seems somewhat artificial, and (b) Tarski proved (elsewhere) that a single language cannot contain names for all (sets of) real numbers (See "On Definable Sets of Real Numbers," reprinted in Logic, Semantics, Metamathematics).)


Should a naturalist be a realist or not?

Unsurprisingly, the answer to the question in the title of this post depends on the details of what one takes 'naturalism' about science to mean. The shared conception of naturalism is something like 'There is no first philosophy' (I think Penny Maddy explicitly calls this her version of naturalism) -- that is, philosophy does not stand above or outside the sciences. "As in science, so in philosophy" is (one of) Bas van Fraassen's formulations.

Each of the following two quotes comes from a naturalist, but the first appeals to naturalism to justify realism (about mathematics), while the second appeals to naturalism in support of anti-realism (about science).

In his review of Charles Chihara's A Structuralist Account of Mathematics in Philosophia Mathematica 13 (2005), John Burgess writes:
"If you can't think how we could come justifiably to believe anything implying
(1) There are numbers.
then 'Don't think, look!' Look at how mathematicians come to accept
(2) There are numbers greater than 10^10 that are prime.
That's how one can come justifiably to believe something implying (1)." (p.87)
Compare van Fraassen, in The Empirical Stance (2002):
But [empiricism's] admiring attitude [towards science] is not directed so much to the content of the sciences as to their forms and practices of inquiry. Science is a paradigm of rational inquiry. ... But one may take it so while showing little deference to the content of any science per se. (p. 63)
Both Burgess and van Fraassen are naturalists about their respective disciplines (mathematics and empirical sciences) -- but they disagree on what the properly scientific reaction to questions like "Are there numbers?" and "Does science aim at truth or merely empirical adequacy?" is.

The mathematician deals in proof. And proof is (at least a large part of) the source of mathematics' epistemic force. The number theorist (e.g.) assumes the existence of the integers and proves things about them; that's what she does qua mathematician. People with the proclivities of Burgess and van Fraassen would agree thus far, I think. But they part ways when we reach the question "Are there integers?" A Burgessite (if not John B. himself) could say "If you're really going to defer to number theorists and their practice, they clearly take for granted the existence of the integers." A van-Fraassen-ite could instead say: "What gives mathematics its epistemic force and evidential weight is proof, and the number theorist has no proof of the existence of integers (or the set theorist of sets, etc.). Since there is no proof of the integers' existence forthcoming, asserting the existence of the integers (in some sense) goes beyond the evidential force of mathematics. Thus, a naturalist about mathematics should remain agnostic about the existence of numbers (unless there are other arguments forthcoming, not directly based on naturalism)."
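(As an aside, Burgess's (2) is the sort of claim one can check by brute computation. A sketch of my own, not from Burgess: trial division up to sqrt(n) suffices at this size, since sqrt(10^10) is only 10^5.)

```python
# Burgess's (2): find an explicit prime greater than 10^10 by trial division.
def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:     # only need divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 2
    return True

n = 10**10 + 1
while not is_prime(n):    # scan odd candidates above 10^10
    n += 2
print(n > 10**10 and is_prime(n))   # True: an explicit witness for (2)
```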

Is there any way to decide between these forms of naturalism -- one which defers (for the most part) to the form and content of the sciences, and the other which defers only to the form? (Note: van Fraassen's Empirical Stance takes up this question, but this post is too long already to dig into his suggestions.)


Pessimistic induction + incommensurability = instrumentalism?

One popular formulation of Scientific Realism is: Mature scientific theories are (approximately) true.
One of the two main arguments against this claim is the so-called 'Pessimistic (Meta-)induction,' which is a very simple inductive argument from the history of science: Most (or even almost all) previously accepted, (apparently) mature scientific theories have been false -- even ones that were very predictively successful. Ptolemy's theory yielded very good predictions, but I think most people would shy away from saying 'It is approximately true that the Sun, other planets, and stars all rotate around a completely stationary Earth.' So, since most previous scientific theories over the past centuries turned out to be false, our current theories will also probably turn out to be false. (There are many more niceties which I won't delve into here; an updated and sophisticated version of this kind of argument has been run by P. Kyle Stanford over the last few years.)

The kind of anti-realism suggested by the above argument is that the fundamental laws and claims of our scientific theories are (probably) false. But we could conceivably read the history of science differently. Many fundamental or revolutionary changes in science generate what Kuhn calls 'incommensurability': the fundamental worldview of the new theory is in some sense incompatible with that of the older theory -- the changes from the classical theories of space, time, and matter to the relativistic and quantum theories are supposed to be examples of such changes. Communication breaks down (in places) across the two worldviews, so that each side cannot fully understand what the other claims.

Cases of incommensurability (if any exist) could result in each side thinking the other is speaking incomprehensibly (or something like it), not merely that what the other side is saying is false in an ordinary, everyday way. An example from the transition from Newtonian to (special) relativistic mechanics may illustrate this: Suppose a Newtonian says 'The absolute velocity of particle p is 100 meters per second.' The relativistic physicist would (if she is a Strawsonian instead of a Russellian about definite descriptions) say such a sentence is neither true nor false -- because there is no such thing as absolute velocity. [A Russellian would judge it false.] If she merely said "That's false," the Newtonian physicist would (presumably) interpret that comment as 'Oh, she thinks p has some other absolute velocity besides 100 m/s; perhaps I should go back and re-measure.' To put the point in philosophical jargon: presuppositions differ between pre- and post-revolutionary science, and so the later science will view some claims of the earlier group as exhibiting presupposition failure, and therefore as lacking a truth-value, like the absolute velocity claim above. (Def.: A presupposes B = If B is false, then A is neither true nor false)

This leads us to a different kind of pessimistic induction: (many of) the fundamental theoretical claims of our current sciences probably lack a truth-value altogether, since past theories (such as Newtonian mechanics) have that feature. (If you want to call claims lacking truth-values 'meaningless,' feel free, but it is not necessary.) This is hard-core instrumentalism, a very unpopular view today; most modern anti-realists, following van Fraassen, think that all our scientific discourse is truth-valued (though we should be agnostic about the truth-value of claims about unobservable entities and processes). But this instrumentalism seems to be a natural outcome of (1) taking the graveyard of historical scientific theories seriously, (2) believing there is something like Kuhnian incommensurability, and (3) holding that incommensurability involves presupposition failure. And none of those three strike me as crazy.

Disclaimer: This argument has probably been suggested before, but I cannot recall seeing it anywhere.


Want a job? Come to Pitt HPS

As some of you know, I am in the last stages (throes?) of my PhD program at the University of Pittsburgh, in the History and Philosophy of Science (HPS) department. For those who are not familiar with it, the department is relatively small: there are usually about 8-9 faculty whose primary appointment is in HPS, and about 30 or so graduate students.

In an average year, 1 to 3 people from my department go on the job market, and the department has had a very good placement record since I've been here: everyone who graduated from the program has gotten a tenure-track job, either straight out of grad school or after a 1-2 year post-doc. But this year, we had ten people go on the market, eight of them (myself included) for the first time. There was a lot of hand-wringing and worry, by students and faculty alike, about having so many HPS people on the market at one time. The demand for folks like us, who prove theorems in the foundations of quantum gravity or trace out the technical development of Galileo's kinematic theory, is just not as high as for people who work in ethics or epistemology.

I am now happy to report that all ten people have found good positions: 8 people are beginning tenure-track jobs, and the other two are taking enviable post-doc positions (including filling fellow-blogger Gillian Russell's old spot as Killam fellow). I won't give the list of where everyone is headed, since I haven't asked their permission to broadcast that information to the three people who read this blog. But I grant myself permission to announce that I will be starting next fall as an assistant professor at UNLV (the University of Nevada-Las Vegas). It's a great position for me, in a department full of smart, sensible, and funny people. I'll probably blog in more detail soon about why I'm so excited about it -- but for starters, it was 70 degrees when I visited in January!

UPDATE: The Killam Fellow mentioned above will have a tenure-track position, though it is not yet determined where that will be. So 9 of 10 will start tenure-track jobs.


Quantum logic question

I've been thinking about quantum logic (QL) recently, and in particular about the usual semantics for 'or' in QL. I've become puzzled, and hopefully someone out there in the blogosphere can help me clear up my confusion.

For the uninitiated: In QL, propositions are represented by/ interpreted as subspaces of a Hilbert space -- including one-dimensional subspaces, i.e. rays. There are multiple ways of formulating in colloquial language what these subspaces are to represent (see final paragraph below), but (atomic) sentences are usually taken to have the form:
'The value of observable O is o1'
where 'observable' just means any physical quantity (e.g., position, momentum, energy, spin), and o1 is just a particular value (or range of values) of that observable. (E.g. 'The energy of this system is between 4 and 6 Joules.') Such sentences are true iff the state-vector of the system lies within the subspace.

Now, think of a particle P in a superposition of spin up and spin down along the y-axis. This particle's state is of course represented by a different vector (call it V_s) than particles in the spin-up state (represented by V_up), or particles in the spin-down (V_down) state. However, because the usual QL semantics assigns to 'p or q' the linear span, instead of the union, of the rays associated with p and q, the claim 'P is spin-up or P is spin-down' will be true -- because V_s is in the linear span of the spin-up ray and the spin-down ray. Each of the disjuncts is false, but the whole disjunction is true. (To me, this feature of QL is even more striking than the failure of the so-called distributive law, i.e., [p&(q or r)] iff [(p&q) or (p&r)], which commentators on QL tend to focus on.)
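To make the truth-value pattern concrete, here is a toy two-dimensional sketch (Python with numpy; the y-spin eigenstates are modeled as the standard basis vectors purely for illustration, so this is a linear-algebra cartoon rather than a full quantum-mechanical treatment):

```python
import numpy as np

# Toy 2-D model: the y-spin eigenstates as standard basis column vectors
# (illustrative assumption, not a full quantum treatment).
v_up = np.array([[1.0], [0.0]])
v_down = np.array([[0.0], [1.0]])
v_s = (v_up + v_down) / np.sqrt(2)   # the superposition state

def dim(m):
    """Dimension of the subspace spanned by the columns of m."""
    return int(np.linalg.matrix_rank(m))

def in_span(v, m):
    """True iff vector v lies in the column span of m."""
    return dim(np.column_stack([m, v])) == dim(m)

# Each disjunct is false: v_s lies on neither ray...
print(in_span(v_s, v_up), in_span(v_s, v_down))       # False False
# ...yet the disjunction is true, since 'or' is the linear span:
print(in_span(v_s, np.column_stack([v_up, v_down])))  # True

# Failure of distributivity, with p = ray of v_s, q = up ray, r = down ray.
# Using dim(U ∩ W) = dim U + dim W - dim(U + W):
def meet_dim(a, b):
    return dim(a) + dim(b) - dim(np.column_stack([a, b]))

# p & (q or r): q or r spans the whole plane, so the meet is p itself (dim 1).
print(meet_dim(v_s, np.column_stack([v_up, v_down])))  # 1
# (p & q) or (p & r): each meet is the zero subspace, so so is their join.
print(meet_dim(v_s, v_up), meet_dim(v_s, v_down))      # 0 0
```

The dimension counts at the end show the two sides of the distributive law picking out different subspaces: a one-dimensional ray on the left, the zero subspace on the right.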

This seems intuitively wrong to me (or at least as 'wrong' as something can be in formal semantics). In 2-D Euclidean space, suppose we have a unit vector V at a 45-degree angle to the x-axis. I don't think anyone would consider the sentence 'V lies along the x-axis or V lies along the y-axis' to be true. V is not a unit vector on the x-axis or on the y-axis, but a distinct third thing. I don't see why we would change policies in the quantum case, which appears analogous to me.

So now I can ask my question: could we change the semantics for 'or' to avoid these apparent problems? In particular, in the usual semantics for quantum logic, why must all propositions be represented as subspaces of a Hilbert space? -- why not also allow subsets (which might not be closed under linear combinations)? For then we could allow 'or' to mean the union of rays, and 'P is spin-up or P is spin-down' will come out false.
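As a sketch of what the proposed change would do (again in numpy, with the y-spin eigenstates taken as the standard basis vectors purely for illustration), the union semantics and the span semantics come apart on exactly the superposition case:

```python
import numpy as np

# Illustrative 2-D stand-ins for the spin rays (not a full quantum treatment).
v_up = np.array([1.0, 0.0])
v_down = np.array([0.0, 1.0])
v_s = (v_up + v_down) / np.sqrt(2)   # the superposition state

def in_ray(v, b):
    """True iff v lies on the 1-D subspace (ray) spanned by b."""
    return np.linalg.matrix_rank(np.column_stack([b, v])) <= 1

def or_union(v, b1, b2):
    # Proposed semantics: 'p or q' holds iff v is in the set union of the rays.
    return bool(in_ray(v, b1) or in_ray(v, b2))

def or_span(v, b1, b2):
    # Standard QL semantics: 'p or q' holds iff v is in the linear span.
    m = np.column_stack([b1, b2])
    aug = np.column_stack([m, v])
    return bool(np.linalg.matrix_rank(aug) == np.linalg.matrix_rank(m))

print(or_span(v_s, v_up, v_down))    # True  (standard span semantics)
print(or_union(v_s, v_up, v_down))   # False (union-of-rays proposal)
```

The union of the two rays is not closed under linear combinations, so it is a subset of the Hilbert space but not a subspace -- which is exactly why the standard formalism excludes it.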

One further note: some people (e.g. R.I.G. Hughes, "Quantum Logic and the Interpretation of Quantum Mechanics," PSA 1980) take the atomic QL propositions to have a different correlate in colloquial language. Instead of
'The value of observable O in system S is within o1,'
they take the subspaces of Hilbert space to mean
'The result of a measurement operation for observable O in system S is within o1.'
Under this understanding, my above worries disappear -- for the result of a spin-y measurement surely will be either spin-up or spin-down. However, QL then becomes much less interesting, because it is just about measurement outcomes, instead of about these supremely odd things, superpositions.


realism and the limits of scientific explanation

Long time, no blog. I finally got back a few days ago from the last of my visits to schools for final job interviews. It was very interesting and instructive to observe non-Pittsburgh philosophers in their native habitats. I should know by the end of this week where I'll be next year.

In lieu of an actual post, I am putting up the handout I used at a couple of my job talks. As a result, it looks programmatic/ bullet-pointy; but I tried condensing this into a normal post, and it was just far too long. If you can make out what's going on, I would really appreciate any feedback/ comments/ eviscerations from readers.


The argument

(P1) Scientists do not accept explanations that explain only one (type of) already accepted fact.
(P2) Scientific realism, as it appears in the no-miracles argument, explains only one type of already accepted fact (namely, the empirical adequacy or instrumental success of mature scientific theories).
(P3) Naturalistic philosophers of science “should employ no methods other than those used by the scientists themselves” (Psillos 1999, 78).

Therefore, naturalistic philosophers of science should not accept scientific realism as it appears in the no-miracles argument.

Explanation and defense of (P1)

Explanations that explain only one type of already accepted fact
(i) generate no new predictive content, even when conjoined with all relevant available background information [‘already accepted fact’], and
(ii) do not unify facts previously considered unrelated [‘only one type’].

Evidence for (P1): Scientists reject
- Virtus dormativa-style explanations
- ‘Vital forces’/ entelechies as explanations of developmental regularities
- Kepler’s explanation of the number of planets, and the ratios of distances between them, via the five perfect geometrical solids
- ‘Just-so stories’ in evolutionary biology

The no-miracles argument for scientific realism

Abductive inference schema
(1) p
(2) q is the best explanation of p
Therefore, q

No-miracles argument for scientific realism
(1) Mature scientific theories are predictively successful.
(2) The (approximate) truth of mature scientific theories best explains their predictive success.
Therefore, Mature scientific theories are (approximately) true.

Proponents of the no-miracles argument (Putnam, Boyd, Psillos) accept (P3), appealing to naturalism to justify their abductive inference to scientific realism. Putnam claims that scientific realism is “the only scientific explanation of the success of science” (1975, 73).

The argument for (P2): Scientific realism (i.e., the claim that mature scientific theories are approximately true)
(i) generates no new predictions,
(ii) unifies no apparently disparate facts, and
(iii) explains only one previously accepted fact, viz., science’s predictive success.


philosophy of science in the blogosphere

I've recently noticed two new philosophy of science blogs on the internets worth following:

(1) Brains, by Gualtiero Piccinini, a recent grad of my department. As the title indicates, this leans towards cognitive science issues. This is the area of philosophy of science I know the least about, so I'm hoping keeping up with Gualtiero's blog will show me at least the tip of the iceberg.

(2) Words of Mass Dissemination, by Michael Dickson, the current editor of the journal Philosophy of Science (which, from what I can tell, is widely agreed to be the leading North American periodical on philosophy of science). I have no idea how he'll have time to keep up with his editorial duties and make blog posts, but I certainly hope he does manage to juggle them both. (Though, it looks like posting has slowed down a bit recently.)

I apologize that it has been so long since my last real/ philosophically substantive post. Virtually all of my brain waves are currently dedicated to the job search, but I am working on a post about quantum logic (which, coincidentally, the above-mentioned Prof. Dickson has written on recently) that I hope will be up soon, once I figure out a couple more things.


A job candidate at the APA

I just returned to Pittsburgh from the American Philosophical Association meeting, where I had job interviews and gave a talk. This was my first trip to the APA, and many people had painted for me a picture of it as red in tooth and claw. There was a fair amount of anxiety in the air, but that's to be expected when 600 or so job-seekers are stuffed into a cage (I'm making up the number 600; there may have been more). But the whole affair was less psychologically traumatic than I had expected -- it was good to see old friends, all my interviews led to interesting and enlightening conversations (I never felt like I was being 'grilled,' much less attacked), and I met people I had only previously known in blogospheric form.

There was one difficulty with the conference that I did not expect: it was physically exhausting. I remember one faculty member who, a few months ago, advised job-seekers not to apply to every single job that they could, on the grounds that you don't want to have too many interviews at the APA. "Too many interviews?" -- I thought -- "How can you have too many?" Well, that person was right. I had a hard time keeping up my energy and focus for the interviews I had, and some people in my department had many more than me... I don't know how they did it.

I expect blogging to remain light here for the next couple of months, since I'm now entering the final stage of the job search process.