11/22/2015

HOPOS 2016: Submit an abstract

HOPOS 2016 Call for Submissions
June 22-25, 2016, Minneapolis, Minnesota, USA
http://hopos2016.umn.edu/

Keynote Speakers:

Karine Chemla (REHSEIS, CNRS, and Université Paris Diderot)

Thomas Uebel (University of Manchester)

HOPOS: The International Society for the History of Philosophy of Science will hold its eleventh international congress in Minneapolis, on June 22-25, 2016. The Society hereby requests proposals for papers and for symposia to be presented at the meeting. HOPOS is devoted to promoting research on the history of the philosophy of science. We construe this subject broadly, to include topics in the history of related disciplines, including computing, in all historical periods, studied through diverse methodologies. In order to encourage scholarly exchange across the temporal reach of HOPOS, the program committee especially encourages submissions that take up philosophical themes that cross time periods. If you have inquiries about the conference or about the submission process, please write to Maarten van Dyck: maarten.vandyck [at] ugent.be.

SUBMISSION DEADLINE: January 4, 2016

To submit a proposal for a paper or symposium, please visit the conference website: http://hopos2016.umn.edu/call-submissions

10/15/2015

Descartes on Mathematical Truth and Mathematical Existence

This is not so much a post as a note to myself for something I would like to think about in the future.

In the first Meditation, Descartes writes:
"arithmetic, geometry, and other such disciplines, which treat of nothing but the simplest and most general things... are indifferent as to whether these things do or do not in fact exist, contain something certain and indubitable."
I should look more into this apparent 'truth-independent-of-reference' position, that mathematical truth is independent of the existence of mathematical entities, especially as an alternative to the Quine-Putnam indispensability argument for the reality of mathematical objects.

Relevant secondary literature:
- Gregory Brown's "Vera Entia: The Nature of Mathematical Objects in Descartes" (Journal of the History of Philosophy, 1980: 23-37) contains a nice discussion of the kind of existence mathematical objects have for Descartes, esp. section III:
"mathematical objects in particular, have a "being" that is independent of their actual existence in (physical) space or time, and that is characterized by what Descartes calls 'possible existence'" (p. 36).

- Brown quotes Anthony Kenny ("The Cartesian Circle and Eternal Truths," Journal of Philosophy, 1970):
"the objects of mathematics are not independent of physical substances; but they do not support the view that the objects of mathematics depend for their essences on physical existents... . Descartes held that a geometrical figure was a mode of physical or corporeal substance; it could not exist, unless there existed a physical substance for it to exist in. But whether it existed or not, it had a kind of being that was sufficient to distinguish it from nothing, and it had its eternal and immutable essence."

9/05/2015

Ontological Commitment, "To be is to be the value of a bound variable," and Schematic Letters in Quine

I am currently working on a paper on Quine's shifting views about ontology. Something occurred to me while reading some of his work from the late 1930s and 40s; it probably won't make it into the paper, but I wanted to try to get clear about it for myself.

Most readers of this blog have heard Quine's famous ontological dictum "To be is to be the value of a bound variable." This is a criterion of ontological commitment for a theory: a theory is committed to whatever must be among the values of its bound variables in order for the theory's statements to be true.

Quine includes 'bound', I take it, so that (what he calls) schematic letters do not have existential import. For example, in the expression (x)(P(x) --> P(x)), the P cannot be bound by a quantifier (P) without the language being committed to the existence of properties (or traits, or sets, or whatever you think predicate letters signify). The P is instead a 'dummy letter': the full expression (x)(P(x) --> P(x)) is a schema, not a full sentence in first-order logic, but the schema allows us to say that any sentence that results from substituting an actual predicate for P is a theorem.
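Displayed side by side (this is my own rendering, using the post's '(x)' notation for the universal quantifier), the contrast is between:

```latex
% The schema: P is a dummy (schematic) letter, not a bound variable.
\[ (x)\,\bigl(P(x) \rightarrow P(x)\bigr) \]

% What we would get if P were itself bound: a higher-order sentence whose
% bound predicate variable would, by Quine's criterion, commit us to
% properties (or sets, or whatever we take predicate letters to signify).
\[ (P)(x)\,\bigl(P(x) \rightarrow P(x)\bigr) \]
```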

Now I can get to what's bothering me. Consider a theory+language, such as primitive recursive arithmetic (PRA), that has (what normally would be called) variables, but does not have any explicitly written-down quantifiers. In such a language, when we see a sentence like x+y = y+x, we can say ‘If we were expressing this in first-order logic, we would understand a pair of universal quantifiers ‘(x)(y)’ out front to make this a sentence,’ but there are actually no quantifier-symbols as part of the language we are considering. So what I’m wondering is: if someone accepts Quine’s line of thought about the difference between (ontologically-committing) variables vs. (ontologically-innocent) schematic letters, then should [/can] that person also say that the x’s and y’s of PRA are schematic letters, not variables? And thus that PRA does not [/need not] commit its users to the existence of the natural numbers -- or to anything else for that matter?
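Just to have it on the page (my own rendering), here is the quantifier-free PRA sentence alongside the first-order reading with the understood quantifiers written out:

```latex
% The quantifier-free sentence as it appears in PRA:
\[ x + y = y + x \]

% The same equation with the understood universal quantifiers out front:
\[ (x)(y)\,(x + y = y + x) \]
```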

Here is a first potential problem for the Quinean. Let's call Language 1 (L1) the quantifier-free PRA described just above. And let L2 be the first-order logic translation of L1, i.e. L2 just puts the appropriate universal quantifiers in front of every sentence of L1 which contains variables. Now if to be is to be the value of a bound variable, L1 is not committed to numbers (or something number-like enough to satisfy the axioms of PRA), but L2 is. Yet L1 and L2 constitute a paradigm case of ‘merely notational variants’: the same theory, expressed using different notations. So L1 and L2 should either both be committed to the existence of numbers, or neither should.

Now, I can imagine a dedicated Quinean at this point embracing the second option: we can consistently take the view that L2 is somehow not 'really' ontologically committed to numbers, because we can translate L2 back into (bound-variable-free) L1 (by just erasing the initial universal quantifiers in every L2 sentence). The general principle underlying this is something like: a theory is committed to X just in case X is among the values of the bound variables in every adequate formalization of that theory.
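Here is a minimal sketch of the two translations this principle trades on; the string representation, the single-letter variable convention, and the function names are all my own toy choices, not anything in Quine or in standard presentations of PRA:

```python
import re

# Toy sketch: L1 -> L2 adds a universal closure; L2 -> L1 erases the
# initial universal quantifiers. Formulas are plain strings, and variables
# are single lowercase letters.

def free_variables(formula: str) -> list[str]:
    """Collect the single-letter variables occurring in a formula string."""
    return sorted(set(re.findall(r"\b([a-z])\b", formula)))

def to_L2(l1_sentence: str) -> str:
    """Prefix universal quantifiers for every variable (the L1 -> L2 translation)."""
    prefix = "".join(f"({v})" for v in free_variables(l1_sentence))
    return prefix + l1_sentence

def to_L1(l2_sentence: str) -> str:
    """Erase the initial universal quantifiers (the L2 -> L1 translation)."""
    return re.sub(r"^(\([a-z]\))+", "", l2_sentence)

commutativity = "x + y = y + x"
closed = to_L2(commutativity)            # "(x)(y)x + y = y + x"
assert to_L1(closed) == commutativity    # the round trip behind 'mere notational variance'
```

The round trip is exactly what makes it tempting to say that L1 and L2 are mere notational variants, which is what generates the puzzle above.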

This position strikes me as unintuitive. But I think there is a further reason to reject it. For consider language L3, which is just L2 + the standard definition (x) = ~(∃x)~. Once the existential quantifier is in play, every universally quantified sentence is equivalent to the denial of an existentially quantified one, so the theory will clearly carry some ontological commitments (albeit negative ones, i.e. commitments that such-and-such does NOT exist). So perhaps the Quinean will say that "To be is to be the value of a bound variable" is only a recipe for finding the positive ontological commitments of a theory. I'm not sure about that move; perhaps it can be made to work.
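Written out (my rendering, with ¬ for the post's '~'), the definition and one illustrative negative commitment look like this; the particular arithmetical example is my own, not one from the post:

```latex
% The standard definition added to get L3:
\[ (x)\,\varphi \;=_{\mathrm{df}}\; \neg(\exists x)\neg\varphi \]

% An illustrative negative commitment: the universally quantified claim
% that zero is no number's successor becomes, via the definition, the
% denial that any such number exists.
\[ (x)\,(x + 1 \neq 0) \;\Longleftrightarrow\; \neg(\exists x)\,(x + 1 = 0) \]
```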

So in sum, this makes me wonder whether Quine’s contrast between schematic letters on the one hand, vs. genuine variables on the other, may not be as sharp as he needs it to be. In other words, it is not clear to me that schematic letters can be made ontologically innocent in the way Quine wants them to be.

8/11/2015

A few thoughts on Moti Mizrahi's "The Pessimistic Induction: A Bad Argument Gone Too Far"

This post is exactly what the title says. I found this paper especially thought-provoking, so I wanted to try writing down/ nailing down exactly what those provoked thoughts were.

If you want to read the paper, the free, penultimate version is here, and the published, ridiculously expensive version is here (Synthese, 2013: 3209-3226).

Here's the bit from the paper that I want to focus on:
The theories on Laudan's list were not randomly selected, but rather were cherry-picked in order to argue against a thesis of scientific realism. If this is correct, then the pessimistic inductive generalization is a weak inductive argument.

To this, pessimists might object that, if we simply do the required random sampling, then the pessimistic inductive generalization would be vindicated and shown to be a strong inductive generalization. So, to get a random sample of scientific theories (i.e., a sample where theories have an equal chance of being selected for the sample), I used the following methodology:

- Using Oxford Reference Online, I searched for instances of the word 'theory' in the following titles: A Dictionary of Biology, A Dictionary of Chemistry, A Dictionary of Physics, and The Oxford Companion to the History of Modern Science.
~ I limited myself to these reference sources to make the task more manageable.
~ Since it is not clear how to individuate theories (e.g., is the Modern Evolutionary Synthesis a theory or is each of its theoretical claims, such as the claims about natural selection and genetic drift, a theory in its own right?), I limited myself to instances of the word 'theory.'

- After collecting 124 instances of 'theory' and assigning a number to each instance, I used a random number generator to select 40 instances out of the 124.

- I divided the sample of 40 theories into three categories: accepted theories (i.e., theories that are accepted by the scientific community), abandoned theories (i.e., theories that were abandoned by the scientific community), and debated theories (i.e., theories whose status as accepted or rejected is in question) (See Table 1).

...
Based on this sample, pessimists could construct the following inductive generalization:

15% of sampled theories are abandoned theories (i.e., considered false). Therefore, 15% of all theories are abandoned theories (i.e., considered false).

Clearly, this inductive generalization hardly justifies the pessimistic claim that most successful theories are false. Even if we consider the debated theories as false, the percentages do not improve much in favor of pessimists:

27% of sampled theories are abandoned theories (i.e., considered false). Therefore, 27% of all theories are abandoned theories (i.e., considered false).
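To make the quoted procedure concrete, here is a minimal sketch of the random-selection and tallying step; the variable names and the placeholder classification are my own, not Mizrahi's:

```python
import random

# 124 numbered instances of 'theory' collected from the reference works.
instances = list(range(1, 125))

# Randomly select 40 instances without replacement.
sample = random.sample(instances, 40)

# Placeholder classification: in the actual study each sampled theory was
# sorted by hand into 'accepted', 'abandoned', or 'debated'.
status = {n: "accepted" for n in sample}

abandoned = sum(1 for n in sample if status[n] == "abandoned")
debated = sum(1 for n in sample if status[n] == "debated")

print(f"abandoned: {100 * abandoned / len(sample):.0f}% of the sample")
print(f"abandoned or debated: {100 * (abandoned + debated) / len(sample):.0f}%")
```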

The first thing I wanted to say is that I really like Mizrahi's basic idea here. Philosophers (myself included) sometimes throw up our hands too soon and say that some question is intractable, so I really appreciate that Mizrahi did the work of collecting some data that could place constraints on answers to certain versions of the pessimistic induction.

Here are four thoughts I had about the particulars of Mizrahi's method.

1) 3 of the 4 textual sources are (apparently) intended as present-day reference works for contemporary science, and I assume discarded/ superseded theories are much less likely to appear in such reference works than in history of science reference works (which the last of the 4 is) -- so I am curious whether the percentages would change significantly if we just looked at that 4th source, and/or at other works that purport to cover the history of science up to the present day.

2) Anti-realists have said this before, but I think it's relevant here too. The more recent a theory, the less likely it is that there is evidence against it: the theory was framed to capture the data available at the time, so the more recent the theory, the less time there has been to accumulate/discover anomalous data.

3) The scientific realism debate is often/usually supposed to be restricted to ‘fundamental’ theories -- whatever those are. I don’t know how many of the theories in Tables 1 and 2 would qualify as fundamental. I have attached the table, so you can see for yourself; I'm pretty sure a good portion of them are fundamental, but I also think some of them are not. I don't know several of these theories (RRKM theory, anyone?), but again I wonder how that would affect the percentages.

4) I don't have very strong leanings/ intuitions pro or contra scientific realism (I currently think of myself as an agnostic/ quietist, looking for slightly more well-posed questions in the neighborhood). But something that happens either 15% or 27% of the time does not feel like a miracle (as in 'No-Miracles Argument') to me. Of course, more moderate realists may well say that all they claim is that Pr(Theory is true | Theory is successful) > 0.5. But I have heard a few realistically-inclined people recently talk about 'the no-miracles intuition' or something similar; presumably, though, a miracle does not need to be invoked if I predicted your dice roll would come up '4', and then you rolled a 4.

5/01/2015

"outgroup homogeneity" and 'continental philosophy'

One phenomenon that social psychologists have found pretty consistently is called 'outgroup homogeneity.' The idea, as I understand it, is that people judge outgroup members (i.e. people who are not in a group they identify with) as being more homogeneous in the stereotypical traits attributed to the outgroup than they judge ingroup members on those same traits.

What gets lumped under the heading 'continental philosophy' today is a very diverse range of traditions and thinkers: phenomenology, structuralism, post-structuralism, deconstruction, existentialism, Nietzsche, Kierkegaard, and so on. Many of these are so different from, and even opposed to, one another that it doesn't make much sense to lump them together under one heading. 'Continental philosophy' is a phrase analytic philosophers devised (Glendinning 2006). So what I'm wondering now is whether the creation of that phrase/ category was facilitated by the outgroup homogeneity effect -- since without it, it would have been harder to gather all these disparate traditions under a single heading.

4/07/2015

Das beste Blog der Welt

I'm sure most folks who check this blog already know about this, but just in case you missed it: André Carus has recently started writing some really interesting posts on his (aptly titled) Carnap Blog. It is required reading for anyone interested in Carnapia.

4/05/2015

Thoughts from the Pacific APA meeting


I just got back from the Pacific APA meeting. There were a lot of highlights for me: the session on Eugenics and Philosophy was really excellent (I especially got a lot out of Rob Wilson's opening remarks about his work on sterilized people in his province, as well as Adam Cureton's paper on disability and parenting); Nancy Cartwright's Dewey lecture was really interesting; and I was happy to see the history of analytic philosophy very well represented in several spots on the program. That included the author-meets-critics session on my Carnap, Tarski, Quine book -- I was very fortunate to have great commentators: Rick Creath, Gary Ebbs, and Greg Lavers. I'm very thankful to Richard Zach for organizing the session, and to Sean Morris for stepping in to chair at the last minute. And the conversation with the audience was helpful to me as well. Happily, even if you weren't at the session, you'll still be able to see what they said: their insightful comments will eventually appear in a symposium in Metascience.

One thing that I noticed was that there were not a lot of talks on philosophy of science proper. (Though happily there were some, e.g. an author-meets-critics on Jim Tabery's Beyond Versus: The Struggle to Understand the Interaction of Nature and Nurture.) Interestingly, there were a decent number of philosophers of science there, but often they were presenting something that was not philosophy of science (like me), or speaking on something philosophy-of-science adjacent (e.g. a philosopher of biology speaking on bioethics).

I was wondering whether anyone had hypotheses about this -- one hypothesis is that because the PSA exists and is pretty big, the PSA 'cannibalizes' philosophy of science presentations from the APA. Another possibility is that my perception of the percentage of the profession that identifies as philosophers of science is inaccurate, and the APA program accurately reflected the true percentage. But I am very curious to hear other explanations.

(And the baked-goods highlight of the trip was the coffee bun at Papparoti -- it was the most interesting pastry I've had in a while.)