the no-miracles argument may not commit the base-rate fallacy

Certain philosophers argue that the No-Miracles Argument for realism (Colin Howson, Peter Lipton), the Pessimistic Induction against realism (Peter Lewis), or both arguments (P.D. Magnus and Craig Callender) commit the base-rate fallacy. I am not sure these objections are correct, and will try to articulate the reason for my doubt here.

I need to give some set-up; many readers will be familiar with some or all of this. So you can skip the next few paragraphs if you already know about the base-rate objection to the No-Miracles Argument and the Pessimistic Induction.

I suspect many readers are familiar with the base-rate fallacy; there are plenty of explanations of it around the internet. But just to have a concrete example, let’s consider a classic case of base-rate neglect. We are given information like the following, about a disease and a diagnostic test for this disease:

(1) There is a disease D that, at any given time, 1 in every 1000 members of the population has: Pr(D)=.001.

(2) If someone actually has disease D, then the test always comes back positive: Pr(+|D)=1.

(3) But the test has a false positive rate of 5%. That is, if someone does NOT have D, there is a 5% chance the test still comes back positive: Pr(+|~D)=.05.

Now suppose a patient tests positive. What is the probability that this patient actually has disease D?
Someone commits the base-rate fallacy if they say the probability is fairly high, because they discount or ignore the information about the ‘base rate’ of the disease in the population. Only 1 in 1000 people have the disease. But for every 1000 people who don’t have it, 50 people will test positive. You have to use Bayes’ Theorem to get the exact probability that someone who tests positive has the disease; the probability turns out to be slightly under 2%.
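To make the arithmetic concrete, here is a quick sketch of the Bayes' Theorem calculation in Python (the variable names are mine; the rates are just (1)-(3) above):

```python
# Pr(D | +) via Bayes' Theorem, using the rates from (1)-(3).
p_d = 0.001               # base rate: Pr(D)
p_pos_given_d = 1.0       # sensitivity: Pr(+ | D)
p_pos_given_not_d = 0.05  # false-positive rate: Pr(+ | ~D)

# Total probability of a positive test, by the law of total probability.
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Posterior probability of the disease, given a positive test.
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(round(p_d_given_pos, 4))  # 0.0196
```

So the posterior is about 1.96% — 'slightly under 2%,' as advertised.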

In the context of the No-Miracles and Pessimistic Induction arguments, the objection is that both arguments ignore a relevant base rate. For example, the No-Miracles argument says:

(A) Pr (T is empirically successful | T is approximately true) = 1

(B) Pr (T is empirically successful | ~ (T is approximately true)) << 1

Inequality (B) is supposed to capture the ‘no-miracles intuition’: the probability that a false theory would be empirically successful is so low that it would be a MIRACLE if that theory were empirically successful. Hopefully you can see that (A) corresponds to (2) in the original, medical base-rate fallacy example, and (B) corresponds to (3). Empirical success is analogous to a positive test for the truth of a theory, and the no-miracles intuition is that the false-positive rate is very low (so low that a false positive would be a miracle). The base-rate objection to the No-Miracles argument is just that the No-Miracles argument ignores the base rate of true theories in the population of theories. In other words, in the NMA, there is no analogue of (1) in the original example. Without that information, even a very low false-positive rate cannot license the conclusion that an arbitrary empirically successful theory is probably true. (And furthermore, that base rate is somewhere between extremely difficult and impossible to obtain: what exactly is the probability that an arbitrary theory in the space of all possible theories is approximately true?)
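To illustrate why the missing analogue of (1) matters, here is a small sketch of how the posterior Pr(approximately true | empirically successful) swings with the assumed base rate, even holding the 'miraculous' false-positive rate fixed. The particular numbers are purely illustrative assumptions, not anyone's actual estimates:

```python
def posterior_true_given_success(base_rate, fpr, sensitivity=1.0):
    """Pr(approx. true | successful), by Bayes' Theorem."""
    p_success = sensitivity * base_rate + fpr * (1 - base_rate)
    return sensitivity * base_rate / p_success

# Even with a miraculously low false-positive rate (1 in 10,000),
# the posterior depends heavily on the unknown base rate of
# approximately true theories: it falls from near-certainty to
# under 10% as the assumed base rate shrinks.
for base_rate in (0.1, 0.001, 0.00001):
    print(base_rate, round(posterior_true_given_success(base_rate, fpr=1e-4), 3))
```

The point of the sketch is just the objectors' point in miniature: with no constraint on the base rate, (B) alone fixes almost nothing about the posterior.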

OK, that concludes the set-up. Now I can state my concern: I am not sure the objectors’ demand for the base rate of approximately true theories in the space of all possible theories is legitimate. Why? Think about the original medical example again. There, we are simply GIVEN the base rate, namely (1). But how would one acquire that sort of information, if one did not already have it? Well, you would have to run tests on large numbers of people in the population at large, to determine whether or not they had disease D. These tests need not be fast-diagnosing blood or swab tests; they might involve looking for symptoms more ‘directly,’ but they will still be tests. And this test, which we are using to establish the base rate of D in the population, will still presumably have SOME false positives. (I’m guessing that most diagnostic tests are not perfect.) But if there are some false positives, and we don’t yet know the base rate of the disease in the population, then—if we follow the reasoning of the base-rate objectors to the NMA and the PI—any conclusion we draw about the proportion of the population that has the disease is fallacious, for we have neglected the base rate. But on that reasoning, we can never determine the base rate of a disease (unless we have an absolutely perfect diagnostic test), because of an infinite regress.
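Incidentally, epidemiologists do have a standard way of estimating a prevalence with an imperfect test, provided the test's error rates are known from some independent calibration: the Rogan-Gladen correction, which takes the equation relating apparent and true prevalence and solves it for the true prevalence. A hedged sketch (the survey figure here is invented to match the rates in (1)-(3)):

```python
def rogan_gladen(apparent_prevalence, sensitivity, fpr):
    """Corrected prevalence estimate from an imperfect test.

    Inverts: apparent = sensitivity * true + fpr * (1 - true).
    """
    specificity = 1 - fpr
    return (apparent_prevalence + specificity - 1) / (sensitivity + specificity - 1)

# Suppose a population survey with the test from (2)-(3) finds
# 5.095% apparent positives; the correction recovers the base rate:
est = rogan_gladen(0.05095, sensitivity=1.0, fpr=0.05)
print(round(est, 4))  # 0.001, i.e. the base rate in (1)
```

Note the catch, which is where the regress worry bites: the correction only works if the sensitivity and false-positive rate were themselves established on cases whose disease status was known by some other means.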

In short: if the NMA commits the base-rate fallacy, then any attempt to discover a base rate (when detection tools have false positives) also commits the base-rate fallacy. But presumably, we do sometimes discover base rates (at least approximately) without committing the base-rate fallacy, so by modus tollens, the NMA does not commit the base-rate fallacy.

Put another way: the NMA does not commit the base-rate fallacy, because it does not ignore AVAILABLE evidence about the base rate of true theories in the population of theories. In the medical example above, the base rate (1) is available information; ignoring or under-weighing it is what generates the fallacy. In the scientific realism case, however, the base rate is not available. If we did somehow have the base rate of approximately true theories in the population of all theories (the gods of science revealed it to us, say), then yes, it would be fallacious to ignore or discount that information when drawing conclusions about the approximate truth of a theory from its empirical success, i.e. the NMA would be committing the base-rate fallacy. But unfortunately the gods of science have not revealed that information to us. Failing to take into account unavailable information is not a fallacy; in other words, the base-rate fallacy only occurs when one fails to take into account available information.

I am not certain about the above. I definitely want to talk to some more statistically savvy people about this. Any thoughts?



wait, what?

I'm not sure I can articulate why, but this makes me want to stop blogging...


HOPOS 2016: Submit an abstract

HOPOS 2016 Call for Submissions
June 22-25, 2016, Minneapolis, Minnesota, USA

Keynote Speakers:

Karine Chemla (REHSEIS, CNRS, and Université Paris Diderot)

Thomas Uebel (University of Manchester)

HOPOS: The International Society for the History of Philosophy of Science will hold its eleventh international congress in Minneapolis, on June 22-25, 2016. The Society hereby requests proposals for papers and for symposia to be presented at the meeting. HOPOS is devoted to promoting research on the history of the philosophy of science. We construe this subject broadly, to include topics in the history of related disciplines, including computing, in all historical periods, studied through diverse methodologies. In order to encourage scholarly exchange across the temporal reach of HOPOS, the program committee especially encourages submissions that take up philosophical themes that cross time periods. If you have inquiries about the conference or about the submission process, please write to Maarten van Dyck: maarten.vandyck [at] ugent.be.


To submit a proposal for a paper or symposium, please visit the conference website: http://hopos2016.umn.edu/call-submissions


Descartes on Mathematical Truth and Mathematical Existence

This is not so much a post as a note to myself for something I would like to think about in the future.

In the first Meditation, Descartes writes:
"arithmetic, geometry, and other such disciplines, which treat of nothing but the simplest and most general things... are indifferent as to whether these things do or do not in fact exist, contain something certain and indubitable."
I should look more into this apparent 'truth-independent-of-reference' position, that mathematical truth is independent of the existence of mathematical entities, especially as an alternative to the Quine-Putnam indispensability argument for the reality of mathematical objects.

Relevant secondary literature:
- Gregory Brown (in "Vera Entia: The Nature of Mathematical Objects in Descartes" Journal of the History of Philosophy, 1980:23-37) contains a nice discussion of the kind of existence mathematical objects have for Descartes, esp. section III:
"mathematical objects in particular, have a "being" that is independent of their actual existence in (physical) space or time, and that is characterized by what Descartes calls 'possible existence'"(p.36).

- Brown quotes Anthony Kenny ("The Cartesian Circle and Eternal Truths," Journal of Philosophy, 1970):
"the objects of mathematics are not independent of physical substances; but they do not support the view that the objects of mathematics depend for their essences on physical existents... . Descartes held that a geometrical figure was a mode of physical or corporeal substance; it could not exist, unless there existed a physical substance for it to exist in. But whether it existed or not, it had a kind of being that was sufficient to distinguish it from nothing, and it had its eternal and immutable essence."


Ontological Commitment, "To be is to be the value of a bound variable," and Schematic Letters in Quine

I am currently working on a paper on Quine's shifting ontological thoughts. Something occurred to me while reading some of his stuff from the late 1930s and 40s, which probably won't make it into the paper, but that I wanted to try to get clear for myself.

Most readers of this blog have heard Quine's famous ontological dictum "To be is to be the value of a bound variable." This is a criterion of ontological commitment for a theory: what the theory says exists is whatever the values of its bound variables are.

Quine includes 'bound', I take it, so that (what he calls) schematic letters do not have existential import. For example, in the expression (x)(P(x) --> P(x)), the P cannot be bound by a quantifier (P) without the language being committed to the existence of properties (or traits, or sets, or whatever you think predicate letters signify). The P is instead a 'dummy letter': the full expression (x)(P(x) --> P(x)) is a schema, not a full sentence in first-order logic, but the schema allows us to say that any sentence that results from substituting an actual predicate for P is a theorem.

Now I can get to what's bothering me. Consider a theory+language, such as primitive recursive arithmetic (PRA), that has (what normally would be called) variables, but does not have any explicitly written-down quantifiers. In such a language, when we see a sentence like x+y = y+x, we can say ‘If we were expressing this in first-order logic, we would understand a pair of universal quantifiers ‘(x)(y)’ out front to make this a sentence,’ but there are actually no quantifier-symbols as part of the language we are considering. So what I’m wondering is: if someone accepts Quine’s line of thought about the difference between (ontologically-committing) variables vs. (ontologically-innocent) schematic letters, then should [/can] that person also say that the x’s and y’s of PRA are schematic letters, not variables? And thus that PRA does not [/need not] commit its users to the existence of the natural numbers -- or to anything else for that matter?

Here is a first potential problem for the Quinean. Let's call Language 1 (L1) the quantifier-free PRA described just above. And let L2 be the first-order logic translation of L1, i.e. L2 just puts the appropriate universal quantifiers in front of every sentence of L1 which contains variables. Now if to be is to be the value of a bound variable, L1 is not committed to numbers (or something number-like enough to satisfy the axioms of PRA), but L2 is. Yet L1 and L2 constitute a paradigm case of ‘merely notational variants’: the same theory, expressed using different notations. So L1 and L2 should either both be committed to the existence of numbers, or neither should.

Now, I can imagine a dedicated Quinean at this point could take the second option: we can consistently take the view that L2 is somehow not 'really' ontologically committed to numbers, because we can translate L2 back into (bound-variable-free) L1 (by just erasing every universal quantifier in every L2 sentence). The general principle underlying this is something like: a theory is committed to X just in case X is a value of a bound variable in every adequate formalization of that theory.

This position strikes me as unintuitive. But I think there is a further reason to reject it. For now consider language L3, which is just L2 + the standard definition (x) = ~(∃x)~. We will then clearly have some ontological commitments (albeit negative ones, i.e. commitments that such-and-such does NOT exist). So perhaps the Quinean will say that "To be is to be the value of a bound variable" is only a recipe for finding the positive ontological commitments of a theory. I'm not sure about that move; perhaps it can be made to work.

So in sum, this makes me wonder whether Quine’s contrast between schematic letters on the one hand, vs. genuine variables on the other, may not be as sharp as he needs it to be. In other words, it is not clear to me that schematic letters can be made ontologically innocent in the way Quine wants them to be.


A few thoughts on Moti Mizrahi's "The Pessimistic Induction: A Bad Argument Gone Too Far"

This post is exactly what the title says. I found this paper especially thought-provoking, so I wanted to try writing down/ nailing down exactly what those provoked thoughts were.

If you want to read the paper, the free, penultimate version is here, and the published, ridiculously expensive version is here (Synthese, 2013: 3209-3226).

Here's the bit from the paper that I want to focus on:
The theories on Laudan's list were not randomly selected, but rather were cherry-picked in order to argue against a thesis of scientific realism. If this is correct, then the pessimistic inductive generalization is a weak inductive argument.

To this pessimists might object that, if we simply do the required random sampling, then the pessimistic inductive generalization would be vindicated and shown to be a strong inductive generalization. So, to get a random sample of scientific theories (i.e., a sample where theories have an equal chance of being selected for the sample), I used the following methodology:

- Using Oxford Reference Online, I searched for instances of the word 'theory' in the following titles: A Dictionary of Biology, A Dictionary of Chemistry, A Dictionary of Physics, and The Oxford Companion to the History of Modern Science.
~ I limited myself to these reference sources to make the task more manageable.
~ Since it is not clear how to individuate theories (e.g., is the Modern Evolutionary Synthesis a theory or is each of its theoretical claims, such as the claims about natural selection and genetic drift, a theory in its own right?), I limited myself to instances of the word 'theory.'

- After collecting 124 instances of 'theory' and assigning a number to each instance, I used a random number generator to select 40 instances out of the 124.

- I divided the sample of 40 theories into three categories: accepted theories (i.e., theories that are accepted by the scientific community), abandoned theories (i.e., theories that were abandoned by the scientific community), and debated theories (i.e., theories whose status as accepted or rejected is in question) (See Table 1).

Based on this sample, pessimists could construct the following inductive generalization:

15% of sampled theories are abandoned theories (i.e., considered false). Therefore, 15% of all theories are abandoned theories (i.e., considered false).

Clearly, this inductive generalization hardly justifies the pessimistic claim that most successful theories are false. Even if we consider the debated theories as false, the percentages do not improve much in favor of pessimists:

27% of sampled theories are abandoned theories (i.e., considered false). Therefore, 27% of all theories are abandoned theories (i.e., considered false).
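For concreteness, the random-sampling step Mizrahi describes (number the 124 collected instances, then select 40 with equal chances) can be sketched like this; the instance labels below are placeholders of my own, not Mizrahi's actual data:

```python
import random

# Stand-in for the 124 collected instances of 'theory' (hypothetical labels).
instances = [f"theory_{i}" for i in range(1, 125)]

# Draw a simple random sample of 40: every instance equally likely,
# no instance drawn twice.
random.seed(0)  # fixed seed, just so this sketch is reproducible
sample = random.sample(instances, k=40)

# The sampled theories would then be sorted by hand into Mizrahi's
# three categories: accepted, abandoned, and debated.
print(len(sample))  # 40
```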

The first thing I wanted to say is that I really like Mizrahi's basic idea here. Philosophers (myself included) sometimes throw up our hands too soon and say that some question is intractable, so I really appreciate that Mizrahi did the work of collecting some data that could place constraints on answers to certain versions of the pessimistic induction.

Here are four thoughts I had about the particulars of Mizrahi's method.

1) 3 of the 4 textual sources are (apparently) intended as reference works for contemporary science, and I assume discarded/ superseded theories are much less likely to appear in such reference works than in history of science reference works (which the last of the 4 is). So I am curious whether the percentages would change significantly if we just looked at that 4th one, and/or at other works that purport to cover the history of science up to the present day.

2) Anti-realists have said this before, but I think it's relevant here too. The more recent a theory, the less likely it is that there is evidence against it: the theory was framed to capture the data available at the time, and so the more recent a theory is, the less time there has been to accumulate/discover anomalous data.

3) The scientific realism debate is often/usually supposed to be restricted to ‘fundamental’ theories -- whatever those are. I don’t know how many of the theories in Tables 1 and 2 would qualify as fundamental. I have attached the table, so you can see for yourself; I'm pretty sure a good portion of them are fundamental, but I also think some of them are not. I don't know several of these theories (RRKM theory, anyone?), but again I wonder how that would affect the percentages.

4) I don't have very strong leanings/ intuitions pro or contra scientific realism (I currently think of myself as an agnostic/ quietist, looking for slightly more well-posed questions in the neighborhood). But something that happens either 15% or 27% of the time does not feel like a miracle (as in 'No-Miracles Argument') to me. Of course, more moderate realists may well say that all they claim is that Pr(Theory is true | Theory is successful) > 0.5. But I have heard a few realistically-inclined people recently talk about 'the no-miracles intuition' or something similar -- but presumably a miracle does not need to be invoked if I predicted your dice roll would come up '4', and then you rolled a 4.


"outgroup homogeneity" and 'continental philosophy'

One phenomenon that social psychologists have found pretty consistently is called 'outgroup homogeneity.' The idea, as I understand it, is that people judge outgroup members (i.e. people who are not in a group they identify with) as being more homogeneous in the stereotypical traits attributed to the outgroup than they judge ingroup members on those same traits.

What gets lumped under the heading 'continental philosophy' today is a very diverse range of traditions and thinkers: phenomenology, structuralism, post-structuralism, deconstruction, existentialism, Nietzsche, Kierkegaard, and so on. Many of these are so different from, and even opposed to, one another that it doesn't really make all that much sense to lump them together under one heading. 'Continental philosophy' is a phrase analytic philosophers devised (Glendinning 2006). So what I'm wondering now is whether the creation of that phrase/ category was facilitated by the outgroup homogeneity effect -- since without it, it would have been harder to gather all these disparate traditions together under a single heading.


Das beste Blog der Welt

I'm sure most folks who check this blog already know about this, but just in case you missed it: André Carus has recently started writing some really interesting posts on his (aptly titled) Carnap Blog. It is required reading for anyone interested in Carnapia.


Thoughts from the Pacific APA meeting

I just got back from the Pacific APA meeting. There were a lot of highlights for me: the session on Eugenics and Philosophy was really excellent (I especially got a lot out of Rob Wilson's opening remarks about his work on sterilized people in his province, as well as Adam Cureton's paper on disability and parenting); Nancy Cartwright's Dewey lecture was really interesting; and I was happy to see History of Analytic very well represented in several spots on the program. That included the author-meets-critics session on my Carnap, Tarski, Quine book -- I was very fortunate to have great commentators: Rick Creath, Gary Ebbs, and Greg Lavers. I'm very thankful to Richard Zach for organizing the session too, and to Sean Morris for stepping in to chair at the last minute. And the conversation with the audience was helpful to me as well. Happily, even if you weren't at the session, you'll still be able to see what they said: their insightful comments will eventually appear in a symposium in Metascience.

One thing that I noticed was that there were not a lot of talks on philosophy of science proper. (Though happily there were some, e.g. an author-meets-critics on Jim Tabery's Beyond Versus: The Struggle to Understand the Interaction of Nature and Nurture.) Interestingly, there were a decent number of philosophers of science there, but often they were presenting something that was not philosophy of science (like me), or speaking on something philosophy-of-science adjacent (e.g. a philosopher of biology speaking on bioethics).

I was wondering whether anyone had hypotheses about this -- one hypothesis is that because the PSA exists and is pretty big, the PSA 'cannibalizes' the presentations from the APA. Another tack would be that my perception of the percentage of the profession that identify as philosophers of science is not accurate, and the APA program accurately reflected the true percentage. But I am very curious to hear other explanations.

(And the baked-goods highlight of the trip was the coffee bun at Papparoti -- it was the most interesting pastry I've had in a while.)


Historiographical reflections

I know this "scumbag analytic philosopher" meme is played out, but this one just came to me:

Commence groaning...