8/11/2015

A few thoughts on Moti Mizrahi's "The Pessimistic Induction: A Bad Argument Gone Too Far"

This post is exactly what the title says. I found this paper especially thought-provoking, so I wanted to try writing down/ nailing down exactly what those provoked thoughts were.

If you want to read the paper, the free, penultimate version is here, and the published, ridiculously expensive version is here (Synthese, 2013: 3209-3226).

Here's the bit from the paper that I want to focus on:
The theories on Laudan's list were not randomly selected, but rather were cherry-picked in order to argue against a thesis of scientific realism. If this is correct, then the pessimistic inductive generalization is a weak inductive argument.

To this pessimists might object that, if we simply do the required random sampling, then the pessimistic inductive generalization would be vindicated and shown to be a strong inductive argument. So, to get a random sample of scientific theories (i.e., a sample where theories have an equal chance of being selected for the sample), I used the following methodology:

- Using Oxford Reference Online, I searched for instances of the word 'theory' in the following titles: A Dictionary of Biology, A Dictionary of Chemistry, A Dictionary of Physics, and The Oxford Companion to the History of Modern Science.
~ I limited myself to these reference sources to make the task more manageable.
~ Since it is not clear how to individuate theories (e.g., is the Modern Evolutionary Synthesis a theory or is each of its theoretical claims, such as the claims about natural selection and genetic drift, a theory in its own right?), I limited myself to instances of the word 'theory.'

- After collecting 124 instances of 'theory' and assigning a number to each instance, I used a random number generator to select 40 instances out of the 124.

- I divided the sample of 40 theories into three categories: accepted theories (i.e., theories that are accepted by the scientific community), abandoned theories (i.e., theories that were abandoned by the scientific community), and debated theories (i.e., theories whose status as accepted or rejected is in question) (See Table 1).

...
Based on this sample, pessimists could construct the following inductive generalization:

15% of sampled theories are abandoned theories (i.e., considered false). Therefore, 15% of all theories are abandoned theories (i.e., considered false).

Clearly, this inductive generalization hardly justifies the pessimistic claim that most successful theories are false. Even if we consider the debated theories as false, the percentages do not improve much in favor of pessimists:

27% of sampled theories are abandoned theories (i.e., considered false). Therefore, 27% of all theories are abandoned theories (i.e., considered false).
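
(Before getting to my own thoughts, here is a minimal sketch -- mine, not Mizrahi's -- of the sampling and tallying procedure the quoted passage describes. The instance labels are placeholders, and the rough margin of error at the end is my addition, not a calculation from the paper.)

```python
import math
import random

# Placeholder stand-ins for the 124 instances of 'theory' Mizrahi
# collected from the Oxford reference works (labels are hypothetical).
instances = [f"theory_{i}" for i in range(1, 125)]

# Draw 40 instances uniformly at random, as the paper describes.
sample = random.sample(instances, 40)

# Suppose, as the 15% figure implies, that 6 of the 40 sampled theories
# are abandoned (6/40 = 15%); the 27% figure then corresponds to roughly
# 11 of 40 once the debated theories are counted as false as well.
abandoned = 6
p = abandoned / len(sample)

# Naive 95% margin of error for a proportion estimated from n = 40.
moe = 1.96 * math.sqrt(p * (1 - p) / len(sample))
print(f"abandoned: {p:.0%}, rough 95% interval: {p - moe:.0%} to {p + moe:.0%}")
```

(On these numbers the interval runs from roughly 4% to 26%, so even allowing for sampling error, the estimate stays well below the pessimist's 'most successful theories are false' threshold.)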

The first thing I wanted to say is that I really like Mizrahi's basic idea here. Philosophers (myself included) sometimes throw up our hands too soon and say that some question is intractable, so I really appreciate that Mizrahi did the work of collecting some data that could place constraints on answers to certain versions of the pessimistic induction.

Here are four thoughts I had about the particulars of Mizrahi's method.

1) 3 of the 4 textual sources are (apparently) intended as reference works for contemporary science, and I assume discarded/ superseded theories are much less likely to appear in such works than in history of science reference works (which the last of the 4 is). So I am curious whether the percentages would change significantly if we looked just at that 4th source, and/or at other works that purport to cover the history of science up to the present day.

2) Anti-realists have said this before, but I think it's relevant here too. The more recent a theory, the less likely it is that there is evidence against it: the theory was framed to capture the data available at the time, so the more recent a theory is, the less time there has been to accumulate/discover anomalous data.

3) The scientific realism debate is often/usually supposed to be restricted to ‘fundamental’ theories -- whatever those are. I don’t know how many of the theories in Tables 1 and 2 would qualify as fundamental. I have attached the table, so you can see for yourself; I'm pretty sure a good portion of them are fundamental, but I also think some of them are not. I'm not familiar with several of these theories (RRKM theory, anyone?), but again I wonder how restricting the sample to fundamental theories would affect the percentages.

4) I don't have very strong leanings/ intuitions pro or contra scientific realism (I currently think of myself as an agnostic/ quietist, looking for slightly more well-posed questions in the neighborhood). But something that happens either 15% or 27% of the time does not feel like a miracle (as in the 'No-Miracles Argument') to me. Of course, more moderate realists may very well say that all they claim is that Pr(Theory is true | Theory is successful) > 0.5. But I have recently heard a few realist-inclined people talk about 'the no-miracles intuition' or something similar -- and presumably a miracle does not need to be invoked if I predicted your die roll would come up '4', and then you rolled a 4.
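
(To put a number on that last example -- the gloss is mine, not Mizrahi's -- the chance of correctly calling one roll of a fair die is Pr(roll comes up 4) = 1/6 ≈ 17%, squarely between the 15% and 27% figures above.)

```python
# Chance of correctly predicting a single roll of a fair six-sided die.
p = 1 / 6
print(f"{p:.0%}")  # prints 17% -- between the 15% and 27% figures above
```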

5/01/2015

"outgroup homogeneity" and 'continental philosophy'

One phenomenon that social psychologists have found pretty consistently is called 'outgroup homogeneity.' The idea, as I understand it, is that people judge outgroup members (i.e. people who are not in a group they identify with) as being more homogeneous in the stereotypical traits attributed to the outgroup than they judge ingroup members on those same traits.

What gets lumped under the heading 'continental philosophy' today is a very diverse range of traditions and thinkers: phenomenology, structuralism, post-structuralism, deconstruction, existentialism, Nietzsche, Kierkegaard, and so on. Many of these are so different from, and even opposed to, one another that it doesn't really make all that much sense to lump them together under one heading. 'Continental philosophy' is a phrase analytic philosophers devised (Glendinning 2006). So what I'm wondering now is whether the creation of that phrase/ category was facilitated by the outgroup homogeneity effect -- since without it, it would have been harder to gather all these disparate traditions under a single heading.

4/07/2015

Das beste Blog der Welt

I'm sure most folks who check this blog already know about this, but just in case you missed it: André Carus has recently started writing some really interesting posts on his (aptly titled) Carnap Blog. It is required reading for anyone interested in Carnapia.

4/05/2015

Thoughts from the Pacific APA meeting


I just got back from the Pacific APA meeting. There were a lot of highlights for me: the session on Eugenics and Philosophy was really excellent (I especially got a lot out of Rob Wilson's opening remarks about his work on sterilized people in his province, as well as Adam Cureton's paper on disability and parenting); Nancy Cartwright's Dewey lecture was really interesting; and I was happy to see History of Analytic very well represented in several spots on the program. That included the author-meets-critics session on my Carnap, Tarski, Quine book -- I was very fortunate to have great commentators: Rick Creath, Gary Ebbs, and Greg Lavers. I'm very thankful to Richard Zach for organizing the session, and to Sean Morris for stepping in to chair at the last minute. And the conversation with the audience was helpful to me as well. Happily, even if you weren't at the session, you'll still be able to see what the commentators said: their insightful comments will eventually appear in a symposium in Metascience.

One thing that I noticed was that there were not a lot of talks on philosophy of science proper. (Though happily there were some, e.g. an author-meets-critics on Jim Tabery's Beyond Versus: The Struggle to Understand the Interaction of Nature and Nurture.) Interestingly, there were a decent number of philosophers of science there, but often they were presenting something that was not philosophy of science (like me), or speaking on something philosophy-of-science adjacent (e.g. a philosopher of biology speaking on bioethics).

I was wondering whether anyone had hypotheses about this. One hypothesis is that, because the PSA exists and is pretty big, it 'cannibalizes' philosophy of science presentations that would otherwise go to the APA. Another is that my perception of the percentage of the profession that identifies as philosophers of science is not accurate, and the APA program accurately reflected the true percentage. But I am very curious to hear other explanations.

(And the baked-goods highlight of the trip was the coffee bun at Papparoti -- it was the most interesting pastry I've had in a while.)

10/21/2014

Historiographical reflections

I know this "scumbag analytic philosopher" meme is played out, but this one just came to me:


Commence groaning...

10/04/2014

(Why) must ethical theories be consistent with intuitions about possible cases?

Since Brian Weatherson recently classified this blog as 'active,' I thought I should try to live up to that billing.

A question came up in one of my classes yesterday. The student asked (in effect): Why wouldn't a moral theory that makes all the right 'predictions' about actual cases be good enough? Why demand that a moral theory also be consistent with our intuitions about possible cases, even science-fiction-ish ones?

(The immediate context was a discussion of common objections to utilitarianism, specifically, slavery and the utility monster. The student said, sensibly I think, that the utilitarian could reply that all actual cases of slavery are bad on utilitarian grounds, and there are no utility monsters.)

I know that some philosophers have argued that if a moral claim is true, then it is (metaphysically?) necessarily true: there is no possible world in which e.g. kicking kittens merely for fun is morally permissible. If you accept that all moral claims are like this, then I can see why you would demand our moral theories be consistent with our intuitions about all possible cases. But if one does not accept that all moral truths are metaphysically necessary, is there any other reason to demand the theory make the right prediction about merely possible cases?

This question seems especially pressing to me, if we think one of the main uses of moral theories is as a guide to action, since we only ever act in the actual world. However, now that I say that explicitly, I realize that whenever we make a decision, the option(s) we decided against are merely possible situations. So maybe that could explain why an ethical theory needs to cover merely possible cases? (Though even there, it need not cover all metaphysically possible cases -- e.g. the utility monster worries never need to be part of my actual decision-making process.)

9/15/2014

Are video games counter-examples to Suits' definition of 'game'?

Many readers are familiar with Bernard Suits' definition of 'game' in The Grasshopper. For those of you who aren't, Suits offers this definition of playing a game: "engaging in an activity directed towards bringing about a specific state of affairs, using only means permitted by rules, where the rules prohibit more efficient in favour of less efficient means, and where such rules are accepted just because they make possible such activity" (pp. 48-9).

I'm wondering whether video games are counter-examples, because of the condition "the rules prohibit more efficient in favour of less efficient means." This condition makes sense for most games: in golf, it would be more efficient to just carry the ball by hand and put it into the hole; in poker, it would be more efficient to just reach across the table and take all your opponents' chips/cash. But what is the analogue of these 'more efficient' means in a video game?

One might point to cheat codes, but even if a cheat code does satisfy this condition of Suits' definition, we can at least imagine a video game that doesn't have cheat codes.

6/09/2014

New(-ish) BJPS blog

Maybe everyone already knows about this, but the British Journal for the Philosophy of Science now has a blog: Auxiliary Hypotheses.  There are multiple thought-provoking posts -- happily, it doesn't only announce tables of contents.

(And I might have my own substantive post up sometime in the near future... I'm wrestling with Quine for the umpteenth time, and hopefully will be able to write something here about it.)

3/29/2014

implicit bias and vaccinations

Question: Is having a pernicious bias analogous to not getting vaccinated?

Philosophers have been talking more about implicit biases recently. I was recently reading Payne's work on 'weapon bias'; here are the first couple of sentences from the abstract of this paper:
"Race stereotypes can lead people to claim to see a weapon where there is none. Split-second decisions magnify the bias by limiting people’s ability to control responses."
That is, if forced to make a snap judgment, people in the US today are more likely to identify a non-gun tool as a gun if they have just seen a picture of someone typically racialized as black than if they have just seen someone typically racialized as white.  Also, if people have been under a heavy cognitive load before classifying an object as a gun or a tool, they are more likely to make this mistake.  If subjects have plenty of time, and have not been under heavy cognitive load, then they make far fewer mistakes, and, more importantly, the rate of mistakes is the same regardless of the race seen.

We typically say that people should be permitted complete freedom of thought: we should only be held accountable for our actions, not our beliefs.  I think (and I could be wrong about this) the usual justification for this complete freedom is: one can always control which thoughts one acts on.  For example, I can think 'I wish so-and-so were dead' without killing them, or even putting them at increased risk of being killed.  If we did not have control over whether our thoughts issued in corresponding actions, then simply having certain kinds of beliefs would put others at increased risk for harm.  It would arguably be a kind of negligence.

Cases like weapon bias suggest that the 'usual justification' above is wrong, and that my having certain (conscious or unconscious) beliefs does put others at increased risk for harm.  The usual justification holds in good circumstances: when I have plenty of time, and am not under a heavy cognitive load, I can control which of my thoughts issue in corresponding actions.  But sometimes I find myself in less than good circumstances (at least sometimes through no fault of my own).  And in those circumstances, my pernicious biases are more likely to harm others.

A biased person in a social setting that includes people stigmatized by that bias seems analogous to me to someone who has not gotten a vaccination for a communicable disease and is not quarantined.  If I don't get vaccinated, then it is of course possible that I will never get the measles, and thus never harm anyone.  But my lack of vaccination raises the risk that others will be harmed.  Having pernicious biases seems to be the same, if the weapon bias findings are representative.

Can someone talk me out of this line of reasoning?  I think I have an obligation to get vaccinated, but an obligation to have a 'thought vaccination' (whether for conscious or unconscious thoughts) sounds like the Thought Police/ brainwashing -- a result I'm guessing most people want to avoid.

1/03/2014

π, τ, and Quine's pragmatism

Some readers may already be familiar with the π vs. τ debate. If not, I recommend checking out Michael Hartl's τ manifesto and this fantastic short video by Vi Hart. An attempt at a balanced evaluation of π vs. τ can be found here.

For those who don't know, τ is just 2π. The defenders of τ argue (as seen in the above links) that using it instead of π makes many things much clearer and simpler/ more elegant.
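
To make the claimed gains concrete, here are a few of the stock comparisons (my summary of the linked material, not a quotation from it):

τ = 2π
C = 2πr = τr (circumference)
A = πr² = (1/2)τr² (area)
a quarter turn is π/2 rad = τ/4 rad; a full turn is 2π rad = τ rad

The pattern the τ-proponents stress: with τ, a fraction of a turn is that same fraction of τ radians, at the cost of an extra factor of 1/2 in the area formula.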

Let's assume for present purposes that the τ-proponents turn out, in the end, to be right. I want to ask a further question: what would this then say about Quine's denial of the analytic-synthetic distinction? Quine's denial, virtually all agree, is the claim that all rational belief change is pragmatic, i.e. there is no principled difference between questions of evidence/justification on the one hand, and questions of efficiency and expedience on the other (= between external and internal questions, i.e. between practical questions of which language-form to adopt, and questions of whether a particular empirical claim is supported by the available evidence).

So here's my question: if Quine is right, then is our old friend C=2πr simply wrong (and C=τr right)? If not, how can a Quinean wiggle out of that consequence? And if so (i.e. C=2πr really is wrong), does the Quinean have any way of softening the sting of this apparently absurd consequence?
