10/14/2018

The perfectionist's paradox

EDIT: ADDED Oct. 16 2018: As Karim Zahidi notes in the first comment below, I made an elementary logical error in thinking that (1) is evidence for (2). So I have crossed out the original mistake like this below. But I still think the argument after that step may work: so now the argument just starts from (2) as a supposedly plausible claim, instead of trying to justify (2) via (1).

----------

This might sound initially like a too-clever undergrad ‘gotcha’ paradox. However, I think that at least for some folks who have perfectionist tendencies, the following is experienced as a genuine difficulty in their lives.

The following strikes me as plausible:

  (1) It’s OK to make some mistakes.

[Crossed out in the Oct. 16 edit:] If (1) is true, then it seems the following should be true as well, since it’s just a restricted version of (1):


  (2) It’s morally OK to make some moral mistakes.

But

  (3) Making a moral mistake is doing something morally impermissible,

and

  (4) if something is morally OK, then it is morally permissible.

And (2)-(4) logically entail

  (C) It’s morally permissible to do some morally impermissible things.

And (C) looks like a contradiction.
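For concreteness, the entailment can be regimented in plain first-order terms. This is just one way to do it; the predicate names Mistake, OK, and Perm are my own shorthand, and the regimentation flattens out the deontic operators rather than using official deontic logic:

\[
\begin{array}{ll}
(2) & \exists x\,\bigl(\mathit{Mistake}(x) \wedge \mathit{OK}(x)\bigr)\\
(3) & \forall x\,\bigl(\mathit{Mistake}(x) \rightarrow \neg\mathit{Perm}(x)\bigr)\\
(4) & \forall x\,\bigl(\mathit{OK}(x) \rightarrow \mathit{Perm}(x)\bigr)\\
(C) & \exists x\,\bigl(\mathit{Perm}(x) \wedge \neg\mathit{Perm}(x)\bigr)
\end{array}
\]

Take any witness a for (2): (4) gives Perm(a), (3) gives not-Perm(a), and (C) follows by conjunction and existential generalization.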

(For all I know this is already out there somewhere, but it was not on the interesting list of paradoxes of deontic logic in the Stanford Encyclopedia.)

7/17/2018

'Extra-Weak' Underdetermination

I’ll start briefly with a few reminders and fix a little terminology. I then introduce a new(?) sub-species of underdetermination argument, whose premises are logically weaker than existing underdetermination arguments, but still(?) deliver the anti-realist’s conclusion.

Underdetermination arguments in philosophy of science aim to show that an epistemically rational person (= someone who weighs their available evidence correctly) should suspend belief in the (approximate) truth of current scientific theories, even if such theories make very accurate predictions.

A scientific theory T is strongly underdetermined = T has a genuine competitor theory T*, and T and T* make all the same observable predictions. (So the two theories’ disagreement must concern unobservable stuff only.)

A scientific theory T is weakly underdetermined = T has a genuine competitor theory T*, and all the observable data/ evidence gathered thus far is predicted equally well by both T and T*.(†) (So collecting new data could end a weak underdetermination situation.)

Anti-realists then argue from the purported fact that (all/most) of our current scientific theories are underdetermined, to the conclusion that an epistemically rational person should suspend belief in (all/most) of our current scientific theories.

Realists can reasonably respond by arguing that even weak underdetermination, in the above sense, is not common: even if one grants that there is an alternative theory T* that is consistent with the data gathered so far, that certainly does not entail that T and T* are perfectly equally supported by the available data. There is no reason to expect T and T* to be a perfect ‘tie’ on every other theoretical virtue besides consistency with the data. (Theoretical virtues here include e.g. simplicity, scope, relation to other theories, etc.) The evidential support for a hypothesis is not merely a matter of the consistency of that hypothesis with the available data.

At this point, anti-realists could dig in their heels and simply deny the immediately preceding sentence. (The other theoretical virtues are ‘merely pragmatic,’ i.e. not evidential.) But that generates a standoff/stalemate, and furthermore I find that response unsatisfying, since an anti-realist who really believes that should probably be a radical Cartesian skeptic (yet scientific anti-realism was supposed to be peculiar to science).

So here’s another reply the anti-realist could make: grant the realist’s claims that even weak underdetermination is not all that common in the population of current scientific theories, and furthermore that typically our current theory T is in fact better supported than any of the competitors T1, T2, ... that are also consistent with the data collected so far. The anti-realist can grant these points, and still reach the standard underdetermination-argument conclusion that we should suspend belief in the truth of T, IF the sum of the credences one should assign to T1, T2, ... is at least 0.5.

For example: suppose there are exactly 3 hypotheses consistent with all the data collected thus far, and further suppose
 Pr(T) = 0.4,
 Pr(T1) = 0.35, and
 Pr(T2) = 0.25.
In this scenario, T is better supported by the evidence than either T1 or T2, so T is not weakly underdetermined. However, assuming that one should not believe p is true unless Pr(p) > 0.5, one should still not believe T in the above example.

I call such a T extra-weakly underdetermined: the sum of rational degrees of belief one should have in T’s competitors is greater than or equal to the rational degree of belief one should have in T.
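Just to make the arithmetic explicit, here is a tiny Python sketch of the example above; the three hypotheses and the 0.5 belief threshold are the ones already stipulated, and nothing else is assumed:

# Toy version of the example above: three mutually exclusive, jointly
# exhaustive hypotheses consistent with the data collected so far.
credences = {"T": 0.4, "T1": 0.35, "T2": 0.25}

best_supported = max(credences, key=credences.get)
competitor_sum = sum(p for name, p in credences.items() if name != "T")

print(best_supported)                    # 'T': it beats each rival taken individually
print(competitor_sum)                    # 0.6: the rivals jointly outweigh T
print(credences["T"] > 0.5)              # False: so one should not believe T
print(competitor_sum >= credences["T"])  # True: T is extra-weakly underdetermined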

We can think about this using the typical toy example used to introduce the idea of underdetermination in our classes, where we draw multiple curves through the same finite set of data points.
We can simultaneously maintain that the straight-line hypothesis (Theory A) is more probable than the others, but nonetheless deny that we should believe it, as long as the other hypotheses’ rational credence levels sum to 0.5 or higher. And there are of course infinitely many competitor curves to Theory A, so it is an infinite sum. The realist, in response to this argument, will thus have to say that this infinite sum converges to a value less than 0.5.
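To see what is at stake in that last sentence, here is a toy Python calculation. The geometric decay of the rivals' credences is pure stipulation on my part (nothing about the curve-fitting case forces it); the point is just that whether belief in Theory A survives depends on where the infinite sum lands:

# Stipulated toy model: the k-th rival curve (k = 1, 2, 3, ...) gets credence
# first * ratio**(k - 1), so the rivals' total is the geometric sum
# first / (1 - ratio).
def rivals_total(first, ratio):
    return first / (1 - ratio)

fast_decay = rivals_total(0.2, 0.5)   # 0.4: Theory A keeps 0.6, so belief survives
slow_decay = rivals_total(0.2, 0.7)   # ~0.67: Theory A drops to ~0.33

print(fast_decay, 1 - fast_decay > 0.5)   # 0.4 True
print(slow_decay, 1 - slow_decay > 0.5)   # ~0.67 False
# In the slow-decay case Theory A still beats every individual rival (each gets
# at most 0.2), yet it is extra-weakly underdetermined, so one should not believe it.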

The above argument from extra-weak underdetermination is clearly related to the ‘catch-all hypothesis’ (in the terminology above, ~T) point that has been discussed elsewhere in the literature on realism, especially in connection with Bayesian approaches (see here and the references therein). But I think there is something novel about the extra-weak underdetermination argument: as we add new competitor theories to the pool (T3, T4… in the example above), the rational credence level we assign to each hypothesis will presumably go down. (I include ‘presumably,’ because it is certainly mathematically possible for the new hypothesis to only bring down the rational credence level for some but not all of the old hypotheses.) So the point here is not just that there is some catch-all hypothesis, which it is difficult(?) to assign a degree of rational belief to (that's the old news), but also that we increase the probability of something like the 'catch-all' hypothesis by adding new hypotheses to it. (I have to say 'something like' it, because T1, T2... are specific theories, not just the negation of T.)
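One toy way to picture that last point, where the proportional-rescaling rule is just an assumption for illustration and not a claim about how rational updating must go: suppose we become aware of a new competitor T3, give it credence 0.2, and scale the old credences down so everything still sums to 1.

# Toy illustration of adding a new competitor to the pool from the example above.
# Assumption (for illustration only): T3 gets credence 0.2 and the old credences
# are rescaled proportionally.
old = {"T": 0.4, "T1": 0.35, "T2": 0.25}
new = {name: p * (1 - 0.2) for name, p in old.items()}
new["T3"] = 0.2

print(round(new["T"], 2))                                         # 0.32: credence in T goes down
print(round(sum(p for name, p in new.items() if name != "T"), 2)) # 0.68: the non-T pool goes up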

----

(†): Note that, unlike some presentations of underdetermination, I do not require that T and T* both predict ALL the available data. I take "Every theory is born refuted" seriously. And I actually think this makes underdetermination scenarios more likely, since a competitor theory need not be perfect -- and the imperfections/ shortcomings of T could be different from those of T* (e.g. T might be more complicated, while T* has a narrower range of predictions). Hasok Chang's Is Water H2O? makes this point concretely and (to my mind) compellingly, about the case of Lavoisier's Oxygen-based chemistry vs. Priestley's Phlogiston-based chemistry.

3/13/2018

cognitive impenetrability of some aesthetic perception

For me, one of the interesting experiences of getting older is seeing, from the internal first-person perspective, many of the generalizations one hears about 'getting older' come true in my own life. One of the most obvious/ salient ones for me is about musical tastes. I love a lot of hip hop from the early-to-mid 90's. (This is probably still my favorite hip hop album of all time.) I do also like some of the stuff that is coming out now, but on average, the beats just sound bad to me. In particular, the typical snare sound -- I can't get over how terrible and thin it sounds.

But on the other hand, I know full well that, as people get older, they start thinking 'Young people's music today is so much worse than when I was a kid!' And that I heavily discounted old people's views about new music when I was in school.

Yet this makes absolutely no difference to my perceiving the typical trap snare sound today as really insubstantial and weak -- just ugly. The theoretical knowledge makes zero difference to my experience.

This reminded me of Fodor's famous argument from the Müller-Lyer illusion* for the cognitive impenetrability of perception. No matter how many times I am told that the two horizontal lines are the same length, no matter how many times I lay a ruler next to each line in succession and measure them to be the same length, I still perceive one line as shorter than the other. My theoretical knowledge just can't affect my perception. In a bit of jargon, the illusion is mandatory.

My experience of the typical hip hop snare sound today is similarly mandatory for me, despite the fact that I know (theoretically/ cognitively) that, as an old person, I should discount my aesthetic impressions of music coming out today.

This seems like it could make trouble for a Fodorian who wants to use the mandatoriness of illusions as an argument that perception is unbiased/ theory-neutral -- in a conversation about the best hip hop albums of all time, my aesthetic data would be extremely biased towards stuff that came out between 1989 and 1995.

-----
*(Have you seen the dynamic Müller-Lyer illusions? Go here and scroll down to see a few variants.)