5/28/2021

One argument for free logic over classical logic

I have just started working on an 'opinionated introduction' to free logic for the Cambridge Elements series in Philosophy and Logic. In classical logic, for a character or string of characters to be a name, it must refer to exactly one individual. Free Logic relaxes this assumption. Names can be 'empty' in Free Logic.

I'm currently working on the section that motivates Free Logic. I wrote up what I think are the 'standard' reasons in favor of Free Logic, and then added the following one, which I'm not 100% sure about yet. I don't think I've seen it anywhere else before, but please correct me if someone else has made this argument already!

Suppose s is a string of characters with all the grammatical or syntactic markers of a name of an individual. We explicitly leave it open whether s refers to exactly one individual, i.e. we leave it open whether s is like 'Angela Merkel', or is instead like 'Zeus'. For the classical logician, 'Zeus' cannot be a name. So the classical logician holds that some strings of characters with the form s=s are not true (e.g., ‘Zeus = Zeus’), while other strings with the same form are true (e.g., ‘Angela Merkel = Angela Merkel’). However, this fact about classical logic conflicts with the widely-accepted principle that a logical truth is true in virtue of its logical form. That is:

(FORMAL) If a string of characters is a logical truth, then every string with the same grammatical or syntactic form is also true.
Since ‘Angela Merkel = Angela Merkel’ has the same grammatical or syntactic form as ‘Zeus = Zeus’, and classical logic classifies the first but not the second as a logical truth, classical logic violates this widely-held principle (FORMAL). [EDIT/ UPDATE (June 1 2021): This argument might be improved by replacing every instance of 'logical truth' with 'theorem', and 'logical form' with 'syntactic form'; see the 7th comment in the comment thread to this post]
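
Schematically, the complaint runs like this (the regimentation below is mine, added just to display the structure; it is not from any published presentation):

```latex
% Self-identity, provable in classical logic for any name t:
\[ \vdash t = t \]
% (FORMAL), read as a schema over strings \varphi and \psi:
\[ \mathrm{LogTruth}(\varphi) \wedge \mathrm{SameForm}(\varphi, \psi)
   \;\Rightarrow\; \mathrm{True}(\psi) \]
```

Taking φ = ‘Angela Merkel = Angela Merkel’ and ψ = ‘Zeus = Zeus’ yields the counterexample: classical logic certifies φ as a logical truth, ψ shares its grammatical or syntactic form, yet the classical logician denies that ψ is true.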

'Positive' free logicians can avoid this problem: they make every instance of s=s, including 'Zeus = Zeus', a logical truth (again, where s has all the grammatical features of a name). And 'negative' and 'neutral' free logics make no instances of s=s logical truths. (However, as Nolt points out in the Free Logic SEP entry, "[I]n negative or neutral free logic [it] is not the case [that] ... any substitution instance of a valid formula ... is itself a valid formula"; see the reference there for explanation.)

5/23/2021

Marquis's double standard

I teach bioethics most years. And like many people who teach bioethics, I teach Don Marquis’s article that argues that typical cases of voluntary abortion are seriously morally wrong. There are many objections that one can and should make to the claims in this article. This semester, another one came to me. I have not seen it before. This objection may very well already be out there; Google Scholar says Marquis’s article has been cited 618 times. I wrote it down, mostly in order to get it clear in my own head. If something like this argument is already out there somewhere among those 618 citing articles, please let me know.

Everyone agrees that in the large majority of cases, murder is seriously morally wrong. Marquis asks the question: Why? What makes murder seriously morally wrong? Marquis’s answer:

(FLO) If something has a future like ours (with many “experiences, activities, projects, and enjoyments” (189)), then it is prima facie seriously morally wrong to destroy that thing.
Marquis combines (FLO) with the (dubious*) claim that a typical fetus has a future like ours to derive an anti-abortionist conclusion.

One initial critical reaction to (FLO): Are you saying it’s morally permissible to kill people who are near the end of their lives, since they have very little future left?

Marquis’s (dialectically correct and fair) reaction: No. If you read principle (FLO) carefully, you’ll see that an entity’s having a future like ours is SUFFICIENT to make it morally wrong to destroy that entity. It is NOT a necessary condition: Marquis is not claiming that if a being lacks a future like ours, then it is morally permissible to kill it. He points out that there can be other reasons, besides having a future like ours, why it is wrong to kill people who have very little future remaining.

My ultimate conclusion in this post is that Marquis does not allow one of his opponents the exactly parallel ‘correct and fair’ reaction, for their competing position. That is, Marquis’s defense relies on a double standard: he allows himself to have multiple (non-competing) explanations for why different killings are morally wrong, but he does not allow his opponents to have multiple non-competing explanations for why different killings are morally permissible.

Let's get started. One way to resist Marquis’s argument is to offer an alternative answer to the question ‘What makes murder wrong?’ One alternative answer Marquis considers is the ‘Desire Account’:

(DESIRE) If a being has a desire to live, then it is prima facie seriously morally wrong to destroy it.
Clearly, if one accepts this as the correct explanation of what makes murder wrong, then the main motivation to accept (FLO) disappears, and with it a central motivation for accepting Marquis’s anti-abortion conclusion.

Marquis responds as follows. (DESIRE) does not generate a valid argument that abortion is typically morally permissible, even if the anti-abortionist grants that the fetus lacks a desire to live. To create a valid pro-choice argument from the premise that the fetus lacks a desire to live, we would need the converse direction of the conditional in (DESIRE), namely

(CONVERSE DESIRE) If a being lacks the desire to live, then it is prima facie morally permissible to destroy it.
And (CONVERSE DESIRE) is incorrect. As Marquis points out, it is not morally permissible to kill a person who is currently asleep, or who is strongly suicidal, even though neither of those two types of people currently has a desire to live.

So much for set-up; now I can state my point. I happily grant that (CONVERSE DESIRE) is incorrect. But I deny that someone who accepts the Desire Account—even if they accept it in order to undercut Marquis’s argument—must accept (CONVERSE DESIRE).

Obviously, logically, one can accept (DESIRE) without accepting (CONVERSE DESIRE). So Marquis’s reply must be that a person who criticizes his argument by appealing to (DESIRE) must dialectically be committed to (CONVERSE DESIRE), if they actually hope to undercut his argument. Spelling this out: Marquis’s imagined critic thinks (DESIRE) better explains what makes murder wrong than (FLO); thus this critic accepts (DESIRE) in place of (FLO), thereby removing needed evidential (abductive, inference-to-the-best-explanation) support for one of the premises of Marquis’s argument. But Marquis thinks that, in order to undermine his anti-abortionist argument, his opponent needs (CONVERSE DESIRE). Why? (To be honest I’m not 100% sure, but:) The combination of (DESIRE) plus the claim that the fetus lacks a desire to live does not entail that it’s morally permissible to destroy a fetus. In order for ‘Fetuses lack desires’ to deliver a validly derived pro-choice conclusion, (CONVERSE DESIRE) is needed as a premise. Therefore, Marquis concludes, for proponents of the Desire Account to have an argument for their pro-choice position, they must accept (CONVERSE DESIRE).

But this is incorrect. Someone can believe that the Desire Account’s explanation of why killing adults is wrong is at least as good as Marquis’s (FLO), and use some other rationale to justify their pro-choice position that does not use (CONVERSE DESIRE) as a premise. (For example, following J. J. Thomson, they could claim that abortion is morally permissible because I have a prima facie right to control the degree to which other beings use my body.) Marquis’s criticism would only be legitimate if the only way a proponent of the Desire Account could argue for the permissibility of abortion is by appealing to whether or not beings have a desire to live. But that seems pretty clearly false, or at least not independently motivated.

Now, one might imagine Marquis responding to this by saying something like ‘My overall account is better, because (FLO), unlike (DESIRE), gives a unified treatment of abortion cases and murder,’ or ‘The Desire Account proponent you have described has to make an ad hoc maneuver, having one explanation for what makes killing adults wrong, and a totally different explanation for what makes destroying fetuses morally acceptable.’ And here (at last) we get to the ‘double standard’ I mentioned at the beginning. Marquis allows himself to have one explanation for what makes killing a typical child or middle-aged adult wrong, and a separate, distinct explanation for what makes killing a very elderly person wrong. If he didn’t allow himself two distinct explanations, then the first (prima facie unfair) criticism we saw of his view would work: his explanation of what makes murder wrong would have to be rejected, on the grounds that it does not entail that a nursing home massacre is a moral atrocity (of course, it doesn’t entail that it isn’t an atrocity, either).

And eliminating the double standard would seriously undermine Marquis’s position. For what happens if we apply a uniform standard to both the future-like-ours account of what makes killing wrong and the desire account of what makes killing wrong? Then either (i) we allow the desire account proponent a second, separate explanation for what makes killing a suicidal person wrong (so that the desire account is saved from Marquis’s criticism), or (ii) we deny both accounts such supplementary explanations, in which case Marquis’s future-like-ours account is undermined by the fact that it does not entail that killing very elderly people is seriously morally wrong.

- - - -
* I think that if a pregnant person has decided to get an abortion, then that fetus no longer has a future (like ours). For a fetus to have a future at all, it needs the continued support of the pregnant person’s body. The existence of the fetus’s future depends on this physiological support continuing; remove that support (for whatever reason, e.g. the biological/physiological conditions that create a spontaneous abortion), and the fetus no longer has a future. The pregnant person’s decision to get an abortion is one way to end that physiological support. So while I grant that a fetus in the uterus of a pregnant person who plans to take the pregnancy to term will usually have a future like ours, a fetus in a pregnant person who plans to get an abortion does not have a future like ours, any more than a fetus that has some sort of 'purely biological' condition or genetic trait that would prevent it from coming to term and/or living beyond a few years. Now, one might argue that there is a difference between a genetic atypicality that prevents a fetus from coming to term (e.g. chromosomal aneuploidy), and a decision made by the pregnant person to terminate the pregnancy: the former is not under anyone's voluntary control, whereas the latter is under the pregnant person's control. I think this difference does not make a moral difference, at least in the present dialectical situation. Imagine I'm an employee at a small-to-medium-sized company where the entire upper management is extremely anti-Republican. Further imagine that the upper management finds out that I am a very active member of the Republican party. I think most people would agree with the claim that I don't have (much of) a future at that company: they'll fire me at their first opportunity, and certainly won't ever promote me. This indicates that (resolute) decisions can make it so that people don't have futures. ADDED LATER: It looks like the basic line of reasoning in this footnote already appears in Sinnott-Armstrong, Walter. 1999. “You Can’t Lose What You Ain’t Never Had: A Reply to Marquis on Abortion,” Philosophical Studies.

11/08/2020

Causal attribution & election results

Many people on my timeline are sharing this excellent interview with Rep. Alexandria Ocasio-Cortez:
https://www.nytimes.com/2020/11/07/us/politics/aoc-biden-progressives.html

Here's a representative quote from her:

If the party believes after 94 percent of Detroit went to Biden, after Black organizers just doubled and tripled turnout down in Georgia, after so many people organized Philadelphia, the signal from the Democratic Party is the John Kasichs won us this election? I mean, I can’t even describe how dangerous that is.

After reading that, the philosopher part of my brain started wondering about how we should think through causal questions in the neighborhood. Did 94% of Detroit going to Biden win him the election, or did winning over the ‘John Kasichs’ (right-leaning centrists) of the swing-state electorate do it? (Let’s suppose, for the sake of argument, that there were a non-trivial number of John Kasichs in swing states who either voted for Biden or chose not to vote for Trump in the 2020 election; I am completely open to that supposition being factually false.) The scenario I describe below is highly idealized, and may be massively disanalogous to what actually happened in the 2020 electorate, but the idealized scenario brings out something that MIGHT be going on in this causal debate about the 2020 results.

Suppose 9 people are voting on a proposal. Further imagine that the proposal passes, by a 5-4 vote. In a strict sense, all five of those ‘yea’ votes were necessary to bring about the effect of the proposal passing. (And we can even imagine that each of the 5 voters votes ‘yea’ for a different reason.) But we often speak of one (or more) of those 5 as THE cause, or at least the decisive cause, of the proposal passing. Relatedly, maybe one of those 5 voters is seen as especially responsible for the proposal’s passage (I recognize that responsibility and causation are not identical; but they are related). Often, this one is called the ‘swing vote.’

But all 5 of those votes were necessary to bring about the effect -- so how can we pick out one as privileged over the 4 others? If we hold all votes but one of the 5 'yea's constant, and 'wiggle'/ intervene on that one 'yea', then the effect flips from passage to failure -- and that is equally true for all 5 of the 'yea' votes. Now one reasonable reaction to this is to simply reject the idea that one of the 5 yea-votes is in any way special or privileged. People may think one of them is special, but they are wrong. Although this is a reasonable response, I am curious whether there might be anything salvageable or reasonable in ever causally privileging one 'yea' over the other 'yea's.
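
Here is a minimal sketch of that 'wiggle' test (the code and names are my own illustration, not anyone's published model):

```python
# Toy model of a 9-person vote that passes 5-4. Intervening on any
# single 'yea' (holding the other 8 votes fixed) flips the outcome,
# so each of the 5 'yea' votes passes the counterfactual test equally.

def passes(votes):
    """A proposal passes when the 'yea' votes form a strict majority."""
    return sum(votes) > len(votes) / 2

votes = [1, 1, 1, 1, 1, 0, 0, 0, 0]  # five 'yea's, four 'nay's

for i, v in enumerate(votes):
    if v == 1:
        wiggled = votes.copy()
        wiggled[i] = 0  # intervene on this one 'yea'
        print(f"flip voter {i}: {passes(votes)} -> {passes(wiggled)}")
        # each of the 5 lines prints: True -> False
```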

I am very much not an expert on the causal attribution literature, so I strongly suspect that someone has already said this. I couldn't find anyone saying it after a little googling, but if any readers know of someone who has already published this point, please let me know in the comments. Anyway, here’s my (probably not-new) hypothesis.

Out of a set of partial causes, each of which was necessary to bring about an actual effect, we privilege the cause that fails to hold in the closest possible world to our own.
This is why the swing voter is considered especially responsible for the proposal’s passage: out of all 5 ‘yea’ votes, the actual world would have to undergo the smallest change to flip a swing voter from yea to nay.

And this matches other causal attributions we make as well. We say that the match lit because it was struck, not because there is oxygen in the air, even though both those conditions are necessary for sustained burning to occur. On the hypothesis above, this is because the world in which I don’t strike the match is closer to our actual world than the world in which I am in a very low-oxygen environment.

So on this view, questions about whether Biden’s victory is caused by the John Kasichs of the electorate, or increasing turnout in Georgia, come down to the following question: Which is closer to the actual world, (a) the Kasichs of the electorate voting for Trump at roughly the same rate as in 2016, or (b) Black turnout in Georgia remaining at roughly 2016 levels? I genuinely have no idea.

As I think about it, the causal question seems actually not to matter for the political question of what the party should do, to win in the future – unless distance between possible worlds can be measured by money and other resources. The question, in terms of promoting future success, is not ‘What caused the Biden victory?’ (and then try to replicate that cause, next time around) but rather ‘What is the most cost-effective intervention to create more favorable vote margins?’. These are related, in that a possible world where I bought 1 more blueberry muffin than I actually did this morning is closer than the possible world where I bought 2 more blueberry muffins than I actually did. But it would be surprising if a cross-world metaphysical metric could be given by just tallying up dollars and cents. (Suppose in the actual world I bought 5 blueberry muffins today. Further suppose a muffin costs the same as an apple. Which is closer to the actual world: (i) the world in which I buy 1 apple in addition to the 5 muffins, or (ii) the world in which I buy 2 more blueberry muffins, in addition to the original 5?) That would make modality and causation very anthropocentric, it seems.

Another extremely important aspect of all this not addressed above is that there are also moral reasons to prefer one plan of action over another. When people’s ability to vote is being substantially suppressed or hindered, there is also a serious moral obligation to remove those obstacles, even if the dollar-per-vote-gained wouldn’t be as high as another TV ad targeting centrist voters who do not face significant obstacles to the ballot box. You don’t have to be an orthodox Rawlsian to think considerations of justice should outweigh considerations of efficiency, at least in most cases. This may be part of why Ocasio-Cortez says it would be so "dangerous" to focus future campaigns on flipping the John Kasichs of the electorate, instead of ensuring everyone is enfranchised in a substantive and meaningful way.

8/04/2020

One way to test scientific realism

One way of formulating Scientific Realism is as follows:
What our successful scientific theories say about unobservable entities and processes is approximately true.
This is not the only way to formulate scientific realism, but it is one of the more common ones, and it does effectively separate realism from versions of anti-realism which hold that we are not justified in believing what our theories say about unobservables.

Obviously, this version of Scientific Realism cannot be directly tested using our current theories and current technology, since what is currently unobservable can't be observed now.

However, what is observable shifts over time (at least in one important sense of the word 'observable'). This can happen either because (1) we develop the ability to reach new regimes of old variables (e.g. scientists create technology to make materials colder or hotter than we previously could, or we can study bodies moving at higher and higher velocities), or because (2) scientists develop new instruments that enable new types of observation reports (e.g. telescopes, microscopes, fMRI machines, or mass spectrometers).

This suggests a way to test realism diachronically, using the historical record. First, find something that went from being unobservable to being observable. Then find theories that were (considered) genuinely successful at that earlier time, and see what claims they made about the previously-unobservable-but-now-observable world. Finally, check those claims against the now-observable reality.

Scientific Realism (at least the version stated above) predicts that the old claims about the previously-unobservable things will usually approximately match the new observations of those things. (I say 'usually' instead of 'always,' because sensible realists are fallibilists.)
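
In barest outline, the bookkeeping for such a test might look like this (a sketch only; the function and the case list are hypothetical placeholders for the survey described just below):

```python
# Schematic scoring for the diachronic test: each entry records whether
# a once-successful theory's claims about a previously-unobservable item
# approximately matched the later observations. (Placeholder data only.)

def realist_score(transitions):
    """Fraction of unobservable-to-observable transitions where the old
    claims approximately matched the new observations."""
    return sum(1 for _, matched in transitions if matched) / len(transitions)

sample = [
    ("morphological vs. later molecular phylogenies", True),
    # ... many more vetted, representatively sampled historical cases ...
]
print(realist_score(sample))  # realism, as stated above, predicts a value near 1
```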

I have not run this test myself. To do it in an intellectually responsible way, a large survey of past transitions from unobservable-to-observable would have to be collected, and steps would have to be taken to make that sample of transitions representative. However, at first glance, it looks like at least some cherry-picked famous examples don't bode well for the realist's prediction:

  • The telescope played a significant role in the scientific revolution
  • The vacuum pump played a significant role in the scientific revolution
  • The ability to cool things down further and further led to the discovery of superconductivity
  • The ability to study bodies at higher and higher speeds was crucial in the transition from classical mechanics to special relativity

There are historical examples that run in the realist's favor too; I think one good example is that (on the whole, i.e. usually) phylogenetic trees generated via molecular data matched previously existing phylogenetic trees fairly closely (i.e. the old trees were usually 'approximately true,' which is all the realist wants). This is why, as I said, we need a large survey to figure out which historical transitions reflect the overall, general pattern, and which cases are outliers.
{ADDED LATER (May 2022): Simon Allzen's "From Unobservable to Observable: Scientific Realism and the Discovery of Radium" is another nice, detailed example that's intended as an example in the realist's favor. Here's a representative quotation: "an entity considered to be unobservable can be inferred at one stage in the process by virtue of its role as indispensable for predictive success [i.e. via IBE -- GF-A], only to change into an observable at a later stage, thus confirming the reliability of the inference. As a case study of the conceptual changes of entities I use the discovery of radium."}

Finally, in terms of already-existing arguments, this is not really very different from the Pessimistic Induction (if at all). I think of it as a specialized version of that argument, focusing on the realist's claim that the observable/ unobservable boundary does not mark an epistemically important distinction. For this reason, I think of the above as a diachronic version of Kitcher's "Real Realism" (which potentially comes to the opposite conclusion of Kitcher's view).

10/14/2018

The perfectionist's paradox

EDIT: ADDED Oct. 16 2018: As Karim Zahidi notes in the first comment below, I made an elementary logical error in thinking that (1) is evidence for (2). So I have crossed out the original mistake ~~like this~~ below. But I still think the argument after that step may work: so now the argument just starts from (2) as a supposedly plausible claim, instead of trying to justify (2) via (1).

----------

This might sound initially like a too-clever undergrad ‘gotcha’ paradox. However, I think that at least for some folks who have perfectionist tendencies, the following is experienced as a genuine difficulty in their lives.

The following strikes me as plausible:

  (1) It’s OK to make some mistakes.

~~If (1) is true, then it seems the following should be true as well, since it’s just a restricted version of (1):~~

  (2) It’s morally OK to make some moral mistakes.

But

  (3) Making a moral mistake is doing something morally impermissible,

and

  (4) if something is morally OK, then it is morally permissible.

And (2)-(4) logically entail

  (C) It’s morally permissible to do some morally impermissible things.

And (C) looks like a contradiction.
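
For concreteness, here is one way to symbolize the step from (2)-(4) to (C) (the symbolization is mine): read M(x) as 'x is a moral mistake', O(x) as 'x is morally OK', and P(x) as 'x is morally permissible'.

```latex
\begin{align*}
\text{(2)} &\quad \exists x\,(M(x) \land O(x))\\
\text{(3)} &\quad \forall x\,(M(x) \to \lnot P(x))\\
\text{(4)} &\quad \forall x\,(O(x) \to P(x))\\
\text{(C)} &\quad \exists x\,(P(x) \land \lnot P(x))
\end{align*}
```

Take a witness a for (2), so M(a) and O(a); then (4) gives P(a) while (3) gives ¬P(a), which is the contradiction recorded in (C).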

(For all I know this is already out there somewhere, but it was not on the interesting list of paradoxes of deontic logic in the Stanford Encyclopedia.)

7/17/2018

'Extra-Weak' Underdetermination

I’ll start briefly with a few reminders and fix a little terminology. I then introduce a new(?) sub-species of underdetermination argument, whose premises are logically weaker than existing underdetermination arguments, but still(?) deliver the anti-realist’s conclusion.

Underdetermination arguments in philosophy of science aim to show that an epistemically rational person (= someone who weighs their available evidence correctly) should suspend belief in the (approximate) truth of current scientific theories, even if such theories make very accurate predictions.

A scientific theory T is strongly underdetermined = T has a genuine competitor theory T*, and T and T* make all the same observable predictions. (So the two theories’ disagreement must concern unobservable stuff only.)

A scientific theory T is weakly underdetermined = T has a genuine competitor theory T*, and all the observable data/ evidence gathered thus far is predicted equally well by both T and T*.(†) (So collecting new data could end a weak underdetermination situation.)

Anti-realists then argue from the purported fact that (all/most) of our current scientific theories are underdetermined, to the conclusion that an epistemically rational person should suspend belief in (all/most) of our current scientific theories.

Realists can reasonably respond by arguing that even weak underdetermination, in the above sense, is not common: even if one grants that there is an alternative theory T* that is consistent with the data gathered so far, that certainly does not entail that T and T* are equally well supported by the available data. There is no reason to expect T and T* would be a perfect ‘tie’ for every other theoretical virtue besides consistency with the data. (Theoretical virtues here include e.g. simplicity, scope, relation to other theories, etc.) The evidential support for a hypothesis is not merely a matter of the consistency of that hypothesis with available data.

At this point, anti-realists could dig in their heels and simply deny the immediately preceding sentence. (The other theoretical virtues are ‘merely pragmatic,’ i.e. not evidential.) But that generates a standoff/stalemate, and furthermore I find that response unsatisfying, since an anti-realist who really believes that should probably be a radical Cartesian skeptic (yet scientific anti-realism was supposed to be peculiar to science).

So here’s another reply the anti-realist could make: grant the realist’s claims that even weak underdetermination is not all that common in the population of current scientific theories, and furthermore that typically our current theory T is in fact better supported than any of the competitors T1, T2, ... that are also consistent with the data collected so far. The anti-realist can grant these points, and still reach the standard underdetermination-argument conclusion that we should suspend belief in the truth of T, IF the sum of the credences one should assign to T1, T2, ... is at least 0.5.

For example: suppose there are exactly 3 hypotheses consistent with all the data collected thus far, and further suppose
 Pr(T) = 0.4,
 Pr(T1) = 0.35, and
 Pr(T2) = 0.25.
In this scenario, T is better supported by the evidence than T1 is or T2 is, so T is not weakly underdetermined. However, assuming that one should not believe p is true unless Pr(p)>0.5, one should still not believe T in the above example.

I call such a T extra-weakly underdetermined: the sum of rational degrees of belief one should have in T’s competitors is greater than or equal to the rational degree of belief one should have in T.
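
That definition is easy to operationalize. A minimal sketch (my own illustration), using the toy numbers from above:

```python
# Extra-weak underdetermination: the rivals' summed rational credence is
# at least as great as the rational credence in T itself.

def extra_weakly_underdetermined(cred_T, rival_creds):
    return sum(rival_creds) >= cred_T

cred_T, rivals = 0.4, [0.35, 0.25]  # Pr(T), Pr(T1), Pr(T2) from above

print(all(cred_T > r for r in rivals))               # True: T beats each rival, so no weak underdetermination
print(extra_weakly_underdetermined(cred_T, rivals))  # True
print(cred_T > 0.5)                                  # False: so belief in T is not licensed
```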

We can think about this using the typical toy example used to introduce the idea of underdetermination in our classes, where we draw multiple curves through a finite set of data points: a straight line (call it Theory A) and various curvier rivals that pass through the same points.
We can simultaneously maintain that the straight-line hypothesis (Theory A) is more probable than the others, but nonetheless deny that we should believe it, as long as the other hypotheses’ rational credence levels sum to 0.5 or higher. And there are of course infinitely many competitors to Theory A, so it is an infinite sum. The realist, in response to this argument, will thus have to say that that infinite sum will converge to less than 0.5.

The above argument from extra-weak underdetermination is clearly related to the ‘catch-all hypothesis’ (in the terminology above, ~T) point that has been discussed elsewhere in the literature on realism, especially in connection with Bayesian approaches (see here and the references therein). But I think there is something novel about the extra-weak underdetermination argument: as we add new competitor theories to the pool (T3, T4… in the example above), the rational credence level we assign to each hypothesis will presumably go down. (I include ‘presumably,’ because it is certainly mathematically possible for the new hypothesis to only bring down the rational credence level for some but not all of the old hypotheses.) So the point here is not just that there is some catch-all hypothesis, which it is difficult(?) to assign a degree of rational belief to (that's the old news), but also that we increase the probability of something like the 'catch-all' hypothesis by adding new hypotheses to it. (I have to say 'something like' it, because T1, T2... are specific theories, not just the negation of T.)

----

(†): Note that, unlike some presentations of underdetermination, I do not require that T and T* both predict ALL the available data. I take "Every theory is born refuted" seriously. And I actually think this makes underdetermination scenarios more likely, since a competitor theory need not be perfect -- and the imperfections/ shortcomings of T could be different from those of T* (e.g. T might be more complicated, while T* has a narrower range of predictions). Hasok Chang's Is Water H2O? makes this point concretely and (to my mind) compellingly, about the case of Lavoisier's Oxygen-based chemistry vs. Priestley's Phlogiston-based chemistry.

3/13/2018

Cognitive impenetrability of some aesthetic perception

For me, one of the interesting experiences of getting older is seeing, from the internal first-person perspective, many of the generalizations one hears about 'getting older' come true in my own life. One of the most obvious/ salient ones for me is about musical tastes. I love a lot of hip hop from the early-to-mid 90's. (This is probably still my favorite hip hop album of all time.) I do also like some of the stuff that is coming out now, but on average, the beats just sound bad to me. The typical snare sound in particular -- I can't get over how terrible and thin it sounds.

But on the other hand, I know full well that, as people get older, they start thinking 'Young people's music today is so much worse than when I was a kid!' And that I heavily discounted old people's views about new music when I was in school.

Yet this makes absolutely no difference to my perceiving the typical trap snare sound today as really insubstantial and weak -- just ugly. The theoretical knowledge makes zero difference to my experience.

This reminded me of Fodor's famous argument from the Müller-Lyer illusion* for the cognitive impenetrability of perception. No matter how many times I am told that the two horizontal lines are the same length, no matter how many times I lay a ruler next to each line in succession and measure them to be the same length, I still perceive one line as shorter than the other. My theoretical knowledge just can't affect my perception. In a bit of jargon, the illusion is mandatory.

My experience of the typical hip hop snare sound today is similarly mandatory for me, despite the fact that I know (theoretically/ cognitively) that, as an old person, I should discount my aesthetic impressions of music coming out today.

This seems like it could make trouble for a Fodorian who wants to use the mandatoriness of illusions as an argument that perception is unbiased/ theory-neutral -- in a conversation about the best hip hop albums of all time, my aesthetic data would be extremely biased towards stuff that came out between 1989 and 1995.

-----
*(Have you seen the dynamic Müller-Lyer illusions? Go here and scroll down to see a few variants.)

12/01/2017

Morals and Mood (Situationism vs virtue ethics, once more)


Given how much has been written in the last couple of decades about the situationist challenge to virtue ethics, I'm guessing someone has probably already said (something like) this before. But I haven't seen it, and I'm teaching both the Nicomachean Ethics and the section in Appiah's Experiments in Ethics on situationism vs. virtue ethics now, so the material is bouncing around in my head.

First, a little background. (If you want more detail, there are a couple nice summaries of the debate on the Stanford Encyclopedia of Philosophy here (by Alfano) and here (by Doris and Stich).) The basic idea behind the situationist challenge to virtue ethics is the following: there are no virtues (of the sort the virtue ethicist posits), because human behavior is extremely sensitive to apparently minor -- and definitely morally irrelevant -- changes in the environment. For example, studies show that someone is MUCH more likely to help a stranger with a small task (e.g. picking up dropped papers, or making change for a dollar) outside a nice-smelling bakery than outside a Staples office supply store, or after finding a dime in a pay phone's coin return, or in a relatively quiet place than in a relatively loud one. The Situationist charges that if the personality trait of generosity really did exist, and was an important driver of people's behavior, then whether people perform generous actions or not would NOT depend on tiny, morally irrelevant factors like smelling cinnamon rolls, finding a dime, or loud noises. A virtue has to be stable and global; that behavior can change so much in response to apparently very minor environmental changes suggests that there is no such stable, global psychological thing contributing significantly to our behavior. That's the basic Situationist challenge.

Defenders of Virtue Ethics have offered a number of responses to this situationist challenge (the SEP articles linked in the previous paragraph describe a few). Here is a response that I have not personally seen yet: a person's mood is a significant mediator between the minor situational differences and the tendency to help a stranger. When we describe the experimental results as a correlation between a tiny, apparently unimportant environmental change and a massive change in helping behavior, then the experimental results look very surprising -- perhaps even shocking. But it would be less surprising or shocking, if instead of thinking of what's going on in these experiments as "The likelihood of helping others is extremely sensitively attuned to apparently trivial aspects of our environment," we rather think of what's happening as "All these minor environmental changes have a fairly sizeable effect on our mood." For to say that someone in a particularly good mood is much more likely to help a stranger is MUCH less surprising than "We help people outside the donut shop, but not outside the Staples." In other words: if mood is a major mediator for helping behaviors, then we don't have to think of our behaviors as tossed about by morally irrelevant aspects of our environments. That said, we would have to think of our behavior as shaped heavily by our moods -- but I'm guessing most people would probably agree with that, even if they've never taken a philosophy or psychology class.
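
To make the contrast between the two descriptions vivid, here is a toy mediation model (entirely my own illustration, with made-up numbers), in which the environment affects helping only by way of mood:

```python
import random

# Toy model where smells/dimes influence helping ONLY through mood:
# conditioning on mood would screen off the environmental variables.

def mood(found_dime, nice_smell):
    # Minor situational perks each give mood a sizeable (made-up) bump.
    return 0.2 + 0.3 * found_dime + 0.3 * nice_smell

def helps(current_mood):
    # Helping depends on mood alone, not on the environment directly.
    return random.random() < current_mood

trials = 10_000
for dime in (0, 1):
    rate = sum(helps(mood(dime, 0)) for _ in range(trials)) / trials
    print(f"found dime = {dime}: helping rate ~ {rate:.2f}")
```

The same large dime-to-helping correlation shows up either way; the mediating description just relocates it to a psychologically unsurprising link, namely mood-to-helping.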

Now, you might think this is simply a case of a distinction without a difference, or "Out of the frying pan, into the fire": swapping smells for moods makes no real difference of any importance to morality. I want to resist this, for two reasons; one more theoretical/ philosophical, and the other more practical.

First, the theoretical reason why recognizing that mood is a mediator in these experiments matters: I don't think a virtue ethicist would have to be committed to the claim that mood does not have an effect on helping behaviors. Virtue ethicists can agree that generosity should not be sensitive to smells and dimes per se. However, the fact that someone who is in a better mood is (all else equal) more likely to help strangers than someone in a worse mood is probably not devastating evidence against the virtue ethicist's thesis that personality traits (like generosity) exist and play an important role in producing behavior.

Second, more practically: one concern (e.g. mentioned by Hagop Sarkissian here) about the Situationist experimental data in general is that often we are not consciously aware of the thing in our situation/environment that is driving the change in our behavior (I may not have noticed the nice smells, or the absence of loud noises). But mood is different: I have both better access to my mood, and a baseline/ immediate knowledge that my mood often affects my behavior. Whereas given the situationist's characterization of the data, I often don't know which variables in my environment are causing me to help the stranger or not. So if I am in a foul mood, and realize I am in a foul mood, I could potentially consciously 'correct' my automatic, 'system-1' level of willingness to help others.

Of course, on this way of thinking about it, i.e. mood as mediating, I often won't know what is CAUSING my good mood. But that's OK, because I will still be able to detect my mood (usually -- of course, sometimes we are sad, or angry, or whatever, without really noticing it. But my point is just that we are better detectors of our current mood than we are of the various elements of our environment that could potentially be influencing our mood positively or negatively).

So in short: I think the situationist's challenge to virtue ethics is blunted somewhat if we think of mood as a mediator between apparently trivial situational variables and helping behaviors.

6/22/2017

Tarski, Carnap, and semantics

Two things:

1. Synthese recently published Pierre Wagner's article Carnapian and Tarskian Semantics, which outlines some important differences between semantics as Tarski conceived it (at least in the 1930s-40s) and as Carnap conceived it. This is important for anyone who cares about the development of semantics in logic; I'd been hoping someone would write this paper, because (a) I thought it should be written, but (b) I didn't really want to do it myself. Wagner's piece is really valuable, in my opinion. And not merely for antiquarian reasons: many today have the feeling that model theory is the natural/ inevitable way to come at semantics in logic. But how exactly to pursue semantics was actually very much up for debate and in flux for about 20 years after Tarski's 1933 "On the Concept of Truth in Formalized Languages." And the semantics in that monograph is NOT what you would find in a logic textbook today.

2. I am currently putting the finishing touches on volume 7 of the Collected Works of Rudolf Carnap, which is composed of Carnap's three books on semantics (Introduction to Semantics, Formalization of Logic, and Meaning and Necessity). There is a remark in Intro to Semantics that is relevant to Wagner's topic, which Wagner cited (p.104), but I think might be worth trying to investigate in more detail. Carnap writes:

our [= Tarski's and my] conceptions of semantics seem to diverge at certain points. First ... I emphasize the distinction between semantics and syntax, i.e. between semantical systems as interpreted language systems and purely formal, uninterpreted calculi, while for Tarski there seems to be no sharp demarcation. (1942, pp. vi-vii)

I have two thoughts about this quotation:
(i) Is Carnap right? Or did he misunderstand Tarski? (Carnap had had LOTS of private conversations with Tarski by this point, so the prior probability I assign to me understanding Tarski better than Carnap does is pretty low.)
(ii) If Carnap is right about Tarski on this point, then (in my opinion) we today should give much more credit to Carnap for our current way of doing semantics in logic than most folks currently do. We often talk about 'Tarskian semantics' today as a short-hand label for what we are doing, but if there were 'no sharp demarcation' between model theory and proof theory (i.e. between semantics and syntax), then the discipline of logic would look very different today.

3/30/2017

Against Selective Realism (given methodological naturalism)

The most popular versions of realism in the scientific realism debates today are species of selective realism. A selective realist does not hold that mature, widely accepted scientific theories are, taken as wholes, approximately true---rather, she holds that (at least for some theories) only certain parts are approximately true, while other parts are not, and thus do not merit rational belief. The key question selective realists have grappled with over the last few decades is: which are the 'good' parts (the "working posits," in Kitcher's widely used terminology) and which are the 'bad' parts (the "idle wheels") of a theory?

An argument against any sort of philosophical selective realism just occurred to me, and I wanted to try to spell it out here. Suppose (as the selective realist must) there is some scientific theory that scientists believe/ accept, and which according to the selective realist makes at least one claim (call it p) that is an idle wheel, and thus should not be rationally accepted.

It seems to me that in such a situation, the selective realist has abandoned (Quinean) methodological naturalism in philosophy, which many philosophers---and many philosophers of science, in particular---take as a basic guideline for inquiry. Methodological naturalism (as I'm thinking of it here) is the view that philosophy does not have any special, supra-scientific evidential standards; the standards philosophers use to evaluate claims should not be any more stringent or rigorous than standards scientists themselves use. And in our imagined case, the scientists think there is sufficient evidence for p, whereas the selective realist does not.

To spell out more fully the inconsistency of selective realism and methodological naturalism in philosophy, consider the following dilemma:
By scientific standards, one either should or should not accept p.

If, by scientific standards, one should not accept p, then presumably the scientific community already does not accept it (unless the community members have made a mistake, and are not living up to their own evidential standards). The community could have rewritten the original theory accordingly to eliminate the idle wheel, or they could have explicitly flagged the supposed idle wheel as a false idealization, e.g. letting population size go to infinity. But however the community does it, selective realism would not recommend anything different from what the scientific community itself says, so selective realism becomes otiose ... i.e., an idle wheel. (Sorry, I couldn't help myself.)

On the other hand, if, by scientific standards, one should accept p, then the selective realist can't be a methodological naturalist: the selective realist has to tell the scientific community that they are wrong to accept p.
I can imagine at least one possible line of reply for the selective realist: embrace the parenthetical remark in the first horn of the dilemma above, namely, scientists are making a mistake by their own lights in believing p. Then the selective realist would need to show that there is a standard operative in the scientific community that the scientists who accept p don't realize should apply in the particular case of p. But that may prove difficult to show at this level of abstraction.