12/01/2017

Morals and Mood (Situationism vs virtue ethics, once more)


Given how much has been written in the last couple of decades about the situationist challenge to virtue ethics, I'm guessing someone has already said (something like) this before. But I haven't seen it, and I'm currently teaching both the Nicomachean Ethics and the section of Appiah's Experiments in Ethics on situationism vs. virtue ethics, so the material is bouncing around in my head.

First, a little background. (If you want more detail, there are a couple of nice summaries of the debate on the Stanford Encyclopedia of Philosophy here (by Alfano) and here (by Doris and Stich).) The basic idea behind the situationist challenge to virtue ethics is the following: there are no virtues (of the sort the virtue ethicist posits), because human behavior is extremely sensitive to apparently minor -- and definitely morally irrelevant -- changes in the environment. For example, studies show that someone is MUCH more likely to help a stranger with a small task (e.g. picking up dropped papers, or making change for a dollar) outside a nice-smelling bakery than outside a Staples office supply store, or after finding a dime in a pay phone's coin return, or in a relatively quiet place rather than a relatively loud one. The situationist charges that if the personality trait of generosity really existed, and were an important driver of people's behavior, then whether people perform generous actions would NOT depend on tiny, morally irrelevant factors like smelling cinnamon rolls, finding a dime, or hearing loud noises. A virtue has to be stable and global; that behavior can change so much in response to apparently very minor environmental changes suggests that there is no stable, global psychological trait contributing significantly to our behavior. That's the basic situationist challenge.

Defenders of virtue ethics have offered a number of responses to this situationist challenge (the SEP articles linked in the previous paragraph describe a few). Here is a response that I have not personally seen yet: a person's mood is a significant mediator between the minor situational differences and the tendency to help a stranger. When we describe the experimental results as a correlation between a tiny, apparently unimportant environmental change and a massive change in helping behavior, the results look very surprising -- perhaps even shocking. But they would be less surprising if, instead of thinking of what's going on in these experiments as "The likelihood of helping others is extremely sensitively attuned to apparently trivial aspects of our environment," we think of what's happening as "All these minor environmental changes have a fairly sizeable effect on our mood." For to say that someone in a particularly good mood is much more likely to help a stranger is MUCH less surprising than "We help people outside the donut shop, but not outside the Staples." In other words: if mood is a major mediator for helping behaviors, then we don't have to think of our behavior as tossed about by morally irrelevant aspects of our environment. That said, we would have to think of our behavior as shaped heavily by our moods -- but I'm guessing most people would agree with that, even if they've never taken a philosophy or psychology class.
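To make the mediation idea concrete, here is a minimal toy simulation (purely illustrative: the numbers, functions, and effect sizes are all invented by me, not taken from the actual studies). The situational cue influences helping only by way of mood:

    # Hypothetical toy model of mood as a mediator (all numbers invented).
    # The situational cue (a pleasant smell) affects mood; helping depends
    # only on mood, never on the cue directly.
    import random

    def mood_score(nice_smell):
        # Baseline mood varies from person to person; the trivial
        # situational cue gives it a modest bump.
        base = random.uniform(0.2, 0.5)
        return min(1.0, base + (0.35 if nice_smell else 0.0))

    def helping_probability(mood):
        # Helping is a function of mood (the mediator) alone.
        return 0.1 + 0.7 * mood

    def helping_rate(nice_smell, trials=10_000):
        helped = sum(random.random() < helping_probability(mood_score(nice_smell))
                     for _ in range(trials))
        return helped / trials

    print("Helping rate outside the bakery: ", helping_rate(True))
    print("Helping rate outside the Staples:", helping_rate(False))

On this toy picture, the cue-to-helping correlation is real, but it is entirely routed through mood -- which is exactly the point of leverage for the virtue ethicist below.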

Now, you might think this is simply a case of a distinction without a difference, or "Out of the frying pan, into the fire": swapping smells for moods makes no real difference of any importance to morality. I want to resist this for two reasons: one more theoretical/philosophical, the other more practical.

First, the theoretical reason why recognizing that mood is a mediator in these experiments matters: I don't think a virtue ethicist has to be committed to the claim that mood has no effect on helping behaviors. Virtue ethicists can agree that generosity should not be sensitive to smells and dimes per se. However, the fact that someone who is in a better mood is (all else equal) more likely to help strangers than someone in a worse mood is probably not devastating evidence against the virtue ethicist's thesis that personality traits (like generosity) exist and play an important role in producing behavior.

Second, more practically: one concern (e.g. mentioned by Hagop Sarkissian here) about the situationist experimental data in general is that oftentimes we are not consciously aware of the thing in our situation/environment that is driving the change in our behavior (I may not have noticed the nice smells, or the absence of loud noises). But mood is different: I have better access to my mood, and an immediate, baseline knowledge that my mood often affects my behavior, whereas, on the situationist's characterization of the data, I often don't know which variables in my environment are causing me to help the stranger or not. So if I am in a foul mood, and realize I am in a foul mood, I could potentially consciously 'correct' my automatic, 'System 1' level of willingness to help others.

Of course, on this way of thinking about it (i.e. mood as mediating), I often won't know what is CAUSING my good mood. But that's OK, because I will still be able to detect my mood. (Usually, anyway -- sometimes we are sad, or angry, or whatever, without really noticing it. But my point is just that we are better detectors of our current mood than of the various elements of our environment that could be influencing our mood positively or negatively.)

So in short: I think the situationist's challenge to virtue ethics is blunted somewhat if we think of mood as a mediator between apparently trivial situational variables and helping behaviors.

6/22/2017

Tarski, Carnap, and semantics

Two things:

1. Synthese recently published Pierre Wagner's article "Carnapian and Tarskian Semantics," which outlines some important differences between semantics as Tarski conceived it (at least in the 1930s-40s) and as Carnap conceived it. This is important for anyone who cares about the development of semantics in logic; I'd been hoping someone would write this paper, because (a) I thought it should be written, but (b) I didn't really want to do it myself. Wagner's piece is really valuable, in my opinion. And not merely for antiquarian reasons: many today have the feeling that model theory is the natural/inevitable way to do semantics in logic. But how exactly to pursue semantics was actually very much up for debate and in flux for about 20 years after Tarski's 1933 "On the Concept of Truth in Formalized Languages." And the semantics in that monograph is NOT what you would find in a logic textbook today.

2. I am currently putting the finishing touches on volume 7 of the Collected Works of Rudolf Carnap, which comprises Carnap's three books on semantics (Introduction to Semantics, Formalization of Logic, and Meaning and Necessity). There is a remark in Introduction to Semantics relevant to Wagner's topic, which Wagner cites (p. 104), but which I think is worth investigating in more detail. Carnap writes:

our [= Tarski's and my] conceptions of semantics seem to diverge at certain points. First ... I emphasize the distinction between semantics and syntax, i.e. between semantical systems as interpreted language systems and purely formal, uninterpreted calculi, while for Tarski there seems to be no sharp demarcation. (1942, pp. vi-vii)

I have two thoughts about this quotation:
(i) Is Carnap right? Or did he misunderstand Tarski? (Carnap had had LOTS of private conversations with Tarski by this point, so the prior probability I assign to my understanding Tarski better than Carnap did is pretty low.)
(ii) If Carnap is right about Tarski on this point, then (in my opinion) we today should give much more credit to Carnap for our current way of doing semantics in logic than most folks currently do. We often use 'Tarskian semantics' today as a shorthand label for what we are doing, but if there were 'no sharp demarcation' between model theory and proof theory (i.e. between semantics and syntax), then the discipline of logic would look very different today.
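For readers who want the demarcation spelled out, here is the now-standard textbook contrast, in my gloss and in modern notation (not Carnap's or Tarski's own formulations):

    \Gamma \models \varphi \quad \text{iff every model of } \Gamma \text{ is a model of } \varphi \quad \text{(semantics: model theory)}

    \Gamma \vdash \varphi \quad \text{iff there is a derivation of } \varphi \text{ from } \Gamma \text{ in a fixed calculus} \quad \text{(syntax: proof theory)}

Soundness and completeness theorems relate the two notions, but they are conceptually distinct definitions -- which is just the sharp demarcation Carnap is insisting on.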

3/30/2017

Against Selective Realism (given methodological naturalism)

The most popular versions of realism in the scientific realism debates today are species of selective realism. A selective realist does not hold that mature, widely accepted scientific theories are, taken as wholes, approximately true -- rather, she holds that (at least for some theories) only certain parts are approximately true, while other parts are not, and thus do not merit rational belief. The key question selective realists have grappled with over the last few decades is: which are the 'good' parts (the "working posits," in Kitcher's widely used terminology) and which are the 'bad' parts (the "idle wheels") of a theory?

An argument against any sort of philosophical selective realism just occurred to me, and I wanted to try to spell it out here. Suppose (as the selective realist must) there is some scientific theory that scientists believe/accept, and which according to the selective realist makes at least one claim (call it p) that is an idle wheel, and thus should not be rationally accepted.

It seems to me that in such a situation, the selective realist has abandoned (Quinean) methodological naturalism in philosophy, which many philosophers---and many philosophers of science, in particular---take as a basic guideline for inquiry. Methodological naturalism (as I'm thinking of it here) is the view that philosophy does not have any special, supra-scientific evidential standards; the standards philosophers use to evaluate claims should not be any more stringent or rigorous than the standards scientists themselves use. And in our imagined case, the scientists think there is sufficient evidence for p, whereas the selective realist does not.

To spell out more fully the inconsistency of selective realism and methodological naturalism in philosophy, consider the following dilemma:
By scientific standards, one either should or should not accept p.

If, by scientific standards, one should not accept p, then presumably the scientific community already does not accept it (unless the community members have made a mistake, and are not living up to their own evidential standards). The community could have rewritten the original theory to eliminate the idle wheel, or they could have explicitly flagged the supposed idle wheel as a false idealization, e.g. letting population size go to infinity. But however the community does it, selective realism would not recommend anything different from what the scientific community itself says, so selective realism becomes otiose ... i.e., an idle wheel. (Sorry, I couldn't help myself.)

On the other hand, if, by scientific standards, one should accept p, then the selective realist can't be a methodological naturalist: the selective realist has to tell the scientific community that they are wrong to accept p.

I can imagine at least one possible line of reply for the selective realist: embrace the parenthetical remark in the first horn of the dilemma above, namely, that scientists are making a mistake by their own lights in believing p. Then the selective realist would need to show that there is a standard operative in the scientific community that the scientists who accept p don't realize should apply in the particular case of p. But that may prove difficult to show at this level of abstraction.

2/11/2017

Confirmation holism and justification of individual claims

How do epistemological (i.e. confirmation) holists think about the justification of the individual claims that compose the relevant 'whole'?

Epistemological holism or confirmation holism, I take it, holds that sentences cannot be justified or disconfirmed in isolation. In other words, only sufficiently large conjunctions of individual claims can be fundamentally justified or disconfirmed. What counts as 'sufficient' depends on how big a chunk of theory you think is needed for justification: some holists allow big sets of beliefs to be confirmed/justified even if they are proper subsets of your total belief-set. (I will use 'individual claim' to mean a claim that is 'too small,' i.e. 'too isolated,' to admit of confirmation, according to the holist.)

I'm guessing confirmation holists also think that the individual claims that make up a justified whole are themselves justified. (If holists didn't think that, then it seems like any individual claim a holist made would be unjustified, by the holist's own lights, unless they uttered it in conjunction with a sufficiently large set of other utterances.) The individual claims are justified, presumably, by being part of a sufficiently large conjunction of claims that is fundamentally/basically justified. Individual claims, if justified, can only be derivatively justified.

Presumably, if one believes that 'A1 & A2 & … & An' (call this sentence AA) is justified, then that person has (or: thinks they should have) a rational degree of belief in AA greater than 0.5.
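In symbols (my notation, just to fix ideas for what follows):

    AA := A_1 \wedge A_2 \wedge \cdots \wedge A_n, \qquad P(AA) > \tfrac{1}{2},

and the axioms of probability then require, for each conjunct,

    P(AA) \le P(A_i), \quad i = 1, \ldots, n.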

But now I have questions:

(1) How does a holist get from the degree of belief she has in AA to the degree of belief she has in a particular conjunct? There are many, many ways, consistent with the probability calculus, to assign probabilities to the individual Ai's that yield any particular rational degree of belief in AA (except 1, which forces every conjunct to probability 1). (See the sketch after (3) below for a small numerical illustration.)

(2) We might try to solve the 'underdetermination' problem in (1) by specifying that every conjunct is assigned the same degree of belief. This seems prima facie odd to me, since presumably some conjuncts are more plausible than others, but I don't see how the holist could justify having different levels of rational belief in each conjunct, since each conjunct gets its justification only through the whole. (Perhaps the partial holist has a story to tell about claims that participate in multiple sufficiently large conjunctions, each of which is justified?)

(3) Various ways of intuitively assigning degrees of belief to the individual conjuncts seem to run into problems:

(i) The holist might say: if I have degree of belief k in AA, then I will have degree of belief k in each conjunct. Problem: the probability calculus requires the probability of a conjunction to be no greater than the probability of any of its conjuncts, with equality only in degenerate cases (e.g. perfectly correlated conjuncts), so in general this assignment violates the axioms (unless k = 1).


(ii) Alternatively, if the holist wants to obey the axioms of the probability calculus, then the rational degree of belief she will need to have in each conjunct must be VERY high. For example, if the degree of belief in AA is over 0.5, each conjunct is assigned the same value (per (2)), the conjuncts are treated as probabilistically independent, and there are 100 individual conjuncts, then one's degree of belief in each conjunct must be over 0.993 (i.e. over 0.5^(1/100)). And that seems really high to me. (The sketch below checks this arithmetic.)


(iii) One alternative to that would be to say that each conjunct of a large conjunction has to be over 0.5. But then (again assuming 100 independent conjuncts) you would have to say that the big conjunction is justified even when your rational degree of belief in it is barely above 0.5^100 ≈ 7.9×10^-31. And that doesn't sound like a justified sentence.
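Here is a quick sketch that checks the arithmetic in (1), (ii), and (iii), under the simplifying assumption (mine, not something the holist is committed to) that the conjuncts are equiprobable and probabilistically independent:

    # Arithmetic behind (1), (ii), and (iii), assuming independent conjuncts.

    n = 100

    # (ii): for P(AA) > 0.5 with n equiprobable independent conjuncts,
    # each conjunct needs probability above the n-th root of 0.5.
    per_conjunct = 0.5 ** (1 / n)
    print(f"(ii)  each conjunct must exceed {per_conjunct:.4f}")   # ~0.9931

    # (iii): if each conjunct is only just over 0.5, the conjunction's
    # probability can sink as low as 0.5 ** n.
    floor = 0.5 ** n
    print(f"(iii) conjunction can sink to   {floor:.2e}")          # ~7.89e-31

    # (1): underdetermination -- distinct assignments to two conjuncts
    # that all yield the same degree of belief P(AA) = 0.6.
    for p1, p2 in [(0.75, 0.80), (0.60, 1.00), (0.9428, 0.6364)]:
        print(f"(1)   P(A1)={p1:.4f}, P(A2)={p2:.4f} -> P(AA)={p1 * p2:.4f}")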


Two final remarks: First, it seems like someone must have thought of this before, at least in rough outline. But my 10 minutes of googling didn’t turn up anything. So if you have pointers to the literature, please send them along. Second, for what it's worth, this occurred to me while thinking about the preface paradox: if you think justification only fundamentally accrues to large conjunctions and not individual conjuncts, then it seems like you couldn’t run (something like) the preface paradox, since you couldn't have a high rational degree of belief in (an analogue of) the claim ‘At least one of the sentences in this book is wrong.’
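In symbols, my gloss on that last point: if justification requires P(AA) > 1/2, then the credence the holist can assign to the preface-style claim is

    P(\text{at least one } A_i \text{ is false}) = P(\neg AA) = 1 - P(AA) < \tfrac{1}{2},

so the preface sentence could never itself be highly credible alongside a justified whole.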