10/21/2014

Historiographical reflections

I know this "scumbag analytic philosopher" meme is played out, but this one just came to me:

[image: the "scumbag analytic philosopher" meme]

Commence groaning...

10/04/2014

(Why) must ethical theories be consistent with intuitions about possible cases?

Since Brian Weatherson recently classified this blog as 'active,' I thought I should try to live up to that billing.

A question came up in one of my classes yesterday. The student asked (in effect): Why wouldn't a moral theory that makes all the right 'predictions' about actual cases be good enough? Why demand that a moral theory also be consistent with our intuitions about possible cases, even science-fiction-ish ones?

(The immediate context was a discussion of common objections to utilitarianism, specifically, slavery and the utility monster. The student said, sensibly I think, that the utilitarian could reply that all actual cases of slavery are bad on utilitarian grounds, and there are no utility monsters.)

I know that some philosophers have argued that if a moral claim is true, then it is (metaphysically?) necessarily true: there is no possible world in which e.g. kicking kittens merely for fun is morally permissible. If you accept that all moral claims are like this, then I can see why you would demand our moral theories be consistent with our intuitions about all possible cases. But if one does not accept that all moral truths are metaphysically necessary, is there any other reason to demand the theory make the right prediction about merely possible cases?

This question seems especially pressing if we think one of the main uses of moral theories is as a guide to action, since we only ever act in the actual world. However, now that I say that explicitly, I realize that whenever we make a decision, the option(s) we decided against are merely possible situations. So maybe that could explain why an ethical theory needs to cover merely possible cases? (Though even there, it need not cover all metaphysically possible cases -- e.g. the utility monster worries never need to be part of my actual decision-making process.)

9/15/2014

Are video games counter-examples to Suits' definition of 'game'?

Many readers are familiar with Bernard Suits' definition of 'game' in The Grasshopper. For those of you who aren't, Suits offers this definition of playing a game: "engaging in an activity directed towards bringing about a specific state of affairs, using only means permitted by rules, where the rules prohibit more efficient in favour of less efficient means, and where such rules are accepted just because they make possible such activity" (pp. 48-9).

I'm wondering whether video games are counter-examples, because of the condition that "the rules prohibit more efficient in favour of less efficient means." This makes sense for most games: in golf, it would be more efficient to just carry the ball by hand and put it into the hole; in poker, it would be more efficient to just reach across the table and take all your opponents' chips/cash. But what is the analogue of these 'more efficient' means in a video game?

One might point to cheat codes, but even if a cheat code does satisfy this condition of Suits' definition, we can at least imagine a video game that doesn't have cheat codes.

6/09/2014

New(-ish) BJPS blog

Maybe everyone already knows about this, but the British Journal for the Philosophy of Science now has a blog: Auxiliary Hypotheses.  There are multiple thought-provoking posts -- happily, it doesn't only announce tables of contents.

(And I might have my own substantive post up sometime in the near future... I'm wrestling with Quine for the umpteenth time, and hopefully will be able to write something here about it.)

3/29/2014

Implicit bias and vaccinations

Question: Is having a pernicious bias analogous to not getting vaccinated?

Philosophers have been talking more about implicit biases lately. I was recently reading Payne's work on 'weapon bias'; here are the first two sentences from the abstract of this paper:
"Race stereotypes can lead people to claim to see a weapon where there is none. Split-second decisions magnify the bias by limiting people’s ability to control responses."
That is, if forced to make a snap judgment, people in the US today are more likely to identify a non-gun tool as a gun if they have just seen a picture of someone typically racialized as black than if they have just seen someone typically racialized as white. Likewise, if people have been under a heavy cognitive load before classifying an object as a gun or a tool, they are more likely to make this mistake. If subjects have plenty of time and have not been under heavy cognitive load, then they make far fewer mistakes, and, more importantly, the rate of mistakes is the same regardless of the race of the face they have just seen.

We typically say that people should be permitted complete freedom of thought: we should only be held accountable for our actions, not our beliefs.  I think (and I could be wrong about this) the usual justification for this complete freedom is: one can always control which thoughts one acts on.  For example, I can think 'I wish so-and-so were dead' without killing them, or even putting them at increased risk of being killed.  If we did not have control over whether our thoughts issued in corresponding actions, then simply having certain kinds of beliefs would put others at increased risk for harm.  It would arguably be a kind of negligence.

Cases like weapon bias suggest that the 'usual justification' above is wrong, and that my having certain (conscious or unconscious) beliefs does put others at increased risk for harm.  The usual justification holds in good circumstances: when I have plenty of time, and am not under a heavy cognitive load, I can control which of my thoughts issue in corresponding actions.  But sometimes I find myself in less than good circumstances (at least sometimes through no fault of my own).  And in those circumstances, my pernicious biases are more likely to harm others.

A biased person in a social setting that includes people stigmatized by that bias seems analogous to me to someone who has not gotten a vaccination for a communicable disease and who is not quarantined.  If I don't get vaccinated, then it is of course possible that I will never get the measles, and thus never harm anyone.  But my lack of vaccination raises the risk that others will be harmed.  Having pernicious biases seems to be the same, if the weapon bias results are any indication.

Can someone talk me out of this line of reasoning?  I think I have an obligation to get vaccinated, but an obligation to have a 'thought vaccination' (whether for conscious or unconscious thoughts) sounds like the Thought Police/brainwashing -- a result I'm guessing most people want to avoid.

1/03/2014

π, τ, and Quine's pragmatism

Some readers may already be familiar with the π vs. τ debate. If not, I recommend checking out Michael Hartl's τ manifesto and this fantastic short video by Vi Hart. An attempt at a balanced evaluation of π vs. τ can be found here.

For those who don't know, τ is just 2π. The defenders of τ argue (as seen in the above links) that using it instead of π makes many things much clearer, simpler, and more elegant.
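To make the alleged gains concrete, here is a quick side-by-side of a few standard formulas in both notations (my own illustration, using nothing beyond the definition τ = 2π; the area formula is the stock counterexample, since it arguably looks worse in τ terms):

C = 2πr = τr  (circumference)
a quarter turn = 2π/4 = π/2 radians = τ/4 radians
A = πr² = (1/2)τr²  (area of a circle)

The τ-proponents' standard reply to the area case, for what it's worth, is that (1/2)τr² fits the familiar pattern of quadratic forms like (1/2)mv² and (1/2)gt².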

Let's assume for present purposes that the τ-proponents turn out, in the end, to be right. I want to ask a further question: what would this then say about Quine's denial of the analytic-synthetic distinction? Quine's denial, virtually all agree, is the claim that all rational belief change is pragmatic, i.e. there is no principled difference between questions of evidence/justification on the one hand, and questions of efficiency and expedience on the other (= between external and internal questions, i.e. between practical questions of which language-form to adopt, and questions of whether a particular empirical claim is supported by the available evidence).

So here's my question: if Quine is right, then is our old friend C=2πr simply wrong (and C=τr right)? If not, how can a Quinean wiggle out of that consequence? And if so (i.e. C=2πr really is wrong), does the Quinean have any way of softening the sting of this apparently absurd consequence?

1/01/2014

The Knobe Effect and coaches resting their starting players

So here's one for the Knobe effect folks -- I'm wondering what they would think about the following case:
Last weekend, in the NFL (American football), the head coach of the Kansas City Chiefs (Andy Reid) rested 20 of his 22 starting players; i.e. the coach played almost exclusively back-up players. He did this because he wanted to rest his starting players for the playoffs, and his team's playoff seeding would not be affected by a win or a loss in last weekend's game against the San Diego Chargers.
Reid's decision (to play almost all back-up players instead of the usual starters) made it more likely that his team would lose to the Chargers. (This had further consequences: the Chargers would go to the playoffs if they won that game.)
(This paragraph is skippable for those familiar with the Knobe effect.) For readers who don't have the Knobe effect memorized, here's the original form: Suppose an action has a side effect. If that side effect is considered morally bad, then respondents say the actor intentionally caused it, whereas morally good side effects are judged to be unintentionally caused. After another decade of research, the picture has been altered and expanded somewhat; here's Mark Alfano's summary:
"Over the last decade, researchers have greatly expanded the diversity of both the norm-violations that trigger the effect and the psychological states whose attribution exhibits the asymmetry. The effect crops up not only after the violation of a moral norm, but also after the violation of prudential, aesthetic, legal, conventional, and even descriptive norms. The attribution asymmetry is found not only for intentionality, but also for cognitive attitudes such as belief, knowledge, and memory, for conative attitudes such as desire, favor, and advocacy, and for the virtue of compassion and the vice of callousness."
There are further wrinkles as well. In the case of morally wrong laws, the pattern is reversed: if the side effect consists in following the law (and thus causing a morally bad outcome), then respondents say it was not caused on purpose; conversely, a side effect that consists in breaking the law (and thus causing a morally good outcome) is judged to be intentionally caused. Richard Holton has a nice short article that aims to account for all of these experimental results in terms of convention-breaking.

So now my question is: did Andy Reid intentionally reduce the Chiefs' chances of winning last Sunday? (And thereby intentionally increase the chances of the Chargers to make the playoffs?) To make it match the prompts for the Knobe effect studies, we can imagine that we asked Reid before the game "Do you care about improving the Chargers' chances of winning?" and he said "No, all I care about is resting my starting players."

This seems like an interesting case to me, because there is certainly a norm against intentionally causing your own team to (be more likely to) lose. Match-fixing is widely frowned upon, and not only when gambling is involved. (Remember the badminton debacle at the 2012 Olympics, in which eight athletes were thrown out of the competition for trying to lose?) So it seems that (at least some) explanations of the Knobe effect would predict that respondents would say Reid intentionally decreased his team's chances of winning.