4/07/2015

Das beste Blog der Welt

I'm sure most folks who check this blog already know about this, but just in case you missed it: André Carus has recently started writing some really interesting posts on his (aptly titled) Carnap Blog. It is required reading for anyone interested in Carnapia.

4/05/2015

Thoughts from the Pacific APA meeting


I just got back from the Pacific APA meeting. There were a lot of highlights for me: the session on Eugenics and Philosophy was really excellent (I especially got a lot out of Rob Wilson's opening remarks about his work on people sterilized in his province, as well as Adam Cureton's paper on disability and parenting); Nancy Cartwright's Dewey lecture was really interesting; and I was happy to see History of Analytic very well represented in several spots on the program. That included the author-meets-critics session on my Carnap, Tarski, Quine book -- I was very fortunate to have great commentators: Rick Creath, Gary Ebbs, and Greg Lavers. I'm very thankful to Richard Zach for organizing the session, and to Sean Morris for stepping in to chair at the last minute. And the conversation with the audience was helpful to me as well. Happily, even if you weren't at the session, you'll still be able to see what they said: their insightful comments will eventually appear in a symposium in Metascience.

One thing that I noticed was that there were not a lot of talks on philosophy of science proper. (Though happily there were some, e.g. an author-meets-critics on Jim Tabery's Beyond Versus: The Struggle to Understand the Interaction of Nature and Nurture.) Interestingly, there were a decent number of philosophers of science there, but often they were presenting something that was not philosophy of science (like me), or speaking on something philosophy-of-science adjacent (e.g. a philosopher of biology speaking on bioethics).

I was wondering whether anyone had hypotheses about this -- one hypothesis is that because the PSA exists and is pretty big, the PSA 'cannibalizes' presentations that would otherwise go to the APA. Another hypothesis is that my perception of the percentage of the profession that identifies as philosophers of science is not accurate, and the APA program accurately reflected the true percentage. But I am very curious to hear other explanations.

(And the baked-goods highlight of the trip was the coffee bun at Papparoti -- it was the most interesting pastry I've had in a while.)

10/21/2014

Historiographical reflections

I know this "scumbag analytic philosopher" meme is played out, but this one just came to me:


Commence groaning...

10/04/2014

(Why) must ethical theories be consistent with intuitions about possible cases?

Since Brian Weatherson recently classified this blog as 'active,' I thought I should try to live up to that billing.

A question came up in one of my classes yesterday. The student asked (in effect): Why wouldn't a moral theory that makes all the right 'predictions' about actual cases be good enough? Why demand that a moral theory must also be consistent with our intuitions about possible cases, even science-fiction-ish ones, as well?

(The immediate context was a discussion of common objections to utilitarianism, specifically, slavery and the utility monster. The student said, sensibly I think, that the utilitarian could reply that all actual cases of slavery are bad on utilitarian grounds, and there are no utility monsters.)

I know that some philosophers have argued that if a moral claim is true, then it is (metaphysically?) necessarily true: there is no possible world in which e.g. kicking kittens merely for fun is morally permissible. If you accept that all moral claims are like this, then I can see why you would demand our moral theories be consistent with our intuitions about all possible cases. But if one does not accept that all moral truths are metaphysically necessary, is there any other reason to demand the theory make the right prediction about merely possible cases?

This question seems especially pressing to me, if we think one of the main uses of moral theories is as a guide to action, since we only ever act in the actual world. However, now that I say that explicitly, I realize that whenever we make a decision, the option(s) we decided against are merely possible situations. So maybe that could explain why an ethical theory needs to cover merely possible cases? (Though even there, it need not cover all metaphysically possible cases -- e.g. the utility monster worries never need to be part of my actual decision-making process.)

9/15/2014

Are video games counter-examples to Suits' definition of 'game'?

Many readers are familiar with Bernard Suits' definition of 'game' in The Grasshopper. For those of you who aren't, Suits offers this definition of playing a game: "engaging in an activity directed towards bringing about a specific state of affairs, using only means permitted by rules, where the rules prohibit more efficient in favour of less efficient means, and where such rules are accepted just because they make possible such activity" (pp. 48-9).

I'm wondering whether video games are counter-examples, because of the condition "the rules prohibit more efficient in favour of less efficient means." This makes sense for most games: in golf, it would be more efficient to just carry the ball by hand and put it into the hole; in poker, it would be more efficient to just reach across the table and take all your opponents' chips/cash. But what is the analogue of these 'more efficient' ways in a video game?

One might point to cheat codes, but even if a cheat code does satisfy this condition of Suits' definition, we can at least imagine a video game that doesn't have cheat codes.

6/09/2014

New(-ish) BJPS blog

Maybe everyone already knows about this, but the British Journal for the Philosophy of Science now has a blog: Auxiliary Hypotheses.  There are multiple thought-provoking posts -- happily, it doesn't only announce tables of contents.

(And I might have my own substantive post up sometime in the near future... I'm wrestling with Quine for the umpteenth time, and hopefully will be able to write something here about it.)

3/29/2014

implicit bias and vaccinations

Question: Is having a pernicious bias analogous to not getting vaccinated?

Philosophers have been talking more about implicit biases recently.  I was recently reading Payne's work on 'weapon bias'; here are the first two sentences from the abstract of this paper:
"Race stereotypes can lead people to claim to see a weapon where there is none. Split-second decisions magnify the bias by limiting people’s ability to control responses."
That is, if forced to make a snap judgment, people in the US today are more likely to identify a non-gun tool as a gun if they have just seen a picture of someone typically racialized as black than someone typically racialized as white.  Also, if people have been under a heavy cognitive load before classifying an object as a gun or a tool, they are more likely to make this mistake.  If subjects have plenty of time, and have not been under heavy cognitive load, then they make far fewer mistakes, and, more importantly, the rate of mistakes is the same regardless of the race of the person seen.

We typically say that people should be permitted complete freedom of thought: we should only be held accountable for our actions, not our beliefs.  I think (and I could be wrong about this) the usual justification for this complete freedom is: one can always control which thoughts one acts on.  For example, I can think 'I wish so-and-so were dead' without killing them, or even putting them at increased risk of being killed.  If we did not have control over whether our thoughts issued in corresponding actions, then simply having certain kinds of beliefs would put others at increased risk for harm.  It would arguably be a kind of negligence.

Cases like weapon bias suggest that the 'usual justification' above is wrong, and that my having certain (conscious or unconscious) beliefs does put others at increased risk for harm.  The usual justification holds in good circumstances: when I have plenty of time, and am not under a heavy cognitive load, I can control which of my thoughts issue in corresponding actions.  But sometimes I find myself in less than good circumstances (at least sometimes through no fault of my own).  And in those circumstances, my pernicious biases are more likely to harm others.

A biased person in a social setting that includes people stigmatized by that bias seems analogous to me to someone who has not been vaccinated against a communicable disease and is not quarantined.  If I don't get vaccinated, then it is of course possible that I will never get the measles, and thus never harm anyone.  But my lack of vaccination raises the risk that others will be harmed.  Having pernicious biases seems to be the same, if the weapon-bias results generalize.

Can someone talk me out of this line of reasoning?  I think I have an obligation to get vaccinated, but an obligation to have a 'thought vaccination' (whether for conscious or unconscious thoughts) sounds like the Thought Police/ brainwashing -- a result I'm guessing most people want to avoid.

1/03/2014

π, τ, and Quine's pragmatism

Some readers may already be familiar with the π vs. τ debate. If not, I recommend checking out Michael Hartl's τ manifesto and this fantastic short video by Vi Hart. An attempt at a balanced evaluation of π vs. τ can be found here.

For those who don't know, τ is just 2π. The defenders of τ argue (as seen in the above links) that using it instead of π makes many things much clearer and simpler/more elegant.
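To make the alleged gain concrete, here are a couple of the stock comparisons the τ-proponents offer (these restate standard examples from the linked manifesto and video, not new claims):

```latex
C = 2\pi r = \tau r \quad \text{(circumference in terms of the radius)}

\text{a quarter turn} = \tfrac{\tau}{4} \text{ radians, rather than } \tfrac{\pi}{2}

e^{i\tau} = 1 \quad \text{vs.} \quad e^{i\pi} = -1
```

The thought is that τ, being the ratio of circumference to radius, matches how angles are actually measured (as fractions of a full turn), whereas π quietly builds in a factor of one half.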

Let's assume for present purposes that the τ-proponents turn out, in the end, to be right. I want to ask a further question: what would this then say about Quine's denial of the analytic-synthetic distinction? Quine's denial, virtually all agree, is the claim that all rational belief change is pragmatic, i.e. there is no principled difference between questions of evidence/justification on the one hand, and questions of efficiency and expedience on the other (= between external and internal questions, i.e. between practical questions of which language-form to adopt, and questions of whether a particular empirical claim is supported by the available evidence).

So here's my question: if Quine is right, then is our old friend C=2πr simply wrong (and C=τr right)? If not, how can a Quinean wiggle out of that consequence? And if so (i.e. C=2πr really is wrong), does the Quinean have any way of softening the sting of this apparently absurd consequence?


1/01/2014

The Knobe Effect and coaches resting their starting players

So here's one for the Knobe effect folks -- I'm wondering what they would think about the following case:
Last weekend, in the (American) National Football League, the head coach of the Kansas City Chiefs (Andy Reid) rested 20 of his 22 starting players; i.e. the coach played almost exclusively back-up players. He did this because he wanted to rest his starting players for the playoffs, and his team's position in the playoff seeding would not be affected by a win or a loss in last weekend's game against the San Diego Chargers.
Reid's decision (to play almost all back-up players instead of the usual starters) made it more likely that his team would lose to the Chargers. (This has subsequent effects, because the Chargers would go to the playoffs if they won that game.)
(This paragraph is skippable for those familiar with the Knobe effect.) For readers who don't have the Knobe Effect memorized, here's the original form: Suppose an action has a side effect. If that side-effect is considered morally bad, then respondents say the actor intentionally caused that side effect, whereas morally good side effects are judged to be unintentionally caused. After another decade of research, the picture has been altered and expanded somewhat; here's Mark Alfano's summary:
"Over the last decade, researchers have greatly expanded the diversity of both the norm-violations that trigger the effect and the psychological states whose attribution exhibits the asymmetry. The effect crops up not only after the violation of a moral norm, but also after the violation of prudential, aesthetic, legal, conventional, and even descriptive norms. The attribution asymmetry is found not only for intentionality, but also for cognitive attitudes such as belief, knowledge, and memory, for conative attitudes such as desire, favor, and advocacy, and for the virtue of compassion and the vice of callousness."
There are further wrinkles as well; in the case of morally wrong laws, the effect is reversed: if the side effect consists in following the law (and thus causing a morally bad outcome), then respondents say it was not caused on purpose (and conversely, a side effect of breaking the law, i.e. a morally good outcome, was judged to be intentionally caused). Richard Holton has a nice short article that aims to account for all of these experimental results in terms of convention-breaking.

So now my question is: did Andy Reid intentionally reduce the Chiefs' chances of winning last Sunday? (And thereby intentionally increase the chances of the Chargers to make the playoffs?) To make it match the prompts for the Knobe effect studies, we can imagine that we asked Reid before the game "Do you care about improving the Chargers' chances of winning?" and he said "No, all I care about is resting my starting players."

This seems like an interesting case to me, because there is certainly a norm against intentionally causing your own team to (be more likely to) lose. Match-fixing is widely frowned-upon, and not only when gambling is involved. (Remember the badminton debacle in the 2012 Olympics, in which eight athletes were thrown out of the competition for trying to lose?) So it seems like (at least some) explanations of the Knobe effect would predict that respondents would say that Reid intentionally decreased his team's chances of winning.

12/18/2013

From Wolf's "Asymmetrical Freedom" to animals as moral agents

Most people nowadays think non-human animals (henceforth abbreviated ‘animals’) cannot be moral agents (though they can be moral patients*).   Let’s borrow Mark Rowlands’ definition from Animals that Act for Moral Reasons:
X is a moral agent if and only if X can be morally evaluated -- praised or blamed (broadly understood) -- for its motives and actions.
(Rowlands himself agrees with the orthodoxy that animals are moral patients but not agents; however he argues that animals do fall under a third category, moral subjects, which he defines as anything that can be motivated to act by moral considerations.)

Some people believe that animals can be moral agents; Marc Bekoff and Jessica Pierce’s Wild Justice is a recent defense of this view, but several other people have defended it as well (see the references in section 2 of Rowlands’ linked article). I want to consider a different kind of argument that I have not seen before; if someone else has already made it, please let me know in the comments.

The argument I want to consider here combines a position one of my students recently suggested with Susan Wolf’s “Asymmetrical Freedom.”  (So none of it is original with me.)  The key part of Wolf’s view is: “it seems that an agent can be morally praiseworthy even though he is determined to perform the action he performs” (158).  She elaborates on this as follows:
“When we ask whether an agent’s action is deserving of praise, it seems we do not require that he could have done otherwise.  … ‘I cannot tell a lie,’ ‘He couldn’t hurt a fly’ are not exemptions from praiseworthiness but testimonies to it.    If one feels one ‘has no choice’ but to speak out against injustice, one ought not to be upset about the depth of one’s commitment.” (156)
Wolf’s paper is titled “Asymmetrical Freedom” because, if an agent’s action is morally blameworthy, then that agent cannot be determined to perform that action, and we require that he could have done otherwise.  That is, “The metaphysical conditions required for an agent’s responsibility will vary according to the value of the action he performs” (158).**

Now, one of the leading reasons people give nowadays for the view that animals can’t be moral agents is that it seems wrong to hold animals morally blameworthy for their actions; this rationale is often coupled with the imagined scenario of putting an animal on trial for a crime to heighten the sense of absurdity. 

But if Wolf is right,*** then we can avoid this absurd consequence: if a being’s actions can be morally praiseworthy even though they are determined and the being couldn’t do otherwise, then the fact that an animal acts ‘purely instinctively’ or automatically, without deliberative control, does not rule out that animal’s being a moral agent. (I am assuming we accept Rowlands’ definition of ‘moral agent’ (notice ‘praised OR blamed’ – not ‘and’).)

So far, this merely eliminates one obstacle to animals being moral agents: it is possible for a being to be morally praiseworthy without the possibility of being morally blameworthy (so we don’t have to put lions on trial), because an action can be morally praiseworthy even if the actor had no choice but to perform that action.

But I think we can go further than this mere possibility.  The point my student stressed, and which is probably implicit in the long Wolf quotation above, is that lots of human actions that we consider morally praiseworthy are automatic, ‘System 1’ actions, over which we do not exercise deliberative control.  In this respect, they more closely resemble animal actions than our actions that result from deliberation, future-oriented planning, and (perhaps linguistically-aided) reasoning.  I think there are at least two classes of these automatic actions in humans: (i) the many little daily kindnesses we do for one another without thinking (e.g. you drop your pen, and before I’ve even thought about whether or not I should reach down, I’m handing it back to you), and (ii) massively heroic actions whose performers, when interviewed afterwards, report not even thinking about e.g. running into the burning building.  Now, if we are willing to give moral praise to such automatic, non-deliberative behaviors when done by humans, then prima facie we should be willing to give moral praise to such automatic, non-deliberative behaviors when performed by non-humans too.

Of course, this is only prima facie evidence, because there certainly could be some relevant, important difference between a human’s automatic behaviors and a non-human’s that would invalidate the inference.  But going through the entire list of all plausible candidates would require a much fuller treatment.  I just wanted to get the basic argument clear: if some automatic human actions are morally praiseworthy, then some automatic non-human actions are morally praiseworthy too.

* Here is Rowlands’ definition of a moral patient: “X is a moral patient iff X is a legitimate object of moral concern: that is, roughly, X is something whose interests should be taken into account when decisions are made concerning it or which otherwise impact on it.”
** This formulation made me wonder whether there might be an interesting connection with the Knobe effect, since in Knobe-effect situations ‘the conditions required for an agent’s intentionality/performing an action ‘on purpose’ will vary according to the value of the action he performs.’
*** Of course, someone who finds the conclusion that animals can be morally praiseworthy absurd should take what follows as a reductio of Wolf’s claim that determined acts can be praiseworthy.