Since Brian Weatherson recently classified this blog as 'active,' I thought I should try to live up to that billing.
A question came up in one of my classes yesterday. The student asked (in effect): Why wouldn't a moral theory that makes all the right 'predictions' about actual cases be good enough? Why demand that a moral theory also be consistent with our intuitions about merely possible cases, even science-fiction-ish ones?
(The immediate context was a discussion of common objections to utilitarianism, specifically, slavery and the utility monster. The student said, sensibly I think, that the utilitarian could reply that all actual cases of slavery are bad on utilitarian grounds, and there are no utility monsters.)
I know that some philosophers have argued that if a moral claim is true, then it is (metaphysically?) necessarily true: there is no possible world in which e.g. kicking kittens merely for fun is morally permissible. If you accept that all moral claims are like this, then I can see why you would demand that our moral theories be consistent with our intuitions about all possible cases. But if one does not accept that all moral truths are metaphysically necessary, is there any other reason to demand that the theory make the right predictions about merely possible cases?
This question seems especially pressing to me if we think one of the main uses of moral theories is as a guide to action, since we only ever act in the actual world. However, now that I say that explicitly, I realize that whenever we make a decision, the option(s) we decide against are merely possible situations. So maybe that could explain why an ethical theory needs to cover merely possible cases? (Though even there, it need not cover all metaphysically possible cases -- e.g. the utility monster worries never need to be part of my actual decision-making process.)
4 comments:
If maximizing utility doesn't get the right answer across all possible moral cases, then it cannot be the (full) explanation for what makes a particular action right/good in the actual world.
Mere extensional adequacy does not seem to be the target of moral theorizing, unless we take the task of moral theorizing to be the production of heuristics, rather than explanations.
Hi Lewis --
Thanks for stopping by! Sorry I took so long to reply; I didn't really have anything to say for a while, in part because at first I wasn't sure what reason(s) you had in mind for denying that moral theories only need to make all the right predictions about actual cases. But this morning I had a new thought, and perhaps I now understand where you are coming from. Is the following at least in the spirit of what you have in mind here?
"Since Hempel, many/most philosophers working on explanation have argued that a (full) explanation must have at least one law in it. Laws are claims of necessity (so they support counter-factuals). So any explanation of what makes a particular action morally right or wrong must include a moral 'law' -- or at least a morally necessary claim. And these moral laws or necessities are precisely what moral theories aim to provide."
Or is that not the sort of thing you had in mind?
Also, I guess I am wondering about your phrase "all possible moral cases." I honestly don't know what is supposed to be included in that. Is it different from "all metaphysically possible cases"? Or something narrower? (And if something narrower, why?)
(I guess in particular I am wondering whether you think "all possible moral cases" is constrained/ determined in part by the "'ought' implies 'can'" principle.)
Greg,
Suppose in the actual world we think agent S ought to do A, and we offer a sort of utilitarian story about why, namely that the change in welfare from doing A is positive (or, if you like, that the welfare outcomes of doing A are greater than those of refraining from doing A).
So we seem to commit to "S should do A because A-ing would increase welfare".
Now, if there is a possible scenario where A-ing would increase welfare, but it is not the case that S should do A in that scenario, then the explanation we gave of the actual scenario is at best incomplete. Suppose the scenario is one where A-ing would increase welfare, but only slightly, and also S would break a promise by A-ing. And so we think S shouldn't A in that scenario. Then the explanation we gave of the actual scenario left some qualifications implicit. The full explanation should be:
"S should do A because it increases welfare and does not involve the breaking of a promise" or something like that.
In general, if Q is a full explanation of P, then any time Q obtains, P should obtain also. I think it is plausible that moral explanations will thus have to involve moral laws as you describe them, but I am really only committed to the weaker claim that full explanations cannot obtain without the things they explain obtaining also.
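Put schematically (just a rough gloss of that weaker claim, reading the box as metaphysical necessity):

$$ Q \text{ fully explains } P \;\Rightarrow\; \Box\,(Q \rightarrow P) $$

So if the welfare story really is the full explanation, there is no possible case where the welfare condition holds but the obligation fails.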
So, utilitarianism may be an effective heuristic in the actual world, but if it is the full explanation of why we should do various things, then it would have consequences for triage cases and waiting room cases and trolley cases. If the answers it gives in those cases are wrong, it reveals to us that utilitarianism, at best, offers only partial explanations of our actual world obligations.