Like (almost) everyone else, I define 'deductively correct' (i.e., 'valid') as follows:
An argument A is deductively correct (valid) =
If all of A’s premises are true, then A’s conclusion must be true =
If all of A’s premises are true, then it is impossible for A’s conclusion to be untrue.
Now, there are two (or maybe three) definitions of 'inductively correct argument' that follow this same pattern of definition.
(Definition 1: probable simpliciter)
An argument B is inductively correct =
If all of B’s premises are true, then B’s conclusion is probably true =
If all of B's premises are true, then it is unlikely that B's conclusion is untrue.
(Definition 2: more probable)
An argument C is inductively correct =
If all of C's premises are true, then C's conclusion is more likely to be true than it otherwise would be =
If all of C's premises are true, then the probability that C's conclusion is untrue decreases (relative to its unconditional probability).
In other words:
Definition 1: Pr(Conclusion | Premises) > 0.5
Definition 2: Pr(Conclusion | Premises) > Pr(Conclusion)
(If you think >0.5 is too low, you can pick whatever higher cutoff you like. My current problem is separate from the question of where to set that threshold.)
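For concreteness, here is a minimal sketch (my own gloss, not anything more than the two inequalities above) that encodes Definitions 1 and 2 as predicates on the relevant probabilities; the 0.5 cutoff is just the placeholder value from Definition 1.

# Minimal sketch: Definitions 1 and 2 as predicates on probabilities.
# The 0.5 cutoff is the placeholder from Definition 1; raise it if you prefer.

THRESHOLD = 0.5

def correct_def1(p_conclusion_given_premises):
    # Definition 1: Pr(Conclusion | Premises) > 0.5 (probable simpliciter)
    return p_conclusion_given_premises > THRESHOLD

def correct_def2(p_conclusion_given_premises, p_conclusion):
    # Definition 2: Pr(Conclusion | Premises) > Pr(Conclusion) (more probable)
    return p_conclusion_given_premises > p_conclusion

# A non sequitur whose conclusion is antecedently very probable: the premises
# leave its probability at 0.9, so Definition 1 says "correct" while
# Definition 2 says "not correct."
print(correct_def1(0.9))       # True
print(correct_def2(0.9, 0.9))  # False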
Now I can state my problem: it looks like neither definition will make the right classifications for some paradigm examples of fallacies.
On Definition 1, any argument whose conclusion is highly probable regardless of (i.e., independently of) the truth or falsity of the premises will count as inductively correct. That is, any non sequitur whose conclusion is probably true will count as inductively correct. (This is the inductive analog of the fact that a logical truth is a deductive consequence of any set of premises. But it just feels much more wrong in the inductive case, for some reason; maybe just because I've been exposed to this claim about deductive inference for so long that it has lost its unintuitiveness?)
On Definition 2, hasty generalization (i.e., a sample that is too small) will count as inductively correct: suppose I talk to 3 likely US Presidential voters and all 3 say they are voting for Clinton. It is intuitively fallacious to conclude that Clinton will win the Presidency, but surely those 3 responses give some (tiny) boost to the hypothesis that she will win the Presidency.
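A back-of-the-envelope Bayes calculation can make that "tiny boost" concrete. All the numbers below are illustrative assumptions (a 0.6 prior that Clinton wins, and respondents only slightly more likely to say "Clinton" if she will in fact win), not polling data.

# Illustrative only: how much do 3-for-3 "Clinton" responses move a rational
# agent's credence? Assumed prior and likelihoods, not real data.
prior = 0.60            # assumed prior that Clinton wins
p_say_if_win = 0.51     # assumed chance a random respondent says "Clinton" if she wins
p_say_if_lose = 0.49    # ... and if she loses

# Likelihood of the evidence: all 3 (assumed independent) respondents say "Clinton".
like_win = p_say_if_win ** 3
like_lose = p_say_if_lose ** 3

posterior = like_win * prior / (like_win * prior + like_lose * (1 - prior))
print(prior, round(posterior, 3))  # 0.6 -> about 0.63: a real but small boost
# Definition 2 is satisfied (posterior > prior), yet the argument is a
# textbook hasty generalization.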
But non sequiturs and hasty generalizations are both paradigm examples of fallacies, so neither Definition 1 nor Definition 2 will work.
I said above that there might be a third definition. This would simply be the conjunction of Definitions 1 and 2: if the premises are true, then the conclusion must be BOTH probable simpliciter (Def. 1) AND more probable than it otherwise would be (Def. 2). It seems like this would rule out both non sequiturs (because a non sequitur's premises do not raise the probability of its conclusion, so the Def. 2 conjunct fails) and hasty generalizations (because the conclusion wouldn't be probable simpliciter, so the Def. 1 conjunct fails).
Problem solved? I don't think so, because there could be a hasty generalization whose conclusion is probable regardless of whether its premises are true. Given our current background information (as of Sept. 6, 2016) about the US Presidential race, the above example probably fits this description: 'Clinton will win' is more likely to be true than not, and the sample of three voters would boost a rational agent's confidence in that claim (by a minuscule amount), so the conjunctive definition still counts it as inductively correct. That said, I will grant that a reasonable person might think this example is NOT a fallacy, but rather just an inductively correct argument that is so weak it is ALMOST a fallacy.
Before signing off, I will float a fourth candidate definition:
Definition 4: Pr(Conclusion | Premises) >> Pr(Conclusion)
Put otherwise:
Definition 4: Pr(Conclusion | Premises) > Pr(Conclusion) + n, for some non-tiny n > 0.
You could also conjoin this with Definition 1 if you wanted. This would take care of hasty generalizations. But does it create other problems? (You might object, "That n will be arbitrary!" My initial reaction is that setting the line between inductive and fallacious at >0.5 [or wherever you choose to set it] is probably arbitrary in a similar way.)
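Here is a sketch of Definition 4, with the optional conjunction with Definition 1; the margin n (and the 0.5 cutoff) are placeholders the post deliberately leaves open.

# Sketch of Definition 4: the premises must raise the conclusion's probability
# by at least a non-tiny margin n. The values of n and the 0.5 cutoff are
# placeholders, not proposals.

def correct_def4(p_cond, p_uncond, n=0.1, also_require_def1=False):
    meets_margin = p_cond > p_uncond + n
    meets_def1 = p_cond > 0.5
    return meets_margin and (meets_def1 or not also_require_def1)

# The 3-voter example: a minuscule boost fails the margin test.
print(correct_def4(p_cond=0.63, p_uncond=0.60))                           # False
# A substantial boost passes (and also clears the Def. 1 cutoff).
print(correct_def4(p_cond=0.85, p_uncond=0.60, also_require_def1=True))   # True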
4 comments:
Two somewhat cranky comments:
First, I'm surprised that you attribute that definition of validity to "(almost) everyone". Its if/then cannot be a material conditional, so it becomes tangled or muddy when the premises are inconsistent. I thought (almost) everyone would have said "C is valid iff it is impossible for the premises to be true and the conclusion false".
Second, I don't see why non sequitur is a problem. It seems entirely parallel to the case with deduction.
Consider a 30-premise argument with one irrelevant premise. We don't deny that it's deductively valid (or inductively strong). It is rhetorically non-optimal, but it still holds that someone who accepts the premises ought to accept the conclusion. Perhaps the person offering the argument introduced more premises than they needed because they didn't realize how general the result really is. Maybe the evidence overdetermines the conclusion.
Now proceed by inches, so that more and more of the premises of the argument are unnecessary. I see no point at which that undoes the validity/strength. But of course the limit of the process is that all 30 premises are irrelevant. The result is so general that none of the premises were essential to it. That seems to me just how logic works; you can have a logically strong argument which is rhetorically weak.
For what it's worth: if we mean "inductive" in the broad sense of ampliative, then we could say that legit inductive arguments are ones that are invalid but whose premises nevertheless give us good reason to believe the conclusion.
This avoids non sequitur, because it is posed in terms of the premises giving reasons. It also avoids hasty generalization, because it insists on good reasons.
Sadly, it is not formalized in terms of probabilities.
Hi P.D. --
Thanks for these!
On your first comment:
On your first point: If I change the definitions to what you prefer, does that change the main point of the OP?
On your second point (still within the first posted comment), I am definitely tempted to take that route. But then some instantiations of the following argument form become inductively good, and I'm lying to my students:
Most As are B.
x is B.
Thus, x is A.
(Arguments with such forms become inductively good (in the sense of Definition 1) when a sentence that has a high (unconditional) probability is substituted for the conclusion.)
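To make that worry concrete, here is a toy model (all numbers made up for illustration) in which both premises of that form hold, the conditional probability of the conclusion clears the 0.5 bar, and yet the premises give no support to the conclusion; Definition 1 still calls the argument inductively good.

# Toy model: 90% of individuals are A; 80% of As are B ("Most As are B"),
# and 90% of non-As are also B. All numbers are made up for illustration.
p_A = 0.90
p_B_given_A = 0.80
p_B_given_notA = 0.90

# Pr(A | B) by Bayes' theorem.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B

print(round(p_A_given_B, 3))   # about 0.889: clears the 0.5 bar (Definition 1)
print(p_A_given_B > p_A)       # False: learning "x is B" slightly lowers Pr(x is A)
# So Definition 1 counts this instance of the form as inductively good even
# though the premises are, if anything, mild evidence against the conclusion.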
On your second comment (= 3rd point):
I agree with your characterization of "legit inductive arguments" completely, and that's exactly how I begin the material in class. The obvious next question is: what does 'good reason(s)' mean here, exactly? And that's the bit I am struggling with. I guess one radical(?) conclusion I could draw from the post is that there is no extant acceptable explication of 'good reason(s)'; we take it as primitive (or as something we just know when we see it, or as a family resemblance concept connected only by paradigm exemplars, etc.).
Regarding the definition of deduction: your formulation sets up a parallel structure with what you propose for induction. The definition I prefer would be rhetorically less useful here without changing the upshot. On the day I responded, the definition of validity had come up tangentially in my grad seminar; I tell students who give an "if the premises are true..." definition that they are only almost right.
Regarding the more substantive point, whether there is a general explication of "good reasons": one way of understanding John Norton's advocacy for a Material Theory of Induction is that there is no general, substantive characterization to be given of what makes an ampliative inference legitimate. There is something contingent at work in it, either a material postulate (a suppressed premise) or a material inference principle.
Note that for induction taken more narrowly (to mean random sampling) we can spell out what the postulates or inference principles look like. And it becomes explicit that larger samples make for stronger inferences.
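A quick numerical illustration of that last point (my example, not Norton's): with a uniform prior over a population proportion and samples that are 60% "yes", the probability that a majority of the population says "yes" climbs steadily with sample size.

# Numerical illustration: larger random samples make for stronger inferences.
# Uniform prior over the population proportion; the posterior is
# Beta(yes + 1, no + 1), integrated numerically on a grid (no libraries needed).

def prob_majority(yes, no, grid=100_000):
    # Pr(true proportion > 0.5 | sample) under a uniform prior.
    total = above = 0.0
    for i in range(1, grid):
        p = i / grid
        density = p ** yes * (1 - p) ** no   # unnormalized posterior density
        total += density
        if p > 0.5:
            above += density
    return above / total

for yes, no in [(3, 2), (30, 20), (300, 200)]:
    print(yes + no, round(prob_majority(yes, no), 3))
# The same 60% split supports "a majority says yes" at roughly 0.66 with n = 5,
# roughly 0.92 with n = 50, and essentially 1 with n = 500.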