Like (almost) everyone else, I define ‘deductively correct,’ i.e. ‘valid,’ as follows:
An argument A is deductively correct (valid) =
If all of A’s premises are true, then A’s conclusion must be true =
If all of A’s premises are true, then it is impossible for A’s conclusion to be untrue.
Now, there are two (or maybe three) definitions of ‘inductive argument’ that follow this pattern of definition.
(Definition 1: probable simpliciter)
An argument B is inductively correct =
If all of B’s premises are true, then B’s conclusion is probably true =
If all of B’s premises are true, it is unlikely that B’s conclusion is untrue.
(Definition 2: more probable)
An argument C is inductively correct =
If all of C’s premises are true, then C’s conclusion is more likely to be true (than it otherwise would be) =
If all of C’s premises are true, the probability of the conclusion’s untruth decreases
In other words:
Definition 1: Pr(Conclusion | Premises) > 0.5
Definition 2: Pr(Conclusion | Premises) > Pr(Conclusion)
(If you think >0.5 is too low, you can pick whatever higher cutoff you like. My current problem is different from picking where to set that threshold number.)
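To make the contrast between the two definitions concrete, here is a minimal sketch in Python; the 0.5 threshold and all probability values are merely assumed for illustration, not anything argued for above:

```python
# Hypothetical classifiers for the two definitions (all numbers invented).

def correct_def1(pr_c_given_p, threshold=0.5):
    # Definition 1: the premises make the conclusion probable simpliciter.
    return pr_c_given_p > threshold

def correct_def2(pr_c_given_p, pr_c):
    # Definition 2: the premises raise the conclusion's probability.
    return pr_c_given_p > pr_c

# A non sequitur with an antecedently probable conclusion:
# the premises are irrelevant, so Pr(C | P) = Pr(C) = 0.9.
print(correct_def1(0.9))       # True  -- counted as correct by Definition 1
print(correct_def2(0.9, 0.9))  # False -- Definition 2 requires a boost
```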
Now I can state my problem: it looks like neither definition makes the correct classifications for some paradigm examples of fallacies.
On Definition 1, any argument whose conclusion is highly probable regardless of the truth or falsity of the premises will count as inductively correct. That is, any non sequitur whose conclusion is probably true will count as inductively correct. (This is the inductive analog of the fact that a logical truth is a deductive consequence of any set of premises. But it feels much more wrong in the inductive case, for some reason; maybe just because I've been exposed to this claim about deductive inference for so long that it has lost its unintuitiveness?)
On Definition 2, hasty generalization (i.e. sample size too small) will count as inductively correct: suppose I talk to 3 likely US Presidential voters and all 3 say they are voting for Clinton. It is intuitively fallacious to conclude that Clinton will win the Presidency, but surely those 3 responses give some (tiny) boost to the hypothesis that she will win the Presidency.
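To check that the boost really is positive, here is a rough Bayesian sketch of the example; the 0.70 prior and the per-voter probabilities are invented for illustration and carry no empirical weight:

```python
# Toy Bayesian model of the 3-voter example (all numbers are assumptions).
prior = 0.70         # assumed prior Pr(Clinton wins)
p_if_win = 0.51      # assumed chance a random voter backs her, if she wins
p_if_lose = 0.49     # ...and if she loses

like_win = p_if_win ** 3    # likelihood of 3 unanimous pro-Clinton answers
like_lose = p_if_lose ** 3

posterior = prior * like_win / (prior * like_win + (1 - prior) * like_lose)
print(round(posterior, 3))  # ~0.725: above 0.5 AND above the 0.70 prior, so
                            # the argument passes Definition 2 (and Definition 1)
```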
But non sequiturs and hasty generalizations are both paradigm examples of fallacies, so neither Definition 1 nor Definition 2 will work.
I said above that there might be a third definition. This would simply be the conjunction of Definitions 1 and 2: if the premises are true, then the conclusion must be BOTH probable simpliciter (Def. 1) AND more probable than it was before (Def. 2). It seems like this would rule out both non sequiturs (because a non sequitur’s premises fail to increase the probability of its conclusion, violating Def. 2) and hasty generalizations (because the conclusion wouldn’t be probable simpliciter, violating Def. 1).
Problem solved? I don’t think so, because there could be a hasty generalization whose conclusion is probable even if the premises are all false. Given our current background information (as of Sept. 6, 2016) about the US Presidential race, the above example probably fits this description: ‘Clinton will win’ is more likely to be true than not, and the sample of three voters would boost a rational agent’s confidence in that claim (by a minuscule amount). That said, I will grant that a reasonable person might think this example is NOT a fallacy, but rather just an inductively correct argument that is so weak it is ALMOST a fallacy.
Before signing off, I will float a fourth candidate definition:
Definition 4: Pr(Conclusion | Premises) >> Pr(Conclusion)
put otherwise:
Definition 4: Pr(Conclusion | Premises) > Pr(Conclusion) + n, for some non-tiny n > 0.
You could also conjoin this with Definition 1 if you wanted. This would take care of hasty generalizations. But does it create other problems? (You might object “That n will be arbitrary!” My initial reaction is that setting the line between inductive and fallacious at > 0.5 [or wherever you choose to set it] is probably arbitrary in a similar way.)
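For what it's worth, on the invented numbers from the sketch above, Definition 4 delivers the intuitively right verdict on the 3-voter example:

```python
# Definition 4 applied to the toy example above (n = 0.1 chosen arbitrarily).
def correct_def4(pr_c_given_p, pr_c, n=0.1):
    return pr_c_given_p > pr_c + n

print(correct_def4(0.725, 0.70))  # False: the tiny boost fails Definition 4
```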
7 comments:
Interesting! Might some relevance-logic rule help to fix the first definition?
Yes -- I had exactly that thought, that perhaps the main point of the OP could be used as an argument in favor of relevance logic.
But I was hoping to be able to explain the concept of an inductive argument to my Intro to Philosophy students without having to introduce relevance logic ...
I'm dubious about the transition from conditionals to conditional probabilities. Is it so clear that this problem arises for the initial formulation with conditionals?
Hi Tristan --
Thanks for this! Could you say a little bit more about this (or point me to something/someone who does)? In particular:
(1) Are there well-known problems/issues with transitioning from conditionals to conditional probabilities?
(2) If there are such well-known problems, would that undermine the argument given in the original post? If so, how?
Hey Greg.
I'm wondering about the philosophical motivations for the concept of inductive correctness. It seems to me that the problem lies with the binary concept of inductive *correctness*, as opposed to inductive argument per se; the definitions you canvass seem to me to work for the latter but not the former.
But do we need a concept of inductive correctness? Why not settle for:
The inductive strength of an argument = Pr(Conclusion | Premises) - Pr(Conclusion).
This approach doesn't give you a sharp distinction between inductively correct and incorrect. It lets you say, strictly speaking, things like,
1. "Cet. par. the larger your sample, the stronger (and less fallacious) the inductive argument",
rather than
2. "If your sample size is less than x [and population size greater than y?] then your generalization is hasty and your argument inductively incorrect."
So one way of putting my question about philosophical motivations is: is it important to be able to say things like 2, in addition to 1?
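(To illustrate statement 1 numerically, here is a sketch using the toy polling model from the post -- all numbers again mere assumptions: the strength measure grows as the unanimous sample grows.)

```python
# Inductive strength = Pr(Conclusion | Premises) - Pr(Conclusion),
# in the toy polling model above (all numbers are assumptions).
prior, p_if_win, p_if_lose = 0.70, 0.51, 0.49

for k in (3, 30, 300):  # k unanimous pro-Clinton responses
    post = prior * p_if_win**k / (prior * p_if_win**k + (1 - prior) * p_if_lose**k)
    print(k, round(post - prior, 3))  # ~0.025, ~0.186, ~0.300: strength
                                      # increases with sample size
```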
Hi Jonathan --
Good to hear from you!
Yes, you may be right that there is no good reason to say things like 2 in addition to 1.
Here are the two motivations I can think of; one is practical/pedagogical (for teachers, at least), the other less so:
1. Pedagogical (and what actually prompted this post): I want to be able to tell my Intro-level students that some inductive arguments are good and others aren't. I like the idea of showing them what it means to explicate apparently basic relations like 'X is a good, but not conclusive, reason for Y.' Perhaps this is impossible; but Critical Thinking textbooks don't think so, at least.
2. What should I actually believe? If I have a genuinely inductive argument for claim C, and no countervailing arguments, then I should believe C. On the other hand, if I have only fallacious arguments for C, then I should not believe C.
(I know the ethics of belief are a bit more complicated than this, but I suspect something like this idea could survive the complications -- but I could certainly be talked out of that.)
Thanks, Greg. Those sound like compelling motivations to me.
Re: the pedagogical motivation: do you think it would be feasible to use inductive correctness as an example of a concept that has not yet been adequately explicated? I think it can be good to show students that the issues they're learning about are sometimes connected, and not distantly so, to the cutting edge -- even textbook material can be debated, improved upon, etc. (I guess this would sort of be to present to students the thrust of the OP.)
My point in my previous comment relates to this: we *can* adequately explicate degree of inductive strength, so the evaluation of inductive arguments isn't completely beyond explication; but (your point) it would be great to go further. Here's an opportunity for a novel philosophical contribution to a problem that students are already in a position to understand.
A downside of this suggestion is that it's hard to include ongoing problems on tests and, sadly, it can be hard to get the average student (at many schools) to care about material that won't be on the test...