Like (almost) everyone else, I define ‘deductively correct’, i.e. ‘valid’, as follows:

An argument A is deductively correct (valid) =

If all of A’s premises are true, then A’s conclusion must be true =

If all of A’s premises are true, then it is impossible for A’s conclusion to be untrue.

Now, there are two (or maybe three) definitions of ‘inductive argument’ that follow this pattern of definition.

(Definition 1: probable simpliciter)

An argument B is inductively correct =

If all of B’s premises are true, then B’s conclusion is probably true =

If all of B’s premises are true, it is unlikely that B’s conclusion is untrue

(Definition 2: more probable)

An argument C is inductively correct =

If all of C’s premises are true, then C’s conclusion is more likely to be true =

If all of C’s premises are true, the probability of the conclusion’s untruth decreases

In other words:

Definition 1: Pr(Conclusion | Premises) > 0.5

Definition 2: Pr(Conclusion | Premises) > Pr(Conclusion)

(If you think >0.5 is too low, you can pick whatever higher cutoff you like. My current problem is different from picking where to set that threshold number.)
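For concreteness, the two definitions can be sketched as simple classifiers. Everything here — the function names, the probability values, and the non sequitur example at the end — is an illustrative assumption of mine, not part of any standard formalism:

```python
# A minimal sketch of the two candidate definitions of inductive correctness.
# All numbers below are made-up illustrative values.

THRESHOLD = 0.5  # the Definition 1 cutoff (pick a higher one if you like)

def correct_def1(pr_c_given_p):
    """Definition 1 (probable simpliciter): Pr(C | P) > THRESHOLD."""
    return pr_c_given_p > THRESHOLD

def correct_def2(pr_c_given_p, pr_c):
    """Definition 2 (more probable): Pr(C | P) > Pr(C)."""
    return pr_c_given_p > pr_c

# A non sequitur with an independently probable conclusion:
# Pr(C | P) = Pr(C) = 0.9, so the premises do no evidential work,
# yet Definition 1 counts the argument as inductively correct.
print(correct_def1(0.9))        # prints True
print(correct_def2(0.9, 0.9))   # prints False
```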

Now I can state my problem: it looks like neither definition makes the correct classifications for some paradigm examples of fallacies.

On Definition 1, any argument whose conclusion is highly probable independently of the truth or falsity of the premises will count as inductively correct. That is, any **non sequitur** whose conclusion is probably true will count as inductively correct. (This is the inductive analog of the fact that a logical truth is a deductive consequence of any set of premises. But it just feels much more wrong in the inductive case, for some reason; maybe just because I've been exposed to this claim about deductive inference for so long that it has lost its un-intuitiveness?)

On Definition 2, **hasty generalization** (i.e. sample size too small) will count as inductively correct: suppose I talk to 3 likely US Presidential voters and all 3 say they are voting for Clinton. It is intuitively fallacious to conclude that Clinton will win the Presidency, but surely those 3 responses give some (small, tiny) boost to the hypothesis that she will win the Presidency.
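To make the "tiny boost" claim concrete, here is a hedged back-of-the-envelope Bayesian calculation. The prior and the per-voter likelihoods are invented numbers, chosen only to illustrate that three unanimous responses raise the posterior, and only slightly:

```python
# Hypothetical model: H = "Clinton wins the Presidency". Assume that if H is
# true, a random likely voter says "Clinton" with probability 0.51, and with
# probability 0.49 if H is false. The prior and both likelihoods are made up.
prior = 0.60
p_clinton_if_h = 0.51
p_clinton_if_not_h = 0.49

# Likelihood of three independent, unanimous "Clinton" responses:
like_h = p_clinton_if_h ** 3
like_not_h = p_clinton_if_not_h ** 3

# Bayes' theorem:
posterior = (prior * like_h) / (prior * like_h + (1 - prior) * like_not_h)

print(round(posterior, 4))   # prints 0.6284 — a boost of under 0.03
print(posterior > prior)     # prints True: Definition 2 is satisfied
```

So on these (invented) numbers the poll does confirm the conclusion, which is exactly what makes Definition 2 count the hasty generalization as correct.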

But non sequiturs and hasty generalizations are both paradigm examples of fallacies, so neither Definition 1 nor Definition 2 will work.

I said above that there might be a third definition. This would simply be the conjunction of Definitions 1 and 2: if the premises are true, then the conclusion must be BOTH probable simpliciter (Def. 1) AND more probable (Def. 2). It seems like this would rule out both non sequiturs (because in a non sequitur the truth of the premises does not increase the probability of the conclusion) and hasty generalizations (because the conclusion wouldn’t be probable simpliciter).

Problem solved? I don’t think so, because there could be a hasty generalization whose conclusion is probable even if the premises are all false. Given our current background information (as of Sept. 6, 2016) about the US Presidential race, the above example probably fits this description: ‘Clinton will win’ is more likely to be true than not, and the sample of three voters would boost a rational agent’s confidence in that claim (by a minuscule amount). That said, I will grant that a reasonable person might think this example is NOT a fallacy, but rather just an inductively correct argument that is so weak it is ALMOST a fallacy.
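This worry can be checked numerically. Using an invented 0.60 prior for ‘Clinton will win’ and a hypothetical posterior of roughly 0.63 after the three-voter poll, the conjunction of Definitions 1 and 2 is satisfied:

```python
THRESHOLD = 0.5  # the Definition 1 cutoff (or pick a higher one)

def correct_def3(pr_c_given_p, pr_c):
    """Definition 3: probable simpliciter AND made more probable."""
    return pr_c_given_p > THRESHOLD and pr_c_given_p > pr_c

# Illustrative values: the conclusion is already probable (0.60),
# and the premises add a tiny boost (to 0.63).
print(correct_def3(0.63, 0.60))  # prints True — yet it still looks hasty
```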

Before signing off, I will float a fourth candidate definition:

Definition 4: Pr(Conclusion | Premises) >> Pr(Conclusion)

Put otherwise:

Definition 4: Pr(Conclusion | Premises) > Pr(Conclusion) + *n*, for some non-tiny *n* > 0.

You could also conjoin this with Definition 1 if you wanted. This would take care of hasty generalizations. But does it create other problems? (You might object “That *n* will be arbitrary!” My initial reaction is that setting the line between inductive and fallacious at >0.5 [or wherever you choose to set it] is probably arbitrary in a similar way.)
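Definition 4 (optionally conjoined with Definition 1) can be sketched the same way. The margin value below is, as the objection anticipates, an arbitrary illustrative choice, and the test probabilities are invented:

```python
N = 0.1          # the "non-tiny" margin; admittedly an arbitrary choice
THRESHOLD = 0.5  # the Definition 1 cutoff, if conjoined

def correct_def4(pr_c_given_p, pr_c, conjoin_def1=False):
    """Definition 4: the premises must boost the conclusion by at least N.
    With conjoin_def1=True, the conclusion must also clear THRESHOLD."""
    boosted = pr_c_given_p > pr_c + N
    probable = pr_c_given_p > THRESHOLD
    return boosted and (probable or not conjoin_def1)

# The three-voter Clinton example: a minuscule boost fails Definition 4.
print(correct_def4(0.63, 0.60))  # prints False
# A strong statistical syllogism: a large boost passes.
print(correct_def4(0.95, 0.50))  # prints True
```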