
A rant about "deductive"

or
Don't diss the logician

I’m on my way back from The Second Conference on Concept Types and Frames in Language, Cognition and Science in Düsseldorf. It was a nice conference that gathered linguists, cognitivists, philosophers of science and logicians interested in the functional approach to concepts.

One of the things that surprised me was that both experienced cognitivists (like Paul Thagard) and younger researchers still stick to the distinction between inductive and deductive types of reasoning and attach so much importance to it. Interestingly, “deductive” in their use carries a pejorative connotation, and the term is sometimes used condescendingly to emphasize that whatever it is that logicians do is boring and useless, and that pretty much the only source of insight and real knowledge is “inductive inference” taking place in “the real brain”. So, here’s a short rant about this sort of attitude (Frederik is reading over my shoulder and tossing in his remarks).

To start with, I don’t think I know a logician alive who still uses the word “deductive” in any serious ahistorical context. This is because the notion is so worn out that different people associate it with many different things. Instead, more specific terms are used that separately capture different things that you might mean when you say “deductive”.

Roughly, a consequence relation is often simply thought of as a set of pairs of sets of sentences. It is called structural if it’s closed under substitution. That’s one thing you might have in mind: deductive means defined in terms of rules (and maybe axioms) which essentially make no distinction between formulas of the same syntactic form. Another way to think about these things is to require that a deductive consequence simply be truth-preserving (vaguely: it’s impossible for the conclusion to be false when the premises are true). This interpretation is not syntactic, but rather model-theoretic. A truth-preserving consequence doesn’t have to be structural, and a structural consequence doesn’t have to be truth-preserving. Another sense you might associate with being deductive is being both structural and truth-preserving (in which case you still get a multitude of consequence relations, depending on what language and model theory you pick, and on what you take to belong to your logical vocabulary).

Yet another interpretation you can take is to say that something is a deductive consequence of a given set of premises if it follows from them by classical logic – this notion is sometimes used by those cognitivists who think that logic just is classical logic. Although this consequence is structural, whether it’s truth-preserving when it comes to natural language depends on what you think about the correctness of certain natural-language inferences. For instance, you might be a relevantist – in which case you’re inclined to say that classical logic allows you to infer too much.

Yet another notion simply requires a deductive consequence to satisfy Tarski’s conditions, or some of them, or some of them together with other conditions of a similar type. Yet another idea is to make no reference to a formal system whatsoever and to say that a sentence A is a deductive consequence of a sentence B iff “If B, then A” is analytic (standard qualms about analyticity aside). So in general, the logician’s conceptual framework is full of notions more precise than “deductive”, and the word “deductive” itself seems unclear and a tad outdated.
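
For concreteness, here is a rough sketch of some of these conditions in my own formulation (with Cn a consequence operation on sets of sentences, e ranging over substitutions, and the model-theoretic consequence taken relative to whatever semantics you fancy):

\[ X \subseteq Cn(X), \qquad X \subseteq Y \Rightarrow Cn(X) \subseteq Cn(Y), \qquad Cn(Cn(X)) = Cn(X) \tag{Tarski's conditions} \]
\[ e[Cn(X)] \subseteq Cn(e[X]) \ \text{ for every substitution } e \tag{structurality} \]
\[ X \models A \ \text{ iff every model of } X \text{ is a model of } A \tag{truth-preservation} \]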

But let us even suppose we fix on the notion of being deductive as being validated by classical logic (this seems to be the best you can do if you want to make it easy for the cognitivists to argue that deductive inferences are uninformative). Why on earth would you think that deductive reasoning can only give you boring and useless consequences that you were already aware of, unless what you take to be the most prominent example of a deduction is one of those slightly obvious syllogisms, most likely employing Socrates and his mortality?

The thing is, human beings are not logically omniscient (I myself, for instance, often feel dumb when I stare at a deductive proof I can’t grasp after half an hour). In fact, the history of mathematics is a good source of examples where prima facie well-understood premise sets led to surprising consequences. Just because the truth of a conclusion is guaranteed by the truth of the premises doesn’t mean that once we believe the premises we are actually aware that they lead to this conclusion. Take Russell’s paradox: a rather bright dude named Frege spent years without noticing a fairly simple piece of reasoning whose conclusion was to him quite surprising. Take Gödel’s incompleteness theorem(s): a rather well-known set of mathematical truths, together with a bit of slightly complicated deductive reasoning, led to one of the most important discoveries in twentieth-century logic, one that stunned a bunch of other not-too-dumb mathematicians. If you still think that deductive inferences give nothing but boring and obvious conclusions, think again!
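
For the record, here is the reasoning in question sketched in modern notation (Frege’s own system looked rather different, so treat this as an illustration, not exegesis). Naive comprehension gives you a set for every condition; plug in one particular condition and a contradiction follows in a few deductive steps:

\[ \exists y\, \forall x\, (x \in y \leftrightarrow \varphi(x)) \qquad \text{(naive comprehension, for any condition } \varphi) \]
\[ \text{take } \varphi(x) := x \notin x \text{ and call a witness } R: \qquad \forall x\, (x \in R \leftrightarrow x \notin x) \]
\[ \text{instantiate with } x := R: \qquad R \in R \leftrightarrow R \notin R \]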

Two points about the opposition between the deductive and the inductive. First of all, unless you define inductive as non-deductive, the distinction is not exhaustive. For instance, if inductive inferences are supposed to be those that lead to a general conclusion, we’re missing non-deductive inferences with particular conclusions (as in history, where one uses certain general assumptions and knowledge about present facts to surmise something particular about the past). In this respect, the deductive-reductive distinction introduced by the Lvov-Warsaw school sounds a bit neater (look it up).
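
Very roughly, and from memory (so do look it up properly), the idea going back to Łukasiewicz is that deduction runs from the reason to the consequence, while reduction runs from the consequence to the reason; both induction and the historian’s inference above then fall on the reductive side:

\[ \text{deduction:}\ \ \frac{A \rightarrow B \qquad A}{B} \qquad\qquad \text{reduction:}\ \ \frac{A \rightarrow B \qquad B}{A} \]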

Another thing is that people often speak of inductive inferences as if they had nothing to do with deduction (the following point was made by Frederik). Quite to the contrary, certain facts about what is deducible and what isn’t always lie in the background when you’re assessing the plausibility of an inductive inference. For instance, you want the generalization you introduce to explain the particular data you’re generalizing from, and one of the most obvious analyses of explanation uses the notion of deducibility. Also, you don’t want your new generalization to contradict your other data and the other generalizations you have introduced before: but hey, isn’t the notion of consistency highly dependent on your notion of derivability?
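
To illustrate: on something like Hempel’s deductive-nomological picture (not the only analysis of explanation, but surely one of the most obvious ones), a generalization G together with auxiliary conditions explains a particular datum E only if E is deducible from them; and the consistency requirement is itself spelled out in terms of derivability:

\[ \{G, C_1, \dots, C_n\} \vdash E \qquad\qquad X \text{ is consistent iff } X \nvdash \bot \]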

Having said that, I should also emphasize that this doesn't mean I take non-deductive inferences (whatever they are) to be uninteresting; indeed, the question of how we come to accept certain beliefs other than by deducing them (whatever that consists in) from other beliefs is a very hard and interesting problem. What I object to, rather, is drawing cut-and-dried lines between these types of reasoning and saying that only one of them is interesting.

Comments

Tuomas E. Tahko said…
That's interesting, Rafal. I wonder how the inductive/deductive distinction relates to the a posteriori/a priori distinction, namely, is deduction always a priori and induction a posteriori? Or does it even make sense to compare the notions?
Steve said…
Thanks for your post. Can you recommend some reading for a non-logician that provides an overview of the 'deductive vs. inductive' debate?
Anonymous said…
Can't we mark the interesting difference simply by noting that there is no syntactic test for the validity of an inductive inference, whereas there are syntactic tests for inference validity in all deductive systems?

This bypasses questions about which forms of inference are more 'interesting' (whatever that might mean) or the 'a posteriori/a priori' distinction, which introduces all sorts of other baggage.

- Nick Maley