
A gentle introduction to adaptive logics

Here's a brief explanation of what adaptive logics are that I wrote up as part of another paper. It covers less ground than Batens's stuff, but it is meant to be a tad more accessible and introductory. If you need more gory details, check out the centre's writings list. If I have some time, I'll post some examples that show how the machinery works. So, here we go (let me know if you find typos or unclarities):


An adaptive logic adapts itself to the premises it's applied to: the correctness of some rules or steps depends on the choice of premises. The basic idea is that while reasoning with an adaptive logic we "swing between" two simpler logics, called the 'lower limit logic' (LLL) and the 'upper limit logic' (ULL), where ULL is a strengthening of LLL. As long as no problematic formula (details to follow) is derived from a set of premises, we apply ULL. Once some premises turn out to lead to difficulties, we restrict ourselves to those conclusions that we can derive from the problematic premises using LLL, though we may still apply ULL to those steps which don't rely on the normal behavior (=falsehood) of the abnormalities that we know follow from the premises. This approach results in adaptive proofs being doubly dynamic.
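To give a quick, made-up illustration of the sort of thing this means in the inconsistency-adaptive case discussed below: suppose the upper limit logic is classical logic, the lower limit logic is a paraconsistent logic, the problematic formulas are contradictions, and the premises are p, ~p v q, s, ~s, and ~s v t. Disjunctive syllogism is an ULL-move rather than an LLL-move. Applying it to p and ~p v q to get q is fine, since the premises give us no reason to think p behaves inconsistently; applying it to s and ~s v t to get t is not, since s&~s follows from the premises, so s is exactly the kind of formula whose consistency we cannot rely on. The machinery described below makes this precise.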

They are externally dynamic because most of them are nonmonotonic: once our premise set is extended by new input, we might have to cancel some of our previous conclusions if the new information makes them unreliable (say, if a conclusion depends on the falsehood of some abnormalities which the extended premise set now derives). They are also internally dynamic because even if we keep the premise set the same, it may turn out that a conclusion that we ULL-derived from it using certain premises is no longer reliable once we discover at some later point that those premises also LLL-derive a problematic formula, and hence that those rules that are specific to ULL cannot be reliably applied in certain cases.

There is a general form that various adaptive logics instantiate, called the standard format. An adaptive logic AL is characterized by a choice of:
  • a lower limit logic LLL,
  • a set of abnormalities O, and
  • a strategy.
LLL is the logic which is taken to hold unconditionally and whose rules are correct even in the case of problematic premises. LLL is monotonic, and its language is assumed to contain the classical connectives. For instance, in an inconsistency-adaptive logic the LLL will be a non-explosive logic of some sort (say, CLuN).
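(In case the term is unfamiliar: "non-explosive" means that a contradiction does not entail everything. In CLuN, for instance, p and ~p together do not entail an arbitrary q, whereas in classical logic they do; that is precisely what makes it usable as a lower limit logic for inconsistent premise sets.)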

O is a set of those formulas which are assumed to be false unless proven otherwise. Batens (2007a: 130-131) requires that O should be a set of formulas of a certain logical form, so that "it enables one to consider adaptive logics as formal logics." For instance, in an inconsistency-adaptive logic one might take the abnormalities to be formulas of the form A&~A (or existential closures of those, if the language is first-order rather than purely sentential). LLL together with O determine ULL. The upper limit logic is just the lower limit logic plus the assumption that the elements of O are false. Let G be a set of formulas. If we define (yes, I will keep my symbols as simple as possible because I'm too lazy to play with embedding LaTeX into html):

(1) G~ = {~A: A is an element of G}

and we represent the provability relation of a logic L by "(L->)," we can put this point concisely:

(2) G (ULL->) A iff G u O~ (LLL->) A

where u is the set-theoretic union.
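To make (2) concrete in the inconsistency-adaptive case (with LLL = CLuN and O the set of formulas of the form B&~B), the resulting ULL comes out, if I've got the standard setup right, as classical logic CL, and (2) specializes to:

(2') G (CL->) A iff G u {~(B&~B): B is a formula} (CLuN->) A

That is, classical consequence is what you get out of CLuN once you explicitly feed in the negation of every possible contradiction.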

A strategy, basically, is a set of instructions that tell us when we have to cancel a conclusion that we obtained previously and when to remove the cancellation symbol from a line (technically, this activity is called marking). In order to be able to explain one of the simplest strategies, however, I first have to say a few words about dynamic proofs.


A dynamic proof is a sequence of lines, each of which can be taken to consist of four main components: a line number, a formula, a justification for that formula, and a set of conditions upon which the formula is derived. In addition, each line can be marked (marks can come and go as the proof progresses). If a line is marked at some point, this means that the formula on that line is not considered derived at that stage. The first three components are fairly self-explanatory. Conditions and marking require some more attention.
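If it helps to see the bookkeeping spelled out, here is a rough sketch of how one might represent a proof line in code (the representation and the field names are mine, purely for illustration):

    from dataclasses import dataclass

    @dataclass
    class ProofLine:
        number: int           # the line number
        formula: str          # the formula derived on this line
        justification: str    # the rule applied and the lines it was applied to
        condition: frozenset  # abnormalities assumed to behave normally (i.e., to be false)
        marked: bool = False  # marks can be added and removed as the proof progresses

Nothing below depends on this particular representation; it just makes the four components, plus the mark, vivid.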

Our task in a proof is not only to derive formulas from premises but also to recognize those steps that can't be trusted and to cancel those conclusions which rely on the normal behavior of abnormalities which we have reason to believe do not behave normally. Now, what do we mean when we say that a step can't be trusted? As a first stab, one might say that a step is unreliable if it depends on the falsehood of an abnormality which, as it turns out, follows from the premises. This is quite close. There's one minor complication, though. A premise set may prove a disjunction of abnormalities without proving any of its disjuncts separately, thus, in a sense, implying that at least one of them has to behave erratically, but not telling us that there is a single disjunct that we can blame. To keep track of these things, in the conditions column we put the set of those abnormalities on whose normal behavior (=falsehood) the present step relies. And (on one of the simplest strategies) we mark a line as unreliable if it depends on (the normal behavior of) a set of abnormalities D, and some member of D is a disjunct in a minimal disjunction of abnormalities LLL-derived from the premises (this notion and the rationale for its introduction will be explained below). Let's take a look at dynamic proofs in a bit more detail.

There are three rules for AL-proving formulas from G:
  • The first rule, Prem, allows one to introduce any premise from G, with the empty set in the conditions column (and an appropriate line number in the first column).
  • The second rule, Ru, says that if we have proven A1 on the assumption that D1 behaves normally, A2 on the assumption that D2 behaves normally, ... , and An on the assumption that Dn behaves normally, and B can be LLL-derived from A1, ..., An, we can infer B as relying on the normal behavior of D1 u D2 u ... u Dn.
  • The third rule, Rc, is based on the following idea. If from G we can LLL-derive that either B is true or one of the abnormalities in a set D occurs, we can conclude that B AL-follows from G on the assumption that D behaves normally. If D is a finite set of abnormalities, let us call the classical disjunction of its members "Dab(D)." Just like before, assume we have proven A1 on the assumption that D1 behaves normally, A2 on the assumption that D2 behaves normally, ... , and An on the assumption that Dn behaves normally. Now, however, suppose B v Dab(D) can be LLL-derived from A1, ..., An. Then we can infer B as relying on the normal behavior of D1 u D2 u ... u Dn u D. (A small worked example follows this list.)
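Here is a tiny worked example of the three rules in action, using the toy inconsistency-adaptive setup from above (LLL = CLuN, abnormalities of the form A&~A, premises p, ~p v q, s, ~s, ~s v t); the layout of the justification column is mine:

    1   p        Prem      {}
    2   ~p v q   Prem      {}
    3   s        Prem      {}
    4   ~s       Prem      {}
    5   ~s v t   Prem      {}
    6   q        1,2; Rc   {p&~p}
    7   t        3,5; Rc   {s&~s}
    8   s&~s     3,4; Ru   {}

Line 6 is licensed by Rc because q v (p&~p) is CLuN-derivable from lines 1 and 2 (reason by cases on ~p v q), and line 7 is analogous. Line 8 is plain conjunction introduction, which is LLL-valid, so its condition stays empty.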
Now for the trickiest part: marking for the reliability strategy (be aware this is a fairly simple strategy and there are others). A proof, as we conduct it, can be considered as proceeding in stages. Every application of a rule carries us to the next stage. A formula Dab(D) (i.e., a classical disjunction of abnormalities) is a minimal Dab-formula of a proof at a stage s iff Dab(D) occurs in the proof at a line with the empty condition set, and for no D' which is a proper subset of D does the proof contain at s a line with Dab(D') which has the empty set as the condition.

The intuitive reason why we are interested in minimal Dab-formulas of a proof, instead of just any proven Dab-formulas whatsoever, is this. We want Dab-formulas to help us discover those abnormalities on whose normal behavior we can't rely. The fact that Dab(D) has been proven tells us only that at least one member of D has to be true if the premises are to be true (at least as long as LLL is truth-preserving). However, we want to assume that as many abnormalities as possible are false, and we take any abnormality to be false unless really forced to do otherwise. Thus, if we know that both Dab(D) and Dab(D') are LLL-derived from our premises, but also that D' is a proper subset of D, we know that we don't really have to blame any element of D which doesn't belong to D' for this fact. If we want to distrust as few abnormalities as possible, it's enough to assume that it is the members of D' that we can't rely on.

Hence, we first define Us(G) to be the union of all D's that are constituents of those minimal Dab-formulas that have been derived so far from G at stage s. Then, we mark a line in a proof as canceled (or unreliable) if it depends on the normal behavior of D, and yet at least one member of D is a member of Us(G). That is, if a certain line relies on the falsehood of all abnormalities from a certain set, and yet at least one member of this set is among those abnormalities on whose normal behavior we can't rely, the formula in the line itself turns out to be unjustified.
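To see these definitions in action, here is a rough, hedged sketch of reliability marking in code, applied to the toy proof above. The representation (a line as a triple of formula, condition, and the D of a Dab-formula when the line is one derived on the empty condition) is my own shortcut, not part of the official machinery:

    def minimal_dab_sets(proof):
        # Collect the D's of Dab-formulas derived on the empty condition,
        # then keep only the subset-minimal ones.
        dabs = [dab for (formula, condition, dab) in proof
                if dab is not None and not condition]
        return [D for D in dabs if not any(E < D for E in dabs)]  # '<' is proper subset

    def unreliable(proof):
        # Us(G) at this stage: the union of the constituents of all minimal Dab-formulas.
        U = set()
        for D in minimal_dab_sets(proof):
            U |= D
        return U

    def marked_lines(proof):
        # A line is marked iff its condition shares a member with Us(G).
        U = unreliable(proof)
        return [i + 1 for i, (formula, condition, dab) in enumerate(proof)
                if condition & U]

    # The toy proof from above: (formula, condition, constituents of a Dab-formula or None)
    proof = [
        ("p",      frozenset(),         None),
        ("~p v q", frozenset(),         None),
        ("s",      frozenset(),         None),
        ("~s",     frozenset(),         None),
        ("~s v t", frozenset(),         None),
        ("q",      frozenset({"p&~p"}), None),
        ("t",      frozenset({"s&~s"}), None),
        ("s&~s",   frozenset(),         frozenset({"s&~s"})),
    ]

    print(marked_lines(proof))  # [7]: the line for t gets marked, the line for q survives

With the reliability strategy this is really all marking amounts to: after every step, recompute the minimal Dab-formulas and Us(G), and mark whichever lines have a condition overlapping Us(G).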


You should be able to see now how the dynamics of a proof can go. You may be forced to retract one of your conclusions if it turns out to rely on the normal behavior of D while a member of D is a disjunct of some Dab(D') that shows up as a minimal Dab-formula at a certain stage. And you may also be allowed to cancel your cancellation of that formula afterwards, if you later derive, on the empty condition, a disjunction of abnormalities whose disjuncts form a proper subset of D' containing no member of D, thus making Dab(D') no longer minimal.
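For a concrete (made-up) illustration: suppose line 5 of some proof relies on the condition {p&~p}, and at stage 7 you derive Dab({p&~p, q&~q}) on the empty condition. That disjunction is minimal at stage 7, so p&~p lands in U7(G) and line 5 gets marked. If at stage 9 you manage to derive Dab({q&~q}) on the empty condition, the longer disjunction is no longer minimal, p&~p drops out of U9(G) (assuming no other minimal Dab-formula contains it), and the mark on line 5 is removed.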

One last notion that we need: final derivability. Since we are sometimes allowed to cancel a line that we derived before, the fact that a line is derived does not mean that its formula actually AL-follows from the premises. Hence we also introduce the notion of final derivability. A formula A is finally derived in a proof if it is derived in an unmarked line of that proof and no way of continuing the proof will force us to mark that line for good (that is, any extension of the proof which would force us to mark the line can always be itself extended into a proof where this line is unmarked). There are two ways this can happen:
  • A follows from our premises by LLL, in which case it is derived on the empty condition; and then it is vacuously true that we cannot prove any abnormality on whose normal behavior it relies (just because it doesn't rely on the normal behavior of any abnormality), and
  • For some non-empty set of abnormalities D, our derivation of A depends on the normal behavior of all the members of D, but no member of D is a member of a minimal Dab-formula LLL-derivable from the premises.
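In the toy proof above, for instance, q (derived on the condition {p&~p} at line 6) is finally derived, because, as far as I can see, p&~p is not a member of any minimal Dab-formula CLuN-derivable from those premises, so no extension of the proof can mark that line for good. The line for t, on the other hand, is doomed: s&~s is itself a minimal Dab-consequence of the premises, so any extension that derives it on the empty condition marks the line for t once and for all.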
