Focusing the Inverse Method for Linear Logic

Focusing is traditionally seen as a means of reducing inessential non-determinism in backward-reasoning strategies such as uniform proof search or tableaux systems. In this paper we construct a form of focused derivations for propositional linear logic that is appropriate for forward reasoning in the inverse method. We show that the focused inverse method conservatively generalizes the classical hyperresolution strategy for Horn theories.


Introduction
Strategies for automated deduction can be broadly classified as backward reasoning or forward reasoning. Among the backward reasoning strategies we find tableaux and matrix methods; forward reasoning strategies include resolution and the inverse method. The approaches seem fundamentally difficult to reconcile because the state of a backward reasoner is global, while a forward reasoner maintains locally self-contained state. Both backward and forward approaches are amenable to reasoning in non-classical logics, because they can be derived from an inference system that defines a logic. The derivation process is systematic to some extent, but in order to obtain an effective calculus and an efficient implementation, we need to analyze and exploit deep proof-theoretic or semantic properties of each logic under consideration. Some themes stretch across both backward and forward systems and even different logics. Cut-elimination and its associated subformula property, for example, are absolutely fundamental for both types of systems, regardless of the underlying logic. In this paper we advance the thesis that focusing is similarly universal. Focusing was originally designed by Andreoli [1,2] to remove inessential non-determinism from backward proof search in classical linear logic. It has already been demonstrated [17] that focusing applies to other logics; here we show that focusing is an important concept for theorem proving in the forward direction.
As the subject of our study we pick propositional intuitionistic linear logic [14,3,8] with an additional lax modality [22]. This choice is motivated by two considerations. First, it includes the propositional core of the Concurrent Logical Framework (CLF) [21], so our theorem prover, and its first-order extension, can reason with specifications written in CLF; many such specifications, including Petri nets, the π-calculus and Concurrent ML, are described in [7]. For many of these applications, the intuitionistic nature of the framework is essential. Second, it is almost a worst-case scenario, combining the difficulties of modal logic, intuitionistic logic, and linear logic, where even the propositional fragment is undecidable. A treatment, for example, of classical linear logic without the lax modality can be given very much along the same lines, but would be simpler in several respects.
Our contributions are as follows. First, we show how to construct a non-focusing inverse method for intuitionistic linear logic. This follows a fairly standard recipe [12], although the resource management problem germane to linear logic has to be considered carefully. Second, we define focused derivations for intuitionistic linear logic. The focusing properties of the connectives turn out to be consistent with their classical interpretation, but completeness does not come for free because of the additional restrictions placed by intuitionistic (and modal) reasoning. The completeness proof is also somewhat different from ones we have found in the literature. Third, we show how to adapt focusing so it can be used in the inverse method. The idea is quite general and, we believe, can be adapted to other non-classical logics. Fourth, we demonstrate via experimental results that the focused inverse method is substantially faster than the non-focused one. Fifth, we show that refining the inverse method with focusing agrees exactly with classical hyperresolution on Horn formulas, a property which fails for non-focusing versions of the inverse method. This is practically significant, because even in the linear setting many problems or subproblems may be non-linear and Horn, and need to be treated with reasonable efficiency.
In a related paper [10] we generalize our central results to first-order intuitionistic linear logic, provide more detail on the implementation choices, and give a more thorough experimental evaluation. Lifting the inverse method here to include quantification is far from straightforward, principally because of the rich interactions between linearity, weakening, and contraction in the presence of free variables. However, these considerations are orthogonal to the basic design of forward focusing, which remains unchanged from the judgemental rules. Perhaps most closely related to our work is Tammet's inverse method prover for classical linear logic [20], which is a refinement of Mints' resolution system [19]. Some of Tammet's optimizations are similar in nature to focusing, but are motivated primarily by operational rather than by logical considerations. As a result, they are not nearly as far-reaching, as evidenced by the substantial speedups we obtain with respect to Tammet's implementation. Our examples were chosen so that the difference between intuitionistic and classical linear reasoning was inessential.

Backward linear sequent calculus
We use a backward cut-free sequent calculus for propositions constructed out of the propositional linear connectives {⊗, 1, ⊸, &, ⊤, ⊕, 0, !}; the extension to first-order connectives using the recipe outlined in [10] is straightforward. Propositions are written using uppercase letters A, B, C, with p standing for atomic propositions. The sequent calculus is a standard fragment of JILL [8], containing dyadic two-sided sequents of the form Γ ; ∆ =⇒ C: the zone Γ contains the unrestricted hypotheses and ∆ contains the linear hypotheses. Both contexts are unordered. The rules of the calculus are in fig. 1.

The calculus admits cut: if Γ ; ∆ =⇒ A and Γ ; ∆′, A =⇒ C, then Γ ; ∆, ∆′ =⇒ C; and if Γ ; · =⇒ A and Γ, A ; ∆ =⇒ C, then Γ ; ∆ =⇒ C.
For the fairly standard proofs, see [8].

Definition 2.3 (subformulas).
A decorated formula is a tuple ⟨A, s, w⟩ where A is a proposition, s is a sign (+ or −) and w is a weight (h for heavy or l for light). The subformula relation ≤ is the smallest reflexive and transitive relation between decorated subformulas satisfying the following inequalities, where s̄ is the sign opposite to s, and * can stand for either h or l, as necessary. Decorations and the subformula relation are lifted to (multi)sets in the obvious way.

Property 2.4 (subformula property). In any sequent occurring in a proof of a goal sequent, every proposition, suitably decorated, is a decorated subformula of the goal sequent.
For the remainder of the paper, all rules are restricted to decorated subformulas of a given goal sequent. A right (resp. left) rule is applicable if the principal formula in the conclusion is a positive (resp. negative) subformula of the goal sequent. Of the judgmental rules (re-introduced in the next section), init is restricted to atomic subformulas that are both positive and negative decorated subformulas, and the copy rule is restricted to negative heavy subformulas.
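To make the decoration concrete, the enumeration of decorated subformulas can be sketched as follows. This is only an illustration: the representation of formulas as nested tuples (with '-o' for ⊸ and '!' for the exponential), and the exact inequalities assumed here (sign flips on the left of ⊸, operands of ! become heavy, all other operands inherit sign and weight), are our reading of Definition 2.3, not the paper's code.

```python
def decorated_subformulas(f, sign='+', weight='l'):
    """Enumerate decorated subformulas <A, s, w> of a formula f.

    Formulas are nested tuples such as ('-o', A, B) or ('!', A);
    atoms are strings. Our assumed decoration rules:
      - the left operand of -o flips sign;
      - the operand of ! becomes heavy (h);
      - every other operand inherits sign and weight.
    """
    yield (f, sign, weight)
    flip = {'+': '-', '-': '+'}
    if isinstance(f, tuple):
        if f[0] == '-o':                       # ('-o', A, B): A flips sign
            yield from decorated_subformulas(f[1], flip[sign], weight)
            yield from decorated_subformulas(f[2], sign, weight)
        elif f[0] == '!':                      # ('!', A): A becomes heavy
            yield from decorated_subformulas(f[1], sign, 'h')
        else:                                  # '*', '&', '+': operands inherit
            for a in f[1:]:
                yield from decorated_subformulas(a, sign, weight)
```

Restricting the rules to the set this enumerator produces for the goal sequent is then a simple membership test.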

Forward linear sequent calculus
In addition to the usual non-determinism in rule and subgoal selection, the single-use semantics of linear hypotheses gives rise to resource non-determinism during backward search. Its simplest form is multiplicative, caused by binary multiplicative rules (⊗R and ⊸L), where the linear zone of the conclusion has to be distributed into the premisses. In order to avoid an exponential explosion, backward search strategies postpone this split either by an input/output interpretation, where proving a subgoal consumes some of the resources from the input and passes the remaining resources on as outputs [5], or via Boolean constraints on the occurrences of linear hypotheses [16]. Interestingly, multiplicative non-determinism is entirely absent in a forward reading of multiplicative rules: the linear context in the conclusion is formed simply by adjoining those of the premisses. On the multiplicative-exponential fragment, for example, forward search has no resource management issues at all. Resource management problems remain absent even in the presence of binary additives (& and ⊕).
The only form of resource non-determinism for the forward direction arises in the presence of the additive constants (⊤ and 0). For example, the backward ⊤R rule has an arbitrary linear context, which we cannot guess in the forward direction. We therefore leave it empty (no linear assumptions are needed), but we have to remember that we can add linear assumptions if necessary. We therefore differentiate sequents whose linear context can be weakened from those whose cannot. To distinguish forward from backward sequents, we shall use a single arrow (−→), possibly decorated, but keep the names of the rules the same.

Definition 3.1 (forward sequents).
1. A forward sequent is of the form Γ ; ∆ −→0 C or Γ ; ∆ −→1 γ. Γ contains the unrestricted resources, ∆ holds the linear resources, and γ is either empty (•) or a proposition C. Forward sequents are written mnemonically as Γ ; ∆ −→w γ, where w is a Boolean (0 or 1) called the weak-flag. Sequents with w = 1 are called weakly linear or simply weak, and those with w = 0 are strongly linear or strong.
2. The correspondence relation ≺ between forward and backward sequents is defined as follows:
3. The subsumption relation ≤ between forward sequents is the smallest relation to satisfy the following, where Γ ⊆ Γ′, ∆ ⊆ ∆′, and γ ⊆ γ′:
Note that strong sequents never subsume weak sequents.
Obviously, if s1 ≤ s2 and s2 ≺ s, then s1 ≺ s. It is easy to see that weak sequents model affine logic: this is familiar from embeddings into linear logic that translate affine implications A → B as A ⊸ (B ⊗ ⊤). The collection of inference rules for the forward calculus is in fig. 2. The trickiest aspect of these rules is the side conditions (given in parentheses) and the weakness annotations. In order to understand these, it may be useful to think in terms of the following property, which we maintain for all rules in order to avoid redundant inferences.
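As an illustration of Definition 3.1, the subsumption check between forward sequents can be sketched as follows. This is a sketch only: the representation of a sequent as a tuple (Γ as a set, ∆ as a multiset, the weak-flag, and the right-hand side with None for •), and our reading of the weak/strong cases, are assumptions rather than the paper's implementation.

```python
from collections import Counter

def multiset_leq(d1, d2):
    """True iff multiset d1 is contained in multiset d2."""
    return all(d2[a] >= n for a, n in d1.items())

def subsumes(s1, s2):
    """s1 <= s2: forward sequent s1 subsumes forward sequent s2.

    A sequent is (gamma, delta, w, rhs): gamma a frozenset of unrestricted
    hypotheses, delta a Counter of linear hypotheses, w the weak-flag,
    and rhs the right-hand side (None standing for the empty rhs).
    """
    g1, d1, w1, r1 = s1
    g2, d2, w2, r2 = s2
    if w1 == 0 and w2 == 1:
        return False                    # strong sequents never subsume weak ones
    if not g1 <= g2:                    # unrestricted zone: set inclusion
        return False
    if r1 is not None and r1 != r2:     # empty rhs subsumes any rhs
        return False
    if w1 == 1:
        return multiset_leq(d1, d2)     # weak: linear zone may grow
    return d1 == d2                     # strong vs. strong: zones must agree
```

Note how the clause for weak sequents realizes the affine reading mentioned above: a weak sequent stands for all sequents obtained by weakening its linear zone.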

Definition 3.2.
A rule with conclusion s and premisses s1, . . ., sn is said to satisfy the irredundancy property if for no i ∈ {1, . . ., n}, si ≤ s.
In other words, a rule is irredundant if none of its premisses subsumes the conclusion. Note that this is a local property; we do not discuss here more global redundancy criteria.
The first immediate observation is that binary rules simply take the union of the unrestricted zones of the premisses. The action of the rules on the linear zone is also prescribed by linearity when the sequents are strong (w = 0). The binary additive rule (&R) is applicable in the forward direction when both premisses are weak (w = 1), regardless of their linear zones. This is because in this case the linear zones can always be weakened to make them equal. We therefore compute the upper bound (⊔) of the two multisets: if A occurs n times in ∆ and m times in ∆′, then it occurs max(n, m) times in ∆ ⊔ ∆′.
If only one premiss of the binary additive rule is weak, the linear zone of the weak premiss must be included in the linear zone of the other, strong, premiss. If both premisses are strong, their linear zones must be equal. We abstract the four possibilities in the form of an additive compatibility test.

Definition 3.3 (additive compatibility).
Given two forward sequents Γ ; ∆ −→w γ and Γ′ ; ∆′ −→w′ γ′, their additive zones ∆ and ∆′ are additively compatible given their respective weak-flags, which we write as ∆/w ≈ ∆′/w′, if the following hold: For binary multiplicative rules like ⊗R, the conclusion is weak if either of the premisses is weak; thus, the weak-flag of the conclusion is a Boolean-or of those of the premisses. Dually, for binary additive rules, the conclusion is weak if both premisses are weak, so we use a Boolean-and to conjoin the weak-flags. Most unary rules are oblivious to the weakening decoration, which simply survives from the premiss to the conclusion. The exception is !R, for which it is unsound to have a weak conclusion; there is no derivation of • ; ⊤ =⇒ !⊤, for example.
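The upper-bound operation and the four-way compatibility test can be rendered in code as follows. Again this is a sketch: the Counter representation of linear zones and our reading of the four cases of Definition 3.3 are assumptions.

```python
from collections import Counter

def mub(d1, d2):
    """Multiset upper bound: A occurs max(n, m) times in d1 ⊔ d2."""
    return Counter({a: max(d1[a], d2[a]) for a in set(d1) | set(d2)})

def additively_compatible(d1, w1, d2, w2):
    """Additive compatibility test Delta/w ≈ Delta'/w' (our reading):
    - both strong: the linear zones must be equal;
    - one weak:    the weak zone must be included in the strong one;
    - both weak:   always compatible (take the upper bound)."""
    if w1 == 0 and w2 == 0:
        return d1 == d2
    if w1 == 1 and w2 == 0:
        return all(d2[a] >= n for a, n in d1.items())
    if w1 == 0 and w2 == 1:
        return all(d1[a] >= n for a, n in d2.items())
    return True
```

When the test succeeds, the linear zone of the &R conclusion is mub(d1, d2), and its weak-flag is the Boolean-and of w1 and w2, as described above.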
Left rules with weak premisses require some attention. It is tempting to write the "weak" ⊗L rules as: (Note that the irredundancy property requires that at least one of the operands of ⊗ be present in the premiss.) This pair of rules, however, would allow redundant inferences such as: We might as well have consumed both A and B to form the conclusion, and obtained a stronger result. The sensible strategy is: when A and B are both present, they must both be consumed. Otherwise, only apply the rule when one operand is present in a weak sequent. A similar observation can be made about all such rules: there is one weakness-agnostic form, and some possible refined forms to account for weak sequents.

Property 3.4 (irredundancy). All forward rules satisfy the irredundancy property.
The soundness and completeness theorems are both proven by structural induction. Note that the completeness theorem shows that the forward calculus infers a possibly stronger form of the goal sequent.

Theorem 3.5 (soundness). If Γ ; ∆ −→w γ, then every backward sequent s with (Γ ; ∆ −→w γ) ≺ s is derivable.
Proof. By induction on the structure of the forward derivation F :: Γ ; ∆ −→w γ. We have the following cases.
Otherwise, if w = 1, then for any ∆′ ⊇ ∆ and C ⊇ γ: Case. F ends with a normal multiplicative rule, say ⊗R, and note that ∆′2 ⊇ ∆2; then, The case for w1 = 1 is similar. Other multiplicative rules have a similar argument.
Case. F ends with a multiplicative rule with a weak sequent, say: A similar argument can be made for ⊸R (and ⊸L in one case), which have negative existence conditions in the premisses.
Case. F ends with an additive rule, say &R. Write ∆ for ∆1 ⊔ ∆2. If w1 = w2 = 0, then by the side condition (∆1/w1 ≈ ∆2/w2), we know that The case for w1 = 1 and Other additive rules have a similar argument.
Let ∆′ ⊇ ∆ and C ⊇ γ be given; then: This finishes all the cases for the last rule in F.

Theorem 3.6 (completeness). If Γ ; ∆ =⇒ C, then some forward sequent s with s ≺ (Γ ; ∆ =⇒ C) is derivable.
Proof. By induction on the structure of the backward derivation D :: Γ ; ∆ =⇒ C. Otherwise, let ∆′ ⊆ ∆, A and γ ⊆ C be given such that: Case. D ends in a multiplicative rule, say ⊗R. There are four cases to consider. Let Γ1 and Γ2 ⊆ Γ be given such that, for the first case, The second case is for some If γ = • then we're already done; for γ = A, The opposite case is similar. In the last case, for some ∆′1 ⊆ ∆1 and The remaining multiplicative rules, and the exponential rules, are similar.
Case. D ends in an additive rule, say There are four sub-cases: This covers all possible final rules in D.

Focused derivations
Search using the backward calculus can always apply invertible rules eagerly in any order, as there always exists a proof that goes through the premisses of the invertible rule. Andreoli pointed out [1] that a similar and dual feature exists for non-invertible rules also: it is enough for completeness to apply a sequence of non-invertible rules eagerly in one atomic operation, as long as the corresponding connectives are of the same synchronous nature.
In classical linear logic the synchronous or asynchronous nature of a given connective is identical to its polarity; the negative connectives (&, ⊤, ⅋, ⊥, ∀) are asynchronous, and the positive connectives (⊗, 1, ⊕, 0, ∃) are synchronous. The nature of intuitionistic connectives, though, must be derived without an appeal to polarity, which is alien to the constructive and judgmental philosophy underlying the logic. We derive this nature by examining the rules and phases of search: an asynchronous connective is one for which decomposition is complete in the active phase; a synchronous connective is one for which decomposition is complete in the focused phase. This definition happens to coincide with polarities for classical linear logic, but is decidedly external. Atomic propositions and modal operators are somewhat special. Andreoli observed in [1] that it is sufficient to assign arbitrarily a synchronous or asynchronous nature to the atoms as long as duality is preserved; here, the asymmetric nature of the intuitionistic sequents suggests that they should be synchronous, as explained below.
As our backward linear sequent calculus is two-sided, we have left- and right-synchronous and left- and right-asynchronous connectives. For non-atomic propositions a left-synchronous connective is right-asynchronous, and a left-asynchronous connective is right-synchronous; this appears to be universal in well-behaved logics. We define the notations in the following table.
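The resulting classification can be tabulated in code. The assignment below (⊸, &, ⊤ right-asynchronous; ⊗, 1, ⊕, 0, ! right-synchronous; atoms synchronous on both sides) is our reading of the classification just described, with ASCII names for the connectives ('*' for ⊗, '-o' for ⊸, '+' for ⊕).

```python
# Nature of each connective in propositional intuitionistic linear logic,
# per the duality above: left-synchronous = right-asynchronous and vice versa;
# atoms are taken as synchronous on both sides.
RIGHT_ASYNCHRONOUS = {'-o', '&', 'top'}          # decomposed eagerly (active phase)
RIGHT_SYNCHRONOUS  = {'*', '1', '+', '0', '!'}   # decomposed under right focus

def right_synchronous(conn):
    return conn in RIGHT_SYNCHRONOUS or conn == 'atom'

def left_synchronous(conn):
    # Duality for non-atomic propositions; atoms synchronous on both sides.
    return conn in RIGHT_ASYNCHRONOUS or conn == 'atom'
```

A focused prover consults exactly this table to decide whether a connective is decomposed in the active phase or keeps the focus.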
The backward focusing calculus consists of three kinds of sequents: right-focal sequents of the form Γ ; ∆ ≫ A (A under focus), left-focal sequents of the form Γ ; ∆ ; A ≫ Q, and active sequents of the form Γ ; ∆ ; Ω =⇒ C. Γ indicates the unrestricted zone as usual, ∆ contains only left-synchronous propositions, and Ω is an ordered sequence of propositions (of arbitrary nature).
The active phase is entirely deterministic: it starts on the right side of the active sequent, decomposing it until it becomes right-synchronous, i.e., of the form Γ ; ∆ ; Ω =⇒ Q. Then the propositions in Ω are decomposed in order from right to left. The order of Ω is used solely to avoid spurious non-deterministic choices. Eventually the sequent is reduced to the form Γ ; ∆ ; • =⇒ Q; such sequents are called neutral.

A focusing phase is launched from a neutral sequent by selecting a proposition from Γ, ∆ or the right-hand side. This focused proposition is decomposed until the top-level connective becomes asynchronous. Then we enter an active phase for the previously focused proposition.
Two focusing rules require special mention. If the left-focal formula is an atom, then the sequent is initial iff the linear zone ∆ is empty and the right-hand side matches the focused formula; this gives the focused version of the "init" rule. If an atom has right-focus, however, it is not enough to simply check that the left matches the right, as there might be some pending decompositions; consider, e.g., • ; p & q =⇒ q. Focus is therefore blurred in this case, and we correspondingly disallow a right atom in a neutral sequent from gaining focus. The other subtle rule is !R: although ! is right-synchronous, the !R rule cannot maintain focus on the operand. If this were forced, there could be no focused proof of !(A ⊗ B) ⊸ !(B ⊗ A), for example. This is because there is a hidden transition from the truth of !A to the validity of A, which in turn reduces to the truth of A (see [8]). The first is synchronous, the second asynchronous, so the exponential has aspects of both. Girard has made a similar observation that exponentials are composed of one micro-connective to change polarity, and another to model a given behavior [15, page 114]; this observation extends to other modal operators, such as why-not (?) of JILL [8] or the lax modality of CLF [21].
The full set of rules is in fig. 3. Soundness of this calculus is a rather obvious property: forget the distinction between ∆ and Ω, elide the focus and blur rules, and the original backward calculus appears.
We show the completeness of the focusing calculus by interpreting every backward sequent as an active sequent in the focusing calculus, then showing that the backward rules are admissible in the focusing calculus. This proof relies on the admissibility of cut in the focusing calculus. Because a non-atomic left-synchronous proposition is right-asynchronous, a left-focal sequent needs to match only an active sequent. Active sequents should match other active sequents, however. Cuts destroy focus, as they generally require commutations spanning phase boundaries; the products of a cut are therefore active. This is sufficient for our purposes, as we intend to interpret non-focusing sequents as active sequents.
The proof requires two key lemmas. The first notes that permuting the ordered context does not affect provability, as the ordered context does not mirror any deep non-commutativity in the logic. This lemma allows cutting formulas from anywhere inside the ordered context, and also re-ordering the context when needed. The other lemma shows that left-active rules can be applied even if the right-hand side is not synchronous. This lemma is vital for commutative cuts.

Lemma 4.2. The following variants of the left-active rules are admissible
Proof. By lexicographic induction on the given derivations. The argument is lengthy rather than complex, and is an adaptation of similar structural cut-admissibility proofs in, e.g., [8]. Name the three derivations in each case D, E and F respectively. The induction hypothesis can be used whenever: 1. the cut-formula becomes smaller; or 2. the cut-formula remains the same, but D concludes a smaller sequent; or 3. the cut-formula remains the same, but E concludes a smaller sequent.
A sequent is smaller than another if it has fewer elements in the zones of the context; the order of Ω is irrelevant in comparing sizes of sequents. We can successfully do this because Lemma 4.1 guarantees that the precise order of Ω is irrelevant.

Principal cuts.
A principal formula is introduced in both D and E.
Case of ⊗:

and above
Case of 1: Case of ⊕: Case of ⊸: Case of &: Case of !: For commuting cuts, we commute into the available active derivation. There is no need to consider commuting a cut across a focus rule.

Left-commutative cuts.
Where the cut formula is a side-formula on the left.
Case. The cut-formula A is left-asynchronous and in the active zone. For instance, Case. The cut-formula is left-synchronous, and in the passive linear zone. For instance: Right-commutative cuts. Where the cut formula is a side-formula on the right.
Case. D ends in a left-active rule, say: Again, there is an exceptional case for Ω = • and C right-synchronous, but in the general case, for example,

Proof. First show that all ordinary rules are admissible in the focusing system using cut. Proceed by induction on the derivation D :: Γ ; ∆ =⇒ C, splitting cases on the last applied rule, using cut and Lemmas 4.1 and 4.4 as required. The following is a representative case for ⊗R: Let Ω and Ω′ be serialisations of ∆ and ∆′ respectively.

and inversion
Any serialisation of ∆, ∆′ is a permutation of Ω • Ω′.

Forward focusing
We now construct the forward version of the focusing calculus. Intermediate sequents in the eager active and focusing phases must not be stored in the database of facts, which should contain just the neutral sequents at the phase boundaries. We therefore first construct derived rules for neutral sequents that make the intermediate focal and active sequents irrelevant.

Backward derived rules
For any given proposition, we are interested in constructing a derived inference for the proposition corresponding to a single pair of focusing and inverse phases; Andreoli called them bipoles [2]. There are, however, important differences between backward-reasoning bipoles and their forward analogues. As shown in Thm. 3.6, forward sequents generally have fewer components than backward sequents; as forward rules have tight matching criteria, a stronger sequent will often fail to match an inference rule. The intent of this section is to transfer the idea of bipoles to forward derived rules. The details, particularly the proof of completeness (Thm. 5.10), turn out to be surprisingly subtle, so for presentation purposes we recall the backward construction of bipoles.
The essential idea is to interpret a proposition itself as the (derived) rules that it embodies. Every proposition is viewed as a relation between the conclusion of the rule and its premisses at the leaves of the bipole. Both the conclusion and the premisses of this bipole are neutral sequents, which we indicate by means of a double-headed sequent arrow (=⇒=⇒). Given a neutral conclusion Γ ; ∆ =⇒=⇒ Q, one proposition from Γ, ∆ or Q is selected for focus, and the relational interpretation of the conclusion with respect to the selected proposition provides the new (neutral) premisses of the bipole. There are three important classes of these relational interpretations: 1. Right-focal relations for the focus formula A, written foc+⇑(A). Each relation R takes as input the conclusion sequent s, and produces a sequence of premiss sequents

These relations are defined in fig. 4. The focal relations are understood as defining derived rules corresponding to a given proposition. If in a neutral sequent Γ ; ∆ =⇒=⇒ Q we focus on the right, then foc+⇑(Q) would relate this sequent to the possible premisses in the entire bipole.
Similarly for foc−⇑ we have two rules: We begin by constructing the forward versions of the relations in the earlier section, foc+⇓, foc−⇓ and act⇓. These relations take a sequence of forward sequents as input, corresponding to the premisses of the derived rule, and construct the conclusion as their output. The derived rule for positive subformulas is: Similarly, for negative propositions, we have two rules: These relations are defined in fig. 5. For the "match" rule, the notation γ\ξ is defined as γ if ξ = •, and as • if γ = ξ = Q. As a simple example, consider the negative subformula P = p & q ⊸ r & (s ⊗ t), for which we attempt to match the three sequents We have: Thus, the application of the full derived rule for P matched against the sequents s1, s2 and s3 is, precisely, Proof. Structural induction on the definitions of foc+⇓, foc−⇓ and act⇓. For the completeness theorem, we require some additional lemmas.

Table 1: some test problems. NF = non-focusing, F = focusing, Gt = Gandalf (tableaux), Gr = Gandalf (resolution). All measurements are wall-clock times; "−" denotes no successful proof within ≈ ten hours.
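The γ\ξ operation of the "match" rule has a direct reading in code (a sketch only; we represent the empty right-hand side • by None, and the function name is ours):

```python
def match_rhs(gamma, xi):
    """The gamma\\xi operation of the "match" rule: if xi is empty (None),
    the right-hand side gamma survives unchanged; if gamma and xi are the
    same atom Q, the match consumes it, leaving the empty right-hand side."""
    if xi is None:          # xi = •
        return gamma
    if gamma == xi:         # gamma = xi = Q
        return None
    raise ValueError("match fails: right-hand sides disagree")
```

Any other combination makes the premiss fail to match the derived rule, which is signalled here by the exception.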

Lemma 6.2 (neutral subformula property). In any backward focused proof, all neutral sequents consist only of frontier propositions of the goal sequent.
In the preparatory phase for the inverse method, we calculate the frontier propositions of the goal sequent. There is no need to generate initial sequents separately, as the derived rules for negative atoms in the frontier directly give us the necessary initial sequents.
During the search procedure, each rule is applied to sequents selected from the current database; if the rule applies successfully, then we get a new sequent, which is then considered for insertion into the database. It is possible (and common) that a generated sequent is actually subsumed by some sequent already in the database (forward subsumption). It is also possible (though less common) for a new sequent to be stronger than some sequents already in the database. In this case, the old, weaker sequents are no longer considered for new derivations (backward subsumption). The general design of the main loop of the prover and the argument for its completeness are fairly standard [12,20]; many optimisations are possible, but they are outside the scope of this paper.
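The main loop just described, with forward and backward subsumption, follows the familiar given-clause shape. The skeleton below is our sketch, not the authors' implementation; it is parametric in the rule set and the subsumption test, and elides indexing and fairness refinements.

```python
def saturate(initial, rules, subsumes, max_steps=10000):
    """Skeletal main loop of an inverse-method prover.

    `initial` is the list of initial sequents, `rules` a list of functions
    mapping (new sequent, database) to derived sequents, and `subsumes(s, t)`
    a test that s subsumes t. Returns the saturated database (or the database
    after max_steps activations).
    """
    active, pending = [], list(initial)
    for _ in range(max_steps):
        if not pending:
            return active                        # database is saturated
        s = pending.pop(0)
        if any(subsumes(t, s) for t in active):  # forward subsumption
            continue
        # backward subsumption: drop strictly weaker database entries
        active = [t for t in active if not subsumes(s, t)]
        active.append(s)
        for rule in rules:
            pending.extend(rule(s, active))
    return active
```

Instantiated with the forward derived rules and the subsumption relation of Definition 3.1, this loop is complete for reaching the goal sequent (or a sequent subsuming it).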

Embedding non-linear logics

Intuitionistic logic
There have been many proposed embeddings of ordinary (non-linear) logics into linear logic using the exponential operator [14,8] that translate sub-formulas uniformly. These translations do not preserve the focusing properties of the source logic, as the exponentials can blur the focus too early. It is possible, though, to give a focusing-aware translation that is faithful to the focusing system of the source logic. As an example, consider the basic intuitionistic propositional logic with connectives {∧, t, ∨, f, ⊃}. The focusing system for this logic treats ∧ as both synchronous and asynchronous. The rules are as follows: We intend to translate signed intuitionistic formulas to signed linear formulas in a way that preserves the focusing structure of proofs. The translation is modal with two phases: A (active) and F (focal). A positive focal (and negative active) ∧ is translated as ⊗, and the duals as &. For every use of the act rule, the corresponding translation phase affixes an exponential; the phase transitions in the image of the translation exactly mirror those in the source.
The reverse translation, written (−)°, is trivial: simply erase all !s, rewrite & and ⊗ as ∧, ⊸ as ⊃, and ⊕ as ∨. The faithfulness of the translations can be established as a pair of soundness and completeness theorems, provable by simple structural induction. Theorem 7.1. Soundness: Proof. Soundness is immediate because the linear sequent calculus is simply a refinement of the intuitionistic calculus. Completeness is established by straightforward structural induction on the given intuitionistic derivations.
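The reverse translation is short enough to state in code. This is a sketch under the tuple encoding of formulas ('*' for ⊗, '-o' for ⊸, '+' for ⊕, atoms as strings), which is our assumption, not the paper's notation.

```python
def erase(f):
    """The reverse translation (-)°: erase every !, rewrite & and ⊗ ('*')
    as ∧ ('and'), ⊸ ('-o') as ⊃ ('imp'), and ⊕ ('+') as ∨ ('or')."""
    if isinstance(f, str):
        return f                         # atoms are unchanged
    if f[0] == '!':
        return erase(f[1])               # the exponential is erased
    op = {'&': 'and', '*': 'and', '-o': 'imp', '+': 'or'}[f[0]]
    return (op, *(erase(a) for a in f[1:]))
```

Note that the translation is many-to-one on purpose: both & and ⊗ collapse back to ∧, reflecting the two roles of intuitionistic conjunction in the focusing system.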
An important feature of this translation is that only negative atoms and implications are !-affixed; this mirrors a similar observation by Dyckhoff that ordinary intuitionistic logic has a contraction-free sequent calculus that only needs to duplicate negative atoms and implications [13].
Proof (sketch). Consequence of Thm. 5.10. It is clear, by a simple examination of the foc and act relations, that for every D ∈ Ψ such that D = G1 ⊃ · · · ⊃ Gn, the derived rule for (D)h is of the form: where each Γi and Γ ⊆ (Ψ)h. As the initial sequents have empty linear zones (all negative frontier propositions are in (Ψ)h), the linear zones are empty in all derived sequents, and the similarity to hyperD is obvious.

Some experimental results
We have implemented an expanded version of the forward focusing calculus as a certifying inverse method prover for intuitionistic linear logic, including the missing connectives ⊕, 0, and the lax modality. Table 1 contains a running-time comparison of the focusing prover (F) against a non-focusing version (NF) of the prover (directly implementing the calculus of sec. 3), and Tammet's Gandalf "nonclassical" distribution, which includes a pair of (non-certifying) provers for classical linear logic, one (Gr) using a refinement of Mints' resolution system for classical linear logic [19,20], and the other (Gt) using a backward tableaux-based strategy. Neither of these provers incorporates focusing. The test problems ranged from simple stateful encodings such as blocks-world or change machines, to more complex problems such as encodings of affine logic problems, and translations of various quantified Boolean formulas using the algorithm in [18]. Focusing was faster in every case, with an average speedup of about three orders of magnitude over the non-focusing version.

Conclusion
We have presented the design of a focused inverse method theorem prover for propositional intuitionistic linear logic and have demonstrated through experimental results that focusing represents a highly significant improvement. Though elided here, the results persist in the presence of a lax modality [6], and extend to the first-order case as shown by the authors in a related paper [10], which also contains many more details on the implementation and a more thorough empirical evaluation.
Our methods derived from focusing can be applied directly and more easily to classical linear logic and (non-linear) intuitionistic logic, also yielding focused inverse method provers. While we do not have an empirical evaluation of such provers, the reduction in the complexity of the search space is significant. We therefore believe that focusing is a nearly universal improvement to the inverse method and should be applied as a matter of course, possibly excepting only (non-linear) classical logic.
In future work we plan to add higher-order and linear terms in order to obtain a theorem prover for all of CLF [6]. The main obstacles will be to develop feasible algorithms for unification and to integrate higher-order equational constraints. We are also interested in exploring if model-checking techniques could help to characterize the shape of the linear zone that could arise in a backward proof in order to further restrict forward inferences.
Finally, we plan a more detailed analysis of connections with a bottom-up logic programming interpreter for the LO fragment of classical linear logic [4]. This fragment, which is in fact affine, has the property that the unrestricted context remains constant throughout a derivation, and incorporates focusing at least partially via a back-chaining rule. It seems plausible that our prover might simulate their interpreter when LO specifications are appropriately translated into intuitionistic linear logic, similar to the translation of classical Horn clauses.
D and E (D smaller). Focus cuts. Where the last rule in D gives focus to the cut-formula. Case. D = D′ :: Γ ; ∆ ; P Q Γ ; ∆, P ; • =⇒ Q and E :: Γ ; ∆′ ; Ω =⇒ P. Γ ; ∆, ∆′ ; Ω =⇒ Q by cut on D′ and E (D′ smaller). Case. D = D′ :: Γ, A ; ∆ ; A Q Γ, A ; ∆ ; • =⇒ Q and E :: Γ ; ∆′ ; Ω =⇒ A. Γ, A ; ∆′ ; Ω =⇒ A by weakening on E; Γ, A ; ∆, ∆′ ; Ω =⇒ Q by cut on D′ and the above (D′ smaller). Case. D = D′ :: (b) D ends with a right-active rule, say: D = D1 :: Γ, A ; ∆ ; Ω =⇒ B and D2 :: Γ, A ; ∆ ; Ω =⇒ C concluding Γ, A ; ∆ ; Ω =⇒ B & C, with E :: Γ ; • ; • =⇒ A. Then Γ ; ∆ ; Ω =⇒ B by cut on D1 and E (D1 smaller); Γ ; ∆ ; Ω =⇒ C by cut on D2 and E (D2 smaller); and Γ ; ∆ ; Ω =⇒ B & C by &R. (c) D ends in a right-focal rule, say: by cut on D′ and E (D′ smaller). The rest is similar to the previous case. Subcase. Any case where A is in the unrestricted zone in the conclusion of E is impossible, as there are some linear resources in the conclusion of D. Case. D :: Γ ; ∆ ; • =⇒ A and A is right-synchronous. (If it is right-asynchronous, then it is the principal formula, not a side-formula.) The only complex case is if the last rule in D is a left-focal rule: D = D′ :: Γ ; ∆ ; P A Γ ; ∆, P ; • =⇒ A. In this case, the strategy is to permute the cut upwards in E until we are faced with cutting D′ with the derivation Γ ; ∆ ; A =⇒ Q or Γ ; ∆, A ; • =⇒ Q; in each of these cases the cut would preserve focus on P, using case 5 (a) or (b) respectively. Subcase. E :: Γ ; ∆ ; Ω • A =⇒ C. Then, by permutation, we have E′ :: Γ ; ∆ ; A • Ω =⇒ C. The cut can therefore permute into E in all instances except for Ω = • and C being right-synchronous. For example,

The Horn fragment is given by the grammar:

(goals) G ::= p | G1 ∧ G2 | t
(clauses) D ::= p | G ⊃ D | D1 ∧ D2 | t
(theories) Ψ ::= • | Ψ, D

Definition 7.2 (hyperresolution strategy). Let D̂ represent the (curried) clausal form of D. The hyperresolution strategy for the Horn-sequent Ψ =⇒h G is a proof of G starting from assumptions of the form D̂ for every D ∈ Ψ, and rules:

G1 G2 · · · Gn
―――――――――――― hyperD
G

where G1 ⊃ · · · ⊃ Gn ⊃ G is a clausal form of some D ∈ Ψ.

Definition 7.3 (translation). The translation (−)h of formulas in the Horn fragment to linear logic is as follows:

As A is left-synchronous, it is either an atom or right-asynchronous. In either case, the last rule in E must have been a blur rule (rb or rb*, respectively).
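Propositional hyperresolution on Horn theories, as in Definition 7.2, can be sketched as a simple forward-chaining loop. This rendering is ours: clauses are (body, head) pairs standing for G1 ⊃ · · · ⊃ Gn ⊃ G, with an empty body for unit clauses.

```python
def hyperresolve(theory, goal):
    """Propositional hyperresolution for Horn theories.

    A hyper step fires a clause only when every atom of its body has
    already been derived, so no intermediate non-unit conclusions are
    ever generated; iteration continues to saturation.
    """
    derived = set()
    changed = True
    while changed:
        changed = False
        for body, head in theory:
            if head not in derived and all(g in derived for g in body):
                derived.add(head)
                changed = True
    return goal in derived
```

Under the translation (−)h, this is exactly the behavior the focused inverse method exhibits on the image of a Horn theory: every derived sequent is a unit fact, which is the content of the agreement claimed in sec. 1.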
Γ ; ∆, ∆′ ; Ω =⇒ B & C by &R. Case. The cut-formula A is in the unrestricted context; characteristic examples: (a) D ends with a left-active rule, say: