IntSat: integer linear programming by conflict-driven constraint learning

State-of-the-art SAT solvers are nowadays able to handle huge real-world instances. The key to this success is the Conflict-Driven Clause-Learning (CDCL) scheme, which encompasses a number of techniques that exploit the conflicts that are encountered during the search for a solution. In this article, we extend these techniques to Integer Linear Programming (ILP), where variables may take general integer values instead of purely binary ones, constraints are more expressive than just propositional clauses, and there may be an objective function to optimize. We explain how these methods can be implemented efficiently and discuss possible improvements. Our work is backed by a basic implementation showing that, even at this far less mature stage, our techniques are already a useful complement to the state of the art in ILP.


Introduction
Since the early days of computer science, propositional logic has been recognised as one of its cornerstones. A fundamental result in the theory of computing is the proof by Cook that SAT, that is, the problem of deciding whether a propositional formula is satisfiable or not, is NP-complete [20]. It was soon realised that, in consonance with this fact, a wide range of combinatorial problems could be expressed in SAT [42].
Hence, due to its potential practical implications, extensive research has since been carried out on how SAT can be solved in an automated and efficient way [11]. As a result of this work, particularly intensive in the last two decades [9, 27, 32, 34, 37-39, 45, 56, 61], SAT solvers now routinely handle formulas coming from real-world applications with hundreds of thousands of variables and millions of clauses.
State-of-the-art SAT solvers are essentially based on the Davis-Putnam-Logemann-Loveland (DPLL) procedure [22, 23]. In a nutshell, DPLL is a backtracking algorithm that searches for a (feasible) solution by intelligently traversing the search space. At each step a decision is made: a variable is selected for branching and is assigned a value, either 0 or 1. Then the consequences of that decision are propagated, and variables that are forced to a value are detected. Each time a falsified clause (i.e., a conflict) is identified, backtracking is executed. Backtracking consists in undoing all assignments up to the last decision and forcing the branching variable to take the other value. When both values of the branching variable have already been tried without success, the previous decision is backtracked. If the decision is the first one, and therefore there is no previous decision, then the formula can be declared unsatisfiable, i.e., infeasible. This simple description is, however, far from the current implementations of SAT solvers. What accounts for their success is the so-called Conflict-Driven Clause-Learning (CDCL) scheme, which enhances DPLL with a number of techniques:
• conflict analysis and backjumping (i.e., non-chronological backtracking), which improves (chronological) backtracking [45];
• learning (that is, addition) of new clauses generated from conflicts [25];
• variable decision heuristics that select the most active variables in recent conflicts, like the VSIDS heuristic [47];
• value decision heuristics that select promising values for the chosen decision variable, such as the last phase strategy [54];
• data structures such as the 2-watched-literal scheme [47] for efficiently identifying propagations and conflicts;
• restarts [33];
• clause cleanups that periodically delete the least useful learnt clauses, e.g. based on their activity in conflicts [32].
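The plain DPLL procedure described above fits in a few lines. The following is an illustrative Python sketch of ours (not any solver's actual implementation, and with none of the CDCL enhancements listed above), where literals are encoded as integers and the negation x̄ of x as -x:

```python
def unit_propagate(clauses, assignment):
    """Repeatedly assign literals forced by unit clauses; False on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue  # clause already true
            unassigned = [lit for lit in clause if -lit not in assignment]
            if not unassigned:
                return False          # all literals false: conflict
            if len(unassigned) == 1:  # unit clause: its literal is forced
                assignment.add(unassigned[0])
                changed = True
    return True

def dpll(clauses, assignment=None):
    """Return a satisfying set of literals, or None if unsatisfiable."""
    assignment = set(assignment or ())
    if not unit_propagate(clauses, assignment):
        return None  # conflict under the current decisions
    variables = {abs(l) for c in clauses for l in c}
    undefined = [v for v in variables
                 if v not in assignment and -v not in assignment]
    if not undefined:
        return assignment  # every clause is true: a solution
    v = min(undefined)     # naive branching heuristic
    for lit in (v, -v):    # try v true, then v false (chronological backtracking)
        result = dpll(clauses, assignment | {lit})
        if result is not None:
            return result
    return None  # both values failed
```

For instance, dpll([frozenset({1}), frozenset({-1})]) returns None, reflecting that x1 ∧ x̄1 is unsatisfiable.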
Since the problems of Integer Linear Programming (ILP) and SAT are both NP-complete, each can be reduced to the other. Thus a natural question is how solvers from the two areas compare. It turns out that, in spite of the maturity of ILP solving technology and its astonishing achievements [12], for problem instances of a more combinatorial (as opposed to numerical) sort, SAT solvers with an encoding into propositional clauses can outperform even the best commercial ILP solvers run on a (far more compact) ILP formulation [4, 17].
The different nature of the algorithms underlying ILP solvers (based on branch-and-cut and the simplex method) and SAT solvers motivates this work, in which we aim at pushing the techniques of CDCL beyond SAT, while handling ILP constraints natively at all levels.
To do so, the following issues need to be addressed:
• Variables are no longer binary, and may take general integer values. Moreover, the domains of values may be bounded (the variable can only take a value within an interval) or unbounded (the variable may take any value in Z). Should we do case analysis by taking concrete values of variables, or based on lower and upper bounds? Also, the notion of decision has to be determined: does a decision fix a variable to a value, or does it split its domain? The value decision heuristics have to be defined as well. Finally, unbounded domains pose a problem for the termination of the search algorithm.
• Constraints are no longer just clauses, and can be general linear inequalities. Hence the propagation mechanism (be it of variable values or of bounds) has to be redefined, as well as the algorithms and data structures for efficiently detecting when a propagation can be triggered or a conflict has arisen. Unlike in SAT, attention has to be paid to arithmetic and numerical problems, so as to ensure the soundness of the procedures. Most importantly, conflict analysis has to be generalised in such a way that backjumping and learning are possible.
• Problems are no longer purely feasibility checks, and may require optimising an objective function. In addition to enabling the search algorithm to optimise, for practical reasons the information provided by the objective function has to be integrated into the decision heuristics.
Building on top of the early ideas in [49], here we present the IntSat method, a family of new conflict-driven constraint-learning algorithms for ILP. We illustrate the method with two algorithms, which differ in the way the aforementioned issues on extending conflict analysis and learning are resolved. We provide detailed explanations of how these algorithms can be implemented efficiently, and also report an extensive up-to-date experimental analysis comparing a basic implementation of IntSat against the best commercial (simplex-based) ILP solvers. The results show that, even at this far less mature stage, our techniques are already a useful complement to the state of the art in ILP solving, finding (good) solutions faster in a significant number of instances, especially in those of a more combinatorial (as opposed to numerical) nature.
This paper is structured as follows. Preliminary background on SAT and ILP is reviewed in Section 2. Section 3 introduces our two IntSat algorithms that generalise CDCL from SAT to ILP. After presenting the main hindrance in this generalisation (Section 3.1) and reviewing basic properties of propagation in ILP (Section 3.2), the common part of these algorithms is exposed (Section 3.3). Then their differences are described in detail (Sections 3.4 and 3.5, respectively). A discussion on extensions (Section 3.6) concludes Section 3. Section 4 is devoted to implementation issues, while in Section 5 the results of an experimental evaluation are reported. In Section 6, ideas for future research are outlined. Finally, Section 7 completes this article with an account of related work and conclusions.

Propositional Satisfiability
Let X be a finite set of propositional variables. If x ∈ X, then x and x̄ are literals of X. The negation of a literal l, written l̄, denotes x̄ if l is x, and x if l is x̄. A clause is a disjunction of literals l1 ∨ ... ∨ ln. A (CNF) formula is a conjunction of one or more clauses C1 ∧ ... ∧ Cn. When it leads to no ambiguities, we will sometimes consider a clause as the set of its literals, and a formula as the set of its clauses.
A (partial truth) assignment A is a set of literals such that {x, x̄} ⊆ A for no x. A literal l is true in A if l ∈ A, is false in A if l̄ ∈ A, and is undefined in A otherwise. A clause C is true in A if at least one of its literals is true in A, is false (or a conflict) if all of its literals are false in A, and is undefined otherwise. A formula F is true in A if all of its clauses are true in A. In that case, we say that A is a solution to F, and that F is satisfied by A. The problem of SAT consists in deciding, given a formula F, whether F is satisfiable, that is, whether there exists a solution to F. The systems that solve SAT are called SAT solvers.
The core of a Conflict-Driven Clause-Learning (CDCL) SAT solver is described by the following algorithm, where A is seen as an (initially empty) stack:

1. Propagate: while possible and no conflict appears, if, for some clause l ∨ C, C is false in A and l is undefined, push l onto A, associating to l the reason clause C.

2. if there is no conflict: if all variables are defined in A, output 'solution A' and halt;
else Decide: push an undefined literal l, marked as a decision, and go to step 1.

3. if there is a conflict and A contains no decisions, output 'unsatisfiable' and halt.

4. if there is a conflict and A contains some decision, use a clause data structure C, the conflicting clause. Initially, let C be any conflicting clause.

4.1. Conflict analysis: let l be the literal of C whose negation is topmost in A, and let D be the reason clause of l̄. Replace C by (C \ {l}) ∨ D. Repeat this until there is only one literal l_top in C whose negation is, or is above, A's topmost decision.

4.2. Backjump: pop literals from A until either there are no decisions in A or, for some l in C with l ≠ l_top, there are no decisions above l̄ in A.

4.3. Learn: add the final C as a new clause, and go to step 1 (where C propagates l_top).
The literal l_top at step 4.1 is called the first unique implication point (1UIP), whence the conflict analysis above is said to follow a 1UIP scheme [61].
Implicitly in this description of the algorithm, conflict analysis uses resolution [57] to inspect back the cause of a conflict. Given a variable x and two clauses of the form x ∨ A and x̄ ∨ B (the premises), the resolution rule infers a new clause A ∨ B (the resolvent). Back in the algorithm, when the current conflicting clause C is replaced by (C \ {l}) ∨ D, where l is the literal of C whose negation is topmost in A and D is the reason clause of l̄, resolution is in fact being applied between C (viewed as l ∨ (C \ {l})) and l̄ ∨ D.
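As a set operation, a resolution step is essentially one line of code. The helper below (an illustrative sketch of ours) performs the replacement of C by (C \ {l}) ∨ D described above, with literals as integers and ¬x written -x:

```python
def resolve(pivot, c1, c2):
    """Resolve clauses c1 (containing pivot) and c2 (containing -pivot),
    eliminating the pivot variable and joining the remaining literals."""
    assert pivot in c1 and -pivot in c2
    return (frozenset(c1) - {pivot}) | (frozenset(c2) - {-pivot})
```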
Example 2.1. Let us assume we have the clauses x̄1 ∨ x2, x̄3 ∨ x4, x̄5 ∨ x6, x̄5 ∨ x̄6 ∨ x7 and x̄2 ∨ x̄5 ∨ x̄7. Since there is nothing to propagate at the beginning, we decide to make x1 true and push it onto the stack of the assignment. Now, due to x̄1 ∨ x2, literal x2 is propagated and pushed onto the stack with reason clause x̄1. Again there is nothing to propagate, so we decide to make x3 true. Due to x̄3 ∨ x4, literal x4 is propagated and pushed onto the stack with reason clause x̄3. Yet again there is nothing else to propagate, so we decide to make x5 true. Due to x̄5 ∨ x6, literal x6 is propagated and pushed onto the stack with reason clause x̄5. And in turn, due to x̄5 ∨ x̄6 ∨ x7, literal x7 is propagated and pushed onto the stack with reason clause x̄5 ∨ x̄6. Now clause x̄2 ∨ x̄5 ∨ x̄7 has become false under the current assignment (x1^d, x2, x3^d, x4, x5^d, x6, x7), where decisions are marked with a superscript d.
• Conflict analysis: The conflicting clause C is initially x̄2 ∨ x̄5 ∨ x̄7. In the first iteration of conflict analysis, we replace x̄7 in C by the reason clause x̄5 ∨ x̄6 of x7, which yields x̄2 ∨ x̄5 ∨ x̄6 as the new C. In the second iteration we similarly replace x̄6 in C, now x̄2 ∨ x̄5 ∨ x̄6, with the reason clause x̄5 of x6, which gives x̄2 ∨ x̄5. Alternatively, since, as explained above, for each propagated literal l with reason clause D we are applying resolution between l ∨ D and C, the sequence of these two steps can be viewed as a resolution derivation that eliminates first x7 and then x6, and whose successive conflicting clauses are x̄2 ∨ x̄5 ∨ x̄7, x̄2 ∨ x̄5 ∨ x̄6 and x̄2 ∨ x̄5. Note that each of these clauses is false under the current assignment, as ensured by the invariant. Now there is a single literal l_top in C (namely x̄5) whose negation is, or is above, the stack's topmost decision (in this case, the decision x5). Therefore we backjump.
• Backjump: We pop literals from the stack until either (i) there are no decisions in the stack, or (ii) for some l in C with l ≠ l_top, there are no decisions above l̄ in the stack. In this case (ii) applies with literal x̄2 as l, and so we pop the literals from x7 down to x3, leaving the stack as (x1^d, x2). Note in particular that the decision x3 and its propagation x4, which are irrelevant to the conflict, are jumped over. The intuition is that, if we had had the clause x̄2 ∨ x̄5 in the clause database at the time we decided x3, we would have propagated x̄5 before taking that decision. Precisely, the following Learn step will allow making this propagation.
• Learn: The final C, namely x̄2 ∨ x̄5, is learned and added to the clause database. Back in step 1, we can finally propagate x̄5 with the learned clause, leading to the assignment (x1^d, x2, x̄5). Now we decide x4, and all clauses are satisfied. The remaining variables can now be decided arbitrarily, and once all of them are defined, since there is no conflict, the algorithm terminates with a solution.
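The conflict analysis of Example 2.1 can be replayed mechanically. The sketch below is our own illustrative code (not the article's implementation): the trail is a list of (literal, reason-clause) pairs, with reason None marking decisions, and the loop is the 1UIP iteration of step 4.1:

```python
def analyze_1uip(trail, conflict):
    """Repeatedly replace the literal of C falsified topmost in the trail by
    its reason clause, until exactly one literal of C was falsified at or
    above the last decision (the 1UIP)."""
    pos = {lit: i for i, (lit, _) in enumerate(trail)}  # trail positions
    reasons = dict(trail)                               # trail literal -> reason
    last_dec = max(i for i, (_, r) in enumerate(trail) if r is None)

    def at_or_above(l):  # the negation of l sits at or above the last decision
        return pos[-l] >= last_dec

    C = frozenset(conflict)
    while sum(map(at_or_above, C)) > 1:
        l = max(C, key=lambda m: pos[-m])  # literal of C falsified topmost
        C = (C - {l}) | reasons[-l]        # resolve C with the clause -l ∨ reason
    return C
```

On Example 2.1's trail it returns the learned clause {-2, -5}, i.e. x̄2 ∨ x̄5.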

Integer Linear Programming
Let X be a finite set of integer variables {x1, ..., xn}. An (integer linear) constraint over X is an expression of the form a1x1 + ... + anxn ≤ a0 where, for all i in 0...n, the coefficients ai are integers (some of which may be zero). Each of the terms aixi is called a monomial of the constraint. In what follows, variables are always denoted by (possibly sub-indexed or primed) lowercase x, y, z, and coefficients by a, b, c, respectively.
A constraint a1x1 + ... + anxn ≤ a0 is said to be normalised if gcd(a1, ..., an) = 1. For any constraint a1x1 + ... + anxn ≤ a0, the constraint (a1/d)x1 + ... + (an/d)xn ≤ ⌊a0/d⌋, where d = gcd(a1, ..., an), is an equivalent normalised constraint. Hence in what follows we assume all constraints to be eagerly normalised.
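Normalisation is a one-step computation. A minimal sketch of ours in Python (assuming at least one non-zero coefficient), where integer floor division implements the rounding:

```python
from math import gcd

def normalise(coeffs, rhs):
    """Normalise a1*x1 + ... + an*xn <= a0: divide by d = gcd(a1, ..., an)
    and round the right-hand side down, which is sound because the divided
    left-hand side only takes integer values."""
    d = gcd(*(abs(a) for a in coeffs if a != 0))
    # Python's // floors, also for a negative right-hand side
    return [a // d for a in coeffs], rhs // d
```

For instance, normalise([22, 8], 15) yields ([11, 4], 7), i.e. 22y + 8z ≤ 15 becomes 11y + 4z ≤ 7.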
An integer linear program (ILP) over X is a set S of integer linear constraints over X, called the constraints of the ILP, together with a linear expression of the form c1x1 + ... + cnxn, called the objective function. A solution to a set of constraints S over X (and, by extension, to an ILP whose set of constraints is S) is a function sol : X → Z that satisfies every constraint in S. If a solution to S exists, then S (and, again by extension, any ILP whose set of constraints is S) is feasible; otherwise it is said to be infeasible. Without loss of generality, we will assume that the objective function in an ILP is to be minimised: an optimal solution to an ILP with constraints S and objective function c is a solution sol to S such that c(sol) ≤ c(sol′) for any solution sol′ to S. The problem of Integer Linear Programming (ILP) consists in finding an optimal solution to a given integer linear program. When there is no ambiguity, we will use ILP both for 'integer linear program' and for 'integer linear programming'.
A bound is a one-variable constraint a1x ≤ a0. Any bound can equivalently be written either as a lower bound a ≤ x or as an upper bound x ≤ a. A variable x is binary in an ILP if its set of constraints S contains the lower bound 0 ≤ x and the upper bound x ≤ 1, so that effectively x can only take value 0 or 1.
Given a set of constraints S and a constraint C, we write S |= C when any solution to S is also a solution to C. The definition of |= is lifted to sets of constraints on the right-hand side in the natural way.
From constraints C1 of the form a1x1 + ... + anxn ≤ a0 and C2 of the form b1x1 + ... + bnxn ≤ b0 (called the premises), and natural numbers c and d, the cut rule derives a new constraint C3 of the form c1x1 + ... + cnxn ≤ c0 (called the cut), where ci = c·ai + d·bi for i in 0...n. The cut rule is correct in the sense that it only infers consequences of the premises: {C1, C2} |= C3. If for some i > 0 we have ci = 0, then we say that xi is eliminated in this cut. Note that if ai·bi < 0, then one can always choose c and d such that xi is eliminated. See [18, 35, 58] for further discussions and references about Chvátal-Gomory cuts and their applications to solving ILPs.
Example 2.2. Let us see an example of application of the cut rule to the constraints 4x + 4y + 2z ≤ 3 and −10x + y − z ≤ 0. By multiplying the former by 5 and the latter by 2 and adding them up, we obtain the cut 22y + 8z ≤ 15, in which variable x has been eliminated. Note that the resulting constraint 22y + 8z ≤ 15 can be normalised by dividing by gcd(22, 8) = 2, resulting in 11y + 4z ≤ 15/2 and, by rounding down, 11y + 4z ≤ 7.
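The cut of Example 2.2 can be reproduced with an equally small helper (our own sketch; a constraint is represented here as a coefficient list plus a right-hand side):

```python
def cut(c, d, C1, C2):
    """The cut rule: from a·x <= a0 and b·x <= b0 and natural multipliers
    c and d, derive the constraint c*C1 + d*C2, coefficient by coefficient."""
    (a, a0), (b, b0) = C1, C2
    return [c * ai + d * bi for ai, bi in zip(a, b)], c * a0 + d * b0
```

Here cut(5, 2, ([4, 4, 2], 3), ([-10, 1, -1], 0)) returns ([0, 22, 8], 15): x is eliminated, and normalising by gcd(22, 8) = 2 then gives ([0, 11, 4], 7) as in the example.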

IntSat
It is well known that the problem of SAT can be viewed as a particular case of ILP in which there is no objective function, all variables are binary and each constraint is of the form x1 + ... + xm − y1 − ... − yn > −n, which is an equivalent formulation of a clause x1 ∨ ... ∨ xm ∨ ȳ1 ∨ ... ∨ ȳn using that ȳj = 1 − yj. This observation has been used in previous work as a starting point for the generalisation of CDCL to ILP [41]. However, a major obstacle that researchers have encountered in this extension is the so-called rounding problem, to which we devote Section 3.1.

The Rounding Problem
In order to describe the rounding problem, first of all let us review how the generalisation of CDCL from SAT to ILP has typically been attempted.
In ILP, variables are no longer restricted to be just binary and may be general integers. As a consequence, the notion of literal of SAT needs to be adapted. In the ILP context, bounds may play the role of literals. Namely, lower bounds of the form a ≤ x correspond to positive literals, whereas upper bounds x ≤ a correspond to negative literals. Note that, if x is a binary variable, and therefore the bounds 0 ≤ x and x ≤ 1 must always hold, then 1 ≤ x forces x to take value 1 (true), while x ≤ 0 forces x to take value 0 (false).
Moreover, in ILP constraints can be arbitrary integer linear constraints instead of just clauses. Therefore the propagation mechanism has to be extended. In this setting, (Boolean) propagation can be replaced with bound propagation. For the sake of simplicity, let us define it by example, and leave a formal presentation for Section 3.2.
Example 3.1. Let us see an example of bound propagation. From the lower bound 1 ≤ x, the upper bound y ≤ 2, and the constraint x − 2y + 5z ≤ 5, we infer that 1 − 4 + 5z ≤ 5, as y ≤ 2 implies −4 ≤ −2y. Simplifying, we obtain 5z ≤ 8, hence z ≤ 8/5 and, by rounding down, z ≤ 1.

Now that it has been outlined how to propagate bounds, in order to generalise the CDCL algorithm it remains to be seen how to trace back the propagations when a conflict occurs. To that end, one can see the propositional resolution rule used in SAT as a means to eliminate a variable from a conjunction of two clauses. When considering integer linear constraints, a natural candidate for playing the same role is the cut rule. However, as the following example illustrates, mimicking the conflict analysis of CDCL SAT solvers does not yield the expected result.
Example 3.2. Assume we have two constraints x + y + 2z ≤ 2 and x + y − 2z ≤ 0. Let us first take the decision 0 ≤ x, which propagates nothing. Then we take another decision 1 ≤ y, which due to x + y + 2z ≤ 2 propagates z ≤ 0 (since 0 + 1 + 2z ≤ 2, we get 2z ≤ 1, that is, z ≤ 1/2, and by rounding down finally z ≤ 0). Then x + y − 2z ≤ 0 becomes a conflict: it is false in the current assignment A = (0 ≤ x, 1 ≤ y, z ≤ 0), as 0 ≤ x, 1 ≤ y and z ≤ 0 imply that 1 ≤ x + y − 2z. Now let us attempt a straightforward generalisation of conflict analysis: as z ≤ 0 is the topmost (last propagated) bound, we apply a cut inference eliminating z between x + y + 2z ≤ 2, the reason constraint of the propagation, and x + y − 2z ≤ 0, which is now a conflicting constraint playing the analogous role of a conflicting clause. By adding these two constraints we obtain a new constraint 2x + 2y ≤ 2, or equivalently, x + y ≤ 1. Then the conflict analysis is over, because there is only one bound in A that is relevant for the conflicting constraint and which is at, or above, the last decision, namely 1 ≤ y. But unfortunately at this point the conflicting constraint, that is, x + y ≤ 1, is no longer false in A, breaking (what should be) the invariant. Hence, from 0 ≤ x it only propagates y ≤ 1, which is weaker than y ≤ 0, the negation of the previous decision 1 ≤ y, which was expected to be reversed now. As a consequence, the constraint that should be learnt is too weak to justify a backjump. This problem is due to the rounding that takes place when propagating z ≤ 0.
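The broken invariant can be checked numerically. The sketch below (helper name and representation are ours) computes the minimum value of a constraint's left-hand side under the bounds, as used informally in the example:

```python
def min_val(coeffs, lb, ub):
    """Minimum of sum a_i*x_i when variables with a positive coefficient sit
    at their lower bound and those with a negative one at their upper bound."""
    return sum(a * (lb[v] if a > 0 else ub[v]) for v, a in coeffs.items())

lb, ub = {'x': 0, 'y': 1}, {'z': 0}  # the assignment A = (0 <= x, 1 <= y, z <= 0)

# the conflicting constraint x + y - 2z <= 0 is false in A: its minimum is 1 > 0
assert min_val({'x': 1, 'y': 1, 'z': -2}, lb, ub) == 1
# but the cut x + y <= 1 is NOT false in A: its minimum, 1, does not exceed 1
assert min_val({'x': 1, 'y': 1}, lb, ub) == 1
```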
The rounding problem illustrated in Example 3.2 was addressed in an ingenious way by Jovanović and de Moura in their CutSat procedure [41]. In this work, a decision can only make a variable equal to its current lower or upper bound; i.e., if x is the decision variable and its current domain is determined by the lower bound l ≤ x and the upper bound x ≤ u, then the next decision has to be either x ≤ l (thus fixing the value of x to l) or u ≤ x (fixing the value of x to u). Although, on the one hand, this restrains the decision heuristics significantly, on the other it is necessary so as to compute, at each conflict caused by bound propagations with rounding, tightly propagating constraints that also explain the same propagations but without rounding. Conflict analysis can then be performed using these tightly propagating constraints only, and thanks to that, the resulting constraint is guaranteed to justify a backjump, as in the SAT case. Nonetheless, there is yet another toll to be paid: the termination condition of the conflict analysis in [41] requires eliminating the variables of the propagated bounds until a decision bound allows one to stop. In SAT, the analogous condition would require that, unlike in the 1UIP learning scheme, the literal l_top in the CDCL algorithm be a decision. The resulting learning scheme is known as the AllUIP scheme [61], and is well known to perform very poorly, significantly worse than the 1UIP one.
In what follows, we will present IntSat, a new family of algorithms that extend the conflict-driven clause-learning scheme of SAT to a conflict-driven constraint-learning scheme in ILP, with an alternative solution to the rounding problem. However, unlike [41], these algorithms admit arbitrary new bounds as decisions, and guide the search exactly as with the 1UIP approach in SAT solving.

Bound Propagation
Here we formally define bound propagation, which is a key subprocedure in the IntSat CDCL algorithms that will be introduced later on.
Let A be a set of bounds. We call two bounds a ≤ x and x ≤ a′ contradictory if a > a′. A bound a ≤ x is redundant with another bound a′ ≤ x if a′ ≥ a. Similarly, x ≤ a is redundant with another bound x ≤ a′ if a′ ≤ a. If a bound B is redundant with another bound B′, then we say B′ is stronger than B. By min_A(a1x1 + ... + anxn) and max_A(a1x1 + ... + anxn) we represent, respectively, the minimum and the maximum values of the expression a1x1 + ... + anxn subject to the condition that x1, ..., xn satisfy the bounds in A.

Definition 3.3 (False constraint, conflict). Let A be a set of bounds. If C is a constraint such that {C} ∪ A has no solution, then we say that C is false or a conflict in A.
The following lemma provides us with a characterisation of conflicts:

Lemma 3.4. Let C be a constraint of the form a1x1 + ... + anxn ≤ a0 and A be a set of pairwise non-contradictory bounds. The following hold: (1) Let lb_i be the strongest lower bound of x_i in A (or −∞ if there is none), and let ub_i be the strongest upper bound of x_i in A (or +∞ if there is none). Then min_A(a1x1 + ... + anxn) = Σ_{a_i > 0} a_i lb_i + Σ_{a_i < 0} a_i ub_i. (2) C is a conflict in A if, and only if, min_A(a1x1 + ... + anxn) > a0.

If C is a constraint and R is a set of bounds, then C and R can be used to propagate new bounds, as the following lemma indicates:

Lemma 3.5. Let C be a constraint a1x1 + ... + anxn ≤ a0 and R a set of pairwise non-contradictory bounds. Let x_j be a variable and define e_j = (a0 − Σ_{i≠j} min_R(a_i x_i))/a_j. Then every solution to {C} ∪ R satisfies x_j ≤ e_j if a_j > 0, and e_j ≤ x_j if a_j < 0.
Proof. See Appendix A.
The previous lemma motivates the following definition, which presents the core concept of this subsection:

Definition 3.6 (Bound propagation). Let C, R, x_j and e_j be as in Lemma 3.5. We say that C and R propagate the bound x_j ≤ ⌊e_j⌋ if a_j > 0, or the bound ⌈e_j⌉ ≤ x_j if a_j < 0.
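Lemma 3.4 and Definition 3.6 translate directly into code. The following sketch is ours (dict-based constraints indexed by variable name, possibly partial bound maps, illustrative helper names):

```python
INF = float('inf')

def min_val(coeffs, lb, ub):
    """min_A of a1*x1 + ... + an*xn in the spirit of Lemma 3.4(1): positive
    coefficients take the strongest lower bound, negative ones the strongest
    upper bound, with -inf/+inf when a bound is missing."""
    return sum(a * (lb.get(v, -INF) if a > 0 else ub.get(v, INF))
               for v, a in coeffs.items() if a != 0)

def is_conflict(coeffs, rhs, lb, ub):
    return min_val(coeffs, lb, ub) > rhs  # the test of Lemma 3.4(2)

def propagate(coeffs, rhs, lb, ub, j):
    """The bound on x_j propagated as in Definition 3.6 (finite bounds are
    assumed for the remaining monomials)."""
    s = rhs - sum(a * (lb[v] if a > 0 else ub[v])
                  for v, a in coeffs.items() if v != j and a != 0)
    aj = coeffs[j]
    if aj > 0:
        return ('<=', s // aj)      # x_j <= floor(e_j)
    return ('>=', -(s // -aj))      # ceil(e_j) <= x_j
```

On Example 3.1, propagate({'x': 1, 'y': -2, 'z': 5}, 5, {'x': 1}, {'y': 2}, 'z') returns ('<=', 1), i.e. z ≤ 1.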
Finally, we conclude this subsection with a lemma implying that the problem exposed in Section 3.1 (namely, that the constraint to be learnt may be too weak to justify a backjump) is indeed due to rounding (hence the name rounding problem).
Namely, we will prove that if a constraint C2 propagates a bound, a constraint C1 is false with that bound, and the cut rule is applied between the two without rounding, then the new constraint C3 that is obtained is still false. As a consequence, the falsity of the conflicting constraint is kept invariant during conflict analysis, and therefore in the absence of rounding in bound propagation the rounding problem cannot arise.
Lemma 3.7. Let R be a set of non-contradictory bounds, C1 be a constraint of the form a1x1 + ... + anxn ≤ a0, and C2 a constraint of the form b1x1 + ... + bnxn ≤ b0. Let us assume a_j < 0, b_j > 0 and that C1 is false in R ∪ {x_j ≤ e_j}, where e_j = (b0 − Σ_{i≠j} min_R(b_i x_i))/b_j ∈ Z. Let us also assume that x_j ≤ e_j is the strongest upper bound of x_j in R ∪ {x_j ≤ e_j}. Let C3 be the result of applying a cut inference between C1 and C2 eliminating x_j. Then C3 is false in R.

The symmetric version of the lemma, when a_j > 0, b_j < 0 and the propagated bound is a lower bound, can be stated and proved analogously.
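A small numeric instance of Lemma 3.7 with no rounding involved (the numbers are ours, not from the article): under R = {0 ≤ x, 1 ≤ y}, the constraint C2 : x + y + z ≤ 1 propagates z ≤ e_z with e_z = (1 − 0 − 1)/1 = 0 ∈ Z; the constraint C1 : x + y − 2z ≤ 0 is false once z ≤ 0 holds, and the cut 1·C1 + 2·C2 eliminating z, i.e. 3x + 3y ≤ 2, is false already in R, as the lemma guarantees:

```python
def min_val(coeffs, lb, ub):
    """Minimum of the left-hand side under the given bounds: lower bounds
    for positive coefficients, upper bounds for negative ones."""
    return sum(a * (lb[v] if a > 0 else ub[v]) for v, a in coeffs.items())

# C1 is false in R extended with the propagated (rounding-free) bound z <= 0:
assert min_val({'x': 1, 'y': 1, 'z': -2}, {'x': 0, 'y': 1}, {'z': 0}) > 0
# the cut 3x + 3y <= 2 is still false in R alone, so the invariant is preserved:
assert min_val({'x': 3, 'y': 3}, {'x': 0, 'y': 1}, {}) > 2
```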

The Core IntSat Algorithm
As outlined in Section 1, in this work we introduce two new CDCL algorithms for ILP. Although they are different in the way conflict analysis, backjumping and learning are performed, they share the same structure. This common part, which will henceforth be referred to as IntSat, is presented in this section. In the description below, this core algorithm just decides the existence of integer solutions, i.e., it considers feasibility problems only. Optimisation, as well as other extensions, will be discussed later on in Section 3.6.
First of all, let us introduce the following definitions. Let A be a sequence of bounds over a set of variables X. A variable x is defined to a in A if a ≤ x ∈ A and x ≤ a ∈ A for some a. Note that if all variables of X are defined and there are no contradictory bounds in A, then A can be seen as a total assignment A : X → Z.

The main idea of the core IntSat algorithm is as follows. The assignment A, implemented as a stack of bounds, is initially empty. Bound propagation is applied exhaustively while no conflict is detected. Every time a constraint RC and a set of bounds RS with RS ⊆ A propagate a new bound B, this bound is pushed onto A, associating RC and RS to B as the reason constraint and the reason set of B, respectively. As in the SAT case, when no more propagations are possible and there is no conflict, if all variables are defined then the assignment determines a solution and the algorithm terminates; otherwise a decision is made. On the other hand, when a conflict arises, if there are no decisions to be undone we can conclude that there is no solution; otherwise a subprocedure performs conflict analysis, backjumping and learning. As a result, new constraints may be added to the set of constraints, and the assignment undoes the last decision (and possibly more) and is extended with a new bound. Then the process is repeated with a new round of propagations.
More succinctly, the following pseudo-code describes the core IntSat algorithm for finding a solution to a set of constraints S:

(1) Propagate: while possible and no conflict appears, if RC and RS propagate some fresh bound B, for some constraint RC and set of bounds RS with RS ⊆ A, then push B onto A, associating to B the reason constraint RC and the reason set RS.
(2) if there is no conflict: if all variables are defined in A, output 'solution A' and halt;
else Decide: push a fresh bound B, marked as a decision, and go to step 1.
(3) if there is a conflict and A contains no decisions, output 'infeasible' and halt.
(4) if there is a conflict and A contains some decision, Conflict analysis Backjump Learn: compute a new assignment A′ and a set T of new constraints to be added to the set of constraints S.
Replace A by A′, add the constraints in T to S, and go to step 1.
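To make the control flow concrete, here is a deliberately naive, runnable stand-in for steps (1)-(4), entirely our own sketch and not IntSat itself: it keeps step (1)'s bound propagation and step (2)'s decisions over finite initial bounds, but replaces the Conflict analysis Backjump Learn step by plain chronological backtracking with no learning:

```python
def solve(constraints, box):
    """Feasibility search for constraints sum_i a_i*x_i <= rhs, each given as
    (coefficient dict, rhs), over finite boxes (var -> (lb, ub))."""
    lb = {v: b[0] for v, b in box.items()}
    ub = {v: b[1] for v, b in box.items()}

    def propagate():                              # step (1): bound propagation
        changed = True
        while changed:
            changed = False
            for coeffs, rhs in constraints:
                for j, aj in coeffs.items():
                    s = rhs - sum(a * (lb[v] if a > 0 else ub[v])
                                  for v, a in coeffs.items() if v != j)
                    if aj > 0 and s // aj < ub[j]:
                        ub[j] = s // aj           # x_j <= floor(e_j)
                        changed = True
                    elif aj < 0 and -(s // -aj) > lb[j]:
                        lb[j] = -(s // -aj)       # ceil(e_j) <= x_j
                        changed = True
                    if lb[j] > ub[j]:
                        return False              # contradictory bounds: conflict
        return True

    def search():
        saved = (dict(lb), dict(ub))
        if not propagate():
            lb.update(saved[0]); ub.update(saved[1])
            return None
        free = [v for v in lb if lb[v] < ub[v]]
        if not free:
            return dict(lb)                       # all variables defined: a solution
        x = free[0]                               # naive decision heuristic
        for val in range(lb[x], ub[x] + 1):       # decide x = val; undo on failure
            old = (lb[x], ub[x])
            lb[x] = ub[x] = val
            sol = search()
            if sol is not None:
                return sol
            lb[x], ub[x] = old
        lb.update(saved[0]); ub.update(saved[1])
        return None                               # chronological backtracking

    return search()
```

This is exactly the baseline that a valid Conflict analysis Backjump Learn procedure, with backjumping and learning, is meant to improve upon.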
Unlike in our exposition of CDCL for SAT in Section 2.1, in the above algorithm conflict analysis, backjumping and learning have been unified into a single subprocedure Conflict analysis Backjump Learn. This subroutine is not developed explicitly until Sections 3.4 and 3.5, which will present two different ways in which it can be concretised, although many possibilities exist. The only requirements that it should satisfy are summarised in the following definition:

Definition 3.8. Procedure Conflict analysis Backjump Learn, with input a set of constraints S and an assignment A such that S contains a conflict with A and A contains a decision, and output a set of constraints T and an assignment A′, is valid if: (1) It terminates: the output is computed in finite time.
Non-decision bounds obtained with bound propagation have both a reason set and a reason constraint. On the contrary, a non-decision bound B produced by the procedure Conflict analysis Backjump Learn always has a reason set RS, but may or may not have a reason constraint RC. Moreover, even if it is defined, this reason constraint is only required to be a consequence of S. Although for practical considerations it may be convenient that RC propagates B, the correctness of the algorithm does not depend on this. Finally, note that, on the other hand, decision bounds never have either a reason set or a reason constraint.
Example 3.9. Consider the following initial constraints: By adding the initial bounds to the empty stack and propagating exhaustively, the stack depicted below is obtained: Now no more new propagations can be made and there is no conflict. Since, for example, variable x is not defined yet, let us decide 1 ≤ x. This propagates y ≤ 0 due to C1, and z ≤ 0 due to C2, leading to the following stack: Since the stack contains the decision 1 ≤ x, at this point procedure Conflict analysis Backjump Learn would be called. In Sections 3.4 and 3.5 we will resume the execution of this example with different implementations of this procedure.
The following theorem states the termination, soundness and completeness of the core IntSat algorithm, provided the subprocedure that performs conflict analysis, backjumping and learning is valid:

Theorem 3.10. Let us assume procedure Conflict analysis Backjump Learn is valid.
Then the core IntSat algorithm, when given as input a finite set of constraints S including, for each variable x_i, a lower bound lb_i ≤ x_i and an upper bound x_i ≤ ub_i, always terminates, finding a solution if, and only if, there exists one, and returning 'infeasible' if, and only if, S is infeasible.
Proof. See Appendix B.

Resolution-based IntSat
In this section we develop a possible way of concretising the procedure Conflict analysis Backjump Learn, introduced in Section 3.3 as a subroutine of the core IntSat algorithm. The main idea is to mimic the resolution-based conflict analysis of SAT by only using reason sets, which are now viewed as negations of clauses of bounds. Thus, for now, no reason constraints are considered in this purely resolution-based conflict analysis (until the hybrid versions explained in the next subsection).
One of the shortcomings of this approach is that, at the end of conflict analysis, the explanation of the backjump is not a constraint but a clause of bounds, which does not belong to the language of ILP and hence cannot be learned directly. For this reason, here we also review techniques that, in some common situations, allow one to convert clauses of bounds into equivalent constraints. If successful, a new constraint justifying the backjump is learned. Otherwise, in the cases in which this equivalent constraint cannot be found, nothing is learned (or, at best, only a weaker constraint can be learned). In any case, this does not affect the validity of the procedure, and backjumping can be performed anyway.
A more precise pseudo-code description of Conflict analysis Backjump Learn following this idea is shown below. The input consists of a set of constraints S and an assignment A such that S contains a conflict with A and A contains a decision, and the output is a set T of constraints that can be learned and a new assignment A′. Note that, after each unfolding step in which a bound B′ with reason set RS is replaced in the conflicting set CS, the new conflicting clause is precisely the negation of (the conjunction of bounds in) the set (CS \ {B′}) ∪ RS.
Example 3.11. Let us revisit Example 3.2. There are two constraints x + y + 2z ≤ 2 and x + y − 2z ≤ 0. We take the decision 0 ≤ x, which propagates nothing, and later on another decision 1 ≤ y, which due to x + y + 2z ≤ 2 propagates z ≤ 0. This bound is then pushed with associated reason set { 0 ≤ x, 1 ≤ y }, resulting into the stack A = ( 0 ≤ x, 1 ≤ y, z ≤ 0 ). Now x + y − 2z ≤ 0 is a conflict, and CS = { 0 ≤ x, 1 ≤ y, z ≤ 0 } ⊆ A is a set of bounds causing its falsehood.
Let us apply the procedure Conflict analysis Backjump Learn. In the first iteration of conflict analysis we unfold the propagation of z ≤ 0, and then CS becomes (CS \ { z ≤ 0 }) ∪ { 0 ≤ x, 1 ≤ y } = { 0 ≤ x, 1 ≤ y }. After this step there is a single bound of CS which is, or is above, the assignment's topmost decision 1 ≤ y (namely 1 ≤ y itself, thus playing the role of B_top), and therefore Conflict analysis concludes. Then Backjump starts, and pops z ≤ 0 and 1 ≤ y from the assignment. Since 0 ≤ x is a bound in CS different from 1 ≤ y such that there are no decisions above it in the current assignment ( 0 ≤ x ), no more bounds are popped. Finally y ≤ 0 (that is, ¬B_top) is pushed onto the stack with reason set { 0 ≤ x, 1 ≤ y } \ { 1 ≤ y } = { 0 ≤ x }, resulting into the assignment ( 0 ≤ x, y ≤ 0 ).
In this case the clause of the negations of the bounds in the final conflicting set, namely ¬(0 ≤ x) ∨ ¬(1 ≤ y) ≡ x ≤ −1 ∨ y ≤ 0, cannot be converted into a constraint, as can be seen e.g. by geometric arguments, and therefore nothing is learnt (i.e., T = ∅). However, if additionally variable y were binary and x had an upper bound x ≤ u for a certain u ∈ Z in the initial set of constraints, then one can see that the clause x ≤ −1 ∨ y ≤ 0 could equivalently be transformed into the constraint x ≤ u − uy − y (and finally T = { x ≤ u − uy − y }); see the end of this section for a systematic way of obtaining these transformations.
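The propagations used in this example are instances of the generic bound-propagation step (cf. Lemma 3.5): to derive a bound on x_j from a_1 x_1 + ⋯ + a_n x_n ≤ a_0, every other monomial is set to its minimum value under the current bounds, and the result is divided by a_j with the appropriate rounding. The following is an illustrative sketch, not the paper's implementation; the names `propagatedBound` and `minMono` are ours:

```cpp
#include <vector>

// A constraint a_1*x_1 + ... + a_n*x_n <= a0, over variables with current
// bounds lb[i] <= x_i <= ub[i].
struct Constraint { std::vector<long long> a; long long a0; };

// floor(p/q) for q != 0, correct also for negative operands.
long long floordiv(long long p, long long q) {
    long long d = p / q, r = p % q;
    return (r != 0 && ((r < 0) != (q < 0))) ? d - 1 : d;
}
long long ceildiv(long long p, long long q) { return -floordiv(-p, q); }

// Minimum value of the monomial a*x given lb <= x <= ub.
long long minMono(long long a, long long lb, long long ub) {
    return a > 0 ? a * lb : a * ub;
}

// Strongest bound on x_j propagated by the constraint: an upper bound
// floor(e_j) if a_j > 0, a lower bound ceil(e_j) if a_j < 0, where e_j is
// obtained by moving the minimum of all other monomials to the right.
long long propagatedBound(const Constraint& C, const std::vector<long long>& lb,
                          const std::vector<long long>& ub, int j) {
    long long rest = 0;
    for (int i = 0; i < (int)C.a.size(); ++i)
        if (i != j) rest += minMono(C.a[i], lb[i], ub[i]);
    long long num = C.a0 - rest;
    return C.a[j] > 0 ? floordiv(num, C.a[j]) : ceildiv(num, C.a[j]);
}
```

On Example 3.11, with 0 ≤ x, 1 ≤ y and z ranging over some finite domain, the constraint x + y + 2z ≤ 2 yields z ≤ ⌊(2 − 0 − 1)/2⌋ = 0, as in the text.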
Example 3.12. Let us resume the execution of the core IntSat algorithm from Example 3.9. The constraints are: Let A_0 be the stack corresponding to decision level 0: In Example 3.9 the execution was suspended at the first conflict that was encountered. Namely, the stack is the one obtained there, and the conflict is C_0, which is false due to { x ≤ 1, y ≤ 0, z ≤ 0 }.
Let us apply Conflict analysis Backjump Learn. We start conflict analysis with the conflicting set CS = { x ≤ 1, y ≤ 0, z ≤ 0 }, and first unfold its topmost bound z ≤ 0, which has reason set { 1 ≤ x }, obtaining CS = { x ≤ 1, y ≤ 0, 1 ≤ x }. Notice that there are still two bounds in CS of the last decision level, namely y ≤ 0 and 1 ≤ x. Of these, the topmost one is y ≤ 0, which has reason set { 1 ≤ x }. Then the new conflicting set is { x ≤ 1, 1 ≤ x }. Now there is a single bound B_top, which is 1 ≤ x, which is or is above the stack's topmost decision, so conflict analysis is over.
The next step is Backjump, which pops bounds from the stack until either there are no decisions or, for some B in CS with B ≠ B_top, there are no decisions above B in the stack. In this case the latter holds with bound x ≤ 1 playing the role of B after having popped z ≤ 0, y ≤ 0 and 1 ≤ x. Then ¬B_top, that is x ≤ 0, is pushed with reason set { x ≤ 1, 1 ≤ x } \ { 1 ≤ x } = { x ≤ 1 }, leading to the stack: On the other hand, the clause of the negations of the final conflicting set { x ≤ 1, 1 ≤ x }, namely ¬(x ≤ 1) ∨ ¬(1 ≤ x) ≡ 2 ≤ x ∨ x ≤ 0, cannot be converted into a constraint, and so no constraint can be learnt.
This concludes the execution of Conflict analysis Backjump Learn. Back to the core IntSat algorithm, now due to C_0 we have that x ≤ 0 and y ≤ 1 propagate 1 ≤ z, and x ≤ 0 and z ≤ 3 propagate −1 ≤ y. Then due to C_3 bound 1 ≤ z propagates y ≤ 0, and −1 ≤ y propagates z ≤ 2. In turn, thanks to C_0, we have that x ≤ 0 and y ≤ 0 propagate 2 ≤ z, and x ≤ 0 and z ≤ 2 propagate 0 ≤ y. Finally, bounds 2 ≤ z and 0 ≤ y make C_3 false. Since there are no decisions left in the stack, the algorithm terminates reporting that there is no solution.
The following theorem ensures that the procedure Conflict analysis Backjump Learn is valid, as required by Theorem 3.10:

Theorem 3.13. The above procedure Conflict analysis Backjump Learn is valid.
Finally, there is a step in the above description of the algorithm that requires further explanation. Namely, let us see when and how the clause consisting of the negations of the bounds in the final conflicting set can be converted into an equivalent constraint. For example, this transformation can be achieved if all bounds involve binary variables, except for at most one [5]. More specifically, if the clause is of the form

x_1 ∨ ⋯ ∨ x_m ∨ ¬y_1 ∨ ⋯ ∨ ¬y_n ∨ k ≤ z,

where variables x_1, …, x_m, y_1, …, y_n are binary, z is an integer variable with lower bound lb (that is, lb ≤ z for any solution) and k > lb is an integer, then the clause can equivalently be written as the constraint

k − (k − lb) · ( x_1 + ⋯ + x_m + (1 − y_1) + ⋯ + (1 − y_n) ) ≤ z.

Similarly, if the clause is of the form

x_1 ∨ ⋯ ∨ x_m ∨ ¬y_1 ∨ ⋯ ∨ ¬y_n ∨ z ≤ k,

where variables x_1, …, x_m, y_1, …, y_n are binary, z is an integer variable with upper bound ub (i.e., z ≤ ub for any solution) and k < ub is an integer, then the clause can equivalently be written as

z ≤ k + (ub − k) · ( x_1 + ⋯ + x_m + (1 − y_1) + ⋯ + (1 − y_n) ).
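As a sanity check of the second transformation, one can brute-force the equivalence on the clause from Example 3.11 (binary y, upper bound x ≤ u, k = −1, giving x ≤ u − (u + 1)y = u − uy − y). This is purely illustrative and not part of the solver:

```cpp
// Brute-force equivalence check for the clause  x <= -1  \/  y <= 0,
// with y binary and x <= u (here z = x, m = 0, n = 1, k = -1).
bool clauseHolds(int x, int y)            { return x <= -1 || y <= 0; }
bool constraintHolds(int x, int y, int u) { return x <= u - (u + 1) * y; }

// Checks that clause and constraint agree on every point of the domain
// x in [xlb, u], y in {0, 1}.
bool equivalentOnDomain(int xlb, int u) {
    for (int x = xlb; x <= u; ++x)
        for (int y = 0; y <= 1; ++y)
            if (clauseHolds(x, y) != constraintHolds(x, y, u)) return false;
    return true;
}
```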

Hybrid Resolution and Cut-based IntSat
The algorithm for the procedure Conflict analysis Backjump Learn presented in Section 3.4 suffers from some drawbacks. In the first place, if the clause with the negations of the bounds in the conflicting set cannot be converted into an equivalent constraint, at best only a weaker constraint can be learned. This is inconvenient, as learning is widely acknowledged as one of the key components of the success of SAT solvers. Moreover, from a proof complexity perspective, the resolution proof system that underlies the algorithm [55] is known to be less powerful than, e.g., the cutting planes proof system, which is based on the cut rule. For example, for pigeon-hole problems resolution proofs are exponentially long [36], while polynomial-size proofs exist in cutting planes. Unfortunately, pigeon-hole problems often arise as subproblems in real-life applications, such as scheduling and timetabling [2]: how to fit n tasks in fewer than n time slots, etc.
This section presents an alternative approach for Conflict analysis Backjump Learn. This new algorithm incorporates the cut rule, in an attempt to address the aforementioned issues. However, the cut rule does not replace resolution but is applied in parallel. Hence, in addition to the conflicting set, a conflicting constraint will also be maintained during conflict analysis, and when unfolding propagations cuts will be applied between this constraint and the reason constraints, which, unlike in Section 3.4, will now be taken into account. In this way we can bypass the rounding problem and ensure that backjumping will always be possible.
Next a description of the new algorithm for Conflict analysis Backjump Learn is shown. As in the previous section, the input consists of a set of constraints S and an assignment A such that S contains a conflict with A and A contains a decision, and the output is a set T of constraints that can be learned and a new assignment A′.

Example 3.14. Let us revisit Example 3.11 and apply the new procedure Conflict analysis Backjump Learn to the conflict described there. We recall we have the set of constraints S = { x + y + 2z ≤ 2, x + y − 2z ≤ 0 } and the assignment A = ( 0 ≤ x, 1 ≤ y, z ≤ 0 ), where 0 ≤ x and 1 ≤ y are decisions while z ≤ 0 is a propagated bound with reason constraint x + y + 2z ≤ 2 and reason set { 0 ≤ x, 1 ≤ y }. The conflict is x + y − 2z ≤ 0 ∈ S, which is false due to { 0 ≤ x, 1 ≤ y, z ≤ 0 } ⊆ A.
Conflict analysis starts by assigning x + y − 2z ≤ 0 to CC and the set of bounds { 0 ≤ x, 1 ≤ y, z ≤ 0 } to CS. In the first iteration, in CS we replace z ≤ 0 by its reason set { 0 ≤ x, 1 ≤ y }. The resulting CS is { 0 ≤ x, 1 ≤ y }. A cut eliminating z exists (see Example 3.2) and CC becomes x + y ≤ 1. Then conflict analysis is over because CS contains exactly one bound B_top, which is 1 ≤ y, at or above A's topmost decision. Then Backjump starts, and we pop bounds until for some B in CS with B ≠ B_top there are no decisions above B in A, in this case, until there are no decisions above 0 ≤ x in A. Hence bounds z ≤ 0 and 1 ≤ y are popped, and after that ¬B_top, which is y ≤ 0, is pushed with reason set { 0 ≤ x } and reason constraint x + y ≤ 1. Note that this reason constraint is not a 'good' reason, i.e., it does not propagate y ≤ 0, but it is still a valid consequence of the set of constraints. Finally, in Learn, the final CC, which is x + y ≤ 1, is assigned to T so that it can be learned.
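The cut applied above (scale the two constraints so that the coefficients of the eliminated variable cancel, add, then divide the result by the gcd of its left-hand side, rounding the right-hand side down) can be sketched as follows; `cutEliminate` is our name, and the pre-condition that the eliminated variable has a positive coefficient in C1 and a negative one in C2 is assumed:

```cpp
#include <numeric>
#include <vector>

// A constraint  sum a_i*x_i <= a0.
struct Lin { std::vector<long long> a; long long a0; };

// floor(p/q), correct also for negative operands.
long long floordivC(long long p, long long q) {
    long long d = p / q, r = p % q;
    return (r != 0 && ((r < 0) != (q < 0))) ? d - 1 : d;
}

// Cut between C1 and C2 eliminating variable k (C1.a[k] > 0, C2.a[k] < 0):
// scale C1 by |C2.a[k]| and C2 by C1.a[k], add, then normalise by the gcd
// of the left-hand side, rounding the right-hand side down.
Lin cutEliminate(const Lin& C1, const Lin& C2, int k) {
    long long m1 = -C2.a[k], m2 = C1.a[k];       // both positive by assumption
    Lin C3; C3.a.resize(C1.a.size());
    long long g = 0;
    for (int i = 0; i < (int)C1.a.size(); ++i) {
        C3.a[i] = m1 * C1.a[i] + m2 * C2.a[i];   // coefficient of x_k: 0
        g = std::gcd(g, C3.a[i]);
    }
    C3.a0 = m1 * C1.a0 + m2 * C2.a0;
    if (g > 1) {                                 // divide and round down
        for (long long& c : C3.a) c /= g;
        C3.a0 = floordivC(C3.a0, g);
    }
    return C3;
}
```

On C1 : x + y + 2z ≤ 2 and C2 : x + y − 2z ≤ 0 this produces 2x + 2y ≤ 2 before normalisation and, after the division, the learned constraint x + y ≤ 1.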
Example 3.15. Let us revisit Example 3.9 and complete the execution of the core IntSat algorithm. Let us recall the constraints: Let us define A_0 as the stack of decision level 0: The execution in Example 3.9 was interrupted at the first conflict, with the following stack: Procedure Conflict analysis Backjump Learn starts with conflict analysis, where initially the conflicting constraint CC is C_0, and the conflicting set CS is { x ≤ 1, y ≤ 0, z ≤ 0 }. The bound in CS which is topmost in the stack is z ≤ 0, which has reason set { 1 ≤ x } and reason constraint C_2. Thus, as in Example 3.12, the new conflicting set becomes { x ≤ 1, y ≤ 0, 1 ≤ x }. Moreover, the cut rule can be applied to C_2 and C_0 to eliminate variable z, resulting into C_7 : 1 ≤ y, which becomes the new conflicting constraint. Now Early Backjump can be applied, since after popping z ≤ 0, y ≤ 0 and 1 ≤ x the constraint 1 ≤ y trivially propagates the fresh bound 1 ≤ y in A_0. So 1 ≤ y is pushed with reason constraint 1 ≤ y and reason set ∅, leading to the stack: Moreover, Learn adds 1 ≤ y to the set of constraints. This concludes procedure Conflict analysis Backjump Learn. Now 1 ≤ y propagates x ≤ 0 due to C_1, and z ≤ 0 due to C_3. In turn, x ≤ 0, z ≤ 0 and y ≤ 1 make constraint C_0 false. As there are no decisions left in the stack, the execution of the core IntSat algorithm terminates reporting that the set of constraints is infeasible. Note that, compared to Example 3.12, the possibility to infer and learn 1 ≤ y allows proving infeasibility with many fewer propagations.
Finally, the next theorem ensures that procedure Conflict analysis Backjump Learn is valid, as required by Theorem 3.10:

Theorem 3.16. The above procedure Conflict analysis Backjump Learn is valid.
Proof. See Appendix C.

Extensions of IntSat
The IntSat algorithm described in Section 3.3 only decides the existence of integer solutions. In order to go beyond feasibility problems, optimisation can be handled in the following standard way. For finding a solution that minimises a linear objective function c_1 x_1 + ⋯ + c_n x_n, each time a new solution sol is found, the constraint c_1 x_1 + ⋯ + c_n x_n ≤ c_0, where c_0 is c_1 sol(x_1) + ⋯ + c_n sol(x_n) − 1, is added, so as to attempt to improve the best solution found so far. This triggers a conflict, from which the search continues. This objective strengthening is repeated until the problem becomes infeasible. Bound propagations from these successively stronger constraints turn out to be very effective for pruning. Unlike what happens in propositional logic, here linear constraints are first-class citizens (i.e., they belong to the core language), and therefore adding the constraints generated from the objective function is straightforward and does not require further encodings, as happens in SAT [29].
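The objective-strengthening loop can be sketched as follows; here a toy brute-force oracle stands in for the actual IntSat search, and all names (`solve`, `minimise`) and the tiny two-variable program are our own illustrative assumptions:

```cpp
#include <optional>
#include <vector>

using Sol = std::vector<int>;

// Stand-in feasibility oracle (our toy problem): first point satisfying
// x0 + 2*x1 <= 4, x0 >= x1, x0, x1 in [0, 3], and c·x <= bound.
std::optional<Sol> solve(const std::vector<int>& c, long long bound) {
    for (int x0 = 0; x0 <= 3; ++x0)
        for (int x1 = 0; x1 <= 3; ++x1)
            if (x0 + 2 * x1 <= 4 && x0 >= x1 &&
                c[0] * x0 + c[1] * x1 <= bound)
                return Sol{x0, x1};
    return std::nullopt;
}

// Each time a solution sol is found, add  c·x <= c·sol - 1  and resume;
// when the strengthened problem becomes infeasible, the last solution
// found is optimal.
std::optional<Sol> minimise(const std::vector<int>& c) {
    std::optional<Sol> best, cur;
    long long bound = 1000000;                  // some initial upper bound
    while ((cur = solve(c, bound))) {
        best = cur;
        bound = (long long)c[0] * (*cur)[0] + (long long)c[1] * (*cur)[1] - 1;
    }
    return best;
}
```

With c = (−1, −1), i.e. maximising x0 + x1, the loop terminates with an optimal solution of value 3 on this toy problem.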
Another simplification we have made for the sake of presentation is that constraints are assumed to be ≤-inequalities with integer coefficients. However, with trivial transformations it is possible to tackle more general constraints. Namely, a constraint a_1 x_1 + ⋯ + a_n x_n ≥ a_0 can be expressed as −a_1 x_1 − ⋯ − a_n x_n ≤ −a_0, a constraint a_1 x_1 + ⋯ + a_n x_n = a_0 can be replaced by the two constraints a_1 x_1 + ⋯ + a_n x_n ≤ a_0 and a_1 x_1 + ⋯ + a_n x_n ≥ a_0, and rational non-integer coefficients a/b can be removed by multiplying both sides by b.
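These transformations are mechanical; a minimal sketch (our own helper names) of the first two:

```cpp
#include <utility>
#include <vector>

// Constraints normalised to the form  a_1*x_1 + ... + a_n*x_n <= a0.
struct Leq { std::vector<long long> a; long long a0; };

// A '>=' constraint  sum a_i*x_i >= a0  is negated into a '<='.
Leq fromGeq(std::vector<long long> a, long long a0) {
    for (long long& c : a) c = -c;
    return {a, -a0};
}

// An equality is split into the two inequalities  <= a0  and  >= a0.
std::pair<Leq, Leq> fromEq(const std::vector<long long>& a, long long a0) {
    return { Leq{a, a0}, fromGeq(a, a0) };
}
```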

Implementation
In this section we describe some aspects of our current implementation of IntSat. It consists of roughly 10000 lines of simple C++ code that make heavy use of standard STL data structures. For instance, a constraint is an STL vector of monomials (pairs of two ints: the variable number and the coefficient), sorted by variable number, plus the independent term. Coefficients are never larger than 2^30, and cuts producing any coefficient larger than 2^30 are simply not performed. In combination with the fact that we use 64-bit integers for intermediate results during bound propagation, cuts, normalisation, etc., this allows us to prevent overflow with a few cheap and simple tests.
STL vectors are also used, e.g., in the implementation of the assignment. There is a vector, the bounds vector, indexed by variable number, which can return in constant time the current lower and upper bounds for that variable. It always stores, for each variable x_i, the positions pl_i and pu_i in the stack of its current (strongest) lower bound and upper bound, respectively. The stack itself is another STL vector containing at each position three data fields: a bound, a natural number pos, and an info field that includes, among other information, (pointers to) the reason set and the reason constraint. The value pos is always the position in the stack of the previous bound of the same type (lower or upper) for that variable, with pos = −1 for initial bounds. When pushing or popping bounds, these properties are easy to maintain in constant time. See Figure 1 for an example of a bounds vector and the corresponding stack.
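A minimal sketch of these two structures (simplified: no reason sets and no info field), showing the constant-time push and pop via the pos links:

```cpp
#include <vector>

// One stack entry: a bound for 'var' (lower or upper) and, in 'pos', the
// stack position of the previous bound of the same type for the same
// variable (-1 for initial bounds).
struct Entry { int var; bool isLower; long long val; int pos; };

struct Assignment {
    std::vector<Entry> stack;
    std::vector<int> pl, pu;     // positions of current lower/upper bounds

    explicit Assignment(int nVars) : pl(nVars, -1), pu(nVars, -1) {}

    void push(int var, bool isLower, long long val) {
        int& cur = isLower ? pl[var] : pu[var];
        stack.push_back(Entry{var, isLower, val, cur});  // link to previous
        cur = (int)stack.size() - 1;
    }
    void pop() {
        const Entry& e = stack.back();
        (e.isLower ? pl[e.var] : pu[e.var]) = e.pos;     // restore previous
        stack.pop_back();
    }
    long long lower(int var) const { return stack[pl[var]].val; }
    long long upper(int var) const { return stack[pu[var]].val; }
};
```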

Bound propagation using filters
Affordably efficient bound propagation is crucial for performance. In our current implementation, for each variable x there are two occurs lists. The positive occurs list for x contains all pairs (I_C, a) such that C is a linear constraint where x occurs with a positive coefficient a, and the negative one contains the same for occurrences with a negative coefficient a. Here I_C is an index to the constraint filter F_C of C in an array of constraint filters. The filter is maintained cheaply, and one can guarantee that C does not propagate anything as long as F_C ≤ 0, thus avoiding many useless (cache-)expensive visits to the actual constraint C. The filtering technique is based on the following lemma:

Lemma 4.1. Let C be a constraint of the form a_1 x_1 + ⋯ + a_n x_n ≤ a_0. Let R be the set consisting of the current lower bound lb_i ≤ x_i and the current upper bound x_i ≤ ub_i of x_i, for each variable x_i. Then the following hold:
(1) Constraint C propagates a non-redundant bound on x_j if and only if a_0 < |a_j| (ub_j − lb_j) + Σ_i min_R(a_i x_i).
(2) Constraint C propagates (a non-redundant bound) if and only if a_0 < max_j ( |a_j| (ub_j − lb_j) ) + Σ_i min_R(a_i x_i).

Proof. For the first claim, first let us assume a_j > 0.
Then by Lemma 3.5, C and R propagate x_j ≤ ⌊e_j⌋, where e_j = (a_0 − Σ_{i≠j} min_R(a_i x_i)) / a_j. This upper bound is non-redundant if and only if ⌊e_j⌋ < ub_j, that is, ⌊e_j⌋ + 1 ≤ ub_j, or equivalently, e_j < ub_j, i.e., a_j e_j < a_j ub_j. By expanding the definition of e_j, this can be rewritten as a_0 − Σ_{i≠j} min_R(a_i x_i) < a_j ub_j, which, since min_R(a_j x_j) = a_j lb_j, holds if and only if a_0 < a_j (ub_j − lb_j) + Σ_i min_R(a_i x_i).

The case a_j < 0 is analogous. By Lemma 3.5, C and R propagate ⌈e_j⌉ ≤ x_j, where e_j = (a_0 − Σ_{i≠j} min_R(a_i x_i)) / a_j. This lower bound is non-redundant if and only if ⌈e_j⌉ > lb_j, that is, ⌈e_j⌉ − 1 ≥ lb_j, or equivalently, e_j > lb_j, i.e., a_j e_j < a_j lb_j. By expanding the definition of e_j, this can be rewritten as a_0 < a_j lb_j + Σ_{i≠j} min_R(a_i x_i), which, since min_R(a_j x_j) = a_j ub_j, holds if and only if a_0 < a_j lb_j − a_j ub_j + Σ_i min_R(a_i x_i), that is, a_0 < |a_j| (ub_j − lb_j) + Σ_i min_R(a_i x_i).

As regards the second claim, by the previous result constraint C propagates a non-redundant bound if and only if there exists a variable x_j such that a_0 < |a_j| (ub_j − lb_j) + Σ_i min_R(a_i x_i), and this happens if and only if a_0 < max_j ( |a_j| (ub_j − lb_j) ) + Σ_i min_R(a_i x_i).

Using the same notation as in the statement of Lemma 4.1, given a constraint C we define F′_C as −a_0 + max_j ( |a_j| (ub_j − lb_j) ) + Σ_i min_R(a_i x_i). By Lemma 4.1, constraint C propagates a non-redundant bound if, and only if, F′_C > 0.
For the sake of efficiency, the filter F_C of C is actually an upper approximation of F′_C. To preserve the property that F_C ≥ F′_C, the filters need to be updated when new bounds are pushed onto the stack (and each update needs to be undone when popped, for which other data structures exist). Namely, assume a fresh lower bound k ≤ x is pushed onto the stack. Let the previous lower bound for x be k′ ≤ x. Note that k′ < k. For each pair (I_C, a) in the positive occurs list of x, using I_C we access the filter F_C and increase it by |a (k − k′)|. This accounts for the update of the term Σ_i min_R(a_i x_i) in the definition of F′_C, as a > 0 and min_R(a x) increases exactly by a (k − k′). On the other hand, for efficiency reasons, the term max_j ( |a_j| (ub_j − lb_j) ) in the definition of F′_C is not updated. Since this term can only decrease as new bounds are added, we get the inequality F_C ≥ F′_C. Only when F_C becomes positive is the constraint C visited, as it may propagate some new bound. To avoid too much precision loss, each time a constraint C is visited, F_C is reset to the exact value F′_C. Finally, if a fresh upper bound x ≤ k is pushed onto the stack, exactly the same is done, but using the negative occurs list and the previous upper bound x ≤ k′ for x. Note that in this case k < k′, and if a < 0 then min_R(a x) again increases exactly by a (k − k′) = |a (k − k′)|.
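A sketch of the exact filter value F′_C and of the constant-time bump performed when a lower bound is strengthened (function names are ours; the real implementation also undoes updates on backtracking and handles upper bounds symmetrically):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// A constraint  sum a_i*x_i <= a0.
struct Cstr { std::vector<long long> a; long long a0; };

// Exact value F'_C = -a0 + max_j |a_j|*(ub_j - lb_j) + sum_i min(a_i*x_i):
// C can propagate a non-redundant bound if and only if F'_C > 0.
long long exactFilter(const Cstr& C, const std::vector<long long>& lb,
                      const std::vector<long long>& ub) {
    long long sum = 0, mx = 0;
    for (int i = 0; i < (int)C.a.size(); ++i) {
        sum += C.a[i] > 0 ? C.a[i] * lb[i] : C.a[i] * ub[i];
        mx = std::max(mx, std::llabs(C.a[i]) * (ub[i] - lb[i]));
    }
    return -C.a0 + mx + sum;
}

// When the lower bound of a variable with coefficient a > 0 in C goes from
// kOld up to kNew, the min-sum term grows by exactly a*(kNew - kOld); the
// max term is left untouched, so the maintained filter stays an upper
// approximation of F'_C.
long long bumpOnNewLowerBound(long long F, long long a,
                              long long kOld, long long kNew) {
    return a > 0 ? F + a * (kNew - kOld) : F;
}
```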

Early backjumps
A significant amount of time in the hybrid resolution and cut-based conflict analysis of Section 3.5 is spent on determining whether an Early Backjump can be applied. In our current implementation, when we want to find the lowest decision level at which a certain constraint C propagates (a non-redundant bound), we take each of the variables that occur in C and go over the history of its previous lower and upper bounds in the stack (using the aforementioned field pos). The heights in the stack of these bounds indicate the decision levels at which C could have propagated earlier.
The worst-case scenario in this process is when the constraint turns out not to propagate at any decision level: all the potentially propagating decision levels have been examined, but the outcome of the computation is useless. Fortunately, in some situations it is possible to see in advance that the constraint cannot propagate, and hence precious time can be saved. Namely, this is the case when the constraint has been obtained by applying an eliminating cut inference between two constraints whose only common variable is the one that is eliminated.
To prove this result formally, we first need the following lemma, which states that simplifying a constraint by dividing it by a common divisor of the left-hand side and rounding down does not increase propagation power.
Lemma 4.2. Let C_1 be a constraint of the form c a_1 x_1 + ⋯ + c a_n x_n ≤ c k + d with 0 ≤ d < c, and let C_2 be the constraint a_1 x_1 + ⋯ + a_n x_n ≤ k. If C_1 does not propagate any non-redundant bound on a variable x_j, then neither does C_2.
Proof. For any variable x_i, let lb_i ≤ x_i and x_i ≤ ub_i be the current lower and upper bounds for x_i, respectively. Let A be the set of all these bounds.
Let us assume C_1 does not propagate any non-redundant bound on x_j. Then by Lemma 4.1, c k + d ≥ c X, where X denotes the integer |a_j| (ub_j − lb_j) + Σ_i min_A(a_i x_i). Since 0 ≤ d < c and X is an integer, this implies k ≥ X, which by Lemma 4.1 means that C_2 does not propagate any non-redundant bound on x_j.
Finally, the following lemma ensures that, if the only common variable of the premises is the one that is eliminated, an eliminating cut does not improve propagation power.

Lemma 4.3. Let C_1 be a constraint of the form a x + b_1 y_1 + ⋯ + b_n y_n ≤ k_1 and C_2 a constraint of the form −a′ x + c_1 z_1 + ⋯ + c_m z_m ≤ k_2, with a, a′ > 0 and where the variables y_1, …, y_n, z_1, …, z_m are pairwise different and different from x. Let C_3 be the constraint obtained by a cut between C_1 and C_2 eliminating x. If neither C_1 nor C_2 propagates any non-redundant bound, then neither does C_3.

Proof. For any variable v ∈ {x, y_i, z_j}, let lb(v) ≤ v and v ≤ ub(v) be the current lower and upper bounds for v, respectively. Let A be the set of all these bounds. First of all, by virtue of Lemma 4.2 we may ignore the final simplification step of the cut, since dividing by a common factor of the left-hand side and rounding down does not increase propagation power; hence we may take C_3 to be a′ C_1 + a C_2, that is, a′ b_1 y_1 + ⋯ + a′ b_n y_n + a c_1 z_1 + ⋯ + a c_m z_m ≤ a′ k_1 + a k_2. Now let us assume that C_1 and C_2 have been propagated exhaustively. As C_1 does not propagate any non-redundant bound on variable y_k (1 ≤ k ≤ n), by Lemma 4.1

k_1 ≥ |b_k| (ub(y_k) − lb(y_k)) + min_A(a x) + Σ_i min_A(b_i y_i).

As C_2 does not propagate any non-redundant bound on variable x, by Lemma 4.1

k_2 ≥ a′ (ub(x) − lb(x)) + min_A(−a′ x) + Σ_j min_A(c_j z_j).

Multiplying the first inequality by a′ and the second one by a and adding them, the terms involving x cancel out, since a′ min_A(a x) = a a′ lb(x), a min_A(−a′ x) = −a a′ ub(x) and a a′ (ub(x) − lb(x)) add up to zero, and we obtain

a′ k_1 + a k_2 ≥ |a′ b_k| (ub(y_k) − lb(y_k)) + Σ_i min_A(a′ b_i y_i) + Σ_j min_A(a c_j z_j).

By Lemma 4.1, this shows that C_3 does not propagate any non-redundant bound on variable y_k. The proof of the analogous result for z_l (1 ≤ l ≤ m) is symmetric.
To apply this result in conflict analysis, let C_1 be the current conflicting constraint, and C_2 the reason constraint of the bound B of the conflicting set that is topmost in the stack. Since we have not been able to apply an early backjump so far, C_1 does not propagate anything at any previous decision level. And since C_2 is a constraint of the integer program and before each decision all bounds are exhaustively propagated, C_2 cannot propagate anything at any previous decision level either. Therefore, by Lemma 4.3, if C_1 and C_2 only have the variable x of bound B in common, and a cut eliminating x exists between C_1 and C_2, then an early backjump will not be applicable to the resulting constraint.

Clauses
Clauses are common in ILP problems. For example, set covering constraints, which are of the form x_1 + ⋯ + x_n ≥ 1, are a particular case of clauses where all literals are positive. As a reference, 50 out of the 240 instances of the Benchmark Set and 199 out of the 1065 instances of the Collection Set of the MIPLIB Mixed Integer Programming Library 2017 (roughly 20% in both cases) contain set covering constraints.
Hence there is a potential gain in giving clauses a particular treatment. For this reason, in our system they have a specialised implementation, essentially identical to what can be found in current SAT solvers. This allows, for example, more memory-efficient storage and faster propagation, thanks to the use of watch lists instead of occurs lists. Moreover, some clause-specific constraint simplification techniques can be applied, such as the so-called lemma shortening introduced in MiniSAT [28], which has turned out to be essential in state-of-the-art SAT solving.
In particular, binary clauses are implemented as edges of the so-called binary graph [6]. As is common in SAT solvers, our implementation uses this graph not only for propagating even faster than with watch lists, but also for detecting equivalent literals, which are used to simplify the problem.
Apart from the edges coming from (explicit) binary clauses, we simulate that the binary graph also contains what we call implicit binary clauses. These are binary clauses that are consequences of constraints in the integer program, but which are not explicitly added to the graph. For example, a set packing constraint of the form x_1 + ⋯ + x_n ≤ 1 implies the binary clauses ¬x_i ∨ ¬x_j, where 1 ≤ i < j ≤ n. In general, let us assume we are given an assignment A and we have a constraint a_1 x_1 + ⋯ + a_n x_n ≤ a_0 such that x_i, x_j are Boolean variables satisfying 1 ≤ x_i ∉ A and 1 ≤ x_j ∉ A, and a_i, a_j > 0. Then the constraint implies (together with A) the binary clause ¬x_i ∨ ¬x_j whenever setting both x_i and x_j to 1 falsifies the constraint under the remaining bounds, that is, if and only if a_i + a_j + Σ_{k≠i,j} min_A(a_k x_k) > a_0.
In practice, at the beginning of the search, once all bounds have been exhaustively propagated and before making any decision, all constraints are examined, and for each literal we store a list of the identifiers of those constraints that imply a binary clause containing that literal. These lists of constraint identifiers are used in the algorithms that involve the binary graph to recompute on demand the implied binary clauses and thus simulate the corresponding edges.
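The test of whether a constraint implies ¬x_i ∨ ¬x_j under the current bounds can be derived directly: set x_i = x_j = 1 and every remaining monomial to its minimum, and check whether the result already exceeds a_0. An illustrative sketch (the names are ours):

```cpp
#include <vector>

// A constraint  sum a_k*x_k <= a0.
struct Ct { std::vector<long long> a; long long a0; };

// The constraint implies, together with the current bounds, the clause
// not x_i \/ not x_j (Boolean x_i, x_j with positive coefficients) exactly
// when x_i = x_j = 1 plus the minimum of every other monomial exceeds a0.
bool impliesNegPair(const Ct& c, const std::vector<long long>& lb,
                    const std::vector<long long>& ub, int i, int j) {
    long long s = c.a[i] + c.a[j];               // x_i = x_j = 1
    for (int k = 0; k < (int)c.a.size(); ++k)
        if (k != i && k != j)
            s += c.a[k] > 0 ? c.a[k] * lb[k] : c.a[k] * ub[k];
    return s > c.a0;
}
```

For a set packing constraint x_0 + x_1 + x_2 ≤ 1 over Booleans this detects the implied clause ¬x_0 ∨ ¬x_1, while x_0 + x_1 ≤ 2 implies no such clause.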
These three different implementations of constraints (general constraints, clauses, binary clauses) coexist in the system. As a consequence, the stack of bounds of the assignment has three integers that point to the constraint, clause and binary clause, respectively, that was propagated last, and specialised procedures for propagation are called as these pointers move forward towards the top of the stack. If a conflict is found, the conflicting constraint as well as the reason constraints (be they general constraints, clauses or binary clauses) are transformed into a common representation in order to carry out the conflict analysis.

Decision heuristics
As in SAT solvers, our current heuristic for selecting the variable of the next decision bound is based on recent activity: the variable with the highest activity score is picked. To that end, a priority queue of variables, ordered by activity score, is maintained. The score of a variable x is increased each time a bound containing x appears in the conflicting set during conflict analysis, and, to reward recent activity, the amount of the increment grows geometrically at each conflict.
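A minimal sketch of this scoring scheme; the growth factor and the linear scan are illustrative assumptions, whereas the real implementation keeps a priority queue and rescales scores to avoid overflow:

```cpp
#include <algorithm>
#include <vector>

// Activity-based variable selection: the score of a variable is bumped
// whenever one of its bounds appears in the conflicting set, and the bump
// amount grows geometrically with each conflict so that recent conflicts
// dominate.
struct Activity {
    std::vector<double> score;
    double bump = 1.0;
    double growth = 1.05;        // illustrative growth factor

    explicit Activity(int nVars) : score(nVars, 0.0) {}

    void onConflict(const std::vector<int>& varsInConflictingSet) {
        for (int v : varsInConflictingSet) score[v] += bump;
        bump *= growth;          // reward recent activity
    }
    int pick() const {           // variable with the highest score
        return (int)(std::max_element(score.begin(), score.end())
                     - score.begin());
    }
};
```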
Once a variable x is picked, one has to decide the actual decision bound: whether it is a lower or an upper bound, and how to split the interval between the current lower bound l and the current upper bound u. Several strategies have been implemented, among others ones that split the domain around the middle value m = (l + u)/2. Two further strategies attempt to steer the search towards minimising the cost in a 'first-success' manner: they take as a reference the value v in {l, u} that is best for x with respect to the objective function, and restrict the domain of x towards v. Three more strategies are inspired by the last-phase polarity heuristic from SAT [56] and are aimed at finding a first solution quickly (which helps to prune the search tree dramatically): here v is the last value that x was assigned to, and the domain of x is restricted towards v, e.g. to [l, l] or [u, u] if v is one of the endpoints, or otherwise towards whichever of l and u is closer to v. Finally, there are analogous strategies that use other assignments as a reference, such as the value of x in an initial solution provided by the user. The user can specify an order between these strategies, in such a way that if a strategy cannot be applied (for instance, because variable x has not been assigned yet, or because v ∉ [l, u]), then the next strategy in the order is attempted.

Experiments
This section is devoted to the experimental evaluation of IntSat. After describing the competing solvers in Section 5.1 and explaining how the benchmarks have been obtained in Section 5.2, the results of this evaluation are shown in Section 5.3. Finally, Section 5.4 presents some conclusions that can be drawn from the evaluation.
All experiments reported next were carried out on a standard 3.00 GHz 8-core Intel i7-9700 desktop with 16 GB of RAM. A binary of our basic implementation as well as all benchmarks can be downloaded from [50], so that the interested reader can reproduce and verify the results reported here.
Due to their prevalence and great success in practice, in this evaluation we focus on comparing with Operations Research ILP solvers based on the simplex algorithm and branch-and-cut. However, there are other tools that can handle ILP with different approaches, which have not been included here mainly for performance reasons. For instance, SAT Modulo Theories (SMT) [52] solvers such as MathSAT5 [19], Yices [26], Z3 [24] or our own Barcelogic solver [14] are aimed at problems consisting of Boolean combinations of integer linear constraints, ILP being the particular case of conjunctions of constraints. SMT solvers specialise in efficiently handling this arbitrary Boolean structure, while their theory solver component, the one that precisely handles conjunctions of constraints (our goal here), is not as sophisticated as in simplex-based systems. As a consequence, the performance of SMT solvers on ILP instances is in general orders of magnitude worse than that of CPLEX or Gurobi, and for this reason they are not included in these experiments.
Concerning SAT and Lazy Clause Generation (LCG) [53], from our own work (see among many others [1]) we also know well that solvers that (lazily) encode integer linear constraints into SAT can be competitive only as long as problems are mostly Boolean, without a heavy numerical/optimisation component. Also, CSP solvers such as Sugar [60] or Gecode [59], which heavily focus on their rich constraint language, are in general very far from commercial Operations Research solvers on hard ILP optimisation problems. Because of that, these tools have not been considered in our experimental comparison either.
Finally, another solver very close to our methods is CutSat [41], which also attempts to generalise CDCL from SAT to ILP. As argued in Section 3.1, the approach that CutSat follows to work around the rounding problem poses important limitations, which translate in practice into a rather poor performance when compared to state-of-the-art ILP solvers, typically orders of magnitude slower [49]. Moreover, its current implementation can only handle feasibility problems and has no optimisation features.

Instances
The instances used in the experiments were taken from the MIPLIB Mixed Integer Programming Library. The latest edition of the library (MIPLIB 2017, available at https://miplib.zib.de) organises benchmarks into two sets: the Benchmark Set, which contains 240 instances that are solvable by (the union of) today's codes and were chosen 'subject to various constraints regarding solvability and numerical stability'; and the much larger Collection Set, which represents a diverse selection of instances [31]. See the MIPLIB website for statistics of each instance, including among others the number of continuous, integer and binary variables, the number of non-zeroes in the coefficient matrix, and the number of constraints in different classes: set covering, set packing, set partitioning, cardinality, knapsack, etc.
From these two sets of instances, we picked those that: (1) do not contain continuous variables, and (2) have lower and upper bounds for all variables, and (3) do not contain constraints or objective functions with fractional coefficients with many (four or more) decimal digits, and (4) contain at least one integer non-binary variable, and (5) are known to be infeasible or have a feasible solution, according to MIPLIB 2017.
Restrictions 1-3 in the above list are due to the limitations of our algorithms. In particular, to ensure that all coefficients are integer, the constraints and objective functions with fractional coefficients are multiplied by appropriate powers of 10. Moreover, instances with only binary variables were discarded, as our goal here is non-binary problems; to deal with binary linear programming, a specialised implementation would perform much better (see Section 6.4). Finally, instances with an 'open' status at MIPLIB 2017 (that is, which are still not known to have a feasible solution or not) were not included, after confirming that, within a reasonable amount of time, no solver could produce an answer for them.
After applying this filtering, we obtained 29 benchmarks from the Benchmark Set and 40 from the Collection Set. We also added to our selection the three instances from MIPLIB 2010 that were used in our earlier work [49] and that have been removed from the current 2017 edition of the library. Altogether we compiled a test suite consisting of 72 benchmarks.
In order to present the results in a coherent way, in what follows these instances will be classified depending on whether they admit a feasible solution or not, and whether they are feasibility or optimisation problems.Hence, out of the 72 instances of the benchmark suite, it turns out that 6 are infeasible: cryptanalysiskb128n5obj14, no-ip-64999, no-ip-65059, fhnw-sq3, neos-3211096-shag and neos859080.In fact, all of these but neos859080 are actually feasibility problems, i.e., there is no objective function.The remaining 66 feasible instances of the test suite contain both feasibility and optimisation problems.As regards the former there are 6 feasibility instances, namely: cryptanalysiskb128n5obj16, neos-3004026-krka, fhnw-sq2, lectsched-1, lectsched-2 and lectsched-3.The rest of the benchmark suite consists of 60 feasible optimisation instances.

Results
The time limit for all executions was set to 1 hour of wall-clock time. However, it is important to highlight that our implementation of IntSat (as well as GLPK and SCIP) is sequential and only uses one core, while CPLEX and Gurobi are run in parallel mode. Therefore, they may use (and often do use) all of the eight cores that are available in the computer used for the experiments.

Infeasible instances
Table 1 summarizes the results on infeasible benchmarks. There is a row for each instance. As regards columns, the first three describe size properties of the instance: number of constraints, (total) number of variables and number of (non-binary) integer variables. Recall that the number of continuous variables is always zero here. The rest of the columns show the time in seconds that each solver takes to prove infeasibility: (1) grb stands for Gurobi; (2) cpx stands for CPLEX; (3) scip stands for SCIP; (4) glpk stands for GLPK; (5) isr (short for 'IntSat Resolution') stands for our core IntSat algorithm with the implementation of the procedure Conflict analysis Backjump Learn as described in Section 3.4; (6) isc (short for 'IntSat Cuts') is similar to the previous solver, but with Conflict analysis Backjump Learn implemented as in Section 3.5.
The timing TO stands for time out. The fastest solver for each problem (if any) is highlighted in bold face.
First of all, we remark on the difficulty of most of these instances. Even for a tool like CPLEX, infeasibility cannot be proved within the time limit of 1 hour for half of the benchmarks; and over the 6 instances, Gurobi times out twice. Having said that, it appears that our techniques are not especially appropriate for this kind of problem: infeasibility can be proved only for a single instance. However, it is worth highlighting the complementarity of the techniques: on this particular instance, cryptanalysiskb128n5obj14, which turns out to be a difficult problem even for Gurobi, the overall best solver, our solver isc performs comparatively well.

Feasible feasibility instances
Table 2 shows the results obtained on feasible feasibility benchmarks. The format of the table is as in Table 1, but here the last six columns indicate the time in seconds that each solver takes to prove feasibility (that is, to find a solution, given that in these instances there is no objective function).
As can be seen in the table, for these feasibility problems our IntSat solvers turn out to perform reasonably well. Either of the two uniformly performs better than any other solver, even CPLEX and Gurobi. Although admittedly the lectsched-* instances are rather easy, the other three are not (e.g., CPLEX times out on all of them, even with the eight available cores). Moreover, in the particular case of the instance cryptanalysiskb128n5obj16, our IntSat solvers are in fact the only ones that could find a solution. It is worth noting that isc solved this instance in roughly 38 seconds.

Feasible optimisation instances
In this subsection we report the results on feasible optimisation instances. Here the focus will be on the ability of the solvers to find good solutions quickly, rather than their strength in proving optimality. We do so motivated by practical considerations. Indeed, for many real-life instances it is simply impossible to certify optimality in the allotted time. As noted in [40], this can consume a huge fraction of the long overall running time. This is natural, as optimal solutions can be discovered using good heuristic guidance and may involve exploring only a small part of the space of solutions, while proving optimality, on the other hand, involves reasoning over the whole search space. For this precise reason, MILP solvers based on the simplex method and branch-and-cut actually only look for new solutions until they consider they are 'close enough' to the optimum, according to their MILP gap tolerance.
In consequence, in what follows feasible optimisation instances are divided into two groups, depending on whether or not an optimal solution could be found in the executions with the different solvers. In order to identify solutions as optimal without requiring solvers to effectively prove optimality, we use the optimal value of the objective function as reported in MIPLIB.
Namely, Table 3 displays the results on the 49 instances for which at least one solver could find an optimal solution within the time limit of 1 hour. The format of the table is as in Tables 1 and 2, but now the last six columns indicate the time in seconds that each solver takes to find an optimal solution. Here the timing TO means that the solver timed out before finding an optimal solution (although it may have obtained other, worse solutions, which is not represented in the table for the sake of succinctness).
Table 4 shows the results on the remaining 11 instances. Given that none of the solvers could find an optimal solution, this table is different in that it does not display timings but how the objective function value of the best solution evolved along time. The format is similar to that of the previous tables, with the difference that now each problem spans five rows, indicating the value of the best solution that could be found within the first 1 minute, 5 minutes, 15 minutes, 30 minutes and 60 minutes, respectively. Moreover, there is an additional final column with these time lapses to help reading the results. A dash means that the time lapse passed without discovering any solution.
For each time lapse, the solver with the best objective value over all is highlighted in bold face.
First of all, let us compare our IntSat solvers between themselves. Out of the 49 instances in Table 3, there are 7 instances in which isr gives better results than isc, and among these, 1 instance in which isc actually times out. On the other hand, there are 15 instances in which isc is better than isr, and among these, 9 instances in which isr times out. As regards Table 4, isr is clearly superior to isc in instance rococoC12-111000, while the reverse occurs in instances supportcase1, comp12-2idx, comp21-2idx, rococoC12-010001, ns1854840, proteindesign121hz512p19 and nursesched-medium-hint03.
Altogether, although the conflict analysis with cuts and early backjumps as described in Section 3.5 usually pays off, there is a significant number of instances on which a simpler conflict analysis yields better results. Given that the two variants are currently run sequentially, this complementarity suggests that a portfolio approach running both of them in parallel would be in order.
Now let us contrast our solvers with SCIP and GLPK. To simplify the comparison, for each instance we will consider the best of isr and isc, and the best of SCIP and GLPK. Regarding Table 3, we see that the best of our solvers was superior to the best of SCIP and GLPK in 20 instances, and the other way around in 19 instances. As for Table 4, our solvers were better in instances supportcase1, neos-3214367-sovi, proteindesign121hz512p19, neos-4360552-sangro and ns1854840, and the other way around for instances rococoC12-010001 and rococoC12-111000. In short, our solvers tend to perform slightly better than SCIP and GLPK.
Finally, let us compare our solvers with CPLEX and Gurobi. Again, for the sake of simplicity, for each instance we will consider the best of isr and isc, and the best of CPLEX and Gurobi. We see that in Table 3 the best of our solvers was superior to the best of CPLEX and Gurobi in 9 instances, and the other way around in 39 instances. As for Table 4, our solvers were better in instances supportcase1, neos-3214367-sovi and ns1854840, and the other way around for instances comp12-2idx, rococoC12-010001, neos-4360552-sangro, nursesched-medium-hint03, comp21-2idx and rococoC12-111000. Although on the whole CPLEX and Gurobi are admittedly superior, there are about 20% of the instances for which our solvers obtain better results, which is a non-negligible percentage. It should also be remembered that, while isr and isc are sequential and only use one core, CPLEX and Gurobi are parallel and often use all eight available cores.

Conclusions of the Experiments
Altogether, the results of the experiments in Sections 5.3.1, 5.3.2 and 5.3.3 indicate that both our IntSat algorithms can be helpful on optimisation problems for finding first solutions of relatively good quality. This is even more the case for feasibility problems, which are more amenable to our methods since optimisation is not natively supported. Our results thus confirm the well-known fact that depth-first search, which is the search strategy that CDCL solvers implement, is particularly appropriate for finding solutions fast. In fact, a typical search strategy in ILP solvers consists in first performing depth-first search in order to get a feasible solution and thus enable cost-based pruning in branch-and-bound, and then applying best-first search, so as to put more emphasis on the quality of the solutions. Taking this into account, we consider that, in the diverse toolkit of techniques that ILP solvers implement, IntSat could provide an edge when feasibility is prioritised over optimality, or in the first stages of the search.

Future Work
A large number of further ideas around IntSat and its implementation are yet to be explored, among which we sketch the most relevant ones in this section.

Unboundedness
The current implementation assumes that for each variable there is an initial lower bound and an upper bound. This is used in the proof of Theorem 3.10 to ensure the termination of the core IntSat algorithm. Although it is common that this condition holds in real-life applications, some instances do have unbounded variables. In these cases non-termination may manifest itself in, e.g., bound propagation. For example, consider the set of constraints consisting of C1: x − y ≤ 0 and C2: −x + y + 1 ≤ 0, which clearly has no solution. If we decide the bound 0 ≤ x, constraint C1 propagates 0 ≤ y, then C2 propagates 1 ≤ x, then constraint C1 propagates 1 ≤ y, and so on indefinitely, resulting in an endless chain of propagations. Hence, even bound propagation is not guaranteed to terminate for unbounded infeasible problems.
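This endless chain is easy to observe concretely. The sketch below (our own illustration, not part of the IntSat implementation) tracks only the lower bounds of x and y and applies the two propagations alternately; the bounds grow without limit instead of reaching a fixed point:

```python
# Simulated bound propagation on the infeasible, unbounded system
#   C1: x - y <= 0  (i.e. x <= y)   and   C2: -x + y + 1 <= 0  (i.e. y + 1 <= x).
# Starting from the decision 0 <= x, C1 and C2 alternately strengthen the
# lower bounds of y and x forever; we cap the loop to observe the divergence.

def propagate_chain(steps):
    """Lower bounds of (x, y) after `steps` alternating propagations."""
    lb_x, lb_y = 0, None        # decide the bound 0 <= x
    for i in range(steps):
        if i % 2 == 0:
            lb_y = lb_x         # C1 propagates lb(x) <= y
        else:
            lb_x = lb_y + 1     # C2 propagates lb(y) + 1 <= x
    return lb_x, lb_y

print(propagate_chain(4))       # -> (2, 1)
print(propagate_chain(100))     # -> (50, 49): no fixed point is ever reached
```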
In theory, any ILP can be converted into an equivalent fully bounded one [58].Unfortunately, these bounds turn out to be too large to be useful in practice.
A pragmatic solution to handle unbounded variables is to introduce a fresh auxiliary variable z with lower bound 0 ≤ z, and for each variable x without lower bound add the constraint −z ≤ x, and similarly, if x has no upper bound, add x ≤ z. Then one can re-run the algorithm with successively larger upper bounds z ≤ ub for z, thus guaranteeing completeness for finding (optimal) solutions.
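A possible rendering of this scheme is sketched below (function names and the constraint encoding are ours, not the paper's code). Note that on a truly infeasible problem the retry loop never terminates, which matches the claim of completeness only for finding (optimal) solutions:

```python
# Sketch of the auxiliary-variable scheme: constraints are pairs (coeffs, rhs)
# encoding sum(coeffs[v] * v) <= rhs. We add -z <= x and x <= z for every
# unbounded variable x, plus 0 <= z <= z_ub, and retry with a larger z_ub
# whenever the bounded problem turns out to be infeasible.

def with_artificial_bounds(constraints, unbounded_vars, z_ub):
    ext = list(constraints)
    for x in unbounded_vars:
        ext.append(({x: -1, "z": -1}, 0))   # -x - z <= 0, i.e. -z <= x
        ext.append(({x: 1, "z": -1}, 0))    #  x - z <= 0, i.e.  x <= z
    ext.append(({"z": -1}, 0))              # -z <= 0,    i.e.  0 <= z
    ext.append(({"z": 1}, z_ub))            #  z <= z_ub
    return ext

def solve_with_increasing_bounds(constraints, unbounded_vars, solve, start=16):
    """`solve` is any complete solver for *bounded* ILPs that returns a
    solution or None; the box is enlarged geometrically between runs."""
    z_ub = start
    while True:
        solution = solve(with_artificial_bounds(constraints, unbounded_vars, z_ub))
        if solution is not None:
            return solution
        z_ub *= 16
```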
An alternative strategy for handling unbounded variables is to apply the so-called bounding transformation presented in [15].This allows reducing any problem to an equisatisfiable bounded one via a Mixed-Echelon-Hermite transformation composed with a double-bounded reduction.

Restarts and Constraint Database Management
From the practical point of view, our current implementation mimics several ideas from CDCL SAT solving without having tested them thoroughly.
For instance, like SAT solvers, our basic implementation applies periodic restarts. We currently follow a policy that triggers a restart when the number of conflicts reaches a threshold. The user can specify whether this threshold is determined according to the Luby sequence [44] or to an inner-outer geometric series [10]. Moreover, each of these strategies has a number of parameters with a significant effect on performance and whose values are still to be tuned.
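For reference, the Luby sequence can be generated as follows (a standard textbook implementation, not taken from the IntSat code); with a base restart interval c, the i-th restart would then be triggered after c · luby(i) conflicts:

```python
# The Luby sequence 1,1,2,1,1,2,4,1,1,2,1,1,2,4,8,... used for restart
# scheduling [44]: term i equals 2^(k-1) when i = 2^k - 1, and otherwise
# the sequence repeats itself within the current block.

def luby(i):
    """The i-th term (1-based) of the Luby sequence."""
    k = 1
    while (1 << k) - 1 < i:                  # smallest k with i <= 2^k - 1
        k += 1
    if i == (1 << k) - 1:
        return 1 << (k - 1)                  # i sits at the end of a block
    return luby(i - ((1 << (k - 1)) - 1))    # otherwise recurse inside the block

print([luby(i) for i in range(1, 16)])
# -> [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
```

With, say, a base interval of 100 conflicts, restarts would happen after 100, 100, 200, 100, ... conflicts.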
Another aspect that requires further investigation is the cleanup policy for the constraint database: at each cleanup, we remove all non-initial constraints with more than two monomials and activity counter equal to 0. This activity counter is increased each time the constraint is a conflicting or reason constraint at conflict analysis, and is divided by 2 at each cleanup. Cleanups are done periodically in such a way that the constraint database grows rather slowly over time. The policy that is currently implemented triggers a cleanup after the number of new learnt constraints reaches a threshold, or the memory used by learnt constraints exceeds a space limit. We think there is still room for improvement in adjusting the parameters of this strategy on experimental grounds. Furthermore, other cleanup strategies can be devised, e.g., by generalising the heuristics based on the literal block distance [7] that are used in state-of-the-art SAT solvers with great success.
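The described policy can be sketched as follows (a simplified illustration with our own data representation, not the actual implementation):

```python
# Activity-based cleanup of the learnt-constraint database, as described in
# the text: the activity counter is bumped whenever a constraint acts as
# conflicting or reason constraint, halved at each cleanup, and learnt
# constraints with more than two monomials and activity 0 are deleted.

class LearntConstraint:
    def __init__(self, monomials, rhs):
        self.monomials = monomials      # list of (coefficient, variable) pairs
        self.rhs = rhs
        self.activity = 0

def bump(constraint):                   # called during conflict analysis
    constraint.activity += 1

def cleanup(database):
    kept = []
    for c in database:
        if len(c.monomials) > 2 and c.activity == 0:
            continue                    # delete: long and not recently useful
        c.activity //= 2                # decay activity for the next period
        kept.append(c)
    return kept
```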
Finally, modern SAT solvers heavily apply pre- and in-processing techniques [27,30] to keep the constraint database small but strong. These techniques could be worth applying also in the context of CDCL solvers for ILP.

Conflict analysis
Several ideas can be explored for conflict analysis. Instead of taking the extreme position of always trying to apply an early backjump (as in Section 3.5) or never (as in Section 3.4), one can, for instance, attempt an early backjump with the intermediate conflicting constraint C only if it is false in the current stack, or promising (e.g., short) according to some heuristic. Other conflict analysis algorithms, mixing resolution with reason and conflicting sets and cuts with reason and conflicting constraints, can also be designed.
In another line of investigation, the quality of the backjumps and the strength of the reason sets could be improved by doing some more work: e.g., instead of using the pre-stored reason sets, one can re-compute them on the fly during conflict analysis with the aim of maximising the length of the backjump. One can also do a bit of search during conflict analysis, e.g., by trying to remove non-topmost bounds and do cuts with these, with the aim of finding good early backjump cuts.

Binary ILP and Mixed ILP
Binary ILP is a particular case of ILP, and as such the algorithms presented here are directly applicable. However, given the prevalence of this kind of problem, it is worthwhile to specialise the design and implementation of the data structures and subprocedures, e.g. in the propagation mechanism. Preliminary experiments with a binary ILP solver prototype [51] already show a significant speed-up in running time in comparison with the general basic implementation.
In the opposite direction, it also remains to be worked out how to apply the proposed algorithms to solve mixed ILP (MILP) instances, i.e., those where not all variables are subject to integrality. For instance, one could decide on the integer variables as is done now, and at any desired point run an LP solver to optimise the values of the rational variables. The inclusion of lower bounding techniques, well known from modern MILP solvers, needs to be considered as well.

Related Work and Conclusions
The success of SAT solving has spurred a number of research projects that attempt to extend CDCL-related techniques to ILP and MILP. Some of these come from the traditional MILP community. For example, in [3] conflict analysis is generalised to mixed integer programming thanks to special heuristics for branch-and-cut. These techniques are implemented in SCIP, one of the solvers used in the experiments in Section 5.
Other works come from the SAT area, such as the aforementioned CutSat presented in [41], which has later been refined and extended in [16]. Unlike in these works, our cut-based reasoning does not replace but is performed in parallel to resolution-based reasoning with reason and conflicting sets.
In this same direction, the idea of applying reason and conflicting sets is reminiscent not only of the conflict analysis of SAT, but also of that of SAT Modulo Theories (SMT) [8,52] for the theory of linear arithmetic. The main difference, among others, is that here new ILP constraints are obtained by cut inferences, normalised and learned, and not only new Boolean clauses that are disjunctions of literals representing bounds (usually only those that occur in the input formula). Other SAT/SMT-related work, but for rational arithmetic, is [21,43,46].
As outlined in Section 3.5, it is also worth mentioning that there may be some possible theoretical and practical consequences of the fact that our algorithm's underlying cutting planes proof system is stronger than CDCL's resolution proof system: could we outperform SAT solvers on certain SAT problems (e.g., pigeon-hole-like ones) for which no short resolution proofs exist? A similar question applies to current SMT solvers, which are based on resolution as well [48].
It seems unlikely that for ILP or MILP solving one single technique can dominate the others; the best solvers will probably continue combining different methods from a large toolbox, which perhaps will also include this work at some point. Still, IntSat already appears to be the first alternative method for ILP that, without resorting to the simplex algorithm or LP relaxations, turns out to be competitive on hard optimisation problems. We expect that, given its large potential for enhancement, this work will trigger further research activity, in particular along the lines sketched in Section 6.
As regards the second claim, we observe that the constraint is false if and only if min_A(a_1 x_1 + ... + a_n x_n) > a_0. And since A is a set of bounds, min_A(a_1 x_1 + ... + a_n x_n) = min_A(a_1 x_1) + ... + min_A(a_n x_n).
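In code, this falsehood test amounts to summing, per monomial, the minimum of a_i x_i over the bounds: a_i times the lower bound of x_i when a_i > 0, and a_i times the upper bound otherwise. A small sketch under our own data representation:

```python
# Falsehood test for a constraint a1*x1 + ... + an*xn <= a0 under a bound set
# giving a lower bound lb and an upper bound ub per variable:
# min_A(ai*xi) is ai*lb(xi) when ai > 0, and ai*ub(xi) when ai < 0.

def min_term(a, lb, ub):
    return a * lb if a > 0 else a * ub

def is_false(coeffs, a0, lb, ub):
    """coeffs: dict var -> ai; lb/ub: dicts var -> bound.
    The constraint is false iff the minimum of its lhs exceeds a0."""
    return sum(min_term(a, lb[x], ub[x]) for x, a in coeffs.items()) > a0

# x + 2y <= 3 with 2 <= x <= 5 and 1 <= y <= 4: min lhs = 2 + 2 = 4 > 3
print(is_false({"x": 1, "y": 2}, 3, {"x": 2, "y": 1}, {"x": 5, "y": 4}))  # True
```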
Proof of Lemma 3.5. Let us consider the case a_j > 0. Let us assume that there exists a solution sol to {C} ∪ R with sol(x_j) = v > ⌊e_j⌋, and we will get a contradiction. If v > ⌊e_j⌋ then v ≥ ⌊e_j⌋ + 1, which implies v > e_j, that is, a_j v > a_j e_j. Expanding the definition of e_j we get a_j v > a_0 − Σ_{i≠j} min_R(a_i x_i), or equivalently a_j v + Σ_{i≠j} min_R(a_i x_i) > a_0, which by Lemma 3.4 implies that {C} ∪ R ∪ {v ≤ x_j, x_j ≤ v} has no solution, a contradiction.
The case a_j < 0 is similar. If there exists a solution sol to {C} ∪ R with sol(x_j) = v < ⌈e_j⌉, then v ≤ ⌈e_j⌉ − 1, which implies v < e_j, that is, a_j v > a_j e_j. The proof then follows in the same way as in the case a_j > 0.
Proof of Lemma 3.7. By Lemma 3.4, C_1 is false in R ∪ {x_j ≤ e_j} if and only if Σ_i min_{R∪{x_j ≤ e_j}}(a_i x_i) > a_0. But min_{R∪{x_j ≤ e_j}}(a_j x_j) = a_j e_j, as a_j < 0. So we have a_j e_j + Σ_{i≠j} min_R(a_i x_i) > a_0. By expanding the definition of e_j and multiplying both sides by b_j > 0, we get a_j b_0 − a_j Σ_{i≠j} min_R(b_i x_i) + b_j Σ_{i≠j} min_R(a_i x_i) > a_0 b_j.

This, together with S |= RC and CC ∧ RC |= CC′, implies that S |= CC′, which completes the proof of invariance. As a byproduct, this also proves property 5 in the definition of validity.
Let us show property 3 in the definition of validity. We have to see that the assignment A can be decomposed as N D M, where N and M are sequences of bounds and D is a decision bound, and that A′ is of the form N B, where B is a fresh bound in N. If Backjump is applied, this holds following the same argument as in the proof of Theorem 3.13. On the other hand, if Early Backjump is applied, let D be the last bound that is popped, which by construction is a decision. So the assignment A can be decomposed as N D M, where N and M are sequences of bounds and D is a decision bound. Moreover, A′ is of the form N B′, where by definition B′ is a fresh bound in N. Hence property 3 is also true in this case.
Finally, regarding property 4 in the definition of validity, again following the same argument as in the proof of Theorem 3.13, it is straightforward to see that it holds for bounds that are pushed in Backjump. For bounds that are pushed in Early Backjump, let D be the last bound that is popped, which as observed above is a decision. Hence the assignment A can be decomposed as N D M. If B′ is the bound that is pushed and CC and RS′ are its reason constraint and its reason set, then by definition RS′ ⊆ N and CC ∪ RS′ |= B′, which together with S |= CC implies S ∪ RS′ |= B′.

(3) The assignment A can be decomposed as N D M, where N and M are sequences of bounds and D is a decision bound, and A′ is of the form N B, where B is a fresh bound in N. (4) Bound B has a reason set RS ⊆ N such that S ∪ RS |= B. (5) If bound B has a reason constraint RC then S |= RC.

Lemma 4.3. Let C_1 be a constraint of the form a x + a_1 y_1 + ... + a_n y_n ≤ a_0 and C_2 of the form −b x + b_1 z_1 + ... + b_m z_m ≤ b_0, where a, b > 0 and the variables y_i, z_j are pairwise different. Let C_3 be the result of applying a cut inference eliminating x: C_3: b a_1 y_1 + ... + b a_n y_n + a b_1 z_1 + ... + a b_m z_m ≤ b a_0 + a b_0. Then C_3 propagates nothing more than C_1 and C_2 do.

a_j b_0 − a_j Σ_{i≠j} min_R(b_i x_i) + b_j Σ_{i≠j} min_R(a_i x_i) > a_0 b_j. Moreover, since −a_j > 0 and b_j > 0, Σ_{i≠j} min_R(−a_j b_i x_i) + Σ_{i≠j} min_R(b_j a_i x_i) ≤ Σ_{i≠j} min_R((−a_j b_i + b_j a_i) x_i).

Altogether, Σ_{i≠j} min_R((−a_j b_i + b_j a_i) x_i) > a_0 b_j − a_j b_0, and by Lemma 3.4 we have proved that C_3 is false in R.
1. Conflict analysis: Let us define CS as the Conflicting Set of bounds. Initially, CS is a subset of bounds of A causing the falsehood of C for a certain conflict C in S, i.e., C ∈ S is false in CS ⊆ A. Invariants: CS ⊆ A and S ∪ CS is infeasible. Let B be the bound in CS that is topmost in A, and RS its reason set. Replace CS by (CS \ {B}) ∪ RS. Repeat this until CS contains a single bound B_top that is, or is above, A's topmost decision.
2. Backjump: Assign A′ a copy of A. Pop bounds from A′ until either there are no decisions in A′ or, for some B in CS with B ≠ B_top, there are no decisions above B in A′. Then push B_top onto A′ with associated reason set CS \ {B_top}.
3. Learn: If {¬B | B ∈ CS}, viewed as a clause of bounds, can be converted into an equivalent constraint CC, let T = {CC}; else let T = ∅.

Next let us argue that, as claimed in the introduction to this section, procedure Conflict analysis Backjump Learn reproduces the conflict analysis from SAT based on resolution. To that end, it is convenient to view the algorithm as working on the logical negations of the conflicting set and the reason sets. Namely, one of the invariants of conflict analysis is that S ∪ CS is infeasible. Equivalently, in a more logical notation, we can say that S ∧ (⋀_{B∈CS} B) is unsatisfiable, or that S |= ⋁_{B∈CS} ¬B. Similarly, the reason set of a non-decision bound B′ satisfies S ∪ RS |= B′, which can be reformulated logically as S |= B′ ∨ ⋁_{B∈RS} ¬B. Drawing a parallel with SAT, the clause ⋁_{B∈CS} ¬B would play the role of the conflicting clause, while the clause B′ ∨ ⋁_{B∈RS} ¬B would correspond to the reason clause of B′. From this viewpoint, let us consider the step in the above algorithm in which the propagation of bound B′ with reason set RS is unfolded and CS is replaced by (CS \ {B′}) ∪ RS. This can be interpreted as a resolution inference eliminating the 'literal' ¬B′ between the clause ⋁_{B∈CS} ¬B, viewed as ¬B′ ∨ (⋁_{B∈CS\{B′}} ¬B), and the clause B′ ∨ (⋁_{B∈RS} ¬B). Note that the resolvent is precisely the clause corresponding to the new conflicting set.
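The conflict-analysis loop of step 1 can be sketched operationally as follows (an illustrative rendering with our own data representation: the trail A is a list of bounds, `reasons` maps each propagated bound to its reason set, and `top_decision_idx` is the position of A's topmost decision; by the invariants, every bound that gets unfolded is a propagated one):

```python
# Resolution-style conflict analysis: repeatedly replace the bound of CS that
# is topmost in the trail by its reason set, until exactly one bound of CS is
# at or above the topmost decision; that bound is B_top and the rest of CS
# becomes its reason set for the backjump.

def analyze(trail, conflicting_set, reasons, top_decision_idx):
    pos = {b: i for i, b in enumerate(trail)}       # position of each bound in A
    cs = set(conflicting_set)
    while True:
        at_or_above = [b for b in cs if pos[b] >= top_decision_idx]
        if len(at_or_above) == 1:
            b_top = at_or_above[0]
            return b_top, cs - {b_top}              # reason set CS \ {B_top}
        b = max(cs, key=pos.get)                    # topmost bound of CS in A
        cs = (cs - {b}) | set(reasons[b])           # unfold its propagation
```

For instance, with trail [a1, d1, b1, b2] (d1 the only decision) and reasons b1: {d1, a1}, b2: {b1}, analysing the conflicting set {b1, b2, a1} unfolds b2 and yields B_top = b1 with reason set {a1}.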

1. Conflict analysis: Let us define CC as the Conflicting Constraint, and CS as the Conflicting Set of bounds. Initially, CC is any conflict and CS is a subset of bounds of A causing the falsehood of CC, i.e., CC ∈ S is false in CS ⊆ A. Invariants: S |= CC, CS ⊆ A, and S ∪ CS is infeasible. Repeat:
1.1. Let B be the bound in CS that is topmost in A, and RS its reason set. Replace CS by (CS \ {B}) ∪ RS.
1.2. Let x and RC be the variable and the reason constraint of B, respectively. If RC is defined and there exists a cut eliminating x between CC and RC, then replace CC by that cut.
1.3. Early Backjump: If for some maximal k ∈ ℕ, after popping k bounds from A, the last one being a decision, the conflicting constraint CC and a subset RS′ of bounds of the resulting A propagate some fresh bound B′, then assign A′ a copy of A, pop k bounds from A′, push B′ onto A′ with associated reason constraint CC and reason set RS′, and go to Learn.
until CS contains a single bound B_top that is, or is above, A's topmost decision.
2. Backjump: Assign A′ a copy of A. Pop bounds from A′ until either there are no decisions in A′ or, for some B in CS with B ≠ B_top, there are no decisions above B in A′. Then push B_top onto A′ with associated reason constraint CC and reason set CS \ {B_top}.
3. Learn: Let T = {CC}.

Example 3.14. Again let us consider Example 3.2, and let us apply the procedure

Table 3. Feasible optimisation instances: time for finding the optimal solution.

Table 4. Feasible optimisation instances: cost of best solution after different run times.