A Hybrid Method for Planning and Scheduling

We combine mixed integer linear programming (MILP) and constraint programming (CP) to solve planning and scheduling problems. Tasks are allocated to facilities using MILP and scheduled using CP, and the two are linked via logic-based Benders decomposition. Tasks assigned to a facility may run in parallel subject to resource constraints (cumulative scheduling). We solve minimum cost problems, as well as minimum makespan problems in which all tasks have the same release date and deadline. We obtain computational speedups of several orders of magnitude relative to the state of the art in both MILP and CP.

We address a fundamental class of planning and scheduling problems for manufacturing and supply chain management. Tasks must be assigned to facilities and scheduled subject to release dates and deadlines. Tasks may run in parallel on a given facility provided the total resource consumption at any time remains within limits (cumulative scheduling). In our study the objective is to minimize cost or minimize makespan.
The problem naturally decomposes into an assignment portion and a scheduling portion. We exploit the relative strengths of mixed integer linear programming (MILP) and constraint programming (CP) by applying MILP to the assignment problem and CP to the scheduling problem. We then link the two with a logic-based Benders algorithm.
We obtain speedups of several orders of magnitude relative to the existing state of the art in both mixed integer programming (CPLEX) and constraint programming (ILOG Scheduler). As a result we solve larger instances to optimality than could be solved previously. In minimum makespan problems, the Benders method provides a feasible solution and a lower bound on the optimal makespan even when it is terminated before finding a provably optimal solution.

The Basic Idea
Benders decomposition solves a problem by enumerating values of certain primary variables. For each set of values enumerated, it solves the subproblem that results from fixing the primary variables to these values. Solution of the subproblem generates a Benders cut (a type of nogood) that the primary variables must satisfy in all subsequent solutions enumerated. The next set of values for the primary variables is obtained by solving the master problem, which contains all the Benders cuts so far generated.
In this paper, the primary variables define the assignment of tasks to facilities, and the master problem is the assignment problem augmented with Benders cuts. The subproblem is the set of cumulative scheduling problems (one for each facility) that result from a given assignment.
In classical Benders decomposition [2,6], the subproblem is always a continuous linear or nonlinear programming problem, and there is a standard way to obtain Benders cuts. In a logic-based Benders method, the subproblem is an arbitrary optimization problem, and a specific scheme for generating cuts must be devised for each problem class by solving the inference dual of the subproblem. In the present context, the Benders cuts must also be linear inequalities, since the master problem is an MILP. It is also important in practice to augment the master problem with a linear relaxation of the subproblem.
The main contribution of this paper is to develop effective linear Benders cuts and subproblem relaxations for (a) minimum cost problems with cumulative scheduling, and (b) minimum makespan problems with cumulative scheduling in which all tasks have the same release date and deadline.

Previous Work
Logic-based Benders decomposition was introduced by Hooker and Yan [10] in the context of logic circuit verification. The idea was formally developed in [7] and applied to 0-1 programming by Hooker and Ottosson [9].
Application of logic-based Benders to planning and scheduling was proposed in [7], which suggested solving the master problem by MILP and the subproblem by CP. Jain and Grossmann [11] first successfully applied this approach. They solved minimum-cost planning and scheduling problems in which the subproblems are one-machine disjunctive (rather than cumulative) scheduling problems. The Benders cuts are particularly simple in this case because the subproblem is a feasibility problem rather than an optimization problem. Two goals of the present paper are (a) to accommodate cumulative scheduling, and (b) to develop Benders cuts when the subproblem is an optimization problem, as in the case of minimum makespan problems.
In related work, we observed in [7] that the master problem need only be solved once by a branching algorithm that accumulates Benders cuts as they are generated. Thorsteinsson [17] showed that this approach, which he called branch-and-check, can result in substantially better performance on the Jain and Grossmann problems than standard logic-based Benders. We did not implement branch-and-check for this study because it would require hand coding of a branch-and-cut algorithm for the master problem. But we obtained substantial speedups without it.
A shorter version of this paper appeared as [8]. In the same proceedings volume, a paper by Cambazard et al. [3] applies a logic-based Benders method to real-time allocation and scheduling of computing resources. The master problem allocates tasks to networked processors, and the subproblem schedules the tasks so as to meet deadlines. Since both the master problem and subproblem are solved with constraint technology, the algorithm represents a purely CP rather than a hybrid approach.
Classical Benders decomposition can also be useful in a CP context, as shown by Eremin and Wallace [5].

The Problem
The planning and scheduling problem may be defined as follows. Each task j ∈ {1, …, n} is to be assigned to a facility i ∈ {1, …, m}, where it consumes processing time p_ij and resources at the rate c_ij. Each task j has release time r_j and deadline d_j (viewed as a due date in the case of minimum tardiness problems). The tasks assigned to facility i must be given start times t_j in such a way that the total rate of resource consumption on facility i is never more than C_i at any given time.
We investigate two objective functions. If we let x_j be the facility assigned to task j, the cost objective function is g(x, t) = Σ_j f_{x_j j}, where f_ij is the fixed cost of processing task j on facility i. The makespan objective function is g(x, t) = max_j {t_j + p_{x_j j}}.

Constraint Programming Formulation
The problem is succinctly written using the constraint cumulative(t, p, c, C), which requires that tasks be scheduled at times t = (t_1, …, t_n) so that the total rate of resource consumption at any given time never exceeds C. Thus Σ_{j ∈ J_t} c_j ≤ C for all t, where J_t = {j | t_j ≤ t ≤ t_j + p_j} is the set of tasks underway at time t.
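For intuition, the feasibility condition that cumulative enforces can be checked directly on a candidate schedule. A minimal sketch (the CP solver of course enforces the constraint by propagation such as edge finding rather than by an after-the-fact check; we use the usual half-open convention for task intervals):

```python
def cumulative_ok(starts, durations, demands, capacity):
    """Check that total resource usage never exceeds `capacity` when
    task j runs over the interval [starts[j], starts[j] + durations[j])."""
    # Usage can only increase at a task's start time, so it suffices
    # to check the usage profile at each start.
    for t in starts:
        usage = sum(c for s, p, c in zip(starts, durations, demands)
                    if s <= t < s + p)
        if usage > capacity:
            return False
    return True
```

For example, two unit-demand tasks plus a later demand-2 task fit under capacity 2, while overlapping them does not.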
The planning and scheduling problem becomes

  minimize g(x, t)
  subject to cumulative((t_j | x_j = i), (p_ij | x_j = i), (c_ij | x_j = i), C_i), all i
             r_j ≤ t_j ≤ d_j − p_{x_j j}, all j     (1)

where g(x, t) is the desired objective function and (t_j | x_j = i) denotes the tuple of start times for tasks assigned to facility i. The second constraint enforces the time windows.

Mixed Integer Programming Formulation
The most straightforward MILP formulation discretizes time and enforces the resource capacity constraint at each discrete time. Let the 0-1 variable x_ijt = 1 if task j starts at discrete time t on facility i. The formulation is

  min g(x, t)
  subject to (a) Σ_{i,t} x_ijt = 1, all j
             (b) Σ_j Σ_{t′ ∈ T_ijt} c_ij x_ijt′ ≤ C_i, all i, t
             (c) x_ijt = 0, all i, j, and t with t < r_j or t > d_j − p_ij     (2)

where N is the number of discrete times (starting with t = 0) and T_ijt = {t′ | t − p_ij < t′ ≤ t} is the set of discrete times at which a task j in progress on facility i at time t might start processing. Constraint (a) ensures that each task starts once on one facility, (b) enforces the resource limit, and (c) the time windows. The cost objective is g(x, t) = Σ_ijt f_ij x_ijt. The makespan objective is g(x, t) = z, together with the constraints z ≥ Σ_it (t + p_ij) x_ijt for all j.
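The index sets T_ijt used in constraint (b) are easy to tabulate directly; a small sketch (the function name is ours):

```python
def start_window(t, p_ij):
    """Discrete start times t' at which a task with processing time p_ij,
    if started at t', would still be in progress at time t:
    T_ijt = { t' | t - p_ij < t' <= t }, truncated at time 0."""
    return list(range(max(0, t - p_ij + 1), t + 1))
```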
Due to the size of (2), we also investigated a smaller discrete event model suggested by [18], which uses continuous time. However, it proved to be much harder to solve than (2). We therefore omitted the discrete event model from the computational studies described below.

Logic-based Benders Decomposition
Logic-based Benders decomposition applies to problems of the form

  min f(x, t)
  subject to C(x, t)
  x ∈ D_x, t ∈ D_t     (3)

where C(x, t) is a set of constraints containing variables x, t, and D_x and D_t denote the domains of x and t, respectively. When x is fixed to a given value x̄ ∈ D_x, the following subproblem results:

  min f(x̄, t)
  subject to C(x̄, t)
  t ∈ D_t     (4)

Here C(x̄, t) is the constraint set that results from fixing x = x̄ in C(x, t).
The inference dual of (4) is the problem of inferring the tightest possible lower bound on f(x̄, t) from C(x̄, t). It can be written

  max v
  subject to C(x̄, t) =⇒ (f(x̄, t) ≥ v)     (5)

where =⇒ means "implies" (see [7] for details).
The solution of the dual can be viewed as a derivation of the tightest possible bound v̄ on f(x̄, t) when x = x̄. For purposes of Benders decomposition, we wish to derive not only a bound when x = x̄ but a function B_x̄(x) that provides a valid lower bound on f(x, t) for any given x ∈ D_x. In particular, B_x̄(x̄) = v̄. If z is the objective function value of (3), this bounding function provides the valid inequality z ≥ B_x̄(x), which we call a Benders cut.
In iteration H of the Benders algorithm, we solve a master problem whose constraints are the Benders cuts so far generated:

  min z
  subject to z ≥ B_{x̄^h}(x), h = 1, …, H − 1
  x ∈ D_x     (6)

Here x̄^1, …, x̄^{H−1} are the solutions of the previous H − 1 master problems. The solution x̄ of (6) defines the next subproblem (4).
If we let v̄*_1, …, v̄*_{H−1} denote the optimal values of the previous H − 1 subproblems, the algorithm continues until the optimal value z*_H of the master problem equals v̄* = min{v̄*_1, …, v̄*_{H−1}}. It is shown in [7,9] that the algorithm converges finitely to an optimal solution under fairly weak conditions, which hold in the present case. At any point in the algorithm, z*_H and v̄* provide lower and upper bounds on the optimal value of the problem.
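The iteration just described can be sketched generically. This is a minimal sketch, assuming hypothetical callables `solve_master` and `solve_subproblem` standing in for the MILP master solver and the CP scheduler:

```python
def benders(solve_master, solve_subproblem, tol=0):
    """Generic logic-based Benders loop (sketch).

    solve_master(cuts) -> (x_bar, z_lb): a minimizer of the master problem
    subject to the accumulated cuts, and its value (a lower bound).
    solve_subproblem(x_bar) -> (v, cut): the subproblem value for x_bar
    (an upper bound when feasible, inf otherwise) and a Benders cut,
    i.e. a callable giving a valid lower bound on the objective at any x.
    """
    cuts, best_v, best_x = [], float("inf"), None
    while True:
        x_bar, z_lb = solve_master(cuts)     # lower bound z*_H
        v, cut = solve_subproblem(x_bar)     # candidate upper bound
        if v < best_v:
            best_v, best_x = v, x_bar
        if z_lb >= best_v - tol:             # bounds meet: optimal
            return best_x, best_v
        cuts.append(cut)
```

In the present setting, `solve_master` would solve the assignment MILP (6) and `solve_subproblem` would run the CP scheduler on each facility.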
In the planning and scheduling problem (1), any assignment x̄ of tasks to facilities creates the subproblem

  min g(x̄, t)
  subject to cumulative((t_j | x̄_j = i), (p_ij | x̄_j = i), (c_ij | x̄_j = i), C_i), all i
             r_j ≤ t_j ≤ d_j − p_{x̄_j j}, all j     (7)

which decomposes into a separate scheduling problem for each facility. After solving the subproblem we generate a Benders cut that becomes part of the master problem (6).
The bounding function B_x̄(x) is generally obtained by examining the type of reasoning that led to a bound for x = x̄ and extending this reasoning to obtain a bound for general x. Unfortunately, only the primal solution (the schedule itself) is available from the commercial CP solver. Thus for purposes of computational testing we present Benders cuts that require only this information. We indicate in Section 8, however, how stronger cuts can be deduced using "dual" information from the subproblem solution.

Minimizing Cost
The cost objective presents the simplest case, since cost can be computed in terms of master problem variables, and the subproblem is a feasibility problem. Let J_hi = {j | x̄^h_j = i} be the set of tasks assigned to facility i in iteration h. If there is no feasible schedule for facility i, then J_hi is a conflict set, or a set of tasks that results in infeasibility. The most obvious Benders cut simply rules out any assignment of tasks to facility i that contains the conflict set J_hi. In this case B_{x̄^h}(x) takes the value ∞ when there is an infeasibility and the value Σ_j f_{x_j j} otherwise. The master problem (6), written as a 0-1 programming problem, becomes

  min Σ_ij f_ij y_ij
  subject to (a) Σ_i y_ij = 1, all j
             (b) Σ_{j ∈ J_hi} (1 − y_ij) ≥ 1, all h, i for which J_hi is infeasible     (8)
             (c) relaxation of subproblem

where y_ij ∈ {0, 1} and y_ij = 1 indicates that task j is assigned to facility i.

Experience shows that it is important to include a relaxation of the subproblem within the master problem. A straightforward relaxation can be obtained as follows. For any two times t_1 < t_2, let J(t_1, t_2) be the set of tasks j whose time windows lie between t_1 and t_2; that is, t_1 ≤ r_j and d_j ≤ t_2. If the tasks j ∈ J ⊂ J(t_1, t_2) are assigned to the same facility i, then clearly the "area" Σ_{j ∈ J} p_ij c_ij of these tasks can be at most C_i(t_2 − t_1) if they are to be scheduled in the time interval [t_1, t_2]. This yields the valid inequality

  Σ_{j ∈ J(t_1, t_2)} p_ij c_ij y_ij ≤ C_i (t_2 − t_1)

which we refer to as inequality R_i(t_1, t_2). If we let r̄_1, …, r̄_{n_r} be the distinct elements of {r_1, …, r_n} in increasing order, and similarly d̄_1, …, d̄_{n_d}, we have a relaxation consisting of the inequalities R_i(r̄_j, d̄_k), for each such pair with r̄_j < d̄_k, for each facility i. These inequalities serve as the relaxation (c) in (8). Many of these inequalities may be redundant of the others, and if desired they can be omitted from the relaxation. A set of undominated inequalities can be generated for each facility using the algorithm of Fig. 1. It has O(n^3) complexity in the worst case, since it is possible that none of the inequalities are eliminated. This occurs, for instance, when each r_j = j − 1, d_j = j, and p_ij = 2. However, the algorithm need only be run once as a preprocessing routine.
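The inequalities R_i(t_1, t_2) can be enumerated directly from the data; a small sketch, without the dominance filtering of Fig. 1 (names are ours):

```python
from itertools import product

def relaxation_cuts(r, d, p, c, C_i):
    """Enumerate the area inequalities R_i(t1, t2) for one facility i.

    For each pair (t1, t2) of a distinct release time and a distinct
    deadline with t1 < t2, return (J, bound): any 0-1 assignment y that
    puts a set of tasks from J on facility i with total area
    sum(p[j] * c[j]) exceeding `bound` violates R_i(t1, t2).
    """
    cuts = []
    for t1, t2 in product(sorted(set(r)), sorted(set(d))):
        if t1 >= t2:
            continue
        # Tasks whose whole time window fits inside [t1, t2].
        J = [j for j in range(len(r)) if t1 <= r[j] and d[j] <= t2]
        if J:
            cuts.append((J, C_i * (t2 - t1)))
    return cuts
```

For instance, with a common deadline the cut for the full horizon bounds the total area that any one facility can accept.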
In practice the relaxation can be simplified by supposing that all release times equal r_0 = min_j {r_j}. Then the relaxation (c) in (8) consists of the inequalities R_i(r_0, d̄_k), k = 1, …, n_d, for each facility i. All redundant inequalities can be eliminated by running the simple O(n) algorithm of Fig. 2 for each facility. Similarly, one can suppose that all deadlines equal d_0 = max_j {d_j} and use the inequalities R_i(r̄_j, d_0). In our computational tests, all r_j = 0 and all d_j = d_0, which means the relaxation (11) is the single inequality R_i(0, d_0) for each facility i.

Stronger Benders Cuts
The simple Benders cuts described above can be strengthened. One way to strengthen them is to re-solve the infeasible scheduling problems with various subsets of jobs removed, in order to identify a conflict set J̃_hi that is smaller than J_hi. One can then replace (8b) with the stronger Benders cuts

  Σ_{j ∈ J̃_hi} (1 − y_ij) ≥ 1     (12)

De Siqueira and Puget [4] state an algorithm for finding a conflict set that is minimal in the sense that any proper subset is feasible. Junker [12] provides a faster algorithm for the same task using a bisection search. This approach was used by Cambazard et al. [3] and can be practical when the scheduling subproblems are quickly solved.
It may be more efficient, however, to construct a stronger cut on the basis of "dual information" obtained from the solution of the scheduling problem; that is, on the basis of an explanation or proof of infeasibility. One straightforward approach is to keep track of which tasks actually play a role in the proof of infeasibility, and let these tasks comprise the conflict set J̃_hi in (12). We indicate how to do this when domain filtering is based on one type of edge finding. The idea can be extended to other forms of domain reduction.

Edge finding is a procedure for identifying jobs that must precede, or that must follow, a set S of other jobs. For the scheduling problem on a given facility i, let the current domain of each start time t_j be the interval [E_j, L_j − p_ij], where E_j is the earliest start time and L_j the latest finish time for task j. For any subset S of the tasks assigned to facility i, let E_S = min_{j∈S} E_j and L_S = max_{j∈S} L_j. If we can identify an S for which

  C_i (L_{S∪{k}} − E_S) < Σ_{j ∈ S∪{k}} p_ij c_ij     (13)

then task k must start before any task in S starts. Similarly, if

  C_i (L_S − E_{S∪{k}}) < Σ_{j ∈ S∪{k}} p_ij c_ij

then task k must finish after all the tasks in S finish.
If edge finding determines that task k must start before the tasks in S, then we can bound how late task k can finish. The available capacity to run any subset S′ ⊂ S of tasks is (L_{S′} − E_S)C_i. We can be sure that these tasks do not constrain the finish time of task k if Δ_{S′} ≤ 0, where

  Δ_{S′} = Σ_{j ∈ S′} p_ij c_ij − (C_i − c_ik)(L_{S′} − E_S)

However, if Δ_{S′} > 0, then task k must finish no later than L = L_{S′} − Δ_{S′}/c_ik. If L < L_k, we can tighten L_k by reducing it to L. Similarly, if task k must finish after the tasks in S finish, then whenever the symmetric quantity Δ_{S′} > 0 for S′ ⊂ S we infer that task k can start no earlier than E_{S′} + Δ_{S′}/c_ik. We use the algorithm of [1], which computes, for any given t_k, the smallest interval [E_k, L_k] that can be deduced in this fashion by edge finding.
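The deadline update just described can be sketched as follows. The formula for Δ_{S′} is reconstructed here from the energy argument above (the choice of S and the subset enumeration are left to the edge-finding algorithm of [1]; names are ours):

```python
def tighten_latest_finish(L_k, E_S, subsets, C_i, c_k):
    """Tighten task k's latest finish time L_k, given that k must start
    before every task in S.

    subsets: list of (area, L_Sp) pairs, one per considered subset S'
    of S, where area = sum of p_ij * c_ij over S' and L_Sp is the
    latest finish time L_{S'}.
    """
    for area, L_Sp in subsets:
        # Capacity left for S' in [E_S, L_{S'}] once c_k units are
        # reserved for task k running through the window.
        delta = area - (C_i - c_k) * (L_Sp - E_S)
        if delta > 0:
            L_k = min(L_k, L_Sp - delta / c_k)
    return L_k
```

In the disjunctive case (C_i = c_k = 1), Δ_{S′} reduces to the total processing time of S′, so task k must finish early enough to leave room for all of S′ before L_{S′}.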
To keep track of which tasks are involved in a proof of infeasibility, we associate with each task j a set R_j of tasks that help to reduce t_j's domain. Initially, R_j = {j}. If edge finding determines that task k precedes or follows the tasks in S and tightens E_k or L_k as a result, then the tasks in ∪_{j∈S} R_j are added to R_k. When the edge finding process is finished and the domain of some t_j is found to be empty, we conclude that R_j is a conflict set. A somewhat sharper analysis can be obtained by associating with each E_j a set R^E_j of tasks that help tighten that particular bound, and a similar set R^L_j with each L_j. Then when the domain of some t_j is reduced to the empty set, we conclude that R_j = R^E_j ∪ R^L_j is a conflict set. This approach requires a detailed analysis of the algorithm in [1] to determine under what circumstances each bound is tightened.
A simple example illustrates the idea. Suppose that tasks 1, …, 4 are assigned to a particular facility i. The release times and deadlines appear in Table I, along with processing times for facility i. For simplicity we assume this is a disjunctive scheduling problem, so that tasks must run sequentially and c_ij = C_i = 1 for all j. As we will see below, edge finding deduces that there is no feasible schedule on facility i. Thus we have the Benders cut

  Σ_{j=1}^{4} (1 − y_ij) ≥ 1     (14)

However, a tighter cut can be obtained by observing which tasks actually play a role in the edge finding. The initial domains [E_j, L_j − p_ij] for t_j, j = 1, …, 4, are [0, 8], [0, 5], [2, 5], and [4, 4]. We start with R^E_1, …, R^E_4 = R^L_1, …, R^L_4 = {1}, {2}, {3}, {4}. Edge finding deduces that job 2 must precede jobs 3 and 4, as can be seen from (13). At this point the earliest start time E_3 for job 3 can be tightened to E_2 + p_i2 = 3. Since job 2 brought this about, it is added to R^E_3, which becomes {2, 3}. Edge finding also deduces, however, that job 3 must precede job 4, since

  max{L_3, L_4} − E_4 < p_i3 + p_i4

This updates E_4 to E_3 + p_i3 = 5, and the jobs in R^E_3 are added to R^E_4, which is now {2, 3, 4}. But now the domain of t_4 is the interval [5, 4], which is the empty set. This proves infeasibility, and the jobs involved in the proof are those in R_4 = R^E_4 ∪ R^L_4 = {2, 3, 4}. Since job 1 was not involved in the proof, one can deduce a stronger Benders cut than (14):

  Σ_{j=2}^{4} (1 − y_ij) ≥ 1

In practice, edge finding and other filtering mechanisms are combined with branching search. For instance, one can branch on which task starts first (among those whose position is not already fixed by prior branching). The search proves infeasibility by deducing, at each leaf node of the branching tree, an empty domain for at least one variable t_j. The conflict set R_j of tasks involved in deriving this empty domain can be obtained, in the manner just described, by examining the edge finding operations along the path from the root of the tree down to the leaf. A task that branching places before certain other tasks is treated like a task determined by edge finding to precede the other tasks.
At this point a conflict set can be collected by taking a union of conflict sets over the leaf nodes. For each leaf node we select a variable t_j with empty domain. Then J̃_hi, the union of the selected sets R_j over the leaf nodes, is a conflict set for the scheduling problem on facility i.
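The example above can be replayed in code. The processing times below are assumptions chosen to be consistent with the deductions in the text (Table I itself is not reproduced here), and `must_precede` implements the disjunctive form of condition (13):

```python
def must_precede(k, S, E, L, p):
    """Disjunctive edge-finding test (13): if all tasks in S plus k
    cannot fit between the earliest start in S and the latest finish
    in S + {k}, then task k must start before every task in S."""
    window = max(L[j] for j in S | {k}) - min(E[j] for j in S)
    return window < sum(p[j] for j in S | {k})

# Assumed data, consistent with the deductions in the text:
p = {1: 1, 2: 3, 3: 2, 4: 3}                  # processing times (assumed)
E = {1: 0, 2: 0, 3: 2, 4: 4}                  # earliest start times
latest_start = {1: 8, 2: 5, 3: 5, 4: 4}       # upper ends of the domains
L = {j: latest_start[j] + p[j] for j in p}    # latest finish times
RE = {j: {j} for j in p}                      # tasks explaining each E_j

# Job 2 must precede jobs 3 and 4:
assert must_precede(2, {3, 4}, E, L, p)
E[3] = E[2] + p[2]; RE[3] |= RE[2]            # E_3 tightened to 3
# Job 3 must precede job 4:
assert must_precede(3, {4}, E, L, p)
E[4] = E[3] + p[3]; RE[4] |= RE[3]            # E_4 tightened to 5
assert E[4] > L[4] - p[4]                     # domain [5, 4]: infeasible
assert RE[4] == {2, 3, 4}                     # conflict set excludes job 1
```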
An interesting characteristic of feasibility problems is that there are often several proofs of infeasibility whenever there is one; in the present case, there may be several conflict sets at each node of the search tree. These proofs can be regarded as alternate dual solutions and are analogous to dual "degeneracy" in linear programming. Some of these proofs or explanations may be more useful or perspicuous than others. In the context of Benders decomposition, it is obviously advantageous to choose dual solutions that result in stronger Benders cuts. Thus one should select j at each leaf node in such a way that J̃_hi is as small as possible.
We did not use strengthened Benders cuts in our computational tests, because the necessary edge-finding information was not available from the commercial CP solver. It is unclear how much stronger the cuts would be if the information were available, but in any case the computational overhead of computing the sets R_j would be quite small.

Minimizing Makespan
This case is less straightforward than minimizing cost because the subproblem is an optimization problem. However, there are relatively simple linear Benders cuts when all tasks have the same release date, and they simplify further when all deadlines are the same. We also use a linear subproblem relaxation that is valid for any set of time windows.
The Benders cuts are based on the following fact.

Lemma 1. Consider a minimum makespan problem P in which tasks 1, …, n with release time 0 and deadlines d_1, …, d_n are to be scheduled on a single facility i. Let M* be the minimum makespan for P, and M̄ the minimum makespan for the problem P̄ that is identical to P except that tasks 1, …, s are removed. Then

  M* − M̄ ≤ Δ + max_{j≤s}{d_j} − min_{j≤s}{d_j}     (15)

where Δ = Σ_{j=1}^{s} p_ij. In particular, when all the deadlines are the same, M* − M̄ ≤ Δ.

Proof. Consider any optimal solution of P̄ and extend it to a solution S of P by scheduling tasks 1, …, s sequentially after M̄. That is, for k = 1, …, s let task k start at time M̄ + Σ_{j=1}^{k−1} p_ij. The makespan of S is M̄ + Δ. If M̄ + Δ ≤ min_{j≤s}{d_j}, then S is clearly feasible for P, so that M* ≤ M̄ + Δ and the lemma follows. Now suppose M̄ + Δ > min_{j≤s}{d_j}. This implies

  M* − M̄ < M* + Δ − min_{j≤s}{d_j}     (16)

Since M* ≤ max_{j≤s}{d_j}, (16) implies (15), and again the lemma follows.
The bound M* − M̄ ≤ Δ need not hold when the deadlines differ; a small three-task instance suffices to show this. Now consider any given iteration of the Benders algorithm, and suppose for the moment that all the time windows are identical. Let J_hi be the set of tasks assigned to facility i in a previous iteration h, and M*_hi the corresponding minimum makespan incurred by facility i. The solution of the current master problem removes task j ∈ J_hi from facility i when y_ij = 0. Thus by Lemma 1, the resulting minimum makespan for facility i is reduced by at most Σ_{j∈J_hi} (1 − y_ij) p_ij. This yields a bounding function that provides a lower bound on the optimal makespan of each facility i. We can now write Benders cuts (b) in the master problem:

  min z
  subject to (a) Σ_i y_ij = 1, all j
             (b) z ≥ M*_hi − Σ_{j∈J_hi} (1 − y_ij) p_ij, all h, i     (18)
             (c) relaxation of subproblem

where y_ij ∈ {0, 1}. The relaxation (c) is similar to that for the minimum cost problem.
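The bounding function in cut (18b) is simple to evaluate; a small sketch (names are ours):

```python
def makespan_cut(M_star, J_hi, p_i, y_i):
    """Lower bound on facility i's minimum makespan implied by Lemma 1
    when all time windows are identical: removing task j from the
    facility can reduce the minimum makespan by at most p_i[j].

    J_hi: tasks assigned to facility i in iteration h.
    y_i[j] = 1 if the master problem keeps task j on facility i.
    """
    return M_star - sum(p_i[j] for j in J_hi if not y_i[j])
```

Keeping every task leaves the bound at M*_hi; removing tasks relaxes it by their total processing time.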
If the deadlines differ (and the release times are still the same), the minimum makespan for facility i is at least

  M*_hi − Σ_{j∈J_hi} (1 − y_ij) p_ij − (max_{j∈J_hi}{d_j} − min_{j∈J_hi}{d_j})

if one or more tasks are removed, and is M*_hi otherwise. This lower bounding function can be linearized to obtain Benders cuts (19), which replace (18b) when the deadlines differ and all release times are equal. For purposes of computational testing, however, we focus on the case in which all time windows are the same, since this seems to be the more important case, and it permits simpler and stronger Benders cuts.
The Benders cuts in (18) and (19) can be strengthened much as in the minimum cost algorithm. The CP solver determines the minimum makespan on a given facility by finding the largest upper bound M on the makespan for which the scheduling problem is infeasible; the minimum makespan is then M + 1. By analyzing the infeasibility proof as in the minimum cost problem, we may find a conflict set J̃_hi that is smaller than J_hi. We can then obtain stronger cuts by replacing J_hi with J̃_hi in the Benders cuts of (18) and (19).

Computational Results
We solved randomly generated problems with MILP (using CPLEX), CP (using ILOG Scheduler), and the logic-based Benders method. All three methods were implemented with OPL Studio, using the OPL script language. The CP problems, as well as the CP subproblems of the Benders method, were solved with the assignAlternatives and setTimes options, which result in substantially better performance.
Random instances were generated as follows. The capacity limit was set to C_i = 10 for each facility i. For each task j, c_ij was assigned the same random value for all facilities i, drawn from a uniform distribution on [1, 10]. For instances with n tasks and m facilities, the processing time p_ij of each task j on facility i was drawn from a uniform distribution on [i, 10i]. Thus facility 1 tends to run about i times faster than facility i for i = 1, …, m. Since the average of 10i over m facilities is 5(m + 1), the total processing time of all tasks is roughly proportional to 5n(m + 1), or about 5n(m + 1)/m per facility. The release dates were set to zero, and the deadline for every task was set to 5αn(m + 1)/m (rounded to the nearest integer). We used α = 1/3, which results in a deadline that is loose enough to permit feasible solutions but tight enough that tasks are reasonably well distributed over the facilities in minimum cost solutions. In minimum cost problems, the cost f_ij is drawn from a uniform distribution on [2(m − i + 1), 20(m − i + 1)], so that faster facilities tend to be more expensive.
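For concreteness, the generation scheme can be sketched as follows. This is our own sketch: the field layout and the use of integer uniform draws are assumptions, not the authors' generator.

```python
import random

def random_instance(n, m, alpha=1/3, seed=0):
    """Generate a random instance as described above (a sketch).

    Facility indices are 0-based here, so facility index i corresponds
    to facility i + 1 in the text.
    """
    rng = random.Random(seed)
    C = [10] * m                                   # capacity of each facility
    c = [rng.randint(1, 10) for _ in range(n)]     # same rate on every facility
    p = [[rng.randint(i + 1, 10 * (i + 1)) for _ in range(n)]
         for i in range(m)]                        # slower facilities: longer p_ij
    d = round(5 * alpha * n * (m + 1) / m)         # common deadline; all r_j = 0
    f = [[rng.randint(2 * (m - i), 20 * (m - i)) for _ in range(n)]
         for i in range(m)]                        # faster facilities cost more
    return C, c, p, d, f
```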
No precedence constraints were used, which tends to make the scheduling portion of the problem more difficult.
Table II displays computational results for 2, 3, and 4 facilities as the number of tasks increases. The CP solver is consistently faster than MILP, and in fact MILP is not shown for the makespan problems due to its relatively poor performance. However, CP is unable to solve most problems with more than 16 tasks within two hours of computation time.
The Benders method is substantially faster than both CP and MILP. Its advantage increases rapidly with problem size, reaching some three orders of magnitude relative to CP for 16 tasks. Presumably the advantage would be greater for larger problems.
As the number of tasks increases into the 20s, the Benders subproblems reach a size at which the computation time for the scheduling subproblem dominates and eventually explodes. This point is reached later when there are more facilities, since the subproblems are smaller when the tasks are spread over more facilities.
In practice, precedence constraints or other side constraints often accelerate the solution of the scheduling subproblems. Easier subproblems could allow the Benders method to deal with larger numbers of tasks. This hypothesis was tested by adding precedence constraints to the problem instances described above. Pairs of tasks related by a precedence constraint could be scheduled on any facility as long as they were scheduled on the same facility. The results appear in Table III. The precedence constraints resulted in fast solution of the scheduling subproblems, except in the three largest makespan instances, which we omit because they do not test the hypothesis. Easier subproblems in fact allow solution of somewhat larger instances.
Table IV investigates how the Benders method scales up to a larger number of facilities, without precedence constraints. The average number of tasks per facility is fixed at 5. The random instances are generated so that the fastest facility is roughly twice as fast as the slowest, with the other facility speeds spaced evenly in between. Since the subproblems remain relatively small as the problem size increases, Benders can accommodate more tasks than in Table II. The advantage relative to CP or MILP is substantial, since the Benders approach extends solvability from about 16 tasks to 40 or so in the case of minimum cost problems, and to 30 or so in the case of minimum makespan. However, the number of iterations tends to increase with the number of facilities. Since each iteration adds more Benders cuts to the master problem, the computation time for solving the master problem dominates in larger problems.
Table IV suggests that the Benders method scales up better for minimum cost problems than for minimum makespan problems. Yet even when it fails to solve a makespan problem optimally, it obtains a feasible solution and a fairly tight lower bound on the optimal value. The bound steadily improves as the algorithm runs. Table V displays the bounds obtained for the minimum makespan problems of Table IV that were not solved to optimality.

Conclusions and Future Research
We find that logic-based Benders decomposition can substantially improve on the state of the art when solving minimum-cost and minimum-makespan planning and scheduling problems, in the latter case when all tasks have the same release date and deadline. In the case of minimum makespan problems, the Benders approach has the additional advantage that it can be terminated early while still yielding both a feasible solution and a lower bound on the optimal makespan. The bound improves steadily as the algorithm runs.
Several issues remain for future research. One is whether effective Benders cuts can be developed for minimum makespan problems in which tasks have different release dates. Another is how effective relaxation (11) is on such problems. A further research issue is whether a Benders method can be extended to other objectives, such as minimum tardiness or minimum number of late tasks. A fourth issue is whether a branch-and-check approach to solving the Benders master problem would significantly improve performance on planning and scheduling problems. The experience of Thorsteinsson [17] suggests that it would.
We indicated how access to "dual" information from the CP solver (results of edge finding, etc.) can result in more effective Benders cuts. A solver can provide this kind of information if it provides an explanation for a solution along with the solution itself. Explanations have attracted recent interest in the CP community (e.g. [13,14,15,16]), and their use as nogoods in Benders and other search methods provides an additional reason to pursue this line of research.

Figure 1. O(n^3) algorithm for generating an inequality set R_i that relaxes the time window constraints for facility i. By convention, d̄_0 = −∞.


Table I. Data.

Table II. Computation times in seconds for minimum cost and minimum makespan problems, using MILP, CP, and logic-based Benders methods. Each time represents the average of 5 instances. Computation was cut off after two hours (7200 seconds), and a + indicates that this occurred for at least one of the five problems.

Table III. Computational results for minimum cost and minimum makespan problems on two facilities with precedence constraints, using the Benders method. Computation time and number of iterations are shown for individual problem instances. Computation was cut off after 600 seconds. Minimum makespans are also given, except when computation is terminated prematurely, in which case lower and upper bounds are shown. In such cases a feasible solution with makespan equal to the upper bound is obtained.

Table IV. Computation times in seconds and number of iterations for minimum cost and minimum makespan problems, using the Benders method. Each figure represents the average of 5 instances.
a Includes one outlier that ran for 240 sec and 191 iterations.

Table V. Best solution value and lower bound found after 7200 seconds of computation by the Benders method on the minimum makespan problems of Table IV that were not solved to optimality.