An iterated local search for customer order scheduling in additive manufacturing

This paper studies the customer order scheduling problem in the context of additive manufacturing. The study discusses an integrated problem involving the nesting of parts as well as the scheduling of batches of nested parts onto unrelated parallel machines. A mixed-integer programming model is presented, based on existing formulations from the literature, that integrates different materials and sequence-dependent setup times. Additionally, a metaheuristic based on an iterated local search is proposed for the problem configuration under consideration. Focusing on minimizing the total weighted tardiness of orders, the efficiency of the heuristic approach is evaluated using comprehensive test data. Further, we show the importance of the considered order-related objective by using qualitative analysis.


Introduction
Additive Manufacturing (AM), also referred to as 3D printing, describes the process of constructing parts from digital 3D model data with repetitive production steps. In contrast to conventional subtractive processes, AM technologies build parts layer by layer (Attaran 2017). Scientific studies have highlighted the opportunities of AM to improve flexible manufacturing systems (see, e.g. Weller, Kleer, and Piller 2015). Due to the ability of AM technologies to build different parts simultaneously without additional molding or tooling (Kruth, Leu, and Nakagawa 1998; Gibson et al. 2021), 3D printers can produce a wide variety but also a low volume of parts (Kang et al. 2018). Hence, efficient process planning and production management procedures are crucial for mass customization based on additive manufacturing.
Inspired by new services like printing-on-demand (Oh et al. 2020), where success depends significantly on customer satisfaction and the corresponding planning metrics, we address the problem of scheduling customer orders on unrelated batch processing machines in additive manufacturing. The objective is to minimise the total weighted tardiness of orders. To increase the practical applicability, we also consider different part families and sequence-dependent setup times. The studied problem can be classified according to the three-field notation as Rm|batch(b), AM, s_jk, fmls|Σ w_o T_o. It can easily be reduced to Pm||Σ w_i T_i and is therefore NP-hard (see, e.g. Pinedo 2012).

Supplemental data for this article can be accessed online at https://doi.org/10.1080/00207543.2023.2167015.

Due to their extensive commercial application (Li, Kucukkoc, and Zhang 2017), this work focuses on powder bed fusion (PBF) technologies. Figure 1 illustrates the structure of a 3D printer using PBF technology. A part is built layer by layer through two sub-process steps that are constantly repeated. First, a roller distributes a powder layer over the entire build space. Afterward, a laser melts the powder layer at the designated areas to build the part's structure. Subsequently, the build platform is lowered and the leveling roller applies a new powder layer (Gibson et al. 2021).
Based on this description, we deduce the main characteristics that are relevant for scheduling 3D printers. First, an AM machine can produce several parts simultaneously as long as these parts fit into the build space of the printer. The nesting problem in AM can be defined as determining the arrangement of a given number of parts on a finite area under consideration of a particular objective criterion (Oh et al. 2020). Thus, nesting represents a packing problem. With this, AM scheduling problems are related to batch processing machine (BPM) scheduling problems, where a batch machine can accommodate a predefined number of jobs or parts, respectively. In conventional BPM scheduling, the processing time of a batch is usually defined by the maximum of all processing times (parallel batching) or the sum of all processing times (serial batching) of parts within the batch (Zhang, Yao, and Li 2020). In contrast, the processing time of a batch in AM with PBF technology is calculated based on the height and the volume to be printed. Accordingly, the recoating time of the roller indicates the length of time it takes to reach the layer height of the highest part in the batch (Kucukkoc 2019). The scanning time of the laser determines how long it takes to fuse the total volume of the batched parts. Hence, the processing times of batches on AM machines depend on their composition (Alicastro et al. 2021).

Figure 1. Schematic cross section of a PBF machine following Attaran (2017).
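The composition-dependent batch processing time described above (recoating up to the tallest part, scanning the combined volume) can be sketched in a few lines. The coefficient names and data layout are illustrative, not taken from the paper:

```python
def batch_processing_time(parts, recoat_coeff, scan_coeff):
    """PBF batch time: recoat to the height of the tallest part, then
    scan the total volume of all parts. `parts` is a list of
    (height, volume) tuples; both coefficient names are illustrative."""
    max_height = max(h for h, _ in parts)
    total_volume = sum(v for _, v in parts)
    return recoat_coeff * max_height + scan_coeff * total_volume

# Two batches with the same number of parts but different compositions
# yield different processing times.
batch_a = [(10.0, 50.0), (4.0, 20.0)]   # (height, volume) per part
batch_b = [(4.0, 50.0), (4.0, 20.0)]
print(batch_processing_time(batch_a, recoat_coeff=0.5, scan_coeff=0.25))  # 22.5
print(batch_processing_time(batch_b, recoat_coeff=0.5, scan_coeff=0.25))  # 19.5
```

Note that only the maximum height matters for recoating, so replacing the tallest part changes the batch time even when the total volume stays the same.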
Compared to existing studies on AM scheduling, the main contributions of this paper are the following:
• Building on Zipfel, Neufeld, and Buscher (2021), we present a part-oriented mathematical formulation of the problem.
• We propose an iterated local search algorithm to enable the solution of large-scale instances of scheduling customer orders in an AM environment within a reasonable computation time.
• We adopt and customize neighborhood structures and perturbation procedures to efficiently explore the solution space of the given planning problem.
The remainder of this paper is organised as follows: In Section 2, we review relevant articles from related research areas such as customer order scheduling, BPM scheduling, and integrated scheduling and nesting in an AM environment. Subsequently, Section 3 defines the problem and proposes a mixed-integer programming (MIP) model. Afterward, an iterated local search (ILS) for the considered planning problem is described in Section 4. The MIP model and the ILS are compared in Section 5 using comprehensive test data. Further, insights are given within a qualitative analysis. Finally, in Section 6, we conclude by highlighting relevant insights and noting future research prospects.

Literature review
The considered problem is related to three streams of the literature: first, customer order scheduling (COS); second, BPM scheduling; and, third, integrated nesting and scheduling for AM. In COS, the optimisation objectives shift from a job focus to an order focus. This is based on the customer-oriented assumption that only the status of a complete order is relevant for the customer (Ahmadi, Bagchi, and Roemer 2005). In the field of COS, most commonalities can be found in configurations with incompatible job families, as there are also incompatibilities due to material types in the present problem configuration. For a comprehensive overview of COS, we refer to Leung, Li, and Pinedo (2005), who consider different problem configurations in detail. With regard to the problem considered in this paper, Shi, Huang, and Shi (2018) present a very similar problem of COS combined with batch scheduling for single and parallel machine environments. To determine the batch processing time, the authors use the sum of the processing times of all jobs in the respective batch. Concerning the objective criterion, their work focuses on order makespan and total weighted order completion times. Several aspects of our problem are included in the work of Shi, Huang, and Shi (2018). However, they consider single and parallel batching machine environments, whereas we take unrelated machines into account. The problem presented by the authors can therefore be viewed as a special case of the problem considered here.
3D printers can process several products at the same time, so the problem is also related to BPM scheduling problems. Some publications consider unrelated parallel BPM and non-identical or arbitrary job sizes, respectively. Uzsoy (1994) addresses BPM scheduling with non-identical job sizes for a single machine environment with the objective to minimise makespan. Arroyo and Leung (2017) present an iterated greedy procedure to solve the BPM scheduling problem with arbitrary job sizes, unequal ready times, and unrelated parallel machines. With comprehensive computational experiments, the effectiveness of the metaheuristic is evaluated for both small and large-scale instances. Very recently, Queiroga et al. (2021) propose an ILS for scheduling on a single batch processing machine with non-identical job sizes and the objective of minimizing total weighted tardiness. The authors suggest several neighborhood structures for a permutation-based solution representation. Further, Gahm, Wahl, and Tuma (2022) consider job families and sequence-dependent setup times for the BPM scheduling problem using an example from the metal-processing industry. A mathematical model as well as constructive heuristics are used for the solution process. Our problem can be seen as a generalization of both of the latter works of Queiroga et al. (2021) and Gahm, Wahl, and Tuma (2022) by extending the machine environment to unrelated parallel batch processing machines.
Framinan, Perez-Gonzalez, and Fernandez-Viagas (2022) provide a recent and comprehensive survey on planning problems faced in an AM environment. The authors also review papers on the problems of nesting and scheduling for AM, which are considered here. Since the two subproblems are highly complex optimisation problems (Kucukkoc 2019), many works focus on the aspect of scheduling while simplifying the nesting subproblem as one-dimensional packing. Li, Kucukkoc, and Zhang (2017) propose a model formulation with the goal of minimizing the average production cost per volume unit as the objective criterion. Based on this work, a large number of advanced studies exist to date. Kucukkoc (2019) leverages prior studies and designs mathematical formulations to minimise makespan. The models apply to environments with single machines, parallel machines, and unrelated parallel machines in an AM environment. The test results of the three models indicate the need for effective heuristic methods for integrated planning for additive manufacturing. Alicastro et al. (2021) extend the work of Kucukkoc (2019) by considering unrelated parallel machines and develop an ILS for the enhanced scheduling problem. The authors combine their ILS approach with techniques of reinforcement learning to achieve an efficient neighborhood exploration. Rohaninejad et al. (2022) develop a metaheuristic for scheduling in additive manufacturing with material types as well as sequence-dependent setup times. The authors use the well-known NSGA-II framework to solve the considered problem and improve their algorithm by including a local search that makes use of machine learning elements. Chergui, Hadj-Hamou, and Vignat (2018) explicitly address customer requirements in their problem configuration by minimizing total tardiness. The authors consider the subproblems of nesting and scheduling by two sequential MIP formulations and propose a heuristic based on the earliest due date (EDD) rule. Zipfel, Neufeld, and Buscher (2021) consider material types and quality levels, which result in different process speeds of the machines. Further, they focus on minimising the total weighted tardiness of orders, which is also considered in this study. Another extension is shown in Kucukkoc, Li, and Li (2021), where the nesting problem is regarded as a two-dimensional packing problem. The authors propose a genetic algorithm to solve this problem minimizing total tardiness. Other works combining order scheduling in AM and two-dimensional packing are found in Li et al. (2019) and Wu et al. (2022). The first-mentioned authors address the problem of online order acceptance and scheduling in additive manufacturing. They formulate the problem as a MIP and evaluate the performance of different decision strategies. Wu et al. (2022) consider an environment of cloud production for their online order scheduling problem. Extensions of the two-dimensional packing include irregular shapes (Zhang, Yao, and Li 2020) and the consideration of the orientation of parts within the build area as an additional subproblem (Che et al. 2021). This work focuses on customer order scheduling and adopts the simplified nesting by considering one-dimensional packing. Therefore, the present work represents an extension of the work of Zipfel, Neufeld, and Buscher (2021). In this way, we contribute to research on the integrated planning problem in AM by explicitly considering the customer-oriented perspective.

Problem definition
We examine an additive manufacturing company that offers its customers an on-demand service of 3D printed parts. In this context, the information regarding exemplary problem instances is drawn from the machine system and several customer orders. Let an order o ∈ O consist of a subset of parts in the manufacturer's portfolio. Each order o has a due date d_o and a weight w_o. Each machine has production coefficients that define the times spent on scanning one volume unit and recoating one height unit, respectively, and is able to process a subset of all available materials. Setups are necessary between two consecutive batches processed on the same machine, which can involve a change in material. Therefore, we consider sequence-dependent material setup times. For example, setup time σ_kuv represents the time for a setup from material u to material v on machine k. The left side of Figure 2 illustrates a possible instance of the considered planning problem with two orders and two machines. Each of the orders incorporates four parts represented by rectangles. The dimensions of each part define its height and area, while the color indicates the material requirement of the respective part.
The goal of the manufacturer is to fulfill all customer orders within the given due dates as best as possible. Therefore, the objective is to minimise the total weighted tardiness of orders, Σ_{o∈O} w_o T_o, while taking into account the ability of the machines to process parts simultaneously in batches and the material types of the ordered parts.
A solution to the considered problem is a schedule that specifies the time needed for each part to be manufactured. To this end, we first determine the assignment of a part to an available machine. Afterward, we need to decide which parts will be produced together, while ensuring that batches only contain parts with identical material requirements. This decision incorporates the nesting problem for AM. In the present study, we simplify this problem by considering the total area consumed by each part instead of its width and length. Therefore, the nesting problem becomes a one-dimensional bin packing problem. Finally, we have to decide on the sequence of batches on each machine. Figure 2 displays a possible solution to the problem on the right. A total of eight parts from two orders have been allocated to three batches, which are sequenced on the two available machines. The numerical example corresponding to this illustration can be found in the supplementary material.
To evaluate the solution defined by the given schedule, we need to determine the completion times of parts and orders. We examine the completion times using the illustrated Gantt chart. Again, the colors represent different material types, whereas the gray bars in the chart illustrate setups. The completion time of a part is identical to the completion time of the batch in which the respective part is processed. For example, the completion time of part 2 is identical to the completion time E_11 of batch 1 on machine 1. Further, the completion time C_o of an order o ∈ O is equal to the maximum completion time of the parts belonging to the respective order. For order 2, the last part is processed in batch 2 on machine 1. Hence, the completion time C_2 of order 2 is equal to the completion time E_21. After all completion times have been determined, we define the tardiness T_o, o ∈ O, by subtracting the due date d_o from the completion time C_o, bounded below by zero.
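The evaluation just described (part completion equals batch completion, order completion equals the latest part, tardiness is the nonnegative part of C_o − d_o) can be captured as follows; the container layout and names are assumptions:

```python
def total_weighted_tardiness(order_parts, part_batch, batch_completion, due, weight):
    """Objective evaluation: an order completes when the last batch
    containing one of its parts completes; tardiness is max(0, C_o - d_o).
    All container layouts here are illustrative."""
    total = 0.0
    for o, parts in order_parts.items():
        c_o = max(batch_completion[part_batch[p]] for p in parts)
        total += weight[o] * max(0.0, c_o - due[o])
    return total

# Made-up values in the spirit of the Gantt-chart example: order 2
# finishes with the second batch on machine 1.
orders = {1: ["p1", "p2"], 2: ["p3", "p4"]}
part_batch = {"p1": "b11", "p2": "b11", "p3": "b11", "p4": "b21"}
completion = {"b11": 10.0, "b21": 18.0}
due = {1: 12.0, 2: 15.0}
weight = {1: 1.0, 2: 2.0}
print(total_weighted_tardiness(orders, part_batch, completion, due, weight))  # 6.0
```

Order 1 completes at 10 before its due date of 12 and contributes nothing; order 2 completes at 18 with due date 15 and weight 2, contributing 6.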
In this study, we assume that all batch processing machines are available from the beginning of the planning period. The machines may vary in terms of their sizes (height, area) and production coefficients. The processing time of each batch is based on its composition and results from the total volume and the maximum height to be printed. In accordance with the different materials, sequence-dependent setup times with regard to the used materials must be accounted for on each machine. All machine parameters are known and deterministic. Concerning the setups, we assume that the triangular inequality (1) holds. Otherwise, setups would not be efficient in the resulting schedules (Shen and Buscher 2012).
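A quick check of the triangular inequality assumption on a machine's setup matrix might look like this (the matrix layout is an assumption); it verifies that routing a setup through an intermediate material can never be cheaper than a direct setup:

```python
def satisfies_triangle_inequality(setup):
    """Verify setup[u][w] <= setup[u][v] + setup[v][w] for all material
    triples (u, v, w) of one machine's setup matrix."""
    n = range(len(setup))
    return all(setup[u][w] <= setup[u][v] + setup[v][w]
               for u in n for v in n for w in n)

# Example setup matrix for three materials on one machine.
sigma = [[0, 3, 4],
         [2, 0, 5],
         [4, 6, 0]]
print(satisfies_triangle_inequality(sigma))  # True
```

A matrix such as [[0, 1, 10], [1, 0, 1], [10, 1, 0]] would violate the assumption, since switching via the middle material would be cheaper than the direct setup.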
The build space of at least one machine is large enough to produce the largest part in terms of height and area; otherwise, the problem instance would not be solvable. Moreover, we consider the nesting of parts as a one-dimensional bin packing problem (see, e.g. Kucukkoc 2019; Rohaninejad et al. 2022). The stacking of parts on top of each other is not allowed; hence, all parts are placed onto the build plate of the printers. Regardless, we must ensure a feasible production height on all machines. Each part must be assigned to exactly one batch on one machine. As soon as the production of a batch starts, no part can be added to or removed from the batch before the printing process has ended.

The studied problem is classified as a scheduling problem on unrelated parallel batch processing machines with an integrated bin packing problem, incompatible job families, and sequence-dependent setup times (see, e.g. Neufeld, Gupta, and Buscher 2015). The objective is to minimise the total weighted tardiness of all orders. Although comparable problems have already been considered in the field of batch processing machines, additive manufacturing imposes novel requirements on production planning because the processing times of the formed batches depend on their composition (Kapadia et al. 2019). To account for these new requirements, the addition AM is made in the classification. Hence, the studied problem is defined as Rm|batch(b), AM, s_jk, fmls|Σ w_o T_o. The problem addressed is strongly NP-hard. In addition to the reduction mentioned in Section 1, the problem also represents a generalization of the single batch processing machine scheduling problem, which is proven to be NP-hard for minimising makespan by Uzsoy (1994). With the objective of minimising total weighted tardiness, Brucker et al. (1998) prove NP-hardness for scheduling on a single batch processing machine. Further, the packing subproblem represents a one-dimensional bin packing problem, which is itself strongly NP-hard (Garey and Johnson 1979).

Problem formulation
Table 1 presents the notation used.
Inspired by existing models from Kucukkoc (2019) and Tavakkoli-Moghaddam, Shirazian, and Vahedi-Nouri (2020), we formulate the described planning problem as a linear MIP model as follows: We consider (2) as the objective function for minimising the total weighted tardiness of all orders. Inequalities (3) determine the tardiness of each order by subtracting the order due date from the completion time of the respective order. Constraints (4) are used to calculate the completion time of an order o ∈ O. According to (4), the latter is at least equal to the completion time of a batch b ∈ B on machine k ∈ M if the respective batch includes parts of order o ∈ O. With (5) we ensure that all parts are built within the scheduled batches. In Inequalities (6), we adopt the one-dimensional packing constraint introduced by Li, Kucukkoc, and Zhang (2017) to integrate the nesting problem. The constraint specifies that the cumulative area of all assigned parts must not be greater than the production area of the allocated machine, while (7) and (8) guarantee that the height of the batch does not exceed the maximum production height of the machine. Constraints (9) represent symmetry-breaking constraints stating that batches have to be filled successively one after another. Furthermore, with (10) we define batches to have only a single processing material at a time. Constraints (11) define the relation between the decision variables X_ibk and Y_bku: as soon as a part is assigned to a batch, the respective batch has to be processed. With (12) we denote that a batch can only be manufactured on a machine if the machine is capable of processing the required material, i.e. u is in the set U^Mach_k. Inequalities (13) ensure that the completion time of a batch on a certain machine is greater than or equal to the sum of the initial setup time and its processing time. For a batch having at least one predecessor, Inequalities (14) guarantee that the completion time is greater than or equal to the sum of the completion time of the predecessor, its own processing time, and a required setup time. The latter depends on whether or not a change of materials is necessary between batches b−1 and b. Moreover, in (15) we define the processing time of a batch as the sum of the time needed to print the height of the batch and the time needed by the laser to build the volume of all parts within the batch. Finally, (16) and (17) define the domains of the decision variables.
To ensure the feasibility of the model, a sufficient number of batches must be available for all machines. This can be achieved by setting the maximum number of batches B on each machine equal to the number of parts to be produced, as in (18). However, this assumption leads to a large number of unnecessary decision variables and thus to increased computational effort. For this reason, we restrict the maximum number of batches according to Algorithm 1. To determine B, we sum up the rounded-up numbers of batches needed for each order with respect to the different material types. This approach yields a valid upper bound on the number of batches, since it can never be beneficial to split two parts from the same order and with identical material types into two batches. This method significantly reduces the number of batches compared to (18). Nevertheless, it still overestimates the number of batches needed because it does not consider the simultaneous processing of orders, and it therefore does not exclude any solutions.
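The idea behind Algorithm 1 can be sketched as follows. Using a single reference build area is a simplification of the machine data in the paper, so this is only an illustration of the rounding-and-summing step:

```python
import math

def max_batches_bound(orders, build_area):
    """Sketch of the batch-count bound: per order and per material, round
    up the summed part area divided by a reference build area, then sum
    over all orders. `orders` maps an order id to (material, area) pairs."""
    total = 0
    for parts in orders.values():
        area_per_material = {}
        for material, area in parts:
            area_per_material[material] = area_per_material.get(material, 0.0) + area
        total += sum(math.ceil(a / build_area) for a in area_per_material.values())
    return total

orders = {1: [("A", 30.0), ("A", 30.0), ("B", 10.0)],
          2: [("A", 80.0)]}
print(max_batches_bound(orders, build_area=50.0))  # 3 + 2 = 5
```

With 9 parts in total, the naive bound (18) would allow 9 batches per machine, while the per-order, per-material rounding above caps the count at 5.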

Lower bounds
Due to the big-M formulations in model (2)-(17), the resulting lower bounds may be weak when running it with a solver. For this reason, we propose two lower bounding approaches based on decomposition.
Let P be an instance of the considered problem with a set of orders O and a set of machines M. We can decompose P into |O| problems P_o, each comprising a single order o ∈ O. Further, let z^d_o be the optimal objective value of P_o, o ∈ O, based on model (2)-(17). Then, a simple lower bound for P is given by (19).
Of course, the calculation of LB_1 has the same drawback as solving the original problem: we still use the big-M constraints of the model to determine the objective function values of the decomposed problems. If a problem P_o cannot be solved to optimality, we use the best known lower bound of the decomposed problem provided by the mathematical solver instead. Again, this lower bound may be weak.
In addition, we consider a second lower bound LB_2, which is also based on decomposition but does not require a MIP model to be solved. Given the assumptions above, we can calculate a lower bound for each order o ∈ O according to (20)-(23). First, we determine with (20) the minimum number of batches B_o that needs to be processed on a single machine. Further, we determine two separate order-dependent lower bounds in (21) and (22) by using minimum setup times and production coefficients. The difference between the two lower bounds is the combination of batch heights and volumes: in LB^MH_o, we use the maximum height and minimum volume of all parts, whereas in LB^MV_o we apply the contrary combination. Finally, LB_2 results from the maximum of the summed-up individual bounds according to (23). The performance of LB_1 and LB_2 is evaluated in the computational experiments, where the MIP is also analyzed.
An iterated local search procedure

Overall solution strategy
In this section, we propose a metaheuristic to enable the solving of large-scale instances with adequate computational effort. We select an implementation of ILS to solve the described problem. The overall solution approach is illustrated in Figure 3. We first carry out preprocessing steps, which we explain in Section 4.2. This is followed by a construction phase to create an initial solution, which is described in Section 4.3. In the improvement phase, we apply an ILS until the algorithm reaches a predefined termination criterion. We discuss the individual elements of the ILS, such as the neighborhood structures, perturbation, and termination criterion, in Section 4.4.

Preprocessing
We first determine feasible machine assignments for each part on the basis of the available problem data. A part can be processed on a machine if two conditions are met: first, the machine can process the material needed for the production of the part; second, the part fits into the build space of the machine according to its dimensions. We store these feasible assignments in a separate set, which we use to create initial solutions (see Section 4.3). Afterward, we approximate the maximum number of batches that can be processed on each machine using Algorithm 1 from Section 3.2: we transform the problem with |O| orders into |O| problems, each with a single order, estimate the number of batches needed for each order, and add up the individual results.
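The two feasibility conditions can be expressed as a short filter; the field names and data layout are illustrative:

```python
def feasible_machines(part, machines):
    """A part can be processed on a machine iff the machine handles the
    part's material and the part fits the build space (height and area)."""
    material, height, area = part
    return [k for k, m in machines.items()
            if material in m["materials"]
            and height <= m["max_height"] and area <= m["max_area"]]

machines = {"M1": {"materials": {"A", "B"}, "max_height": 30.0, "max_area": 100.0},
            "M2": {"materials": {"A"}, "max_height": 20.0, "max_area": 150.0}}
print(feasible_machines(("A", 25.0, 90.0), machines))   # ['M1']  (too tall for M2)
print(feasible_machines(("A", 10.0, 120.0), machines))  # ['M2']  (too large for M1)
```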

Construction phase
The construction of initial solutions is based on the principles of list scheduling. Whenever a machine k becomes available, it is selected for scheduling the next batch. After this step, we create a possible batch for each material u ∈ U. We do so by extracting from the set of unscheduled parts I those parts that need to be processed with u and fit into the build space of machine k. We sort the resulting array of parts I_u using a sorting strategy and construct a temporary batch according to lines (7)-(14) of Algorithm 2. Afterward, the temporary batch is stored in set G. Once we have constructed a batch for all material types, we choose the best batch in G based on the sorting strategy and update the assignment array A as well as I. At the end of Algorithm 2, assignment array A stores a feasible solution to the considered problem.
Two decisions in Algorithm 2 depend on the sorting strategy used. We integrate four sorting strategies for the parts commonly used in studies with tardiness objectives, since they take into account the respective due dates:
• Earliest Due Date: First, we use the common earliest due date (EDD) rule to sort the parts by the ascending due dates of their orders.
• Latest Start Time: Using this rule, we sort parts according to their latest start times (LST), at which the processing of a part must begin in order to prevent production delays. The latest possible start time is determined by subtracting the processing time from the due date of each part (Mathirajan, Bhargav, and Ramachandran 2010). We calculate the processing time of each part as if it were assigned to a batch on its own. In case of ties, we again consider descending part heights and finally areas.
• Ratio Index: The ratio index (RI) is very similar to the latest start time priority rule. Additionally, the latest start time is divided by the area of the respective part (Mathirajan, Bhargav, and Ramachandran 2010).

• Apparent Tardiness Cost:
Lastly, we use the apparent tardiness cost (ATC) rule proposed by Vepsalainen and Morton (1987) to sort the parts. Since the ATC rule uses a look-ahead parameter to calculate the values, we can construct different start solutions by using several look-ahead values.
Besides the sorting of the parts, we also need to determine which batch to schedule in each iteration. For EDD, LST, and RI, we decide by the mean values of the respective criterion. For instance, for EDD we choose the batch with the smallest mean due date. For ATC, we use the batch ATC value, which is the sum of the ATC values of all parts in the respective batch (Perez, Fowler, and Carlyle 2005). The described procedure is performed for all sorting strategies and for multiple look-ahead values in ATC. Thus, at the end of the construction phase, there are several initial solutions, the best of which is selected for the improvement phase.
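The per-part priorities behind the four strategies can be sketched as follows. The ATC expression follows the common form of Vepsalainen and Morton's rule; the function names and the dictionary of keys are ours:

```python
import math

def atc_index(weight, proc, due, now, lookahead, avg_proc):
    """Apparent tardiness cost index: weighted shortest processing time
    (w/p) scaled by an exponential penalty on the slack, controlled by the
    look-ahead parameter (larger index = higher priority)."""
    slack = max(due - proc - now, 0.0)
    return (weight / proc) * math.exp(-slack / (lookahead * avg_proc))

def priority_keys(due, proc, area):
    """EDD, LST, and RI reduce to simple per-part keys
    (smaller value = higher priority)."""
    lst = due - proc                 # latest start time
    return {"EDD": due, "LST": lst, "RI": lst / area}

# A part with zero slack receives the full w/p priority under ATC.
print(atc_index(weight=2.0, proc=4.0, due=3.0, now=0.0,
                lookahead=2.0, avg_proc=4.0))           # 0.5
print(priority_keys(due=12.0, proc=4.0, area=2.0))
```

Running the construction with several look-ahead values for ATC, as described above, simply means evaluating `atc_index` with different `lookahead` arguments.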

Improvement phase
Algorithm 3 presents the pseudocode of the iterated local search. The procedure follows the familiar structure of an ILS (Lourenço, Martin, and Stützle 2019). However, to better explore the solution space, instead of a local search with a single neighborhood structure, we use a variable neighborhood descent (VND) with a pipe change step, as described in detail by Hansen et al. (2017). The termination criterion of the algorithm is a problem-size-dependent time criterion, as proposed in Ruiz and Stützle (2007). Each instance has |I| = I parts and |M| = M machines. According to this termination criterion, the algorithm stops after I · M · ξ seconds, where ξ is a scaling parameter. The variable neighborhood descent tries to find a better solution by sequentially exploring different neighborhoods. All neighborhood structures use best improvement as the search strategy because of its superior intensification capability (Duarte et al. 2018).
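The overall loop can be sketched generically. This mirrors the structure just described (a pipe-style VND nested in an ILS with a time limit), not the paper's exact pseudocode, and the acceptance criterion shown (restart the perturbation from the incumbent) is one common choice:

```python
import time

def iterated_local_search(initial, neighborhoods, perturb, cost, time_limit):
    """Generic ILS skeleton with a pipe-style VND: after an improvement the
    search stays in the current neighborhood; only a neighborhood that
    fails to improve advances the index."""
    start = time.time()
    best = current = initial
    while time.time() - start < time_limit:
        i = 0
        while i < len(neighborhoods):               # variable neighborhood descent
            candidate = neighborhoods[i](current)   # best-improvement move
            if candidate is not None and cost(candidate) < cost(current):
                current = candidate                 # pipe step: keep neighborhood i
            else:
                i += 1                              # next neighborhood structure
        if cost(current) < cost(best):
            best = current
        current = perturb(best)                     # escape the local optimum
    return best
```

The problem-size-dependent criterion from the text corresponds to calling the routine with `time_limit = I * M * xi`.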

Solution Representation and Neighborhood Structures
After establishing the basic concept of the metaheuristic, we now highlight its specific elements. First, we describe the solution representation, which we use in this phase to find improved solutions. The solution representation can be formally understood as a part permutation with separators. An example part permutation for nine parts is shown at the top of Figure 4(a). All indices and part IDs start at zero in this example.
As seen in the figure, there are two types of separators. The dashed separators indicate the end of a batch, while the solid separators represent a machine change. Consequently, in the example shown, there are five batches, two of which are scheduled on the first machine and the remaining three on the second machine. The first batch on the first machine contains Part 2, the second batch on Machine 1 contains Parts 4 and 5, and so on. The separators are represented internally with two additional arrays based on the maximum batch number from Section 4.2. If we assume for the present example that a maximum of four batches may be opened on each machine, we are able to describe each batch by the index of its last element in the permutation. These indices are marked with boxes in the part permutation and stored in the array 'End Indices' in Figure 4(a), while empty batches are marked with −1. Indices 0 to 3 belong to the batches on Machine 1, while indices 4 to 7 belong to Machine 2. It can be deduced that two of four batches on Machine 1 and three of four batches on Machine 2 are filled with parts. In addition, we can derive the batch allocation for each part and store the values in another array called 'Batch Allocation'. Figure 4(b) gives a graphic representation of the example solution, where the different colors indicate the different material types of the ordered parts. Based on this illustration, we explain the neighborhood structures for the VND. Examples are illustrated in Figure 5.
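Decoding this representation into batches per machine might look as follows. The machine-1 batches in the example match those named in the text ([2] and [4, 5]); the machine-2 contents are made up for illustration:

```python
def decode(permutation, end_indices, batches_per_machine):
    """Decode the permutation-with-separators representation: per potential
    batch, `end_indices` stores the permutation index of its last part
    (-1 = empty batch); consecutive blocks of `batches_per_machine` entries
    belong to one machine."""
    machines, start = [], 0
    for m in range(len(end_indices) // batches_per_machine):
        machine_batches = []
        for b in range(batches_per_machine):
            end = end_indices[m * batches_per_machine + b]
            if end != -1:                       # skip empty batches
                machine_batches.append(permutation[start:end + 1])
                start = end + 1
        machines.append(machine_batches)
    return machines

perm = [2, 4, 5, 0, 1, 3, 6, 7, 8]              # nine part IDs
ends = [0, 2, -1, -1, 4, 6, 8, -1]              # four batch slots per machine
print(decode(perm, ends, batches_per_machine=4))
# [[[2], [4, 5]], [[0, 1], [3, 6], [7, 8]]]
```

As in the figure, two of the four batch slots on Machine 1 and three of the four on Machine 2 are filled.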
Please note that the chosen illustration aims to appropriately illustrate the used neighborhoods and does not represent the actual packing of the individual parts. For all illustrated neighborhood moves, the resulting processing times may differ from the times the batches have in the initial schedule. For the batch-related moves, this is due to sequence-dependent setup times as well as different production coefficients on the available machines that influence the processing times. Regarding the part-related moves, the variation of the processing times is caused by the changing composition of the respective batches.

• Batch Swap Neighborhood:
A neighbor from this neighborhood structure is created by exchanging the positions of two batches.
The swap can be performed on one machine as well as across machines. For the latter, it is mandatory that the machines can process the materials of the respective swapped batches and that the area capacities of the machines are sufficient. Moreover, to avoid symmetry, a swap of batches on positions a and b is only valid if a < b holds. For an example of a cross-machine swap move, please see Figure 5(a).
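The validity conditions above can be sketched as a feasibility check; all names and the toy data below are our assumptions, not the paper's implementation.

```python
def swap_valid(a, b, batches, machine_of, can_process, batch_area, capacity):
    """Validity of a batch swap move: positions must satisfy a < b
    (symmetry breaking); for cross-machine swaps, each machine must
    support the incoming batch's material and offer enough build area."""
    if a >= b:                                   # symmetry: only a < b
        return False
    ma, mb = machine_of[a], machine_of[b]
    if ma == mb:
        return True                              # same-machine swap
    if not (can_process(ma, batches[b]) and can_process(mb, batches[a])):
        return False                             # material mismatch
    return (batch_area(batches[b]) <= capacity[ma]
            and batch_area(batches[a]) <= capacity[mb])

# toy data: batches as lists of (material, area) parts
batches = [[('u1', 4)], [('u1', 6)], [('u2', 3)]]
machine_of = [0, 0, 1]                           # batch position -> machine
allowed = {0: {'u1', 'u2'}, 1: {'u2'}}           # materials per machine
can_process = lambda m, batch: all(mat in allowed[m] for mat, _ in batch)
batch_area = lambda batch: sum(area for _, area in batch)
capacity = [10, 8]                               # build area per machine
```

For instance, swapping positions 0 and 2 is rejected here because the second machine cannot process material u1, while a same-machine swap of positions 0 and 1 is always feasible.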

• Batch Insertion Neighborhood:
The neighborhood of batch insertion is similar to the aforementioned neighborhood structure. Here, a batch is cut out of its current position and put into another position. Again, insertions across machines are possible, but they are only valid if the receiving machine is able to receive a batch, i.e. the number of batches on this machine does not exceed the maximum number of batches per machine. This can be verified by checking whether entries with the value −1 exist in the array 'End Indices' in the section of the machine under consideration (see Figure 4(a)). Further, the insertion move is only valid if the machine can process the batch in terms of area capacity and material requirements.

• Part Insertion Neighborhood:
Here, one part is removed from its original batch and reinserted into another batch that is capable of processing the respective part. Within this neighborhood, we only consider source batches that have at least two parts; otherwise, we would create an empty batch. A valid insertion move is shown in Figure 5(c).
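The receiving-machine check described for the insertion moves reduces to a scan of the machine's section of 'End Indices'; the function name is ours.

```python
def machine_can_receive(end_indices, machine, max_batches):
    """A machine can receive an inserted batch iff its section of the
    'End Indices' array still contains a -1 entry (an unused batch slot)."""
    lo = machine * max_batches
    return -1 in end_indices[lo:lo + max_batches]

# example array from Figure 4(a): M1 has two free slots, M2 has one
end_indices = [0, 2, -1, -1, 3, 4, 6, -1]
```

A machine whose section contains no −1 entry has reached the maximum number of batches and cannot receive an insertion.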

Perturbation and Acceptance Criterion
In ILS, perturbation moves are used to escape from local optima (Lourenço, Martin, and Stützle 2019). We use three different perturbation mechanisms, in which we also take into account the characteristics of good solutions to the considered problem:

• Batch Partition:
First, we consider the creation of a new batch by splitting an existing one. For this purpose, we choose a random batch from all batches that include at least two parts. Next, we sort the parts according to descending current tardiness; for this, we subtract the order due date of each part from the respective batch completion time. To partition the batch, we determine a random splitting point in the sorted sequence. All parts after the splitting point form the new batch at the end of the respective machine. With this sorting strategy, we explicitly account for the objective function: parts that are already tardy are more likely to remain in the batch, while parts that are currently not tardy are moved to the new batch.
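The batch partition step can be sketched as follows; the data layout (a list of machines, each a list of batches, each a list of parts) and all names are our assumptions.

```python
import random

def batch_partition(schedule, completion_time, due_date, rng):
    """Sketch of the batch partition perturbation: split one random batch
    with at least two parts.  Parts are sorted by descending current
    tardiness (batch completion time minus order due date), a random cut
    point is drawn, and the tail forms a new batch appended at the end
    of the same machine's sequence."""
    candidates = [(m, b) for m, machine in enumerate(schedule)
                  for b, batch in enumerate(machine) if len(batch) >= 2]
    m, b = rng.choice(candidates)
    batch = schedule[m][b]
    c = completion_time(m, b)                 # completion time of this batch
    batch.sort(key=lambda part: c - due_date[part], reverse=True)
    cut = rng.randrange(1, len(batch))        # both pieces stay non-empty
    schedule[m].append(batch[cut:])           # new batch at end of machine m
    del batch[cut:]

# toy run: one machine, one batch of three parts, completion time 12
rng = random.Random(1)
schedule = [[[1, 2, 3]]]
due = {1: 5, 2: 20, 3: 10}                    # part -> order due date
batch_partition(schedule, lambda m, b: 12, due, rng)
# part 1 (current tardiness 7) is most tardy and stays in the old batch
```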

• Batch Deletion:
Further, it can be advantageous to delete a batch and allocate its parts to other batches. For this mechanism, we collect all batches from machines that have at least two batches scheduled; the reason for this restriction is that we do not want to leave machines empty. We randomly select a batch from the batches gathered in the first step, giving a higher probability to batches with higher maximum tardiness. Once we have determined the batch to be deleted, the parts incorporated in this batch are sorted by ascending due dates and assigned to the remaining batches. By prioritizing batches with higher tardiness and sorting the parts before reassigning them, we try to guide the search to promising regions of the search space.
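A minimal sketch of this mechanism, under the same assumed data layout as above; the reinsertion routine is a placeholder, since a real implementation must respect material and area constraints.

```python
import random

def batch_deletion(schedule, max_tardiness, due_date, reinsert, rng):
    """Sketch of the batch deletion perturbation: collect batches from
    machines with >= 2 batches (so no machine ends up empty), draw one
    with probability increasing in its maximum tardiness, delete it, and
    reassign its parts in ascending due-date order."""
    candidates = [(m, b) for m, machine in enumerate(schedule)
                  if len(machine) >= 2 for b in range(len(machine))]
    weights = [1 + max_tardiness(m, b) for m, b in candidates]
    m, b = rng.choices(candidates, weights=weights, k=1)[0]
    parts = schedule[m].pop(b)
    for part in sorted(parts, key=lambda p: due_date[p]):
        reinsert(schedule, part)

def reinsert_first(schedule, part):
    # naive placeholder: real code must check material and area limits
    schedule[0][0].append(part)

# toy run: one machine with two batches
rng = random.Random(0)
schedule = [[[1], [2, 3]]]
due = {1: 5, 2: 8, 3: 2}
batch_deletion(schedule, lambda m, b: 0, due, reinsert_first, rng)
```

Whichever batch is drawn, all of its parts end up reassigned, so no part is lost and the machine keeps at least one batch.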

• Multiple Part Swap:
The last perturbation mechanism conducts several valid swap moves of parts. To retain a certain level of randomness in our search, we do not guide the swaps in this perturbation. The perturbation rate κ, multiplied by the number of parts in an instance, yields the total number of swap moves.
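The move count is a one-line computation; the rounding of non-integer products is our assumption, as the paper does not state it.

```python
def num_perturbation_swaps(kappa, n_parts):
    """Number of unguided part swap moves: the perturbation rate kappa
    multiplied by the instance's part count (rounding is our assumption)."""
    return round(kappa * n_parts)
```

With the tuned value κ = 0.15 from the parameter test, a 60-part instance would receive 9 swap moves.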
All perturbation mechanisms are applied with identical probabilities. Subsequently, the VND performs local searches within the described neighborhood structures for the newly perturbed solution. If the resulting local optimum is worse than the current solution s, a criterion must determine how the search continues. For this purpose, we use a combined acceptance criterion derived from the works of Ruiz and Stützle (2007) and Lourenço, Martin, and Stützle (2019). The former accepts non-improving solutions with a certain probability by using an acceptance criterion based on simulated annealing. The criterion makes use of a temperature T, which is derived from the suggestions of Osman and Potts (1989). Since the temperature in Ruiz and Stützle (2007) is originally proposed for the permutation flowshop scheduling problem, we adapt the calculation in Equation (24): the input parameter T_Base is multiplied by the average processing time of a batch and, in accordance with Ruiz and Stützle (2007), the result is divided by 10 to yield T. Additionally, we use resets of the current solution to avoid ineffective intensification of the search (Lourenço, Martin, and Stützle 2019). If no new best solution is found within i_last iterations, the current solution is reset to the best known solution. Afterward, the algorithm continues from this solution. Based on these preliminary considerations, the function AcceptSolution(s′, s, i, T, i_last) can be defined according to Algorithm 4. We represent the objective value of a solution s by f(s). Solutions that improve the objective value are always accepted. For non-improving solutions, the solution is either reset to the best solution or accepted as the new current solution with the probability described in the algorithm. Random() denotes a random number that is uniformly distributed in [0, 1] (Ruiz and Stützle 2007).
Algorithm 4: AcceptSolution(s′, s, i, T, i_last)
Input: new solution s′, current solution s, current counter i, temperature T, reset counter i_last
Output: current solution s, updated counter i
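The combined criterion can be sketched as below. The simulated-annealing probability exp(−Δ/T) follows the common form of Ruiz and Stützle (2007); the exact counter handling of Algorithm 4 is not reproduced in this excerpt, so our version is a simplification.

```python
import math
import random

def temperature(t_base, avg_batch_proc_time):
    """Adapted temperature of Equation (24): T_Base times the average
    processing time of a batch, divided by 10."""
    return t_base * avg_batch_proc_time / 10

def accept_solution(s_new, s_cur, s_best, f, i, T, i_last, rng=random):
    """Sketch of the combined acceptance criterion (counter handling is
    our simplification).  Improving solutions are always accepted; else
    the search resets to the best known solution after i_last iterations
    without a new best, or accepts the worse solution with a
    simulated-annealing probability.  Returns (solution, counter)."""
    if f(s_new) < f(s_cur):
        return s_new, (0 if f(s_new) < f(s_best) else i + 1)
    if i >= i_last:
        return s_best, 0                       # reset to best known solution
    if rng.random() <= math.exp(-(f(s_new) - f(s_cur)) / T):
        return s_new, i + 1                    # accept non-improving solution
    return s_cur, i + 1
```

With the tuned parameters T_Base = 0.1 and an average batch processing time of 50, the temperature would be 0.5.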

Test data and parameter tuning
In this section, we report the results of comprehensive computational tests to evaluate the presented solution methods. For this purpose, we solve the MIP model (2)–(17) with Gurobi 9.5.1 and use Python for implementing the proposed ILS heuristic. The test data is randomly generated and based on data from the literature. We use the part data from Kucukkoc (2019), while the machine data is derived from Rohaninejad et al. (2022). The top of Table 2 presents the different parameter levels we use to generate the test data. The determination of due dates for each order is adapted from similar approaches in the literature (see, e.g. Balasubramanian et al. 2004). The expected makespan C̄_max is calculated based on the average values of the relevant parameters; all average parameter values used are displayed in the formula with bars. The first part of the term determines the processing time necessary for height production and setups, while the second part calculates the time needed to produce the required volume of the parts. Using the presented parameter levels, the test data results in a total of 216 problem configurations, for each of which we create 10 individual instances. The proposed ILS algorithm requires the tuning of three parameters (see Algorithm 3). At the bottom of Table 2, we list the values considered for the parameter test. We create a full factorial test design with all possible combinations of the described parameters. From each problem configuration, one instance is solved with three seed values and a time scaling parameter of ξ = 0.06 seconds. Consequently, we conduct a total of 23,328 runs. We average the results over the different seed values and calculate the relative percentage deviation (RPD) according to Ruiz and Stützle (2007) in (25).
Herein, f(s) represents the objective value for a specific parameter set, while f(s)_best indicates the best objective value over all parameter sets. RPD values are determined for each test instance.
To avoid a high impact of outliers, we restrict the RPD to at most 100%. After this adjustment, we select the parameter combination that results in the best average RPD over all instances. Accordingly, we set the parameters i_last = 10, κ = 0.15, and T_Base = 0.1, which result in an overall RPD value of 4.94%.
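The capped RPD computation is straightforward; note that it presumes f(s)_best > 0, a limitation the comparison section returns to.

```python
def rpd(f_s, f_best, cap=100.0):
    """Relative percentage deviation following Ruiz and Stuetzle (2007),
    as in (25), capped to limit the impact of outliers.
    Requires f_best > 0."""
    return min(100.0 * (f_s - f_best) / f_best, cap)
```

For example, an objective of 110 against a best value of 100 gives an RPD of 10%, while a far-off value of 500 is capped at 100%.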

Test results
In this section, we analyze computational results based on the generated data. For the MIP, we use the standard settings of Gurobi, a maximum of six threads, and a time limit of 3,600 seconds. In the test runs of the ILS, we use the parameter settings resulting from the parameter test in the previous section. Further, we apply ξ = 0.5 seconds to control the time limit of the metaheuristic. All tests are run with five different seed values on an Intel(R) Xeon(R) CPU E5-2630 v2 with a 2.6 GHz clock speed and 384 GB RAM.

Results on Lower Bounds
We start our analyses with a comparison of the lower bounds. For this purpose, we consider the lower bound resulting from the MIP model with the given time limit (LB_MIP) and both decomposition-based lower bounding methods, LB_1 and LB_2, from Section 3.3. We compare the different methods by counting the number of times a method yields the best lower bound LB_max = max{LB_MIP, LB_1, LB_2}. Table 3 illustrates the results of these comparisons categorised by the number of parts and the number of machines. The table shows that LB_1 has the best performance across all instances: it provides the best lower bound in 1,030 instances, while LB_2 has the worst performance, yielding the best lower bound in only 52 of 2,160 test instances. For a total of 978 instances, all methods yield equal lower bound values, but as shown in the last column of the table, in all of these cases LB_max = 0. In summary, LB_1 has a clear advantage over the other lower bounds. Still, many instances end with LB_max = 0, which reinforces the need for stronger bounds to make better statements about solution quality. For further analyses, we will use LB_max to calculate optimality gaps.

Performance of Constructive Heuristics
As stated in Section 4.3, the construction phase provides several initial solutions based on different sorting strategies. We analyze the objective values of the approaches Earliest Due Date (EDD), Latest Start Time (LST), Ratio Index (RI), and Apparent Tardiness Cost (ATC) with the use of RPD values given in (25). Figure 6 presents boxplots of the different approaches. Since ATC incorporates a look-ahead parameter, we decided to show only the best ATC value in the figure; as parameter values, we used ten different values spanning from 0.1 to 5.0. It can be seen from the figure that RI performs poorly in comparison to the remaining three sorting strategies. The median RPD value for RI is 57.9%, while the other median values are below 30.0%. Further, we see that sorting by ATC values provides by far the best solutions, with a median RPD value of 0.0%. Because of the negligible runtimes of the construction phase, the ILS algorithm still uses all sorting strategies and takes the best solution found as the starting solution.

Even for the small problem instances, the MIP model is only able to find optimal solutions for a small fraction of instances. For |I| = 20, the MIP model can find an optimal solution in up to 31.25% of the instances. Further, we can observe a growing number of optimally solved instances as |M| increases. The MIP gaps reported in column Gap MIP also decrease when more machines are available. This behavior is typical for machine scheduling and can be observed in a variety of problem configurations. For |I| = 30, we recognize similar trends for both the number of optimal solutions and the MIP gap values. Surprisingly, the average computation time for problems with |I| = 30 and |M| = 5 is only 3.17 seconds, while for problems with |I| = 20 and |M| = 5 the average computation time is 175.38 seconds. However, the number of instances solved to optimality is rather small, with 11 and 21 instances, respectively, which might be a reason for the differing running times. Overall, the results underline the need for heuristic methods that can achieve good solutions in reasonable computation time.

Comparison of Solution Procedures
We further compare the MIP model and the described ILS. When evaluating tardiness objectives, RPD has a major drawback due to its definition: whenever the best found solution s has an objective value f(s) = 0, we cannot determine a proper RPD for a solution s′ with f(s′) > 0. Regardless of how far an objective value f(s′) is from the best objective value f(s) = 0, the RPD of s′ would always be RPD(s′) = ∞ (Bülbül 2011). Therefore, we use two variants of RPD. For the original RPD based on (25), we exclude all instances from the RPD calculation where only one of the two solution methods achieves a solution with an objective value of zero. Furthermore, we limit the largest possible RPD to 100% to avoid bias in the results. Additionally, we define an adjusted RPD (aRPD) according to (26), where we use the gap between the worst and the best solution found in the denominator. Since we ran the ILS with different seed values, we report the minimum and average values for both RPD variants.
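Since Equation (26) is not reproduced in this excerpt, the following is a sketch consistent with its description (worst-best objective gap in the denominator), which stays well-defined even when the best objective value is zero:

```python
def arpd(f_s, f_best, f_worst):
    """Adjusted RPD as described for (26): deviation from the best found
    solution, scaled by the worst-best objective gap.  Unlike the plain
    RPD, this remains well-defined when f_best = 0."""
    if f_worst == f_best:
        return 0.0          # all compared solutions coincide
    return 100.0 * (f_s - f_best) / (f_worst - f_best)
```

For instance, with a best value of 0 and a worst value of 10, a solution with objective 5 gets an aRPD of 50%, where the plain RPD would be undefined.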
Table 5 categorises the results for different parameters of the test data, while the boxplots in Figure 7 illustrate the performance categorised by part number and machine number. For aRPD MIP, we observe growing box sizes with increasing |I| and |M|. This behavior indicates worse performance when problem sizes increase. The aRPD values of the ILS show worse performance for small instances compared to aRPD MIP, whereas the ILS outperforms the MIP solutions for larger instances (e.g. for |I| = 90 and 120). In contrast, a large number of parallel machines |M| does not have such a big impact.
Analyzing the results in Table 5, we first notice high MIP gaps across all categories. The tests result in an average MIP gap of 65%, which indicates that the lower bounds are still weak. Looking at the individual categories, we see that the gaps increase as the number of orders and parts increases. In contrast, a decreasing gap can be observed for an increasing number of machines. The number of material types considered has no impact on the magnitude of the MIP gap. However, the levels of the last two parameters, PT and DR, influence the complexity of the problem instances. The MIP gap for instances with fewer tardy orders is significantly smaller than the average gap reported for instances with a higher percentage. For instances with a smaller due date range DR, the MIP gap is higher than for a broader range. Both observations are plausible: loose due dates and a higher due date range tend to produce more orders without delays, which may lead to problem instances that are easier to solve. Overall, as we can only solve a few instances to optimality, the reported computation time is high, with an average of 3,116.27 seconds.
When comparing the RPDs for both approaches, we observe a superior performance of the ILS when considering the minimum values. Across all categories and their respective levels, the reported RPD values are significantly lower than the RPD resulting from the MIP model. Looking at the average values of the ILS, we observe a drop in performance in the RPDs, which are nevertheless on the same level as the MIP results in all categories. While the MIP model finds better solutions for small instances, the ILS performs on average significantly better for larger instances (e.g. with an increasing number of parts |I|). Due to the selected parameter setting with respect to the termination criterion of the ILS, significantly lower computation times than those of the MIP model can be achieved. The average computation time of the ILS lies around two minutes, whereas the MIP model has an average computation time of 50 minutes. Thus, we can state that the ILS leads to up to 6% better RPDs in only 4% of the computation time of the MIP. The advantage of the proposed ILS is also shown in the comparison of aRPD values: focusing on the minimum values aRPD min ILS, the ILS can always meet or improve the MIP results.
For the average values aRPD avg ILS, we again see better results for instances with a large number of parts using the ILS, while the MIP performs better for instances with a smaller number of parts. Comparing both RPD variants for average values, we see that for the smaller instances the gaps between MIP and ILS are reduced, while there are increased gaps for the larger instances. Consequently, the ILS results for small instances are only slightly worse than the best results, while the MIP results are further away from the best solutions for larger instances. Looking at the columns #best for MIP and ILS, we see that the MIP model leads to the best solution in 1,411 of 2,160 instances, whereas the ILS provides the best found solution in 1,625 cases. In summary, similar or better objective values compared to the MIP results can be achieved by the ILS with significantly lower computational effort. While the MIP yields better solutions for small instances, the ILS is suitable for solving large instances, especially regarding the number of parts. Since the average computation time of the ILS is only 4% of that of the MIP model, it is also easily possible to run the ILS several times and thus achieve the best values. These results underline the advantages of the proposed ILS.

Qualitative Analysis of an Illustrative Example
To analyze problem characteristics and to give insights for decision making, we study the solution behavior within three different scenarios. First, the base scenario represents the considered problem as defined in Section 3.1. In the second scenario, we assume that batches can only be formed from parts belonging to the same order. This scenario can occur when customers do not want their products to be manufactured with others for safety or quality reasons. To realize this scenario, we extend model (2)–(17) with an additional constraint forbidding parts from different orders to be allocated to the same batch. Third, in the last scenario, we compare the objective function of total weighted tardiness of orders with the common objective of total weighted tardiness of parts. This allows for an evaluation of whether a holistic view of the entire order is relevant for planning. We achieve this scenario by reformulating Equations (2), (3), and (4) to calculate the completion time and tardiness of each part. In all scenarios, we analyze the impact of the adjustments on the tardiness of orders.
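The single-order-batch restriction of the second scenario can be sketched as a pairwise conflict constraint. The variable names below are our assumptions, since the notation of model (2)–(17) is not reproduced in this excerpt:

```latex
% Hypothetical notation: x_{ib} = 1 iff part i is assigned to batch b;
% o(i) denotes the order to which part i belongs.
x_{ib} + x_{jb} \leq 1 \qquad \forall b,\; \forall i, j \in I:\ o(i) \neq o(j)
```

Any formulation that prevents two parts of different orders from sharing a batch realizes the scenario equivalently.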
Since we can only make precise statements if optimal schedules are available, we decide to conduct a qualitative evaluation of these scenarios using an example instance from the known test data. In this instance, we have four orders, each incorporating five parts. Two machines are available to produce all 20 parts. At most two material types are available, while the material requirement of each part is specified within its order. In the considered instance, all 20 parts require the same material type.³ Figure 8 illustrates the resulting schedules of the instance together with the order due dates and completion times of orders for all three scenarios. The different colors represent the orders, while the presence of a pattern means that different orders are processed together in one batch. In scenarios 1 and 3, parts from different orders are produced together, while in scenario 2 this is prohibited. Compared to the base scenario, the changed objective function in scenario 3 leads to situations where batches are processed with fewer parts in order to meet the deadlines for the parts. Consequently, the total number of batches increases and thus setup costs are higher as well. Scenario 2 also requires more batches than the base scenario. This is a result of the interaction between the requirement for single-order batches and the existing due dates. To meet the latter as best as possible, orders tend to be split among several batches even if they have the same material requirement. Table 6 presents the detailed schedule information of the different scenarios. In all scenarios, only order 2 is tardy, since this order has the least penalty costs. Comparing the scenarios, we can see that the tardiness is reduced significantly by considering order due dates instead of part-dependent due dates. Further, allowing cross-order batches can also substantially reduce tardiness.

Figure 8. (a) Resulting schedule with model (2)–(17); (b) resulting schedule with model (2)–(17) and single-order batches (Scenario 2); (c) resulting schedule with total weighted tardiness of parts (Scenario 3).

Insights and limitations
Based on the analyses and explanations given in the previous sections, we highlight the major insights as well as limitations corresponding to the considered problem structure and to the proposed metaheuristic:

• Similar to other planning problems in the area of scheduling, the number of parts to be scheduled represents a major complexity driver of the considered problem. We derive this from the numbers reported in Table 5, in which the MIP gaps increase drastically with the growing number of orders and parts.
• The comparisons between the MIP model and the ILS in Section 5 indicate that the presented metaheuristic is able to achieve high-quality results with reasonable computational effort. Thus, the proposed ILS represents an adequate solution method for the considered planning problem in practice. However, we observe deviations between the minimum and average objective values resulting from the ILS. Hence, further adjustments may be necessary to ensure more robust planning.
• With the qualitative analysis, we show the importance of considering an order-related objective as well as including cross-order batching to reduce tardiness. If all parts of an order are required simultaneously by the customer, the considered objective of total weighted tardiness of orders can yield better solutions than a part-related objective.

Summary and outlook
This paper considers the customer order scheduling problem on additive manufacturing machines. For this problem, we developed a mixed-integer programming (MIP) model inspired by previous formulations from the literature. Due to the complexity of the problem, we presented a heuristic algorithm that follows the guidelines of an iterated local search and combines several different neighborhood structures into a variable neighborhood descent. Based on a comprehensive computational study, we were able to demonstrate the efficiency of the proposed metaheuristic compared to the MIP model. For this purpose, we used extensive test data that adapts machine and part data from the literature.
Regarding prospects for future research, there are several aspects worthy of investigation. First, there is a high need for adequate lower bounds for the considered problem. For example, the linear relaxation may be improved by reformulating the MIP model to obtain better lower bounds; of course, other techniques can also be used to provide lower bounds for the problem under consideration. Subsequently, the solution quality of the proposed algorithms could be evaluated in more detail. In addition to lower bounds, other solution approaches such as population-based procedures can be considered to solve the problem and its extensions. The most important extension is probably the inclusion of part dimensions in the nesting problem: the one-dimensional bin packing problem considered here changes to a two-dimensional problem, which makes the problem even more difficult to solve. For this extension, some studies already present model formulations and heuristic approaches that could be extended to customer order scheduling.
Using an example, we showed in the qualitative analysis that the consideration of order-related objectives has a high impact on the resulting schedule. This observation should be investigated in future studies to provide further support for decision makers.

Notes on contributors
Dr. Janis S. Neufeld is a senior researcher at the Chair of Industrial Management and leader of the research group Operations Management at Technische Universität Dresden, Germany, where he also received his Ph.D. and habilitation. In 2020/2021, he served as an interim professor at the Chair of Management Science, Otto-von-Guericke-Universität Magdeburg. His major research interests are the development and application of optimisation methods for complex planning problems in manufacturing and logistics. He has worked on several operations research projects with industry partners, covering topics such as railway crew scheduling, predictive maintenance, scheduling in cellular manufacturing, and lot sizing.
Professor Udo Buscher is full professor for Industrial Management at Technische Universität Dresden, Germany, where he teaches courses on operations management and operations research. He studied at the University of Göttingen, Germany, received a Ph.D. from Technische Universität Dresden, and finished his habilitation at the University of Würzburg, Germany. His research interests include production planning and control, supply chain management, and scheduling with a strong focus on operations research, decision science, applied algorithms, and game theory. He carried out a large number of projects with companies in which the application of OR methods plays a central role. His research results are published in leading international academic journals.

Figure 2. Schematic description of the planning problem.

Figure 3. Flowchart of the general heuristic solution strategy.

Algorithm 2: ConstructiveHeuristic
Input: set of parts I, set of machines M, set of materials U, sorting strategy
Output: assignment array A
1  while I ≠ {} do
2    determine next available machine k
3    G ← {}, A ← [ ]
4    foreach u ∈ U do
5      I_u ← {i : u ∈ U_i^Part, a_i ≤ α_k, h_i ≤ γ_k, i ∈ I}
6      sort parts in I_u based on sorting strategy
7      a_Temp ← 0
8      batch_Temp ← {}
9      while a_Temp < α_k do
10       move first part i from I_u to batch_Temp
11       update a_Temp ← a_Temp + a_i
12     if a_Temp > α_k then
13       move last added part i from batch_Temp back to I_u
14     add batch_Temp to G
15   choose best batch batch_best from G based on sorting strategy
16   update A with batch_best
17   remove parts in batch_best from I
18 return A

…associated orders. In case of equal due dates, we sort by descending part heights and finally areas.
• Latest Start Time:

Figure 4. Example solution representation: (a) numerical solution representation; (b) graphical solution representation.

Figure 5. Example moves from neighborhood structures following Queiroga et al. (2021): (a) example move in batch swap neighborhood (swap B2 and B6); (b) example move in batch insertion neighborhood (insert B6 after B0); (c) example valid move in part insertion neighborhood (insert Part 5 in B5).
Figure 5(b) illustrates an example of batch insertion. To avoid overlapping with the batch swap neighborhood, insertions from position k to position l are only allowed if |k − l| > 1 holds (Queiroga et al. 2021).
• Part Insertion Neighborhood:

Figure 7. Boxplots of relative percentage deviations segmented by instance parameters: (a) aRPD MIP by |I|; (b) aRPD min ILS by |I|; (c) aRPD avg ILS.

Table 1. Summary of the used notation.

Table 2. Parameters and their levels for test data and parameter tuning.

Table 3. Comparison of lower bounds categorised by part number and machine number.
Table 4 displays the aggregated results of the MIP model for small instances, categorised by the number of parts |I| and the number of machines |M|. The table shows with columns res. and #Ins. how many problems could either be solved to optimality or resulted only in a feasible solution. Columns Gap MIP and RT present the MIP gap and the average runtime, respectively.

Table 4. MIP results on small instances.

Table 5. Test results of solution procedures categorised by instance parameters. Notation: #best: number of best solutions found; Gap MIP: MIP gap to LB_max; RPD: RPD to best found solution; aRPD: aRPD to best found solution; RT: runtime; RPD min: average RPD values based on best ILS solution; RPD avg: average RPD values based on average ILS solutions; aRPD min: average aRPD values based on best ILS solution; aRPD avg: average aRPD values based on average ILS solutions.
In contrast, aRPD min ILS displays consistently superior performance across all categories of |I| and |M|. A rising number of parts |I| makes problems more difficult to solve using the MIP model.

Table 6. Evaluation of different solution scenarios.