The impact of incentive-based programmes on job-shop scheduling with variable machine speeds

Given the high demand for energy in the manufacturing industry and the growing use of renewable but volatile energy sources, it becomes increasingly important to coordinate production and energy availability. With the help of incentive-based programmes, grid operators can incentivise consumers to adjust power demand in critical situations such that grid stability is not threatened. On the consumer side, energy-efficient scheduling models can be used to make energy consumption more flexible. This paper proposes a bi-objective job-shop scheduling problem with variable machine speeds that aims at minimising the total energy consumption and total weighted tardiness simultaneously. We use a genetic algorithm to solve the model and derive Pareto frontiers to analyse the trade-off between both conflicting objectives. We gain insights into how incentive-based programmes can be integrated into machine scheduling models and analyse the potential interdependencies and benefits that result from this integration.


Introduction
En route to a low-emission energy supply, electricity needs to be generated in a greenhouse gas-neutral way in the future. The German government, for example, set out the objective that at least 80% of the electricity consumed in Germany should come from renewable sources by 2030 (Bundespresseamt 2022). Renewable energies include hydropower, solar and wind energy, biomass, and geothermal energy. These energy sources cause very low overall greenhouse gas emissions, both from tapping into the energy source and from generating electricity, making their energy production more climate-friendly than that of traditional sources of energy. In addition, many facilities generating electricity from renewable sources can be deployed in many locations, which enables countries to reduce their dependence on energy imports (Evans, Strezov, and Evans 2009; Østergaard et al. 2020).
Taking the generation of electricity from wind power as an example, two central difficulties arise. First, there is the risk of a regional imbalance between the locations of power generation, which may often be coastal or hilly regions, and the demand for energy, which may occur in other parts of the country. This exposes the grid infrastructure to intense stress. Secondly, the availability of energy from renewable sources is subject to natural fluctuations (Bartels et al. 2006). Even though facilities for storing energy are becoming both cheaper and larger, enabling grid operators and consumers to increase energy availability, a proactively planned adjustment of consumption behaviour to the available energy is still necessary: to maintain grid stability at a reasonable cost, an increasing share of volatile renewable energy generators needs to be reflected in a planned adaptation of consumption behaviour (Alizadeh et al. 2016). A more flexible consumption of energy requires a forward-looking, market-price-driven adjustment of value-added processes to forecasted energy generation that is priced into short-term energy markets (Haider, See, and Elmenreich 2016). (CONTACT: Marc Füchtenhans, fuechtenhans@pscm.tu-darmstadt.de, Institute of Production and Supply Chain Management, Technical University of Darmstadt, Hochschulstraße 1, 64289 Darmstadt, Germany. Supplemental data for this article can be accessed online at https://doi.org/10.1080/00207543.2023.2266765.)
To enable load adjustment in energy consumption, different demand-side management approaches can be used (Hussain et al. 2015). Especially the industrial sector offers great potential for load adjustment, as it is one of the main energy consumers. For example, the industrial sector was responsible for about 25.5% of the total energy consumption in the European Union and 35% in the United States in 2021 (European Commission Eurostat 2022; EIA 2022). There are different ways of adjusting the energy consumption behaviour: First, grid operators can access aggregates on the consumer side that are not critical for daily operations or that do not need to be continuously supplied with energy due to technical properties. In the event of bottlenecks in power generation, they can switch such consuming equipment off and on again. Equipment not affected by the intervention is still supplied with power (Pallonetto et al. 2020). A second option is so-called demand response programmes (DRPs) that incentivise consumers to adjust their energy demand in a value-added manner by either changing electricity prices or granting incentive payments (Weitzel and Glock 2019). DRPs hand over the responsibility for adjusting energy demand to the consumer. This helps to stabilise the power grid and ensure energy supply (Lee et al. 2013). If the energy demand in the grid increases, the grid operator can use appropriate measures to reduce the energy demand instead of increasing energy production. Consumers, usually those with high energy consumption (like aluminium smelters), can voluntarily agree to participate in a DRP (Burns et al. 2020).
DRPs can generally be divided into incentive-based programmes (IBPs) and price-based programmes (PBPs). We also distinguish between the power demand or load per unit of time and the (total) energy demand, with power being the amount of energy consumed per unit of time. In other words, the Total Energy Consumption (TEC) is the integral of the power demand over time. IBPs provide incentive payments to energy consumers if they reduce their power demand in response to energy shortages (Sun and Li 2014). With IBPs, it is not fully transparent to the consumer whether and when such an event will occur (Weitzel and Glock 2019). Therefore, consumers need to identify whether flexibility in their energy consumption exists and for how long. If power demand has to be adjusted in the short term, the grid operator and energy consumer negotiate the necessary and maximum possible adjustment of power demand. The incentive payments in IBPs are thereby based on the amount of adjusted load. In the case of PBPs, the grid operator adjusts the price of electricity over time such that energy consumers change their demand patterns accordingly. These adjustments are based on real-time electricity costs mapped via dynamic price tariffs. Consequently, in times of high energy prices, consumers reduce the utilisation of their facilities and, at the same time, their total energy costs (Hussain et al. 2015). In comparison, IBPs limit the load on the consumption side in critical periods to a previously defined quantity.
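The power/energy distinction can be made concrete: for a piecewise-constant load profile, the integral of power over time reduces to a sum of power values multiplied by the interval length. A minimal sketch with a hypothetical load curve:

```python
def total_energy_consumption(power_profile, dt=1.0):
    """TEC as the integral of the power demand over time.

    For a piecewise-constant profile (one power value per interval of
    length dt), the integral reduces to sum(P_t * dt).
    """
    return sum(p * dt for p in power_profile)

# Hypothetical load curve in kW over four one-hour intervals:
profile = [10.0, 25.0, 25.0, 5.0]
tec = total_energy_consumption(profile)  # 65.0 kWh
```

An IBP constrains individual values of the profile (the load in critical intervals), whereas the TEC objective aggregates the whole profile.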
In industrial production, however, energy is only one of several input factors, along with materials, personnel, and the necessary plant technology. To enable industrial companies to integrate their entire flexibility potential into a dynamic, self-stabilising energy system, an integrated production and energy planning approach is required. A first approach for integrated production and energy planning can be provided by machine scheduling. Machine scheduling allows minimising not only economic but also energy-related objectives, and it therefore provides an opportunity to identify the potential benefits of DRPs in the context of industrial production (cf. Fernandes et al. 2022; Gahm et al. 2016).
The aim of this study is to investigate how IBPs can be incorporated into machine scheduling and which potential benefits may result from this integration. For this purpose, two machine characteristics are assumed to influence energy consumption in a general job-shop (JS) scheduling problem: the state of the machine (the machine can be idle or producing) and the machine speed. The problem considered here is a JS scheduling problem with variable Machine Speed (JSMS) (Salido et al. 2013). We consider a bi-objective JSMS that minimises the Total Weighted Tardiness (TWT) and the TEC simultaneously. For this bi-objective optimisation problem, we study Pareto frontiers that allow us to understand the trade-off between the (conflicting) energy and tardiness objectives in the context of IBPs for which we specify different load constraints. Based on that, we investigate how the use of variable machine speeds influences both objectives, which permits us to understand when it is worthwhile to participate in an IBP. We also systematically discuss the effects of IBPs in combination with variable machine speeds on machine scheduling and the associated power consumption using basic examples and explain the fundamental differences between IBPs and PBPs. To the best of our knowledge, existing JSMS models with different machine modes do not consider IBPs. This paper therefore contributes to research on industrial consumers participating in IBPs by using a scheduling approach with a discrete set of processing rates. The results can be used by manufacturers to identify potential benefits for their production processes and to optimise their energy consumption patterns.
The remainder of this paper is organised as follows. Section 2 reviews DR programmes and JSMS models as well as solution approaches relevant to this paper. A brief problem description and all relevant assumptions of the underlying model are presented in Section 3. Section 4 explains the genetic algorithm (GA) used for solving the proposed model and compares its performance to other solution approaches in a computational study. Section 5 discusses the investigated IBPs and presents the experimental design and the computational results. A conclusion and future research directions are provided in Section 6. Additionally, a tabular review of all scientific papers dealing with JS or flexible JS (FJS) models that aim at reducing energy consumption, subject to additional incentive systems, a discussion of the sensitivity analysis performed for the GA, and tables of the test data considered in Section 5 are provided in an Online Supplement to this paper.

Literature review
Two research streams are of relevance to this paper: works dealing with machine scheduling considering variable machine speeds, and works on DRPs, especially research that investigates IBPs in the context of machine scheduling. This section briefly reviews both lines of research. Subsection 2.1 concentrates mainly on IBPs in the context of scheduling problems, and Subsection 2.2 outlines existing JSMS models. Due to the extensive scientific literature that exists in both fields, only works that consider JS and FJS models that allow variable machine speeds and that consider energy consumption, energy costs, or associated emissions in a scheduling context are reviewed here. Other model formulations are only considered as examples where appropriate.

Machine scheduling under DRPs
Three types of power demand are of relevance for industry: (1) If the consumer has essential appliances that must be supplied with electricity under all circumstances, this is referred to as important loads. (2) Power demand that may be reduced is called controllable loads. (3) If a (short) interruption in the power supply does not impact the production processes, this is referred to as curtailable loads (Tang, Xu, and Chen 2010). DRPs incentivise consumers to adjust their power demand, which affects the power grid and the overall load curve. Four measures can be distinguished in this context: (1) Customers can shift power demand from critical to less critical periods without reducing the total power demand (referred to as load deferral or load shifting). (2) Without shifting demand to other periods, consumers can reduce their power demand in critical periods (referred to as load curtailment or load shedding). (3) A mixture of the first two types is load shaping or load balancing. (4) Unlike the first three options, consumers can use onsite generation, which usually leads to only minor changes in the companies' power demands; it may, however, result in a lower power demand from the grid (Albadi and El-Saadany 2007; Cui and Zhou 2018).
DRPs enable grid operators to stabilise the power grid by adjusting demand to supply to better respond to the volatility of electricity generation. Demand response events usually consist of three stages: First, a DR signal is transmitted from the grid operator to the energy consumer. If the consumer wishes to participate, either a new production plan needs to be created, or already planned production sequences that consider an existing flexibility are implemented. Here, too, onsite generation may be used to obtain the required energy from an alternative source. Second, the DRP becomes effective, and the power demand is ideally adjusted by the desired amount. Finally, a release signal is issued. In the recovery phase following the release signal, the customer switches back to normal operations, which again influences the power demand (Coe, Ott, and Pratt 2010).
Depending on the energy availability and the overall power demand, a distinction can be made between on-peak and off-peak periods. PBPs use dynamic pricing tariffs and charge high energy prices during on-peak periods and low energy prices during off-peak periods to flatten the demand curve. Due to a transparent pricing scheme, consumers can adjust their power demand in advance and benefit from price differences (Cui and Zhou 2018; Eissa 2011). This adjustment in energy consumption, driven by the energy tariff, is also called passive demand response. Here, an indirect service is provided to the grid operator because no direct request or interaction between the actors is necessary (Pallonetto et al. 2020). For time-dependent energy prices, time-of-use (TOU) tariffs, critical peak pricing, or real-time pricing are often investigated, with TOU being the most common mechanism studied in the scheduling literature when considering DRPs (Ashok 2006; Babu and Ashok 2008; Liu and Huang 2014; Moon and Park 2014). Several earlier works investigated scheduling models dealing with PBPs. The literature on machine scheduling with variable machine speeds has almost exclusively investigated TOU pricing schemes (cf. Table 1 in the Online Supplement in Appendix A).
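The PBP mechanism can be illustrated with a small sketch: under a TOU tariff, the total energy cost is the load-weighted sum of interval prices, so shifting load from on-peak to off-peak intervals lowers cost without reducing total consumption. The tariff and load values below are hypothetical:

```python
def tou_energy_cost(load, price):
    """Energy cost under a time-of-use tariff: each interval's load
    (e.g. in kWh) is billed at that interval's price."""
    return sum(l * p for l, p in zip(load, price))

price = [0.10, 0.30, 0.30, 0.10]   # off-peak / on-peak / on-peak / off-peak
flat = [20, 20, 20, 20]            # unshifted load
shifted = [30, 10, 10, 30]         # same total load, deferred to off-peak
# The shifted profile is cheaper although total consumption is unchanged:
# tou_energy_cost(shifted, price) < tou_energy_cost(flat, price)
```

Under an IBP, by contrast, the critical intervals would carry hard load bounds rather than higher prices.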
IBPs are active DRPs, as the users change their consumption behaviour only in response to specific requests of the grid operator. Participation in IBPs often entails economically unfavourable situations with increased operating costs on the consumer side, which must be compensated through incentive payments from the grid operator. IBPs can further be differentiated as follows: In direct load control, the grid operator can control the facilities of specific participants, and the participants receive payments in return. In the case of curtailable load, consumers are incentivised to turn off predefined power demands. Under demand-side bidding, consumers can bid on the wholesale market or accept to curtail electricity demand depending on the market price. In the capacity market, participants offer power demand reductions if this is necessary for grid stability. The corresponding payment depends on how much the peak load can be reduced. Using emergency services, incentives are offered to reduce power demand in times of reserve shortage. In this case, large consumers receive a signal for power demand reduction. Ancillary services, in turn, allow participants to offer a reduction in power demand as an operating reserve, often for short periods of time (Albadi and El-Saadany 2007; Cui and Zhou 2018; Pallonetto et al. 2020; Paulus and Borggrefe 2011).
Machine scheduling allows considering both the total energy consumption over the planning period and the power consumption in specific time intervals, in addition to other machine-related workflows. It is therefore possible to consider, e.g. price signals, and to influence the power demand through load deferral or load curtailment (Fernandes, Homayouni, and Fontes 2022; Gahm et al. 2016). In the literature, power demand and operating modes of machines are often centrally controlled within IBPs, with the result that the reduction in power demand is treated as given (Haider, See, and Elmenreich 2016).
Aalami and Nojavan (2016) developed a model using a price-elastic customer demand function to investigate the impact of an IBP on capacity markets and power demand patterns based on a daily load curve. Chao and Chen (2005) studied alternating on-peak and off-peak periods under IBPs. Using Markov decision processes, they characterised the structure of the optimal production and shutdown strategies. While these two studies investigated the potential impact of IBPs, they were less specific in looking at production planning processes. Cui and Zhou (2018) concluded that further studies are needed on IBPs. In comparison, Weitzel and Glock (2019) proposed a procedure for generating Load Reduction Curves (LRCs) for industrial end users under an IBP that could be used to communicate the demand flexibility potential to the energy provider without centrally controlled power demand. These LRCs are predetermined and developed using a flow-shop scheduling problem that identifies the corresponding demand flexibility per time interval.
A literature review of DR techniques considering price signals, optimisation techniques, and device scheduling can be found in Hussain et al. (2015). Pallonetto et al. (2020) provided an overview of DRPs, focusing mainly on residential applications. Paulus and Borggrefe (2011) discussed the potential of DRPs in energy-intensive industries for the German electricity market. Considering peak load control, Fernandez, Li, and Sun (2013) proposed a method to reduce the energy demand of manufacturing systems with multiple machines and buffers during peak periods under the constraint of constant system throughput. Golmohamadi (2022) provided an overview of industrial load control in energy-intensive industrial sectors from the perspectives of both the utility and the consumer.
DRPs are important in industrial manufacturing. Previous studies investigating variable machine speeds in this context are, however, limited in that they only consider PBPs, particularly TOU schemes, as a DRP, if at all. As discussed above, IBPs offer different opportunities for load control and should therefore also be investigated in the context of machine scheduling.

Scheduling with variable machine speeds
In existing scheduling models, energy-related objectives often involve a simple reduction of energy consumption or energy costs combined with one or more classical machine- or job-related objectives. The aim of existing studies is often to minimise TEC by adjusting specific machine configurations. Machine configurations that have been investigated include different machine operating modes (e.g. production, idle, start-up, turned off), a different energy consumption of job-machine assignments (e.g. with non-identical parallel machines), or variable machine speeds that impact the machine's energy consumption. The energy consumption then depends on the schedule and potentially also on the machine setup (Fang et al. 2013; Salido et al. 2013). Although temporarily shutting down equipment is one way to save energy, machines are often kept in standby mode in practice to be available for urgent orders (Luo, Zhang, and Fan 2019) and to extend the operating life of the machines, which may be negatively affected by frequent restarts (Lu et al. 2017).
Variable machine speeds have been discussed in the literature for different scheduling models and with different objectives. Earlier research differentiated between a continuous and a discrete set of machine speeds available for each operation (Luo, Zhang, and Fan 2019). An overview of papers considering scheduling problems with variable machine speeds, with a special focus on JS and FJS problems, is given in the Online Supplement in Appendix A. In the following, we focus on JS and FJS scheduling that often aims at minimising total energy consumption, makespan, or total weighted tardiness (cf. Biel and Glock 2016; Fernandes, Homayouni, and Fontes 2022; Gao et al. 2020; Gahm et al. 2016).
In general, JS and FJS problems are NP-hard, so exact procedures for solving them efficiently are not available (Chen et al. 2021; Fang et al. 2013). The JSMS is a generalisation of the JS problem with an even higher complexity, which makes finding feasible or best possible solutions even more difficult. As a result, earlier research has developed heuristic or meta-heuristic solution approaches to find solutions in a reasonable time (cf. Fernandes, Homayouni, and Fontes 2022).
For a given schedule subject to potential disruptions, Salido et al. (2017) presented a rescheduling approach for a JSMS. Here, the objective is to recover the original schedule by rescheduling as few jobs as possible. To this end, the authors proposed an algorithm that changes machine speeds to absorb the disturbance. Zhang and Chiong (2016) developed a multi-objective JSMS considering different machine states in three stages. In the first step, a scheduling problem minimising total weighted tardiness under fixed machine speeds is solved. Using this solution, two local improvement strategies are applied using a reduced cost analysis and a local search procedure. For the given schedule, TEC is improved in a final step considering variable machine speeds. Wu and Sun (2018) proposed a Non-dominated Sorting Genetic Algorithm (NSGA) for an FJS problem to determine the processing speed and a potential shutdown of the equipment to minimise both energy consumption and makespan. Using the model of Lu et al. (2017), but assuming a discrete set of processing speeds, Luo, Zhang, and Fan (2019) developed a model for a multi-objective FJS problem minimising makespan and TEC. The authors proposed a meta-heuristic to solve the machine and speed assignment and to determine an operation sequence. To solve a JSMS, Lu et al. (2021) developed a knowledge-based multi-objective memetic algorithm (MOMA) to obtain trade-off solutions between TEC and makespan. Problem-specific properties of the JSMS were derived and used for constructing a novel local search heuristic that finds promising trade-off solutions between the two goals. The literature often focuses on solution methods for solving the JSMS. In some cases, variable machine speeds are used to fill idle times in an existing schedule, such that jobs are processed more slowly and the TEC is reduced (Dai et al. 2013; Fernandes, Homayouni, and Fontes 2022; Lei, Zheng, and Guo 2017).
Some earlier studies investigated PBPs in the context of the JSMS. However, IBPs have not yet been investigated in the context of the JSMS (cf. Table 1 in the Online Supplement in Appendix A). Based on the conference paper of Füchtenhans and Glock (2022), this paper analyses a JSMS and uses an adapted solution approach to identify benefits resulting from IBPs.

Job-Shop model with variable machine speed
This study addresses the JSMS, a deterministic offline scheduling problem where the number of jobs, operations, and related machines are assumed finite and predefined. The processing times, release dates, due dates, and energy consumption parameters are known in advance; the actual energy consumption can be influenced in the optimisation process through the choice of machine speeds. We allow recirculation, which means that a job may visit a machine more than once (Pinedo 2016). Each machine is either in production or idle mode, with the corresponding power demands, until all jobs have been completed on the respective machine. The JSMS investigated here follows the work of Luo, Zhang, and Fan (2019); Zhang and Chiong (2016); Liu et al. (2014); Salido et al. (2013), and considers $n$ jobs $J = \{J_j\}_{j=1}^{n}$ with given release dates $r_j$, due dates $d_j$, and weighting factors $w_j$. Each job $J_j$ is defined as a finite set of $N$ ordered operations $O_j = \{O_{lj}\}_{l=1}^{N}$, where $O_{lj}$ is the $l$th operation of job $J_j$ and can be processed on one of $m$ machines $M = \{M_k\}_{k=1}^{m}$. Since there is no parallel machine environment at any stage, the machine processing operation $O_{lj}$ is uniquely determined by job $J_j$ and operation $O_{lj}$; consequently, for every operation, the assigned machine can be read directly from the given dataset. We use a discrete set of $p$ alternative speed ratios $V = \{v_z\}_{z=1}^{p}$ available on every machine. Operation $O_{lj}$ has a basic processing requirement denoted by $T_{lj}$. If the speed of machine $M_k$ is set to $v_z$ for processing operation $O_{lj}$, the actual processing time is $p_{ljz} = T_{lj}/v_z$. It is assumed that a higher machine speed leads to shorter processing times but higher power demand, and vice versa. In the bi-objective optimisation problem, the objectives are to find a schedule with variable machine speeds such that the total weighted tardiness and the total energy consumption are minimised.
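The speed mechanism can be sketched as follows. The speed ratios and the quadratic power law are hypothetical, illustrative choices only (the paper states merely that a higher speed shortens processing time but raises energy consumption; the exact power-speed relation is instance data):

```python
def processing_time(T, v):
    """Actual processing time p = T / v for a basic processing
    requirement T and speed ratio v: a higher speed shortens the
    operation."""
    return T / v

def operation_energy(T, v, base_power=1.0):
    """Illustrative energy model (assumed, not from the paper): if power
    grows quadratically with speed, energy = base_power * v**2 * (T/v)
    = base_power * T * v, i.e. faster processing consumes more energy."""
    return base_power * T * v

speeds = [0.5, 1.0, 1.5]                             # hypothetical speed ratios
T = 6.0                                              # basic processing requirement
times = [processing_time(T, v) for v in speeds]      # [12.0, 6.0, 4.0]
energies = [operation_energy(T, v) for v in speeds]  # [3.0, 6.0, 9.0]
```

This is exactly the trade-off the bi-objective model exploits: slow speeds favour TEC, fast speeds favour TWT.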
The assumptions made in developing the proposed model are as follows:
(1) Each job has the same number of operations N and follows an individual machine sequence.
(2) Each operation has a unique machine relation.
(3) An infinite intermediate storage exists between any two machines.
(4) Each job can be processed on only one machine at a time.
(5) Each machine can handle at most one job at a time.
(6) The sequence order between different operations of a job is binding. No sequence order exists between operations of different jobs.
(7) Preemption is not allowed for any job or operation, i.e. once an operation has started, it has to be finished on the machine.
(8) There are no parallel machines available.
(9) Every machine is either in the 'production' or the 'idle' mode until all jobs have been completed, i.e. a machine cannot be turned off until all jobs have been finished on the respective machine.
(10) The power demand in the idle mode is constant for each time interval.
(11) Each machine can work at different processing speeds.
(12) The same number of machine speeds is available for each operation, and the selected speed of a machine cannot be modified during the process.
(13) An increased machine speed leads to shorter operating times and increased energy consumption. The processing times for each combination of job, machine, and machine speed are known and fixed in advance (cf. Fang et al. 2013; Gutowski, Dahmus, and Thiriez 2006).
(14) Times and power demand for transport and setup are negligible, or it is assumed that these are already included in the respective parameter values.
The model investigated in this paper considers different decision variables and constraints to ensure feasible solutions based on the above assumptions.For a mathematical model formulation, we refer to Füchtenhans and Glock (2022).

Solution approach and performance evaluation
The bi-objective JSMS investigated in this paper aims to minimise two conflicting objectives: TWT and TEC.
In this case, no solution optimises both objectives at the same time, and we therefore search for an acceptable trade-off instead of a single optimal solution. To analyse this trade-off, we use Pareto frontiers that contain all non-dominated solutions in the objective space. To determine the Pareto frontiers, we use the ε-constraint method by transforming one objective into a constraint while using the other objective function (called the fitness value) to evaluate the solutions (Bérubé, Gendreau, and Potvin 2009; Mavrotas 2009; Zheng and Sui 2019). To determine the best possible approximated Pareto frontier, both objective functions are used consecutively as fitness values. The conceptual approach is illustrated in Figure 1. At first, only the TWT and the TEC are minimised separately, which is represented by the two solutions in the left part of Figure 1. Afterwards, bounds are set for the TEC (illustrated by the dashed line) and the TWT is minimised again. By repetitively adjusting the bounds and minimising the fitness value, different non-dominated solutions are found that form the approximate Pareto frontier, as illustrated in the right part of Figure 1. To solve the resulting JSMS with one objective function each, a GA is developed. Subsection 4.1 describes the GA implemented and used in this study to determine feasible solutions for the JSMS with additional constraints on the IBPs to create Pareto frontiers. Beforehand, we conducted a sensitivity analysis that included different configurations of the GA and parameter settings to find a good approach to our problem. A detailed discussion of the sensitivity analysis conducted and the configurations considered is presented in the Online Supplement in Appendix B. To validate the performance of the GA, we compare the results to solutions obtained with the commercial solver IBM CPLEX CP Optimiser (Laborie et al. 2018) for small datasets and with priority rules that follow the algorithm of Giffler and Thompson (1960) in Subsections 4.2 and 4.3. For all further details and measures on IBPs, we refer to Section 5.
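The ε-constraint procedure described above can be sketched as follows. The sketch operates on a given pool of candidate solutions in place of the GA, with hypothetical (TWT, TEC) objective pairs; in the actual method, each bound defines a new constrained JSMS that the GA solves:

```python
def epsilon_constraint_frontier(solutions, n_grades=5):
    """Approximate a Pareto frontier via the epsilon-constraint method.

    `solutions` holds (twt, tec) objective pairs of candidate schedules
    (produced here by enumeration; in the paper, by the GA).  The TEC
    objective is turned into a constraint TEC <= eps, and for each bound
    eps the feasible solution with minimal TWT is kept.
    """
    tecs = [tec for _, tec in solutions]
    lo, hi = min(tecs), max(tecs)
    frontier = set()
    for i in range(n_grades + 1):
        eps = lo + (hi - lo) * i / n_grades        # current TEC bound
        feasible = [s for s in solutions if s[1] <= eps]
        if feasible:
            frontier.add(min(feasible))            # min TWT (ties: min TEC)
    # filter out dominated points before returning
    return sorted(p for p in frontier
                  if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                             for q in frontier))
```

Finer gradations (larger `n_grades`) yield more frontier points, at the cost of one additional optimisation run per bound.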

Genetic algorithm
A GA is a population-based metaheuristic that aims at finding good solutions to complex problems that cannot, in general, be solved exactly. It applies evolutionary concepts such as reproduction and survival of the fittest to solve the problem in reasonable time. Due to the complexity of the JSMS, GAs are often applied, although their solution quality cannot be determined in general (García-Martínez, Rodriguez, and Lozano 2018).
To apply the ε-constraint method and to determine a Pareto frontier, we need to apply the GA multiple times, each time with slightly different bounds on the transformed objective. The more gradations are inserted for the transformed objective, the more non-dominated solutions for the Pareto frontier are obtained. However, this also increases the computational effort, since a new optimisation problem must be solved for each gradation. The focus of this paper is not on determining a complete Pareto frontier with a maximum number of non-dominated solutions. Therefore, for each problem considered, only up to 10 gradations were included. When considering energy constraints, the proposed solution method has proven to be efficient in quickly identifying non-dominated solutions that provide insight into the effect of IBPs on the JSMS problem under study. For the purpose of our research, this sufficiently illustrates how the Pareto frontier changes with and without IBPs. Consequently, the solution method has the limitation of providing only a limited number of non-dominated solutions. If the Pareto frontier is to be determined as precisely as possible, other solution methods might be more suitable, such as evolutionary algorithms (cf. Lu et al. 2021; Zhang et al. 2022) or the NSGA-II (cf. Deb et al. 2002; Zheng and Sui 2019). With the additional energy constraints due to IBPs, it would be necessary to adapt these methods accordingly to achieve good solution quality. For this study, we adapted the commonly used GA based on Holland (1975) and Goldberg, Korb, and Deb (1989) to our problem; it uses crossover and mutation as its main operators and works with a constant population size. Figure 2 illustrates the basic procedure of the GA in the form of a flowchart.
Using the given input data (number of jobs, orders, machines, machine speeds), the initial population is randomly created and evaluated based on the fitness value. Afterwards, the algorithm iteratively creates new generations by recombining selected individuals (called parents) from the population and applying the mutation operator to the new elements (called offspring). To distinguish high-quality from low-quality solutions within the population, we used a proportionate selection scheme (Holland 1975). Thereby, the probability that a solution $x_i$ is selected for recombination increases with its fitness value $f(x_i)$ and can be calculated as $p(x_i) = f(x_i) / \sum_{j} f(x_j)$, where the sum runs over the whole population. Starting from the population, 50% new offspring are generated in each iteration. For example, if the population size is 100, 50 new offspring are generated in each iteration.
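A minimal sketch of the proportionate (roulette-wheel) selection scheme; note that for minimisation objectives such as TWT or TEC, the raw objective would first have to be transformed into a fitness value where larger is better (e.g. by inversion), which is omitted here:

```python
import random

def roulette_select(population, fitness, rng=random):
    """Fitness-proportionate selection: individual x_i is chosen with
    probability f(x_i) / sum_j f(x_j); fitness values must be >= 0."""
    total = sum(fitness(x) for x in population)
    pick = rng.uniform(0, total)   # spin the wheel
    acc = 0.0
    for x in population:
        acc += fitness(x)
        if pick <= acc:
            return x
    return population[-1]  # guard against floating-point round-off
```

Selecting two parents this way and recombining them until half the population size in new offspring has been produced reproduces the generation step described above.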
Each solution is represented by a genotype that consists of two parts, namely the encoding of the production schedule and the encoding of the speed setting (cf. Vallejos-Cifuentes et al. 2019). This encoding scheme is referred to as a chromosome, and the individual elements of a chromosome are called genes. The associated phenotype, in turn, describes the corresponding production schedule. To determine this production schedule, a transformation uses the information from the genotype to construct a phenotype (Rothlauf 2011). For the problem under consideration, we use a two-dimensional chromosome structure. If there are no additional constraints on the fitness value and power consumption, each chromosome represents a feasible schedule. However, not every chromosome is necessarily a feasible solution with respect to the given constraints that result from IBPs. For this reason, the selection process gives preference to those solutions that additionally meet all the constraints on the fitness value and power consumption.
The upper part of Figure 3 shows a simple example of a chromosome for three jobs with three operations each, and with two speed settings available for all operations.
The first row contains a permutation of integer numbers representing the jobs, while the second row represents the speed settings. The first gene stands for operation 1 of job 1, which must be scheduled at the first available position on the respective machine (here machine 1) with the processing time resulting from the speed setting. If job 1 has a release date, it is considered here accordingly. The second gene stands for the first operation of job 2, the third gene stands for the second operation of job 2, and so on. Note that the first operation of job 3 is processed on machine 2. The operations are thus scheduled one after the other at the earliest possible time; we need to make sure, however, that a machine is never busy with more than a single job, and that no operation of a job is assigned to more than a single machine at a time. During the transformation from genotype to phenotype, the jobs are scheduled one by one on the respective machines. Idle times on a machine, even if they are large enough to accommodate the operation to be scheduled, are ignored. The lower part of Figure 3 illustrates the Gantt chart for the example chromosome. It is easy to see that this is not the best solution with respect to the makespan.
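The genotype-to-phenotype transformation described above can be sketched as follows. The data structures (`routing`, `proc_time`) are illustrative assumptions; as in the text, each operation is appended at the earliest time after both its machine and its job predecessor are free, and existing idle gaps are not reused:

```python
def decode(job_sequence, speeds, routing, proc_time):
    """Transform a genotype into a schedule (the phenotype).

    job_sequence: permutation with repetition, e.g. [1, 2, 2, ...]; the k-th
    occurrence of job j denotes operation k of job j.
    speeds: speed setting chosen for each gene.
    routing[j][k]: machine of operation k of job j (assumed structure).
    proc_time[j][k][v]: processing time of that operation at speed v.
    """
    op_count = {}        # next operation index per job
    job_ready = {}       # completion time of a job's previous operation
    machine_ready = {}   # completion time of a machine's last operation
    schedule = []
    for job, speed in zip(job_sequence, speeds):
        k = op_count.get(job, 0)
        op_count[job] = k + 1
        m = routing[job][k]
        start = max(job_ready.get(job, 0), machine_ready.get(m, 0))
        end = start + proc_time[job][k][speed]
        job_ready[job] = end
        machine_ready[m] = end
        schedule.append((job, k, m, start, end))
    return schedule
```

Because operations are only appended at the end of each machine's sequence, the resulting schedule may leave unused gaps, which is consistent with the makespan remark above.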
Crossover operators imitate the principle of biological reproduction and are applied to the chromosome. In this study, we use the Generalised Order Crossover (GOX) with a crossover rate of 25%, which produces two new offspring from two selected parents by exchanging substrings. To create an offspring, the GOX operator selects a random substring from parent 1. All genes from the selected substring are deleted in parent 2. The substring is then inserted into the offspring at the position where the first gene of the substring occurred in parent 2. Afterwards, the free genes in the offspring are filled in the order of the remaining genes from parent 2 (Bierwirth, Mattfeld, and Kopfer 1996). For the second offspring, the roles of the parents are reversed. In addition, we modify the GOX operator for the machine speed to allow the selection of speeds that were not previously represented in either of the chromosomes. For this purpose, the machine speeds of each inserted substring are randomly assigned in each iteration. The procedure is illustrated for one offspring as an example in Figure 4.
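The GOX steps above can be sketched as follows for one offspring. Genes are identified by (job, occurrence) pairs so that permutations with repetition are handled correctly; the `cut` parameter, which fixes the substring bounds for reproducibility, is an addition for illustration:

```python
import random

def gox(parent1, parent2, speeds_available, cut=None, rng=random):
    """Generalised Order Crossover (cf. Bierwirth, Mattfeld, and Kopfer 1996)
    for job permutations with repetition, producing one offspring. The speeds
    of the offspring are re-randomised, mirroring the modified operator."""
    def tag(seq):
        # label each gene with its occurrence index: [1, 2, 2] -> [(1,1),(2,1),(2,2)]
        seen, out = {}, []
        for job in seq:
            seen[job] = seen.get(job, 0) + 1
            out.append((job, seen[job]))
        return out

    t1, t2 = tag(parent1), tag(parent2)
    if cut is None:
        i = rng.randrange(len(parent1) - 1)
        j = rng.randrange(i + 1, len(parent1) + 1)
    else:
        i, j = cut
    substring = t1[i:j]
    # insertion point: where the substring's first gene occurred in parent 2,
    # shifted by the number of substring genes deleted before that position
    pos = t2.index(substring[0])
    pos -= sum(1 for g in t2[:pos] if g in substring)
    remaining = [g for g in t2 if g not in substring]
    child = remaining[:pos] + substring + remaining[pos:]
    jobs = [job for job, _ in child]
    speeds = [rng.choice(speeds_available) for _ in jobs]
    return jobs, speeds
```

Swapping the parent arguments yields the second offspring.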
Mutation operators slightly change the genotype of a solution by swapping the positions of individual genes in a chromosome after the crossover operator has been applied. In this study, we use the adjacent swap mutation operator, where the pair of genes to be swapped are adjacent to each other. The mutation rate is set to 10%, and the operator is applied to the machine sequence and the machine speed simultaneously in each iteration. The procedure is illustrated for one offspring as an example in Figure 5, using the chromosome from Figure 4.
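One plausible reading of the adjacent swap operator applied "simultaneously" to both chromosome rows is that the same pair of neighbouring positions is swapped in the job row and the speed row, as sketched below (an assumption for illustration):

```python
import random

def adjacent_swap(jobs, speeds, rate=0.10, rng=random):
    """Adjacent swap mutation on a two-row chromosome.

    With probability `rate`, one random pair of neighbouring genes is swapped
    in both the job sequence and the speed row at the same positions.
    """
    jobs, speeds = list(jobs), list(speeds)    # do not mutate the inputs
    if rng.random() < rate:
        i = rng.randrange(len(jobs) - 1)
        jobs[i], jobs[i + 1] = jobs[i + 1], jobs[i]
        speeds[i], speeds[i + 1] = speeds[i + 1], speeds[i]
    return jobs, speeds
```

With `rate=0.10` this matches the 10% mutation rate stated in the text.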
Finally, the newly generated and mutated offspring are compared with the previous population based on their fitness values, and only those elements that perform best are allowed to remain in the population (survival of the fittest). This way, the population size remains constant. The algorithm terminates when a critical number of iterations has not produced any improvement within the population, or when a time limit has been reached (only for large datasets). The population size and the critical number of iterations depend on the size of the dataset.

Priority rules and commercial solver
To validate the performance of the GA and to check whether the solutions found are plausible, two alternative solution methods are used to obtain benchmark solutions.
The IBM ILOG CPLEX CP Optimizer is used to solve the model presented in Füchtenhans and Glock (2022) for small datasets. To evaluate the performance of the GA on medium and large datasets in addition, the algorithm of Giffler and Thompson (1960) (AGT) is used to apply different priority rules (Błażewicz, Domschke, and Pesch 1996; Nascimento 1993). Only the best solutions found with respect to the fitness values TEC and TWT are compared, and no Pareto frontiers are created. If the TEC is to be minimised, only the machine speeds with the lowest energy consumption per operation are considered. Conversely, if the TWT is to be minimised, only the machine speeds with the shortest processing times per operation are used. Using the AGT, we implemented the priority rules listed in Table 1 (cf. Błażewicz, Domschke, and Pesch 1996). The procedure is summarised as follows: (1) Set t = 0 and let Q(t) denote the set of operations that can be scheduled at time t. (2) For each operation j ∈ Q(t), determine the earliest possible ready time r_j and completion time c_j. (3) Select the operation j* with the minimum earliest completion time c_j* = min_{j ∈ Q(t)} c_j. (4) Let m* denote the machine where j* needs to be processed. (5) Define the conflict set C(j*) := {j ∈ Q(t) | job j is processed on machine m* and r_j < c_j*}, where r_j defines the earliest possible ready time of job j. (6) Select operation i ∈ C(j*) based on the priority rule and schedule i with the given r_i and c_i. (7) Remove i from Q(t) and delete C(j*). (8) Update c_j and r_j for all j ∈ Q(t). (9) If Q(t) = ∅: increase t and update Q(t).
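The steps above can be sketched as follows for a single fixed machine speed (as when only the fastest or the most energy-efficient speed is considered). The data structures and the `priority` callback are illustrative assumptions; the callback receives the conflict set and the candidate data and returns the job to schedule next:

```python
def giffler_thompson(routing, proc_time, priority):
    """Sketch of the Giffler-Thompson procedure for active schedules.

    routing[j]: machine list per job; proc_time[j][k]: processing time of
    operation k of job j at the single speed considered.
    priority(conflict, cand): picks one job from the conflict set, where
    cand[j] = (ready, completion, machine) for the next operation of job j.
    """
    next_op = {j: 0 for j in routing}
    job_ready = {j: 0 for j in routing}
    machine_ready = {}
    schedule = []
    while any(next_op[j] < len(routing[j]) for j in routing):
        # earliest ready/completion time for every schedulable operation
        cand = {}
        for j in routing:
            k = next_op[j]
            if k < len(routing[j]):
                m = routing[j][k]
                r = max(job_ready[j], machine_ready.get(m, 0))
                cand[j] = (r, r + proc_time[j][k], m)
        # operation j* with minimum completion time on machine m*
        j_star = min(cand, key=lambda j: cand[j][1])
        _, c_star, m_star = cand[j_star]
        # conflict set: operations on m* that could start before c_j*
        conflict = [j for j in cand if cand[j][2] == m_star and cand[j][0] < c_star]
        j_sel = priority(conflict, cand)
        r, c, m = cand[j_sel]
        schedule.append((j_sel, next_op[j_sel], m, r, c))
        next_op[j_sel] += 1
        job_ready[j_sel] = c
        machine_ready[m] = c
    return schedule
```

A shortest-processing-time (SPT) rule, for instance, would pick the conflict-set job with the smallest `completion - ready` span.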

Comparison of GA, commercial solver, and priority rules
We perform a computational study to evaluate the performance of the implemented GA. For this purpose, the solutions of the GA are compared with the solutions obtained by the commercial solver IBM ILOG CPLEX CP Optimizer and the priority rules; in terms of the latter, the best solution of the eight priority rules considered was used for each dataset. Each dataset is solved separately with each solution approach, once for minimising TEC and once for minimising TWT. Here, the GA is not used to determine the Pareto frontier for the performance comparison, and only the best results for the TEC and TWT of each solution method are considered. For our computational study, we use instances originally published by Agnetis et al. (2011), Lawrence (1984), Taillard (1993), and Watson et al. (1999) that were extended by Salido et al. (2013) and Salido et al. (2016a) for the JSMS. All datasets contain three machine speeds for all operations with given power consumptions. We further extended the data for the computational study presented in this paper. The idle power consumption was determined for each dataset by setting it to one-tenth of the lowest power consumption of all operations. All jobs are equally weighted and immediately available. For a dataset, the due dates are determined equally for all jobs by summing up the processing times of all operations at the fastest machine speed and dividing the result by the number of jobs.
We classified the datasets depending on the number of operations and jobs as small (≤ 6 operations and ≤ 10 jobs), medium (7–10 operations and 11–20 jobs) and large (≥ 11 operations and > 20 jobs). The total number of datasets was 420. Depending on the problem size, we use alternative population sizes and termination conditions: if the solution population does not improve, the GA terminates after 30, 60 and 90 consecutive iterations for small, medium, and large datasets, respectively. Regardless of the number of iterations, it also terminates after 360 s, 900 s and 1800 s, and the population size is set to 100, 300 and 600, respectively.
We performed the computational study on a PC with an Intel Core i7-1165 CPU running at 2.8 GHz with 16 GB of RAM under a 64-bit version of Windows 11, and used Python 3.9.7 for both implemented algorithms. For the commercial solver, we set the time limit for each call to 600 s.
To evaluate the performance of the GA, we select the minimum fitness values for each dataset and measure the effectiveness of the solution approaches according to the average relative percentage deviation (ARPD), using the best solution obtained during our investigations as the benchmark (Abedinnia, Glock, and Brill 2016; Fernández-Viagas Escudero and Framiñán Torres 2015; Pan and Ruiz 2013). The ARPD for a set of problem instances X is calculated as ARPD = (1/|X|) Σ_{i ∈ X} 100 · (PI_i − BKI_i)/BKI_i, where PI_i denotes the fitness value obtained by the solution approach, and BKI_i is the best-known fitness value of the considered objective of problem instance i.
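The ARPD measure can be computed as follows (a direct sketch of the standard formula, with list-based inputs as an assumed interface):

```python
def arpd(pi, bki):
    """Average relative percentage deviation over a set of instances.

    pi[i]: fitness value obtained by the solution approach on instance i.
    bki[i]: best-known fitness value for instance i (must be non-zero).
    Returns the mean of 100 * (PI_i - BKI_i) / BKI_i over all instances.
    """
    return sum(100.0 * (p - b) / b for p, b in zip(pi, bki)) / len(pi)
```

An approach that matches the best-known solution on every instance thus has an ARPD of zero.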
Table 2 shows that the GA leads to the best-known solutions in 92.1% of the cases for TEC and in 84.3% of the cases for TWT, with average errors of 0.0015% and 0.13%, respectively. The average errors of the priority rules are 0.0044% (TEC) and 0.23% (TWT). In addition, the priority rules lead to the best-known solutions in only 8% and 14% of the cases, respectively. Overall, the GA shows a significantly better performance with respect to both objective functions. However, given the simplicity of the implemented priority rules, this is an expected result. Among the priority rules, rule LNRO achieves more than 70% of the best results in minimising TEC, followed by rule LRPT with around 27%. For the minimisation of TWT, no specific rules are to be preferred. The priority rules SPT, SRPT and LNRO achieve relatively good results of between 26% and 34%.
Due to the complexity of the problem, only 35 small datasets could be solved using the commercial solver in the given time and compared to the other solution approaches. In comparison, the implemented GA achieved the same or a better solution quality for all datasets that could be solved with the CP Optimizer. The percentage values for the CP Optimizer in Table 2 refer to these 35 datasets.

Computational study
Following Weitzel and Glock (2019), we use a two-step procedure in which we first solve the initial JSMS to derive a Pareto set of feasible solutions, referred to as the baseline Pareto frontier. Afterwards, the initial JSMS formulation is extended and solved by generating constraints that reduce the power demand for a DR period. Using solutions from the baseline Pareto frontier, the power consumption of the sub-periods defines additional constraints. These constraints require that the power consumption in the sub-periods differs from the power consumption of the respective baseline solution. The GA proposed in Section 4 is used to solve these problems, considering the respective constraints. This way, new Pareto-optimal sets are created, which contain alternative schedules with a different power consumption over time compared to the baseline Pareto frontier.
Section 5.1 first introduces the investigated IBPs using a simple example illustrated with the help of Gantt charts. Section 5.2 analyses the impact of IBPs on the Pareto frontiers and power consumption for two datasets. Due to the problem sizes of the two datasets, they are not visualised as Gantt charts.

Experiment design and discussion of the IBP designs
For the model formulation and solutions, it is important to distinguish between IBPs and PBPs. In PBPs, electricity prices are predetermined for different time periods, so that they can be used to reduce power costs over the entire planning period. The model formulation thus results in changes in the formulation of the objectives, but no further constraints need to be included. In IBPs, however, quantitative constraints must be considered, which limit the power demand for given time periods. Since the power constraints and the associated incentive payments have been negotiated in advance between the consumer and the grid operator, it is necessary to comply with them during the planning process. Consequently, power consumption and production processes are immediately influenced. Depending on the machines used and the power constraints defined by the IBPs, it may not be possible to continue operating critical machines in a particular period. For example, if a machine (e.g. an industrial furnace) has a higher power demand than the power limit defined by the IBPs, production may have to be stopped in the worst case. This is a significant difference compared to models that examine PBPs, which only define different prices for individual time periods. It should also be noted that IBPs are often deployed by the grid operator at short notice, so that all possibilities have to be considered each time during planning, or the production plan must be rescheduled during an already running production phase.
This study analyses three different IBPs: (1) load deferral, (2) load curtailment, and (3) load balancing. The three IBPs are illustrated schematically in Figure 6 and visualised based on a small dataset, including the respective Gantt charts and power consumption. The dataset used for the representation is provided in the Online Supplement in Appendix C. The upper solution, marked in Figure 6 as 'Baseline schedule without IBPs', represents the initial situation without additional constraints from IBPs. The three lower solutions represent the investigated IBPs, where the dashed lines illustrate the corresponding power limits for the respective periods and the solid lines illustrate the power consumption.
(1) Load deferral requires that a given power demand is reduced within a predefined period. In this case, the power demand can be postponed, such that jobs are processed later without further restrictions. In this study, an adjustment in power demand is predefined for a time interval, and the changes in average and peak demand are examined.
(2) Load curtailment also aims at reducing the power demand during a given period. Compared to load deferral, the power demand cannot be postponed arbitrarily to another period. In this study, an adjustment in power demand is predefined for a time interval. In addition, almost no increase in TEC is allowed compared to the initial schedule (i.e. based on the average consumption from a solution of the baseline Pareto frontier). This study analyses the resulting changes in peak demand and the impact on the TWT.
(3) Load balancing aims to influence the power demand not only for a short DR period, but for the entire planning period. Using the power consumption of, e.g. a particular workday, the system could increase power demand during off-peak periods and decrease demand during peak periods to ensure grid stability (cf. EIA 2020). The programme would thus not only specify at which point in time the power demand should be reduced, but also when it should be increased. Using a predetermined Pareto frontier, a load balancing curve is specified and used as a constraint to determine adjusted schedules that meet these conditions. This enables the grid operator to better synchronise power demand with energy supply.
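All three programmes share one building block: checking whether the power drawn by a schedule stays below given limits in given windows. A minimal feasibility check might look as follows; the data structures, an integer time grid, and the assumption that every non-busy machine draws idle power are all illustrative simplifications, not the paper's model:

```python
def respects_limits(schedule, power, idle_power, machines, limits):
    """Check a schedule against IBP power-limit windows (illustrative sketch).

    schedule: (job, op, machine, start, end) tuples on an integer time grid.
    power[(job, op)]: power draw while that operation is processed.
    idle_power: draw of a machine that is switched on but not processing.
    limits: list of (t_from, t_to, max_power) windows defined by the IBP.
    """
    horizon = max(end for *_, end in schedule)
    for t_from, t_to, max_power in limits:
        for t in range(t_from, min(t_to, horizon)):
            busy = {m: None for m in machines}
            for job, op, m, start, end in schedule:
                if start <= t < end:
                    busy[m] = (job, op)
            demand = sum(power[b] if b is not None else idle_power
                         for b in busy.values())
            if demand > max_power:
                return False
    return True
```

In a GA, such a check can be used during selection to prefer chromosomes that satisfy the IBP constraints, as described in Section 4.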
The example in Figure 6 illustrates that variable machine speeds can be used to comply with the power limits imposed by IBPs without interrupting production. However, compared to the baseline schedule without IBPs, the completion times increase, with an impact on the TWT. To keep the TWT low, higher machine speeds are used more often, resulting in peak loads. If load deferral is applied, the TEC increases compared to the baseline schedule without IBPs due to the peak loads relative to the short total completion time. If load curtailment or load balancing is applied, the TEC decreases due to the more frequently used slower machine speeds.

Computational results
We illustrate the three IBPs using two datasets as examples, determine the baseline Pareto frontiers, and investigate which new Pareto frontiers can be identified. Dataset 1, which is based on Agnetis et al. (2011), contains n = 3 jobs, m = 3 machines, and N = 7 operations for each job.
The due date for each job is 318 time units, and the power consumption in the idle state is 0.02 per machine. Dataset 2 is based on Lawrence (1984) and contains n = 10 jobs, m = 5 machines, and N = 5 operations for each job. For each job, the due date is 285 time units and the idle-state power consumption is 0.04 per machine. Both datasets consider three different machine speeds, and all jobs are equally weighted and immediately available. Due to the chromosome structure used in the GA, no idle time can occur between the processing of two jobs on one machine (except if the following job is still being processed on another machine). For this reason, a dummy job is added to each dataset that contains operations that can be used to generate idle time during critical periods (with the corresponding idle-state consumption). By selecting a different machine speed, the dummy job can also be neutral, with a processing time and power consumption of 0. The idea is that dummy jobs can be used to run machines idle during periods of reduced power demand caused by the IBPs. Tables with both datasets can be found in the Online Supplement in Appendix C. The system data and settings of the GA are those described in Section 4. First, a baseline Pareto frontier is generated for both datasets without considering IBPs. This helps us understand the trade-off between the conflicting objectives TWT and TEC by computing good solutions for each objective function separately. The charts in the upper left parts of Figures 7 and 8 show the baseline Pareto frontiers as solid lines for datasets 1 and 2, respectively.
From the baseline Pareto frontier, we use the solution with the lowest TEC for dataset 1 as a benchmark to further investigate the IBPs (marked with 'under investigation' in Figure 7). The related production plan has a TEC of 1180 and a TWT of 1125. The power demand is shown with a solid line in charts (a), (b), and (c) in Figure 7, and serves as the benchmark power consumption for the IBPs under study. Considering the benchmark power consumption, limits and time intervals representing the IBPs are set accordingly. The power limits and the duration of the DR period for dataset 1 are shown in Table 3, and they are also graphically indicated in the respective charts in Figure 7. The resulting simplified Pareto frontiers are shown as dashed lines in the chart in the upper left corner of Figure 7 for dataset 1. From these Pareto frontiers, the solutions with the lowest TEC are selected for representing the power consumptions, shown as dashed lines in Figure 7 for (a) load deferral, (b) load curtailment, and (c) load balancing.
For dataset 2, we use the solution with the lowest TWT from the baseline Pareto frontier as a benchmark (also marked with 'under investigation' in Figure 8). The related production plan has a TEC of 5906 and a TWT of 2242. Due to the size of dataset 2 and the selected due dates impacting the TWT, the values here are larger than for dataset 1. The benchmark power consumption is also shown with a solid line in Figure 8. The power limits and the duration of the DR period for dataset 2 are given in Table 4 and are graphically indicated in the respective charts in Figure 8. The solutions with the smallest TEC from the resulting Pareto frontiers are again selected for representing the power consumptions, shown as dashed lines in Figure 8. The lower sections of Figures 7 and 8 additionally show the average and peak power consumptions for each type of IBP.
Figures 7 and 8 show that the TWT and TEC both increase compared to the baseline Pareto frontier for datasets 1 and 2 if IBPs are applied. If we look at the TEC first (comparing the Pareto frontiers representing the IBPs with the baseline Pareto frontier in Figures 7 and 8), all programmes enable solutions with only a relatively small increase that does not exceed 25%. However, we also see that the TWT can be up to 10 times greater than in the best solutions without IBPs. For all Pareto frontiers, the lower the TWT, the higher the TEC, and vice versa. Regarding the power consumption, all limits defined by the IBPs are respected, which illustrates that one advantage of variable machine speeds is the additional flexibility they give to the production planner. One consequence of the IBPs we observe is that production is significantly slowed down, leading to an increase in TWT. This also increases the idle times of the machines, causing additional energy consumption. The tables in Figures 7 and 8 show that all programmes reduce the average power consumption compared to the benchmark power consumption. The main reason for this can be seen in the extended production time. However, reducing peak power demand is becoming increasingly important for grid stability. Considering (a) load deferral in this context, the power demand in critical periods, and therefore the peak power demand, can be reduced. However, the power consumption curves for (a) in Figures 7 and 8 reveal peak demands after the critical periods. To avoid such shifts of peak power demands, load curtailment can be used. The examples in Figures 7 and 8 show that there are no peak power demands for (b). Using load balancing, peak power demands are only allowed during predefined periods, as shown in Figures 7 and 8 for (c). Thus, peak power demands are avoided in periods when grid stability is threatened. In the determined solutions, periods in which a higher power demand would be possible with (c) are not used for the given datasets (cf. Figures 7 and 8). Even if (b) load curtailment and (c) load balancing impose restrictions on power consumption, the results show a clear improvement in terms of the reduction of peak loads.
The JSMS is a complex problem that is difficult to solve. Introducing additional constraints for the IBPs makes finding good solutions even more difficult.
Our computational study revealed that when considering IBPs, it is important to use problem-specific knowledge (such as identifying machine processes or machine states that respect power limits during critical periods) to find feasible solutions or to improve solution quality. Much depends on the right choice of machine speeds to make the best use of the high flexibility to meet the additional requirements of IBPs. Therefore, an important requirement for improved scheduling with IBPs is to identify problem knowledge and to incorporate it into the solution approach.

Conclusion
Supplying an increasing share of energy demand from renewable sources leads to challenges for grid operators due to the volatility of these sources. DRPs enable the grid operator to influence the demand side and to match energy supply with demand. With the use of IBPs, grid operators can incentivise consumers to adjust their power demand in critical situations such that grid stability is not threatened.
This study investigated how IBPs can be incorporated into machine scheduling models and how this integration influences power consumption and total weighted tardiness. A JSMS that aims to minimise TWT and TEC under an IBP with two different machine modes was analysed for this purpose. We studied different types of IBPs, namely load deferral, load curtailment, and load balancing. We illustrated the effects of the investigated IBPs in examples and discussed the differences between IBPs and PBPs. To solve the proposed models, we implemented a genetic algorithm and applied it to three exemplary datasets.
The results of the computational studies show that variable machine speeds can add flexibility to the scheduling of power consumption and help meet IBP requirements. In addition, our results imply that programmes with load curtailment and load balancing can reduce peak power demands. On the manufacturing side, production planning needs to anticipate longer production phases in which machines either operate at slower machine speeds or are kept in idle mode to meet the requirements of the IBPs.
This study showed that IBPs not only have an impact on power consumption and related costs, but also influence production times and due dates. Manufacturers must therefore consider whether the incentive payments offered by the grid operator compensate for the restrictions. Production planning is thus subject to some restrictions that would not exist without IBPs. For example, if the manufacturer must react quickly to customer requirements or market changes and adapt the production process to remain competitive, IBP restrictions should not become an additional bottleneck. Otherwise, IBPs will not be accepted and considered by manufacturers.
This paper has some limitations and could therefore be extended in different directions. The genetic algorithm used to solve the proposed problem formulation has a limited capability to identify many non-dominated solutions for creating Pareto frontiers. It was also found that for large datasets, the variable machine speeds are not always best utilised. Based on our findings, more sophisticated solution approaches could be developed that consider the special properties that arise from variable machine speeds and IBPs in the context of JS problems. For example, future research could investigate whether an adapted NSGA-II is more efficient in identifying a proper Pareto frontier. Moreover, an advanced evolutionary approach, such as that of Zhang et al. (2022), or a knowledge-based MOMA, such as that of Lu et al. (2021), could potentially handle the increased problem complexity due to the additional constraints of IBPs much better.
Our investigation of the power consumption with and without IBPs was limited to only three datasets. As the results depend on the given datasets, more extensive studies should be carried out to further understand how IBPs can be realised and how different machine modes, such as variable machine speeds, can efficiently contribute to a better trade-off between machine utilisation, power consumption, and grid stability. It would also be interesting to permit a larger variety in the magnitudes and lengths of the power limits and peak periods to better understand how the structure of IBPs influences production.
Previous studies on IBPs have often highlighted the benefits in terms of lower power demand in critical phases and the reduction of peak loads. This study showed that the application of IBPs also negatively affects machine utilisation, the TWT, and the TEC compared to situations without IBPs. In the numerical experiments in Sections 5.1 and 5.2, it could also be seen that load deferral may only shift peak power loads. This may entail that at other times the power supply is insufficient, or that the demanded power cannot be produced from renewable sources in these periods. Besides the effects on grid stability, further studies could investigate the effects on, e.g. supply chains, customer-supplier relationships, or warehouse management. In addition, the model formulation could be further developed to include set-up times and costs, and it would be worthwhile to investigate how IBPs interact.

Figure 1. Illustration of the ε-constraint method for creating an approximate Pareto frontier.
Figure 2. Flowchart of the solution procedure.
Figure 3. Example of the chromosome representation.
Figure 5. Illustration of the mutation operator.
Figure 6. Visualisations of the investigated IBP designs compared to the baseline schedule without IBPs.
Figure 7. Pareto frontiers and power consumption profiles for the three IBPs for dataset 1.
Figure 8. Pareto frontiers and power consumption profiles for the three IBPs for dataset 2.
Table 2. Performance comparison of GA, commercial solver, and priority rules.
Table 3. Power limits and intervals representing IBPs for dataset 1.
Table 4. Power limits and intervals representing IBPs for dataset 2.