Adaptive Order-of-Addition Experiments via the Quick-Sort Algorithm

Abstract The order-of-addition (OofA) experiment has received a great deal of attention in the recent literature. The primary goal of an OofA experiment is to identify the optimal order in a sequence of m components. All the existing methods are model-dependent and are limited to a small number of components. The appropriateness of the resulting optimal order heavily depends on (a) the correctness of the underlying assumed model, and (b) the goodness of the model fit. Moreover, these methods cannot deal with large m (e.g., m ≥ 20). With this in mind, this article proposes an efficient adaptive methodology, building upon the quick-sort algorithm, to explore the optimal order without any model specification. Compared to the existing work, the run sizes needed by the proposed method to achieve the optimal order are much smaller, and the method can obtain the optimal order even for large m (e.g., m ≥ 20). Theoretical support is given to establish the effectiveness of the proposed method, and numerical experiments demonstrate its performance.


Introduction
Many experiments in science and engineering are affected by the addition order of m components. The primary goal in an order-of-addition (OofA) experiment is to find the optimal order among all possible orders, as shown in Jiang and Ng (2014). There are many applications of the OofA experiment, such as medical science, pharmaceutical science, food science, biochemistry, nutritional science, mechanics and engineering, just to name a few (see Lin and Peng 2019 and references therein). If there are m different materials or components, then there are m! different orders for adding these components to a mixture. It is infeasible to explore all the orders, especially when m is large (e.g., when m = 10, m! ≈ 3.6 million).
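The factorial growth can be verified directly; a minimal sketch using Python's standard library:

```python
import math

# The number of candidate addition orders for m components is m!.
for m in (8, 10, 20):
    print(m, math.factorial(m))
```

Already at m = 20 there are about 2.43 × 10^18 orders, far beyond what any design can enumerate.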
The OofA experiment has received a great deal of attention in the recent literature. For example, Voelkel and Gallagher (2019) proposed a computer searching method to construct a number of (optimal) designs with small m (m ≤ 7), under the pairwise-order (PWO) model of Van Nostrand (1995). Peng, Mukerjee, and Lin (2019) considered different types of optimality criteria and proposed a systematic method to construct a class of robust optimal fractional OofA designs. Lin and Peng (2019) reviewed the latest work on the design and model of OofA experiments, and introduced some future research directions. Chen, Mukerjee, and Lin (2020) and Winker, Chen, and Lin (2020) further investigated the optimal OofA with small sizes via balanced incomplete block designs and the threshold accepting algorithm, respectively. Mee (2020) discussed a higher-order OofA model. Chen, Peng, and Lin (2021) proposed a general method to solve OofA problems. Yang, Sun, and Xu (2021) first discussed replicated OofA experiments. Zhao, Lin, and Liu (2021) constructed a class of minimal-point OofA designs with theoretical evaluations of their efficiencies. More OofA research can be found in Voelkel and Gallagher (2019), Chen et al. (2022), Xiao and Xu (2021), Stokes and Xu (2022), Wang and Mee (2022), Zhang et al. (2022), Zhao, Lin, and Liu (2022) and the references therein. In this article, the main goal of the OofA experiment is to identify the optimal order in a sequence of m components. With this in mind, this article proposes an efficient adaptive methodology without any model specification for the OofA problem.
On a seemingly unrelated issue, the sorting problem arises when ordering a sequence of elements in programming practice. Quick sort is considered the fastest among all the sorting algorithms (Hossain et al. 2020). The classical quick sort works by selecting a "pivot" element and partitioning all the other elements into two subsets: the left subset contains all elements smaller than the pivot element, and the right subset contains all elements larger than it. The algorithm then recursively sorts the left and the right subsets separately. For a comprehensive discussion of quick sort, one may refer to Hoare (1962), Philippas and Zhang (2003), Yaroslavskiy (2009), and Kushagra et al. (2014) and their references therein.
Motivated by the idea of quick sort, this article proposes a sequential method that divides all components into two subsets by selecting a "pivot" component from the original order. By recursively sorting the two subsets and combining them, the final optimal order can be obtained. The proposed method starts with an initial run (a random permutation of {1, 2, . . . , m}). Any component can be selected as the pivot component; without loss of generality, the first component is selected. The second run swaps the pivot component and the component in the second position, while keeping the order of the other components fixed. In this way, the first m runs can be conducted: the pivot component is swapped sequentially with the other components, which performs the first iteration of quick sort. After conducting the first m runs, all components can be divided into two subsets: (a) all components that are believed to precede the pivot component, and (b) all components that are believed to follow the pivot component. The iteration of quick sort (the proposed method) is then applied recursively to both subsets until the optimal order is obtained. In doing so, the proposed method is clearly different from all the other existing work on OofA designs.
Most existing methods for the OofA experiment are only applicable for small m (m ≤ 7). It will be shown that the proposed method, an efficient sequential algorithm, yields the optimal order for any number of components m (even for m ≥ 20). Compared to existing work, the run sizes of the proposed method are much smaller. Simulation studies confirm that the proposed approach performs well for OofA experiments, and theoretical support is provided (Section 6) to ensure its effectiveness. This article can be regarded as an early work on OofA experiments with a large number of components. In particular, the unique optimal order can be found by the proposed method when the observed responses of the OofA experiment are noiseless.
The rest of this article is organized as follows. Section 2 proposes the new methodology for the OofA experiment. Section 3 presents an example of a job scheduling problem with m = 8 to illustrate the proposed method. Section 4 extends the proposed method to handle the general OofA experiment. A complex scheduling problem with m = 20 is demonstrated in Section 5. The corresponding theoretical support is provided in Section 6. Further results for the case of unknown variances are summarized in Section 7. Section 8 gives the conclusion and discussion.

The Proposed Method: Noiseless Case
The classical quick sort works by selecting a pivot element from the original elements, partitioning the other elements into two subsets (those larger and those smaller than the pivot), and then recursively sorting these two subsets. Motivated by this basic idea, the proposed methodology divides all components into two subsets: the left subset contains the components that should precede the pivot component, and the right subset contains the components that should follow it. Recursively applying the method to these two subsets, the optimal order is then obtained by combining all sorted subsets together. The procedure of the proposed method is presented in Algorithm 1, with an illustrative example in Section 3.
There are typically three types of optimal response to be considered: (a) "target is best"; (b) "the larger, the better"; and (c) "the smaller, the better". For simplicity, we only focus on case (c); the other two cases can be handled in a similar manner. Assume that the response y is a function of the permutation order of {1, 2, . . . , m}, and that the experimenter aims to minimize the response. Let a_j (j = 1, . . . , m) be the component at the jth position, and let a_1 be the pivot component. Without loss of generality, assume the initial run begins with a baseline order O_1 = (a_1, a_2, a_3, . . . , a_m), which yields the response y_1. Then we swap the components a_1 and a_2 to obtain the response y_2, with the corresponding order O_2 = (a_2, a_1, a_3, . . . , a_m); that is, y_1 = y(O_1) = y(a_1, a_2, a_3, . . . , a_m) and y_2 = y(O_2) = y(a_2, a_1, a_3, . . . , a_m). Let A_{a_1,a_2} be the better of the observed orderings involving the pair a_1 and a_2. A_{a_1,a_2} is determined by the sign of the difference between the responses y_1 and y_2, where sgn(x) is defined as +1 if x > 0, and −1 otherwise. Let {a_1 −→ a_t} denote that component a_1 should precede component a_t, for t = 2, . . . , m. The operator "⊗" is used to determine the optimal pairwise order of components a_1 and a_2; for example, {a_1 ⊗ a_2}(+1) = {a_2 −→ a_1} and {a_1 ⊗ a_2}(−1) = {a_1 −→ a_2}. In general, for t = 2, . . . , m, we have

y_{t−1} = y(O_{t−1}) = y(a_2, a_3, . . . , a_{t−1}, a_1, a_t, a_{t+1}, . . . , a_m), and
y_t = y(O_t) = y(a_2, a_3, . . . , a_{t−1}, a_t, a_1, a_{t+1}, . . . , a_m). (2)

(Table 1 lists the first m runs of the experiment, with columns Order and Response, used to determine A_{a_1,a_t}.)
The order A_{a_1,a_t} = {a_1 ⊗ a_t}(sgn(y_{t−1} − y_t)) can then be evaluated. For example, if y_{t−1} − y_t < 0, then A_{a_1,a_t} = {a_1 ⊗ a_t}(sgn(y_{t−1} − y_t)) = {a_1 ⊗ a_t}(−1) = {a_1 −→ a_t}; and if y_{t−1} − y_t > 0, then A_{a_1,a_t} = {a_1 ⊗ a_t}(+1) = {a_t −→ a_1}. The equal case (y_{t−1} − y_t = 0) will be discussed in Section 4. In general, a component a_j (j = 1, . . . , m) is selected as the pivot component, and a random permutation of {1, 2, . . . , m} with a_j as the first component is then selected as the initial run. Without loss of generality, let Component 1 be the pivot component, and the order 1 → 2 → 3 → · · · → m be the initial run. Denote the response for this sequence by y_1. Next, one can swap the components in positions 1 and 2 while keeping the order of the other components fixed, that is, 2 → 1 → 3 → · · · → m, to yield response y_2. Comparing y_1 and y_2, the optimal order between Components 1 and 2 can be estimated by A_{1,2} = {1 ⊗ 2}(sgn(y_1 − y_2)). Similarly, the third run swaps the components in positions 2 and 3 (of the second order) while keeping the order of the other components fixed, that is, 2 → 3 → 1 → · · · → m, to yield response y_3. Comparing y_2 and y_3, the optimal order between Components 1 and 3 can be estimated. In this way, the first m runs can be conducted, as shown in Table 1.
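The construction of the first m runs can be sketched in code. This is a hypothetical illustration, not the authors' implementation: `cost` stands for the black-box response of one experimental run, `total_completion_time` is a made-up toy response, and ties are ignored (noiseless case):

```python
def first_m_runs(initial_order, cost):
    """Conduct the first m runs: the pivot (first component) is swapped one
    position to the right at each run, and each pair of consecutive
    responses decides one pairwise order against the pivot."""
    order = list(initial_order)
    pivot = order[0]
    y_prev = cost(tuple(order))            # y_1, the response of O_1
    before, after = [], []                 # left / right subsets
    for t in range(1, len(order)):
        other = order[t]
        order[t - 1], order[t] = order[t], order[t - 1]   # pivot moves right
        y_curr = cost(tuple(order))        # y_t for run O_t
        if y_prev - y_curr < 0:            # sgn = -1: pivot precedes other
            after.append(other)
        else:                              # sgn = +1: other precedes pivot
            before.append(other)
        y_prev = y_curr
    return before, pivot, after

# Toy noiseless response: total completion time of a job sequence
# (smaller is better); used only to exercise the sketch.
def total_completion_time(order):
    elapsed, total = 0, 0
    for p in order:
        elapsed += p
        total += elapsed
    return total
```

For example, `first_m_runs((3, 1, 2), total_completion_time)` places Components 1 and 2 in the left subset of pivot 3, matching the pairwise comparisons above.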
For the first m runs, the pivot component a_1 is swapped sequentially with all the other components. After conducting the first m runs, all components can be divided into two subsets: (a) all components a_t (t = 2, . . . , m) that are believed to precede the pivot component a_1 (placed in the left subset), and (b) all components a_t that are believed to follow the pivot component a_1 (placed in the right subset). Thus, we obtain an initial partition order, denoted O*_1, of the form (left subset) −→ a_1 −→ (right subset).
Recursively applying the above steps to both the left and the right subsets, the optimal orders can be obtained. The proposed procedure starts from an initial run, followed by a series of sequential runs according to the partition order, until the optimal order is finally obtained. Details are given in Algorithm 1.

Algorithm 1 The algorithm for exploring the optimal order (noiseless case)
Input: The number of components m, the cost function f.
Output: The optimal order(s).
Step 1: Select the component a 1 ∈ {1, 2, . . . , m} as the pivot component. Randomly select the initial order O 1 from a random permutation of {1, 2, . . . , m} with a 1 as the first component. Record the resulting response from this initial order as y 1 .
Step 2: For t = 2, . . . , m, the tth run O_t is conducted by swapping the pivot component a_1 with the component in position t. Denote by y_t the corresponding response.
Step 3: Partition the order based on the pivot component. All components are divided into two (left and right) subsets by comparing y_t and y_{t−1}. After this partitioning, the left subset contains all components which precede the pivot component, while the right subset contains all components which follow the pivot component.
Step 4: Recursively apply Steps 1-3 to both the left and the right subsets, until the optimal order is obtained.
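The steps above can be sketched as a short recursive routine. This is an illustrative reading of Algorithm 1, assuming a noiseless black-box `cost` with no tied responses; for simplicity the sketch evaluates each subset on its own, whereas a real experiment would keep the remaining components fixed around the subset being sorted. The toy cost `toy_cost` is a made-up example:

```python
def adaptive_order(components, cost):
    """Quick-sort-style search for the cost-minimizing order (Steps 1-4)."""
    if len(components) <= 1:
        return list(components)
    order = list(components)      # Step 1: first component acts as the pivot
    pivot = order[0]
    y_prev = cost(tuple(order))
    before, after = [], []
    for t in range(1, len(order)):            # Step 2: swap pivot rightwards
        other = order[t]
        order[t - 1], order[t] = order[t], order[t - 1]
        y_curr = cost(tuple(order))
        # Step 3: partition by comparing consecutive responses
        (after if y_prev < y_curr else before).append(other)
        y_prev = y_curr
    # Step 4: recurse on both subsets and combine
    return adaptive_order(before, cost) + [pivot] + adaptive_order(after, cost)

def toy_cost(order):
    """Toy noiseless response: total completion time (smaller is better)."""
    elapsed = total = 0
    for p in order:
        elapsed += p
        total += elapsed
    return total
```

With a pairwise-dominated cost such as total completion time, the routine recovers the globally optimal (ascending) order.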

Illustrative Example: Job Scheduling Problem
The job scheduling problem has been widely studied as a popular example of OofA experiments (see Chen, Mukerjee, and Lin 2020; Winker, Chen, and Lin 2020; Chen, Peng, and Lin 2021; Zhao, Lin, and Liu 2021). The goal of the job scheduling problem is to find an optimal order of m jobs that minimizes the overall response (Leung 2004; Pinedo 2016). Suppose there are m jobs to be processed sequentially. For a given job order, the completion time C_j of the jth processed job is the sum of the processing times of the first j jobs in that order, where p_i represents the processing time of job i (i = 1, . . . , m). The total cost is typically represented by y = Σ_{i=1}^m ω_i C_i^r, where the weight ω_i reflects the importance of each job, and the power r > 0 describes how the cost of delay inflates over time. As an illustrative example, we consider the quadratic response y = Σ_{i=1}^m ω_i C_i^2. In this section, a job scheduling problem with m = 8 is used to illustrate the proposed method given in Algorithm 1.
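The cost function just described can be written down directly; a small sketch (the job labels, processing times, and weights below are made-up illustrations, not data from the paper):

```python
def scheduling_cost(order, proc_time, weight, r=2):
    """y = sum_i w_i * C_i^r, where C_i is the completion time of job i,
    i.e., the sum of processing times of all jobs up to and including i."""
    elapsed, y = 0.0, 0.0
    for job in order:
        elapsed += proc_time[job]
        y += weight[job] * elapsed ** r
    return y

# Hypothetical 3-job instance with unit weights.
p = {1: 2.0, 2: 1.0, 3: 4.0}
w = {1: 1.0, 2: 1.0, 3: 1.0}
print(scheduling_cost((2, 1, 3), p, w))   # C = 1, 3, 7 -> 1 + 9 + 49 = 59.0
```

Each order of the m jobs thus maps to one response value, which is exactly the black-box evaluated by one experimental run.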
Recursively applying the above approach to the two (left and right) subsets, the optimal order can be obtained. Specifically, for the left subset {2, 3, 7}, using the same approach as in the previous step, the additional Runs 9-11 are shown in Phase II(a) of Table 2. Since y_9 < y_10 and y_10 < y_11, the left subset becomes 2 → {3, 7}. Thus, one more run, Run 12 in Table 2, is conducted. Comparing y_9 and y_12, Component 7 should precede Component 3, so the final left subset is 2 → 7 → 3.
For confirmation, we evaluate all 8! = 40,320 possible orders. Note that this is typically not possible in practice, especially for large m. The actual optimal order is found to be 2 → 7 → 3 → 1 → 4 → 5 → 8 → 6, with cost y = 8867.48. This is indeed the optimal order identified by our method (see Table 2), which needed only 18 runs, while running all 8! = 40,320 orders is typically infeasible in practice. Figure 1 depicts the workflow of the algorithm.
All the existing methods need to carefully select an optimal design and run all the experiments (runs) at one time, while the proposed method is sequential. The run sizes of the existing methods are typically much larger than that of the proposed method. The total number of potential orders is 8! = 40,320. The minimum run size is n = 29 for the PWO model (see Van Nostrand 1995; Peng, Mukerjee, and Lin 2019; Zhao, Lin, and Liu 2021), while it is n = 56 for the CP model (Yang, Sun, and Xu 2021). Note that these methods do not ensure the correctness of their final results (see Zhao, Lin, and Liu 2021; Yang, Sun, and Xu 2021). The proposed method, however, takes only 18 runs. See Section 6 for detailed run-size comparisons in general. The existing methods need to fit an approximate model, and the final optimal order they predict depends on the accuracy of the fitted model; they may be misleading when the model does not fit well. On the contrary, the proposed method explores the optimal order without any model specification. As mentioned by one of the referees, the proposed method works well when the OofA problem satisfies the following two conditions: 1. There is no, or very low, random variation compared to the size of the nonzero pairwise-reordering effects. This issue will be further explored in Section 7. 2. Each i-versus-j effect does not depend on any other ordering (the no-interaction case); or, more generally, the effect does not change sign across other orderings and its effect nearest to zero still meets the criterion in 1. So there may be higher-order effects (triples, for example), but they are not sign-changing.

The Proposed Method: General Approach
The proposed method given in Section 2 does not consider the situation where there are "equal" responses (y_{t−1} ≈ y_t). This section proposes a general approach to handle the OofA experiment in this situation. Note that the number of experiments conducted by the general approach is typically smaller than in the noiseless case. The three-way quick sort, introduced by Bentley and McIlroy (1993), divides the array into three subsets: a left subset (elements less than the pivot), a middle subset (elements equal to the pivot), and a right subset (elements greater than the pivot). The middle subset is a collection of all equal elements, and the quick sort is then applied recursively only to the left and the right subsets. Since the middle subset needs no further sorting, three-way quick sort generally requires fewer comparisons than simple quick sort.
Following the idea of three-way quick sort, the proposed method can also be modified to solve the OofA problem with "equal" responses. The general approach divides the original order into three subsets: the left, right, and middle (insignificant difference) subsets. Under the framework of Algorithm 1, the function sgn(x) is extended as follows:

sgn(x) = +1 if x > z; 0 if −z ≤ x ≤ z; −1 if x < −z, (3)

where z is a positive threshold constant related to the hypothesis test (z = t_{α/2} √2 σ, as will be shown at the end of Section 4). Let {a_1 ⊗ a_t}(0) = {a_1, a_t}, where {a_1, a_t} denotes that a_t ≈ a_1, that is, the order of components a_t and a_1 is irrelevant. The order of the pivot component a_1 and a_t (t = 2, . . . , m) can thus be identified by comparing y_{t−1} and y_t: if |y_{t−1} − y_t| ≤ z, the response difference for the order of a_1 and a_t is insignificant.
In this case, the original order can be divided into three subsets: the left, right, and middle (insignificant difference) subsets. One can then recursively apply the proposed method to the left and the right subsets. Note that the optimal order may not be unique in this case.
A modification of Algorithm 1 to handle identical responses is given in Algorithm 2. Steps 1 and 2 remain unchanged. In Step 3, instead of dividing all components into two subsets, one more subset is considered: the middle (insignificant) subset. In Step 4, we recursively apply the proposed method to the left and right subsets, and obtain the optimal order by combining all sub-orders.
The most critical idea of Algorithm 2 is to identify whether two responses are "equal" (insignificantly different). Suppose N is the total number of experiments, and Y = (y_1, y_2, . . . , y_N) is the vector of observed responses. For simplicity, we assume that the OofA experiments have homogeneous variance (var(Y_i) = σ^2). Denote the expectation E(Y_{t−1} − Y_t) = δ_t and the variance var(Y_{t−1} − Y_t) = 2σ^2. The "equal" components are identified by hypothesis testing. Specifically, we need to test the hypothesis H_0: δ_t = 0 versus H_1: δ_t ≠ 0. (4)
Any test method can be used here. We may use the z-test with the test statistic z_{δ_t} = (δ̂_t − 0)/(√2 σ).

Algorithm 2 The general algorithm for the OofA experiment with noise
Input: The number of components m, the cost function f.
Output: The optimal order(s).
Step 1: Select Component a 1 ∈ {1, 2, . . . , m} as the pivot component. Randomly select the initial order O 1 from a random permutation of {1, 2, . . . , m} with a 1 as the first component. Record the resulting response from this initial order as y 1 .
Step 2: For t = 2, . . . , m, the tth run O_t is conducted by swapping the pivot component a_1 with the component in position t. Denote by y_t the corresponding response.
Step 3: Partition the order based on the pivot component. All components are divided into three (left, middle, and right) subsets by comparing y_t and y_{t−1}, where the left subset contains all components which precede the pivot component, the middle subset contains all components which are switchable with the pivot component, and the right subset contains all components which follow the pivot.
Step 4: Recursively apply Steps 1-3 to both the left subset and the right subset, keeping the middle (insignificant) subset unchanged, until the optimal order is obtained.
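Algorithm 2 differs from Algorithm 1 only in the three-way split of Step 3. A hypothetical sketch follows, where `z` is the significance threshold discussed below, `cost` is the (possibly noisy) black-box response, and `toy_response` is a made-up noiseless example used only to exercise the code:

```python
def adaptive_order_noisy(components, cost, z):
    """Three-way partition: differences within +/- z are insignificant,
    so those components stay next to the pivot in the middle subset."""
    if len(components) <= 1:
        return list(components)
    order = list(components)
    pivot = order[0]
    y_prev = cost(tuple(order))
    before, middle, after = [], [pivot], []
    for t in range(1, len(order)):
        other = order[t]
        order[t - 1], order[t] = order[t], order[t - 1]
        y_curr = cost(tuple(order))
        diff = y_prev - y_curr
        if diff > z:                 # other should precede the pivot
            before.append(other)
        elif diff < -z:              # pivot should precede other
            after.append(other)
        else:                        # |diff| <= z: order is insignificant
            middle.append(other)
        y_prev = y_curr
    return (adaptive_order_noisy(before, cost, z) + middle
            + adaptive_order_noisy(after, cost, z))

def toy_response(order):
    """Toy noiseless response: total completion time."""
    elapsed = total = 0
    for p in order:
        elapsed += p
        total += elapsed
    return total
```

With a negligible threshold the routine reduces to Algorithm 1; with a large threshold the compared components fall into the middle subset and their relative order is left as observed.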
For the case of homogeneous known variance σ^2, the point estimator of δ_t is δ̂_t = y_{t−1} − y_t; using significance level α, H_0 is rejected when |δ̂_t| > z_{α/2} √2 σ. This enables us to construct a test that identifies all components which are not significantly different from the pivot component; these components are then put into the middle subset. For the case of unknown variance, the t-test can be used once the estimated variance σ̂^2 is obtained; this case will be discussed in Section 7. Here, all components in the insignificant subset are considered order-invariant. Using Algorithm 2, all possible orders consistent with the partitions can be found, and these orders can be considered optimal (their responses are too close to be distinguished).
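Under known homogeneous variance, the threshold can be computed from the standard normal quantile (the text writes t_{α/2}; for known σ this is the z quantile). A sketch using only the standard library:

```python
import math
from statistics import NormalDist

def tie_threshold(sigma, alpha=0.05):
    """z = z_{alpha/2} * sqrt(2) * sigma: declare a real difference
    (reject H0) when |y_{t-1} - y_t| exceeds this value."""
    return NormalDist().inv_cdf(1 - alpha / 2) * math.sqrt(2.0) * sigma

print(round(tie_threshold(1.0), 3))   # approximately 2.772 for alpha = 0.05
```

At σ = 0 the threshold vanishes and the procedure coincides with the noiseless Algorithm 1.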

Simulation Study
In this section, a job scheduling problem with m = 20 is provided to illustrate the usefulness of the proposed method for OofA experiments with large m. Note that such a large m cannot be handled by any existing OofA method. Without loss of generality, the first component is regarded as the pivot, and the order 1 → 2 → · · · → 20 as the initial run. In our simulation study, the variance σ^2 varies from 0 (noiseless) to small, and then to large (σ^2 = 0, 0.25, 0.5, 0.75, 1, 1.5, and 2). The goal of this simulation is to verify the performance of the proposed method under different variance situations. It is found that the performance of the method deteriorates as the variance increases. Without loss of generality, we select the three cases σ^2 = 0, 0.5, and 1 to demonstrate this trend. We assume that the OofA experiment in this section has homogeneous variance for all scenarios. The following six scenarios are considered:
S1 The experiment is noiseless, σ^2 = 0.
S2 The homogeneous variance is known and small, σ^2 = 0.5.
S3 The homogeneous variance is known, σ^2 = 1.
S4 The variance is unknown and is estimated by the sample variance (see Section 7 for details).
R1 Randomly choose 5000 samples from all possible 20! orders.
R2 Randomly choose 10,000 samples from all possible 20! orders.
For scenarios R1 and R2, the minimal response among the 5000 (respectively 10,000) sampled responses was recorded. We repeat this process 200 times, and the 200 minimal responses are then displayed for comparison with the proposed method. The details are discussed later in this section.
Without loss of generality, the order O 1 = 1 → 2 → · · · → 20 is designated as the initial run, and Component 1 is the pivot. The simulation procedure is designed following the steps below.
1. Randomly draw a noise vector ε from a normal distribution N(0_r, σ^2 I_r), where I_r is the r × r identity matrix and r is the number of experiments.
2. Generate the response vector y with the random error vector generated in Step 1.
3. Let δ̂_t = y_{t−1} − y_t. Test the hypothesis H_0: δ_t = 0 versus H_1: δ_t ≠ 0, using the test statistic z_{δ_t} = (δ̂_t − 0)/(√2 σ) (with significance level α = 5%).
4. Apply Algorithm 2 to obtain the optimal order. Note that all components are divided into three (left, middle, and right) subsets according to Expression (3). Here, the left subset contains all components which precede the pivot component, the middle subset contains all components whose order (vs. the pivot) is insignificant, and the right subset contains all components which follow the pivot.
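One replication of this procedure can be sketched as follows. The driver is hypothetical: `algorithm(components, cost)` stands for the adaptive procedure of Section 4, `true_cost` for the noiseless response surface, and each observed response receives independent N(0, σ²) noise, matching Steps 1-2 above:

```python
import random

def simulate_once(m, sigma, true_cost, algorithm, seed=None):
    """Run one replication: the algorithm only sees noisy responses, and
    the order it returns is finally scored on the noiseless cost."""
    rng = random.Random(seed)

    def noisy_cost(order):
        return true_cost(order) + rng.gauss(0.0, sigma)

    estimated_order = algorithm(list(range(1, m + 1)), noisy_cost)
    return true_cost(tuple(estimated_order))

# Toy noiseless response used only to exercise the driver.
def toy_cost(order):
    elapsed = total = 0
    for p in order:
        elapsed += p
        total += elapsed
    return total
```

Repeating `simulate_once` 200 times for each σ² value reproduces the boxplot comparison described below.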
The simulation results are displayed in Figure 2 for the different scenarios (the details of S4 will be discussed in Section 7). Note that under scenario S1 (noiseless), the final optimal order is unique. The simulations are described in detail below.
The true optimal order could be found by evaluating all 20! orders, but this is far too large to be conducted in practice (20! ≈ 2.43 × 10^18). For comparison, we randomly choose 5000 samples from all possible 20! orders and evaluate their corresponding responses. Among these 5000 responses, the minimal response was recorded. We repeat this process 200 times, and these 200 minimal responses are displayed in the fifth boxplot in Figure 2. Similarly, the same process is applied with a sample size of 10,000, and the minimal responses are displayed in the last boxplot in Figure 2. As expected, when the sample size is larger, the minimal response found is smaller (and more consistent as well). For a fair comparison, we repeat our method 200 times and evaluate the corresponding responses. Figure 2 presents the boxplots of the performances under the three simulation scenarios with known σ^2 (S1, S2, S3), the scenario with unknown σ^2 (S4, as will be discussed in Section 7), and the two random sampling cases (R1, R2). It is clear from Figure 2 that the four scenarios are ranked (from best to worst) as S1, S2, S3, S4, consistent with the magnitudes of their variances. The proposed method in all four scenarios performs better than the optimal orders obtained from the randomly chosen samples of 5000 and 10,000 orders. It can also be seen that the run size of the experiment decreases as the variance increases. This indicates that the proposed method performs well even when the responses are contaminated with noise.

The Optimal Result
This section presents some theoretical support for the proposed method. The following result gives a sufficient condition under which the proposed method works well.
Proposition 1. Assume that the response of the OofA experiment is dominated by the pairwise effect. Then the proposed algorithm will yield the true optimal order. Specifically, if the OofA experiment is noiseless (i.e., σ 2 = 0), then the obtained optimal order is unique.
Proposition 1 shows that the proposed method works well when the response of the OofA experiment is dominated by pairwise effects. For example, the job scheduling problem can be represented in pairwise form, and can thus be solved by the proposed method. The proposed method does not require any model assumption.
Next, we consider the case of an OofA experiment with noise. Using Algorithm 2 and putting all pairwise relations A_{a_1,a_t} identified as insignificant (i.e., A_{a_1,a_t} = {a_1, a_t}) into a set A, we have the result of Proposition 2.
Proposition 2. If the set A is not empty, the optimal order is not unique.

Run Size Consideration
The quick sort algorithm requires O(m log m) and O(m^2) comparisons in the average case and the worst case, respectively, where m is the size of the array to be sorted (Yaroslavskiy 2009). The theoretical lower bound (best case) is of order m log m. Here, we are interested in the number of experiments needed to find the optimal order. For simplicity, we focus on the run sizes of the proposed method in the noiseless case (Algorithm 1); the simulation study in Section 5 shows that the run sizes of the general method with noise are even smaller.
The worst case occurs when the partition process always selects the extreme (the first or the last) component as the pivot component. These are the most unbalanced partitions possible. For the worst case, we have the following result; the proof is trivial thus omitted.
Lemma 1. Under the worst case, the run size of the proposed method is m(m − 1)/2 + 1. Note that this worst-case run size is equivalent to the minimal run size of the PWO model for the OofA experiment (see Zhao, Lin, and Liu 2021).
The best case scenario occurs when the partitions are as evenly balanced as possible, that is, the sizes on left and right subsets are equal (or differ by 1). Let C m denote the expected number of comparisons of the proposed algorithm sorting an order with m components.
Lemma 2. Under the best case, the sample size is determined by the recurrence formula (6).

For the average case, we have the following result.

Lemma 3. For the average case, the sample size is determined by the recurrence formula (7).

Combining Lemmas 1, 2, and 3, the expected run size N of the proposed method can be stated as follows.
Theorem 1. Let C_m denote the expected run size with m components. For the proposed method, we have: 1. the upper bound of N is given by (5), that is, m(m − 1)/2 + 1; 2. the lower bound of N is given by (6); 3. the average N is given by (7). The PWO model and the CP model are currently the two popular models for studying OofA experiments; their minimum run sizes are m(m − 1)/2 + 1 (PWO model) and m(m − 1) (CP model), respectively. Figure 3 and Table 2 (in the supplementary materials) show the expected run size of the proposed sorting algorithm for m = 3, . . . , 50.
As Figure 3 and Table 2 (in the supplementary materials) show, the cost of the proposed method is considerably smaller than that of all the existing methods. Taking m = 50 as an example, in the best case the proposed method needs only 191 experiments, compared with the current models (PWO model: 1226; CP model: 2450); that is, it uses only 15.58% of the runs of the PWO model and 7.79% of the runs of the CP model. In the average case, the proposed method uses only 21.20% of the runs of the PWO model and 10.60% of the runs of the CP model. Note that the proposed method rarely reaches the worst case; its probability is found to be (2/m)^p, where m is the number of components and p is the number of recursions.
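The run-size comparison can be checked numerically. A small sketch of the benchmark formulas quoted above (PWO minimum C(m,2) + 1, CP minimum m(m − 1)); the best-case value 191 for m = 50 is taken from the text, not computed here:

```python
def pwo_min_runs(m):
    """Minimal run size of the PWO model: C(m, 2) + 1."""
    return m * (m - 1) // 2 + 1

def cp_min_runs(m):
    """Minimal run size of the CP model: m(m - 1)."""
    return m * (m - 1)

best_case_runs_50 = 191     # proposed method, best case, quoted from the text
print(pwo_min_runs(50), cp_min_runs(50))                      # 1226 2450
print(round(100 * best_case_runs_50 / pwo_min_runs(50), 2))   # 15.58
```

By Lemma 1, `pwo_min_runs(m)` also equals the worst-case run size of the proposed method.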

Further Results: Unknown Variance
As mentioned in Section 4, the most critical idea of Algorithm 2 is to identify whether two responses are "equal" (insignificantly different). This is done by testing the hypothesis (4), H_0: δ_t = 0 versus H_1: δ_t ≠ 0, for t = 2, . . . , N (N is the total number of runs required by the method). For the OofA problem under the noiseless case (σ^2 = 0) and the case of equal known σ^2, the preceding sections have discussed the details and shown that the proposed method performs well. In this section, we discuss the case of unknown variance(s).
To estimate the unknown variance, the tth run can be replicated (say, n_t times). For the replications of the tth and (t − 1)th runs (t = 2, . . . , N), assume (Y_{t,1}, . . . , Y_{t,n_t}) ∼ N(μ_t, σ_t^2) and (Y_{t−1,1}, . . . , Y_{t−1,n_{t−1}}) ∼ N(μ_{t−1}, σ_{t−1}^2). Hypothesis (4) is in fact the very basic conventional problem of comparing two population means; for example, one can use the classical two-sample z-test. The means μ_t and μ_{t−1} can be estimated by the corresponding sample means of the responses with n_t and n_{t−1} replications, while σ_t^2 and σ_{t−1}^2 can be estimated by the corresponding sample variances. To reduce experimental cost, we seek reliable estimates of μ_t, μ_{t−1}, σ_t^2, and σ_{t−1}^2 via a minimum number of runs. For example, if the variances of all runs are assumed equal, that is, σ_t^2 = σ^2 for all t, then one can replicate only the initial run (or any single run) n times to estimate the unknown variance σ^2. Using Algorithm 2, all possible orders can then be found, and these orders can be considered optimal. Note that a larger number of replicates n will likely result in a more reliable estimate of the variance, but will also increase the experimental cost.
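The comparison of two replicated runs can be sketched as a two-sample test. This is an illustrative version with estimated variances, using the normal quantile for simplicity (the text suggests a t-test when the variance is estimated, which would replace the quantile):

```python
import math
from statistics import NormalDist, mean, stdev

def orders_equivalent(prev_reps, curr_reps, alpha=0.05):
    """Test H0: mu_{t-1} = mu_t from replicated responses of runs t-1 and t;
    returns True when the difference is insignificant (middle subset)."""
    n1, n2 = len(prev_reps), len(curr_reps)
    se = math.sqrt(stdev(prev_reps) ** 2 / n1 + stdev(curr_reps) ** 2 / n2)
    z_stat = (mean(prev_reps) - mean(curr_reps)) / se
    return abs(z_stat) <= NormalDist().inv_cdf(1 - alpha / 2)
```

With equal variances across runs, one could instead estimate σ² once from a single replicated run and reuse it for every comparison, as described above.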

Conclusion
Order-of-addition (OofA) experiments have recently received a great deal of attention. The OofA experiment has been used in many areas, including biochemistry, food science, nutritional science, and pharmaceutical science. The existing methods are mostly model-dependent. The appropriateness of the resulting optimal orders thus relies on (a) the correctness of the underlying assumed model (such as the PWO or CP models), and (b) the goodness of the model fit. Furthermore, these methods are not applicable to large m (m > 7, say).
Motivated by the basic idea of quick sort, this article proposes an efficient adaptive methodology for the OofA experiment with any number of components m. Specifically, for a given pivot component a_1, Algorithm 1 divides all components into two subsets: (a) the components a_t (t = 2, . . . , m) that precede the pivot component a_1 (placed in the left subset), and (b) the components a_t that follow the pivot component a_1 (placed in the right subset). Algorithm 1 is then applied recursively to both the left and the right subsets until the optimal order is obtained. Algorithm 1 works well when all responses are distinct. Moreover, Algorithm 2 is proposed to solve the OofA problem when there are "equal" responses (y_{t−1} ≈ y_t). Instead of dividing all components into two subsets, Algorithm 2 considers three subsets: the left, middle, and right subsets. Algorithm 2 is then applied recursively to both the left and the right subsets until the optimal order is obtained.
Note that multiple-hypothesis issues may arise in forming the middle subset; the Benjamini-Hochberg procedure is used to address them, with details in Benjamini and Hochberg (1995). Simulation studies confirm that the proposed approach performs well for OofA experiments (even for large m). Theoretical support is given (Section 6) to establish the effectiveness of the proposed method, and it is shown that the proposed method is able to find the optimal order. In particular, the unique optimal order can be found by the proposed method when there are no equal observed responses.
This article can be regarded as an early work on OofA experiments with a large number of components. Compared with the existing work, the proposed method can obtain the optimal order for large m (e.g., m ≥ 20) with a considerably smaller number of runs. The proposed method, building upon the quick-sort algorithm, explores the optimal order without any model specification; it needs no model for either the design or the analysis of the OofA experiment. For the OofA experiment with identical-response cases, the critical step of the proposed method is to test the hypothesis H_0: δ_t = 0 versus H_1: δ_t ≠ 0, where δ_t = E(Y_{t−1} − Y_t). As a conventional tool, the t-test is used, which requires estimating the unknown variance of the errors. On one hand, more experimental runs (replications) give a more reliable estimate of the unknown variance; on the other hand, more replicates imply a higher experimental cost. There is a balance that one needs to strike.

Supplementary Materials
The supplementary material contains (a) all the proofs of the theoretical supports; (b) two additional tables in Sections 5 and 6.