A deterministic sequential maximin Latin hypercube design method using successive local enumeration for metamodel-based optimization

Space-filling and projective properties of design of computer experiments methods are desired features for metamodelling. To enable the production of high-quality sequential samples, this article presents a novel deterministic sequential maximin Latin hypercube design (LHD) method using successive local enumeration, notated as sequential-successive local enumeration (S-SLE). First, a mesh-mapping algorithm is proposed to map the positions of existing points into the new hyper-chessboard to ensure the projective property. According to the maximin distance criterion, new sequential samples are generated through successive local enumeration iterations to improve the space-filling uniformity. Through a number of comparative studies, several appealing merits of S-SLE are demonstrated: (1) S-SLE outperforms several existing LHD methods in terms of sequential sampling quality; (2) it is flexible and robust enough to produce high-quality multiple-stage sequential samples; and (3) the proposed method can improve the overall performance of sequential metamodel-based optimization algorithms. Thus, S-SLE is a promising sequential LHD method for metamodel-based optimization.


Introduction
In order to save the computational expense of modern engineering design optimization involving high-fidelity simulations (e.g. finite element analysis and computational fluid dynamics models), metamodels, also called surrogate models, have been widely utilized to replace those expensive black-box functions. In the past two decades, significant development of metamodel-based design optimization (MBDO) has been achieved, including novel MBDO algorithms and industrial applications (Wang and Shan 2007; Forrester and Keane 2009; Shan and Wang 2010; Viana et al. 2014). As a key technique in the MBDO area, design of computer experiments greatly influences the global accuracy of metamodels and the overall performance of MBDO algorithms, including global exploration capability, convergence speed, efficiency and robustness. It is commonly recognized that high-quality samples should possess favourable space-filling and projective properties (Johnson, Moore, and Ylvisaker 1990; Morris and Mitchell 1995). The space-filling property requires points to be distributed in the design space as evenly as possible, and the projective property requires the samples to be uniformly positioned in each dimension or in the lower dimensional projective planes (van Dam, Husslage, and den Hertog 2007). The Latin hypercube design (LHD) method naturally guarantees the perfect projective property. In recent years, many one-stage optimal Latin hypercube design (OLHD) methods have been developed. For sequential sampling, the Quasi-LHD method (Xiong et al. 2009) pursues both space-filling and projective behaviours through a genetic algorithm (GA) optimization process, but Quasi-LHD is not as efficient since a global optimization process is required. Crombecq, Laermans, and Dhaene (2011) used a Monte Carlo approach to develop the mc-intersite-proj-th (MIPT) method for sequential sampling, which employed a space-reduction strategy to ensure the projective property of samples.
Liu, Xu, and Wang (2015) proposed a sequential maximin LHD approach using a Monte Carlo method and space-reduction strategy (i.e. MCSR), and then a local boundary search algorithm was used to further improve the space-filling property. The Monte Carlo approach, however, introduces randomness into the MIPT method and its variants. Hence, new sequential maximin LHD methods are still expected to upgrade the performance of move-limit-based MBDO algorithms.
In this article, a novel sequential maximin LHD method using successive local enumeration, named sequential-successive local enumeration (S-SLE), is proposed to sequentially produce samples in regions of interest considering the space-filling and projective properties. S-SLE is an extension of a previous one-stage SLE method whose satisfactory performance has been demonstrated, especially for low- and intermediate-dimensional problems (Zhu et al. 2012). A practical mesh-mapping algorithm is first proposed to map the existing points located in the region of interest into the evenly meshed hyper-chessboard. Then, using the local enumeration mechanism, new samples are determined by maximizing the minimal distance between the newly added and existing points. Since no global optimization process is required, the developed S-SLE is expected to efficiently generate high-quality sequential samples.
The rest of the article is organized as follows. Section 2 details the proposed S-SLE method. S-SLE is then compared with several one-stage OLHD methods and sequential LHD methods in Section 3. In Section 4, further comparative studies are presented to show the appealing merits of S-SLE for metamodel-based optimization. Finally, conclusions and directions for future work are given in Section 5.

Description of the sequential-successive local enumeration method
To ensure space-filling and projective properties, the proposed S-SLE method consists of two subroutines, namely, the mesh-mapping algorithm to map existing points into the evenly meshed hyper-chessboard, and the sequential maximin sampling algorithm using successive local enumeration based on the maximin distance criterion. The two algorithms are detailed as follows.

Mesh-mapping algorithm
The original one-stage SLE method gradually positions the samples in a hyper-chessboard to ensure the projective property in each dimension, and a successive local enumeration process is then executed to improve the space-filling property. For two-dimensional (2D) problems, the hyper-chessboard is a square; for three-dimensional (3D) problems, it becomes a cube; and for n-dimensional problems (n > 3), it can be regarded as a hypercube. According to the SLE algorithm, the samples need to be placed in the available grids of a uniformly meshed hyper-chessboard based on the number of samples (m). Each grid is one element of the meshed hyper-chessboard and has the same geometric topology as the hyper-chessboard. Since the existing samples may be located at arbitrary positions, the authors derived a mesh-mapping algorithm to identify the grids that have been occupied by existing points, thereby ensuring the projective property of sequential sampling. Assume that the number of existing points is m_1, the number of sequential samples is m_2, and the existing points set is represented by P^ex = {P^ex_1, P^ex_2, ..., P^ex_m1}, where p^(j)_k stands for the jth coordinate of the kth existing point. The procedure of the n-dimensional mesh-mapping algorithm is explained below.
Step 1. (Normalization): The current region of interest, also called the design space for simplification, and the existing points are first normalized according to Equation (1):

x̄ = (x − x_min) / (x_max − x_min)    (1)

In Equation (1), x indicates the original coordinates, x_max and x_min are the upper and lower bounds of the current design space, and x̄ is the normalized coordinates lying in the interval [0, 1]. The normalized design space becomes a unit hypercube.
Step 2. (Generate evenly meshed hyper-chessboard): Since the sequential sampling process of the proposed method is executed cell by cell, the cell number should be equal to the total number of samples, including existing points and sequential samples, i.e. m = m_1 + m_2. Thus, the first dimension of the design space is divided into m = m_1 + m_2 parts to form m cells. Then, each of the other dimensions is divided into m_o = m + c parts to construct denser meshes for achieving better space-filling performance, where c is a positive integer called the mesh density parameter. The influence of parameter c is analysed in Section 2.3 to find its recommended value. In the first dimension, each part has the equivalent size of 1/m, and in the other dimensions, the size is 1/m_o. In terms of m, m_o and n, the design space can then be converted into an evenly meshed hyper-chessboard with m × m_o^(n−1) grids. The position of an arbitrary grid in the hyper-chessboard can be expressed by a set of integers (I_1, I_2, ..., I_n), where I_1 ∈ {1, 2, ..., m} and I_i ∈ {1, 2, ..., m_o} for i = 2, ..., n.

Step 3. (Map the coordinate of one existing point): The normalized coordinates of the kth existing point can be expressed as P̄^ex_k = (p̄^(1)_k, p̄^(2)_k, ..., p̄^(n)_k). To map the first dimension of the kth existing point, p̄^(1)_k, first set j = 1, and then compare p̄^(1)_k with j/m. If p̄^(1)_k < j/m, the mapped value p̂^(1)_k of p̄^(1)_k is set to j. Otherwise, set j = j + 1 and continue until p̄^(1)_k < j/m. For the other dimensions of the kth existing point, compare p̄^(i)_k with j/m_o to determine the mapped value p̂^(i)_k through the same process as above. After that, the mapped coordinates of the kth existing point in the hyper-chessboard can be expressed by an integer set P̂^ex_k = (p̂^(1)_k, p̂^(2)_k, ..., p̂^(n)_k), which indicates the grid already occupied by the kth existing sample.
Step 4. (Map all the existing points and output): Step 3 is repeated to map all the existing points in the same manner. The mapped coordinates of all the existing points, expressed by an integer matrix P̂^ex = {P̂^ex_1, P̂^ex_2, ..., P̂^ex_m1}, and the normalized coordinates of the existing points P̄^ex are output for the subsequent sequential maximin sampling process. The outputs are thus the locations of the existing points in the m × m_o^(n−1) hyper-chessboard, where each mapped coordinate is an integer between 1 and m (first dimension) or between 1 and m_o (other dimensions), together with the normalized coordinates P̄^ex.

In order to illustrate the mesh-mapping algorithm clearly, a 2D example is presented, where the hyper-chessboard degenerates to a square. Assume that four points already exist in the design space (i.e. m_1 = 4), as shown in Figure 1(a), and another four sequential samples need to be generated (i.e. m_2 = 4). The number of total samples is thus equal to eight (i.e. m = 8). By setting c = 4, the first dimension is divided into eight parts, while the second dimension is divided into 12 parts (i.e. m_o = 12). For the first existing point (from the left), the first dimension satisfies the inequality p̄^(1)_1 = 0.054 < 1/8, so the mapped value is set to 1 (i.e. p̂^(1)_1 = 1). Similarly, the mapped value of the second dimension for the first point is 4 (i.e. p̂^(2)_1 = 4). Thus, the mapped position of the first point is the grid (1, 4) in the plane chessboard, as shown in Figure 1(b).
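To make the mesh-mapping steps concrete, the following hypothetical Python sketch implements Steps 1-4 (the function name `mesh_map` and the NumPy vectorization are illustrative, not from the paper; the successive comparison against j/m in Step 3 is equivalent to the ceiling-style index used here):

```python
import numpy as np

def mesh_map(points, lower, upper, m2, c=5):
    """Map existing points into the evenly meshed hyper-chessboard.

    points       : (m1, n) existing samples in the region of interest
    lower, upper : length-n bounds of the current region of interest
    m2           : number of sequential samples to be generated
    c            : mesh density parameter (denser mesh in dims 2..n)
    Returns (idx, normed): integer grid indices (dim 1 in 1..m, the
    other dims in 1..m_o) and the normalized coordinates.
    """
    points = np.atleast_2d(np.asarray(points, dtype=float))
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    m1, n = points.shape
    m = m1 + m2              # number of cells along the first dimension
    m_o = m + c              # finer mesh in the remaining dimensions

    # Step 1: normalize into the unit hypercube (Equation (1)).
    normed = (points - lower) / (upper - lower)

    # Steps 2-3: the smallest j with p < j/m is floor(p*m) + 1, i.e. a
    # ceiling-style index; clip handles points lying exactly on a grid line.
    idx = np.empty((m1, n), dtype=int)
    idx[:, 0] = np.clip(np.floor(normed[:, 0] * m).astype(int) + 1, 1, m)
    idx[:, 1:] = np.clip(np.floor(normed[:, 1:] * m_o).astype(int) + 1, 1, m_o)
    return idx, normed
```

On the paper's 2D example (m_1 = 4, m_2 = 4, c = 4, so m = 8 and m_o = 12), a first point with normalized coordinates (0.054, 0.30) (the second coordinate is an assumed value) maps to grid (1, 4), as in Figure 1(b).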

Sequential maximin sampling algorithm
Once the mapped coordinates of the existing points have been obtained, a sequential maximin sampling algorithm using successive local enumeration, inherited from the one-stage SLE method, can be used to generate the desired sequential samples. In the m × m_o^(n−1) unit hyper-chessboard, the new sequential points are determined cell by cell, and only one point can be designated in each cell, which consists of m_o^(n−1) grids. Algorithm 1 presents the procedure of the sequential maximin sampling algorithm for n-dimensional problems.
Step 1. (Update the available grids for the selected cell, lines 2-3): According to the principle of LHD, every dimension of a newly added sample has to differ from those occupied by existing points. Similar to SLE, the S-SLE method generates sequential samples cell by cell. All the cells are numbered with integers from 1 to m according to their distance from the origin. The cells are checked one by one until the first cell without any existing points is found, which is selected for generating the jth new sample. The first dimension of the jth sequential sample, P^seq(1)_j, is set to the selected cell number. In the evenly meshed hyper-chessboard, candidate positions in the remaining n − 1 dimensions are represented by integers from 1 to m_o, also called identifiers. To obtain the available grids, the identifiers that are not occupied by existing samples are stored in an (m_2 + c + 1 − j) × (n − 1) matrix P̃. For the jth new sample, each column of P̃ stores the m_2 + c + 1 − j candidate identifiers in one of the remaining n − 1 dimensions. Since the first dimension of the jth new sample has already been determined, the n − 1 columns of matrix P̃ record the candidate information for the remaining dimensions. Column-wise permutation on matrix P̃ is performed to form the other n − 1 dimensions of the available grids. The number of available grids for the jth sequential point is (m_2 + c + 1 − j)^(n−1). P^ava is a matrix whose lth row is P^ava_l, i.e. the lth available grid, l = 1, 2, ..., (m_2 + c + 1 − j)^(n−1).
Step 2. (Determine the jth potential grid set P^po, lines 4-15): Since the centres of the available grids are used to compute the distances, the normalized coordinates of the lth available grid are calculated by Equation (2):

p̄^(1)_l = (P^(1)_l − 0.5)/m,   p̄^(i)_l = (P^(i)_l − 0.5)/m_o for i = 2, ..., n    (2)

where P^ava_l = (P^(1)_l, P^(2)_l, ..., P^(n)_l) is the integer coordinate of the grid. The coordinates of all the available grids can be normalized in this way to obtain the matrix P̄^ava.
Algorithm 1: Sequential maximin sampling using successive local enumeration.

Distances between the lth available grid P̄^ava_l and all the existing points P̄^ex_k (k = 1, 2, ..., m_1 + j − 1) are then calculated to identify the minimal distance (i.e. d_min,l), which is defined as the eigenvalue of the lth available grid. Next, those eigenvalues are sorted in descending order, and the available grids with the top n_p largest eigenvalues in the selected cell are considered as the potential grids P^po_t (t = 1, 2, ..., n_p), which are stored in the set of potential grids P^po. The desired point with the best projective and space-filling properties probably lies in the design space represented by P^po. In this study, the number of potential grids n_p is set to five. If the number of available grids is less than five, all of them are chosen as potential grids.
Step 3. (Determine the potential point P̄^po_t in the tth potential grid, lines 18-30): Since it cannot be guaranteed that the centre point of each potential grid has the best space-filling and projective properties, a subsampling process is executed in each potential grid P^po_t to pursue better space-filling performance. TPLHD (Viana, Venter, and Balabanov 2010), a deterministic one-stage maximin LHD method, is employed to generate s candidate points P^can(q)_t (q = 1, 2, ..., s) in the tth potential grid. The minimal allowable projective distance, i.e. d_p_allow = 0.5/(m − 1), is employed to remove the points with poor projective properties.
If the minimal projective distance between the candidate point P^can(q)_t and any existing point is less than d_p_allow, the minimal distance (i.e. d_min,q) is set to zero, which indicates a non-promising point. Otherwise, the actual distances between the candidate point and the existing points are calculated. The candidate point with the maximal d_min,q (i.e. d_mxm,t) is defined as the potential point P̄^po_t in the tth potential grid.
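The projective-distance screening of Step 3 can be sketched as follows (hypothetical Python; the function name and the vectorized form are illustrative, not the paper's implementation):

```python
import numpy as np

def projective_filter(candidates, existing, m):
    """Screen candidate points against the minimal allowable projective
    distance d_p_allow = 0.5/(m - 1): a candidate lying closer than this
    to any existing point in ANY single dimension is non-promising.

    candidates, existing : arrays of normalized coordinates in [0, 1]
    m : total number of samples (existing plus sequential)
    Returns a boolean mask over the candidates (True = keep).
    """
    candidates = np.atleast_2d(np.asarray(candidates, dtype=float))
    existing = np.atleast_2d(np.asarray(existing, dtype=float))
    d_p_allow = 0.5 / (m - 1)
    # |candidate - existing| per dimension, minimized over all existing
    # points and all dimensions -> the minimal projective distance.
    diff = np.abs(candidates[:, None, :] - existing[None, :, :])
    min_proj = diff.min(axis=(1, 2))
    return min_proj >= d_p_allow
```

With m = 8, d_p_allow = 0.5/7 ≈ 0.071, so a candidate at (0.52, 0.9) is rejected against an existing point at (0.5, 0.5), while one at (0.6, 0.9) passes.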
Step 4. (Determine the jth sequential sample P^seq_j, lines 16-34): Step 3 is repeated to identify all the potential points. Next, the potential point with the maximal d_mxm,t (i.e. d_mxm) is picked as the jth sequential sample P^seq_j. The potential grid, expressed by integers, that contains the jth sequential sample is added to the existing samples set P̂^ex to update the available grids for subsequent iterations.
Step 5. (Determine all the m_2 sequential samples, lines 1-37): Steps 1-4 are repeated to generate all the m_2 sequential samples, which are saved in the sequential samples set P^seq and the entire samples set P. Since the output samples in P^seq and P are still normalized values lying inside the unit hypercube, users can perform the inverse operation of Equation (1) to obtain the actual coordinates.

The same 2D sequential sampling example from the previous section is used to illustrate the procedure of Algorithm 1, as shown in Figure 2. Note that the mesh-mapping of the four existing points has been accomplished, as shown in Figure 1(b). For this 2D problem, a cell is defined as a column consisting of 12 squares (i.e. grids). After eliminating all the columns occupied by the existing points to ensure the projective property, column 2 is the first available cell in which no existing point is located, so the first dimension of the first sequential sample is set to the integer 2. In column 2, the available identifiers for the second dimension include 2, 3, 5, 6, 7, 9, 10 and 12, which means that eight candidate grids, marked by the shaded squares in Figure 2(a), need to be inspected to determine the first sequential sample. According to Algorithm 1, matrix P̃ can be expressed as P̃ = (2, 3, 5, 6, 7, 9, 10, 12)^T. By combining the integer 2 with every row of P̃, the available grids can be expressed by P^ava:

P^ava = [2 2 2 2 2 2 2 2; 2 3 5 6 7 9 10 12]^T    (3)

In terms of Equation (2), the normalized grid centres and their eigenvalues are then computed, and the potential grid set P^po, consisting of the promising grids with the top five largest eigenvalues, is selected, as shown in Equation (5):

P^po = [2 2 2 2 2; 7 6 9 2 5]^T    (5)

The potential grids are marked by red-shaded squares with 45° bars (//) in Figure 2(a), and the black-shaded squares with −45° bars (\\) are the discarded non-promising grids. Next, a subsampling process is applied to all the potential grids to determine the potential points P̄^po_t.
Five TPLHD candidate points generated in the first potential grid P^po_1 = (2, 7) can be expressed as in Equation (6), marked by the four blue circles and one red diamond in Figure 2. The minimal projective distance between each candidate point and all the existing samples is calculated to discard the points with undesired projective performance. In this example, since the minimal projective distance values of candidates 2, 3, 4 and 5 are less than the minimal allowable distance d_p_allow, they are marked as non-promising, and the remaining candidate becomes the potential point of this grid. Completing Steps 3 and 4 then determines the first sequential sample, and in the same manner the remaining three sequential points can be generated; Figure 2(e) shows the initial points and the newly added sequential samples. As can be seen in Figure 2(e), the sequential samples do not crowd the previous points, and all the samples have favourable space-filling and projective properties.
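A simplified sketch of the overall cell-by-cell loop (Steps 1-5) is given below in hypothetical Python. For brevity it designates each new point at the best available grid centre, omitting the TPLHD subsampling of Step 3, so it illustrates the enumeration logic rather than reproducing the full method:

```python
import numpy as np
from itertools import product

def ssle_sketch(existing_idx, existing_norm, m2, c=5):
    """Simplified S-SLE loop: generate m2 sequential points cell by cell,
    designating each point at the available grid centre that maximizes the
    minimal distance to all accepted points.  (The full method additionally
    refines each potential grid with TPLHD subsampling.)

    existing_idx  : (m1, n) integer grid indices from the mesh-mapping step
    existing_norm : (m1, n) normalized coordinates of the existing points
    """
    m1, n = existing_idx.shape
    m = m1 + m2
    m_o = m + c
    occupied_cells = set(int(i) for i in existing_idx[:, 0])
    occupied = [set(int(g) for g in existing_idx[:, i]) for i in range(1, n)]
    accepted = [np.asarray(p, dtype=float) for p in existing_norm]
    new_points = []
    free_cells = [cell for cell in range(1, m + 1) if cell not in occupied_cells]
    for cell in free_cells[:m2]:
        best, best_d = None, -1.0
        # enumerate the available grids in the remaining n-1 dimensions
        free = [[g for g in range(1, m_o + 1) if g not in occ] for occ in occupied]
        for combo in product(*free):
            # grid centre via Equation (2)
            centre = np.array([(cell - 0.5) / m] +
                              [(g - 0.5) / m_o for g in combo])
            d = min(np.linalg.norm(centre - p) for p in accepted)
            if d > best_d:
                best, best_d = combo, d
        centre = np.array([(cell - 0.5) / m] +
                          [(g - 0.5) / m_o for g in best])
        accepted.append(centre)
        new_points.append(centre)
        for i, g in enumerate(best):   # block the chosen identifiers (LHD)
            occupied[i].add(g)
    return np.array(new_points)
```

The enumeration cost per point grows as (m_2 + c + 1 − j)^(n−1), which is consistent with the efficiency trends discussed later for higher dimensional problems.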

Influence analysis of parameter c
The mesh density parameter c is introduced to control how many grids are generated in the hyper-chessboard. It is expected that denser meshes will lead to better space-filling performance but higher computational cost; thus, an appropriate value of c is beneficial for balancing the effectiveness and efficiency of the S-SLE method. To analyse the influence of parameter c on space-filling performance and sampling efficiency, the S-SLE method with various values of c was used to generate sequential samples on four problems. Ten initial samples from the MATLAB lhsdesign function were generated for all the problems. Note that for each problem, the S-SLE sampling processes using different c values were performed based on the identical initial samples set. In each sampling problem, parameter c was gradually increased from 0 to 10 with a unit increment. Each experiment was repeated 30 times to record the average values of d_min (indicated by the circle line) and the sampling time in seconds (indicated by the square line), as shown in Figure 3.

Figure 3 shows that for all the cases an increasing c value leads to larger d_min and sampling time, which is consistent with expectations. For smaller values, parameter c has a significant positive influence on both d_min and CPU time. However, when c reaches a certain value (e.g. 4 or 5), the improvement in the space-filling property slows down as parameter c continues to increase, but the sampling expense still shows a significant increasing trend. To achieve a good trade-off between sampling quality and efficiency, the value of parameter c should range from 4 to 8.

Figure 4. Comparison of two-dimensional sequential sampling results obtained by (a) successive local enumeration (SLE; two stages) and (b) sequential-successive local enumeration (S-SLE; two stages) (n = 2, m_1 = 10, m_2 = 10). Solid circles indicate existing points and hollow circles indicate sequential points.

Properties of the proposed sequential-successive local enumeration method
Since the S-SLE method is derived from the previous one-stage SLE method, all the merits of SLE are fully inherited, and the limitations of SLE have been solved or at least alleviated. First, by using the mesh-mapping and sequential maximin sampling algorithms, S-SLE is capable of generating sequential samples with good projective and space-filling properties. Secondly, the first sequential point is identified through local enumeration instead of random selection, which eliminates the effect of the first random point on the sampling quality. Thirdly, instead of simply designating sequential samples at the grid centres, a subsampling process using the deterministic TPLHD method is performed in the potential grids to find higher quality potential points. Moreover, S-SLE is a deterministic sequential sampling method, which always produces the same sequential samples set for a given existing samples set and number of sequential samples. Finally, because of its favourable flexibility, S-SLE needs to produce only a small number of new samples in the region of interest at each iteration, even for high-dimensional problems, and can then gradually increase the samples via a multiple-stage sampling pattern. Figure 4(a) and (b) illustrate the 20 samples generated from a two-stage sampling process using SLE and S-SLE, respectively. For both methods, the same 10 existing samples were generated by the MATLAB lhsdesign function using the 'maximin' criterion within 200 iterations. Note that the same setting of lhsdesign is also used in the following sections. It is clear that the results for S-SLE have better space-filling and projective properties than those for SLE.

Comparative studies on sampling performance
In this section, the S-SLE method was compared with several one-stage OLHD methods and sequential maximin LHD methods through a number of sequential sampling experiments.

Comparison with one-stage optimal Latin hypercube design methods
A few experiments with different dimensionalities (i.e. 2D, 4D and 6D) were carried out for comparison. For each experiment, it was assumed that 20 points already existed in the design space and 20 sequential samples needed to be generated. The lhsdesign function was used to produce 10 different sets of 20 existing samples for all the experiments. Based on the same initial samples, sequential points were produced using lhsdesign, ESEA (Jin, Chen, and Sudjianto 2005), TPLHD (Viana, Venter, and Balabanov 2010), SLE (Zhu et al. 2012) and the proposed S-SLE. In this work, the online MATLAB implementations of TPLHD and ESEA developed by Viana (2011) were used. In the S-SLE method, the mesh density parameter c was set to 5 and the number of TPLHD candidate points s was set to 500; these values were also used for the following experiments. To reduce random variations, each LHD method was consecutively executed 10 times on all the experiments based on the same initial lhsdesign points. All the experiments in this article were conducted in MATLAB® release 2012a, installed on a desktop computer equipped with an Intel Core 2 Duo 2.93 GHz CPU and 2 GB of RAM. After normalizing the samples to [0, 1], a bar plot of the mean values of the maximin distance criterion (i.e. d_min) was produced, as shown in Figure 5.
As can be seen from Figure 5, the proposed S-SLE outperforms all the other one-stage OLHD methods in terms of the d_min space-filling uniformity criterion.

Comparison with sequential maximin Latin hypercube design method
To illustrate the performance of the proposed method, several existing sequential maximin LHD methods were involved in the comparison, including Quasi-LHD (Xiong et al. 2009), MIPT (Crombecq, Laermans, and Dhaene 2011) and MCSR (Liu et al. 2015).
Several criteria have been developed to evaluate the space-filling uniformity of samples. Besides the d_min criterion, three other widely used criteria were considered: φ_p, the potential energy U, and the centred L2 discrepancy criterion CL_2 (Jin, Chen, and Sudjianto 2005; Viana, Venter, and Balabanov 2010). In this study, the positive integer p in the φ_p criterion is set to 50. Note that a larger d_min or smaller values of φ_p, U and CL_2 indicate better space-filling properties. Moreover, two projective metrics (i.e. μ_dmin and μ_var) used in Quasi-LHD (Xiong et al. 2009) were also used to compare the projective property. A larger μ_dmin and a smaller μ_var indicate better projective properties. The formulae of μ_dmin and μ_var can be found in Xiong et al. (2009).
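For reference, the two distance-based criteria can be computed as in the following hypothetical Python sketch (function names are illustrative): d_min as the smallest pairwise distance, and the Morris-Mitchell criterion in its standard form φ_p = (Σ_j d_j^(−p))^(1/p):

```python
import numpy as np

def dmin_criterion(X):
    """Maximin criterion d_min: the smallest pairwise distance (larger is better)."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return d[np.triu_indices(len(X), k=1)].min()

def phi_p(X, p=50):
    """Morris-Mitchell criterion phi_p = (sum_j d_j**(-p))**(1/p);
    smaller is better, with p = 50 as used in the text."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    pairwise = d[np.triu_indices(len(X), k=1)]
    return float(np.sum(pairwise ** (-float(p))) ** (1.0 / p))
```

For the four corners of the unit square, for example, d_min = 1; as p grows, φ_p is increasingly dominated by the smallest pairwise distance, which is why it approximates the maximin ranking.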
Comparisons between the S-SLE method and the other three sequential LHD methods were performed for 2D, 3D and 6D problems in terms of the aforementioned criteria. For all the problems, it was assumed that 10 initial points existed and 20 sequential samples needed to be generated using S-SLE, Quasi-LHD, MIPT and MCSR. Based on the same lhsdesign initial sample sets, the sequential sampling processes were performed 10 times to record the average values of the various criteria, as shown in Table 1 (MIPT = mc-intersite-proj-th; MCSR = Monte Carlo method and space-reduction strategy; S-SLE = sequential-successive local enumeration). It is worth noting that the S-SLE method generates all the sequential samples in one stage, while the other three sequential sampling methods proceed in a one-by-one fashion.
As shown in Table 1, Quasi-LHD generally performs worse in terms of the space-filling criteria than the MCSR and S-SLE methods. For the MIPT method, its results for the 2D and 3D problems are better than those of Quasi-LHD but slightly worse than those of MCSR and S-SLE; however, the results of the MIPT method for the 6D problem show the poorest quality. The results in Table 1 reveal that MCSR and S-SLE generally outperform the other two competitors. For the 2D and 3D problems, S-SLE shows the best space-filling performance; for the 6D problem, however, MCSR is superior to S-SLE. Moreover, comparing the values of μ_dmin and μ_var, the projective uniformity of the S-SLE samples is better than that of the other three sequential LHD methods. In terms of sequential sampling quality, the comparison results thus demonstrate the effectiveness of the proposed S-SLE method for sequential sampling: it clearly outperforms the Quasi-LHD and MIPT methods and shows comparable performance with MCSR. Figure 6 shows the 2D sequential sampling results of Quasi-LHD, MIPT and MCSR.
In addition, the sequential sampling efficiency of the four methods was compared, measured by the CPU time consumed to generate the sequential samples, as listed in the last column of Table 1. For the low-dimensional problems, sequential sampling with S-SLE and MCSR is remarkably faster than with Quasi-LHD. However, as the dimensionality increases, the growing number of enumeration candidates degrades the sampling efficiency of S-SLE. For the 6D problem, the CPU time of the S-SLE method is much larger than that of the other three methods.
Unlike the deterministic S-SLE, for the same existing samples, different runs of the other three sequential LHD methods can produce varying results with fluctuating quality. The randomness of the Quasi-LHD method is caused by the GA optimization process, and the randomness of MCSR and MIPT is caused by the Monte Carlo approach. Taking the 2D and 4D sequential sampling experiments as examples, based on the 20 identical initial samples, the four methods were each performed 10 times to produce 20 new samples. Figure 7 shows the box plots of the d_min values of the sampling results obtained by S-SLE, Quasi-LHD, MIPT and MCSR. As can be seen from Figure 7, S-SLE does not produce any variation in the space-filling property. In contrast, the results of the other three methods show different degrees of variation for both the 2D and 4D experiments, and some poor results of Quasi-LHD and MIPT are clearly inferior to those of the S-SLE samples. Note that although the variations in MCSR are much smaller, this method still shows a considerable likelihood of producing poorer results compared with the constant results of S-SLE. Moreover, the d_min values of the S-SLE samples are larger than the median values of the other three methods, which demonstrates that the space-filling performance of the S-SLE samples is better than more than 50% of the other three methods' results. In sequential sampling applications, the uncertainty of Quasi-LHD, MIPT and MCSR may produce poor results, while S-SLE guarantees the generation of sequential samples of favourable quality.
It is worth noting that the original Quasi-LHD, MIPT and MCSR can also be modified to perform sequential sampling in a deterministic fashion. For example, Quasi-LHD can employ a deterministic global search algorithm (e.g. DIRECT) to generate sequential points, and MIPT and MCSR can also use deterministic sampling methods (e.g. TPLHD) to generate candidates. In future work, the authors will compare S-SLE with the deterministic versions of Quasi-LHD, MIPT and MCSR in terms of space-filling and projective performance.
The comparison results with Quasi-LHD, MIPT and MCSR reveal that the proposed S-SLE method can effectively generate sequential samples with favourable space-filling and projective properties. S-SLE is more efficient for low-dimensional problems, but its sampling efficiency for high-dimensional applications still needs to be improved. The deterministic sampling behaviour of S-SLE is another advantage over the other methods. In addition to the two-stage sequential sampling applications presented in previous sections, as a sequential maximin LHD method, S-SLE is flexible and can thus support multiple-stage sequential sampling. Thus, S-SLE can generate high-dimensional samples in multiple stages, which improves its efficiency without greatly decreasing the sampling quality.

Benefits of sequential-successive local enumeration to sequential metamodel-based optimization
In this section, the sequential radial basis function (SRBF) method using the improved significant sampling space (ISSS) method (a specific move-limit approach) (Peng et al. 2014) was employed to demonstrate the benefits of the proposed S-SLE for sequential metamodel-based optimization. During SRBF optimization, the significant sampling space (SSS), whose varying size is identified according to the currently acquired knowledge of the underlying optimization problem, moves towards the actual global optimum, and inside each SSS certain existing points may be located at arbitrary positions. A more detailed description of SRBF can be found in the literature (Peng et al. 2014). The original SRBF employs the lhsdesign function to generate initial and sequential samples in the renewed SSS.
First, the SRBF using the S-SLE method for sequential sampling is compared with the SRBF with lhsdesign and the SLE method. Then, several versions of SRBF using various sequential LHD methods (including Quasi-LHD, MIPT, MCSR and S-SLE) for sequential sampling were developed for comparison. Several well-known benchmark problems, as formulated in Appendix A (see Supplementary material), were solved using the different versions of SRBF. For the 2D problems, the numbers of initial and sequential samples are 12 and 6, respectively, while for higher dimensional (n > 2) problems, the numbers of initial and sequential samples are 20 and 2n. The convergence tolerance and constraint violation tolerance are set to 1e-4 and 1e-2, respectively. For all the problems, 10 runs of each SRBF were performed, and the average values of the optimal solutions (f*) and numbers of function evaluations (Nfe) are presented in Tables 2 and 3 to compare the overall performance. As can be seen in Table 2, the average optimal solutions of SRBF incorporated with the S-SLE method were closer to the theoretical global optima, especially for the Peak (PK), Rastrigin (RS), Hartman (HN) and internal combustion engine design (ICED) (Wagner and Papalambros 1991) problems. Furthermore, for all the problems except SUR-T1-14 (HD1), SRBF with S-SLE used far fewer Nfe than the SRBFs using lhsdesign and SLE. Note that although SRBF with lhsdesign used fewer Nfe to solve the HD1 problem, its solution is inferior to that of SRBF with S-SLE.
The comparison results for the various SRBFs with different sequential maximin LHD methods are shown in Table 3. SRBFs with sequential LHD methods converge to better solutions than SRBFs using one-stage LHD methods, and SRBF with S-SLE outperforms the other competitors in terms of convergence performance. Moreover, SRBF incorporating S-SLE requires the fewest Nfe for all the problems, which demonstrates that the S-SLE method improves optimization efficiency beyond the other three sequential LHD methods. The favourable space-filling and projective performance of S-SLE means that SRBF incorporating S-SLE has the best convergence capability and the highest efficiency. Box plots of f* and Nfe for the test problems obtained by the SRBFs using the four sequential LHD methods are shown in Figures B1 and B2 (Appendix A), respectively. As can be seen from Figures B1 and B2, the variations in f* and Nfe obtained by SRBF using S-SLE were also smaller than those from the SRBFs using the other sequential LHD methods. These smaller variations are due to the unique deterministic sampling feature of the S-SLE method; thus, S-SLE also improves the robustness of metamodel-based optimization. Hence, S-SLE has been demonstrated to improve the global convergence capability, efficiency and robustness of sequential metamodel-based optimization algorithms, since more accurate metamodels can be constructed from S-SLE sequential samples with better space-filling and projective properties. As pointed out by Jones et al. (1998), matrix ill-condition or even singularity may occur when newly added samples are too close to existing ones, which may result in poor accuracy of the updated metamodel and even optimization failure.

Table 2. Optimization results of sequential radial basis functions (SRBFs) using sequential-successive local enumeration (S-SLE) and one-stage Latin hypercube design (LHD) methods.
Therefore, it is crucial for MBDO algorithms to prevent this situation. The proposed S-SLE has a further merit in avoiding matrix ill-condition in metamodelling. To illustrate this, seven benchmark problems were solved by several SRBF variants using different LHD methods (including lhsdesign, SLE, TPLHD, Quasi-LHD, MIPT, MCSR and S-SLE) for sequential sampling; each variant was run 100 times on each problem. Table 4 summarizes the numbers of occurrences of matrix ill-condition for the various LHD methods. Note that in this work a matrix is regarded as ill-conditioned if its reciprocal condition number is smaller than 1e-15. As can be seen from Table 4, the use of S-SLE and the other sequential LHD methods significantly reduced the frequency of matrix ill-condition occurring in SRBF optimization compared with the one-stage OLHD methods. In fact, the S-SLE and MCSR methods again show slightly better performance than Quasi-LHD and MIPT in avoiding matrix ill-condition. Note that if the region of interest becomes too small, sequential maximin LHD methods are still likely to produce extremely crowded samples, which may finally cause matrix ill-condition; this is why a single occurrence of matrix ill-condition was found in SRBF optimization using S-SLE. Therefore, move-limit-based MBDO algorithms should also restrict the minimum size of the region of interest to avoid this numerical difficulty. Moreover, owing to its worse space-filling performance, Quasi-LHD yielded a higher frequency of matrix ill-condition than the other three sequential LHD methods.

Table 4. Summary of occurrence numbers of matrix ill-condition in sequential radial basis function (SRBF) optimization.
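The ill-condition criterion above can be checked directly from the singular values of the interpolation matrix. The following minimal sketch assumes a Gaussian RBF kernel with unit width (the kernel and settings used in SRBF may differ); it shows how nearly coincident samples drive the reciprocal condition number below the 1e-15 threshold:

```python
import numpy as np

def rbf_matrix(X, width=1.0):
    """Gaussian RBF interpolation matrix: Phi_ij = exp(-||x_i - x_j||^2 / width^2)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / width ** 2)

def reciprocal_condition(A):
    """Reciprocal condition number: ratio of smallest to largest singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    return s[-1] / s[0]

ILL_THRESHOLD = 1e-15  # criterion used in this work

# Two well-separated samples give a comfortably invertible matrix.
rc_good = reciprocal_condition(rbf_matrix(np.array([[0.0], [0.5]])))

# Two nearly coincident samples make the matrix numerically singular.
rc_bad = reciprocal_condition(rbf_matrix(np.array([[0.0], [1e-9]])))

print(rc_good > ILL_THRESHOLD)  # True: no ill-condition
print(rc_bad < ILL_THRESHOLD)   # True: flagged as ill-conditioned
```

Because the space-filling maximin criterion keeps new samples away from existing ones, sequential maximin LHD methods rarely produce the crowded sample sets that trigger this check.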

Conclusions
In this article, a novel deterministic sequential maximin LHD method (i.e. S-SLE) was proposed for metamodel-based optimization. The favourable projective property of the new samples is achieved by the proposed mesh-mapping algorithm. To pursue better space-filling uniformity, S-SLE employs successive local enumerations to determine sequential samples by maximizing the minimal distance between new samples and existing points. Comparative studies with several existing LHD methods were carried out to validate the performance of S-SLE, and the results demonstrate its appealing merits. First, the sequential samples generated by S-SLE have better space-filling and projective properties: in generating high-quality sequential samples, the overall performance of S-SLE is better than those of Quasi-LHD and MIPT, and comparable with that of MCSR. Secondly, as a deterministic sequential sampling method, for given existing points S-SLE always produces an identical sequential sample set, unaffected by any stochastic factors. In addition, the flexible S-SLE is naturally applicable to multiple-stage sequential sampling applications. Finally, the proposed S-SLE evidently improves the overall performance of sequential metamodel-based optimization and reduces the frequency of numerical difficulties in metamodelling caused by overcrowded samples. However, a major limitation of S-SLE was also found in this work: as dimensionality grows, the local enumeration process becomes increasingly computationally expensive, which significantly decreases the sampling efficiency of the S-SLE method for high-dimensional applications. In future work, more efficient integer programming algorithms will be tested to perform the local search that maximizes the minimal distance between samples.
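The maximin criterion at the core of S-SLE can be illustrated with a much-simplified sketch. The code below performs a brute-force scan over an illustrative candidate grid rather than the mesh-mapping and successive local enumeration of the actual S-SLE, but the selection rule, i.e. maximizing the minimal distance from the new point to all existing points, is the same:

```python
import numpy as np

def maximin_next_point(existing, candidates):
    """Among candidate locations, pick the one that maximizes the
    minimal Euclidean distance to all existing points (the maximin
    criterion; S-SLE searches candidates by local enumeration on a
    hyper-chessboard grid instead of this brute-force scan)."""
    d = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1)
    return candidates[np.argmax(d.min(axis=1))]

# Two existing points on the diagonal of the unit square.
existing = np.array([[0.1, 0.1], [0.9, 0.9]])

# Illustrative candidate grid (11 x 11 points over the unit square).
g = np.linspace(0.0, 1.0, 11)
candidates = np.array([[a, b] for a in g for b in g])

best = maximin_next_point(existing, candidates)
# The maximin point lands in an empty anti-diagonal corner, (0, 1) or (1, 0).
```

Being deterministic, this selection always returns the same point for the same existing set and candidate grid, which is the source of the repeatability noted above.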

Disclosure statement
No potential conflict of interest was reported by the authors.