Modeling an Augmented Lagrangian for Blackbox Constrained Optimization



Introduction
The area of mathematical programming has produced efficient algorithms for nonlinear optimization, most of which have provable convergence properties. They include algorithms for optimizing under constraints and for handling so-called blackbox functions, where evaluation requires running an opaque computer code revealing little about the functional form of the objective and/or constraints. Many modern optimization approaches for blackbox problems converge without derivative information and require only weak regularity conditions. Since their search is focused locally, however, only local solutions are guaranteed.
Statistical approaches to blackbox optimization have the potential to offer more global scope. Methods based on Gaussian process (GP) surrogates and expected improvement (EI, Jones et al., 1998) enjoy global convergence properties and compare favorably with classical alternatives when objective evaluations are expensive, simulated by (noisy) Monte Carlo (Picheny et al., 2013), or when there are many local optima. In more conventional contexts, however, nonstatistical approaches are usually preferred. Global search is slower than local search; hence, for easier problems, the statistical methods underperform. Additionally, statistical methods are more limited in their ability to handle constraints. Here, we explore a hybrid approach that pairs a global statistical perspective with a classical augmented Lagrangian localization technique for accommodating constraints.
We consider constrained optimization problems of the form

min { f(x) : c(x) ≤ 0, x ∈ B },   (1)

where f : R^d → R denotes a scalar-valued objective function, c : R^d → R^m denotes a vector of constraint functions, and B ⊂ R^d denotes a known, bounded, and convex region. Here we take B = {x ∈ R^d : l ≤ x ≤ u} to be a hyperrectangle, but it could also include other constraints known in advance. Throughout, we will assume that a solution of (1) exists; in particular, this means that the feasible region {x ∈ R^d : c(x) ≤ 0} ∩ B is nonempty.
In (1), we note the clear distinction made between the known bound constraints B and the constraints c_1(x), ..., c_m(x), whose functional forms may not be known.
The abstract problem in (1) is challenging when the constraints c are nonlinear, and even more difficult when evaluation of at least one of f and c requires blackbox simulation. In Section 2 we review local search algorithms from the numerical optimization literature that allow for blackbox f and c. Until very recently the statistical literature has, by contrast, only offered solutions tailored to certain contexts. For example, Schonlau et al. (1998) adapted EI for blackbox f and known c; Gramacy and Lee (2011) considered blackbox f and blackbox c ∈ {0, 1}; and Williams et al. (2010) considered blackbox f and c coupled by an integral operator. These methods work well in their chosen contexts but are limited in scope.
The current state of affairs is unfortunate because statistical methods have much to offer. Beyond searching more globally, they can offer robustness, natively facilitate uncertainty quantification, and enjoy a near monopoly in the noisy observation case. In many real-world optimization problems, handling constraints presents the biggest challenge; many have a simple, known objective f (e.g., linear, such as total cost f(x) = Σ_i x_i) but multiple complicated, simulation-based constraints (e.g., indicating if expenditures so-allocated meet policy/physical requirements). And yet, to our knowledge, this important case is unexplored in the statistical literature. In Section 5 we present a hydrology problem meeting that description: despite having a simple linear objective function, learning a highly nonconvex feasible region complicates the search for a global minimum.
One way forward is to force the problem (1) into an existing statistical framework.For example, one could treat c(x) as binary (Gramacy and Lee, 2011), ignoring information about the distance to the boundary separating feasible ("valid") and infeasible ("invalid") regions.
To more fully utilize all available information, we propose a statistical approach based on the augmented Lagrangian (AL, e.g., Bertsekas, 1982), a tool from mathematical programming that converts a problem with general constraints into a sequence of unconstrained (or simply constrained) problems.
For the unconstrained problem(s) we develop novel surrogate modeling and EI techniques tailored to the form of the AL, similar to the way that Parr et al. (2012) deploy EI on a single conversion from constrained to unconstrained problems. Borrowing the AL setup, and utilizing an appropriate sequence of unconstrained problems, weakens the burden on the statistical optimizer, deployed in this context as a subroutine, and leverages convergence guarantees from the mathematical programming literature. Under specific conditions we can derive closed-form expressions, like EI, to guide the optimization subproblems, and we explore numerical/Monte Carlo alternatives for other cases. Importantly, our use of Monte Carlo is quite unlike optimization by stochastic search, e.g., simulated annealing (SA, Kirkpatrick et al., 1983). SA, and other methods utilizing inhomogeneous Markov chains, offer global convergence guarantees asymptotically. However, in our experience, and indeed in our empirical studies herein, such schemes are less than ideal when expensive blackbox evaluation severely limits the number of simulations that can be performed.
Although the approach we advocate is general, for specificity in this paper we focus on blackbox optimization problems for which the objective f is known while the constraints c require simulation. This setting all but rules out statistical comparators that emphasize modeling f and treat c as an inconvenience. Throughout, we note how our approach can be extended to unknown f by pairing it with standard surrogate modeling techniques.
The remainder of the paper is organized as follows. We first describe a synthetic problem that introduces the challenges in this area. Then, in Section 2, we review statistical optimization and introduce the AL framework for handling constraints. Section 3 contains the bulk of our methodological contribution, combining statistical surrogates with the AL. Section 4 describes implementation details and provides results for our toy example. Section 5 provides a similar comparison for a challenging real-data hydrology problem. We conclude in Section 6 with a discussion focused on the potential for further extension.
A toy problem. Consider the following test problem of the form (1), with a (known) linear objective in two variables, f(x) = x_1 + x_2, bounding box B = [0, 1]², and two (blackbox) nonlinear constraints given by

c_1(x) = (3/2) − x_1 − 2x_2 − (1/2) sin(2π(x_1² − 2x_2)) ≤ 0,
c_2(x) = x_1² + x_2² − 3/2 ≤ 0.

Figure 1 shows the feasible region and the three local optima, with x_A being the unique global minimizer. We note that at each of these solutions, the second constraint is strictly satisfied and the first constraint holds with equality. For x_C, the lower bound on x_1 is also binding, because if this bound were not present, x_C would not be a local solution. The second constraint may seem uninteresting, but it reminds us that the solution may not lie on every constraint boundary, and it thereby presents a challenge to methods designed to search that boundary in a blackbox setting. This toy problem has several characteristics in common with the real-data hydrology problem detailed in Section 5. Notably, both problems have a linear objective and highly nonlinear, nonconvex constraint boundaries.

Elements of hybrid optimization
Here we review the elements we hybridize: response surface models, expected improvement, and the augmented Lagrangian. Implementation details are deferred to Section 4.

Surrogate modeling framework for optimization
Examples of statistical models guiding optimization date back at least to Mockus et al. (1978). The technique has since evolved, but the basic idea still involves training a flexible regression model f_n on input-output pairs {x^(i), y^(i)}_{i=1}^n and using aspects of f_n to help choose x^(n+1). One option is to search the mean of a predictive surface f_n(x) derived from f_n, serving as a surrogate for the true f(x), for minima. Gaussian process (GP) regression models provide attractive f_n for a deterministic objective f, since GPs produce highly accurate, conditionally normal predictive distributions and can interpolate if desired (see, e.g., Santner et al., 2003). This so-called surrogate modeling framework for optimization is focused primarily on the objective f. Extensions have been made to accommodate constraints (e.g., Audet et al., 2000), often by restricting search to the valid region.

Expected improvement
In initial usage, outlined above, the full statistical potential of f_n remained untapped: estimated uncertainties, a hallmark of any statistical endeavor, captured in the predictive distributions were not being used. Jones et al. (1998) changed this state of affairs by recognizing that the conditionally normal predictive equations provided by a GP surrogate f_n(x), completely described by mean function µ_n(x) and variance function σ²_n(x), could be used together to balance exploitation and exploration toward a more efficient global search scheme. They defined an improvement statistic I(x) = max{0, f^n_min − Y(x)}, where f^n_min is the minimum among the n y-values seen so far, and Y(x) ∼ f_n(x) is a random variable. The improvement assigns large values to inputs x where Y(x) is likely below f^n_min. Jones et al. showed that the expected improvement (EI) could be calculated analytically in the Gaussian case:

E{I(x)} = (f^n_min − µ_n(x)) Φ((f^n_min − µ_n(x))/σ_n(x)) + σ_n(x) φ((f^n_min − µ_n(x))/σ_n(x)),   (2)

where Φ and φ are the standard normal cdf and pdf, respectively. The equation reveals a balance between exploitation (µ_n(x) under f^n_min) and exploration (σ_n(x)). Leveraging (2), Jones et al. proposed an efficient global optimization (EGO) scheme using branch-and-bound to maximize E{I(x)}. In a later paper, Schonlau et al. (1998) provided an analytical form for a generalized EI based on a powered-up improvement I^g(x) = |f^n_min − Y(x)|^g I_{Y(x) < f^n_min}. Special cases of E{I^g(x)} for g = 0, 1, 2 lead to searches via Pr(Y(x) < f^n_min), the original EI, and the hybrid E{I(x)}² + Var[I(x)], respectively. Under weak regularity conditions, search algorithms based on EI converge to the global optimum. EGO, which specifically pairs GP surrogates with EI, can be seen as one example of a wider family of routines. For example, radial basis function surrogates have been used with similar success in the context of local search (Wild and Shoemaker, 2013). Although weak from a technical viewpoint, the regularity conditions required of the computer model are rarely reasonable in practice. They ignore potential feedback loops between surface fits, predictive distributions, improvement calculations, and search; in practice, these can pathologically slow convergence and/or lead to local rather than global solutions. Practitioners instead increasingly prefer hybrids between global EI and local search (e.g., Gramacy and Le Digabel, 2011).
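The closed form in (2) is simple to implement. The following Python sketch (ours, for illustration; the function name and interface are not from the EGO literature) evaluates EI from a GP's predictive mean and standard deviation at a single input:

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI, Eq. (2), for a Gaussian predictive distribution.

    mu, sigma: GP predictive mean and standard deviation at x.
    f_min: best objective value observed so far.
    """
    if sigma <= 0.0:                 # degenerate (deterministic) prediction
        return max(0.0, f_min - mu)
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return (f_min - mu) * Phi + sigma * phi
```

Note how the two summands separate exploitation (the first term is large when µ_n(x) is well below f^n_min) from exploration (the second term is large when σ_n(x) is large).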

Augmented Lagrangian framework
Augmented Lagrangian methods (see, e.g., Nocedal and Wright, 2006) are a class of algorithms for constrained nonlinear optimization that enjoy favorable theoretical properties for finding local solutions from arbitrary starting points. The main device used by these methods is the augmented Lagrangian, which, for the inequality constrained problem (1), is given by

L_A(x; λ, ρ) = f(x) + λ⊤c(x) + (1/(2ρ)) Σ_{j=1}^m max{0, c_j(x)}²,   (3)

where ρ > 0 is a penalty parameter and λ ∈ R^m_+ serves the role of Lagrange multiplier. The first two terms in (3) correspond to the Lagrangian, which is the merit function that defines stationarity for constrained optimization problems. Without the second term, (3) reduces to an additive penalty method (APM) approach to constrained optimization. Unless considerable care is taken in choosing the scale of penalization, however, APMs can introduce ill-conditioning in the resulting subproblems.
We focus on AL-based methods in which the original nonlinearly constrained problem is transformed into a sequence of nonlinear problems where only the bound constraints B are imposed. In particular, given the current values of the penalty parameter, ρ^{k−1}, and approximate Lagrange multipliers, λ^{k−1}, one approximately solves the subproblem

min { L_A(x; λ^{k−1}, ρ^{k−1}) : x ∈ B }.   (4)

Given a candidate solution x^k, the penalty parameter and approximate Lagrange multipliers are updated and the process repeats. Algorithm 1 gives a specific form of these updates. Functions f and c are evaluated only when solving (4), comprising the "inner loop" [step 2].
We note that termination conditions have not been explicitly provided in Algorithm 1. In our setting of blackbox optimization, termination is dictated primarily by a user's computational budget. Our empirical comparisons in Sections 4-5 involve tracking the best (valid) value of the objective over increasing budgets determined by the number of evaluations of the blackbox (i.e., the cumulative number of inner iterations). Outside that context, however, one could stop the outer loop when all constraints are sufficiently satisfied and the (approximated) gradient of the Lagrangian is sufficiently small; for example, given thresholds η_1, η_2 ≥ 0, one could stop when

‖max{c(x^k), 0}‖ ≤ η_1   and   ‖∇f(x^k) + Σ_{j=1}^m λ_j^k ∇c_j(x^k)‖ ≤ η_2.

Determining convergence within the inner loop [step 2] is solver dependent; common solvers in our motivating context are discussed below. The theory for global convergence of the overall AL scheme is forgiving about the criteria used to end each local search.
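To make the outer loop concrete, here is a minimal Python sketch of AL iteration. It is not a transcription of Algorithm 1 (which is not reproduced in this excerpt); the multiplier update λ_j ← max{0, λ_j + c_j(x)/ρ} and the halving of ρ when the constraint violation fails to improve are one standard choice from the AL literature, and `inner_solve` stands in for any subproblem solver:

```python
import random

def auglag(f, c, bounds, inner_solve, outer_iters=10, rho0=0.5):
    """Sketch of an AL outer loop for min f(x) s.t. c_j(x) <= 0, x in B.

    f: objective; c: returns the list of constraint values at x.
    inner_solve(L, bounds): approximately minimizes the AL subproblem (4).
    The lambda/rho updates below are one standard choice, not necessarily
    the exact rules of Algorithm 1.
    """
    x = [random.uniform(lo, hi) for lo, hi in bounds]  # random start in B
    lam = [0.0] * len(c(x))                            # lambda^0 = 0
    rho = rho0
    viol = float("inf")
    for _ in range(outer_iters):
        def L(x, lam=lam, rho=rho):                    # augmented Lagrangian (3)
            cx = c(x)
            return (f(x) + sum(l * v for l, v in zip(lam, cx))
                    + sum(max(0.0, v) ** 2 for v in cx) / (2.0 * rho))
        x = inner_solve(L, bounds)                     # "inner loop" [step 2]
        cx = c(x)
        lam = [max(0.0, l + v / rho) for l, v in zip(lam, cx)]  # multiplier update
        new_viol = max(max(0.0, v) for v in cx)
        if new_viol > 0 and new_viol >= viol:          # no progress: tighten penalty
            rho /= 2.0
        viol = new_viol
    return x
```

With any reasonable inner solver (even a crude grid search), this drives the iterates toward feasibility while the multipliers account for the active constraints.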

Derivative-free augmented Lagrangian methods
The inner loop [step 2] of Algorithm 1 can accommodate a host of methods for solving the simply constrained subproblem (4). Solvers can leverage derivatives of the objective and/or constraint functions when available, or be derivative-free otherwise. We specifically focus on the derivative-free case because this subsumes blackbox optimization (see, e.g., Conn et al., 2009). In our comparisons in Sections 4-5 we consider two benchmark solvers for the inner loop. We now briefly introduce how these solvers can be situated within the AL framework; software/implementation details and convergence detection are provided in the supplementary material, which also contains the details of three additional comparators that do not leverage the AL.
Direct Search: Loosely, direct search involves probing the objective at stencils centered on the current best input value. The outputs obtained on the stencil determine the placement and size of the next stencil. In particular, we consider the mesh adaptive direct search (MADS) algorithm (Audet and Dennis, 2006). MADS is a directional direct-search method that uses dense sets of directions and generates trial points on a spatial discretization called a mesh. The most important MADS parameters are the initial and minimal poll sizes, which define the limits for the poll size parameter determining the stencil size, and the maximum mesh index, which limits poll size reductions after a failed iteration (when a stencil does not find an improved solution). In the context of Algorithm 1 it makes sense to allow the initial poll size parameter to take a software-recommended/default value but to set the maximum mesh index to k − 1, prescribing a finer subproblem as outer iterations progress.
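For intuition only, the poll-and-shrink pattern behind directional direct search can be sketched in a few lines of Python. This toy version polls a plus-shaped coordinate stencil and halves the step after a failed poll; MADS itself is considerably more sophisticated (dense direction sets, an underlying mesh, and the poll size parameters described above), so this is a stand-in, not MADS:

```python
def compass_search(f, x, step=0.25, min_step=1e-6, max_evals=10000):
    """Toy directional direct search: poll a coordinate stencil around the
    incumbent; move on the first improvement, shrink the stencil on a
    failed poll.  Illustrates the pattern behind MADS, not MADS itself."""
    x = list(x)
    fx, evals = f(x), 1
    while step > min_step and evals < max_evals:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s                      # one stencil point
                fy = f(y)
                evals += 1
                if fy < fx:                    # successful poll: move
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            step /= 2.0                        # failed poll: refine stencil
    return x, fx
```

On a smooth problem the stencil shrinks around a local solution; the final accuracy is on the order of the last step size, which is why the (minimal) poll size parameters matter in practice.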
Model-based: These are closest in spirit to the statistical methods we propose. Model-based optimization employs local approximation models, typically based on local polynomials (e.g., Conn et al., 2009) or nonlinear kernels such as radial basis functions (e.g., Wild and Shoemaker, 2013), which are related to GPs. Here we consider the trust-region-based method that was previously used as an AL inner solver by Kannan and Wild (2012). This method builds interpolating quadratic approximation models of the objective and constraints. The AL subproblem (4) is then approximately solved by locally solving a sequence of quadratics.

Statistical surrogate additive penalty methods
The methods above are not designed for global optimization, and it is hard to predict which local minimum they will ultimately converge to when several minima are present. Hybridizing with statistical surrogates offers the potential to improve this situation. The simplest approach involves deploying a statistical surrogate directly on the AL (3), but this has drawbacks. To circumvent these, we consider separately modeling the objective f and each constraint c_j. We then pursue options for using the surrogates to solve (4), either via the predictive mean or EI, which has an enlightening closed-form expression in a special case.

Surrogate modeling the augmented Lagrangian
Consider deploying GP regression-based surrogate modeling of the AL (3) in order to find x^k. In each iteration of the inner loop (step 2 of Algorithm 1), proceed as follows. Let n denote the total number of blackbox evaluations obtained throughout all previous "inner" and "outer" iterations, collected as (x^(1), f^(1), c^(1)), ..., (x^(n), f^(n), c^(n)). Then form y^(i) = L_A(x^(i); λ^{k−1}, ρ^{k−1}) via f^(i) and c^(i), and fit a GP surrogate to the n pairs {(x^(i), y^(i))}_{i=1}^n. Optimization can be guided by minimizing µ_n(x) in order to find x^(n+1), or via EI following Eq. (2) with Y(x) ≡ Y_n(x) ∼ N(µ_n(x), σ²_n(x)). Approximate convergence can be determined by various simple heuristics, from the number of iterations passing without actual improvement, to monitoring the maximal EI (Gramacy and Polson, 2011) over the trials.
At first glance this represents an attractive option, being modular and facilitating a global-local tradeoff. It is modular in the sense that standard software can be used for surrogate modeling and EI. It is global because the GP is trained on all data seen so far, and EI balances exploration and exploitation. It is local because, as the AL "outer" iterations progress, the (global) "inner" searches organically concentrate near valid regions.
Several drawbacks become apparent, however, upon considering the nature of the composite objective (3). For example, the y^(i) values, in their relationship with the x^(i)'s, are likely to exhibit behavior that requires nonstationary surrogate models, primarily because the final squared term in the AL amplifies the effects of c(x) away from the boundary with the valid region. Most out-of-the-box GP regression methods assume stationarity and will therefore be a poor match. A related challenge is the max in (3), which produces kinks near the boundary of the valid region, with regime-changing behavior across that boundary.
Modern GP methods accommodate nonstationarity (e.g., Schmidt and O'Hagan, 2003) and even regime-changing behavior (Gramacy and Lee, 2008). To our knowledge, however, only the latter option is paired with public software. That method leverages treed partitioning, whose divide-and-conquer approach can accommodate limited differentiability and stationarity challenges, but only if regime changes are roughly axis-aligned. Partitioning, however, does not parsimoniously address effects amplified quadratically in space. In fact, no part of the above scheme, whether surrogate modeling (via GPs or otherwise) or EI-search, acknowledges the known quadratic relationship between objective (f) and constraints (c). By treating the entire apparatus as a blackbox, it discards potentially useful information. Moreover, when the objective portion (f) is completely known, as in our motivating example(s), the fitting method (unless it accommodates a known mean) needlessly models a known quantity, which is inefficient (see, e.g., Kannan and Wild, 2012).

Separately modeling the pieces of the composite
Those shortcomings can be addressed by deploying surrogate models separately on the components of the AL, rather than wholly on the composite. With separate models, stationarity assumptions are less likely to be violated since modeling can commence on quantities prior to the problematic square and max operations. To clarify, here we take "separate" to mean independent, as that simplifies many matters, although extensions to correlated models may yield improvements. Under independence, surrogates f_n(x) for the objective and c_n(x) = (c_{n,1}(x), ..., c_{n,m}(x)) for the constraints provide distributions for Y^n_f(x) and Y^n_c(x) = (Y^n_{c_1}(x), ..., Y^n_{c_m}(x)), respectively. The n superscripts, which we drop below, serve here as a reminder that we propose to solve the "inner" AL subproblem (4) using all n data points seen so far. Samples from those distributions, obtained trivially via GP surrogates, are easily converted into samples from the composite, serving as a surrogate for L_A(x; λ, ρ):

Y(x) = Y_f(x) + λ⊤Y_c(x) + (1/(2ρ)) Σ_{j=1}^m max{0, Y_{c_j}(x)}².   (5)

When f is known, we can forgo calculating f_n(x) and swap in a deterministic f(x) for Y_f(x).
As in Section 3.1, there are several ways to choose new trial points using the composite distribution of the random variable(s) in (5), for example, by searching the predictive mean or EI. We first consider the predictive mean approach and defer EI to Section 3.3. We have

E{Y(x)} = E{Y_f(x)} + λ⊤E{Y_c(x)} + (1/(2ρ)) Σ_{j=1}^m E{max{0, Y_{c_j}(x)}²}.

The first two expectations are trivial under normal GP predictive equations, giving

E{Y(x)} = µ_f(x) + λ⊤µ_c(x) + (1/(2ρ)) Σ_{j=1}^m E{max{0, Y_{c_j}(x)}²},   (6)

via a vectorized µ_c = (µ_{c_1}, ..., µ_{c_m})⊤. An expression for the final term, which involves E{max(0, Y_{c_j}(x))²}, can be obtained by recognizing its argument as a powered improvement for −Y_{c_j}(x) over zero, that is, I_{−Y_{c_j}}(x) = max{0, 0 + Y_{c_j}(x)}. Since the power is 2, an expectation-variance relationship can be exploited to obtain

E{max{0, Y_{c_j}(x)}²} = (µ²_{c_j}(x) + σ²_{c_j}(x)) Φ(µ_{c_j}(x)/σ_{c_j}(x)) + µ_{c_j}(x) σ_{c_j}(x) φ(µ_{c_j}(x)/σ_{c_j}(x)),   (7)

by using a result from the generalized EI (Schonlau et al., 1998). Combining (6) and (7) completes the expression for E{Y(x)}. When f is known, simply replace µ_f with f.
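The second-moment term appearing in (7) is easy to get wrong, so it is worth checking numerically. The following Python sketch (ours; the function name is illustrative) implements the closed form for E{max(0, Y)²} with Y ∼ N(µ, σ²), the per-constraint quantity needed to complete E{Y(x)}:

```python
import math, random

def e_max0_sq(mu, sigma):
    """Closed form for E{max(0, Y)^2}, Y ~ N(mu, sigma^2), via the
    generalized-EI identity used in Eq. (7)."""
    z = mu / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # standard normal cdf
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return (mu * mu + sigma * sigma) * Phi + mu * sigma * phi
```

As a sanity check, at µ = 0 the formula gives σ²/2 (half of the second moment, by symmetry), and a Monte Carlo average of max(0, Y)² agrees to within sampling error.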

New expected improvement
The composite random variable Y(x) in Eq. (5) does not have a form that readily suggests a familiar distribution, for any reasonable choice of f_n and c_n (e.g., under a GP model), complicating analytic calculation of EI. A numerical approximation is straightforward by Monte Carlo. Assuming normal predictive equations, simply sample y^(t)_f(x) and y^(t)_{c_j}(x) from N(µ_f(x), σ²_f(x)) and N(µ_{c_j}(x), σ²_{c_j}(x)), respectively, and then average:

E{I_Y(x)} ≈ (1/T) Σ_{t=1}^T max{0, y^n_min − y^(t)(x)},  where  y^(t)(x) = y^(t)_f(x) + λ⊤y^(t)_c(x) + (1/(2ρ)) Σ_{j=1}^m max{0, y^(t)_{c_j}(x)}²,   (8)

and y^n_min = min_{i=1,...,n} {f^(i) + λ⊤c^(i) + (1/(2ρ)) Σ_{j=1}^m max{0, c_j^(i)}²} is the best value of the AL observed so far, given the current λ and ρ values. We find generally low Monte Carlo error, and hence very few samples (e.g., T = 100) suffice.
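A Python sketch of the Monte Carlo average in (8), for the known-f case with independent Gaussian constraint surrogates (ours; names and interface are illustrative):

```python
import math, random

def mc_ei_al(f_x, mu_c, sigma_c, lam, rho, y_min, T=1000, rng=random):
    """Monte Carlo EI, Eq. (8), for the composite AL surrogate (5) at one
    input x, with known objective value f_x and independent Gaussian
    surrogates N(mu_c[j], sigma_c[j]^2) for the constraints."""
    total = 0.0
    for _ in range(T):
        y = f_x
        for mu, sig, l in zip(mu_c, sigma_c, lam):
            yc = rng.gauss(mu, sig)                   # draw Y_{c_j}(x)
            y += l * yc + max(0.0, yc) ** 2 / (2.0 * rho)
        total += max(0.0, y_min - y)                  # improvement sample
    return total / T
```

With λ = 0 and a constraint that is negative almost surely, the composite collapses to f(x), so any x with f(x) ≥ y^n_min receives exactly zero EI, previewing the difficulty discussed next.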
However, challenges arise in exploring the EI surface over x ∈ X, since whole swaths of the input space emit numerically zero E{I_Y(x)}. When f is known, whereby Y_f(x) ≡ f(x), and when the outer loop is in later stages (large k), yielding smaller ρ^k, the portion of the input space yielding zero EI can become prohibitively large, complicating searches for improvement. The quadratic nature of the AL composite (5) causes Y to be bounded below for any Y_c-values under certain (λ, ρ), no matter how they are distributed.
To delve a little deeper, consider a single blackbox constraint c(x), a known objective f(x), and a slight twist on Eq. (5) in which a new composite Ỹ is defined by removing the max:

Ỹ(x) = f(x) + λ Y_c(x) + (1/(2ρ)) Y_c(x)².

In this special case, one can derive an analytical expression for the EI under a GP model for c. Let I_Ỹ = max{0, ỹ_min − Ỹ} be the improvement function for the new composite Ỹ, suppressing x to streamline notation. Calculating the EI involves the following integral, where c(y_c) represents the density c_n of Y_c, which for a GP is specified by some µ and σ²:

E{I_Ỹ} = ∫ max{0, ỹ_min − ỹ(y_c)} c(y_c) dy_c = ∫_θ (ỹ_min − ỹ(y_c)) c(y_c) dy_c,  where θ = {y_c : ỹ(y_c) < ỹ_min}.

Because ỹ is quadratic in y_c, the set θ is an interval whose endpoints follow from the quadratic formula, and the integral can be evaluated in closed form in terms of Φ and φ, yielding the analytic EI expression (9) referenced below.
Besides pointing to analytically zero EI values under (5), the above discussion suggests two ideas. First, avoiding x values leading to f(x) > y_min will boost search efficiency by avoiding zero-EI regions. Second, dropping the max in (5) may lead to efficiency gains in two ways: from analytic rather than Monte Carlo evaluation, and via a more targeted search when f is a known monotone function bounded below over the feasible region. In that case, a solution is known to lie on the boundary between valid and invalid regions. Dropping the max will submit large negative c(x)'s to a squared penalty, pushing the search away from the interior of the valid region and toward the boundary.
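To see the no-max route in action without reproducing the full closed form, one can exploit that ỹ(y_c) = f(x) + λy_c + y_c²/(2ρ) is quadratic in y_c: the region θ where ỹ < ỹ_min is the interval between the roots of a quadratic, and is empty (EI exactly zero) when the discriminant is nonpositive. The sketch below (ours, not Eq. (9) itself) evaluates the resulting one-dimensional integral by a midpoint rule rather than analytically:

```python
import math, random

def ei_tilde(f_x, lam, rho, mu, sigma, y_min, n=4000):
    """EI for the no-max composite Ytil = f(x) + lam*Yc + Yc^2/(2 rho),
    with Yc ~ N(mu, sigma^2), by 1-D quadrature over the interval theta
    where ytil < y_min.  A numerical sketch of the analytic route."""
    # Roots of y_min = f_x + lam*yc + yc^2/(2 rho), i.e., endpoints of theta.
    a, b, c0 = 1.0 / (2.0 * rho), lam, f_x - y_min
    disc = b * b - 4.0 * a * c0
    if disc <= 0:
        return 0.0                       # theta empty: EI identically zero
    lo = (-b - math.sqrt(disc)) / (2.0 * a)
    hi = (-b + math.sqrt(disc)) / (2.0 * a)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):                   # midpoint rule over theta
        yc = lo + (i + 0.5) * h
        ytil = f_x + lam * yc + yc * yc / (2.0 * rho)
        dens = math.exp(-0.5 * ((yc - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        total += (y_min - ytil) * dens * h
    return total
```

The early-exit when the discriminant is nonpositive makes the zero-EI phenomenon explicit: no realization of Y_c can bring Ỹ below ỹ_min.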
While the single-constraint, known-f formulation is too restrictive for most problems, some simple remedies are worthy of consideration, especially when Monte Carlo is not feasible. Extending to blackbox f, and modeling Y_f, is straightforward since Y_f(x) features linearly in Eq. (5). Extending to multiple constraints is much more challenging. One option is to reduce a multiple-constraint problem into a single one by estimating a single surrogate c_n(x) for an aggregated constraint function, say Σ_j Y_{c_j}(x). Some care is required because summing can result in information loss, which may be extreme if positive and negative Y_{c_j}(x) cancel valid and invalid values. A better approach would be to use Σ_j |Y_{c_j}(x)|, even though that may result in challenging kinks to model. This option, using absolute values, could lead to improvements when f is a known monotone function, exploiting that the solution is on the constraint boundary. However, it may introduce complications when the constraint set includes constraints that are not active/binding at the solution, which we discuss further in Section 6.

Implementation and illustration
In this section, we first provide implementation details for our proposed methods (Section 3), and then demonstrate how they fare on our motivating toy data (Section 1). Implementation details for our comparators (Section 2.4) are provided as supplementary material, which includes details for three further comparators used in our study: a MADS modification which handles constraints natively, simulated annealing, and an "asymmetric entropy" heuristic (Lindberg and Lee, 2015). All methods are initialized with λ^0 = (0, ..., 0)⊤ and ρ^0 = 1/2. Throughout, we randomize over the initial x^0 by choosing it uniformly in B.

Implementation for surrogate model-based comparators
Multiple variations were suggested in Section 3. We focus our comparative discussion here on those that performed best. To be clear, none of them performed poorly; but several are easily criticized, and those same methods are consistently dominated by their more sensible counterparts. In particular, we do not provide results for the simplistic approach of Section 3.1, which requires modeling a nonstationary composite AL, since that method is dominated by separate modeling of the constituent parts, as described in Section 3.2.
We entertain alternatives from Sections 3.2-3.3 that involve guiding the inner optimization with E{Y} and E{I_Y}, following Eqs. (7-8), respectively. We note here that results based on a Monte Carlo E{Y}, via the analog of (8) without the "max{0, y^n_min −" operation, and the analytical alternative (7) are indistinguishable up to Monte Carlo error when randomizing over x^0. Taking inspiration from the analytic EI derivation for the special case in Section 3.3, we entertain a variation on the numeric EI that discards the max term. We do not provide results based on the analytic expression (9), however, because doing so requires modeling compromises that hamper effective search. In total we report results for four variations, pairing one of E{Y} and E{I_Y} with the original AL (5) and a version obtained without the max, denoted by the acronyms EY, EI, EY-nomax, and EI-nomax, respectively.
Throughout, we treat f as known and model each Y_{c_j} with separate GPs initialized with ten random (space-filling) input-output pairs from B (i.e., the outer loop of Algorithm 1 starts with x^(1:10)). For fast updates and MLE calculations, when designs are augmented as Algorithm 1 progresses, we used updateGP and mleGP from the laGP package (Gramacy, 2014) for R. Each inner loop search in Algorithm 1 is based on a random set of 1,000 candidate x locations X_n. Spacing candidates uniformly in B is inefficient when f is a known linear function. Instead we consider random objective improving candidates (OICs), defined by X = {x : f(x) < f^{n*}_min}, where f^{n*}_min is the best value of the objective over the n* ≤ n valid points found so far. If n* = 0, then f^{n*}_min = ∞. A random set of candidates X_n is easy to populate by rejection sampling, even when X is small relative to B.
A nice feature of OICs is that a fixed number |X_n| organically pack more densely as improved f^{n*}_min are found. However, as progress slows in later iterations, the density will plateau, with two consequences: (1) impacting convergence diagnostics based on the candidates (like max E{I_Y}), and (2) causing the proportion of X_n whose EI is nonzero to dwindle. We address (1) by declaring approximate convergence, ending an inner loop search [step 2 of Algorithm 1], if ten trials pass without improving y^n_min. When guiding search with E{I_Y}, earlier approximate convergence [also ending step 2] is declared when max_{x∈B} E{I_Y(x)} < ε, for some tolerance ε > 0. Consequence (2) may be addressed by increasing |X_n| over time; however, we find it simpler to default to an E{Y} search if less than, say, 5% of X_n gives nonzero improvement. This recognizes that the gains from the exploratory features of EI are biggest early in the search, when the risk of being trapped in an inferior local mode is greatest.
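Rejection sampling for OICs amounts to a few lines. This Python sketch (illustrative only, not the laGP implementation) draws uniformly from B and keeps only objective-improving points:

```python
import random

def oic_candidates(f, bounds, f_min, n=1000, rng=random, max_tries=100000):
    """Draw objective-improving candidates (OICs): uniform draws from the
    hyperrectangle B, kept only if f(x) < f_min (rejection sampling)."""
    X = []
    tries = 0
    while len(X) < n and tries < max_tries:
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        tries += 1
        if f(x) < f_min:            # accept only improving candidates
            X.append(x)
    return X
```

Because acceptance is restricted to {x : f(x) < f_min}, a fixed-size candidate set automatically concentrates in a smaller region as the best valid objective value improves, which is exactly the packing behavior described above.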
While the above explains some of the salient details of our implementation, many specifics have been omitted for space considerations.For full transparency please see optim.auglag in the laGP package, implementing all variations considered here.Gramacy (2014), the package vignette, provides a worked-code illustration for the toy problem considered below.

Empirical results for the toy problem
Figure 2 summarizes the results of a Monte Carlo experiment for the toy problem described in Section 1. Each of 100 repetitions is initialized randomly in B = [0, 1]². The graph in the figure records the average of the best valid value of the objective over the iterations. The plotted data coincide with the numbers shown in the middle section of the accompanying table for the 25th, 50th, and final iteration. The other two sections show the 90% quantiles to give an indication of worst- and best-case behavior.
Figure 2 indicates that all variations on our methods eventually outperform all comparators in terms of both average and worst-case behavior. All methods find the right global minimum in five or more cases (5% results), but only the EI-based ones perform substantially better in the worst case (using the 95% numbers). In only one case out of 100 did EI not find the global minimum, whereas 15% of the model-based runs failed to find it (these runs finding other local solutions instead). Except for a brief time near iteration n = 50, and ignoring the first 20 iterations for which all methods perform similarly, EI-based comparators dominate their EY analogues. There is a period (n ≈ 35) where EI's average progress stalls temporarily. We observed that this usually marks a transition from primarily exploratory to primarily exploitative behavior. Toward the end of the trials, the methods based on dropping the max from Eq. (5) win out. Ignoring regions of the space giving large negative values of c(x) seems to help in these latter stages, but it can hinder performance earlier on. SA fares worst in this example; however, it does better in our real-data one below. Although we summarize results for the first 100 evaluations, we ran the SA comparator to convergence and report here that each repetition did eventually converge to the global minimum. Reaching convergence, however, took an average of 8,240 evaluations. Compared with our EI-based results, this represents an enormous expense, which foreshadows further discussion in Section 6 about why stochastic search may be less than ideal for optimization of expensive computer simulation experiments. The asymmetric entropy method ("AsyEnt") performs well once active search begins after ten space-filling evaluations. Subsequently its progress plateaus, indicating a struggle to consistently home in on solutions near the boundary.

Pump-and-treat hydrology problem
We turn now to our motivating example. Worldwide, there are more than 10,000 contaminated land sites (Meer et al., 2008). Environmental cleanup at these sites has received increased attention over the past 20-30 years. Preventing the migration of contaminant plumes is vital to protect water supplies and prevent disease. One approach is pump-and-treat remediation, in which wells are strategically placed to pump out contaminated water, purify it, and inject the treated water back into the system to prevent contaminant spread. A case study of one such remediation is the 580-acre Lockwood Solvent Groundwater Plume Site, an EPA Superfund site located near Billings, Montana, where industrial practices have led to groundwater contamination (United States Environmental Protection Agency, 2013). Figure 3 shows the location of the site and provides a simplified illustration of the contaminant plumes that threaten the Yellowstone River. To prevent further expansion of these plumes, the placement of six pump-and-treat wells has been proposed, as shown in the figure.
Mayer et al. (2002) posed the pump-and-treat problem as a constrained blackbox optimization problem. For the version of the problem considered here, the pumping rates are varied in order to minimize the cost of operating the system subject to constraints on the contaminant staying within the plume boundaries. Letting x_j denote the pumping rate for well j, one obtains a constrained problem of the form (1): minimize f(x) subject to c(x) ≤ 0 over the pumping rates x ∈ B. The objective f is linear and describes the costs required to operate the wells. In the absence of the constraints c, the solution is at the origin, corresponding to no pumping and no remediation. The two constraints denote flow exiting the two contaminant plumes. An analytic element method groundwater model simulates the amount of contaminant exiting the boundaries and is treated as a blackbox (Matott et al., 2006). This model never returns negative values for the constraints, and this nonsmoothness, right at the constraint boundary, can present modeling challenges.
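The AL composite used throughout our experiments folds the constraints into the objective as in Eq. (5). Below is a minimal Python sketch, assuming the standard form L_A(x; λ, ρ) = f(x) + λᵀc(x) + (1/(2ρ)) Σ_j max(0, c_j(x))²; the linear cost and two-constraint functions are hypothetical stand-ins, not the Lockwood groundwater simulator itself.

```python
import numpy as np

def aug_lagrangian(f, c, x, lam, rho):
    """Augmented Lagrangian for min f(x) subject to c(x) <= 0.

    lam: vector of Lagrange multipliers, one per constraint;
    rho: penalty parameter. The max(0, .) keeps the quadratic
    penalty one-sided, as appropriate for inequality constraints.
    """
    cx = np.asarray(c(x))
    return f(x) + lam @ cx + np.sum(np.maximum(0.0, cx) ** 2) / (2.0 * rho)

# Hypothetical stand-ins for the blackbox: a linear pumping cost
# and two toy "plume flow" constraints (satisfied when x_j >= j).
f = lambda x: float(np.sum(x))
c = lambda x: np.array([1.0 - x[0], 2.0 - x[1]])
```

At a feasible point the max(0, ·) penalty terms vanish, so the composite reduces to f plus the (nonpositive) multiplier term.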
5.1 Some comparators
Matott et al. (2011) featured this example in a comparison of MATLAB and Python optimizers, treating constraints via APM. The results of this study are shown in the left panel of Figure 4 under a total budget of 1,000 evaluations. All comparators were initialized at the valid input x_0 = (10,000, ..., 10,000). To abstract these trajectories as a benchmark, we superimpose our new results on an image of the first 500 iterations. The right panel of Figure 4 shows an example, with a simple comparator overlaid based on stochastic search with OICs (Section 4.1). It may be surprising that simple stochastic search, based on sampling one OIC in each trial and updating the best valid value of the objective when a new one is found, performs well relative to much more thoughtful comparators. Since the method is stochastic, we report its average behavior over thirty replicates. That average is competitive with the best group of methods for the first twenty-five iterations or so, suggesting that those methods, while implementing highly varied protocols, are not searching any better than randomly in the early stages. Observe that even after those early stages, OICs still outperform at least half the comparators for the remaining trials. Those methods are getting stuck in local minima, whereas the OIC search, however clumsy, remains global. However pathologically slow a random search like this may be to converge, its success on this problem illustrates a clear benefit to exploration over exploitation in the early stages.
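The bookkeeping behind such a stochastic search comparator is only a few lines. The sketch below is a simplification: candidates are drawn uniformly in the hyperrectangle B (a stand-in for sampling one OIC per trial, whose details are not reproduced here), and the best-valid-value-so-far trace is what gets plotted against the benchmark trajectories.

```python
import numpy as np

def random_valid_search(f, c, lower, upper, n, seed=0):
    """Sketch of stochastic search: sample uniformly in
    B = [lower, upper], keep the best valid objective value.
    Returns the best point, its value, and the best-so-far trace."""
    rng = np.random.default_rng(seed)
    best_x, best_f = None, np.inf
    trace = []
    for _ in range(n):
        x = rng.uniform(lower, upper)
        # update only when the candidate is valid AND improves
        if np.all(np.asarray(c(x)) <= 0) and f(x) < best_f:
            best_x, best_f = x, f(x)
        trace.append(best_f)  # progress curve, as in Figure 4
    return best_x, best_f, trace
```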

Using augmented Lagrangians
Figure 5 shows the results of a Monte Carlo experiment set up like the one in Section 4.2. In this case each of thirty repetitions was initialized randomly with x_0 ∈ B = [0, 20000]^6. Note that comparators from Section 5.1 (in gray) used a single fixed starting location x_0, not a random one. From the figure we can see that the relative ordering of our proposed hybrid surrogate/AL comparators is roughly the same as for the toy problem. The surrogate-model-based average and worst-case behaviors are better than those of the other AL comparators and are competitive with those of the best APMs from Matott et al. Many of the individual Monte Carlo runs of our EI- and EY-based methods outperformed all APM comparators. But some individual APM runs outperform our average results. We believe that the initializing value x_0 used by those methods is a tremendous help. For example, when running MADS (no AL) with that same value, it achieved the best result in our study, 23,026. That MADS' average behavior is much worse suggests extreme differential performance depending on the quality of initialization, particularly with regard to the validity of the initial value x_0. Moreover, the 95% section of the table reveals that a substantial proportion (5-10%) of the repetitions resulted in no valid solution even after exhausting the full budget of n = 500 iterations. Had the Matott et al. comparison been randomly initialized, we expect that the best comparators would similarly have fared worse. By contrast, in experiments with the surrogate-based methods using the same valid x_0, we found (not shown) no differences, up to Monte Carlo error, in the final solutions obtained.
The SA comparator performed rather better in this experiment than on our synthetic example in Section 4.2, outperforming all but one of the classical AL methods. Given the expense of the blackbox simulations (and the likelihood that convergence could require thousands of iterations), we did not attempt to run SA to convergence to see whether it is ultimately competitive with our EI-based methods. The story is similar for "AsyEnt": after the one hundred space-filling candidates, progress is rapid, beating our classical comparators, but it quickly plateaus, never quite matching our surrogate-model-based AL ones. Interestingly, SA and AsyEnt have the worst best-case (5% and 500 iterations) results.

Discussion
We explored a hybridization of statistical global optimization with an amenable mathematical programming approach to accommodating constraints. In particular, we combined Gaussian process surrogate modeling and expected improvement methods from the design of computer experiments literature with an additive penalty method that has attractive convergence properties: the augmented Lagrangian. The main advantage of this pairing is that it reduces a constrained optimization problem to an unconstrained one, for which statistical methods are more mature. Statistical methods are not known for rapid convergence to local optima, but they are more conservative than their mathematical programming analogues, in many cases offering better global solutions with fewer blackbox evaluations. We extended the idea of EI to the composite objective arising from the AL and showed that the most sensible variations on such schemes consistently outperform similar methods leveraging a more traditional optimization framework whose focus is usually more local. Still, there is plenty of room for improvement. For example, we anticipate gains from a more aggressive hybridization that acknowledges that the statistical methods fail to "penetrate" into local troughs, particularly toward the end of a search. In the unconstrained context, Gray et al. (2007) and Gramacy and Le Digabel (2011) have had success pairing EI with modern direct search optimizers. Both setups port provable local convergence from the direct method to a more global search context by, in effect, letting the direct solver take over toward the end of the search in order to "drill down" to a final solution.
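The overall pairing can be summarized schematically. The sketch below shows only the outer AL loop, with the usual multiplier update λ ← max(0, λ + c(x)/ρ) and the penalty parameter halved whenever the subproblem solution remains infeasible; the inner_solve argument is a placeholder for the EI search over the GP surrogate, and the exact update schedule here is an assumption of this sketch rather than a transcription of our implementation.

```python
import numpy as np

def al_outer_loop(f, c, inner_solve, m, rho0=1.0, iters=10):
    """Outer AL loop for min f(x) s.t. c(x) <= 0 with m constraints.

    inner_solve: callable taking the composite La and returning its
    (approximate) minimizer; stands in for an EI/surrogate search."""
    lam, rho = np.zeros(m), rho0
    for _ in range(iters):
        La = lambda x: (f(x) + lam @ np.asarray(c(x))
                        + np.sum(np.maximum(0.0, np.asarray(c(x))) ** 2)
                        / (2.0 * rho))
        x = inner_solve(La)                     # unconstrained subproblem
        cx = np.asarray(c(x))
        lam = np.maximum(0.0, lam + cx / rho)   # multiplier update
        if np.any(cx > 0):                      # still infeasible:
            rho /= 2.0                          # strengthen the penalty
    return x, lam, rho
```

As a usage illustration, a crude grid search can play the role of inner_solve on the toy problem min x² subject to 1 − x ≤ 0, whose solution x = 1 sits on the constraint boundary.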
Other potential extensions involve improvements on the statistical modeling front. For example, our models for the constraints were independent for each c_j, j = 1, ..., m, leaving untapped potential to leverage cross-correlations (e.g., Williams et al., 2010). Ideas from multiobjective optimization may prove helpful in our multiconstraint format. Treating the constraints as we do, in a quadratic composite via the AL, represents one way forward; keeping them separated with Pareto-optimal-like strategies may be another promising option (see, e.g., Svenson and Santner, 2012; Picheny, 2013, 2014).
Better analytics offer the potential for further improvement. We resorted to Monte Carlo (MC) for two aspects of our calculations; however, it is important to clarify that we are not advocating stochastic search. An analytic EI calculation, or a branch-and-bound-like scheme for optimizing it at each inner-loop search step, would eliminate MC errors and yield an entirely deterministic scheme. Sometimes a bit of stochasticity is welcome in statistical design applications, of which blackbox optimization is an example. However, there are clearly limits to the usefulness of random search in that setting, especially when simulations for the objective or constraint(s) are expensive. This is borne out in our empirical work, where a simulated annealing (SA) implementation demonstrates lukewarm results under tight computational budgets. SA offers nice technical guarantees for global solutions but, as the R documentation for optim's method="SANN" cautions, it "depends critically on the settings of the control parameters. It is not a general-purpose method but can be very useful in getting to a good value on a very rough surface."
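For concreteness, one of the MC calculations alluded to above, estimating an expected improvement when the predictive distribution of the composite is not analytically tractable, amounts to averaging thresholded posterior draws. A one-line sketch, where y_draws stands in for samples of the composite (AL) objective at a candidate location and y_best for the best composite value observed so far:

```python
import numpy as np

def mc_improvement(y_draws, y_best):
    """Monte Carlo estimate of expected improvement at a candidate:
    average the improvement max(0, y_best - y) over posterior draws."""
    return float(np.mean(np.maximum(0.0, y_best - np.asarray(y_draws))))
```

Replacing this average with a closed-form expectation, where one exists, is precisely what would remove the MC error and make the scheme deterministic.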
There may be alternative ways to acknowledge, in the known monotone objective (f) case, as in both of our examples, that the solution lies on a constraint boundary. Our ideas for this case (e.g., dropping the max in the AL (5)) are attractive because they can be facilitated by a minor coding change, but they yield only modest improvements. Dropping the max is also risky when the problem includes nonbinding constraints at the solution, because it can inappropriately inflate the importance of candidates well inside the valid region according to one constraint but well outside it for another. The slack variable approach of Kannan and Wild (2012) may present an attractive remedy, especially when c(x) returns only nonnegative values, as in our hydrology example, as might explicit learning of classification boundaries to guide sampling (e.g., Lee et al., 2011; Chevalier et al., 2014; Lindberg and Lee, 2015).
In closing, however, we remark that the extra complication entailed by many of the above ideas may not be pragmatic from an engineering perspective. The AL is a simple framework, and its hybridization with GP models and EI is relatively straightforward, allowing existing statistical software to be leveraged directly (e.g., laGP was easy to augment to accommodate the new methods described here). This is attractive because, relative to the mathematical programming literature, statistical optimization offers few constrained optimization methods readily deployable by practitioners. The statistical optimization literature is still in its infancy in the sense that bespoke implementation is required for most novel applications. By contrast, software packages like NOMAD, implementing MADS (see supplementary material), generally work out of the box. It is hard to imagine matching that engineering capability for difficult constrained optimization problems with statistical methodology if we insist on those methods being even more intricate than the current state of the art.

Figure 1 :
Figure 1: Toy problem and its local minimizers; only x_A is a global minimizer.

Figure 3 :
Figure 3: Lockwood site and its contaminant plumes. The map on the left identifies the Lockwood Solvent region and shows its proximity to the Yellowstone River and the city of Billings (image from the website of the Agency for Toxic Substances & Disease Registry, 2010). The right panel illustrates the plume sites, their boundaries (including the Yellowstone River), and the proposed locations of the six remediation wells (A1, A2, B1, B2, B3, B4).

Figure 4 :
Figure 4: Convergence behavior of the selected algorithms, as applied to the pump-and-treat problem.
Results for the motivating problem in Section 1 over 100 Monte Carlo repetitions with a random x_0. The plot tracks the average best valid value of the objective over 100 blackbox iterations; the table shows distributional information at iterations 25, 50, and 100.
Results for the Lockwood problem over 30 Monte Carlo repetitions with a random x_0. The plot tracks the average best valid value of the objective over blackbox iterations; the table shows more distributional information at iterations 100, 200, and 500.