Multilevel quasi-interpolation on a sparse grid with the Gaussian

Motivated by the recent multilevel sparse kernel-based interpolation (MuSIK) algorithm proposed in Georgoulis et al. (SIAM J. Sci. Comput. 35, 815–832, 2013), we introduce the new quasi-multilevel sparse interpolation with kernels (Q-MuSIK) via the combination technique. The Q-MuSIK scheme achieves better convergence and run time when compared with classical quasi-interpolation. The Q-MuSIK algorithm is also generally superior to the MuSIK methods in terms of run time, in particular for high-dimensional interpolation problems, since there is no need to solve large algebraic systems. We subsequently propose a fast, low-complexity, high-dimensional positive-weight quadrature formula based on the Q-MuSIK approximation of the integrand. We present the results of numerical experiments for both quasi-interpolation and quadrature in high dimensions.


Introduction
Over the last half century, numerical methods have attracted much attention, not only among mathematicians but also in the scientific and engineering communities.
High dimensionality usually causes problems for mathematical modelling on gridded data. The main problem is known as the curse of dimensionality, a term due to Bellman [4]. There is an exponential relationship between the computational cost of approximation with a given error bound and the dimension d of the space R^d for any given problem. For this reason, classical approximation techniques are limited to low dimensions. For example, the complexity of solving an approximation problem on gridded data over a bounded domain Ω ⊂ R^d is O(N^d), where d is the dimension of the input data. Radial basis function (RBF) approximation is one of the tools which is effective in high dimensions when the data is scattered in its domain; see e.g. [17]. The potential smoothness of the basis functions (such as Gaussians) means that RBF methods have also been successfully applied to the solution of smooth partial differential equations; see [12].
Another powerful tool for multidimensional problems is quasi-interpolation, which is widely used in scientific computation, mechanics and engineering. Theoretical error estimates for RBF quasi-interpolation are available, for instance in [2, 20, 21]. A quasi-interpolation technique based on radial basis functions is discussed in the sequel.
Given a continuous function u on R^d, the quasi-interpolant Q_h u, h > 0, is a superposition of dilated and shifted versions of a given generator function ν on R^d, using as coefficients the samples of u on the lattice hZ^d:

Q_h u(x) := Σ_{z ∈ Z^d} u(hz) ν(h^{−1}x − z).   (1)

In this paper ν = μ_c is the Gaussian function

μ_c(x) := (πc)^{−d/2} e^{−|x|²/c},   c > 0.

We will truncate the expansion (1) so that u is evaluated only inside the unit cube; see Section 2. The kernel μ_c of the quasi-interpolant must have integral 1 in order that it be an approximate identity. This is what motivates the choice of normalisation constant in the above definition. We also wish to have the flexibility to choose a particular width of Gaussian. We will write μ = μ_1.
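As a minimal illustration (a sketch, not the paper's implementation), the truncated one-dimensional version of (1) with the unit-integral Gaussian kernel can be written as follows; the names `mu` and `quasi_interpolant` are ours.

```python
import math

def mu(x, c=1.0):
    # Gaussian generator, normalised so its integral over R equals 1
    return math.exp(-x * x / c) / math.sqrt(math.pi * c)

def quasi_interpolant(u, h, x, c=1.0):
    """Q_h u(x) = sum_z u(h z) mu(x/h - z), with the lattice sum
    truncated so that u is only sampled at points h z inside [0, 1]."""
    n = int(round(1.0 / h))
    return sum(u(h * z) * mu(x / h - z, c) for z in range(n + 1))

# For u = 1 the result is close to, but not exactly, 1 ("approximate
# approximation"): the saturation error is about 2*exp(-pi**2 * c).
q = quasi_interpolant(lambda t: 1.0, 1.0 / 64, 0.5)
```

Even for the constant function the scheme reproduces 1 only up to a small, h-independent saturation error, which is the phenomenon discussed below.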
The main advantage of quasi-interpolation is that we do not need to solve a linear system, as one would have to do with interpolation. This means that the approximation process is much faster. We compare our results with the analogous interpolation method, MuSIK, described in [9], and see that our methods give more accurate results in the same amount of time.
We should also comment that we are aiming at approximation of smooth functions, so we choose a smooth basis function. In high-dimensional applications, low-dimensional non-smooth artefacts become more difficult to see, so that functions, from the point of view of an observer, smooth out. We should also observe that testing the uniform approximation of a method in ten dimensions on the unit cube would require 10^10 points for very low resolution in each direction. Hence we can have no confidence that the results we give are appropriate. We explore this issue in the numerical examples. Thus, in high dimensions we observe the approximation of functionals of the solution, in this case, integrals.
Convergence for quasi-interpolation using the Gaussian has been discussed in [14, 15], where the authors coin the phrase approximate approximation. This refers to the fact that low-degree polynomials are not reproduced by standard quasi-interpolation with Gaussians. In [3] the authors generate a new kernel in (1) using a linear combination of Gaussians, and then balance the choice of width of the Gaussian and the parameter h in (1) in order to gain almost optimal error estimates. Muller and Varnhorn [16], and Chen and Cao [5, 6], study convergence on a compact interval. Here we present our algorithm and numerical results; the analysis of the algorithm will be done in a series of subsequent papers, starting with [13].
Our algorithm is based on the sparse grid method introduced by Zenger [22] in 1991 for the solution of PDEs. These techniques have also been used for approximation and interpolation, and are similar to the hyperbolic cross notion of Babenko [1]. In 2012, Georgoulis et al. [9] introduced a new sparse grid kernel-based interpolation technique which circumvents both computational complexity and conditioning problems. The same basic algorithm was implemented by Schreiber [18], but was not effective because Gaussians with a fixed width were used. Schemes which retain the same shape of basis function while the points become more dense are termed non-stationary, and non-stationary Gaussian interpolation is known to be ill-conditioned. The innovation in [9] was to use stationary interpolation with Gaussians (see the scheme in Section 2), mimicking the sparse-grid scheme with B-splines (this scheme is called SIK). However, SIK with Gaussians does not converge, in contrast to the B-spline case. In order to gain convergence the authors used an iterative correction scheme, interpolating the residual from the previous level at the next level. This scheme is called MuSIK, and is observed in [9] to work very well. In this paper we use the same scheme with quasi-interpolation, obtaining Q-SIK and Q-MuSIK respectively in place of SIK and MuSIK, and observe similarly good results in high dimensions.
Multilevel techniques have been suggested by a number of researchers. The first to consider multilevel approximation were Floater and Iske [8], who combined a thinning algorithm and compactly supported radial basis function interpolation. Furthermore, multilevel interpolation techniques enable us to combine the benefits of stationary and non-stationary standard RBF interpolation, leading to accelerated convergence. Other researchers have since contributed to the multilevel interpolation literature [10, 11, 19].
MuSIK has been extended in [7], both theoretically and numerically. In more detail, that study showed that the SIK and, accordingly, MuSIK schemes are interpolatory for the special case of scaled Gaussian kernels. In addition, a numerical integration algorithm is also proposed in [7], based on interpolating the (high-dimensional) integrand, with a series of numerical examples highlighting the practical applicability of the proposed algorithm for both interpolation and quadrature in up to 10 dimensions. We compare our results with these.

Sparse quasi-interpolation with kernels
Let Ω := [0, 1]^d, and let u : Ω → R. In order to describe the sparse grid construction, we need to introduce some multi-index notation. Throughout this and the next sections we will use l := (l_1, ..., l_d) ∈ N^d for the level and i := (i_1, ..., i_d) ∈ N^d for a spatial position. For the above multi-indices, the family of directionally uniform grids Υ_l on Ω can be described with mesh size h_l := 2^{−l} = (2^{−l_1}, ..., 2^{−l_d}). That is, the grid Υ_l consists of the points x_{l,i} := (x_{l_1,i_1}, ..., x_{l_d,i_d}), where x_{l_p,i_p} := i_p · h_{l_p} = i_p · 2^{−l_p} and i_p ∈ {0, 1, ..., 2^{l_p}}. These grids may have different mesh sizes in each coordinate direction. In addition, one can calculate the number of nodes P_l in Υ_l by the formula

P_l = Π_{p=1}^{d} (2^{l_p} + 1).

For instance, should one choose h_{l_p} = 2^{−n} for all p = 1, ..., d, the Cartesian grid becomes a uniform full grid, denoted by Υ_{n,d}, with P = (2^n + 1)^d nodes, where n is the grid level. We also consider the following subset of Υ_{n,d},

Υ^S_{n,d} := ∪_{|l|_1 = n+d−1} Υ_l,   (5)

with |l|_1 := l_1 + l_2 + ... + l_d, which will be referred to as the sparse grid of level n in d dimensions. We refer to Fig. 1 for a visual representation of (5) for n = 4 and d = 2. Notice that there is some redundancy in this definition, as some grid points are included in more than one sub-grid. As can easily be seen, the sparse grid treatment itself already requires the use of anisotropic basis functions, since interpolation of data with an anisotropic distribution of data sites in the domain requires special consideration. For this reason, we will use anisotropic Gaussian functions for Q-SIK.
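For concreteness, the anisotropic sub-grid Υ_l and its node count P_l = Π_p (2^{l_p} + 1) can be sketched as follows (the helper name `subgrid` is ours):

```python
from itertools import product

def subgrid(l):
    """Nodes of the anisotropic grid Upsilon_l on [0,1]^d, with mesh
    size h_l = (2^{-l_1}, ..., 2^{-l_d}) and i_p in {0, ..., 2^{l_p}}."""
    axes = [[i / 2 ** lp for i in range(2 ** lp + 1)] for lp in l]
    return list(product(*axes))

# Node count matches P_l = prod_p (2^{l_p} + 1): for l = (2, 3) in 2-D
# this is 5 * 9 = 45 points.
pts = subgrid((2, 3))
```

The tensor-product structure of `product(*axes)` is exactly what allows different mesh sizes per coordinate direction.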
For each l we construct a quasi-interpolant

Q_l u(x) := Σ_{z ∈ Υ_l} u(z) μ_l(x − z),

where μ_l is the anisotropic Gaussian scaled to the mesh size h_l. Then we obtain the Q-SIK approximation by the combination technique:

Q^S_{n,d} u := Σ_{q=0}^{d−1} (−1)^q C(d−1, q) Σ_{|l|_1 = n+d−1−q} Q_l u.

For example, in 2-D the Q-SIK approximation is

Q^S_{n,2} u = Σ_{|l|_1 = n+1} Q_l u − Σ_{|l|_1 = n} Q_l u.

The key idea here is that we view the sparse grid in Fig. 1 as an algebraic sum of grids, as in Fig. 2. We can summarize the algorithm of sparse kernel-based quasi-interpolation as follows.
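The bookkeeping of the combination technique can be sketched as an enumeration of signed sub-grid problems. The function below is illustrative (we assume each direction level satisfies l_p ≥ 1); it lists the coefficient attached to each level multi-index l:

```python
from itertools import product
from math import comb

def combination_terms(n, d):
    """Enumerate (coefficient, level) pairs of the combination technique
        Q^S_{n,d} = sum_{q=0}^{d-1} (-1)^q C(d-1, q)
                    sum_{|l|_1 = n + d - 1 - q} Q_l,
    assuming each direction level l_p >= 1."""
    terms = []
    for q in range(d):
        coeff = (-1) ** q * comb(d - 1, q)
        target = n + d - 1 - q
        for l in product(range(1, target + 1), repeat=d):
            if sum(l) == target:
                terms.append((coeff, l))
    return terms

# 2-D, level n = 4: grids with |l|_1 = 5 enter with +1, |l|_1 = 4 with -1.
terms = combination_terms(4, 2)
```

Note that the coefficients always sum to 1, so constants would be reproduced exactly if each Q_l reproduced them; the failure of the Gaussian Q_l to do so is the source of the non-convergence discussed in the next section.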

Algorithm 1 Sparse quasi-interpolation with kernels
1: Construct the sub-grids Υ_l with mesh size h_l.
2: Solve the anisotropic sub-grid quasi-interpolation problems Q_l u(x) = Σ_{z ∈ Υ_l} u(z) μ_l(x − z).
3: Combine all the sub-grid quasi-interpolants obtained above via the combination technique.
As can be seen from the above description, the Q-SIK method is very amenable to parallel computation, since each sub-approximation problem can be solved independently. Therefore this method can be applied on a computational cluster or distributed across workstations. Thus, the Q-SIK method provides us with a computationally cheap approximation, especially for multidimensional problems.

Multilevel sparse quasi-interpolation with kernels
We will see in Section 5 (and the same is observed for SIK) that Q-SIK does not converge, even for the function u ≡ 1. In the case of the infinite grid in one dimension, this can be seen easily. For a 1/n-spaced grid the quasi-interpolant of u ≡ 1 is

Q_{1/n} 1(x) = Σ_{z ∈ Z} μ_c(nx − z).

This function is 1/n-periodic and even, and so has a Fourier series

Q_{1/n} 1(x) = a_0 + Σ_{k ≥ 1} a_k cos(2πknx),

where, for k ≥ 1, a_k = 2 e^{−π²ck²}, the Fourier coefficient of the normal distribution. Additionally a_0 = 1. Hence, for any n,

sup_x |Q_{1/n} 1(x) − 1| = 2 Σ_{k ≥ 1} e^{−π²ck²} > 0,

so that there is no convergence as n → ∞.
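A quick numerical check of this saturation (a one-dimensional sketch with c = 1; helper names are ours) shows the sup error for u ≡ 1 staying at the same level for every n:

```python
import math

def mu(x, c=1.0):
    # Unit-integral Gaussian generator
    return math.exp(-x * x / c) / math.sqrt(math.pi * c)

def sup_error(n, c=1.0, samples=200):
    """max over sampled x in [0, 1] of |sum_z mu(n x - z) - 1| for the
    infinite-grid quasi-interpolant of u = 1; the lattice sum is
    truncated where the Gaussian is negligible."""
    worst = 0.0
    for i in range(samples + 1):
        x = i / samples
        s = sum(mu(n * x - z, c) for z in range(-40, n + 41))
        worst = max(worst, abs(s - 1.0))
    return worst

# The error does not decrease under refinement: it stays near
# 2*exp(-pi**2) ~ 1e-4 for every n.
errs = [sup_error(n) for n in (8, 16, 32)]
```

This is the "approximate approximation" effect: the error floor 2 Σ_{k≥1} e^{−π²ck²} depends on c, not on n, which is precisely why a multilevel correction is needed.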
Thus we adopt a multilevel refinement strategy, which we call Q-MuSIK.The only difference between MuSIK and Q-MuSIK is that we will use the Q-SIK method instead of the SIK method for each sub-grid approximation problem.
In contrast to the single-level Q-SIK method discussed in the previous section, we will now use nested sparse grids, sorted from the lowest level to the highest. In other words, sparse grids with increasingly greater data densities provide us with a hierarchical decomposition of the sparse grid data. The hierarchical decomposition in two dimensions can be seen in Fig. 3. After obtaining the sparse grid decomposition, the sparse grid quasi-interpolant Q_{n_0,d} is evaluated at the coarsest level; in other words, the coarsest level approximation is S_{0,d}u = Q_{n_0,d}u. Then a correction Δ_j is calculated at each level using Q-SIK on the residual,

Δ_j = Q_{n_0+j,d}(u − S_{j−1,d}u) on Υ^S_{n_0+j,d}, and S_{j,d}u = S_{j−1,d}u + Δ_j, for 1 ≤ j ≤ n.

This multilevel iterative refinement algorithm is called Q-MuSIK.

Algorithm 2 Quasi multilevel sparse interpolation with kernels
Require: Sparse grid data decomposition and function u.
1: Initialise the approximation at zero, that is, S_{−1,d}u = 0.
2: Construct the nested sparse grids Υ^S_{n_0+j,d}, 0 ≤ j ≤ n.
3: For each level j, compute the correction Δ_j = Q_{n_0+j,d}(u − S_{j−1,d}u) with Q-SIK and set S_{j,d}u = S_{j−1,d}u + Δ_j.
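The multilevel loop can be sketched as follows, with a toy single-level approximant standing in for Q-SIK (piecewise-linear interpolation here; the names and signatures are illustrative, not the paper's code):

```python
def multilevel(u, single_level, n0, n_levels):
    """S_j u = S_{j-1} u + Delta_j, where Delta_j approximates the
    residual u - S_{j-1} u at level n0 + j.  `single_level(f, level)`
    must sample f eagerly and return a callable approximant."""
    corrections = []

    def S(x):
        # Current approximant: sum of all corrections computed so far
        return sum(delta(x) for delta in corrections)

    for j in range(n_levels):
        residual = lambda x: u(x) - S(x)
        corrections.append(single_level(residual, n0 + j))
    return S

def toy_interp(f, level):
    # Stand-in for Q-SIK: piecewise-linear interpolation on
    # 2**level + 1 uniform nodes, sampled eagerly.
    n = 2 ** level
    vals = [f(i / n) for i in range(n + 1)]
    def approx(x):
        i = min(int(x * n), n - 1)
        t = x * n - i
        return (1 - t) * vals[i] + t * vals[i + 1]
    return approx

S = multilevel(lambda x: x * x, toy_interp, 1, 4)
```

Replacing `toy_interp` with a Q-SIK approximant yields Algorithm 2; the eager sampling inside `single_level` is what makes each correction depend only on the previously accumulated levels.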

Q-MuSIK quadrature
Now, we introduce a new quadrature formula by integrating the Q-MuSIK approximation over the unit cube. In the case when the Gaussians are tensor products, we can straightforwardly calculate the weights as follows. The integral of a univariate Gaussian is of the form

∫_0^1 m_l exp(−A_l²(x − z_l)²) dx,

for some A_l, m_l ∈ R, which, upon the change of variables ξ = A_l(x − z_l), gives

∫_0^1 m_l exp(−A_l²(x − z_l)²) dx = (m_l √π / (2A_l)) [erf(A_l(1 − z_l)) + erf(A_l z_l)],

with the error function erf defined by

erf(t) := (2/√π) ∫_0^t e^{−ξ²} dξ.

Here the error function can be calculated to arbitrary precision. The sub-grid quadrature rules for each level are constructed in the same manner as the approximations of the previous sections; in other words, we use the grid set Υ_l with mesh size h_l constructed via sparse grids. To solve each sub-grid quadrature problem, we use the anisotropic quasi-quadrature formula of u on the grid Υ_l,

I_l u := Σ_{z ∈ Υ_l} w_z u(z), where w_z := ∫_{[0,1]^d} μ_l(x − z) dx.

We then obtain the Q-SIK quadrature formula by combining all sub-quadrature problems with the combination technique. The multilevel quadrature scheme is achieved by iterative refinement exactly as described in the previous section.
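The univariate building block of these weights can be checked directly. Below is a sketch (with `math.erf` from the standard library; the helper name and the cross-check tolerance are ours), for the unnormalised case m_l = 1:

```python
import math

def gauss_weight_1d(A, z):
    """Closed-form integral of exp(-A**2 * (x - z)**2) over [0, 1]:
    (sqrt(pi) / (2 A)) * (erf(A (1 - z)) + erf(A z))."""
    return math.sqrt(math.pi) / (2 * A) * (
        math.erf(A * (1 - z)) + math.erf(A * z))

# Cross-check against a crude midpoint rule.  Tensor-product weights
# in d dimensions are products of these univariate factors.
A, z = 4.0, 0.3
mid = sum(math.exp(-A ** 2 * ((k + 0.5) / 2000 - z) ** 2)
          for k in range(2000)) / 2000
w = gauss_weight_1d(A, z)
```

Since each factor is an integral of a positive function, all quadrature weights are positive, which is the property highlighted in the conclusions.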

Numerical experiments
In this section, we present a collection of numerical experiments for approximation in d = 2, 3, 4, where the implementation of the Q-SIK and Q-MuSIK algorithms is assessed and compared against both standard quasi-interpolation (QI) on uniform full grids and its standard multilevel version (MLQI). It is worth remarking that in higher dimensions it becomes harder to calculate approximation errors. In all examples we use the basis function μ_{0.4}. Thus, for higher dimensions we move to assessing the errors in quadrature, which is the subject of the final examples, where we consider d = 4, 5, 10. We compare Q-MuSIK with MuSIK for these examples.

2-D example
The Q-SIK and Q-MuSIK algorithms with Gaussian basis functions are applied to the test function P_{2D}(x) : [0, 1]² → R. Here "SGs" stands for the number of sparse grid centres used, "NoVs" represents the total number of nodes visited in Q-SIK and Q-MuSIK, and "Max-error" and "RMS-error" are the maximum norm and root-mean-square (L₂-norm) errors, respectively, evaluated on a 160 × 160 uniform grid. We see in Table 1 that, as expected, Q-SIK does not converge; in other words, the error in the Q-SIK method stagnates after a few levels. However, we see in the same table that Q-MuSIK converges.
In Fig. 4 we graph the results of quasi-interpolation, multilevel quasi-interpolation, Q-SIK and Q-MuSIK. Clearly, quasi-interpolation and Q-SIK do not converge to the target function. However, their multilevel versions show good convergence behaviour.
In Fig. 5 we compare the MuSIK and Q-MuSIK results with regard to computational time. According to this figure, Q-MuSIK reaches the same error level as MuSIK in a shorter time. This is because quasi-interpolation does not require the solution of any large algebraic system, as an interpolatory scheme such as MuSIK does.

3-D example
In this section we approximate two functions, the first smooth and the second with derivative singularities. Let G_3D denote the smooth test function. We test the Q-MuSIK approximation to this function on an equally spaced 50 × 50 × 50 evaluation grid. The results corresponding to those in 2 dimensions can be seen in Figs. 6 and 7; again we see that Q-MuSIK outperforms MuSIK. Now consider H_3D, where (x)_+ = x if x is positive and is zero otherwise. In Fig. 8 we see that the error appears to behave better than for the smoother function G_3D. The error is tested on a 50 × 50 × 50 uniform grid. If we test on a 100 × 100 × 100 uniform grid we see different behaviour for the maximum error (and similarly for RMS): on the finer grid with 10^6 test points we see that the error in the less smooth H_3D is greater. This emphasises the point we made in the introduction that error testing in high dimensions becomes more and more difficult, and that singularities in high-dimensional functions are hard to locate (Table 2).

4-D example
Let us now consider a 4-dimensional non-tensor-product test function H_4D. In Fig. 9 the root-mean-square errors for QI, MLQI, Q-SIK and Q-MuSIK, respectively, for H_4D(x) are plotted against the number of data sites. We see similar performance as in 2 and 3 dimensions.
In Fig. 10 we give the QI, MLQI, Q-SIK and Q-MuSIK quadrature results for M_3D(x) with regard to the corresponding exact value of the integral of M_3D(x) on the domain [0, 1]³, which is (to 16 digits) 0.122434028745371.
In Fig. 11 we compare the MuSIK quadrature and Q-MuSIK quadrature in terms of absolute error versus degrees of freedom and absolute error versus evaluation time. The MuSIK results were produced using the initial Gaussian basis function with standard deviation 0.45. We note that the choice of this parameter is restricted in MuSIK by the conditioning of the interpolation problem. No such restriction is present for Q-MuSIK, as there are no linear systems to invert. As discussed in the previous examples, Q-MuSIK has better performance, especially for small n. In these comparisons we have used four-, five- and ten-dimensional test functions, which are given below; the corresponding exact integral values of these functions on the domains [0, 1]^4, [0, 1]^5 and [0, 1]^10, to 8 digits, are 0.07766696, 0.001953125 and 0.19427907, respectively. These results confirm that the Q-MuSIK method provides more rapid results for the same level of error than the compared methods.

Concluding remarks
A new quasi multilevel sparse interpolation with Gaussian kernels, Q-MuSIK, is proposed and tested. One of the most significant positive aspects of quasi-interpolation is that it can yield a solution directly, with no need to solve large algebraic systems, and the associated quadrature rule has positive weights. Numerical experiments suggest that Q-MuSIK is faster than the analogous interpolation algorithm, MuSIK, at achieving the same level of approximation accuracy. The algorithm is targeted at high-dimensional approximation, where functions are typically observed as smooth. This motivates the choice of the Gaussian kernel as a basis function.

Fig. 3 Sparse grid decomposition for levels 1 to 8 in 2 dimensions

Fig. 5 P_2D: RMS error versus computational time for MuSIK (blue) and Q-MuSIK (red) on a 160 × 160 uniform grid in 2 dimensions

Table 1 P_2D: Q-SIK and Q-MuSIK error on an equally spaced 160 × 160 evaluation grid in 2 dimensions

Table 2 Q-MuSIK maximum error on a 100 × 100 × 100 grid for G_3D and H_3D

In this section we give a number of examples for quadrature. We use two tensor product examples, respectively in 5 and 10 dimensions, and two non-tensor-product examples. We consider the following non-tensor-product example.