QUASI-MONTE CARLO METHOD FOR INFINITELY DIVISIBLE RANDOM VECTORS VIA SERIES REPRESENTATIONS

Abstract. An infinitely divisible random vector without Gaussian component admits representations of shot noise series. Due to the possibly slow convergence of the series, they have not been investigated as a device for Monte Carlo simulation. In this paper, we investigate the structure of shot noise series representations from a simulation point of view and discuss the effectiveness of quasi-Monte Carlo methods applied to series representations. The structure of series representations in nature tends to decrease their effective dimension and thus to increase the efficiency of quasi-Monte Carlo methods, thanks to the greater uniformity of low-discrepancy sequences in the lower dimensions. We illustrate the effectiveness of our approach through numerical results of moment and tail probability estimations for stable and gamma random variables.


1. Introduction. An infinitely divisible random vector without Gaussian component admits representations of shot noise series. Such series representations have played an important role in theories such as the tail probability of stable random vectors and have also been studied in the applied literature under the name "shot noise." (See, for example, Vervaat [28] and the references therein.) Series representations involving Poisson arrival times were given by Ferguson and Klass [6] for real independent increment processes without Gaussian component and with positive jumps. The theory of stable processes and their applications was expanded through the series representation of stable random vectors due to LePage [16]. The simulation of nonnegative infinitely divisible random variables was considered by Bondesson [3], who developed their series representations as a special form of generalized shot noise. The same approach was used in Rosiński [21] as a general pattern for series representations of Banach space valued infinitely divisible random vectors.
Infinitely divisible random vectors consist of three independent components: a constant, a centered normal random vector, and a Poisson component. The generation of the Gaussian component is standard and will not be discussed in this paper. The Poisson component is governed in full by the so-called Lévy measure. If the Lévy measure is infinite, then the Poisson component consists of infinitely many jumps. It is obviously impossible to generate infinitely many jumps on a computer. Possible simulation methods can be summarized as follows.
(i) Direct generation: When an explicit density function, or an alternative exact simulation method, is available, a simulation should be straightforward in principle.
Stable, exponential, gamma, geometric, and negative binomial distributions are in this category. To simulate random variables from more complicated distributions, a numerical inversion method is proposed in Hörmann and Leydold [7] to numerically approximate the density functions with piecewise Hermite functions. Moreover, this method is applied in Imai and Tan [13, 14] to the simulation of variates from infinitely divisible distributions of practical interest, such as the generalized hyperbolic, normal inverse Gaussian, and Meixner distributions.
(ii) Generation with series representation: Series representations provide perfect and often easy simulation of infinitely divisible random vectors. A disadvantage of this method is that some series may converge at an extremely slow rate, in which case one might need a huge number of terms to reach a desired accuracy of the approximation. On the other hand, with ever increasing computational speed, slow convergence may no longer cause a serious practical issue. (See, for example, Kawai [15] and Madan and Yor [17] for simulation use of series representations.)
(iii) Generation from compound Poisson: If a neighborhood of the origin of the Lévy measure is discarded or replaced by its mean value, then the remainder corresponds to a compound Poisson random vector with a constant shift. It converges to the true random vector as the intensity of the discarded part of the Lévy measure decreases. The compound Poisson component can be simulated in a precise manner. The aforementioned series representation provides a consistent way to construct such approximations. However, when the discarded part of the Lévy measure is too intense, the corresponding component may produce a substantial error. In such cases, one may often approximate the discarded component by a Gaussian random vector with an appropriately small variance, as investigated in Asmussen and Rosiński [2]. This approximation complements a method through series representations when the series converges slowly.
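To make method (iii) concrete, here is a minimal sketch of generic compound Poisson sampling; the function names and the toy jump distribution are ours, not the paper's. After discarding a neighborhood of the origin, one draws a Poisson number of jumps with intensity equal to the mass of the retained part of the Lévy measure, then sums that many i.i.d. jumps drawn from the normalized retained measure.

```python
import math
import random

def compound_poisson(intensity, jump_sampler, rng):
    """Draw one compound Poisson variate: N ~ Poisson(intensity) jumps,
    each drawn independently by jump_sampler."""
    # inverse-transform Poisson sampling via sequential search
    # (adequate for the moderate intensities of this illustration)
    n = 0
    p = math.exp(-intensity)
    cdf = p
    u = rng.random()
    while u > cdf:
        n += 1
        p *= intensity / n
        cdf += p
    return sum(jump_sampler(rng) for _ in range(n))

rng = random.Random(12345)
# toy: intensity-2 compound Poisson with unit-mean exponential jumps; mean = 2 * 1 = 2
samples = [compound_poisson(2.0, lambda r: r.expovariate(1.0), rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
```

In an actual application the intensity would be the retained Lévy mass and `jump_sampler` would draw from the normalized retained Lévy measure.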
The main purpose of this paper is to discuss the effectiveness of the quasi-Monte Carlo (QMC) method applied to shot noise series representations. We aim at moment estimations as well as tail probability estimations, rather than a pointwise approximation. It is well known that the QMC method can produce more accurate estimates than the Monte Carlo (MC) method since a low-discrepancy sequence in the QMC method has an asymptotically superior convergence rate compared to the random numbers used in the MC method. The practical advantage of the QMC method, on the other hand, can be justified by the notion of the effective dimension of the problem, first introduced in Caflisch, Morokoff, and Owen [5]. It has been demonstrated that the QMC method works better than the MC method for nominally high-dimensional problems as long as their effective dimensions are small. In this paper, we demonstrate that the structure of series representations naturally tends to decrease their effective dimension and thus to increase the efficiency of the QMC method. Since the QMC method makes the best use of the greater uniformity of low-discrepancy sequences in the lower dimensions, it is natural to expect the applicability of the QMC method to series representations of infinitely divisible random vectors. We illustrate the effectiveness of our approach through numerical results of moment and tail probability estimations for stable and gamma random variables.
The rest of this paper is organized as follows. Section 2 briefly summarizes background material on series representations of infinitely divisible random vectors with a view towards simulation. In section 3, we discuss the effectiveness of the QMC method applied to Poisson interarrival times in series representations and review the concept of cumulative explanatory ratio. In section 4, we provide numerical results to illustrate the effectiveness of the QMC method applied to series representations of stable and of gamma random variables. Finally, section 5 concludes.

2. Series representation of infinitely divisible random vectors.
Let us begin this section with the notation which will be used throughout this paper. We denote by N the collection of positive integers, by R^d the d-dimensional Euclidean space equipped with the norm ‖·‖, by R^d_0 := R^d \ {0}, and by =_L the identity in law. We say that an infinitely divisible random vector X in R^d is generated by the triplet (γ, A, ν) if its characteristic function is given by

E[e^{i⟨y,X⟩}] = exp( i⟨y, γ⟩ − (1/2)⟨y, Ay⟩ + ∫_{R^d_0} ( e^{i⟨y,z⟩} − 1 − i⟨y,z⟩ 1(‖z‖ ≤ 1) ) ν(dz) ),  y ∈ R^d,

where γ ∈ R^d, A is a symmetric nonnegative-definite d × d matrix, and ν is a Lévy measure on R^d_0. Throughout this paper, we consider only infinitely divisible laws without Gaussian component, that is, A ≡ 0. Moreover, we denote by {Γ_k}_{k∈N} the arrival times of a standard Poisson process and by {E_k}_{k∈N} a sequence of independent and identically distributed (i.i.d.) exponential random variables with unit mean.
Let us review generalities on the series representation of infinitely divisible random vectors with a view towards simulation. Our discussion is essentially parallel to the so-called inverse Lévy measure method [6, 16]. Fix T > 0 and let h be a nonnegative function on (0, T] with ∫_0^T h(s) ds < +∞. Notice first that, with H(t) := inf{u > 0 : ∫_u^T h(s) ds < t}, the points {H(Γ_k) : Γ_k ≤ ∫_0^T h(s) ds} form a Poisson random measure on (0, T] with intensity h. Therefore, by regarding the intensity h as a Lévy measure ("on state space" rather than on time), we deduce that Σ_{k=1}^{+∞} H(Γ_k) 1(Γ_k ≤ ∫_0^T h(s) ds) is an infinitely divisible random variable with Lévy measure ν(dz) = h(z) dz defined on (0, T]. Notice here that the definition of H implicitly assumes that the Lévy measure ν has a compact support. Moreover, the condition ∫_0^T h(s) ds < +∞ implies that the Lévy measure is finite. We can extend this formulation to an infinite Lévy measure on R_+ simply by redefining the inverse function H as running down from infinity rather than up the other way, that is,

(2.1)  H(r) := inf{u > 0 : ∫_u^{+∞} h(s) ds < r}.

(See Asmussen and Glynn [1].) A series representation of stable random vectors can be derived in closed form through this inverse Lévy measure method as follows.
Example 2.1 (stable random vector). An infinitely divisible random vector X in R^d, without Gaussian component, is called stable if its Lévy measure is given by

ν(B) = ∫_{S^{d−1}} σ(dξ) ∫_0^{+∞} 1(rξ ∈ B) r^{−α−1} dr,  B ∈ B(R^d_0),

where α ∈ (0, 2) is the stability index and where σ is a finite positive measure on the unit sphere S^{d−1}. The inverse Lévy measure method yields the series representation

X =_L Σ_{k=1}^{+∞} [ (αΓ_k/σ(S^{d−1}))^{−1/α} ξ_k − c_k ],

where {ξ_k}_{k∈N} is a sequence of i.i.d. random vectors on S^{d−1} with common distribution σ(·)/σ(S^{d−1}), independent of {Γ_k}_{k∈N}, and where {c_k}_{k∈N} are suitable centering constants, which vanish for α ∈ (0, 1) and can be expressed with the help of the Riemann zeta function ζ otherwise. The above centering constants are obtained in Proposition 5.5 of Rosiński [22].
Every infinitely divisible random vector admits a series representation through the inverse Lévy measure method. In most cases, however, the tail inverse of the Lévy measure, that is, the function H(r) defined by (2.1), is not available in closed form, with exceptions such as the stable random vector of Example 2.1 and the layered stable random vector of Houdré and Kawai [9]. To obtain a closed form, some alternative methods have been proposed, for example, the thinning method and the rejection method of [21]. Each of those methods can be considered as a special case of the so-called generalized shot noise method of [3, 21], which we describe as follows. Assume that the Lévy measure ν can be decomposed as

(2.4)  ν(B) = ∫_0^{+∞} P(H(r, U) ∈ B) dr,  B ∈ B(R^d_0),

where U is a random variable taking values in a suitable space U and where H : R_+ × U → R^d is measurable with r ↦ ‖H(r, u)‖ nonincreasing for each u ∈ U. Then, an infinitely divisible random vector X generated by the triplet (0, 0, ν) is identical in law to the shot noise series

(2.5)  X =_L Σ_{k=1}^{+∞} [ H(Γ_k, U_k) − c_k ],

where {U_k}_{k∈N} is a sequence of i.i.d. copies of U, independent of {Γ_k}_{k∈N}, and where {c_k}_{k∈N} are suitable centering constants.

Example 2.2 (gamma random variable). A gamma random variable X with shape parameter a and rate parameter b is infinitely divisible with Lévy measure ν(dz) = a z^{−1} e^{−bz} dz, z > 0, where a > 0 and b > 0. The inverse Lévy measure method provides the shot noise series representation X =_L Σ_{k=1}^{+∞} b^{−1} Ei^{−1}(Γ_k/a), where Ei(x) := ∫_x^{+∞} u^{−1} e^{−u} du is the exponential integral function. However, this representation is not directly useful since neither the function Ei nor its inverse can be given in closed form. On the other hand, an explicit representation is given in [3] by

(2.6)  X =_L Σ_{k=1}^{+∞} (1/b) e^{−Γ_k/a} V_k,

where {V_k}_{k∈N} is a sequence of i.i.d. exponential random variables with unit mean, independent of {Γ_k}_{k∈N}.
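As a quick numerical sanity check of Bondesson's explicit gamma representation just mentioned, the following sketch (our own illustrative code, with arbitrary sample and truncation sizes) simulates the truncated series Σ_{k≤N} (1/b) e^{−Γ_k/a} V_k and compares its sample mean and variance with those of a gamma law with shape a and rate b, namely a/b and a/b².

```python
import numpy as np

def gamma_via_shot_noise(a, b, n_samples, n_terms, rng):
    """Sample (approximately) gamma(shape=a, rate=b) variates from the
    truncated shot noise series sum_{k<=n_terms} (1/b) exp(-Gamma_k/a) V_k."""
    E = rng.exponential(1.0, size=(n_samples, n_terms))  # interarrival times E_k
    G = np.cumsum(E, axis=1)                             # Poisson arrival times Gamma_k
    V = rng.exponential(1.0, size=(n_samples, n_terms))  # i.i.d. unit-mean marks V_k
    return (np.exp(-G / a) * V).sum(axis=1) / b

rng = np.random.default_rng(0)
x = gamma_via_shot_noise(2.0, 1.0, 20000, 100, rng)
mean, var = x.mean(), x.var()   # gamma(2, 1): mean 2, variance 2
```

With a = 2, the discarded tail has mean Σ_{k>100} (2/3)^k, which is negligible, so 100 terms already suffice here; slowly converging cases require far larger truncation levels.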
There are yet other explicit representations available through the thinning method and the rejection method of [21]. The availability of different representations implies nonuniqueness of the decomposition (2.4). For example, the representation (2.6) is based on the decomposition

ν(B) = ∫_0^{+∞} ∫_0^{+∞} 1( e^{−r/a} v/b ∈ B ) e^{−v} dv dr,  B ∈ B((0, +∞)),

where e^{−v} dv acts as the standard exponential law of the mark. Observe that for each v ∈ R_+ the sequence {e^{−Γ_k/a} v/b}_{k∈N} is almost surely decreasing in k, while the summands {e^{−Γ_k/a} V_k/b}_{k∈N} of the representation (2.6) are not monotone in k. The next example presents a series representation of more intricate structure.

Example 2.3 (CGMY random variable). The CGMY random variable is characterized by a Lévy measure of the form

ν(dz) = C ( e^{−G|z|} |z|^{−1−Y} 1(z < 0) + e^{−Mz} z^{−1−Y} 1(z > 0) ) dz,

where C > 0, G > 0, M > 0, and Y < 2. (See Carr et al. [4] for more details.) If Y ∈ (0, 2), then it falls in a subclass of the tempered stable random vectors of Rosiński [22], for which an explicit series representation, involving several additional random sequences, is given in [22]. (Three other representations are available in closed form and will be discussed in [11]. See also, for example, [15, 17], where such series representations are used for simulation.)

As can be seen in the above examples, series representations for an infinite Lévy measure necessarily take the form of an infinite sum as well. To deal with such infinite series towards simulation, we need to truncate the infinite series at a certain finite point. A straightforward approach is the truncation to a finite number of terms of the series

(2.7)  X_n := Σ_{k=1}^{n} [ H(Γ_k, U_k) − c_k ].

An alternative approach is the truncation to a finite time span of the underlying Poisson process

(2.8)  X^{(n)} := Σ_{k=1}^{+∞} [ H(Γ_k, U_k) 1(Γ_k ≤ n) − c_k ].

In connection with the fact that the function H(r, v) is nonincreasing in r, when the truncation level n is large enough, it is fairly reasonable to expect that the above two truncations (2.7) and (2.8) behave in a similar manner. The common and key building block for the simulation of series representations is the sampling of the arrival times of a standard Poisson process, which can be generated iteratively as a successive summation of i.i.d.
exponential random variables:

(2.9)  Γ_k = E_1 + E_2 + ··· + E_k,  k ∈ N.

The exponential random variables {E_k}_{k∈N} act as interarrival times of a standard Poisson process and can easily be generated by the inverse transform method as E_k = −ln u_k, where {u_k}_{k∈N} is a sequence of i.i.d. uniform random variables on (0, 1).

The truncation (2.8) provides an interesting interpretation of discarded small jumps. Assume, for simplicity, that the function H has the form H(r, u) = h(r), where h is a nonincreasing function from R_+ to R_+. Then, for each n ∈ N such that h(n) < 1, the Itô–Wiener isometry yields the L²(Ω)-estimate of the discarded small jump component

E[ ( X − X^{(n)} − E[X − X^{(n)}] )² ] = ∫_0^{h(n)} z² ν(dz) = ∫_n^{+∞} h(r)² dr,

where the discarded component corresponds to the small jumps of a Poisson random measure μ on R^d_0 × [0, 1] whose compensator involves the Lévy measure ν. This discarded small jump component may be replaced with a Gaussian random vector under some mild conditions. (For details, see [2].)
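The two truncation schemes (2.7) and (2.8) are easy to compare in code. The sketch below (illustrative names and sizes, our own code) generates the arrival times as cumulative sums of exponentials obtained from uniforms and evaluates both truncations for the toy summand h(r) = e^{−r}, whose series has mean Σ_k 2^{−k} = 1.

```python
import numpy as np

def truncated_series(h, n, n_paths, rng):
    """Compare the two truncations of sum_k h(Gamma_k):
    a fixed number of terms (as in (2.7)) versus a fixed
    time span of the Poisson process (as in (2.8))."""
    U = rng.random((n_paths, 2 * n))         # uniforms, oversampled so Gamma_k covers [0, n]
    E = -np.log(1.0 - U)                     # unit-mean exponential interarrival times
    G = np.cumsum(E, axis=1)                 # Poisson arrival times Gamma_k
    H = h(G)
    fixed_terms = H[:, :n].sum(axis=1)       # keep the first n summands
    fixed_span = (H * (G <= n)).sum(axis=1)  # keep summands with Gamma_k <= n
    return fixed_terms, fixed_span

rng = np.random.default_rng(1)
x_terms, x_span = truncated_series(lambda g: np.exp(-g), 50, 20000, rng)
```

Both sample means should be close to 1, and, consistent with the discussion above, the two truncations behave very similarly at this truncation level.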

3. Variance contribution of the lower dimension.
As mentioned in the introduction, the advantage of the QMC method is attributed to its potentially superior convergence rate of O(N^{−1}(ln N)^d), while that of the MC method is O(N^{−1/2}), where N and d denote, respectively, the number of iterations and the dimension. Nevertheless, several studies point out that this theoretically higher asymptotic convergence rate of the QMC method is not achievable in practice for high-dimensional problems. To resolve this puzzle and to explain the practical advantage of the QMC method, especially when we deal with shot noise series representations, let us review in this section the analysis of variance (ANOVA) decomposition and the concept of effective dimension. We refer the reader to Wang and Fang [27] for more details.
Let the set A := {1, 2, . . . , d} denote the coordinate axes of the d-dimensional unit cube [0, 1)^d. For each subset S ⊆ A, we denote by |S| its cardinality and by A\S its complementary set. Each point in [0, 1)^d is denoted by x := (x_1, . . . , x_d), and x_S denotes the |S|-vector of the components x_k for k ∈ S. Let f be a function mapping [0, 1)^d to R, and consider the problem of computing the integral

I(f) := ∫_{[0,1)^d} f(x) dx.

We can employ the MC and QMC methods for approximating the integral by taking a sample mean, that is,

I(f) ≈ (1/n) Σ_{k=1}^{n} f(x^{(k)}),

where the sequence {x^{(k)}}_{k∈N} corresponds to a sequence of i.i.d. replications in the MC method, while it is generated from a low-discrepancy sequence in the QMC method. It is well known that the MC error is of order O(n^{−1/2}) for square integrable functions and is independent of the dimension d. On the other hand, the Koksma–Hlawka inequality provides the QMC error bound

| I(f) − (1/n) Σ_{k=1}^{n} f(x^{(k)}) | ≤ V_{HK}(f) D*({x^{(k)}}_{k=1,...,n}),

where D*({x^{(k)}}_{k=1,...,n}) denotes the star discrepancy of the sequence {x^{(k)}}_{k=1,...,n} and V_{HK}(f) is the variation of f in the sense of Hardy and Krause. Under the mild condition that f is square integrable, the ANOVA decomposition expresses the integrand f as a sum of 2^d additive functions:

f(x) = Σ_{S⊆A} f_S(x),

where the function f_S, which depends only on the components of x in the dimensions S, is defined recursively by

f_S(x) := ∫_{[0,1)^{d−|S|}} f(x) dx_{A\S} − Σ_{T⊊S} f_T(x),

with the usual convention f_∅(x) := I(f). By using this decomposition, we define the variance for the subset S by

(3.1)  D_S(f) := ∫_{[0,1)^d} f_S(x)² dx

whenever |S| > 0, and the total variance by D(f) := Σ_{S⊆A, |S|>0} D_S(f). We are now in a position to define the effective dimension in the truncation sense as the smallest integer d' satisfying

(3.2)  Σ_{S⊆{1,...,d'}} D_S(f) ≥ p D(f),

where p is a quantile of the variance. In other words, the first d' dimensions capture more than p = 95%, say, of the total variance, even though the nominal dimension of f may be very large.
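For an additive integrand the ANOVA decomposition has no interaction terms, so the effective dimension in the truncation sense can be computed exactly. The toy below (our own example) takes f(x) = Σ_k w_k (x_k − 1/2) with geometrically decaying weights, mimicking the decaying summands of a shot noise series; the only nonzero ANOVA variances are D_{{k}} = w_k²/12.

```python
def truncation_dimension(weights, p=0.95):
    """Effective dimension in the truncation sense for the additive toy
    integrand f(x) = sum_k w_k*(x_k - 0.5) on [0,1)^d: since the ANOVA
    variances are D_{k} = w_k**2/12 with no interaction terms, it is the
    smallest d with sum_{k<=d} w_k**2 >= p * sum_k w_k**2."""
    total = sum(w * w for w in weights)
    acc = 0.0
    for d, w in enumerate(weights, start=1):
        acc += w * w
        if acc >= p * total:
            return d
    return len(weights)

# geometrically decaying weights mimic the decaying summands H(Gamma_k)
d_t = truncation_dimension([0.5 ** k for k in range(100)], p=0.95)
```

Here the nominal dimension is 100, yet the first three coordinates already explain 95% of the variance, which is exactly the situation in which QMC is expected to excel.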
The efficiency of the QMC method is intricately related to the effective dimension. Fix N ∈ N, and consider a low-discrepancy point set P := {x^{(k)}}_{k=1,...,N} in [0, 1)^d. A component-wise refinement of the Koksma–Hlawka bound reads

| I(f) − (1/N) Σ_{k=1}^{N} f(x^{(k)}) | ≤ Σ_{S⊆A, |S|>0} ‖f_S‖ D_{S,N}(P_S),

where P_S indicates the projection of the point set P on [0, 1)^{|S|}, where D_{S,N}(P_S) is the discrepancy of P_S consisting of N points, and where ‖f_S‖ denotes the variation of the function f_S. The above inequality indicates that the QMC error bound is related to the uniformity of all the projections P_S and to all the low-dimensional structures f_S in an explicit manner. Although the success of the QMC method relies on the greater uniformity of low-discrepancy sequences relative to random sequences, the uniformity is not always preserved for every component and projection with a finite number of points. In fact, it is well known that the uniformity of low-discrepancy sequences decreases as the dimension increases. Nevertheless, as claimed in Wang and Fang [27], the QMC method can still be more effective than the MC method, in particular, for functions with a low effective dimension in the truncation sense. This observation can be justified by decomposing the bound as

Σ_{S⊆A, |S|>0} ‖f_S‖ D_{S,N}(P_S) = Σ_{∅≠S⊆{1,...,d'}} ‖f_S‖ D_{S,N}(P_S) + Σ_{S⊄{1,...,d'}} ‖f_S‖ D_{S,N}(P_S),

provided that the effective dimension of the function f is given by d'. This decomposition indicates that when d' is small, the discrepancies of all the low-dimensional projections P_S appearing in the first summation are much smaller for low-discrepancy point sets than for random point sets, and hence the first summation is much smaller for the QMC method than for the MC method. As the dimension increases further, the uniformity of low-discrepancy point sets may deteriorate, resulting in higher values of D_{S,N}(P_S). Yet the second summation may not be significant in the sense that the quantities ‖f_S‖ with S ⊄ {1,...,d'} are often small. Consequently, the QMC method can be more effective than the MC method when the function f has a low effective dimension.
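The practical contrast between a low-discrepancy point set and pseudorandom points is easy to reproduce. The sketch below (a generic Halton sequence implemented from scratch for illustration, not the scrambled Sobol' construction used later in the paper) integrates a smooth product function with true integral 1 over [0, 1)².

```python
import math
import random

def halton(n, dims):
    """First n points of the Halton sequence (radical inverse in the first
    `dims` prime bases) -- a simple low-discrepancy sequence."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dims]
    pts = []
    for i in range(1, n + 1):
        point = []
        for base in primes:
            x, f, k = 0.0, 1.0 / base, i
            while k > 0:                 # radical-inverse digit expansion
                x += (k % base) * f
                k //= base
                f /= base
            point.append(x)
        pts.append(point)
    return pts

def sample_mean(points, f):
    return sum(f(x) for x in points) / len(points)

# smooth integrand with I(f) = 1 on [0,1)^2
f = lambda x: math.prod(1.0 + 0.5 * (xi - 0.5) for xi in x)
qmc_est = sample_mean(halton(4096, 2), f)

rng = random.Random(7)
mc_est = sample_mean([[rng.random(), rng.random()] for _ in range(4096)], f)
```

At N = 4096 the Halton estimate is typically several times closer to the true value than the pseudorandom one; the gap widens for integrands dominated by their first few coordinates, which is the point of the effective-dimension argument above.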
Let us next illustrate why the structure of series representations suits this effective dimension argument. There are two primary aspects: the domination of {Γ_k}_{k∈N} by the lower dimensions of the interarrival times {E_k}_{k∈N}, and the tails of the Lévy measure.
For simplicity, we focus on the function H(r) defined by (2.1). In view of (2.9), the first few terms of {H(Γ_k)}_{k∈N} are expressed solely by the first few terms of {E_k}_{k∈N} as well. For a simple illustration, fix λ > 1 and consider the infinite sum of a decreasing sequence,

(3.3)  X_λ := Σ_{k=1}^{+∞} λ^{−Γ_k},  with  E[X_λ] = Σ_{k=1}^{+∞} (1 + ln λ)^{−k} = 1/ln λ,

where the interchange of the summation and the expectation can be justified by the Fubini theorem due to the almost sure positivity of the summands, including the case where both sides are infinite. To see the impact of the first exponential E_1, we compare (3.3) to its counterpart with all the rest of {E_k}_{k≥2} set to be degenerate at their unit mean, that is,

X̃_λ := Σ_{k=1}^{+∞} λ^{−E_1−(k−1)} = (λ/(λ − 1)) λ^{−E_1},

in terms of the first and the second moments. A greater λ assigns a more significant role to the lower dimensions of the interarrival times {E_k}_{k∈N}: indeed, as λ ↑ +∞, the first and second moments of X̃_λ approach those of X_λ, since the first summand then dominates the entire series. Roughly speaking, this tends to happen with a faster decay of the function H(r) in r, that is, less activity near the origin of the Lévy measure. Next, concerning the Lévy measure tails, recall that most of the tail mass of the Lévy measure tends to be captured by the first few terms of the sequence {H(Γ_k)}_{k∈N}. The Lévy measure tails are closely related to the moments of the associated infinitely divisible random vector in such a way that, for a submultiplicative, locally bounded, measurable function f : R^d → R_+,

E[f(X)] < +∞  if and only if  ∫_{‖z‖>1} f(z) ν(dz) < +∞.

(See Sato [24] for details.) In this respect, the simulation of the largest jumps through a systematic generation of the lower dimensions of the interarrival times {E_k}_{k∈N} is expected to contribute to improvements in both precision and convergence when computing expectations.
The effectiveness of the QMC method may be increased by combining this structure with the superior uniformity of a low-discrepancy sequence in its earlier dimensions. Accordingly, it is expected that the QMC method works better than the MC method. The efficiency of the QMC method is often measured through the ratio

(CER)_{d'} := Σ_{S⊆{1,...,d'}} D_S(f) / Σ_{S⊆A, |S|>0} D_S(f),

which we refer to as the cumulative explanatory ratio (CER), where D_S(f) is defined by (3.1). In other words, this represents the proportion of the variance captured by the first d' dimensions. In view of the inequality (3.2), the effective dimension is at most d' as soon as the CER becomes larger than p. In general, it is difficult to provide an explicit expression for D_S(f). However, as shown in Sobol' [26] and [27], the numerator can be computed via the covariance formula

Σ_{S⊆{1,...,d'}} D_S(f) = Cov( f(x_{{1,...,d'}}, x_{A\{1,...,d'}}), f(x_{{1,...,d'}}, x'_{A\{1,...,d'}}) ),

where x' denotes an independent copy of x. Then, for natural numbers n and N such that n ≤ N, define, in view of (2.9),

(3.4)  X_N := Σ_{k=1}^{N} [ H(Γ_k, U_k) − c_k ],  X̃_{n,N} := Σ_{k=1}^{N} [ H(Γ̃_k, U_k) − c_k ],

where Γ̃_k is built from the interarrival times (E_1, . . . , E_n, E'_{n+1}, . . . , E'_N), with {E'_k}_{k∈N} an i.i.d. copy of {E_k}_{k∈N}. Then, the CER with respect to the first n interarrival times is defined by

(3.5)  (CER)_n := Cov( f(X_N), f(X̃_{n,N}) ) / Var( f(X_N) ).

We note that the sequence {U_k}_{k∈N} appears in common on both sides of the covariance operator. As a toy example, the CER of the random variable (3.3) can be computed in closed form by using the identity X_λ =_L λ^{−E_1}(1 + X'_λ), with X'_λ an independent copy of X_λ; in particular, (CER)_1 tends to one as λ ↑ +∞.
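The covariance formula lends itself to a straightforward Monte Carlo estimator of (CER)_n: evaluate the series twice, with the first n interarrival times shared and the remaining ones resampled, and divide the empirical covariance by the empirical variance. A sketch for the toy series (3.3) follows (sizes and names are ours).

```python
import numpy as np

def toy_series(E, lam):
    """X = sum_k lam**(-Gamma_k) with Gamma_k = E_1 + ... + E_k, as in (3.3)."""
    return (lam ** (-np.cumsum(E, axis=1))).sum(axis=1)

def cer_first_n(lam, n, n_terms=40, n_paths=50000, seed=0):
    """Monte Carlo estimate of (CER)_n via the covariance formula:
    share the first n interarrival times, resample the tail, and take
    empirical covariance over empirical variance."""
    rng = np.random.default_rng(seed)
    E = rng.exponential(1.0, (n_paths, n_terms))
    E2 = E.copy()
    E2[:, n:] = rng.exponential(1.0, (n_paths, n_terms - n))  # fresh tail
    x, x2 = toy_series(E, lam), toy_series(E2, lam)
    cov = np.mean((x - x.mean()) * (x2 - x2.mean()))
    return cov / x.var()

c1 = cer_first_n(10.0, 1)   # for lambda = 10, roughly 0.8 of the variance is explained by E_1
```

Already at λ = 10 the first interarrival time explains most of the variance, and the ratio approaches one as λ grows, in line with the closed-form discussion above.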

4. Numerical illustrations.
In this section, we provide numerical results of estimations of moments and tail probabilities for the stable and gamma random variables presented in Examples 2.1 and 2.2. Let us begin by formulating the five statistics we will report in the following numerical examples. We wish to estimate μ := E[f(X)], where X = Σ_{k=1}^{+∞} [H(Γ_k, U_k) − c_k] and f is a function of interest; for example, f(x) = x² for estimation of the second moment. In reality, as discussed, we need to truncate the infinite shot noise series at a finite number of terms; that is, we instead estimate μ_N := E[f(X_N)], where X_N := Σ_{k=1}^{N} [H(Γ_k, U_k) − c_k]. With N_2 independent (randomized) batches, each consisting of N_1 replications and producing the estimate μ̂^{(i)}, i = 1, . . . , N_2, we compute the average, standard error, root mean squared error (RMSE), and relative error defined, respectively, by

(4.1)  μ̄ := (1/N_2) Σ_{i=1}^{N_2} μ̂^{(i)},   SE := ( (1/(N_2(N_2 − 1))) Σ_{i=1}^{N_2} (μ̂^{(i)} − μ̄)² )^{1/2},
       RMSE := ( (1/N_2) Σ_{i=1}^{N_2} (μ̂^{(i)} − μ)² )^{1/2},   RE := |μ̄ − μ| / |μ|.

Note that we consider only examples for which the limiting value μ is available. Note also that the RMSE and the relative error are affected both by the estimator accuracy and by the finite truncation of the infinite shot noise series. The fifth statistic is the effective dimension obtained by estimating the CER (3.5).
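These batch statistics can be computed as follows; this is a sketch under our reading of the standard batch definitions, and the toy target E[U] = 0.5 and all names are illustrative.

```python
import math
import random

def report_statistics(batch_estimates, mu):
    """Average, standard error, RMSE, and relative error computed from
    independent (randomized) batch estimates of the true value mu."""
    n2 = len(batch_estimates)
    avg = sum(batch_estimates) / n2
    se = math.sqrt(sum((e - avg) ** 2 for e in batch_estimates) / (n2 * (n2 - 1)))
    rmse = math.sqrt(sum((e - mu) ** 2 for e in batch_estimates) / n2)
    rel = abs(avg - mu) / abs(mu)
    return avg, se, rmse, rel

# toy: N2 = 30 batches, each averaging N1 = 4096 pseudorandom estimates of E[U] = 0.5
rng = random.Random(3)
batches = [sum(rng.random() for _ in range(4096)) / 4096 for _ in range(30)]
avg, se, rmse, rel = report_statistics(batches, 0.5)
```

In the RQMC experiments each batch would be one independently randomized low-discrepancy point set, so that the standard error across batches is a valid measure of estimator variability.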
In our numerical experiments, we fix N_1 = 4096 and N_2 = 30 and use a scrambled version of Sobol' sequences [25], proposed in Owen [20]. The reason for N_1 = 4096 = 2^{12} is that the Sobol' sequence, which is a (t, s)-sequence in base 2, attains a better uniformity when the sample size is a power of 2 (see Niederreiter [18]). Furthermore, we employ the Latin supercube sampling method of Owen [19] to avoid generating more than 50-dimensional Sobol' sequences. For example, 500-dimensional low-discrepancy sequences are constructed as 10 sets of 50-dimensional (randomized) low-discrepancy sequences. This randomization scheme makes the standard error of (4.1) meaningful and has often been employed, for example, in [12, 13] for error analysis in the randomized QMC (RQMC) framework.
For comparison, we employ the same procedure for estimating those reference numbers with the MC method, except that low-discrepancy sequences are replaced by pseudorandom sequences. All the experiments are implemented on the computer platform Intel Xeon(R) CPU 3.00GHz with 3.25GB memory. It takes around 437 seconds to generate and store 4096 × 1000 random variables in the MC simulation. Meanwhile, in the RQMC method, it takes around 3593 seconds to implement 4096 sets of 1000-dimensional Latin supercube sampling sequences. Therefore, relative to the plain MC experiments, implementation of the RQMC method increases computing time by roughly a factor of 8.

4.1. One-sided stable random variable.
Let X be a one-sided stable random variable in R with the series representation

(4.2)  X =_L Σ_{k=1}^{+∞} (αΓ_k/b)^{−1/α},

where α ∈ (0, 1) and b > 0, and let X_N denote its truncation to the first N terms, where the truncation level N is supposed to be set sufficiently large so that X_N is close enough to the infinite sum X. As a reference for the closeness, consider the different truncation X^{(N)} up to a finite time span of a standard Poisson process in the spirit of (2.8). As discussed earlier, the L²(Ω)-distance is given by

(4.3)  E[ (X − X^{(N)})² ] = ∫_N^{+∞} (αr/b)^{−2/α} dr + ( ∫_N^{+∞} (αr/b)^{−1/α} dr )²,

whose decay is of O(N^{2−2/α}) as N ↑ +∞. This implies that the truncation X^{(N)} converges in L²(Ω) to the infinite sum X at a slower rate for a greater α.

First, we examine the effectiveness of the RQMC method applied to the Poisson interarrival times in moment estimation, where the moments are given in closed form (Example 25.10 and Example 24.12 of Sato [24]): for η ∈ (−∞, α),

E[X^η] = (b Γ(1 − α)/α)^{η/α} Γ(1 − η/α) / Γ(1 − η).

Numerical results of the moment estimations are presented in Table 1. To illustrate the differences between the average and the analytical value, both RMSEs and relative errors are also reported. Each relative error is given as a percentage of the difference between the estimated average and the analytical value. The speed of convergence depends on the parameter value α. The relative errors indicate that each average value tends to be very close to its limiting value even when N is very small for smaller α, while the convergence is slower for larger α. This observation is consistent with the speed of the L²(Ω)-convergence presented in (4.3). Most importantly, we can assert that the RQMC method produces a smaller standard error than the MC method by a factor in the range of 10-200. Recall that, in plain MC simulation, 100-times more replications are required, in principle, to obtain a 10-times more accurate estimate. In comparing the RMSE values, it can also be observed that each moment estimate via the RQMC method tends to converge to its true limiting value E[X^η] much faster than via the MC method.
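The truncated series and the closed-form moments can be cross-checked numerically. In the sketch below (our own code; sizes arbitrary) we pick b = α/Γ(1 − α), so that the Laplace transform is exp(−θ^α) and E[X^η] = Γ(1 − η/α)/Γ(1 − η) for η < α; with α = 1/2 and η = −1 this gives E[X^{−1}] = Γ(3)/Γ(2) = 2.

```python
import math
import numpy as np

def one_sided_stable(alpha, b, n_samples, n_terms, rng):
    """Truncated series (4.2): X_N = sum_{k<=N} (alpha*Gamma_k/b)**(-1/alpha),
    approximately one-sided alpha-stable for alpha in (0, 1)."""
    G = np.cumsum(rng.exponential(1.0, (n_samples, n_terms)), axis=1)
    return ((alpha * G / b) ** (-1.0 / alpha)).sum(axis=1)

alpha = 0.5
b = alpha / math.gamma(1.0 - alpha)   # normalizes the Laplace transform to exp(-theta**alpha)
rng = np.random.default_rng(42)
x = one_sided_stable(alpha, b, 10000, 500, rng)
est = np.mean(1.0 / x)                # eta = -1: E[X**-1] = 2
```

Negative-order moments are convenient for testing here because X itself has infinite mean for α < 1, whereas X^{−1} has light tails and hence a well-behaved plain MC estimator.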
The difference in relative error is mainly attributed to an inefficiency of the MC method; that is, the sample size 4096 × 30 is not sufficient for estimating the average. In fact, this again attests to the superiority of the RQMC method.
Note that the standard errors of the RQMC method are almost invariant under an increase of the truncation level N. We conjecture that Var(X_10), Var(X_100), and Var(X_1000) are close to each other. If so, the effective dimension remains small even in nominally high-dimensional problems such as N = 1000.
Let us next examine the cumulative explanatory ratio (CER)_n to analyze the impact of the RQMC method in terms of the effective dimension. Each CER is estimated through (3.5) by a plain MC method with 1000000 replications. By computing this quantity, we examine the variance contribution of the first n dimensions of {E_k}_{k∈N}. We report numerical results in Table 2, where all the numbers are rounded off to the nearest integer. (Hence, "100%" may be any number greater than 99.49%, not necessarily exactly 100%.) Here, we present only results for the case where the truncation level is set at N = 1000 to avoid overloading this section with nonessential details. (Let us note that we have obtained very similar results even for smaller N's.) Let us emphasize that the CERs achieve almost 100% at the first dimension. In other words, the effective dimension with p = 0.99 is equal to one without additional variance reduction techniques, even when dealing with the high truncation level N = 1000. This indicates that the RQMC method remains very effective even with a very high truncation level, at which the RQMC method often loses its effectiveness against a plain MC method. Those CER results are also consistent with the superiority of the RQMC method in terms of standard error and support our claim that the RQMC method has an advantage over the MC method for shot noise series representations of infinitely divisible laws.

Next, we examine the effectiveness of the RQMC method in estimating tail probabilities, for which the tail asymptotics are available (Property 1.2.15 of Samorodnitsky and Taqqu [23]):

lim_{λ↑+∞} λ^α P(X > λ) = b/α.

We again fix b = 1, and the parameter sets tested are (α, λ) = (0.1, 10^{35}), (0.4, 10^{10}), and (0.7, 10^5). We first present results for the CER in Table 3. Let us emphasize that they are all exactly 100% without rounding operations, unlike the results presented in Table 2.
This is not surprising at all since, as discussed in [23], the first term of the series (4.2) dominates the tail probability behavior in such a way that

P( (αΓ_1/b)^{−1/α} > λ ) = 1 − e^{−(b/α)λ^{−α}} ~ (b/α) λ^{−α},

as λ ↑ +∞. In addition, recall that the first exponential E_1 also appears in all the remaining terms of the series, not only in the first term. This fact provides a stronger reason for the result (CER)_1 = 100%. Numerical results of the tail probability estimation are presented in Table 4. Note that b/α is the limiting value in terms of both the truncation level N and the threshold λ. We can observe through the standard errors and the RMSEs that the RQMC method yields estimates with very small variance, regardless of the truncation level N. Clearly, this fact is attributed to the great uniformity of the 4096 low-discrepancy samples in the earlier dimensions.

JUNICHI IMAI AND REIICHIRO KAWAI
4.2. Gamma random variable. Let X be a gamma random variable with shape parameter a > 0 and rate parameter b > 0, and recall from Example 2.2 the series representation

(4.6)  X =_L Σ_{k=1}^{+∞} (1/b) e^{−Γ_k/a} V_k,

which has a gamma distribution. It is straightforward that, for η ∈ (−a, +∞),

E[X^η] = Γ(a + η) / (Γ(a) b^η),

and that, for λ > 0,

P(X > λ) = Γ(a, bλ) / Γ(a),

where Γ(·, ·) denotes the upper incomplete gamma function, which can be computed by using standard math packages. As is the case with the one-sided stable random variable, the parameter b serves only as a constant scale. Hence, without essential loss of generality, we fix b = 1.

4.2.1. RQMC method only for Poisson interarrival times.
First, we apply the RQMC method to the Poisson interarrival times {E_k}_{k∈N}, while the independent exponential random variables {V_k}_{k∈N} are still governed by the plain MC method. Numerical results of the moment estimation are presented in Table 5. The truncation level is set at N = 1000 throughout the experiments. In short, those results indicate that the RQMC method improves upon the plain MC method in standard error only by a factor in the range of 2-4. (We have confirmed that even much smaller truncation levels, say N = 50, provide very similar results.) The improvements here are not as appealing as in the case of the one-sided stable random variables examined in section 4.1. The effectiveness of the RQMC method is somewhat "watered down" by the purely random element {V_k}_{k∈N}. To confirm this conjecture, consider the random variable

(4.7)  Y := Σ_{k=1}^{+∞} (1/b) e^{−Γ_k/a},

that is, (4.6) with all the {V_k}_{k∈N} frozen at 1. We can prove that the random variable Y of (4.7) is an infinitely divisible random variable with Lévy measure ν(dz) = (a/z) dz defined only on (0, 1/b) and with E[Y] = a/b. As a matter of course, this random variable is not directly comparable with gamma random variables in terms of estimator efficiency since they have different distributions. For example, their variances are not identical, while their means coincide. Nevertheless, the series representation (4.7) provides a useful intuition as to how much the exponential random variables {V_k}_{k∈N} in (4.6) have watered down the effectiveness of the RQMC method applied to {E_k}_{k∈N}. To avoid overloading this section with nonessential details, we give in Table 6 only numerical results of the moment estimation of E[Y] = a/b. For a better comparison with the results presented in Table 5, we again fix b = 1 and the truncation level N = 1000. Those results indicate a sufficient efficiency of the RQMC method against the MC method.
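The frozen-marks variable Y of (4.7) is easy to simulate, and its mean a/b and variance (which equals ∫_0^{1/b} z² (a/z) dz = a/(2b²) by the standard moment formula for infinitely divisible laws) are easy to verify; a minimal sketch (sizes ours) follows.

```python
import numpy as np

def frozen_gamma_series(a, b, n_samples, n_terms, rng):
    """Y = sum_k (1/b) * exp(-Gamma_k/a), i.e. (4.6) with every V_k frozen
    at 1: an infinitely divisible variate with mean a/b."""
    G = np.cumsum(rng.exponential(1.0, (n_samples, n_terms)), axis=1)
    return np.exp(-G / a).sum(axis=1) / b

rng = np.random.default_rng(1)
y = frozen_gamma_series(2.0, 1.0, 20000, 100, rng)
# with a = 2, b = 1: E[Y] = 2 and Var(Y) = a/(2 b**2) = 1
```

Since the only randomness left is {E_k}, every dimension fed to the low-discrepancy sequence is an interarrival time, which is precisely why the RQMC gains are so much larger here than for the full gamma series.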
Based upon the standard errors, the RQMC method provides more accurate values by a factor in the range of 37-67, and similar results are observed in the RMSE comparison.
4.2.2. RQMC method for both random sequences. Numerical results of the moment estimation for gamma variates are reported in Table 7. As in the previous examples, the truncation level is fixed at N = 1000. All the numerical results support the effectiveness of the RQMC method (Schemes II, III, and IV) over the plain MC method (Scheme I). The effectiveness of the RQMC method is also observed by comparing Schemes II and III, where {V_k}_{k∈N} is generated, respectively, by the MC and the RQMC methods. Amongst the four schemes, Scheme IV yields the most accurate estimates. The superiority of Scheme IV can be explained in terms of the effective dimension. We present in Table 8 a CER of the moment estimation for gamma random variables. To distinguish it from (CER)_n of (3.5), we define an appropriate CER for Scheme IV as follows. Let {V'_k}_{k∈N} be an i.i.d. copy of {V_k}_{k∈N}, independent of {E_k}_{k∈N}, and set U_0 := {V'_k}_{k∈N}. A CER (CER)_{(m,n)}, slightly different from (3.5), is then defined with the first m interarrival times {E_k}_{k∈N} and the first n exponential random variables {V_k}_{k∈N} held in common on both sides of the covariance operator, the remaining terms being replaced by the independent copies. Recall that in computing (CER)_n via (3.5) a common i.i.d. random sequence U_0 is plugged into both sides of the covariance operator. We present in Table 8 numerical results of (CER)_n and (CER)_{(m,n)}, again with rounding operations. The results provide clear evidence that Scheme IV significantly increases the CER within the first few dimensions, relative to Scheme III. Remarkably, a simple use of the RQMC method without additional variance reduction techniques can capture almost 90% of the total variance with only the first five dimensions out of as many as 1000 nominal dimensions if the sequence is assigned in a smart way.

5. Concluding remarks. In this paper, we have proposed an effective application of QMC methods to shot noise series representations of infinitely divisible random vectors.
We have observed that the largest jumps constituting an infinitely divisible random vector are expressed by only the first few terms of the series and that this structure tends to decrease the effective dimension and thus to increase the efficiency of the QMC method, thanks to the greater uniformity of low-discrepancy sequences in the lower dimensions. We have illustrated the effectiveness of our approach through numerical results of moment and tail probability estimations for stable and gamma random variables. We expect that our results, together with ever increasing computational speed, support a simulation use of shot noise series representations of various infinitely divisible laws.
Let us close by describing some related research topics. First, we have discussed that an explicit representation is rarely available through the inverse Lévy measure method, while some other methods may provide a representation in closed form, as seen in Examples 2.2 and 2.3. Nevertheless, additional random sequences are required for such representations and ruin the effectiveness of the QMC method applied to Poisson interarrival times, as demonstrated in sections 4.2.1 and 4.2.2. In particular, the representation for CGMY random variables in Example 2.3 contains so many additional random sequences that our approach in section 4.2.2 would no longer be effectively applicable. To address this issue, the authors have recently developed in [10] a numerical method for implementing the inverse Lévy measure method on a computer. This is worthwhile for two reasons: (i) the effectiveness of the QMC method is not watered down by other random sequences, and (ii) in principle, any infinitely divisible random vector can then be simulated with shot noise series representation.
Second, it is well known that the series representation can be extended to Lévy processes on a compact time interval [0, T] simply by scattering all the jumps uniformly on the time interval, that is,

{X_t : t ∈ [0, T]} =_L { Σ_{k=1}^{+∞} [ H(Γ_k, U_k) 1(T_k ≤ t) − (t/T) c_k ] : t ∈ [0, T] },

where {T_k}_{k∈N} is a sequence of i.i.d. uniform random variables on [0, T], independent of {Γ_k}_{k∈N} and {U_k}_{k∈N}. This representation is known to be useful in, for example, the simulation of trajectories of Lévy processes, the weak approximation of Lévy-driven stochastic differential equations, and the simulation of fractional Lévy motions (see, for example, Houdré and Kawai [8]). Here, QMC techniques can also be applied to the jump timings {T_k}_{k∈N}, not only to the jump size driving sequence {Γ_k}_{k∈N}, just as in section 4.2.2.
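The uniform-scattering construction can be sketched for a gamma process, using the representation of Example 2.2 with the shape parameter a replaced by aT so that X_T has a gamma distribution with shape aT and rate b; the helper names and sizes are ours.

```python
import numpy as np

def gamma_process_path(a, b, T, t_grid, n_terms, rng):
    """Approximate path of a gamma process (X_t ~ gamma(shape=a*t, rate=b))
    on [0, T]: the jumps of the truncated series of Example 2.2 (with a
    replaced by a*T) are scattered uniformly over the interval via i.i.d.
    timings T_k, and X_t accumulates the jumps with T_k <= t."""
    G = np.cumsum(rng.exponential(1.0, n_terms))            # Poisson arrival times
    jumps = np.exp(-G / (a * T)) * rng.exponential(1.0, n_terms) / b
    Tk = rng.uniform(0.0, T, n_terms)                       # uniform jump timings
    return np.array([jumps[Tk <= t].sum() for t in t_grid])

rng = np.random.default_rng(5)
path = gamma_process_path(2.0, 1.0, 1.0, np.linspace(0.0, 1.0, 11), 400, rng)
```

The resulting path is nondecreasing, as a gamma process must be, and its terminal value reproduces the single-variate series; applying QMC to the timings {T_k} in addition to {Γ_k} and the marks is exactly the extension suggested above.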