A Multivariate Stochastic Degradation Model for Dependent Performance Characteristics

Abstract Simultaneous degradation of multiple dependent performance characteristics (PCs) is a common phenomenon for industrial products. The associated degradation modeling is of practical importance yet challenging. The dependence of the PCs can usually be attributed to two sources, one being the overall system health status and the other the common operating environments. Based on this observation, this study proposes a parsimonious multivariate Wiener process model whose number of parameters increases linearly with the dimension. We introduce a common stochastic time scale shared by all the PCs to model the dependence from the dynamic operating environment. Conditional on the time scale, the degradation of each PC is modeled as the sum of two independent Wiener processes, where one represents the common effects shared by all the PCs, and the other represents degradation caused by randomness unique to this PC. An EM algorithm is developed for model parameter estimation, and extensive simulations are implemented to validate the proposed model and the algorithms. For efficient reliability evaluation under a multivariate degradation model, including the proposed one, a bridge sampling-based algorithm is further developed. The applicability and the advantages of the proposed methods are demonstrated using a multivariate degradation dataset of a coating material. Supplementary materials for this article are available online.


Background
Degradation is common for most industrial products. The degradation can be characterized by the deterioration in performance or condition with usage. For example, the rechargeable lithium-ion battery, which is widely used in electric vehicles, suffers from the well-known problem of capacity degradation with charge-discharge cycles. In most cases, multiple performance or condition indicators are used to characterize the overall performance of the product, and these performance indicators, or performance characteristics (PCs), usually degrade simultaneously. Degradation of these PCs is usually correlated because they determine, and in turn are affected by, the overall health condition of the product. For example, each component in a multi-component system has its own degradation, and the degradation of these components interacts with each other. These components work under a common environment, which further introduces positive correlations in their degradation. Another example is the aging of polymer coating materials under ultraviolet (UV) radiation (Gu et al. 2009). UV radiation causes photooxidative degradation that breaks polymer chains, produces free radicals and reduces the molecular weight, which can lead to a deterioration of mechanical properties and introduce unwanted chemical byproducts (Yousif and Haddad 2013). The coating material contains different chemical compositions. Different compositions under UV radiation can have different but correlated degradation, due to the correlated photooxidation and the common dynamic environments. The mechanical properties of the coating material, such as the coating modulus and hardness, change greatly when the compositions degrade.
To accurately evaluate the product degradation and thus the product reliability, it is necessary to model the degradation of multiple PCs simultaneously. Most existing studies focus on modeling the degradation of bivariate PCs, and copula-based methods are the most widely used; see Pan et al. (2013), Wang, Guo, and Cheng (2014), Peng et al. (2016b), Rodríguez-Picón et al. (2017), Sun et al. (2018), Shi et al. (2021), and Palayangoda and Ng (2021), to name a few. Some copula functions can be directly extended to higher dimensions without increasing the number of parameters, for example, the Gumbel copula and the Frank copula (Nelsen 2006, sec. 4.6). However, these copulas lack sufficient flexibility in modeling the correlations in higher dimensions. Other copulas, such as the Gaussian copula or the vine copula, though flexible enough, have a number of parameters that increases quadratically with the dimension (Aas et al. 2009; Xu et al. 2017). In addition, a copula-based degradation model can be properly defined only when all PCs are measured synchronously, which can be restrictive. In view of these difficulties, this study proposes a general model as an alternative to the copula methods for degradation modeling of multiple dependent PCs, and develops the associated inference procedure.
When there are multiple PCs, a failure is normally defined when the multivariate degradation hits a failure-defining region. For a given mission time, computing the probability of hitting this region is usually done through simulation. However, a simple Monte Carlo using fixed grid points, which is adopted in most existing degradation studies (Peng et al. 2016b), has no control over the accuracy of the computed probability. Another contribution of our article is to propose an adaptive sampling scheme for accurate evaluation of the product reliability, and to demonstrate its effectiveness in controlling the simulation error.

Related Literature
Because of the capability to model the inherent randomness in the degradation process, stochastic processes have been widely applied in degradation modeling (Ye and Xie 2015). Among these models, the Wiener process and its variants are the most popular (Whitmore and Schenkelberg 1997; Liao and Tian 2013; Hajiha, Liu, and Hong 2021; Wang, Liao, and Ma 2021). The Wiener process is a continuous and non-monotone stochastic process, which has been successfully applied to model the degradation of rechargeable batteries, LEDs, mechanical components, etc. (Si et al. 2013; Lei, Li, and Lin 2016). Some authors further extend the traditional Wiener process model to account for heterogeneous populations, measurement errors, and dynamic degradation rates (Whitmore and Schenkelberg 1997; Ye et al. 2013; Si et al. 2014; Zhai and Ye 2017). Two other popular stochastic process models are the gamma process and the Inverse Gaussian (IG) process (Chen and Ye 2018; Shen, Cui, and Ma 2019). These two models share some similarity in that they are both pure jump processes, and their paths are monotone. Applications of the gamma process and the IG process in degradation modeling can be found in Peng et al. (2017) and Chen and Ye (2018), to name a few.
Most existing studies on degradation modeling focus on the degradation of a single PC. Pan et al. (2013) and Wang, Guo, and Cheng (2014) considered the degradation of two PCs, where each PC is modeled by an independent Wiener process and the dependence between the degradation of the two PCs is modeled by a copula function. Balakrishnan (2011), Wang et al. (2015), and Pan and Sun (2014) also considered the degradation of two PCs, but used the gamma process to model the degradation of each PC and a copula function to model the dependence. Peng et al. (2016b) and Duan, Wang, and Wang (2018) instead proposed to use the IG process to model the degradation of each PC and used a copula function to model the dependence. Rodríguez-Picón et al. (2017) considered the scenario where the degradation of two PCs was modeled by marginally heterogeneous stochastic processes, including the gamma process, the IG process, the Wiener process and the geometric Brownian motion, and the dependence between the two PCs was modeled by a copula. Jin and Matthews (2014), Liu et al. (2017), and Dong, Cui, and Si (2020) used a bivariate Wiener process to explicitly model the dependent degradation processes. Xu et al. (2018) modeled the degradation of two PCs using two Wiener processes with a common random factor, where the dependence between the two degradation processes is naturally introduced by the common random factor.
More recent studies have considered more than two dependent PCs. Sun et al. (2016) considered the degradation modeling of multiple dependent PCs, where each PC is modeled by a Wiener process and the dependence is modeled by a copula function. Peng et al. (2016a) proposed to model each of the multiple PCs by a Wiener process or an IG process, and use the copula to model the dependence between different PCs. Xu et al. (2017) exploited the Wiener process to model the degradation of each single PC and the vine copula to model the dependence. Sun, Ye, and Hong (2020) focused on the blocking effects in destructive degradation tests, and proposed a multivariate Wiener process for the multivariate degradation. Lu et al. (2021) considered the degradation of multiple dependent PCs using a general path model, where the dependence among different characteristics is modeled by a multivariate normal random effect.
The copula method is applicable only when all PCs are measured synchronously, while the general path models and the multivariate Wiener process have a number of model parameters that increases quadratically with the dimension. To address these issues, we propose a new Wiener process model for degradation modeling of multiple dependent PCs. The model characterizes the dependence between the degradation of multiple PCs by introducing a common underlying baseline effect, which captures the functional dependence between the PCs, and a common stochastic time scale that captures the dependence arising from the common dynamic environments and working loads. The model is motivated by the degradation mechanism, and it is parsimonious in that the number of parameters is linear in the dimension. We develop a simulation-based algorithm for reliability analysis and an Expectation-Maximization (EM) algorithm to obtain the maximum likelihood estimates for the model parameters. The appropriateness of the proposed model is demonstrated using the degradation data of coating materials (Gu et al. 2009).

Overview
The remainder of the article is organized as follows. Section 2 formulates the proposed model for multiple dependent degradation processes and discusses the properties of the proposed model. Section 3 discusses the reliability analysis based on the proposed model with a simulation-based algorithm. Section 4 develops an EM algorithm to obtain the MLE. The confidence intervals for model parameters are obtained through parametric bootstrap. Section 5 uses Monte Carlo simulations to validate the performance of the proposed model and the EM algorithm. In Section 6, the proposed model is applied to the coating degradation data, and the goodness of fit of the proposed model is validated. Section 7 concludes the study.

Model Formulation
Consider a product with K performance characteristics degrading over time. Following the convention, let the initial degradation of each PC at time t = 0 be zero. We propose to model each PC by a Wiener process as follows:

Y_k(t) = Z(t) + X_k(t), k = 1, . . . , K, (1)

with

Z(t) = ηH(t) + κB_0(H(t)) and X_k(t) = μ_kH(t) + σ_kB_k(H(t)). (2)

Here, H(t) is a monotonically increasing function of t denoting the time scale transformation, and B_k(t), k = 0, . . . , K are independent standard Brownian motions. η and μ_k, k = 1, . . . , K are the drifts of the processes, while κ and σ_k, k = 1, . . . , K are the diffusion coefficients. The formulation in (1) assumes that the degradation of each PC, Y_k(t), contains two elements, that is, the common effect Z(t) and the individual effect X_k(t). The common baseline effect Z(t) is shared among the K performance characteristics, which can be used to explain the functional dependence among the K performance characteristics. For example, the degradation of rechargeable batteries is due to the irreversible wearout of the cathode, and this baseline phenomenon leads to a decrease in the battery capacity and other PCs (Lu, Tao, and Fan 2014). On the other hand, the individual effect X_k(t) explains the particular degradation characteristic of each PC. To ensure model identifiability, we constrain η = 0 in the following discussion.
Conditional on the time scale H(t), the degradation of each PC is modeled as

Y_k(t) = μ_kH(t) + κB_0(H(t)) + σ_kB_k(H(t)),

which is also a Wiener process (in the transformed time H(t)) such that

E[Y_k(t) | H(t)] = μ_kH(t), var[Y_k(t) | H(t)] = (κ² + σ_k²)H(t). (3)

On the other hand, the common effect Z(t) introduces dependence among the multiple degradation processes:

cov(Y_{k1}(t), Y_{k2}(t) | H(t)) = κ²H(t), k_1 ≠ k_2.

Consequently, at each given t, the Pearson correlation coefficient between Y_{k1}(t) and Y_{k2}(t) conditional on H(t) is

κ² / √((κ² + σ_{k1}²)(κ² + σ_{k2}²)). (4)
It turns out that the Pearson correlation coefficient between the degradation of any two PCs is time-invariant. The correlation between the degradation of different PCs depends on the parameter κ²: it increases as κ² increases and decreases to zero as κ² approaches zero. When the product operates under a dynamic environment, the time-varying stresses impose a common dynamic acceleration or deceleration effect on the degradation of the different PCs. We use a common stochastic time scale H(t) to capture this dynamic effect. More specifically, we model H(t) with an IG process:

H(t) ~ IG(Λ(t), ζ^{-1}Λ(t)²). (5)

The IG process is monotonically increasing, and at a given time t, H(t) follows an IG distribution with mean Λ(t) and variance ζΛ(t). In probability theory, such a process is called a subordinator that models the random elapse of "virtual time" under a chronological time scale. The IG process was proposed as a subordinator to the Wiener process in Barndorff-Nielsen (1997), and the resulting compound process, called the normal-IG process, was later found to be useful in modeling a variety of empirical data in physics and finance (Barndorff-Nielsen and Levendorskii 2001; Rolski et al. 2009). In the context of degradation modeling, Wang (2009) proposed a semiparametric degradation model based on the normal-IG process and developed a pseudo-likelihood method for parameter estimation. Zhai and Ye (2018) considered grouped degradation test data and used the stochastic time scale to capture group-specific blocking effects. The stochastic time scale is also used to model the variable aging process of products under dynamic environments, for example, Xu, Zhou, and Tang (2021). The function Λ(t) is a deterministic, monotonically increasing function of t with Λ(0) = 0, which is used to characterize the nonlinear degradation trend.
We assume that Λ(t) has a predetermined parametric form with unknown parameters α, that is, Λ(t) = Λ(t; α), and the specific form can be determined through engineering experience or empirically (Ye et al. 2013). Commonly used forms include the power law form Λ(t) = t^α and the log-linear form Λ(t) = exp(αt) − 1. The parameter ζ quantifies the volatility of the environment. When ζ approaches zero, H(t) degenerates to the deterministic Λ(t) in probability.
Combining (1), (2), and (5), we model the degradation of each PC as the summation of two independent Wiener processes, where one represents the common effects shared by all the PCs, and the other represents degradation caused by randomness unique to this PC. The common effects can be understood as the overall health status of the system that governs the manifestation of all PCs. The other Wiener process captures the accumulation of damage independent of the systemic effect. In addition, we adopt a stochastic time scale to capture the influence of common working environments on these PCs. Such a decomposition of the overall degradation in each PC is motivated from the degradation mechanism of most physical systems. For the proposed model, we have the following proposition.
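To make this construction concrete, the model can be simulated on a fixed grid by drawing IG increments for H and, conditional on them, normal increments for the common and individual effects. The sketch below assumes the power law Λ(t) = t^α and η = 0; the function name and its interface are ours, not from the article.

```python
import numpy as np

def simulate_paths(t_grid, mu, sigma, kappa, zeta, alpha, n_units=1, seed=0):
    """Simulate Y_k(t) = Z(t) + X_k(t) on a strictly increasing positive grid.

    Z(t) = kappa * B_0(H(t)) (eta = 0 for identifiability) and
    X_k(t) = mu_k * H(t) + sigma_k * B_k(H(t)), where H is an IG process
    with mean Lambda(t) = t**alpha and variance zeta * Lambda(t).
    """
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    K, m = len(mu), len(t_grid)
    Lam = np.asarray(t_grid, float) ** alpha
    dLam = np.diff(np.concatenate(([0.0], Lam)))   # increments of Lambda(t)
    Y = np.zeros((n_units, K, m))
    H = np.zeros((n_units, m))
    for i in range(n_units):
        # IG increments: mean dLam, shape dLam**2 / zeta  ->  var = zeta * dLam
        dH = rng.wald(dLam, dLam**2 / zeta)
        # common-effect and PC-specific increments, conditional on dH
        dZ = rng.normal(0.0, kappa * np.sqrt(dH))
        dX = rng.normal(mu[:, None] * dH, sigma[:, None] * np.sqrt(dH))
        H[i] = np.cumsum(dH)
        Y[i] = np.cumsum(dZ + dX, axis=1)
    return Y, H
```

Because the IG increments are strictly positive, each simulated time scale H is strictly increasing, while the Y paths are continuous but non-monotone, as expected of a Wiener-type model.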
Proposition 1. The degradation Y_k(t) has the following expectation and variance:

E[Y_k(t)] = μ_kΛ(t), var[Y_k(t)] = (κ² + σ_k²)Λ(t) + μ_k²ζΛ(t). (6)

Moreover, for two different PCs with k_1 ≠ k_2, the covariance and the Pearson correlation coefficient between Y_{k1}(t) and Y_{k2}(t) are

cov(Y_{k1}(t), Y_{k2}(t)) = (κ² + μ_{k1}μ_{k2}ζ)Λ(t), (7)

ρ(Y_{k1}(t), Y_{k2}(t)) = (κ² + μ_{k1}μ_{k2}ζ) / √((κ² + σ_{k1}² + μ_{k1}²ζ)(κ² + σ_{k2}² + μ_{k2}²ζ)). (8)

The proof is given in supplementary materials-S1.1. Proposition 1 shows that the variance of Y_k(t) is attributed to three sources: the common baseline effect κ²Λ(t), the individual effect σ_k²Λ(t), and the common dynamic time scale μ_k²ζΛ(t). The correlation between the degradation of two PCs is attributed to the common effect and the common dynamic time scale. The Pearson correlation coefficient is time-invariant, and it approaches 1 whenever ζ or κ² approaches ∞. On the contrary, the correlation approaches zero if both ζ and κ² approach zero. In particular, if ζ equals zero, the above coefficient degenerates to (4), where the randomness from the common dynamic time scale is removed. On the other hand, if we overlook the randomness in the baseline effect Z, that is, set κ² = 0, then (8) degenerates to

μ_{k1}μ_{k2}ζ / √((σ_{k1}² + μ_{k1}²ζ)(σ_{k2}² + μ_{k2}²ζ)).

Reliability Analysis
In multivariate degradation, a common definition of failure is to first associate the kth PC with a failure threshold D_k, and then define a soft failure as the event that any of the K performance characteristics exceeds its respective threshold (Peng et al. 2017; Lu et al. 2021). Without loss of generality, suppose the drift parameters μ_k, k = 1, . . . , K are positive. Then, the functional region is S = (−∞, D_1] × · · · × (−∞, D_K], and the first exit time T_f with respect to S can be expressed as

T_f = inf{t ≥ 0 : (Y_1(t), . . . , Y_K(t)) ∉ S}.

The reliability at time τ can then be defined as

R(τ) = Pr(T_f > τ).

Since there is no closed-form expression or approximation for the distribution of the first exit time of such a multivariate stochastic process, a common approach to computing R(τ) is to simulate a large number of sample paths over a given grid and use the proportion of success paths to estimate R(τ). Given a mission time τ, we call a K-dimensional sample path a success path if it stays within the functional region S throughout [0, τ], and a failure path otherwise. Monte Carlo simulation of a sample path might mistakenly classify a failure path as a success path, because the path may be within the functional region at the grid points but shift outside the region between them. A simple Monte Carlo using fixed grid points has no control over this misclassification probability (Dia and Lamberton 2011). To this end, we propose a dedicated adaptive sampling scheme with a misclassification guarantee.
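The fixed-grid Monte Carlo baseline described above can be sketched as follows. The helper name and default settings are ours; note that the estimate is biased upward exactly as discussed, because crossings between grid points are missed.

```python
import numpy as np

def reliability_fixed_grid(tau, D, mu, sigma, kappa, zeta, alpha,
                           n_paths=2000, n_grid=50, seed=0):
    """Naive fixed-grid Monte Carlo estimate of R(tau) = Pr(T_f > tau).

    A path counts as a success if every PC stays below its threshold D_k at
    all grid points in (0, tau].  Threshold crossings between grid points
    are missed, so the estimate is biased upward; the adaptive scheme in
    the text controls this misclassification error.
    """
    rng = np.random.default_rng(seed)
    mu, sigma, D = (np.asarray(a, float) for a in (mu, sigma, D))
    t = np.linspace(tau / n_grid, tau, n_grid)
    dLam = np.diff(np.concatenate(([0.0], t ** alpha)))
    success = 0
    for _ in range(n_paths):
        dH = rng.wald(dLam, dLam**2 / zeta)                 # IG time-scale increments
        dZ = rng.normal(0.0, kappa * np.sqrt(dH))           # common effect
        dX = rng.normal(mu[:, None] * dH, sigma[:, None] * np.sqrt(dH))
        Y = np.cumsum(dZ + dX, axis=1)                      # K x n_grid paths
        if np.all(Y <= D[:, None]):
            success += 1
    return success / n_paths
```

Refining the grid (larger `n_grid`) shrinks the bias but at a fixed cost per path; the adaptive scheme below instead refines only where the crossing risk is non-negligible.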
Our adaptive sampling scheme requires a tolerance limit ε on the misclassification probability. To begin with, we sample (Y_1(τ), . . . , Y_K(τ)) from the degradation model (1) at τ. This can be done by first sampling H(τ) from IG(Λ(τ), ζ^{-1}Λ(τ)²), and then sampling Z(τ) and X_k(τ) from N(0, κ²H(τ)) and N(μ_kH(τ), σ_k²H(τ)), respectively. If any Y_k(τ) = Z(τ) + X_k(τ) exceeds the corresponding threshold D_k, then we get a failure path. Otherwise, it is deemed a success path if the misclassification probability is smaller than the given tolerance. Thus, we should assess the misclassification probability conditional on the simulated values H(τ) and (Y_1(τ), . . . , Y_K(τ)) at τ. The following proposition (Proposition 2) gives an upper bound on this misclassification probability.
The proof is given in supplementary materials-S1.2. Based on the proposition, if (Y_1(τ), . . . , Y_K(τ)) is within S, the probability that the path exits S within [0, τ], conditional on H(τ) and (Y_1(τ), . . . , Y_K(τ)), can be bounded above by the sum of the K per-PC upper bounds, according to the subadditivity of a probability measure. If this upper bound is larger than ε, we first identify the time interval over which the largest upper bound is attained (this interval equals [0, τ] at the first iteration), add its mid-point to the mesh, and sample H, Z and {X_k, k = 1, . . . , K} at this new grid point conditional on all data we have generated so far. The sampling can be done by a carefully designed bridge sampling, and the details are provided in supplementary materials-S2. After the sampling, if all the simulated Y_k values are within the thresholds, we can reuse the above proposition to calculate an upper bound on the probability that Y_k exceeds D_k between two successive grid points, conditional on the simulated values of the path. The sum of all these upper bounds defines a new upper bound on the misclassification probability.
Then, we repeat the above procedure until either the path exits S at a newly added grid point, or the misclassification error bound falls below ε, whichever comes first. The proposed method is more effective than brute-force simulation in terms of the number of simulated points and the required time, as it only samples at the necessary grid points. This is justified through simulation studies; to save space, the detailed comparison results are provided in supplementary materials-S2.3. The algorithm is summarized in Algorithm 1.

Algorithm 1: Adaptive classification of a simulated sample path.
Input: mission time τ, tolerance ε, thresholds D_1, . . . , D_K.
Sample H(τ), Z(τ) and X_k(τ), k = 1, . . . , K, and set Y_k(τ) = Z(τ) + X_k(τ);
If Y_k(τ) > D_k for some k
    Return: The path is a failure path.
Else
    While the bound of the exit probability conditional on the simulated values of the path is larger than ε
        1 Find the interval [t_{j−1}, t_j] with the maximum exit probability bound for some k;
        2 Bisect [t_{j−1}, t_j] into two equal-length subintervals and generate samples for H, Z and X_k, k = 1, . . . , K at (t_{j−1} + t_j)/2 by the bridge sampling in supplementary materials-S2; if a simulated Y_k value at the new grid point exceeds D_k, Return: The path is a failure path.
        3 Calculate the conditional exit probabilities over the two new subintervals for k = 1, . . . , K according to Proposition 2;
        4 Calculate the bound of the exit probability conditional on all the simulated values of the path as the sum of the bounds for each k over all the subintervals.
    End
    Return: The path is a success path.
End
Output: The classification of the path as a success or failure path.
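The exact bound in Proposition 2 is stated with its proof in the supplementary materials. As an illustration of the kind of quantity the algorithm accumulates, the snippet below evaluates the classical Brownian-bridge threshold-crossing probability for a single PC over a single subinterval, conditional on the simulated endpoint values; we assume here, without claiming it is Proposition 2, that the per-interval bound has this standard form, and the helper name is ours.

```python
import math

def bridge_cross_prob(a, b, D, v):
    """Crossing probability for one PC over one subinterval.

    For a Brownian path with simulated endpoint values a and b (both below
    the threshold D) and accumulated variance v over the subinterval, the
    conditional probability of crossing D inside the interval is the
    classical Brownian-bridge formula exp(-2 (D - a)(D - b) / v); the drift
    drops out once both endpoints are conditioned on.  Summing such terms
    over all PCs and subintervals bounds the misclassification probability
    by subadditivity.
    """
    if a > D or b > D:
        return 1.0          # an endpoint is already outside the safe region
    return math.exp(-2.0 * (D - a) * (D - b) / v)
```

The bound shrinks as the threshold moves away from the simulated endpoints and as the subinterval's accumulated variance decreases, which is why bisecting the worst interval drives the total bound below ε.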
Suppose n units are tested, and the degradation of unit i, i = 1, . . . , n, is measured at time points 0 = t_{i,0} < t_{i,1} < · · · < t_{i,m_i}. Let ΔY_{i,k,j} denote the increment of the kth PC of unit i over (t_{i,j−1}, t_{i,j}], ΔY_{i,:,j} = (ΔY_{i,1,j}, . . . , ΔY_{i,K,j})′ the vector of increments, and ΔH_{i,j} the corresponding increment of the time scale. Based on the model formulation in (1) and (2), it is ready to derive that, conditional on ΔH_{i,j}, ΔY_{i,:,j} follows a multivariate normal distribution with mean μΔH_{i,j} and covariance matrix (κ²1_K + diag(σ²))ΔH_{i,j}, where μ = (μ_1, . . . , μ_K)′, 1_K is a K × K matrix with all elements equal to 1, and diag(σ²) is a diagonal matrix with the diagonal σ² = (σ_1², . . . , σ_K²)′. Denote Σ = κ²1_K + diag(σ²) for simplicity.
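The conditional covariance structure can be assembled directly: the common effect contributes κ² to every entry of Σ, and the PC-specific effects only to the diagonal. A minimal sketch (helper names are ours):

```python
import numpy as np

def increment_cov(kappa, sigma):
    """Sigma = kappa^2 * 1_K + diag(sigma^2)."""
    sigma = np.asarray(sigma, float)
    K = len(sigma)
    return kappa**2 * np.ones((K, K)) + np.diag(sigma**2)

def sample_increment(mu, Sigma, dH, rng):
    """Draw dY | dH ~ N(mu * dH, Sigma * dH) for one measurement interval."""
    return rng.multivariate_normal(np.asarray(mu, float) * dH, Sigma * dH)
```

Note that Σ has rank-one-plus-diagonal structure, which is what makes the model parsimonious: K drifts, K diffusions, plus κ, ζ and α, so the parameter count grows linearly in K.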
Furthermore, we note that ΔH_{i,j} ~ IG(ΔΛ_{i,j}, ζ^{-1}ΔΛ_{i,j}²), where ΔΛ_{i,j} = Λ(t_{i,j}) − Λ(t_{i,j−1}). We have the following result for the unconditional distribution of ΔY_{i,:,j}: integrating out ΔH_{i,j} yields a multivariate normal-inverse Gaussian density p(ΔY_{i,:,j} | θ), whose expression involves the modified Bessel function of the second kind K_v(z) (Abramowitz and Stegun 1972).
The proof is given in supplementary materials-S1.3. Because the time scale H and the Wiener processes Z and X_k all have independent increments, ΔY_{i,:,j}, j = 1, . . . , m_i, are independent. Therefore, the log-likelihood function can be readily obtained, and the estimators for the model parameters can be obtained by maximizing

ℓ(θ) = Σ_{i=1}^{n} Σ_{j=1}^{m_i} log p(ΔY_{i,:,j} | θ).

Nevertheless, as p(ΔY_{i,:,j} | θ) is a complex function of θ, the maximization is not analytically tractable. Alternatively, we construct an EM algorithm to obtain the maximum likelihood estimates.

EM Algorithm for Parameter Estimation
Consider the unobserved baseline effects Z and the time scale increments H as latent variables; conditional on Z_i and H_i, the increments of the PCs of unit i are independent normal random variables. The EM algorithm iteratively updates the estimate of θ, where each iteration implements an E-step and an M-step in sequence. Suppose we have obtained an estimate θ^(s) in the sth iteration. In the (s + 1)th iteration, we first obtain the following Q-function in the E-step:

Q(θ | θ^(s)) = E[log p(Y, Z, H | θ) | Y, θ^(s)],

which is the expectation of the complete-data log-likelihood with respect to the conditional distribution p(Z, H | Y, θ^(s)). Here, we suppress the dependence on θ^(s) for notational simplicity. After some algebra (see supplementary materials-S4.1 for the detailed derivation), the required conditional expectations can be obtained in closed form, with a, b_{i,j} and v given in (16). Once the Q-function is obtained, we update the estimates in the M-step as

θ^(s+1) = arg max_θ Q(θ | θ^(s)).

This can be done by taking the partial derivatives of the Q-function with respect to θ and setting the derivatives equal to 0, which yields closed-form updates. The EM algorithm is run until convergence under some criterion, for example, until the L_1 distance between the estimates in two consecutive iterations falls below a predetermined threshold. Then, the maximum likelihood estimates for θ are obtained.
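The model-specific E- and M-step formulas are derived in the supplementary materials; the iteration structure itself, with the L_1 stopping rule just described, can be sketched generically. The `e_step` and `m_step` callables below are hypothetical placeholders for those closed-form updates, not the article's actual expressions.

```python
import numpy as np

def run_em(theta0, e_step, m_step, tol=1e-6, max_iter=500):
    """Generic EM iteration with the L1 stopping rule.

    e_step(theta) should return the conditional expectations (here, of Z
    and H given the data) that define the Q-function, and
    m_step(expectations) should return the maximizer of Q.  Both are
    model-specific placeholders.
    """
    theta = np.asarray(theta0, float)
    for _ in range(max_iter):
        theta_new = np.asarray(m_step(e_step(theta)), float)
        if np.sum(np.abs(theta_new - theta)) < tol:   # L1 distance criterion
            return theta_new
        theta = theta_new
    return theta
```

Each EM iteration never decreases the observed-data log-likelihood, so monitoring ℓ(θ^(s)) alongside the L_1 criterion is a useful sanity check in practice.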
Algorithm 2: Parametric bootstrap resampling. For each unit i = 1, . . . , n:
1. Compute ΔΛ̂_{i,j} = Λ(t_{i,j}; α̂) − Λ(t_{i,j−1}; α̂) for j = 1, . . . , m_i.
2. For j = 1, . . . , m_i, generate a sample ΔH̃_{i,j} from IG(ΔΛ̂_{i,j}, ζ̂^{−1}ΔΛ̂_{i,j}²). (ΔH̃_{i,1}, . . . , ΔH̃_{i,m_i}) corresponds to a realization of the common dynamic time scale for unit i.
3. For j = 1, . . . , m_i, generate a sample ΔZ̃_{i,j} from the normal distribution N(0, κ̂²ΔH̃_{i,j}). (ΔZ̃_{i,1}, . . . , ΔZ̃_{i,m_i}) gives a realization of the baseline effect due to the functional dependence for unit i.
4. For j = 1, . . . , m_i, generate ΔX̃_{i,k,j} from the normal distribution N(μ̂_kΔH̃_{i,j}, σ̂_k²ΔH̃_{i,j}) for k = 1, . . . , K.
5. For j = 1, . . . , m_i, obtain the resample for the degradation increment ΔỸ_{i,k,j} = ΔZ̃_{i,j} + ΔX̃_{i,k,j} for k = 1, . . . , K.
6. Run the above procedure for i = 1, . . . , n and obtain one resample Ỹ of the degradation data.
7. Obtain the estimate θ̂* based on the resample Ỹ using the proposed EM algorithm.

Interval Estimation
The confidence intervals for θ can be obtained by the parametric bootstrap (Efron and Tibshirani 1993; DiCiccio and Efron 1996). With a given degradation dataset, we first obtain the point estimate θ̂ based on the proposed EM algorithm. Then, we generate B bootstrap resamples from the proposed model with the model parameters replaced by θ̂. Each resample is fitted by the proposed EM algorithm to obtain a bootstrap estimate θ̂*. The sample standard deviations (SD) of the B bootstrap estimates {θ̂*_1, . . . , θ̂*_B} are then used as estimates of the standard errors (SE) of the estimators, and their sample quantiles are used for interval estimation of the model parameters.
More specifically, the standard error for the estimate of a particular parameter θ ∈ θ can be obtained as

SE(θ̂) = √( (1/(B − 1)) Σ_{b=1}^{B} (θ̂*_b − θ̄*)² ), with θ̄* = (1/B) Σ_{b=1}^{B} θ̂*_b.

The (1 − p) × 100% confidence interval for θ can be obtained as

[θ̂_([Bp/2]), θ̂_([B(1−p/2)])],

where [x] denotes the integer that is closest to x, and θ̂_(b) denotes the bth smallest value among {θ̂*_1, . . . , θ̂*_B}. A detailed procedure to obtain the bootstrap estimates {θ̂*_1, . . . , θ̂*_B} is given in Algorithm 2.
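The SE and percentile-CI computations can be sketched as follows; the helper name is ours, and the order-statistic indexing follows the [·] nearest-integer convention above.

```python
import numpy as np

def bootstrap_se_ci(boot_estimates, p=0.10):
    """SE and (1 - p) percentile CI from B bootstrap replicates of one parameter.

    SE is the sample standard deviation of the replicates; the CI endpoints
    are the [B * p/2]-th and [B * (1 - p/2)]-th order statistics.
    """
    est = np.sort(np.asarray(boot_estimates, float))
    B = len(est)
    se = est.std(ddof=1)
    lo = est[int(round(B * p / 2)) - 1]        # [.] is nearest-integer, 1-indexed
    hi = est[int(round(B * (1 - p / 2))) - 1]
    return se, (lo, hi)
```

With B = 1000 and p = 0.10, the interval endpoints are the 50th and 950th smallest bootstrap estimates.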

Simulation Study
In accordance with the real case in the following section, we consider degradation with three PCs, that is, K = 3. The true model parameters are set as μ = (2.8, 3.3, 1.9)′, σ = (3.8, 7.9, 2.8)′, κ = 4.2, ζ = 4.8, and the nonlinear trend is modeled by the power law Λ(t) = t^α with α = 1.2. Assume that the degradation of n units is measured periodically with Δt = 1 and all the m_i are the same, that is, m_1 = · · · = m_n ≡ m. We examine four combinations of n and m, that is, (n, m) = (10, 50), (10, 100), (50, 50), and (50, 100). For each combination, we run 1000 Monte Carlo replications. The proposed EM algorithm is applied to each synthetic dataset for point estimation. Table 1 presents the biases and the mean squared errors (MSEs) of the parameter estimators computed from the 1000 replications for each of the four combinations of (n, m). As can be seen, the biases are small for all the parameters, and the MSEs decrease as the sample size n or the number of measurements m increases. This indicates that the model can provide a satisfactory fit with moderate n and m.

Validation of the Interval Estimation using Bootstrap
To validate the performance of the interval estimation by the parametric bootstrap, we apply the bootstrap procedure outlined in Section 4.2 with B = 1000 to each synthetic dataset in the simulation above. Table 2 gives the average of the standard errors and the empirical coverage probabilities of the 90% confidence intervals for the model parameters by the parametric bootstrap. The standard deviations of the point estimators computed based on the 1000 Monte Carlo replications are also provided for comparison. Clearly, the average of the standard errors agrees well with the standard deviation in all the four combinations (n, m). In addition, the coverage probabilities for all the parameters are close to the nominal value. This implies that the parametric bootstrap for interval estimation is satisfactory for moderate sample sizes.

Estimates of the Correlation Coefficients between PCs
According to (8), the correlation coefficients of the three pairs of PCs can be computed as ρ(Y_1, Y_2) = 0.6455, ρ(Y_1, Y_3) = 0.7904, and ρ(Y_2, Y_3) = 0.6343. In each replication, we can obtain the correlation coefficients based on the estimated model parameters. Table 3 shows the biases, MSEs and standard deviations for the three correlation coefficients based on the 1000 replications. Similar to the parameter estimates, the biases of the estimated correlation coefficients are relatively small, and their MSEs and standard deviations decrease as n and m increase. This shows that the proposed model can provide satisfactory estimates of the correlations between PCs under a moderate sample size. For each synthetic dataset, we also obtain the standard errors of the estimates and construct confidence intervals for the correlation coefficients by the parametric bootstrap with B = 1000. The average of the bootstrap standard errors and the empirical coverage probabilities of the 90% bootstrap confidence intervals are presented in Table 3. Again, the average of the standard errors agrees well with the standard deviations, and the coverage probabilities are close to the nominal level.

Table 3. Biases, MSEs and standard deviations (SD) of the estimators, average of the bootstrap standard error (SE) and coverage probabilities of the 90% bootstrap CI for the correlation coefficients under the four combinations of (n, m).
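These values follow directly from (8) evaluated at the true parameters of the simulation study; a quick numerical check (the helper name is ours):

```python
import math

def corr(mu1, mu2, s1, s2, kappa, zeta):
    """Unconditional correlation between two PCs from Proposition 1:
    (kappa^2 + mu1*mu2*zeta) /
    sqrt((kappa^2 + s1^2 + mu1^2*zeta) * (kappa^2 + s2^2 + mu2^2*zeta)).
    """
    num = kappa**2 + mu1 * mu2 * zeta
    den = math.sqrt((kappa**2 + s1**2 + mu1**2 * zeta) *
                    (kappa**2 + s2**2 + mu2**2 * zeta))
    return num / den

# true parameters of the simulation study
mu, sigma, kappa, zeta = (2.8, 3.3, 1.9), (3.8, 7.9, 2.8), 4.2, 4.8
rho12 = corr(mu[0], mu[1], sigma[0], sigma[1], kappa, zeta)  # ~0.6455
rho13 = corr(mu[0], mu[2], sigma[0], sigma[2], kappa, zeta)  # ~0.7904
rho23 = corr(mu[1], mu[2], sigma[1], sigma[2], kappa, zeta)  # ~0.6343
```

Note that the correlation is time-invariant, so no time argument is needed: Λ(t) cancels between numerator and denominator.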

Data Description
We consider the chemical degradation data of the amine-cured epoxy coating material from the National Institute of Standards and Technology (Gu et al. 2009). The coating material degrades under UV radiation, and the amounts of useful components decrease. This process is monitored periodically in a degradation test with Fourier transform infrared spectroscopy, and the changes in the heights of the peaks at particular wavenumbers represent the degradation of the corresponding chemical components. The damage at the following three peaks is of interest:
• the peak at 1250 cm−1 from C-O stretching of aryl ether (Y_1),
• the peak at 1510 cm−1 from benzene ring stretching (Y_2), and
• the peak at 2925 cm−1 from CH_2 anti-symmetric stretching (Y_3).
Therefore, we have three PCs in this case, that is, K = 3. For demonstration, we consider degradation data from the test under the condition that the UV spectrum is centered at the wavelength of 353 nm, the spectral intensity is 10%, the temperature is set at 25°C and the relative humidity is 0%. There are four units in the test, that is, n = 4. The degradation observations for the three PCs are collected at m_1 = m_2 = 59 time points for the first two units and at m_3 = m_4 = 62 time points for the other two units. Figure 1 shows the degradation measurements of the four units. From the figure, we can observe significant differences in the degradation of the three PCs. The degradation at 2925 cm−1 is significantly slower than that at 1250 cm−1 and 1510 cm−1, while the degradation at 1510 cm−1 is slightly faster than that at 1250 cm−1. On the other hand, the degradation of the three PCs appears to be positively correlated; for example, the degradation of unit 4 appears to be faster than that of unit 2.

Model Fitting
To examine the correlation among the degradation of the three PCs, we apply the proposed model to the chemical degradation data. Since the degradation paths exhibit a nonlinear pattern, we explore the power law Λ(t; α) = t^α for the time scale function. The estimates for the model parameters θ = {μ_1, μ_2, μ_3, σ_1², σ_2², σ_3², κ², ζ, α}, together with the 90% confidence intervals based on the parametric bootstrap, are presented in Table 4. From the table, it can be noted that the degradation at 2925 cm−1 (Y_3) is the slowest among the three PCs, and the degradation at 1510 cm−1 (Y_2) is slightly faster than that at 1250 cm−1 (Y_1). This tallies with our observation from the degradation paths in Figure 1. Figure 2 gives the estimated mean degradation and the corresponding 90% point-wise prediction band for the degradation at the three peaks. As can be seen from the figure, the 90% prediction bands cover the actual degradation paths appropriately for all three PCs, which supports the adequacy of the proposed model.
The estimated κ² and σ_k², k = 1, 2, 3, are of the same magnitude, which indicates that the dependence due to the common baseline effect is non-negligible. In fact, the correlation coefficients conditional on the common time scale H can be obtained according to (4), with 90% parametric-bootstrap confidence intervals (e.g., [0.328, 0.464]). The correlation between the degradation at 1250 cm−1 (Y_1) and 2925 cm−1 (Y_3) conditional on H is the highest. Moreover, the unconditional correlation coefficients between the degradation can be computed in a similar manner, with 90% confidence intervals (e.g., [0.578, 0.694]). Therefore, the degradation of the three PCs is significantly correlated, especially between 1250 cm−1 (C-O stretching) and 2925 cm−1 (CH_2 anti-symmetric stretching).

Table 5. Estimation results for models when (1) the parameter κ is imposed to be 0; (2) the parameter ζ is imposed to be 0; and (3) both κ and ζ are imposed to be 0. The estimation results of the proposed model from Table 4 are also listed for a clearer comparison. Shaded values highlight the best model.
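Conditional on the common time scale H, each PC is the sum of a common Wiener term with variance parameter κ² and an individual term with variance parameter σ_k², so the conditional correlation between PCs j and k reduces to κ² / √((κ² + σ_j²)(κ² + σ_k²)). A minimal Monte Carlo sketch with hypothetical parameter values (not the fitted values from Table 4) checks this closed form:

```python
import math
import random

def conditional_corr(kappa2, sig2_j, sig2_k):
    # rho_jk given H: kappa^2 / sqrt((kappa^2 + sig_j^2) * (kappa^2 + sig_k^2))
    return kappa2 / math.sqrt((kappa2 + sig2_j) * (kappa2 + sig2_k))

# Hypothetical parameter values (illustration only)
kappa2, sig2 = 0.8, [0.5, 0.9, 0.7]
h = 2.0            # a fixed value of the common time scale H(t)
n = 200_000
random.seed(1)

# Simulate Y_1 = Z + X_1 and Y_3 = Z + X_3 at time-scale value h
y1, y3 = [], []
for _ in range(n):
    z = random.gauss(0.0, math.sqrt(kappa2 * h))        # common term
    y1.append(z + random.gauss(0.0, math.sqrt(sig2[0] * h)))
    y3.append(z + random.gauss(0.0, math.sqrt(sig2[2] * h)))

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return cov / math.sqrt(va * vb)

theory = conditional_corr(kappa2, sig2[0], sig2[2])
mc = corr(y1, y3)
print(round(theory, 3), round(mc, 3))  # the two values should agree closely
```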

Model Comparison
To validate the correlation structure, we consider the following three degenerate versions of the proposed model, where (a) the parameter κ is imposed to be 0; (b) the parameter ζ is imposed to be 0; and (c) both κ and ζ are imposed to be 0. The estimation results for the three models are given in Table 5. It can be seen that the estimated parameters differ significantly from those of the proposed model when the dependence among the three PCs is overlooked. The log-likelihood of the model considering both sources of correlation is much higher than those of the other three models. To make a fair comparison, we also exploit the Akaike information criterion (AIC), whose values are given in the last line of Table 5. The AIC values also suggest that the proposed model considering the two sources of correlation is the best. This implies that there does exist considerable dependence among the degradation of the different PCs for the coating material, and the proposed model successfully captures this dependence.

The estimated α implies that the degradation trends of the three PCs are nonlinear. To justify the nonlinear degradation trend, we also fit the proposed model with (a) a log-linear Λ(t), that is, Λ(t) = exp(αt) − 1, and (b) a linear Λ(t), that is, Λ(t) = t, to the data. The estimation results are given in Table 6. Comparing the log-likelihood values of the three models, we see that the model with the log-linear trend is slightly better than the one with the power-law trend, while both models are significantly better than the model with the linear trend. The AIC values also suggest that the models with nonlinear trends provide a better fit to the coating degradation data, and the model with the log-linear trend gives the best fit.

Table 6. Estimation results for the proposed model under the power-law Λ(t), the log-linear Λ(t), and the linear Λ(t).

To further verify the proposed model, we also fit the dataset by the multivariate Wiener process (Sun, Ye, and Hong 2020). The multivariate Wiener process model under a time scale transformation Λ(t) has the following form:

Y(t) = μ Λ(t) + Σ^{1/2} B(Λ(t)),

where Y(t) = (Y_1(t), . . . , Y_K(t))′, μ = (μ_1, . . . , μ_K)′, B(t) = (B_1(t), . . . , B_K(t))′ is a vector of K independent standard Wiener processes, and Σ is the K × K covariance matrix. This model has K(K + 3)/2 unknown model parameters in addition to the parameters in Λ(t). Note that the multivariate Wiener process model is equivalent to the model with Wiener processes as marginals and with a Gaussian copula for the correlation (Peng et al. 2016a).

Table 7. Log-likelihood and AIC values for the proposed model and the three-dimensional Wiener process (3d Wiener) under the power-law Λ(t), the log-linear Λ(t), and the linear Λ(t).

Table 7 lists the log-likelihood and the AIC values for the three-dimensional Wiener process model with a power-law trend, a log-linear trend, and a linear trend, respectively. Although the three-dimensional Wiener process model has one more parameter than the proposed model (nine parameters versus eight, in addition to the parameters in Λ(t)), the log-likelihood values of the proposed model are much larger than those of the three-dimensional Wiener process model. Accordingly, the AIC values of the proposed model are much smaller, which suggests that the proposed model provides a better fit.
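The AIC comparisons above follow the standard definition AIC = 2p − 2ℓ, where p is the number of free parameters and ℓ the maximized log-likelihood; the model with the smaller AIC is preferred. A minimal sketch with hypothetical log-likelihood values (placeholders, not the values in Table 7):

```python
def aic(loglik, n_params):
    # AIC = 2 * (number of free parameters) - 2 * log-likelihood; smaller is better
    return 2 * n_params - 2 * loglik

# Hypothetical maximized log-likelihoods (illustration only):
# the proposed model has 8 parameters, the 3d Wiener model has 9,
# both in addition to the parameters of the time-scale function.
models = {
    "proposed (8 params)": (120.0, 8),
    "3d Wiener (9 params)": (105.0, 9),
}
for name, (ll, p) in models.items():
    print(name, "AIC =", aic(ll, p))

best = min(models, key=lambda m: aic(*models[m]))
print("preferred model:", best)
```

The penalty term 2p is what makes the comparison "fair": a model with more parameters must buy a sufficiently larger log-likelihood to win.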

Reliability Analysis
Following Lu et al. (2021), suppose the failure thresholds for the three PCs are −0.4, −0.58, and −0.27, respectively. Figure 3 shows the estimated reliability (point estimates and the corresponding 90% confidence bands) based on the proposed model under the power-law Λ(t) and the log-linear Λ(t). The reliability when the correlation among the degradation is ignored, that is, when the degradation of the three PCs is assumed independent, is also provided for comparison. As can be noticed, when the correlation is ignored, the reliability is significantly underestimated. This would lead to a conservative assessment of the product reliability and may result in unnecessary maintenance actions. Therefore, it is important to take the dependence among the degradation of multiple PCs into account when scheduling maintenance actions.
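The direction of the bias can be illustrated with a simple Monte Carlo sketch: simulate degradation paths on a time grid either with a common Wiener term shared by all PCs or with that term drawn independently for each PC (so the marginals are unchanged), and estimate the probability that no PC crosses its threshold. All parameter values are hypothetical; this is a didactic simulation, not the paper's bridge sampling algorithm.

```python
import math
import random

random.seed(2)

# Hypothetical parameters (illustration only)
mu = [-0.020, -0.025, -0.015]          # drifts of the three PCs
sig2 = [0.004, 0.004, 0.004]           # individual variance parameters
kappa2 = 0.006                         # common-term variance parameter
D = [-0.4, -0.58, -0.27]               # failure thresholds (Lu et al. 2021)
alpha = 0.8
grid = [0.5 * i for i in range(1, 61)] # time grid up to t = 30

def failed(correlated):
    """Simulate one unit; True if any PC crosses its threshold on the grid."""
    y, prev = [0.0, 0.0, 0.0], 0.0
    for t in grid:
        d = t ** alpha - prev          # increment of the power-law time scale
        prev = t ** alpha
        z = random.gauss(0.0, math.sqrt(kappa2 * d))
        for k in range(3):
            # Under independence, redraw the "common" term per PC so that
            # the marginal law of each PC is identical in both scenarios.
            common = z if correlated else random.gauss(0.0, math.sqrt(kappa2 * d))
            y[k] += mu[k] * d + common + random.gauss(0.0, math.sqrt(sig2[k] * d))
            if y[k] <= D[k]:
                return True
    return False

n = 10_000
r_dep = sum(not failed(True) for _ in range(n)) / n
r_ind = sum(not failed(False) for _ in range(n)) / n
print(f"reliability with dependence: {r_dep:.3f}, ignoring it: {r_ind:.3f}")
```

Because the positive dependence makes failures of the three PCs cluster on the same units, the probability that at least one PC fails is smaller than under independence, so ignoring the correlation understates reliability.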

Conclusions
Degradation modeling of multiple correlated performance characteristics (PCs) is of practical importance yet challenging. This study proposes a Wiener process model for multivariate degradation modeling, where the correlation is characterized through a common stochastic time scale and a common baseline effect. The proposed model accounts for the correlation arising from the functional dependence between different PCs, as well as the correlation induced by common dynamic environments. Meanwhile, the model is analytically tractable, and an EM algorithm is developed for parameter estimation. An adaptive sampling algorithm is proposed for reliability analysis based on the proposed model; this sampling scheme is also applicable to more general multivariate degradation models. An extensive simulation study verifies the performance of the model for moderate sample sizes. We applied the proposed model to the degradation dataset of a polymer coating material, and the results show good applicability of the model in dealing with multiple dependent degradation processes.
The proposed model assumes that all the PCs share a common effect Z(t), which can be interpreted as the overall health status of the system that governs the degradation of all PCs. In practice, the influence of the system health status on different PCs may have different magnitudes. In this case, we can modify the summation in (1) into Y_k(t) = β_k Z(t) + X_k(t), where β_k is a scaling factor for the common effect, with β_1 = 1 for identifiability. Parameter estimation for this modified model can be done using an EM algorithm similar to the one proposed in Section 4.1. The modified model introduces K − 1 additional parameters, which is linear in K, and increases the flexibility of the model. To check whether such complexity is warranted, the likelihood ratio test can be used to test the null hypothesis β_1 = β_2 = · · · = β_K, where the null distribution of the test statistic is approximately χ² with K − 1 degrees of freedom.
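For K = 3 PCs the likelihood ratio test above has 2 degrees of freedom, in which case the chi-square survival function has the closed form exp(−x/2) and no special-function library is needed. A minimal sketch with placeholder log-likelihood values (not fitted results):

```python
import math

def lrt_pvalue_df2(ll_full, ll_null):
    """Likelihood ratio test with 2 degrees of freedom (the K = 3 case).

    The LR statistic is 2 * (ll_full - ll_null); under the null it is
    approximately chi-square with 2 df, whose survival function is exp(-x/2).
    """
    stat = 2.0 * (ll_full - ll_null)
    return stat, math.exp(-stat / 2.0)

# Hypothetical maximized log-likelihoods (placeholders):
# full model with free beta_2, beta_3 vs. null model beta_1 = beta_2 = beta_3
stat, p = lrt_pvalue_df2(ll_full=152.4, ll_null=148.1)
print(f"LR statistic = {stat:.2f}, p-value = {p:.4f}")
```

For general K, the p-value would instead come from the χ²_{K−1} survival function (e.g., a chi-square routine from a statistics library).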

Supplementary Materials
Technical Materials: The supplementary file provides technical details of the Propositions, the adaptive sampling algorithm, additional simulation results in Section 5 and details about the EM algorithm (PDF file). Source Codes: The Matlab codes for the EM algorithm are provided. The coating material dataset used in this study is also included (ZIP file).