Computational methods for a copula-based Markov chain model with a binomial time series

Abstract A copula-based Markov chain model can flexibly capture serial dependence in a time series. However, computational developments for copula-based Markov models remain insufficient for discrete marginal models compared with continuous ones. In this article, we develop computational methods for a binomial time series under the Clayton and Joe copulas. The methods include data generation, parameter estimation, model selection, and goodness-of-fit testing. We implement the methods in our R package Copula.Markov (https://CRAN.R-project.org/package=Copula.Markov). We conduct simulations to assess the performance of the developed methods. Finally, the proposed method is illustrated with a real dataset.


Introduction
Let Y_t be a random variable for a measurement of interest collected over time indices t = 1, 2, ..., T, where T is a fixed integer. A time series is defined by {Y_t : t = 1, 2, ..., T}, where independence among the Y_t is not assumed. If the time series is stationary, the joint distribution of (Y_{t-1}, Y_t) is identical for t = 2, 3, ..., T (Wieringa 1999). Also, the marginal distribution of Y_t is identical for all t, and it can be a continuous or discrete distribution.
If a time series {Y_t : t = 1, 2, ..., T} exhibits serial dependence, autoregressive models have been applied to estimate the correlation between two adjacent measurements Y_{t-1} and Y_t (Box, Jenkins, and Reinsel 2008). The first-order autoregressive model, known as the AR(1) model, is the simplest model that gives a stationary time series. However, the definition of the AR(1) model for a discrete-valued time series is not as straightforward as that for a continuous-valued time series (McKenzie 1985; Weiß 2018).
Copula-based Markov chain models for a time series can accommodate both continuous-valued and discrete-valued cases in a unified formulation (Joe 1997; Sun, Huang, et al. 2020). Darsow, Nguyen, and Olsen (1992) first introduced copula-based Markov chain models for serially correlated series {Y_t : t = 1, 2, ..., T}, where the dependence structure of adjacent pairs is defined through

P(Y_{t-1} ≤ x, Y_t ≤ y) = C(G(x), G(y)),

where C : [0, 1]^2 → [0, 1] is called a copula (Nelsen 2006). The resultant stationary time series has the marginal cumulative distribution function (CDF) G(y) = P(Y_t ≤ y) for t = 1, 2, ..., T (Chen and Fan 2006), which can be a continuous or discrete distribution.
To fit an observed time series to the copula model, researchers typically use a one-parameter copula that parametrizes the autocorrelation. The most common example is the Clayton copula

C_α(u, v) = (u^{-α} + v^{-α} - 1)^{-1/α} I(u^{-α} + v^{-α} - 1 > 0),

where α ∈ (-1, ∞)\{0} and I(·) is the indicator function. Another example is the Joe copula

C_α(u, v) = 1 - {(1 - u)^α + (1 - v)^α - (1 - u)^α (1 - v)^α}^{1/α},

where α ≥ 1. The Clayton copula has lower tail dependence and is very popular in survival data analyses (Emura and Chen 2016; Emura, Matsui, and Rondeau 2019; Emura, Sofeu, et al. 2021). The Joe copula is also popular, and it has upper tail dependence. The need to model lower or upper tail dependence via copulas is especially evident in the analysis of financial time series (McNeil, Frey, and Embrechts 2005; Domma, Giordano, and Perri 2009; Dewick and Liu 2022).
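As a quick illustration, the two copula functions above and the Clayton tau formula can be coded directly. This is a minimal Python sketch (the paper's own implementation is in R, in the Copula.Markov package); the function names are ours:

```python
def clayton(u, v, alpha):
    """Clayton copula C_a(u, v) = (u^-a + v^-a - 1)^(-1/a) I(u^-a + v^-a - 1 > 0)."""
    s = u ** (-alpha) + v ** (-alpha) - 1.0
    return s ** (-1.0 / alpha) if s > 0.0 else 0.0

def joe(u, v, alpha):
    """Joe copula C_a(u, v) = 1 - {(1-u)^a + (1-v)^a - (1-u)^a (1-v)^a}^(1/a), a >= 1."""
    x, y = (1.0 - u) ** alpha, (1.0 - v) ** alpha
    return 1.0 - (x + y - x * y) ** (1.0 / alpha)

def clayton_tau(alpha):
    """Kendall's tau implied by the Clayton copula: tau = alpha / (alpha + 2)."""
    return alpha / (alpha + 2.0)
```

A useful sanity check is the copula boundary condition C(u, 1) = u, which both functions satisfy.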
Under the copula-based Markov chain model, likelihood-based estimation methods were proposed by Joe (1997), where the parametric forms of both the marginal and the copula are specified. Semiparametric estimation methods were developed by Chen and Fan (2006), where the form of the marginal is unspecified; see also Li, Tang, and Wang (2019) and Zhang, Zhou, and Lin (2021) for prediction and goodness-of-fit methods under the semiparametric model.
To fit copula-based Markov chain models to an observed time series, parametric models for the marginal have been considered with maximum likelihood estimation. Long and Emura (2014) considered the normal distribution G(y) = Φ{(y - μ)/σ}, where μ = E[Y_t], σ² = Var(Y_t), and Φ(·) is the CDF of N(0, 1). Emura, Long, and Sun (2017) developed an R package Copula.Markov (https://CRAN.R-project.org/package=Copula.Markov) for the normal model under the Clayton and Joe copulas. Huang and Emura (2021) proposed a goodness-of-fit test and a model selection method for the normal model. Sun, Lee, et al. (2020) suggested Bayesian inference for the t-distribution. Lin, Emura, and Sun (2021) proposed a normal mixture model. The book of Sun, Huang, et al. (2020) revisited these models with data examples and described the package Copula.Markov. Recently, Huang, Wang, and Emura (2021) proposed a Weibull marginal model for survival time series with an R package Copula.Markov.survival (https://CRAN.R-project.org/package=Copula.Markov.survival). All these developments were made for continuous marginal distributions.
The developments for copula-based Markov models remain insufficient for discrete marginal models compared with continuous ones. Joe (1997) and Joe (2016) provide theoretical frameworks for discrete-valued time series under copula-based Markov models. Escarela, Mena, and Castillo-Morales (2006) developed a copula-based Markov transition model for binary time series with medical applications. However, computational tools, simulation studies, and case studies are largely missing for discrete models. In particular, software packages are required for these models to be reliably and routinely employed by users. Some developments for Poisson-type distributions (including the zero-inflated Poisson and Conway-Maxwell-Poisson distributions) appear in Alqawba, Diawara, and Rao Chaganty (2019), Alqawba, Fernando, and Diawara (2021), and Chapter 7 of Sun, Huang, et al. (2020). However, computational tools for binomial time series models have not been developed in the literature.
This article develops computational methods for a copula-based Markov chain model to analyze binomial time series data. Section 2 introduces the copula-based model. Section 3 describes the proposed computational methods. Section 4 reviews the existing models to be compared with the copula-based model. Section 5 conducts simulation studies, and Sec. 6 analyzes a real dataset. Section 7 concludes the article. Appendices include some technical details.

Model and data generation
This section introduces the model we consider for a stationary binomial time series. We also propose a data generation method that is useful for conducting simulations and bootstrap procedures.
Binomial data arise in many scientific fields, where researchers observe the number of events (failures or successes) in a fixed number of trials n. An observed time series thus takes values Y_t ∈ {0, 1, ..., n} for an integer n and an unknown event probability p ∈ (0, 1). It is critical to develop an adequate statistical model and inference method tailored to binomial time series, which differ markedly from continuous time series and Poisson time series.

Copula-based Markov chain model
We introduce a copula-based Markov chain model for a time series {Y_1, Y_2, ..., Y_T} through

P(Y_{t-1} ≤ x, Y_t ≤ y) = C_α(G(x), G(y)),

where C_α : [0, 1]^2 → [0, 1] is a copula with a parameter α. As in Panagiotelis, Czado, and Joe (2012), the marginal follows Y_t ~ Bin(n, p), and the conditional CDF of Y_t given Y_{t-1} = y_{t-1} is

G(y_t | y_{t-1}) = { C_α(G(y_t), G(y_{t-1})) - C_α(G(y_t), G(y_{t-1} - 1)) } / g(y_{t-1}),

where g(y) = G(y) - G(y - 1) is the probability mass function of Bin(n, p). The Pearson correlation Corr(Y_t, Y_{t-1}) is usually difficult to derive under copula models. However, it is easier and more meaningful to derive Kendall's tau between Y_t and Y_{t-1} as a function of α. For instance, under the Clayton copula, Kendall's tau is τ = α/(α + 2). To generate {Y_1, Y_2, ..., Y_T}, we adopt the inverse method. That is, given Y_{t-1} = y_{t-1}, we generate Y_t by solving G(Y_t | y_{t-1}) = U_t, where U_t ~ Unif(0, 1). Thus, we derive the following:

Algorithm 1: Data generation
Step 1: Generate Y_1 ~ Bin(n, p).
Step 2: Generate U_t ~ Unif(0, 1) for t = 2, 3, ..., T. Given y_{t-1}, obtain Y_t as the solution to G(y_t | y_{t-1}) = U_t for t = 2, 3, ..., T.

We implemented Algorithm 1 for the Clayton and Joe copulas in the Copula.Markov R package (https://CRAN.R-project.org/package=Copula.Markov). The solution in Step 2 is obtained as follows. First, we find 0 ≤ V_t ≤ 1 that solves

0 = C_α(V_t, G(y_{t-1})) - C_α(V_t, G(y_{t-1} - 1)) - U_t × g(y_{t-1}).

Although there is no analytical solution, a numerical solution can be obtained, for example, by applying the R uniroot(.) function to the right-hand side of the equation. Next, we find Y_t that satisfies G(Y_t) = V_t, namely Y_t = G^{-1}(V_t), where G^{-1}(v) = inf{x : G(x) ≥ v} is the quantile function. The computation of G^{-1}(·) can be done, for instance, by using the R qbinom(.) function.
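Algorithm 1 can be sketched in a few lines. The Python version below specializes to the Clayton copula with α > 0 and, because the margin is discrete with a small support, inverts the conditional CDF by scanning the n + 1 support points instead of the continuous root finding via uniroot used in the R implementation. All function names are illustrative, not the package's API:

```python
import math
import random

def binom_cdf(y, n, p):
    """G(y) = P(Y <= y) for Y ~ Bin(n, p), with G(-1) = 0."""
    if y < 0:
        return 0.0
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(min(y, n) + 1))

def clayton(u, v, alpha):
    """Clayton copula for alpha > 0; arguments clamped away from 0 for stability."""
    u, v = max(u, 1e-12), max(v, 1e-12)
    return (u ** (-alpha) + v ** (-alpha) - 1.0) ** (-1.0 / alpha)

def cond_cdf(y, y_prev, n, p, alpha):
    """G(y | y_prev) = {C(G(y), G(y_prev)) - C(G(y), G(y_prev - 1))} / g(y_prev)."""
    g_prev = binom_cdf(y_prev, n, p) - binom_cdf(y_prev - 1, n, p)
    v = binom_cdf(y, n, p)
    return (clayton(v, binom_cdf(y_prev, n, p), alpha)
            - clayton(v, binom_cdf(y_prev - 1, n, p), alpha)) / g_prev

def generate_clayton_binomial(T, n, p, alpha, seed=1):
    """Algorithm 1: inverse-method generation of a Clayton-Markov binomial series."""
    rng = random.Random(seed)
    y = sum(rng.random() < p for _ in range(n))    # Step 1: Y_1 ~ Bin(n, p)
    series = [y]
    for _ in range(1, T):
        u = rng.random()                           # Step 2: U_t ~ Unif(0, 1)
        # discrete inverse: smallest y with G(y | y_prev) >= U_t
        y = next(k for k in range(n + 1)
                 if cond_cdf(k, series[-1], n, p, alpha) >= u - 1e-9)
        series.append(y)
    return series
```

Because the conditional CDF is inverted exactly on the support, the marginal of the generated series is Bin(n, p), as required for stationarity.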
We end this section by emphasizing the importance of Algorithm 1. First, we will use Algorithm 1 to generate binomial time series in the subsequent simulation studies (Sec. 5). Second, Algorithm 1 is necessary to perform a parametric bootstrap (see Sec. 3.5 for details).
Remark. If the number n is small, the choice of a copula family becomes less important since the model is sufficiently defined by a small number of transition probabilities. A concern is that infinitely many copulas could yield the same transition probabilities for discrete margins (Genest and Nešlehová 2007; Faugeras 2017; Genest, Nešlehová, and Rémillard 2017; Geenens 2019). However, this problem is not relevant for computing the MLE under a one-parameter copula family, where one needs to identify the most probable parameter from the observed data. Once the copula family (e.g., the Clayton copula) is parametrically specified, the identifiability issue does not arise.

Proposed computational methods
This section introduces the proposed computational methods for the maximum likelihood estimator (MLE) and the implementation of model selection and goodness-of-fit test procedures.

Maximum likelihood estimation
We describe the computation of the MLE under the Clayton copula model; the computation for other copulas is similar, with trivial modifications. We write the Clayton copula as

C_α(u, v) = (u^{-α} + v^{-α} - 1)^{-1/α} I(u^{-α} + v^{-α} - 1 > 0).

The conditional probability mass function of Y_t given Y_{t-1} = y_{t-1} is

h(y_t | y_{t-1}) = G(y_t | y_{t-1}) - G(y_t - 1 | y_{t-1}),

for y_t = 0, 1, ..., n. Under the Markovian assumption and given the data {y_t : t = 1, 2, ..., T}, the log-likelihood function is derived as

ℓ(p, α) = log g(y_1) + Σ_{t=2}^{T} log h(y_t | y_{t-1}).

The MLE is defined by (p̂_ML, α̂_ML) = argmax_{p, α} ℓ(p, α). When computing the MLE and its standard error (SE) numerically, one should respect the parameter constraints α ∈ (-1, ∞)\{0} and 0 < p < 1. MacDonald (2014) suggested removing such constraints by transformation. We use the transformations P = log(1/p - 1) and A = log(α + 1) such that P ∈ (-∞, ∞) and A ∈ (-∞, ∞)\{0}. The transformed log-likelihood function is written as ℓ*(P, A) = ℓ(1/(e^P + 1), e^A - 1) = ℓ(p, α). The MLEs of the transformed parameters are (P̂_ML, Â_ML) = argmax_{P, A} ℓ*(P, A). By the invariance property of the MLE, we obtain (p̂_ML, α̂_ML) by the reverse transformations.
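To make the likelihood concrete, here is a small Python sketch of ℓ(p, α) under the Clayton copula, together with a crude grid search over the transformed parameters (P, A). A real implementation would use Newton-Raphson as in Appendix A or the Copula.Markov package; treat the grid search as illustrative only:

```python
import math

def binom_cdf(y, n, p):
    if y < 0:
        return 0.0
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(min(y, n) + 1))

def clayton(u, v, alpha):
    u, v = max(u, 1e-12), max(v, 1e-12)
    return (u ** (-alpha) + v ** (-alpha) - 1.0) ** (-1.0 / alpha)

def loglik(p, alpha, data, n):
    """l(p, a) = log g(y_1) + sum_{t>=2} log h(y_t | y_{t-1}) under the Clayton copula."""
    G = lambda y: binom_cdf(y, n, p)
    ll = math.log(max(G(data[0]) - G(data[0] - 1), 1e-300))   # log g(y_1)
    for t in range(1, len(data)):
        yt, yp = data[t], data[t - 1]
        # h(y_t | y_{t-1}) is a copula rectangle probability divided by g(y_{t-1})
        rect = (clayton(G(yt), G(yp), alpha) - clayton(G(yt - 1), G(yp), alpha)
                - clayton(G(yt), G(yp - 1), alpha) + clayton(G(yt - 1), G(yp - 1), alpha))
        ll += math.log(max(rect / (G(yp) - G(yp - 1)), 1e-300))
    return ll

def mle_by_grid(data, n, P_grid, A_grid):
    """Maximize l*(P, A) = l(1/(e^P + 1), e^A - 1) over a grid; returns (p_hat, alpha_hat)."""
    best = max(((loglik(1 / (math.exp(P) + 1), math.exp(A) - 1, data, n), P, A)
                for P in P_grid for A in A_grid))
    _, P, A = best
    return 1 / (math.exp(P) + 1), math.exp(A) - 1
```

The clamp to 1e-300 inside the logarithms guards against tiny negative rectangle probabilities caused by floating-point cancellation.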
R users can apply our R package Copula.Markov to compute (p̂_ML, α̂_ML) under the Clayton or Joe copula. For non-R users, we provide detailed computational algorithms based on the Newton-Raphson method, which can be programmed in any computer language, in Appendix A.
For Excel users, one may use Excel's Solver function to compute the MLE (Supplementary Materials).
After (p̂_ML, α̂_ML) are computed, one may define the estimates of the mean μ̂_ML = n p̂_ML and the standard deviation σ̂_ML = √{n p̂_ML (1 - p̂_ML)}. One can also define the 3-sigma control limits: the lower control limit LCL = μ̂_ML - 3σ̂_ML and the upper control limit UCL = μ̂_ML + 3σ̂_ML. These quantities are useful in statistical process control (Wieringa 1999; Montgomery 2009), where control limits are often employed to check whether the time series is stationary. The graphical presentation of a time series with these control limits is informative for industrial manufacturers and financial engineers. See Sec. 6 for a real application.
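The control-limit computation is a one-liner once p̂_ML is available; a minimal sketch follows (the plug-in value 0.0647 is only an illustration consistent with μ̂ = n p̂ ≈ 3.235 from Sec. 6):

```python
import math

def control_limits(n, p_hat):
    """3-sigma limits for a Bin(n, p) series: mu = n p, sigma = sqrt(n p (1 - p))."""
    mu = n * p_hat
    sigma = math.sqrt(n * p_hat * (1.0 - p_hat))
    return mu - 3.0 * sigma, mu, mu + 3.0 * sigma

lcl, center, ucl = control_limits(50, 0.0647)   # center line near 3.235
```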

Standard error and confidence interval
The large sample theory provides the SEs and the 95% confidence intervals (CIs) for p and α. By applying the delta method to Equation (1), we construct the 95% CI for p as

[ 1/{exp(P̂_ML + 1.96 × SE(P̂_ML)) + 1}, 1/{exp(P̂_ML - 1.96 × SE(P̂_ML)) + 1} ],

where SE(P̂_ML) is obtained from the observed information matrix of ℓ*(P, A). Similarly, we construct the 95% CI for the Clayton copula parameter α as

[ exp(Â_ML - 1.96 × SE(Â_ML)) - 1, exp(Â_ML + 1.96 × SE(Â_ML)) - 1 ].

The CI and SE for other copulas can be obtained in a similar way by accounting for an adequate transformation. Supplementary Materials include the results of our simulations showing the satisfactory performance of the SE and 95% CI.
Remark. An alternative method for computing the SE and 95% CI is the parametric bootstrap (see Sec. 3.5). While this method is computationally intensive, it may perform better in small samples.
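One convenient way to realize the transformation-based interval is to form a Wald interval on the transformed scale and map it back. The sketch below does this for both p and α; note that p = 1/(e^P + 1) is decreasing in P, so the endpoints swap. The inputs P̂, Â, and their SEs are hypothetical:

```python
import math

def ci_for_p(P_hat, se_P, z=1.96):
    """95% CI for p, back-transformed from the Wald interval for P = log(1/p - 1)."""
    lower = 1.0 / (math.exp(P_hat + z * se_P) + 1.0)   # larger P -> smaller p
    upper = 1.0 / (math.exp(P_hat - z * se_P) + 1.0)
    return lower, upper

def ci_for_alpha(A_hat, se_A, z=1.96):
    """95% CI for alpha, back-transformed from the Wald interval for A = log(alpha + 1)."""
    return math.exp(A_hat - z * se_A) - 1.0, math.exp(A_hat + z * se_A) - 1.0
```

A practical advantage of this construction is that both endpoints automatically respect the parameter constraints p ∈ (0, 1) and α > -1.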

Copula selection method
We propose a copula selection method that compares the Clayton and Joe copulas via their maximized log-likelihood values. As mentioned, these two copulas capture different dependence structures. Even if neither copula is correct, the copula with the higher likelihood value better captures the true dependence. A more extensive model selection, including more than two copulas, could be considered; however, the package Copula.Markov currently implements only the Clayton and Joe copulas.

Goodness-of-fit test
After a suitable copula is selected, one might wish to check the goodness-of-fit of the binomial margin. This corresponds to a significance test of the null hypothesis H_0: P(Y_t ≤ y) = G(y) for some p vs. H_1: P(Y_t ≤ y) ≠ G(y) for all p. We consider two estimators of G(y). The first is a nonparametric (NP) estimator, namely the empirical CDF Ĝ_NP(y) = Σ_{t=1}^{T} I(Y_t ≤ y)/T. The second is the parametric estimator based on the MLE, namely the CDF of Bin(n, p̂_ML). If H_0 is true, the two estimators converge to G(y) under some regularity conditions (Billingsley 1961; Chen and Fan 2006). If H_1 is true, the parametric estimator does not converge to G(y), but the nonparametric estimator still converges to G(y). Hence, we propose a test procedure that rejects H_0 for a large value of the Cramér-von Mises (CvM) type distance between the two estimators. The P-value of the test can be obtained by a parametric bootstrap method:

The goodness-of-fit test with parametric bootstrap
Step 1: Generate B bootstrap samples of size T from the fitted model.
Step 2: Compute the statistic CvM^(b) for the b-th bootstrap sample, b = 1, 2, ..., B.
Step 3: The P-value of the test is calculated as Σ_{b=1}^{B} I(CvM^(b) ≥ CvM)/B. The null hypothesis H_0 is rejected if the P-value is less than a significance level (e.g., 0.05).
Step 1 is based on Algorithm 1. We suggest B = 200, which will be used in the numerical studies.
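The three bootstrap steps can be sketched generically as follows. To keep the example self-contained, an i.i.d. Bin(n, p̂) generator stands in for Algorithm 1 in Step 1, and p̂ is the sample proportion; in the actual procedure one regenerates from the fitted copula-Markov model and refits the MLE:

```python
import math
import random

def binom_cdf(y, n, p):
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(y + 1))

def cvm_stat(data, n, p_hat):
    """CvM-type distance between the empirical CDF and the fitted Bin(n, p_hat) CDF."""
    T = len(data)
    return sum((sum(x <= y for x in data) / T - binom_cdf(y, n, p_hat)) ** 2
               for y in range(n + 1))

def bootstrap_pvalue(data, n, B=200, seed=1):
    rng = random.Random(seed)
    p_hat = sum(data) / (n * len(data))
    obs = cvm_stat(data, n, p_hat)
    exceed = 0
    for _ in range(B):
        # Step 1: regenerate a series from the fitted model (i.i.d. stand-in here)
        boot = [sum(rng.random() < p_hat for _ in range(n)) for _ in data]
        p_b = sum(boot) / (n * len(boot))
        # Step 2: refit and recompute the statistic CvM^(b)
        exceed += cvm_stat(boot, n, p_b) >= obs
    # Step 3: P-value = #{CvM^(b) >= CvM} / B
    return exceed / B
```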

Other models
This section introduces three existing models that can fit binomial time series data. The first is a nonparametric model that ignores dependence. The second is based on the semiparametric method of Chen and Fan (2006) under the copula-based Markov chain model. The third is the parametric likelihood-based method under the binomial AR(1) model. We introduce these models as benchmarks to be compared with our proposed model.

The nonparametric model
We do not postulate any distributional assumption on Y_t except for the stationarity G(y) = P(Y_t ≤ y) for any t. Nothing is assumed about the correlation between Y_t and Y_{t-1}. To estimate the mean and SD of Y_t, one may use the moment estimators, i.e., the sample mean μ̂_NPE and the sample SD σ̂_NPE. Two control limits can then be estimated as LCL = μ̂_NPE - 3σ̂_NPE and UCL = μ̂_NPE + 3σ̂_NPE. Such naive estimators were considered in Kramer and Schmid (2000) under a continuous AR(1) model. If Y_t is found to be binomially distributed, one can set p̂_NPE = μ̂_NPE/n. Nonparametric inference for serial dependence can be based on the sequence of adjacent pairs (Y_{t-1}, Y_t), t = 2, ..., T (Ferguson et al. 2000). For discrete observations, Genest and Nešlehová (2007) proposed a tie-adapted version of Kendall's tau, called tau-b, which we denote τ̂_NPE,b. Note that τ̂_NPE,b is not a consistent estimator due to the invalid assumption that the pairs (Y_{t-1}, Y_t), t = 2, ..., T, are independent.
While we do not recommend the aforementioned nonparametric estimators, they may serve as initial values for the MLE in parametric models. In the simulation studies, our interest is to study whether the proposed estimator (p̂_ML, α̂_ML) outperforms (p̂_NPE, α̂_NPE).
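For completeness, the adjacent-pair tau-b and its translation to the Clayton parameter (via inverting τ = α/(α + 2)) can be sketched as follows. The tie correction follows the usual tau-b definition; this is an illustration, not necessarily the estimator's exact published form:

```python
import math

def tau_b_adjacent(series):
    """Kendall's tau-b over adjacent pairs (y_{t-1}, y_t) with the usual tie correction."""
    pairs = list(zip(series[:-1], series[1:]))
    conc = disc = tie_x = tie_y = 0
    m = len(pairs)
    for i in range(m):
        for j in range(i + 1, m):
            dx = pairs[i][0] - pairs[j][0]
            dy = pairs[i][1] - pairs[j][1]
            if dx == 0:
                tie_x += 1
            if dy == 0:
                tie_y += 1
            if dx * dy > 0:
                conc += 1
            elif dx * dy < 0:
                disc += 1
    n0 = m * (m - 1) // 2
    return (conc - disc) / math.sqrt((n0 - tie_x) * (n0 - tie_y))

def clayton_alpha_from_tau(tau):
    """Invert tau = alpha / (alpha + 2): alpha = -2 tau / (tau - 1)."""
    return -2.0 * tau / (tau - 1.0)
```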

The semiparametric model
We do not impose any distributional assumption on Y_t except for stationarity. We only assume a parametric copula for the dependence between Y_t and Y_{t-1}. The pseudo-likelihood estimator of Chen and Fan (2006) can be easily adapted to the log-likelihood derived in Sec. 3.1. Specifically, we obtain the estimator α̂_SMLE that maximizes the pseudo-log-likelihood in which the binomial CDF is replaced by the rescaled empirical CDF, where A_α(y_t, y_{t-1}) = Ĝ_T(y_t)^{-α} + Ĝ_T(y_{t-1})^{-α} - 1 and Ĝ_T(y) = Σ_{t=1}^{T} I(Y_t ≤ y)/(T + 1) for y_t = 0, 1, ..., n. We will examine whether the proposed estimator α̂_ML outperforms α̂_SMLE.
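A sketch of the semiparametric pseudo-log-likelihood under the Clayton copula: the binomial CDF is replaced by the rescaled empirical CDF Ĝ_T, and only α is maximized (again by an illustrative grid rather than a proper optimizer):

```python
import math

def rescaled_ecdf(series):
    """G_T(y) = sum_t I(Y_t <= y) / (T + 1); the rescaling keeps values below 1."""
    T = len(series)
    return lambda y: (sum(x <= y for x in series) / (T + 1)) if y >= 0 else 0.0

def clayton(u, v, alpha):
    u, v = max(u, 1e-12), max(v, 1e-12)
    return (u ** (-alpha) + v ** (-alpha) - 1.0) ** (-1.0 / alpha)

def pseudo_loglik(alpha, series):
    """Sum of log conditional rectangle probabilities with G_T plugged in for G."""
    G = rescaled_ecdf(series)
    ll = 0.0
    for t in range(1, len(series)):
        yt, yp = series[t], series[t - 1]
        rect = (clayton(G(yt), G(yp), alpha) - clayton(G(yt - 1), G(yp), alpha)
                - clayton(G(yt), G(yp - 1), alpha) + clayton(G(yt - 1), G(yp - 1), alpha))
        ll += math.log(max(rect / (G(yp) - G(yp - 1)), 1e-300))
    return ll

def smle_by_grid(series, alpha_grid):
    """Illustrative grid maximizer for alpha (the marginal p is left unspecified)."""
    return max(alpha_grid, key=lambda a: pseudo_loglik(a, series))
```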

The binomial AR(1) model

McKenzie (1985) and Weiß (2009) proposed the binomial AR(1) model defined by

Y_t = a ∘ Y_{t-1} + b ∘ (n - Y_{t-1}),

where b := p(1 - ρ), a := b + ρ, p ∈ (0, 1), ρ ∈ (max{-p/(1 - p), -(1 - p)/p}, 1), and a ∘ y := Σ_{i=1}^{y} X_i with X_i ~ Bin(1, a) for a natural number y (the binomial thinning operator). In the binomial AR(1) model, the observations {Y_t : t = 1, 2, ..., T} follow a stationary Markov chain with the marginal distribution Y_t ~ Bin(n, p). Following Weiß (2009) and Weiß and Kim (2013a), the transition probability is

P(Y_t = y_t | Y_{t-1} = y_{t-1}) = Σ_k C(y_{t-1}, k) C(n - y_{t-1}, y_t - k) a^k (1 - a)^{y_{t-1} - k} b^{y_t - k} (1 - b)^{n - y_{t-1} + k - y_t},

where C(m, k) denotes the binomial coefficient and the sum runs over admissible k, and the moments are E[Y_t] = np, Var(Y_t) = np(1 - p), and Corr(Y_t, Y_{t-1}) = ρ. As derived in Appendix B, the implied copula function has a closed form given by a finite sum over k of terms of the form (1 - ρ)^{l+s-2k} (1 - p)^{n-k} p^{l+s-k} {1 - p(1 - ρ)}^{n+k-(l+s)}. Weiß and Kim (2013b) showed that there are four different estimators for (a, ρ). They compared them and concluded that the MLE is the most efficient but occasionally runs into trouble due to the parameter constraint ρ ∈ (max{-p/(1 - p), -(1 - p)/p}, 1) in small samples. We believe this is a minor issue since the MLE always satisfied the constraint in our simulations and data analyses. Hence, we decided to focus on the MLE.
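The binomial AR(1) recursion can be simulated directly with the thinning operator; here is a minimal sketch (with ρ > 0 for simplicity). The recursion E[Y_t] = ρ E[Y_{t-1}] + np(1 - ρ) has fixed point np, consistent with the stationary Bin(n, p) marginal:

```python
import random

def thin(count, prob, rng):
    """Binomial thinning a ∘ y: a sum of y independent Bernoulli(a) variables."""
    return sum(rng.random() < prob for _ in range(count))

def binomial_ar1(T, n, p, rho, seed=1):
    """Y_t = a ∘ Y_{t-1} + b ∘ (n - Y_{t-1}) with b = p(1 - rho) and a = b + rho."""
    b = p * (1.0 - rho)
    a = b + rho
    rng = random.Random(seed)
    series = [thin(n, p, rng)]                      # Y_1 ~ Bin(n, p)
    for _ in range(1, T):
        y_prev = series[-1]
        series.append(thin(y_prev, a, rng) + thin(n - y_prev, b, rng))
    return series
```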

Simulation studies
We conducted Monte Carlo simulation studies to check the performance of the proposed methods.

Simulation designs
The first objective of the simulations is to assess the performance of the proposed method of estimating p and μ + 3σ. For this purpose, we generated data under the Clayton copula model or the AR(1) model, and compared the performance of the three estimation methods defined by fitting: the copula (proposed) model (Sec. 3.1), the nonparametric model (Sec. 4.1), and the binomial AR(1) model (Sec. 4.3). The second objective is to assess the performance of estimating α. For this purpose, we generated data from the Clayton copula model and compared the performance of the three estimation methods obtained by fitting: the proposed model (Sec. 3.1), the nonparametric model (Sec. 4.1), and the semiparametric model (Sec. 4.2). The last objective is to see the consistency of the copula selection method. For this purpose, we assessed the rates of selecting the Clayton and Joe copulas when the true copula is known. Supplementary Materials include additional simulation results (not reported in this section) demonstrating the quality of the SE, 95% CI, and goodness-of-fit test.

Simulation results
Tables 1 and 2 compare the performance of the three methods (the proposed, AR(1), and nonparametric) based on 1000 repetitions. Generally speaking, all three methods give unbiased estimates of the true p. Their mean squared errors (MSEs) approach zero as the sample size increases from T = 50 to T = 200. When the data are generated from the copula model, the proposed method has the smallest MSEs among the three methods, exactly as expected. Even when the data are generated from the AR(1) model, the proposed method gives unbiased estimates of p; however, its MSEs are larger than those under the AR(1) model. Hence, the proposed method loses some efficiency under model misspecification.
Table 3 compares the performance of estimating the copula parameter α among the three methods (proposed, semiparametric, and nonparametric). The MSEs of the proposed method are, in general, the smallest among all the methods. Specifically, the MSEs of the proposed method are around one-third of those of the semiparametric method. This implies a large efficiency gain of the proposed method due to the binomial model assumption. The nonparametric method exhibits systematic biases. For all the methods, the estimation of α is inaccurate when the sample size is small, the event rate is close to zero, and serial dependence is strong (see the case of T = 50, α = 8, p = 0.05).
Table 4 shows the rates of selecting the Clayton and Joe copulas when the true copula is known. The rates of choosing the true copula are always greater than 90%, even for the small sample size of T = 50. They exceed 95% for T = 100 and reach almost 100% for T = 200. This implies that the model selection can be performed accurately irrespective of the settings.

Data analysis
We analyze the jewelry manufacturing data obtained from the book of Burr (1979). This dataset counts the number of defective pieces in a jewelry manufacturing process. Let Y_t be the number of defective pieces among n = 50 beads. Since 0 ≤ Y_t ≤ n, we assume Y_t ~ Bin(n, p). Following Emura, Lai, et al. (2021), we set T = 43 and treat the observations {Y_t : t = 1, 2, ..., T} as a stationary process; they identified an increasing change of p after T = 43, so these observations are the maximum available data satisfying stationarity. We first choose an appropriate copula for serial dependence by comparing the Clayton copula and the Joe copula in terms of the log-likelihood (Sec. 3.4). We obtained ℓ_Clayton = -76.304 > ℓ_Joe = -77.689, showing a better fit of the Clayton copula relative to the Joe copula. The P-value of the CvM test for the binomial margin is 0.07. Hence, we adopt the Clayton copula model with the binomial margin.
Table 5 compares the four different methods. The four methods produce similar estimates of p. Consequently, the estimates of μ = np and σ = √{np(1 - p)} are similar among the four methods. For the serial dependence parameters, the Clayton copula model gives α̂_ML = 1.1412 (Kendall's tau is 0.3633), while the AR(1) model gives ρ̂_AR1,ML = 0.2409 (the Pearson correlation). We cannot directly compare these two estimates since they target different dependence measures: Kendall's tau is a rank-based measure that is free of the marginal distribution, while the Pearson correlation depends on the marginal. We adopt the Clayton copula model since its log-likelihood is higher than that of the AR(1) model.
Figure 2 shows the three-sigma control limits obtained by fitting the Clayton copula model. The center line is drawn at the estimated mean μ̂_ML = n p̂_ML = 3.2350; the upper and lower lines are the 3-sigma control limits. We compared these control limits with those obtained by fitting the AR(1) model, namely μ̂_AR1,ML = n p̂_AR1,ML = 3.6428, UCL = 9.1561, and LCL = -1.8705. The Clayton copula model gives a narrower width for the control limits than the AR(1) model does. Both models conclude that the jewelry manufacturing process is in control, as all the data points fall inside the control limits.
Through the data analysis, we learned that the proposed copula model provides an alternative to the AR(1) model. In this data analysis, the Clayton copula model gives a slightly better fit than the AR(1) model according to the log-likelihood values (Table 5). Nevertheless, one should always examine several alternative models for a given dataset. By making these computational analyses feasible, the proposed computational tools provide valuable benefits to users.

Conclusion
This article proposes computational methods for analyzing a stationary binomial time series under a copula-based Markov chain model. We describe the data generation method, the maximum likelihood estimation method, the model selection method, and the goodness-of-fit method. These methods facilitate practical applications of the general framework of copula-based Markov models with discrete margins (Joe 1997; 2016). One can easily apply the proposed methods with our R package Copula.Markov without going through the computational details.
This article considers a copula-based Markov model that captures serial dependence in a univariate time series. We emphasize that this model differs from bivariate time series models, where a copula captures dependence between two time series (e.g., Karlis and Pedeli 2013). Recently, Kim, Baik, and Reller (2021) considered copula directional dependence for bivariate time series while accounting for serial dependence by a copula-based Markov model; see Chapter 6 of Sun, Huang, et al. (2020) for the computational details of the directional dependence analysis of bivariate time series.
Naïve users of statistics tend to fit a normal distribution to binomial data. However, the normal approximation is acceptable only when the number of binomial trials is large and the event rate is not too small (or too large). Otherwise, the normal approximation can be quite poor (Xie, Xie, and Goh 1995; Duran and Albin 2009; Wang 2009; Emura and Lin 2015; Ali, Pievatolo, and Göb 2016; Emura and Liao 2018; Chukhrova and Johannssen 2019). Even more importantly, the event rate parameter cannot be parametrized by the normal distribution. If a continuous approximation is necessary for some reason, an alternative is to fit a continuous distribution on (0, 1) to the proportions (Lee Ho, Fernandes, and Bourguignon 2019; de Araujo Lima-Filho and Mariano Bayer 2021; Fonseca et al. 2021).
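A quick numeric check of this point, for n = 50 trials and an event rate p = 0.05 (roughly the scale of the jewelry data): the exact probability of observing zero events differs noticeably from its normal approximation.

```python
import math

def binom_cdf(y, n, p):
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(y + 1))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p = 50, 0.05
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
exact = binom_cdf(0, n, p)                 # P(Y = 0) = 0.95^50, about 0.077
approx = normal_cdf((0 - mu) / sigma)      # normal approximation, about 0.052
```

Here the normal approximation understates the point mass at zero by roughly a third, illustrating why a genuinely binomial model is preferable for small event rates.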
In addition to the binomial model assumption, another important assumption is the stationary distribution, i.e., the distribution of the time series does not change over time. Related to this assumption, we provide the three-sigma control limits to check whether the data satisfy the stationary distribution. If the stationarity condition does not hold, researchers are interested in the case where some change occurs. Besides the three-sigma control chart, another commonly used approach to detect changes is the binomial CUSUM control chart for independent time series; see Perry, Pignatiello, and Simpson (2007), Assareh, Smith, and Mengersen (2015), Emura and Ho (2016), Wittenberg (2022), and Haridy et al. (2022) for statistical methods for the binomial CUSUM. While there are a few copula-based methods in the presence of change points (Kim, Wang, and Liu 2020; Emura, Lai, et al. 2021), the corresponding computational tools have not yet been developed. However, we emphasize that the proposed computational tools are still useful in the presence of a parameter change: Supplementary Materials include simulation results showing that the proposed method is robust against a change point in the copula parameter.
Step 3: Set p̂_ML = 1/{exp(P̂) + 1} and α̂_ML = exp(Â) - 1. In Step 1, a consistent estimator is used for p_0; however, α_0 is not a consistent estimator (Sec. 4.1). In Step 2, the formulas for the first and second derivatives under the Clayton copula are available in Supplementary Materials. Conditions 1-6 are interpreted as "divergence". Since p̂_0 is consistent, Condition 2 implies that the MLE should be found within p̂_0 ± 0.15 (the value 0.15 can be changed flexibly). If Condition 3, 4, or 5 occurs, Step 2 does not work. Condition 6 escapes infinite loops. Conditions 1-6 lead to a re-start of the Newton-Raphson algorithm with randomly chosen initial values. A more elaborate randomization approach could be used for large-scale problems (Yen and Yen 2021).
While the above algorithm can be programmed in any computer language, some effort is needed to input the derivatives of ℓ*(P, A). Under many copulas, the derivatives of ℓ*(P, A) must be approximated numerically because the closed-form expressions are too complicated. Alternatively, R users can simply apply our R package Copula.Markov without going through such details.

J(P, A) = -∂²ℓ*(P, A)/∂(P, A)∂(P, A)ᵀ.
Supplementary Materials provide the expressions of the above matrix under the Clayton copula. The matrix J(P̂_ML, Â_ML) is usually computed from the converged step of a Newton-Raphson algorithm. Our R functions in the package Copula.Markov automatically compute the information matrix J(P̂_ML, Â_ML) under the Clayton copula and the Joe copula.

The tie-adapted Kendall's tau τ̂_NPE,b gives a rough estimate of serial dependence. It can be translated into the copula parameter by α̂_NPE = -2τ̂_NPE,b/(τ̂_NPE,b - 1) for the Clayton copula. However, τ̂_NPE,b is not a consistent estimator under serial dependence (Sec. 4.1).

Table 1.
Simulation results for estimating p by the three methods (Proposed, AR(1), and Nonparametric) based on 1000 repetitions.

Table 3.
Simulation results for estimating α by the three methods (Proposed, Semiparametric, and Nonparametric) based on 1000 repetitions. MSE stands for the mean squared error, defined by MSE(α̂) = E[(α̂ - α)²].

Table 4.
The rates of choosing between the Clayton copula and Joe copula models. The simulations are based on 1000 repetitions.

Table 5.
Estimates based on the four methods using the jewelry manufacturing data. The Joe copula was not considered since it did not fit better than the Clayton copula according to the log-likelihood values (ℓ_Clayton = -76.304 > ℓ_Joe = -77.689).