Estimation procedures and optimal censoring schemes for an improved adaptive progressively type-II censored Weibull distribution

This paper investigates estimation for the Weibull distribution under an improved adaptive Type-II progressive censoring scheme, which effectively guarantees that the experimental time will not exceed a pre-fixed time. Point and interval estimation using two classical methods, namely maximum likelihood and maximum product of spacing, are considered for the unknown parameters as well as for the reliability and hazard rate functions. Approximate confidence intervals for these quantities are obtained from the asymptotic normality of the maximum likelihood and maximum product of spacing estimators. Bayesian estimation is also considered, using MCMC techniques based on the two classical objective functions. An extensive simulation study compares the performance of the different methods, and various optimality criteria are proposed for finding the optimal sampling scheme. Finally, a real data set is analyzed to show how the proposed estimators and optimality criteria work in real-life scenarios. The numerical outcomes demonstrate that the Bayesian estimates, using either the likelihood or the product of spacing function, outperform the classical estimates.


Introduction
In reliability studies, different censoring schemes have been proposed to strike a balance between (i) the number of items used in the test, (ii) the efficiency of the statistical methods, and (iii) the total time spent on the test. Experimenters often prefer the progressive Type-II censoring scheme (PCS-TII) over other censoring schemes because it allows units to be withdrawn during the experiment at any stage, not only at the terminal point. Today, owing to the long lifespans of many products, especially electronics, the total experimental time can be very long if PCS-TII is used. For this reason, the progressively Type-II hybrid censoring scheme (PHCS-TII) was proposed by Kundu and Joarder (2006). The main limitation of PHCS-TII, shared with conventional Type-I (time) censoring, is that the number of observed failures is random and can turn out to be very small, in which case any inference procedure will be invalid or extremely inaccurate. To overcome this drawback, Ng et al. [28] proposed an adaptive progressive Type-II censoring scheme (APCS-TII), in which the effective number of failures m is predetermined and the test time can exceed a prefixed time T. Unfortunately, if the test units are extremely long-lived, a satisfactory total test duration is not guaranteed because the experiment can still run very long. Recently, to solve this problem, Yan et al. [36] proposed an improved adaptive progressive Type-II censoring scheme (IAPCS-TII), which generalizes two popular censoring schemes, namely APCS-TII and PHCS-TII. They showed that IAPCS-TII effectively guarantees that the experiment stops within prespecified times T_1 and T_2, and they discussed estimation of the unknown parameters of the two-parameter Burr-XII distribution to illustrate the effects of the proposed censoring scheme.
The procedure of IAPCS-TII can be described as follows. Suppose an independent and identically distributed random sample of n units is placed on a life-testing experiment at time zero, the effective number of failures m (< n) is predetermined, and the progressive censoring scheme R = (R_1, R_2, ..., R_m) with R_i ≥ 0 is also fixed in advance, although some values of R_i at the time of the i-th failure may change during the experiment. Let T_1, T_2 ∈ (0, ∞) with T_1 < T_2 be two threshold time points, determined from the reliability information on the product of interest, and let d_1 and d_2 (d_1 ≤ d_2 < m) denote the numbers of failures that occur before times T_1 and T_2, respectively. At the time of the first failure (say X_{1:m:n}), R_1 items are randomly withdrawn from the n − 1 surviving items. Similarly, at the time of the second failure (say X_{2:m:n}), R_2 of the n − R_1 − 2 surviving items are randomly removed, and so on. If X_{m:m:n} occurs before time T_1, i.e. X_{m:m:n} < T_1 (Case-I), the experiment stops at the m-th failure with the censoring scheme R = (R_1, R_2, ..., R_m). If X_{d_1:m:n} < T_1 < X_{d_1+1:m:n} (Case-II), where d_1 > 0 and d_1 + 1 < m, the experiment stops at the m-th failure with censoring scheme R = (R_1, R_2, ..., R_{d_1}, 0, ..., 0, R^*), where R^* = n − m − Σ_{i=1}^{d_1} R_i; that is, no live units are removed by setting R_i = 0 for i = d_1 + 1, ..., m − 1, and at the m-th failure all remaining units are removed. On the other hand, if X_{m:m:n} does not occur before time T_2, i.e. T_2 < X_{m:m:n} (Case-III), the experiment stops at T_2 with R_i = 0 for i = d_1 + 1, ..., d_2, and at T_2 all the remaining units are removed, i.e. R^* = n − d_2 − Σ_{i=1}^{d_1} R_i. A diagrammatic representation of IAPCS-TII is depicted in Figure 1. Therefore, one of the following sets of observations is obtained:
• {X_{1:m:n}, X_{2:m:n}, ..., X_{m:m:n}}, if X_{m:m:n} < T_1 < T_2 (Case-I);
• {X_{1:m:n}, ..., X_{d_1:m:n}, ..., X_{m:m:n}}, if T_1 < X_{m:m:n} < T_2 (Case-II);
• {X_{1:m:n}, ..., X_{d_1:m:n}, ..., X_{d_2:m:n}}, if T_1 < T_2 < X_{m:m:n} (Case-III).
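As a concrete illustration, the stopping rule above can be sketched in code. The following Python sketch (function names are our own and the paper's computations were carried out in R, so this is only an illustrative model of the scheme) simulates one IAPCS-TII sample from a Weibull population by keeping a pool of n i.i.d. lifetimes, removing R_i random survivors at each failure before T_1, suspending removals once T_1 is passed, and truncating the test at T_2.

```python
import math
import random

def rweibull(rng, theta, lam):
    # Inverse-CDF draw from WD(theta, lam) with F(x) = 1 - exp(-lam*x**theta)
    u = rng.random()
    return (-math.log(1.0 - u) / lam) ** (1.0 / theta)

def iapcs_t2_sample(n, m, R, T1, T2, theta, lam, seed=1):
    """Simulate one improved adaptive progressive Type-II censored sample.
    Returns the observed failure times and the case label (Case-I/II/III)."""
    rng = random.Random(seed)
    pool = [rweibull(rng, theta, lam) for _ in range(n)]
    obs, past_T1 = [], False
    while pool and len(obs) < m:
        x = min(pool)
        if x > T2:                       # Case-III: stop at T2, keep d2 failures
            return obs, "Case-III"
        pool.remove(x)
        obs.append(x)
        if x > T1:
            past_T1 = True               # after T1: no intermediate removals
        if len(obs) < m:
            r = 0 if past_T1 else min(R[len(obs) - 1], len(pool))
            for _ in range(r):           # progressive removal of r survivors
                pool.remove(rng.choice(pool))
    return obs, ("Case-I" if obs[-1] < T1 else "Case-II")
```

Under this construction, taking both thresholds very large reproduces the conventional progressive Type-II scheme (Case-I), while very small thresholds force an early stop at T_2 (Case-III).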
Suppose we have an IAPC-TII sample taken from a continuous population with cumulative distribution function (CDF) F(·) and probability density function (PDF) f(·). The joint likelihood function (LF) of the observed data can be written as

L(Θ | x) = C Π_{i=1}^{J_2} f(x_i) Π_{i=1}^{J_1} [1 − F(x_i)]^{R_i} [1 − F(T^*)]^{R^*},   (1)

where C is a constant that does not depend on the parameters and Θ is the vector of unknown parameters. Let x_i = x_{i:m:n} for simplicity; then, from (1), the quantities J_1, J_2, R_i, R^* and T^* for the different cases are presented in Table 1 (various choices of J_1, J_2, R^*, T^* and (x_i, R_i)).
Clearly, T_1 serves as a warning about the experimental time, while T_2 indicates that the experiment needs to be sped up; in other words, it represents the maximum time that the experimenter can afford. If the experiment reaches the specified time T_2 and the desired number of failures m has not occurred, the life-test must be stopped at T_2. This improvement overcomes the drawback of APCS-TII, which cannot guarantee the test time, by ensuring that the total test time will not exceed T_2. From (1), several censoring plans can be obtained as special cases, such as APCS-TII by setting T_2 → ∞.
The Weibull distribution (WD) is one of the most widely used distributions in reliability studies due to its flexible hazard rate function (HRF); for more details, see [21]. Let X be a random variable following the WD, denoted by X ∼ WD(θ, λ), where θ and λ are the shape and scale parameters, respectively. Then the PDF and CDF of X are given, respectively, by

f(x; θ, λ) = θλ x^{θ−1} e^{−λx^θ}, x > 0, θ, λ > 0,   (2)

and

F(x; θ, λ) = 1 − e^{−λx^θ}.   (3)

At a mission time t, the reliability function (RF) and HRF are given, respectively, by

R(t) = e^{−λt^θ}   (4)

and

h(t) = θλ t^{θ−1}.   (5)

The WD has been studied by many researchers under different censoring schemes. Pareek et al. [29] obtained the maximum likelihood and approximate maximum likelihood estimates in the presence of competing risks data from the WD under PCS-TII. Valiollahi et al. [35] investigated the estimation of the stress-strength reliability of the WD using PCS-TII. Kaushik et al. [22] considered classical and Bayesian methods to estimate the parameters of the WD under a progressive Type-I interval censoring scheme with beta-binomial removals.
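For reference, the PDF, CDF, RF and HRF defined above are easy to code directly. The following Python snippet (our own helper names; the paper itself works in R) reproduces the values of R(t) and h(t) used later in the simulation section, where θ = λ = 0.75 and t = 0.25 give R(t) ≈ 0.767079 and h(t) ≈ 0.795495.

```python
import math

def weibull_pdf(x, theta, lam):
    # f(x) = theta*lam*x**(theta-1)*exp(-lam*x**theta)
    return theta * lam * x ** (theta - 1) * math.exp(-lam * x ** theta)

def weibull_cdf(x, theta, lam):
    # F(x) = 1 - exp(-lam*x**theta)
    return 1.0 - math.exp(-lam * x ** theta)

def reliability(t, theta, lam):
    # R(t) = 1 - F(t) = exp(-lam*t**theta)
    return math.exp(-lam * t ** theta)

def hazard(t, theta, lam):
    # h(t) = f(t)/R(t) = theta*lam*t**(theta-1)
    return theta * lam * t ** (theta - 1)

print(round(reliability(0.25, 0.75, 0.75), 6))  # 0.767079
print(round(hazard(0.25, 0.75, 0.75), 6))       # 0.795495
```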
Nassar et al. (2017) obtained the maximum likelihood and Bayesian estimates (BEs) for the WD using APCS-TII. Seo et al. [33] obtained estimates of the unknown parameters of the WD using a regression approach under PCS-TII. Ashour et al. [3] obtained Bayesian and non-Bayesian estimates for the WD based on progressive first-failure censored data with binomial removals. Recently, Elshahhat and Nassar [16] developed different inference procedures for the WD parameters when data are collected under adaptive Type-II progressively hybrid censoring with binomial removals.
To our knowledge, no work has addressed the estimation of the model parameters and/or reliability characteristics of the WD under IAPCS-TII. This study closes this gap with the following objectives. The first is to estimate the parameters, RF and HRF of the WD using two frequentist approaches, namely the conventional maximum likelihood and maximum product of spacing (MPS) methods; using the normal approximation for both approaches, approximate confidence intervals (ACIs) for any function of the Weibull parameters are obtained. The second objective is to obtain the BEs of the unknown Weibull parameters, as well as the associated RF and HRF, through the LF and the product of spacing (PS) function under the squared-error (SE) loss function with independent gamma priors. Since the BEs cannot be obtained in closed form, Markov chain Monte Carlo (MCMC) techniques are used to sample from the posterior distributions and, in turn, to compute the BEs and the associated highest posterior density (HPD) credible intervals. The third objective is to obtain an optimal censoring scheme using different optimality criteria. Using various choices of the effective sample size, the performance of the proposed methods is compared through an extensive simulation study in terms of the simulated root mean square error (RMSE), mean relative absolute bias (MRAB), and average confidence length (ACL). Real-life data from the engineering field are also analyzed.
The rest of the paper is organized as follows. Section 2 discusses point and interval estimation via the maximum likelihood approach. In Section 3, point and interval estimation is investigated using the MPS method. Bayesian MCMC estimates using both frequentist objective functions are provided in Section 4. The simulation results are presented in Section 5. In Section 6, the optimality criteria are presented. An application to a real data set is provided for illustration in Section 7. Finally, we conclude the paper in Section 8.

Likelihood estimation
In this section, we consider the maximum likelihood method to estimate the unknown parameters of the WD based on IAPC-TII data. Besides the maximum likelihood estimates (MLEs) of the parameters, the MLEs of the RF and HRF as well as the corresponding ACIs are also obtained.
From (1)-(3), the log-likelihood function of θ and λ, without the constant term, is

ℓ(θ, λ) = J_2 log θ + J_2 log λ + (θ − 1) Σ_{i=1}^{J_2} log x_i − λ ϑ(x, θ),   (7)

where ϑ(x, θ) = Σ_{i=1}^{J_2} x_i^θ + Σ_{i=1}^{J_1} R_i x_i^θ + R^* T^{*θ}. For fixed θ, differentiating (7) with respect to λ and equating the result to zero, the MLE of λ can be expressed as

λ̂(θ) = J_2 / ϑ(x, θ).   (8)

Substituting (8) in (7), the profile log-likelihood of θ can be obtained as

p(θ) = J_2 log θ − J_2 log ϑ(x, θ) + (θ − 1) Σ_{i=1}^{J_2} log x_i + J_2 (log J_2 − 1).   (9)

By maximizing the profile log-likelihood with respect to θ, the MLE of θ, denoted by θ̂, can be obtained. Setting dp(θ)/dθ = 0 shows that θ̂ satisfies the fixed-point equation θ = ξ(θ), where

ξ(θ) = J_2 [ J_2 ϑ'(x, θ)/ϑ(x, θ) − Σ_{i=1}^{J_2} log x_i ]^{−1},   (10)

and ϑ'(x, θ) = Σ_{i=1}^{J_2} x_i^θ log x_i + Σ_{i=1}^{J_1} R_i x_i^θ log x_i + R^* T^{*θ} log T^*. Once the MLE θ̂ is obtained from (10) by any iterative procedure, the MLE of λ follows directly from (8).
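The fixed-point iteration implied by (8) and (10) is straightforward to implement. The sketch below is in Python rather than the paper's R, and the 0.5 damping factor is our own addition for numerical stability, not part of the paper's procedure; it iterates θ ← ξ(θ) and then recovers λ̂ from (8).

```python
import math

def mle_weibull_iapcs(xs, Rs, Rstar, Tstar, theta0=1.0, tol=1e-10, max_iter=5000):
    """Profile-likelihood MLE via the fixed point theta = xi(theta) of Eq. (10);
    lambda is then recovered from Eq. (8). xs are the J2 observed failures and
    Rs the removals attached to them (zeros after the d1-th failure)."""
    J2 = len(xs)
    slog = sum(math.log(x) for x in xs)
    def v(th):   # vartheta(x, theta)
        return sum((1 + r) * x ** th for x, r in zip(xs, Rs)) + Rstar * Tstar ** th
    def vp(th):  # d vartheta / d theta
        return (sum((1 + r) * x ** th * math.log(x) for x, r in zip(xs, Rs))
                + Rstar * Tstar ** th * math.log(Tstar))
    theta = theta0
    for _ in range(max_iter):
        g = J2 / (J2 * vp(theta) / v(theta) - slog)
        if abs(g - theta) < tol:
            theta = g
            break
        theta = 0.5 * (theta + g)   # damped update improves stability
    lam = J2 / v(theta)             # Eq. (8)
    return theta, lam
```

With all removals zero and R^* = 0 this reduces to the familiar complete-sample Weibull MLE fixed point.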

Remark 2.1:
The MLEs of the unknown parameters θ and λ always exist for Cases I and II, because these two cases reduce to the conventional progressive Type-II and adaptive progressive Type-II censoring schemes, respectively; for Case-III, they exist under a mild condition on the observed data. It is observed from (8) that the MLE of λ has an explicit form given the MLE of θ; hence λ̂ exists and is unique whenever θ̂ exists and is unique. Taking the derivative of (9) with respect to θ, it can be expressed as the difference of two terms, Q_1(θ) and Q_2(θ). There exists a finite limit involving Σ_{i=1}^{J_2} log(T^*/x_i) such that lim_{θ→∞} Q_2(θ) > 0. Combining this with the fact that Q_1(θ) is monotone decreasing (to 0) while Q_2(θ) is monotone increasing guarantees the existence and uniqueness of the MLE of θ.
Using the invariance property of the MLEs, the MLEs R̂(t) and ĥ(t) (for any given time t > 0) of the RF and HRF are obtained by substituting θ̂ and λ̂ into (4) and (5), respectively.

ACIs via the LF
In this subsection, we obtain the 100(1 − τ)% ACIs for θ, λ, R(t) and h(t), based on the asymptotic normality of the MLEs. To construct these ACIs, we need the asymptotic variance-covariance (VC) matrix, obtained by inverting the Fisher information (FI) matrix. For this purpose, the second derivatives of (7) with respect to θ and λ are

∂²ℓ/∂θ² = −J_2/θ² − λ ϑ''(x, θ),   (11)

∂²ℓ/∂θ∂λ = −ϑ'(x, θ),   (12)

∂²ℓ/∂λ² = −J_2/λ²,   (13)

where ϑ''(x, θ) = Σ_{i=1}^{J_2} x_i^θ (log x_i)² + Σ_{i=1}^{J_1} R_i x_i^θ (log x_i)² + R^* T^{*θ} (log T^*)². The FI matrix is obtained by taking the expectations of the negatives of (11)-(13). Since the exact expectations are not easy to obtain, the observed FI matrix is used instead, and the approximate asymptotic VC matrix is

I^{−1}(θ̂, λ̂) = [ −∂²ℓ/∂θ²  −∂²ℓ/∂θ∂λ ; −∂²ℓ/∂λ∂θ  −∂²ℓ/∂λ² ]^{−1} |_{(θ, λ) = (θ̂, λ̂)} = [ V_θ̂  C_θ̂λ̂ ; C_λ̂θ̂  V_λ̂ ],   (14)

where V_θ̂ and V_λ̂ are the approximate variances of the MLEs of θ and λ, respectively, and C_θ̂λ̂ = C_λ̂θ̂ is the covariance between them. By the asymptotic normality of the MLEs, (θ̂, λ̂) ∼ N[(θ, λ), I^{−1}(θ̂, λ̂)], where I^{−1}(θ̂, λ̂) is given by (14). Therefore, the 100(1 − τ)% ACIs for θ and λ can be expressed, respectively, as

θ̂ ∓ z_{τ/2} √V_θ̂  and  λ̂ ∓ z_{τ/2} √V_λ̂,

where V_θ̂ and V_λ̂ are obtained from (14) and z_{τ/2} is the upper (τ/2)-th percentile of the standard normal distribution.
To construct the ACIs of R(t) and h(t), we need the variances of R̂(t) and ĥ(t). Here, we use the delta method to approximate these variances. The delta method approximates the standard error of a complicated statistical estimate by constructing a linear approximation of the function and taking the variance of the resulting linear form, which can then be used for large-sample inference; for more details, see [18]. Let

D_R = (∂R(t)/∂θ, ∂R(t)/∂λ)ᵀ and D_h = (∂h(t)/∂θ, ∂h(t)/∂λ)ᵀ,   (15)

evaluated at (θ̂, λ̂), where, from (4) and (5), ∂R(t)/∂θ = −λ t^θ log(t) e^{−λt^θ}, ∂R(t)/∂λ = −t^θ e^{−λt^θ}, ∂h(t)/∂θ = λ t^{θ−1}(1 + θ log t) and ∂h(t)/∂λ = θ t^{θ−1}. Then the approximate variances of R̂(t) and ĥ(t) are, respectively,

V_R̂ ≈ D_Rᵀ I^{−1}(θ̂, λ̂) D_R and V_ĥ ≈ D_hᵀ I^{−1}(θ̂, λ̂) D_h,

and the ACIs of R(t) and h(t) can subsequently be constructed as

R̂(t) ∓ z_{τ/2} √V_R̂ and ĥ(t) ∓ z_{τ/2} √V_ĥ.
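Because the observed information in (14) only requires second derivatives of (7), a numerical version takes a few lines. The following Python sketch (our own helper names; an actual analysis would typically use R's 'maxLik') approximates the Hessian by central finite differences, inverts the 2×2 negative Hessian to get the VC matrix, and applies the delta method to R(t).

```python
import math

Z = 1.959964  # z_{0.025}, for 95% ACIs

def loglik(theta, lam, xs, Rs, Rstar, Tstar):
    # log-likelihood of Eq. (7), up to an additive constant
    v = sum((1 + r) * x ** theta for x, r in zip(xs, Rs)) + Rstar * Tstar ** theta
    return (len(xs) * (math.log(theta) + math.log(lam))
            + (theta - 1) * sum(math.log(x) for x in xs) - lam * v)

def observed_vc(theta, lam, xs, Rs, Rstar, Tstar, h=1e-4):
    """Approximate VC matrix: inverse of the negative Hessian of the
    log-likelihood at (theta, lam), via central finite differences."""
    f = lambda a, b: loglik(a, b, xs, Rs, Rstar, Tstar)
    ftt = (f(theta + h, lam) - 2 * f(theta, lam) + f(theta - h, lam)) / h ** 2
    fll = (f(theta, lam + h) - 2 * f(theta, lam) + f(theta, lam - h)) / h ** 2
    ftl = (f(theta + h, lam + h) - f(theta + h, lam - h)
           - f(theta - h, lam + h) + f(theta - h, lam - h)) / (4 * h ** 2)
    det = ftt * fll - ftl ** 2
    return -fll / det, -ftt / det, ftl / det   # V_theta, V_lam, C_theta_lam

def delta_var_R(t, theta, lam, Vt, Vl, C):
    # delta method for R(t) = exp(-lam*t**theta): D' VC D with D the gradient
    Rt = math.exp(-lam * t ** theta)
    dth = -lam * t ** theta * math.log(t) * Rt
    dla = -t ** theta * Rt
    return dth * dth * Vt + dla * dla * Vl + 2 * dth * dla * C

def aci(est, var, z=Z):
    half = z * math.sqrt(var)
    return est - half, est + half
```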

Remark 2.2:
To avoid negative lower bounds in the ACIs of θ, λ, R(t) and h(t), one can obtain ACIs based on log-transformed (LT) MLEs, as reported by Meeker and Escobar [24]. The ACI of θ, λ, R(t) or h(t) (say ϕ) based on the LT MLE is

[ ϕ̂ exp(−z_{τ/2} √V_ϕ̂ / ϕ̂), ϕ̂ exp(z_{τ/2} √V_ϕ̂ / ϕ̂) ].

Product of spacing estimation
The MPS method was introduced independently by Cheng and Amin [8] and Ranneby [32] as an alternative to the method of maximum likelihood. Anatolyev and Kosenok [1] stated that the maximum product of spacing estimators (MPSEs) perform better than the MLEs in small samples from heavy-tailed or skewed distributions. The MPS method possesses nearly all the large-sample properties of the maximum likelihood method and retains most of its properties under more general conditions; for more details, see [9,10]. The MPSEs are obtained by choosing the parameter values that maximize the product of the spacings between the values of the distribution function at adjacent ordered points. Here, we use the MPS method to estimate the parameters, reliability and hazard rate functions of the Weibull distribution based on IAPC-TII data. Also, using the asymptotic normality of the MPSEs, the ACIs of the unknown quantities are obtained.

MPSEs
Let x = (x_{1:m:n}, ..., x_{J_1:m:n}, ..., x_{J_2:m:n}) be an IAPC-TII sample with censoring scheme R = (R_1, ..., R_{J_1}, 0, ..., 0, R^*). The MPSEs are obtained by maximizing the geometric mean of the spacings or, equivalently, the PS function

P(θ, λ | x) ∝ Π_{i=1}^{J̃_2} [F(x_i) − F(x_{i−1})] Π_{i=1}^{J_1} [1 − F(x_i)]^{R_i} [1 − F(T^*)]^{R^*},   (16)

where J̃_2 = J_2 + 1, with the conventions x_0 = 0 and F(x_{J̃_2}) = 1. Using (2), (3) and (16), the PS function can be written as

P(θ, λ | x) ∝ Π_{i=1}^{J̃_2} [e^{−λ x_{i−1}^θ} − e^{−λ x_i^θ}] exp[−λ(Σ_{i=1}^{J_1} R_i x_i^θ + R^* T^{*θ})].   (17)

The natural logarithm of (17), without the normalizing constant, is

s(θ, λ) = Σ_{i=1}^{J̃_2} log D_i(θ, λ) − λ (Σ_{i=1}^{J_1} R_i x_i^θ + R^* T^{*θ}),   (18)

where D_i(θ, λ) = e^{−λ x_{i−1}^θ} − e^{−λ x_i^θ}. Differentiating (18) with respect to θ and λ gives

∂s/∂θ = Σ_{i=1}^{J̃_2} λ [x_i^θ log(x_i) e^{−λ x_i^θ} − x_{i−1}^θ log(x_{i−1}) e^{−λ x_{i−1}^θ}] / D_i(θ, λ) − λ (Σ_{i=1}^{J_1} R_i x_i^θ log x_i + R^* T^{*θ} log T^*)   (19)

and

∂s/∂λ = Σ_{i=1}^{J̃_2} [x_i^θ e^{−λ x_i^θ} − x_{i−1}^θ e^{−λ x_{i−1}^θ}] / D_i(θ, λ) − (Σ_{i=1}^{J_1} R_i x_i^θ + R^* T^{*θ}),   (20)

with the convention that the terms involving x_0 and x_{J̃_2} vanish. The MPSEs of θ and λ, denoted by θ̃ and λ̃, are obtained by solving (19) and (20) simultaneously after equating them to zero. These equations cannot be solved explicitly; therefore, a numerical method must be used to obtain θ̃ and λ̃. Cheng and Traylor [10] demonstrated that the MPSEs are consistent and have asymptotic properties similar to those of the MLEs, and they share the invariance property of the MLEs. Based on the invariance principle, the MPSEs of R(t) and h(t) can therefore be obtained directly.
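The log-spacing objective in (18) can also be maximized directly. The Python sketch below (our own naming; a simple grid-refinement search stands in for the Newton-type routines an actual analysis would use) evaluates the log of the PS function with the conventions x_0 = 0 and F at the last spacing equal to 1, and locates the MPSEs.

```python
import math

def log_ps(theta, lam, xs, Rs, Rstar, Tstar):
    """Log product-of-spacings objective, cf. Eq. (18): spacings of F at the
    ordered failures plus the censoring contributions."""
    F = lambda x: 1.0 - math.exp(-lam * x ** theta)
    Fs = [0.0] + [F(x) for x in xs] + [1.0]      # F(x_0)=0, F(x_{J2+1})=1
    s = sum(math.log(max(b - a, 1e-300)) for a, b in zip(Fs, Fs[1:]))
    s += sum(r * math.log(max(1.0 - F(x), 1e-300)) for x, r in zip(xs, Rs))
    s += Rstar * math.log(max(1.0 - F(Tstar), 1e-300))
    return s

def mpse(xs, Rs, Rstar, Tstar, lo=0.1, hi=5.0, rounds=6, grid=25):
    """Crude grid-refinement maximizer of log_ps over (theta, lambda)."""
    tlo, thi, llo, lhi = lo, hi, lo, hi
    best = (-math.inf, None, None)
    for _ in range(rounds):
        ts = [tlo + i * (thi - tlo) / (grid - 1) for i in range(grid)]
        ls = [llo + i * (lhi - llo) / (grid - 1) for i in range(grid)]
        best = max((log_ps(t, l, xs, Rs, Rstar, Tstar), t, l)
                   for t in ts for l in ls)
        _, t, l = best
        dt, dl = (thi - tlo) / (grid - 1), (lhi - llo) / (grid - 1)
        tlo, thi = max(t - dt, 1e-3), t + dt     # shrink the box around the max
        llo, lhi = max(l - dl, 1e-3), l + dl
    return best[1], best[2]
```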

ACIs via the PS function
Here, we obtain the 100(1 − τ)% ACIs for θ, λ, R(t) and h(t). As in the MLE case, the ACIs are obtained using the asymptotic normality of the MPSEs. The approximate asymptotic VC matrix of the MPSEs requires the second derivatives of (18) with respect to θ and λ, namely ∂²s/∂θ², ∂²s/∂θ∂λ and ∂²s/∂λ², given in (21)-(23) and obtained by direct differentiation of (19) and (20). From (21)-(23), the approximate asymptotic VC matrix is obtained as in (24), where V_θ̃ and V_λ̃ are the approximate variances of the MPSEs of θ and λ, respectively, and C_θ̃λ̃ = C_λ̃θ̃ is their covariance. It follows from the asymptotic normality of the MPSEs that (θ̃, λ̃) ∼ N[(θ, λ), I^{−1}(θ̃, λ̃)], where I^{−1}(θ̃, λ̃) is as in (24). The 100(1 − τ)% ACIs for θ and λ are then

θ̃ ∓ z_{τ/2} √V_θ̃ and λ̃ ∓ z_{τ/2} √V_λ̃.

Similar to the MLE case, we use the delta method to approximate the variances of R̃(t) and h̃(t): with D_R and D_h of (15) evaluated locally at (θ̃, λ̃), the approximate variances are

V_R̃ ≈ D_Rᵀ I^{−1}(θ̃, λ̃) D_R and V_h̃ ≈ D_hᵀ I^{−1}(θ̃, λ̃) D_h,

and the ACIs of R(t) and h(t) follow as

R̃(t) ∓ z_{τ/2} √V_R̃ and h̃(t) ∓ z_{τ/2} √V_h̃.

As in the MLE case, one can obtain ACIs based on the LT MPSEs to avoid negative lower bounds for θ, λ, R(t) and h(t). To evaluate both point and interval classical inferences of the unknown parameters, we recommend the 'maxLik' package of Henningsen and Toomet [20] in R, which employs the 'maxNR()' maximization function.

Bayesian estimation
In this section, we consider Bayesian estimation of the parameters, RF and HRF of the WD based on IAPC-TII data. Bayesian estimation is used extensively in the literature to estimate the parameters of various lifetime models; see, for example, [14,26]. Most of these studies are confined to obtaining BEs based on the LF; besides this usual approach, we also obtain the BEs using the PS function. These estimators are derived under the SE loss function, the most widely used symmetric loss function, under which the BE is the posterior mean. We assume that θ and λ are independent and a priori distributed as gamma distributions, so the joint prior distribution of θ and λ can be written as

π(θ, λ) ∝ θ^{a_1−1} e^{−b_1 θ} λ^{a_2−1} e^{−b_2 λ},  θ, λ > 0,  a_i, b_i > 0, i = 1, 2.   (25)

Gamma priors are assumed here because they match the support of the WD parameters and are more flexible than many other prior choices; moreover, independent gamma priors are relatively tractable and do not lead to overly complex posterior expressions or computational issues. Next, we derive the joint posterior distribution and obtain the BEs of θ, λ, R(t) and h(t) using the LF and PS function approaches.

Bayes estimators via the LF
Using (6) and (25), the joint posterior distribution of θ and λ can be written as

π(θ, λ | x) = A^{−1} θ^{J_2+a_1−1} λ^{J_2+a_2−1} exp[−b_1 θ + (θ − 1) Σ_{i=1}^{J_2} log x_i − λ(b_2 + ϑ(x, θ))],   (26)

where A is the normalizing constant. Let φ(θ, λ) be any function of θ and λ. Then the BE of φ(θ, λ) under the SE loss function, denoted by φ̂_B(θ, λ), is

φ̂_B(θ, λ) = E[φ(θ, λ) | x] = ∫∫ φ(θ, λ) π(θ, λ | x) dθ dλ.   (27)

Clearly, the BE in (27) is a ratio of two integrals that cannot be obtained analytically. Therefore, we use the MCMC technique to compute the BEs of θ, λ, R(t) and h(t) and the corresponding HPD credible intervals. From (26), the conditional posterior distributions of θ and λ are, respectively,

π(θ | λ, x) ∝ θ^{J_2+a_1−1} exp[−b_1 θ + (θ − 1) Σ_{i=1}^{J_2} log x_i − λ ϑ(x, θ)]   (28)

and

π(λ | θ, x) ∝ λ^{J_2+a_2−1} e^{−λ(b_2 + ϑ(x, θ))}.   (29)

It is to be noted from (29) that the conditional posterior of λ is a gamma distribution with shape parameter (a_2 + J_2) and rate parameter (b_2 + ϑ(x, θ)); therefore, samples of λ can be generated easily. On the other hand, the conditional posterior of θ in (28) cannot be reduced to any well-known distribution. To overcome this, we use Metropolis-Hastings (M-H) within Gibbs sampling to generate random samples of θ and λ from (28) and (29), respectively, following these steps:
Step 1. Set j = 1 and start with initial values (θ^{(0)}, λ^{(0)}) = (θ̂, λ̂).
Step 2. Generate λ^{(j)} from the gamma distribution in (29) given θ^{(j−1)}.
Step 3. Generate θ^{(j)} from (28) given λ^{(j)} using an M-H step with a normal proposal.
Step 4. Compute R^{(j)}(t) and h^{(j)}(t) from (4) and (5) at (θ^{(j)}, λ^{(j)}).
Step 5. Set j = j + 1 and repeat Steps 2-4 a large number of times, discarding an initial burn-in period.
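The sampler can be sketched as follows. This Python sketch uses our own naming, and the proposal standard deviation of 0.25 is an arbitrary illustrative choice; it draws λ exactly from the gamma conditional (29) and updates θ by a random-walk M-H step on (28).

```python
import math
import random

def gibbs_mh(xs, Rs, Rstar, Tstar, a1, b1, a2, b2,
             n_iter=2000, burn=500, seed=7, step=0.25):
    """M-H within Gibbs for the LF-based posterior: lambda | theta is gamma
    (shape a2+J2, rate b2+vartheta); theta | lambda is updated by M-H."""
    rng = random.Random(seed)
    J2 = len(xs)
    slog = sum(math.log(x) for x in xs)
    def vth(th):
        return sum((1 + r) * x ** th for x, r in zip(xs, Rs)) + Rstar * Tstar ** th
    def log_cond_theta(th, lam):
        if th <= 0:
            return -math.inf
        return (a1 + J2 - 1) * math.log(th) - b1 * th + (th - 1) * slog - lam * vth(th)
    theta, lam, draws = 1.0, 1.0, []
    for it in range(n_iter):
        # Gibbs step: exact draw from the gamma conditional of lambda
        lam = rng.gammavariate(a2 + J2, 1.0 / (b2 + vth(theta)))
        # M-H step: normal random-walk proposal for theta
        prop = theta + rng.gauss(0.0, step)
        logu = math.log(max(rng.random(), 1e-300))
        if logu < log_cond_theta(prop, lam) - log_cond_theta(theta, lam):
            theta = prop
        if it >= burn:
            draws.append((theta, lam))
    t_hat = sum(t for t, _ in draws) / len(draws)   # SE-loss BE = posterior mean
    l_hat = sum(l for _, l in draws) / len(draws)
    return t_hat, l_hat, draws
```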

Bayes estimators via the PS function
Cheng and Amin [8] originally proposed replacing the LF with the PS function, introducing the MPS method as an alternative that retains many of the useful properties of maximum likelihood. Owing to some shortcomings of the LF, such as (1) being unbounded in some cases and (2) being sensitive to outliers, Coolen and Newby [11] suggested the PS function as an alternative to the LF in Bayesian inference, appealing to the asymptotic equivalence of P(θ, λ | x) and L(θ, λ | x). They showed that the PS function can be employed in place of the LF without losing the structure and properties of the Bayesian procedure. Coolen and Newby [12] showed that no serious practical issues arise from using the PS function in place of the LF in Bayesian estimation: although the posterior distribution based on the PS function differs from that based on the LF, it is asymptotically equivalent to the posterior obtained in the usual way; see also Singh et al. [34]. Recently, many authors have investigated Bayesian estimation for lifetime distributions using the PS function; see, for example, [5,6,27]. Here, we obtain the BEs of θ, λ, R(t) and h(t) by using the PS function in place of the LF. Combining (17) and (25), the joint posterior distribution of θ and λ is

π_P(θ, λ | x) = A_P^{−1} θ^{a_1−1} λ^{a_2−1} e^{−b_1 θ − b_2 λ} P(θ, λ | x),   (30)

where A_P is the normalizing constant. As in the usual Bayesian approach, the BE of φ(θ, λ) under the SE loss function, denoted by φ̃_B(θ, λ), is

φ̃_B(θ, λ) = ∫∫ φ(θ, λ) π_P(θ, λ | x) dθ dλ.   (31)

Again, (31) cannot be obtained analytically; therefore, the MCMC technique is implemented to obtain the BEs in this case. The conditional posterior distributions of θ and λ are, respectively,

π_P(θ | λ, x) ∝ θ^{a_1−1} e^{−b_1 θ} P(θ, λ | x)   (32)

and

π_P(λ | θ, x) ∝ λ^{a_2−1} e^{−b_2 λ} P(θ, λ | x).   (33)

The conditional posteriors in (32) and (33) do not match any well-known distributions, so the M-H sampling technique is used to generate samples for both θ and λ:
Step 1. Set j = 1 and start with initial values (θ^{(0)}, λ^{(0)}) = (θ̃, λ̃).
Step 2. Generate θ^{(j)} and λ^{(j)} from (32) and (33), respectively, using M-H steps with normal proposals.
Step 3. Set j = j + 1 and repeat Step 2 a large number of times, discarding an initial burn-in period.

Elicitation of prior-parameter values
The elicitation process used to identify the hyper-parameter values is a major issue in Bayesian analysis; this problem has been investigated by many authors, see, for instance, [13,23]. Here, we suggest the following past-samples algorithm to determine the values of the hyper-parameters (a_1, b_1) and (a_2, b_2) of θ and λ, respectively:
Step 1: Set the parameter values of θ and λ.
Step 2: Draw a random sample of size n from WD(θ, λ).
Step 3: Compute the estimates θ̂^{(j)} and λ̂^{(j)}.
Step 4: Repeat Steps 2-3 to obtain θ̂^{(j)} and λ̂^{(j)}, j = 1, 2, ..., B.
Step 5: Equate the mean and variance of θ̂^{(j)}, j = 1, ..., B, to the mean and variance of the corresponding gamma prior, i.e.

(1/B) Σ_{j=1}^{B} θ̂^{(j)} = a_1/b_1 and (1/(B − 1)) Σ_{j=1}^{B} (θ̂^{(j)} − θ̄)² = a_1/b_1², where θ̄ = (1/B) Σ_{j=1}^{B} θ̂^{(j)},   (34)

and similarly for λ̂^{(j)} with (a_2, b_2) in (35).
Step 6: Solving (34), the estimated hyper-parameter values of a_1 and b_1 for θ are

â_1 = θ̄² / [(1/(B − 1)) Σ_{j=1}^{B} (θ̂^{(j)} − θ̄)²] and b̂_1 = θ̄ / [(1/(B − 1)) Σ_{j=1}^{B} (θ̂^{(j)} − θ̄)²];

solving (35) gives (â_2, b̂_2) for λ analogously. Similarly, one can repeat the above steps using the MPSEs of θ and λ to obtain the hyper-parameter values.
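The moment-matching step above amounts to matching the sample mean and variance of the past-sample estimates to a Gamma(a, b) prior with mean a/b and variance a/b², giving a = mean²/variance and b = mean/variance. A minimal sketch (our own function name):

```python
def gamma_hyper(estimates):
    """Match the sample mean/variance of past-sample estimates to a
    Gamma(a, b) prior (shape a, rate b): a = mean^2/var, b = mean/var."""
    B = len(estimates)
    m = sum(estimates) / B
    v = sum((e - m) ** 2 for e in estimates) / (B - 1)
    return m * m / v, m / v
```

For example, past estimates with mean 1.0 and variance 0.025 yield a = b = 40, i.e. a prior tightly concentrated around 1.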

Monte Carlo simulation
To evaluate the behavior of the theoretical results obtained in the previous sections, including the classical and Bayes estimates and the associated confidence/credible intervals, an extensive Monte Carlo simulation study is performed. For this purpose, 1,000 IAPC-TII samples are generated with θ = λ = 0.75; at the mission time t = 0.25, the corresponding R(t) and h(t) are 0.767079 and 0.795495, respectively. To generate an IAPC-TII sample from the proposed model, we use the following algorithm:
Step 1. Set the parameter values of θ and λ.
Step 2. Generate a conventional progressive Type-II censored sample of size m from WD(θ, λ) with the prespecified scheme R.
Step 3. Determine d_1 at the given time T_1.
Step 4. Determine d_2 at the given time T_2.
Step 5. Under an IAPCS-TII, the sample data consist of one of the following cases:
(a) If X_m < T_1, the experiment stops at X_m with failure times X_i, i = 1, 2, ..., m, and the remaining n − m − Σ_{i=1}^{m−1} R_i units are removed from the test (Case-I).
(b) If T_1 < X_m < T_2, the experiment stops at X_m with failure times X_i, i = 1, 2, ..., m, the removals after T_1 being set to zero, and the remaining R^* = n − m − Σ_{i=1}^{d_1} R_i units are removed at X_m (Case-II).
(c) If X_m > T_2, the experiment stops at T_2 with the d_2 observed failure times, and the remaining R^* = n − d_2 − Σ_{i=1}^{d_1} R_i units are removed at T_2 (Case-III).
In the Bayesian paradigm, to assign values for the hyper-parameters a_i, b_i, i = 1, 2, of the gamma priors in (25), we use the past-sample procedure: 10,000 complete samples, each of size 50 (say), are generated from WD(θ, λ) as past samples for each plausible value of θ and λ, and the hyper-parameter values are then obtained from (34) and (35). Using the hybrid Gibbs within M-H sampler described in Section 4, 12,000 MCMC samples are generated with a burn-in period of 2,000; hence, the average Bayes MCMC estimates and 95% two-sided HPD credible intervals are computed from 10,000 MCMC samples. To run the MCMC sampler, the initial values of the unknown parameters were taken to be their frequentist estimates. For each setting, we compute the average estimate of each unknown quantity φ_η, η = 1, 2, 3, 4, where φ_1 = θ, φ_2 = λ, φ_3 = R(t) and φ_4 = h(t), together with its RMSE and MRAB:

RMSE(φ̂_η) = √[(1/G) Σ_{j=1}^{G} (φ̂_η^{(j)} − φ_η)²] and MRAB(φ̂_η) = (1/G) Σ_{j=1}^{G} |φ̂_η^{(j)} − φ_η| / φ_η,

where φ̂_η^{(j)} is the classical or Bayes estimate of φ_η obtained from the j-th sample and G is the number of generated data sets. Further, the corresponding ACLs and average coverage probabilities (ACPs) of the ACI/HPD credible intervals of φ_η are obtained as

ACL(φ_η) = (1/G) Σ_{j=1}^{G} [U(φ̂_η^{(j)}) − L(φ̂_η^{(j)})] and ACP(φ_η) = (1/G) Σ_{j=1}^{G} 1(L(φ̂_η^{(j)}) ≤ φ_η ≤ U(φ̂_η^{(j)})),

respectively, where 1(·) is the indicator function and L(·) and U(·) denote the lower and upper bounds of the 100(1 − τ)%
asymptotic (or credible) interval of φ_η. Comparisons between the point estimates are made via their RMSE and MRAB values, and the 95% two-sided ACI/HPD credible intervals are compared via their ACLs and ACPs. The average point estimates (with their RMSEs and MRABs) and ACLs (with their ACPs) of θ, λ, R(t) and h(t) are calculated and reported in the tables provided as supplementary material. All numerical computations were performed in the R statistical programming language, version 4.0.4, using two useful packages, namely the 'coda' package of Plummer et al. [30] and the 'maxLik' package of Henningsen and Toomet [20]. All R code scripts are available upon request from the corresponding author. From the simulation results, we make the following observations. Using the LF and PS function, the frequentist and Bayes estimates of the unknown parameters and the related reliability characteristics are very satisfactory in terms of minimum RMSEs and MRABs. In addition, as the failure percentage m/n increases, the point estimates improve; thus, to obtain more accurate results, one may increase the effective sample size. For fixed n and m, when the total number of progressively censored units decreases, the RMSEs and MRABs of all unknown parameters are reduced significantly. In most cases, when the thresholds T_i, i = 1, 2, increase, the RMSEs and MRABs of all unknown parameters decrease. Because the BEs incorporate prior information that the other estimates lack, the Bayes MCMC estimates based on the LF (or PS function) outperform the MLEs (or MPSEs) with respect to the smallest RMSEs and MRABs.
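The four performance measures above reduce to a few lines of code. A Python sketch (hypothetical function names) for a single quantity:

```python
def rmse(ests, true):
    # root mean square error over G replications
    return (sum((e - true) ** 2 for e in ests) / len(ests)) ** 0.5

def mrab(ests, true):
    # mean relative absolute bias
    return sum(abs(e - true) for e in ests) / (len(ests) * abs(true))

def acl(intervals):
    # average confidence length of (lower, upper) pairs
    return sum(u - l for l, u in intervals) / len(intervals)

def acp(intervals, true):
    # average coverage probability: fraction of intervals covering the truth
    return sum(1 for l, u in intervals if l <= true <= u) / len(intervals)
```

For instance, two estimates 1 and 3 of a true value 2 give RMSE 1.0 and MRAB 0.5.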
Comparing the three competing censoring schemes, the RMSEs and MRABs associated with θ, λ, R(t) and h(t) for scheme 3 are smaller than those based on the other schemes. The simulation results show that the MLEs of θ, λ and R(t) perform better than the MPSEs, while the MPSE of h(t) performs better than the MLE; however, when T_i, i = 1, 2, increases, the MPSE of λ becomes even better than the MLE. It is also observed that the BEs based on the LF of θ, λ, R(t) and h(t) perform better than the BEs based on the PS function, and both are more informative than the MLEs and MPSEs. The ACLs of the ACI/HPD credible intervals narrow, while the corresponding ACPs increase, as the failure percentage m/n increases, as expected. Moreover, as T_i, i = 1, 2, increases, the ACLs tend to decrease while the associated ACPs tend to increase. In addition, the HPD credible intervals for all unknown parameters perform better than the ACIs with respect to the shortest ACLs and highest ACPs.
It is also noted that the ACLs of the ACI/HPD credible intervals for the shape parameter θ are narrower under scheme 3 than under the other competing schemes, while those associated with λ, R(t) and h(t) are narrower under scheme 2 than under any other scheme. In most cases, the ACPs of the ACI/HPD credible intervals for θ, λ, R(t) and h(t) are close to (or greater than) the specified nominal level. Furthermore, in some cases, the HPD credible intervals (using the LF) of θ, λ, R(t) and h(t) are even better than those obtained from the PS function with respect to their ACLs and ACPs. To sum up, the simulation results show that both Bayes point and credible interval estimates are preferable to those obtained under the classical approach in terms of RMSEs and MRABs (for point estimates) and ACLs and ACPs (for interval estimates). Consequently, Bayesian MCMC estimation using the hybrid Gibbs within M-H sampler is recommended for the unknown parameters of the WD under an IAPCS-TII.

Optimum progressive censoring
To conduct an experiment using an IAPCS-TII, the choices of n, m, T_i, i = 1, 2, and R_i, i = 1, 2, ..., m, must be fixed in advance. Selecting the optimum progressive censoring scheme, from the set of all possible schemes, that carries the greatest amount of information about the model parameter(s) of interest is a significant objective for reliability practitioners. Recently, the problem of comparing two (or more) competing censoring plans has attracted the attention of many authors; see, for example, [2,15,17,31], among others. For single-parameter distributions, the variance optimality criterion is widely employed, while for multi-parameter distributions the trace and determinant optimality criteria are used. To decide the optimum progressive censoring scheme, several widespread criteria, for selected values of n, m, T_i, i = 1, 2, and R_i, i = 1, 2, ..., m, are considered in Table 2. Regarding criteria I and II, the objective is to minimize the determinant and trace of the VC matrices of the MLEs and MPSEs, respectively. Comparing the two observed VC matrices is not a trivial task because criteria I and II are not scale-invariant; for details, see [19]. Nevertheless, one can select the optimal censoring scheme for multi-parameter distributions using criteria III and IV, which are scale-invariant.
It is important to mention here that the breve symbol in Table 2 refers to the MLE or MPSE of the unknown parameter, and the different criteria in Table 2 are evaluated at the MLEs (or MPSEs). Minimizing the variance of the logarithmic p-th quantile estimator, log(T̂_p), 0 < p < 1, depends on the choice of p, as in Criterion-III. For Criterion-IV, the weight w(p) ≥ 0 is a non-negative function satisfying ∫_0^1 w(p) dp = 1. From (3), the logarithm of T_p for the WD is given by

log(T_p) = (1/θ) [log(−log(1 − p)) − log λ].   (36)

Using (36), the delta method is used to approximate the variance estimate of log(T̂_p) as

Var[log(T̂_p)] ≈ Dᵀ_{log T_p} I^{−1} D_{log T_p},

where D_{log T_p} is the gradient of log(T_p) with respect to θ and λ, evaluated at the estimates. Obviously, the optimum progressive censoring scheme corresponds to the lowest value of the criteria presented in Table 2.
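Under the Weibull model, T_p = [−log(1 − p)/λ]^{1/θ}, so the gradient in (36) has the closed form ∂log(T_p)/∂θ = −log(−log(1 − p)/λ)/θ² and ∂log(T_p)/∂λ = −1/(θλ). A small Python sketch of the resulting Criterion-III objective (our own function name; Vt, Vl, C are the entries of the estimated VC matrix):

```python
import math

def var_log_Tp(p, theta, lam, Vt, Vl, C):
    """Delta-method variance of log(T_p) for the Weibull quantile
    log T_p = (1/theta) * log(-log(1-p)/lam), cf. Eq. (36)."""
    q = -math.log(1.0 - p)
    d_theta = -math.log(q / lam) / theta ** 2   # d log(T_p) / d theta
    d_lam = -1.0 / (theta * lam)                # d log(T_p) / d lambda
    return d_theta ** 2 * Vt + d_lam ** 2 * Vl + 2 * d_theta * d_lam * C
```

The optimal scheme under Criterion-III is the one minimizing this quantity for the chosen p.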

Data analysis
To show the adaptability of the proposed methodologies to a real phenomenon, an engineering application using a real-life data set is analyzed. We use a data set originally reported by Murthy et al. [25], which gives the time between failures for 30 items of repairable mechanical equipment (RME); it is presented in Table 3. Before analyzing these data, we check whether the proposed model is appropriate for them. For this purpose, using the complete RME data, the MLEs of the unknown parameters θ and λ are calculated and used to compute the Kolmogorov-Smirnov (K-S) distance along with the associated p-value. The MLEs (with their standard errors (SEs)) of the Weibull parameters θ and λ are 1.4633 (0.2029) and 0.4561 (0.1141), respectively, while the K-S distance is 0.075 with a p-value of 0.996. We also examine the validity of the Burr-XII distribution as a competitive model, with CDF F(x) = 1 − (1 + x α ) −β , x > 0, α, β > 0, where α and β are shape parameters. The MLEs of α and β are 2.37 and 0.809, and the corresponding K-S distance and p-value are 0.116 and 0.813, respectively. These results indicate that the WD fits the RME data set quite well.
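A goodness-of-fit check of this kind can be reproduced with `scipy`. Since the RME data of Table 3 are not reproduced here, the sample below is a placeholder drawn from a Weibull model; note also that `weibull_min` uses a shape/scale parameterization, which need not coincide with the (θ, λ) parameterization of the paper.

```python
import numpy as np
from scipy import stats

# Placeholder sample standing in for the n = 30 RME observations.
rng = np.random.default_rng(1)
data = stats.weibull_min.rvs(1.5, scale=2.0, size=30, random_state=rng)

# Fit a two-parameter Weibull by ML (location fixed at 0), then run the
# K-S test against the fitted CDF.
shape, loc, scale = stats.weibull_min.fit(data, floc=0)
ks = stats.kstest(data, stats.weibull_min(shape, loc, scale).cdf)
```

A large p-value (as for the RME data, p = 0.996) indicates no evidence against the Weibull model; the same call with a fitted Burr-XII CDF gives the competing check.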
The different estimates of θ, λ, R(t) and h(t) are calculated and reported in Table 4. For further illustration, the relative histogram of the RME data with the fitted PDF, as well as the empirical and fitted RFs of the WD, obtained from the complete RME sample under the four proposed methods, are displayed in Figure 2; the graphical presentations support the numerical findings. Next, from the complete RME data set, three IAPCS-TII samples are generated with m = 15 and different choices of R and T i , i = 1, 2. The generated samples and the corresponding censoring schemes are reported in Table 5. For brevity, the censoring scheme R = (2, 0, 0, 0, 2) is denoted by R = (2, 0 * 3, 2).
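Generating a progressively censored sample from a complete data set follows the standard construction: after each observed failure, the prescribed number of surviving units is withdrawn at random. The sketch below covers only this core step; the IAPCS-TII refinement (switching behaviour at the thresholds T1 and T2) is omitted for brevity.

```python
import numpy as np

def progressive_type2_sample(full_data, R, rng=None):
    """Draw a progressive Type-II censored sample from a complete sample:
    after the i-th observed failure, R[i] surviving units are removed at
    random.  Requires len(full_data) == len(R) + sum(R)."""
    rng = np.random.default_rng(rng)
    alive = sorted(full_data)
    observed = []
    for r in R:
        observed.append(alive.pop(0))     # next failure among survivors
        for _ in range(r):                # withdraw r survivors at random
            alive.pop(rng.integers(len(alive)))
    return observed

# Example: m = 15 observed failures from a complete sample of n = 30,
# withdrawing one survivor after each failure (scheme R = (1*15)).
sample = progressive_type2_sample(range(30), [1] * 15, rng=1)
```

With the scheme R = (2, 0*3, 2) of the text one would pass `[2, 0, 0, 0, 2]` (and m = 5 failures from n = 9 units).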
Using the generated samples presented in Table 5, the BEs and the associated HPD credible intervals of the unknown parameters θ and λ, as well as the reliability characteristics R(t) and h(t) at mission time t = 1, are calculated by running the MCMC sampler for 30,000 iterations and discarding the first 5,000 values as a burn-in period. Because we lack prior information about the unknown parameters θ and λ, we prefer improper gamma priors, i.e. all hyper-parameters a i , b i , i = 1, 2, set equal to zero; for computational reasons, however, we take 0.0001 for all hyper-parameters. In Table 6, the classical estimates (MLEs and MPSEs) and BEs of θ, λ, R(t) and h(t), with their standard errors, are reported. The 95% two-sided ACI/HPD credible intervals of θ, λ, R(t) and h(t), with their lengths, are listed in Table 7. Tables 6 and 7 show that the point and interval estimates obtained via MCMC based on the LF and PS functions outperform the classical estimates in terms of minimum standard errors and interval lengths. To examine the existence and uniqueness of the MLEs and MPSEs of θ and λ, the contour plots of the natural logarithms of the LF and PS function based on sample 1 are displayed in Figure 3; they indicate that the MLEs and MPSEs of θ and λ exist and are unique.
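The HPD credible interval can be computed directly from the retained MCMC draws as the shortest interval containing the desired posterior mass. A minimal sketch of this standard sample-based construction (the gamma chain below is a stand-in for the actual posterior draws):

```python
import numpy as np

def hpd_interval(chain, cred=0.95):
    """Shortest interval containing a fraction `cred` of the sorted draws
    (the usual sample-based HPD construction)."""
    draws = np.sort(np.asarray(chain))
    n = len(draws)
    k = int(np.ceil(cred * n))                 # draws inside the interval
    widths = draws[k - 1:] - draws[:n - k + 1] # width of each candidate
    j = int(np.argmin(widths))                 # shortest one wins
    return draws[j], draws[j + k - 1]

# Usage: discard the burn-in, then compute the HPD bounds.
rng = np.random.default_rng(0)
chain = rng.gamma(2.0, 1.0, size=30_000)   # stand-in posterior draws
post = chain[5_000:]                       # 5,000 burn-in, as in the text
lo, hi = hpd_interval(post, 0.95)
```

For skewed posteriors (as found below for λ and h(t)), this interval is shorter than the equal-tailed quantile interval.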
Moreover, using sample 1, trace plots of the 25,000 retained chain values of θ, λ, R(t) and h(t) are drawn to assess the convergence of the MCMC procedure. In each trace plot, the sample mean and the two bounds of the 95% HPD credible interval are marked by solid (-) and dashed (---) lines, respectively. The plots imply that the MCMC algorithm converges well and that discarding the first 5,000 samples is sufficient to erase the influence of the starting values. Furthermore, based on the 25,000 retained MCMC samples, the marginal PDFs of θ, λ, R(t) and h(t) are plotted with their histograms using a Gaussian kernel. The generated posteriors of θ and R(t) are fairly symmetric, while those of λ and h(t) are quite positively skewed, for both types of MCMC samples. In addition, some important characteristics of the MCMC outputs of θ, λ, R(t) and h(t) after burn-in, namely the mean, median, mode, standard deviation (SD) and skewness (Sk), are calculated and reported in Table 8.
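The posterior summaries of Table 8 are all one-liners given the chain; the mode is the only one requiring a smoothing choice, here taken (as in the text's density plots) from a Gaussian kernel estimate. A sketch, with a stand-in chain:

```python
import numpy as np
from scipy import stats

def mcmc_summary(chain):
    """Mean, median, mode (Gaussian-KDE peak), SD and skewness of a chain,
    mirroring the quantities reported in Table 8."""
    chain = np.asarray(chain)
    kde = stats.gaussian_kde(chain)
    grid = np.linspace(chain.min(), chain.max(), 512)
    return {
        "mean": chain.mean(),
        "median": np.median(chain),
        "mode": grid[np.argmax(kde(grid))],
        "SD": chain.std(ddof=1),
        "Sk": stats.skew(chain),
    }

# Stand-in for a retained (post burn-in) chain of 25,000 draws.
rng = np.random.default_rng(2)
summ = mcmc_summary(rng.gamma(2.0, 1.0, size=25_000))
```

For a right-skewed posterior such as this one, mode < median < mean and Sk > 0, matching the qualitative pattern described above for λ and h(t).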
For brevity, the MCMC plots are available as supplementary materials.
To illustrate the idea of an optimal censoring scheme, the various criteria mentioned in Table 2 are evaluated for the three generated samples. Criteria I and II use the determinant and trace of the observed VC matrices of the MLEs and MPSEs, respectively. Further, for the quantiles p = (0.3, 0.6, 0.9), criteria III and IV are calculated based on the MLEs and MPSEs. Without loss of generality, the weight function w(p), 0 < p < 1, is taken equal to one; see [17] for details. Based on the MLEs and MPSEs of θ and λ, the optimality criteria for the three generated samples are computed and reported in Table 9; the optimal censoring scheme attains the lowest value of each criterion. From Table 9, the LF and PS function approaches behave identically for each criterion. Also, R = (0 * 5, 3 * 5, 0 * 5) is the optimal scheme under criteria I, II and IV, while R = (3 * 5, 0 * 10) is the optimal scheme under criterion III. It is further observed that the values of criteria III and IV under the PS function approach are smaller than those under the LF approach. These optimum progressive censoring plans support our findings in Section 5.
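The selection step itself is a simple arg-min over the tabulated criterion values. A sketch with hypothetical numbers (deliberately not those of Table 9, which are not reproduced here):

```python
# Hypothetical criterion values per candidate scheme (illustrative only).
criteria = {
    "(3*5, 0*10)":     {"I": 1.1e-3, "II": 0.21, "III": 0.031, "IV": 0.055},
    "(0*5, 3*5, 0*5)": {"I": 0.9e-3, "II": 0.18, "III": 0.042, "IV": 0.049},
    "(0*10, 3*5)":     {"I": 1.4e-3, "II": 0.24, "III": 0.047, "IV": 0.061},
}

# The optimal scheme minimizes each criterion separately; note that the
# winner may differ across criteria, exactly as observed in Table 9.
best = {c: min(criteria, key=lambda s: criteria[s][c])
        for c in ("I", "II", "III", "IV")}
```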

Conclusion
In this paper, the estimation problems of the unknown parameters, reliability, and hazard rate functions of the Weibull distribution are investigated based on an improved adaptive progressive Type-II censoring scheme. The maximum likelihood and maximum product of spacing methods are employed as classical estimation approaches. Bayesian estimation based on these two approaches is also considered. The Bayesian estimates are obtained under the assumption of independent gamma priors and the squared error loss function. The asymptotic properties of the maximum likelihood and maximum product of spacing estimates are used to construct the approximate confidence intervals of the unknown parameters as well as the reliability and hazard rate functions.
In the Bayesian paradigm, the point estimates are obtained using MCMC techniques, and the highest posterior density credible intervals are computed using the same procedure. To compare the behavior of the different estimates, a simulation study is conducted under various scenarios. The simulation results indicated that the Bayesian estimates performed better than the classical estimates in terms of minimum root mean square error, relative absolute bias, and average confidence length. Furthermore, we have provided the optimal censoring scheme based on different information measures. Finally, one real data set is analyzed to show the applicability of the proposed methods. It will be interesting to investigate the methods discussed in this paper in the presence of accelerated life tests for the Weibull distribution based on an improved adaptive Type-II progressive censoring scheme. The work is in progress, and it will be reported later.
Scheme-3: R = (3 * (m/2), 0 * (m/2)) and R = (1 * (n − m), 0 * (2m − n)), for m/n = 40% and 80%, respectively. In the Bayesian paradigm, to assign values to the hyper-parameters a i , b i , i = 1, 2, of the gamma prior in (25), we propose to use the past sample data procedure. One can easily generate 10,000 complete samples, each of size 50 (say), from WD(θ, λ) as past samples for each plausible value of the unknown model parameters θ and λ, and then obtain the hyper-parameter values from (34) and (35). Consequently, the values of a i , b i , i = 1, 2, used to obtain the desired BEs through the LF and PS function are (a 1 , b 1 ) = (77.666, 100.56), (a 2 , b 2 ) = (36.691, 48.576) and (a 1 , b 1 ) = (71.986, 87.974), (a 2 , b 2 ) = (32.288, 42.943), respectively. If no prior information on the unknown parameters is available, it is preferable to use frequentist estimates instead of BEs because the latter are computationally more expensive. Using the hybrid Gibbs within M-H sampler algorithm described in Section 4, 12,000 MCMC samples are generated, with a burn-in period of 2,000. The average Bayes MCMC estimates and the 95% two-sided HPD credible intervals are then computed from the remaining 10,000 MCMC samples. To run the MCMC sampler, the initial values of the unknown parameters are taken to be their frequentist estimates. For each setting, we compute the average estimates (say φ̂ η , η = 1, 2, 3, 4) of the unknown parameters θ, λ, R(t) and h(t) (say φ η ), together with the RMSEs and MRABs, using the following formulas:
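The RMSE and MRAB referred to above can be computed with their standard definitions over the simulation replicates, which we take to correspond to the omitted formulas. A sketch:

```python
import numpy as np

def rmse(estimates, true_value):
    """Root mean squared error of the replicate estimates of a parameter."""
    e = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((e - true_value) ** 2)))

def mrab(estimates, true_value):
    """Mean relative absolute bias: mean of |estimate - true| / |true|."""
    e = np.asarray(estimates, dtype=float)
    return float(np.mean(np.abs(e - true_value)) / abs(true_value))

# Example over two replicate estimates of a parameter whose true value is 2.
r = rmse([1.0, 3.0], 2.0)   # -> 1.0
m = mrab([1.0, 3.0], 2.0)   # -> 0.5
```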

Figure 2. (a) The relative histogram and the fitted Weibull density, (b) the empirical and the fitted Weibull reliability function.

Figure 3. Contour plots of the log-LF and log-PS function of θ and λ using sample 1.

Table 2. Different optimality criteria of the progressive censoring plan.

Table 4. Different estimates (with their SEs) of θ, λ, R(t) and h(t) for the RME data.

Table 6. Point estimates with their SEs (in parentheses) for the generated samples.

Table 7. The 95% ACI/HPD credible intervals (first line) with their lengths (second line) for the generated samples.

Table 8. Vital statistics of the MCMC outputs under sample 1.

Table 9. Optimum censoring schemes under different criteria for the generated samples.