Bayesian Inference for Concomitants Based on Weibull Subfamily of Morgenstern Family under Generalized Order Statistics

In this paper, for the Weibull subfamily of the Morgenstern family, the joint density of the concomitants of generalized order statistics (GOS's) is used to obtain the maximum likelihood estimates (MLEs) and Bayes estimates of the distribution parameters. Applications of these results to concomitants of order statistics are presented.


Introduction
Because of the various shapes of its probability density function (pdf) and the convenient closed forms of its distribution and survival functions, the Weibull distribution has been used very effectively for analyzing lifetime data in the applied and engineering sciences. In this study, consideration is given to estimation for the two-parameter Weibull subfamily of the Morgenstern family by maximum likelihood estimation (MLE) and approximate Bayesian estimation under different types of loss functions, namely the squared error (SE) loss function (also known as the quadratic loss function), the linear exponential (LINEX) loss function and the general entropy (GE) loss function, under bivariate informative (IP) and non-informative (NIP) priors, based on concomitants of generalized order statistics (GOS's). With shape parameter β > 0 and scale-type parameter θ > 0, the pdf and cumulative distribution function (cdf) of the Weibull distribution are given, respectively, by

f(x) = θ β x^(β−1) exp(−θ x^β), x > 0,  (1)

F(x) = 1 − exp(−θ x^β), x > 0.  (2)

The general theory of concomitants of order statistics was originally studied by David et al. (1977). Sometimes exact information is available only on the concomitant variable Y, since the other variable is only ranked and not measured exactly. Consider, for example, a group of patients ranked according to the value of their response to a treatment, where the values of their blood tests are subsequently observed only for those patients whose initial value exceeds a threshold; in this situation we have information only on the concomitant variable. GOS's were introduced by Kamps (1995); the joint density function of the first r GOS's X(1, n, m, k), …, X(r, n, m, k) is given by

f(x_1, …, x_r) = (∏_{j=1}^{r} γ_j) (∏_{i=1}^{r−1} [1 − F(x_i)]^{m_i} f(x_i)) [1 − F(x_r)]^{γ_r − 1} f(x_r),  (3)

where γ_j = k + n − j + ∑_{i=j}^{n−1} m_i.
The Morgenstern family, discussed by Johnson and Kotz (1975), provides a flexible family that can be used in such contexts. It is specified by the cdf and pdf, respectively,

F_{X,Y}(x, y) = F_X(x) F_Y(y) [1 + α (1 − F_X(x)) (1 − F_Y(y))], −1 ≤ α ≤ 1,  (4)

f_{X,Y}(x, y) = f_X(x) f_Y(y) [1 + α (1 − 2F_X(x)) (1 − 2F_Y(y))].  (5)

The conditional pdf of Y given X is

f(y | x) = f_Y(y) [1 + α (1 − 2F_X(x)) (1 − 2F_Y(y))].  (6)

Mohie El-Din et al. (2015) proposed the joint density of the concomitants of GOS's for the Morgenstern family; from (3)–(6), the joint density of the first r concomitants for the Morgenstern family is given in (8), where all the constants C are functions of the γ_j's.
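As a quick numerical check, the conditional density (6) integrates to one for any x and any |α| ≤ 1. The following sketch verifies this; the function names are our own, and the Weibull marginals use the parametrization of (1) and (2):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative sketch: FGM (Morgenstern) conditional density with Weibull
# marginals, f(x) = theta * beta * x**(beta-1) * exp(-theta * x**beta).

def weibull_pdf(x, theta, beta):
    return theta * beta * x ** (beta - 1) * np.exp(-theta * x ** beta)

def weibull_cdf(x, theta, beta):
    return 1.0 - np.exp(-theta * x ** beta)

def fgm_conditional_pdf(y, x, alpha, tx, ty):
    """f(y | x) = f_Y(y) [1 + alpha (1 - 2 F_X(x)) (1 - 2 F_Y(y))]."""
    fx = weibull_cdf(x, *tx)
    fy = weibull_cdf(y, *ty)
    return weibull_pdf(y, *ty) * (1 + alpha * (1 - 2 * fx) * (1 - 2 * fy))

# The conditional density should integrate to 1 for any x and |alpha| <= 1.
total, _ = quad(fgm_conditional_pdf, 0, np.inf,
                args=(0.7, 0.5, (1.0, 2.0), (1.5, 1.2)))
print(total)
```

The same check applies for any admissible α, since the correction term integrates to zero over the marginal.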
In the Bayesian approach, the SE loss function is classified as symmetric and assigns equal importance to the losses from overestimation and underestimation of equal magnitude; many authors have studied the SE loss in Bayesian inference, see for example Calabria and Pulcini (1996), Singh et al. (2002) and Jaheen (2004a, 2004b). The LINEX loss function, which is asymmetric, was first introduced by Varian (1975) and has been widely used by several authors. This function rises approximately exponentially on one side of zero and approximately linearly on the other side. The GE loss function is also an asymmetric loss function and has been used in several papers, for example Dey et al. (1987), Dey and Liu (1992) and Soliman (2005, 2006).
In this article, we consider classical and Bayesian inference on the distribution parameters for concomitants of GOS's based on the two-parameter Weibull subfamily of the Morgenstern family. Approximate Bayes estimates are obtained under the symmetric SE and the asymmetric LINEX and GE loss functions, for IP and NIP, using Lindley's approximation and the Markov chain Monte Carlo (MCMC) method. The organization of the paper is as follows. In Section 2, the joint density of concomitants of GOS's based on the two-parameter Weibull subfamily of the Morgenstern family is used to obtain the MLEs; the Bayes estimates of the model parameters using Lindley's approximation and the MCMC method are also obtained. Section 3 contains the simulation results and a real-data example based on order statistics as a special case of GOS's. Conclusions are given in Section 4.

Classical and Bayesian estimation for concomitants of GOS's

In this section, we study classical estimation, namely ML estimation together with approximate confidence intervals, and obtain Bayes estimates using informative and non-informative priors under the SE, LINEX and GE loss functions for the two-parameter Weibull subfamily of the Morgenstern family.

Maximum Likelihood Estimation
Suppose that y = (y_1, y_2, …, y_r) is a sample of concomitants of GOS's. From (1), (2) and (8), the log-likelihood function for the Weibull subfamily of the Morgenstern family is given by (9). To derive the ML estimates of the unknown parameters θ, say θ̂_ML, and β, say β̂_ML, we differentiate (9) with respect to θ and β and then solve the resulting non-linear equations numerically using the Newton-Raphson method, which also requires the second derivatives of (9).
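The Newton-Raphson step can be sketched as follows for a plain i.i.d. Weibull sample; this is an illustrative simplification that omits the Morgenstern factor appearing in the log-likelihood (9), and the function name and parametrization (that of (1)) are our own:

```python
import numpy as np

# Sketch of Newton-Raphson for two-parameter Weibull MLEs, parametrization
# f(x) = theta*beta*x**(beta-1)*exp(-theta*x**beta), plain i.i.d. sample
# (the Morgenstern factor in (9) is omitted for simplicity).

def weibull_mle(x, beta0=1.0, tol=1e-10, max_iter=100):
    n, logx = len(x), np.log(x)
    beta = beta0
    for _ in range(max_iter):
        xb = x ** beta
        s = xb.sum()
        s1 = (xb * logx).sum()
        s2 = (xb * logx ** 2).sum()
        g = n / beta + logx.sum() - n * s1 / s        # profile score for beta
        dg = -n / beta ** 2 - n * (s2 * s - s1 ** 2) / s ** 2
        step = g / dg
        beta -= step                                  # Newton-Raphson update
        if abs(step) < tol:
            break
    theta = n / (x ** beta).sum()                     # theta-hat given beta-hat
    return theta, beta

rng = np.random.default_rng(0)
data = rng.weibull(2.0, size=3000)   # true beta = 2, true theta = 1
theta_hat, beta_hat = weibull_mle(data)
print(theta_hat, beta_hat)
```

Profiling θ out of the score equations reduces the problem to a one-dimensional Newton iteration in β, which is the standard device for this parametrization.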

Bayesian estimation
If we have enough information about the parameters, we should use an informative prior (IP); otherwise it is better to consider a non-informative prior (NIP). In this section, we wish to obtain Bayes estimates of the model parameters θ and β. Unfortunately, in many cases the Bayes estimates cannot be expressed in explicit form, so approximate Bayes estimates are obtained under IP and NIP using Lindley's approximation and the Markov chain Monte Carlo (MCMC) method.

Non-informative prior
For the Weibull subfamily of the Morgenstern family, suppose that θ and β are independent and that each has a Jeffreys vague prior,

π_1(θ) ∝ 1/θ, θ > 0, and π_2(β) ∝ 1/β, β > 0.

Then the joint NIP of the parameters is given by

π(θ, β) ∝ 1/(θβ), θ, β > 0.  (10)

The joint posterior density function of the parameters θ and β can then be written from (1), (2), (8) and (10) as in (11).

Bayes estimation using Lindley's approximation for NIP
We now obtain the approximate Bayes estimates of θ and β under different types of loss functions. According to Lindley (1980), a ratio of integrals of the form

E[u(θ, β) | y] = ∫ u(θ, β) e^{L(θ,β) + ρ(θ,β)} dθ dβ / ∫ e^{L(θ,β) + ρ(θ,β)} dθ dβ,

where L is the log-likelihood and ρ is the log of the joint prior of θ and β, can be approximated asymptotically by

E[u | y] ≈ u + (1/2) ∑_i ∑_j (u_{ij} + 2 u_i ρ_j) σ_{ij} + (1/2) ∑_i ∑_j ∑_k ∑_l L_{ijk} σ_{ij} σ_{kl} u_l,  (14)

evaluated at the MLEs, where subscripts on u, ρ and L denote partial derivatives with respect to the parameters and σ_{ij} is the (i, j)th element of the inverse of the matrix (−L_{ij}). We can then obtain the Bayes estimates of the parameters from (14) with the appropriate choice of u:
(a) in the case of the SE loss function, θ̂_SE = E[θ | y];
(b) in the case of the LINEX loss function, θ̂_LINEX = −(1/t) ln E[e^{−tθ} | y];
(c) in the case of the GE loss function, θ̂_GE = (E[θ^{−h} | y])^{−1/h};
and similarly for β.
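As a minimal illustration of the mechanics (not the two-parameter expressions used above), the one-parameter version of Lindley's approximation can be checked against a case with a known answer: an exponential likelihood with rate t and Jeffreys prior, for which the exact posterior is Gamma(n, S):

```python
import numpy as np

# One-parameter Lindley (1980) approximation:
#   E[u(t)|y] ~ u + (u'' + 2 u' rho') * sigma2 / 2 + L''' * u' * sigma2**2 / 2,
# evaluated at the MLE, with sigma2 = -1/L''.  Likelihood: exponential with
# rate t; prior: Jeffreys, rho = -log t; exact posterior: Gamma(n, S).

rng = np.random.default_rng(1)
x = rng.exponential(scale=0.5, size=50)   # true rate = 2
n, S = len(x), x.sum()

t_hat = n / S                      # MLE of the rate
L2 = -n / t_hat ** 2               # second derivative of the log-likelihood
L3 = 2 * n / t_hat ** 3            # third derivative
sigma2 = -1.0 / L2
rho1 = -1.0 / t_hat                # derivative of the log prior
# u(t) = t (posterior mean under SE loss): u' = 1, u'' = 0
lindley = t_hat + (0 + 2 * 1 * rho1) * sigma2 / 2 + L3 * 1 * sigma2 ** 2 / 2

exact = n / S                      # Gamma(n, S) posterior mean
print(lindley, exact)
```

In this conjugate case the prior and curvature corrections cancel exactly, so the approximation reproduces the exact posterior mean; in the two-parameter Weibull-Morgenstern model no such closed form exists, which is why the approximation is needed.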

Bayesian estimation using MCMC method for NIP
The MCMC method is used to generate samples from the posterior distribution and then compute the Bayes estimates of θ and β. From the joint posterior density function in (11), the conditional posterior distributions of θ and β can be written as in (15) and (16), respectively, where Q* is defined in (12). The conditional posterior distributions of θ and β in (15) and (16) cannot be reduced analytically to well-known distributions, but plots of them show that they are similar to a normal distribution. Therefore, to generate random samples from these distributions, we use the Metropolis method with a normal proposal distribution; see Metropolis et al. (1953). The following algorithm is proposed to generate θ and β from the posterior distribution and then obtain the Bayes estimates:
Step 1. Start with initial values (θ^(0), β^(0)), for example the MLEs.
Step 2. Set i = 1.
Step 3. Generate a candidate θ* from a normal proposal distribution centred at θ^(i−1).
Step 4. Calculate the acceptance probability p = min{1, π*(θ* | β^(i−1), y) / π*(θ^(i−1) | β^(i−1), y)}, with π* as in (15).
Step 5. Generate u from Uniform(0, 1); if u ≤ p, accept the proposal and set θ^(i) = θ*; otherwise, reject the proposal and set θ^(i) = θ^(i−1).
Step 6. Set i = i + 1 and repeat Steps 3-5 until the required number of draws N is reached.
Step 7. To generate β, carry out Steps 2-6 for β instead of θ, using (16).
Step 8. Discard the first M draws of each chain as burn-in.
Step 9. Retain the draws θ^(M+1), …, θ^(N) and β^(M+1), …, β^(N).
Step 10. Use the retained draws to compute the Bayes estimates as follows.
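A sketch of such a Metropolis sampler follows, for a deliberately simplified target whose answer is known: the conditional posterior of θ with β held fixed, under a Jeffreys prior and with the Morgenstern factor omitted, which is Gamma(n, S) with S = Σ x_i^β. The variable names and tuning constants are our own:

```python
import numpy as np

# Metropolis sampler with a normal proposal for a simplified conditional
# posterior of theta (beta fixed, Jeffreys prior, plain Weibull likelihood),
# which is Gamma(n, S) -- so the chain mean can be checked against n/S.

rng = np.random.default_rng(2)
x = rng.weibull(2.0, size=100)     # beta treated as known and equal to 2
n, S = len(x), (x ** 2).sum()

def log_post(theta):
    return -np.inf if theta <= 0 else (n - 1) * np.log(theta) - theta * S

N, prop_sd = 20000, 0.25
chain = np.empty(N)
theta = n / S                       # start at the MLE (Step 1)
for i in range(N):
    cand = rng.normal(theta, prop_sd)            # normal proposal
    accept_prob = min(1.0, np.exp(log_post(cand) - log_post(theta)))
    if rng.uniform() < accept_prob:              # accept ...
        theta = cand
    chain[i] = theta                             # ... or keep current value

M = 5000                            # burn-in
print(chain[M:].mean(), n / S)
```

The post-burn-in chain mean agrees closely with the exact posterior mean n/S, which is the kind of sanity check that is no longer available once the Morgenstern factor is included.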
Step 11. Obtain the Bayes estimates of θ and β using MCMC under the SE loss function as the means of the retained draws, e.g. θ̂_SE = (1/(N−M)) ∑_{i=M+1}^{N} θ^(i).
Step 12. Obtain the Bayes estimates of θ and β using MCMC under the LINEX loss function as θ̂_LINEX = −(1/t) ln[(1/(N−M)) ∑_{i=M+1}^{N} e^{−t θ^(i)}], and similarly for β.
Step 13. Obtain the Bayes estimates of θ and β using MCMC under the GE loss function as θ̂_GE = [(1/(N−M)) ∑_{i=M+1}^{N} (θ^(i))^{−h}]^{−1/h}, and similarly for β,
where M is the burn-in period.
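The estimators in Steps 11-13 can be computed from any set of post-burn-in draws; in the sketch below the draws are simulated directly from a gamma distribution as a stand-in for an MCMC chain:

```python
import numpy as np

# Monte Carlo versions of the Bayes estimators in Steps 11-13: SE uses the
# posterior mean, LINEX uses -(1/t) log E[exp(-t*theta)], and GE uses
# (E[theta**(-h)])**(-1/h), all estimated by averages over the draws.

rng = np.random.default_rng(3)
draws = rng.gamma(shape=50.0, scale=1.0 / 25.0, size=20000)  # stand-in chain

def se_est(d):
    return d.mean()

def linex_est(d, t):
    return -np.log(np.mean(np.exp(-t * d))) / t

def ge_est(d, h):
    return np.mean(d ** (-h)) ** (-1.0 / h)

print(se_est(draws), linex_est(draws, t=1.0), ge_est(draws, h=1.0))
```

By Jensen's inequality the LINEX estimate with t > 0 and the GE estimate with h > 0 lie below the posterior mean, while h = −1 recovers the SE estimate exactly.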

Informative prior
Assume that the parameters θ and β are independent and have gamma priors with hyperparameters (a_1, b_1) and (a_2, b_2), respectively:

π_1(θ) ∝ θ^{a_1 − 1} e^{−b_1 θ}, θ > 0, and π_2(β) ∝ β^{a_2 − 1} e^{−b_2 β}, β > 0.

Then the joint IP of the parameters is given by

π(θ, β) ∝ θ^{a_1 − 1} β^{a_2 − 1} e^{−(b_1 θ + b_2 β)}, θ, β > 0.  (19)

The joint posterior density function of the parameters θ and β can then be written from (1), (2), (8) and (19) as in (20), where Q* is defined in (12).
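Under a plain Weibull likelihood (again omitting the Morgenstern factor, whose presence is what forces the approximations above), the gamma prior on θ is conditionally conjugate given β. A quick numerical check of the resulting Gamma(a_1 + n, b_1 + Σ x_i^β) conditional posterior, with illustrative hyperparameter values:

```python
import numpy as np
from scipy.integrate import quad

# With theta ~ Gamma(a1, b1) and a plain Weibull likelihood (Morgenstern
# factor omitted in this sketch), theta | beta, data ~ Gamma(a1 + n, b1 + S)
# with S = sum(x**beta).  We verify the posterior mean (a1 + n) / (b1 + S)
# by direct numerical integration of prior * likelihood kernel.

a1, b1 = 2.0, 1.0
rng = np.random.default_rng(4)
x = rng.weibull(1.5, size=10)
beta = 1.5                          # held fixed in the conditional posterior
n, S = len(x), (x ** beta).sum()

def unnorm_post(theta):             # gamma-kernel shape in theta
    return theta ** (a1 + n - 1) * np.exp(-theta * (b1 + S))

norm, _ = quad(unnorm_post, 0, np.inf)
num, _ = quad(lambda t: t * unnorm_post(t), 0, np.inf)
post_mean = num / norm
print(post_mean, (a1 + n) / (b1 + S))
```

The Morgenstern correction term in (8) destroys this conjugacy, which is why the full posterior (20) has to be handled by Lindley's approximation or MCMC.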

Bayes estimation using Lindley's approximation for IP
The approximate Bayes estimates of θ and β under the different loss functions for the IP are obtained by the same procedure as in the NIP case, with the log-prior ρ now corresponding to (19).

Bayesian estimation using MCMC method for IP
From the joint posterior density function in (20), the conditional posterior distributions of θ and β can be written as in (21) and (22), respectively, where Q* is defined in (12). The conditional posterior distributions of θ and β in (21) and (22) cannot be reduced analytically to well-known distributions, but plots of them show that they are similar to a normal distribution. Therefore, to generate random samples from these distributions, we use the Metropolis method with a normal proposal distribution, applying the same algorithm as in the NIP case to obtain the Bayes estimates.

Numerical results
In this section, for a Type-II censored sample (with m = 0 and k = 1), we use simulated data and a real-data example for the Weibull subfamily of the Morgenstern family to illustrate the inferential results established in the preceding sections and to investigate the performance of the ML and Bayes estimates in terms of their values, average values and mean square errors (MSE). For a bivariate sample (X, Y), we choose values of the Weibull parameters, generate the data, and obtain the average values and MSE of the ML and Bayes estimates of θ and β under the SE, LINEX and GE loss functions using informative and non-informative priors, with 1000 repetitions; see Tables (3.2) and (3.3).
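One simulation replicate for the order-statistics case (m = 0, k = 1) can be sketched as follows: draw (X, Y) pairs from the FGM family with Weibull marginals by inverting the conditional cdf implied by (6), then sort on X and carry the Y's along as concomitants. The function names and parameter values below are illustrative only:

```python
import numpy as np

# One replicate of concomitants of order statistics from the FGM family with
# Weibull marginals.  Conditional cdf: F(y|x) = u + c*u*(1-u) with
# u = F_Y(y) and c = alpha*(1 - 2*F_X(x)); solving the quadratic in u
# inverts it, and u -> v as c -> 0.

def sample_concomitants(n, alpha, tx, ty, rng):
    theta_x, beta_x = tx
    theta_y, beta_y = ty
    x = (rng.exponential(size=n) / theta_x) ** (1.0 / beta_x)   # Weibull X
    v = rng.uniform(size=n)
    c = alpha * (1 - 2 * (1 - np.exp(-theta_x * x ** beta_x)))
    disc = np.sqrt((1 + c) ** 2 - 4 * c * v)
    c_safe = np.where(np.abs(c) < 1e-10, 1.0, c)                # avoid 0/0
    u = np.where(np.abs(c) < 1e-10, v, (1 + c - disc) / (2 * c_safe))
    y = (-np.log(1 - u) / theta_y) ** (1.0 / beta_y)            # invert F_Y
    order = np.argsort(x)
    return x[order], y[order]       # y[order] are the concomitants

rng = np.random.default_rng(5)
xs, ys = sample_concomitants(5000, alpha=0.9, tx=(1.0, 2.0), ty=(1.0, 2.0),
                             rng=rng)
print(np.corrcoef(xs, ys)[0, 1])
```

With a positive association parameter the concomitants inherit a positive correlation with the ordered X values, which is the qualitative behaviour the estimation procedures above exploit.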

Real life data example
The main purpose of the real-data example is to determine the value of the association parameter α. We consider the data given in Nelson (1982) for the Weibull subfamily of the Morgenstern family. The original data consist of 60 times to breakdown, in minutes, of an insulating fluid subjected to high-voltage stress. The data were partitioned by Nelson (1982) into six groups, each with ten insulating fluids. These data have been analyzed by Balakrishnan et al. (2004) under a two-parameter exponential distribution. We use here the data from groups 4 (group X) and 6 (group Y), as shown in Table (3.1). Based on these data, after fitting the Weibull distribution, we computed the ML and Bayes estimates of θ and β under the SE, LINEX and GE loss functions using informative and non-informative priors. Values of the different estimators are reported for Lindley's approximation and for the MCMC method.

Conclusion and comments
Based on the two-parameter Weibull distribution, the joint density of the concomitants of GOS's for this subfamily of the Morgenstern family has been discussed. Statistical inference procedures for the unknown parameters of the given distribution, namely MLE and approximate Bayesian estimation, are presented. Our applications of these results are given for concomitants of order statistics, and the estimates are computed on the basis of Type-II censored samples. From the results in Tables (3.2) to (3.5), we observe the following:
1. The values of the estimated parameters do not necessarily increase with increasing r.

2. At h = −1, the Bayes estimators of the considered parameters under the GE loss function are equal to those under the SE loss function.

3. The Bayes estimators of the considered parameters under the LINEX and GE loss functions give more accurate results than the ML estimator and than those under the SE loss function as the value of t is increased and the value of h is decreased, respectively.
4. Varying the values of the hyperparameters in the informative priors did not change the conclusions obtained.

5. The Bayes estimators of the considered parameters obtained from the MCMC method give more accurate results than those obtained from Lindley's approximation; moreover, Lindley's approximation yields negative or complex values in some cases.

6.
In the simulation study, the results obtained from the IP are more accurate than those from the NIP for the Bayes estimators of θ and β.

7.
In the real-data study, the results obtained from the NIP are more accurate than those from the IP for the Bayes estimators of θ and β.
The proposed procedures for the estimation problems may be considered for other models and distributions.
