We propose new scoring rules based on conditional and censored likelihood for assessing the predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. These scoring rules can be interpreted in terms of Kullback–Leibler divergence between weighted versions of the density forecast and the true density. Existing scoring rules based on weighted likelihood favor density forecasts with more probability mass in the region of interest, rendering predictive accuracy tests biased toward such densities. Our likelihood-based scoring rules avoid this problem.
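For a left-tail indicator weight w(y) = 1{y <= r}, the two rules take simple forms: the conditional likelihood score, w(y) log[f(y)/F(r)], renormalizes the density over the region, while the censored likelihood score, w(y) log f(y) + (1 - w(y)) log(1 - F(r)), collapses everything outside the region into a single censored mass. A minimal Python sketch, assuming a standard normal forecast density and an illustrative threshold at its 10% quantile:

```python
import numpy as np
from scipy.stats import norm

def conditional_likelihood_score(f_pdf, f_cdf, y, r):
    """Conditional likelihood score for w(y) = 1{y <= r}: observations in
    the region are scored by the density renormalized over the region;
    observations outside contribute nothing."""
    w = (y <= r).astype(float)
    region_mass = f_cdf(r)               # forecast probability of the tail
    return w * (np.log(f_pdf(y)) - np.log(region_mass))

def censored_likelihood_score(f_pdf, f_cdf, y, r):
    """Censored likelihood score: full density inside the region, a single
    censored point mass 1 - F(r) outside it."""
    w = (y <= r).astype(float)
    region_mass = f_cdf(r)
    return w * np.log(f_pdf(y)) + (1.0 - w) * np.log1p(-region_mass)

# Illustrative comparison on simulated data, focusing on the 10% left tail.
rng = np.random.default_rng(0)
y = rng.standard_normal(1000)
r = norm.ppf(0.10)
print(conditional_likelihood_score(norm.pdf, norm.cdf, y, r).mean())
print(censored_likelihood_score(norm.pdf, norm.cdf, y, r).mean())
```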
Designing more efficient control charts is critical for enhancing product lifetime management and improving product stability in many research fields and industrial production settings. Because censored data frequently arise in lifetime experiments, we propose three monitoring schemes for detecting shifts in the Weibull scale parameter under progressive Type II censoring, based on the likelihood ratio test, maximum likelihood estimation, and a novel weighted likelihood ratio test, respectively (a generic likelihood sketch follows the highlights below). The proposed schemes can furthermore be extended to jointly monitor both the scale and shape parameters of a Weibull-distributed process. The control charts are complemented with self-starting schemes for settings where adequate in-control data are unavailable in Phase I. Extensive simulation experiments and a real dataset on the breaking strength of carbon fibers illustrate the strong performance and practical applicability of the methods.
•The problem of monitoring complex censored lifetime data is fully explored.
•Three novel EWMA control charts are designed for monitoring progressively Type II censored lifetime data, based on the likelihood ratio test, maximum likelihood estimation, and an innovative weighted likelihood ratio test, respectively.
•The proposed control charts are extended to self-starting control charts to compensate for parameter estimation uncertainty due to insufficient sample size in Phase I.
•The proposed control charts are shown to be robust and efficient.
•An application to a carbon fiber breaking strength dataset illustrates their effectiveness.
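A generic building block for such charts is the progressively Type II censored Weibull log-likelihood, in which each observed failure contributes a density term and each of the R_i units withdrawn at the i-th failure contributes a survival term. Below is a minimal Python sketch of this likelihood, the closed-form scale MLE for known shape, and a likelihood-ratio statistic of the kind that could feed an EWMA scheme; this illustrates the standard ingredients, not the paper's exact charts:

```python
import numpy as np

def weibull_pc2_loglik(x, R, lam, k):
    """Log-likelihood of a progressively Type II censored Weibull sample:
    x are the m ordered observed failure times, R[i] is the number of
    units withdrawn at the i-th failure, k is the shape, lam the scale.
    Each failure contributes log f(x_i); each withdrawn unit log S(x_i)."""
    z = (x / lam) ** k
    log_f = np.log(k / lam) + (k - 1.0) * np.log(x / lam) - z
    log_S = -z
    return np.sum(log_f + R * log_S)

def weibull_pc2_scale_mle(x, R, k):
    """Closed-form MLE of the scale when the shape k is known."""
    return (np.sum((1.0 + R) * x ** k) / len(x)) ** (1.0 / k)

def lr_statistic(x, R, k, lam0):
    """-2 log likelihood ratio for H0: lam = lam0 against the MLE; a
    statistic of this kind could drive an EWMA charting scheme."""
    lam_hat = weibull_pc2_scale_mle(x, R, k)
    return -2.0 * (weibull_pc2_loglik(x, R, lam0, k)
                   - weibull_pc2_loglik(x, R, lam_hat, k))
```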
Bayesian Bootstrap Spike-and-Slab LASSO. Nie, Lizhen; Ročková, Veronika. Journal of the American Statistical Association, 118(543), 07/2023. Peer-reviewed journal article, open access.
The impracticality of posterior sampling has prevented the widespread adoption of spike-and-slab priors in high-dimensional applications. To alleviate the computational burden, optimization strategies have been proposed that quickly find local posterior modes. Trading off uncertainty quantification for computational speed, these strategies have enabled spike-and-slab deployments at scales that were previously infeasible. We build on one recent development in this strand of work: the Spike-and-Slab LASSO procedure. Instead of optimization, however, we explore multiple avenues for posterior sampling, some traditional and some new. Intrigued by the speed of Spike-and-Slab LASSO mode detection, we explore the possibility of sampling from an approximate posterior by performing MAP optimization on many independently perturbed datasets. To this end, we draw on Bayesian bootstrap ideas and introduce a new class of jittered Spike-and-Slab LASSO priors with random shrinkage targets. These priors are a key constituent of the Bayesian Bootstrap Spike-and-Slab LASSO (BB-SSL) method proposed here. BB-SSL turns fast optimization into approximate posterior sampling. Beyond its scalability, we show that BB-SSL has strong theoretical support: the induced pseudo-posteriors contract around the truth at a near-optimal rate in sparse normal-means and in high-dimensional regression. We compare our algorithm to traditional Stochastic Search Variable Selection (under Laplace priors) as well as to many state-of-the-art methods for shrinkage priors. We show, both in simulations and on real data, that our method fares very well in these comparisons, often providing substantial computational gains.
Supplementary materials for this article are available online.
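A hedged sketch of the core computational idea: each approximate posterior draw is a MAP solution on Dirichlet-reweighted data. For brevity a plain Lasso penalty stands in for the Spike-and-Slab LASSO prior and the jittered shrinkage targets are omitted; the penalty level alpha and the number of draws are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bb_lasso_samples(X, y, n_draws=200, alpha=0.1, seed=0):
    """Approximate posterior sampling in the spirit of BB-SSL: each draw
    solves a fast MAP problem on Dirichlet-reweighted data. A plain Lasso
    penalty stands in for the Spike-and-Slab LASSO prior here, and the
    random (jittered) shrinkage targets are omitted."""
    rng = np.random.default_rng(seed)
    n = len(y)
    draws = []
    for _ in range(n_draws):
        w = n * rng.dirichlet(np.ones(n))     # Bayesian-bootstrap weights
        fit = Lasso(alpha=alpha).fit(X, y, sample_weight=w)
        draws.append(fit.coef_.copy())
    return np.asarray(draws)                  # (n_draws, p) pseudo-posterior
```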
Blockmodeling linked networks aims to simultaneously partition two or more sets of units into clusters, based on a network in which ties are possible both between units from the same set and between units of different sets. While such approaches have already been developed for generalized and k-means blockmodeling, our approach builds on the well-known stochastic blockmodeling technique, utilizing a mixture model. Estimation is performed with the CEM algorithm, which iteratively estimates the parameters by maximizing a suitable likelihood function and then reclusters the units according to those parameters; the steps are repeated until the likelihood function ceases to improve.
A key drawback of the basic algorithm is that it treats all units equally, consequently giving larger parts of the data a proportionally greater influence. Greater size, however, does not necessarily imply greater importance. To mitigate this asymmetry, we propose a solution in which underrepresented parts of the data are given more influence through an appropriate weighting. This idea leads to the so-called weighted likelihood approach, where the ordinary likelihood function is replaced by a weighted one (a minimal sketch follows the highlights below).
The efficiency of the different approaches is assessed via simulations, which show that the weighted likelihood approach performs better for larger networks and clearer blockmodel structures, especially when the one-mode blockmodels within the smaller sets are clearer.
•Linked networks contain two or more sets of units and subnetworks.
•Subnetworks contain ties among the units of one set or between units of two sets.
•Examples of linked networks include dynamic networks and multilevel networks.
•Blockmodeling linked networks jointly partitions all sets of units.
•A stochastic blockmodeling approach is applied to blockmodel linked networks.
•Weighted likelihood is used to balance the impact of different subnetworks.
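A minimal sketch of the weighted likelihood idea for a Bernoulli blockmodel over the subnetworks of a linked network. The inverse-size default weighting is one plausible choice for balancing subnetworks and is not necessarily the paper's exact scheme:

```python
import numpy as np

def block_loglik(A, z_rows, z_cols, P):
    """Bernoulli blockmodel log-likelihood of one (sub)network: A is its
    binary adjacency matrix, z_rows/z_cols give the cluster label of each
    row/column unit, and P holds block tie probabilities in (0, 1)."""
    Pz = P[np.ix_(z_rows, z_cols)]           # dyad-level tie probabilities
    return np.sum(A * np.log(Pz) + (1 - A) * np.log(1 - Pz))

def weighted_loglik(subnetworks, weights=None):
    """Weighted likelihood over the subnetworks of a linked network; each
    entry of `subnetworks` is a tuple (A, z_rows, z_cols, P). The default
    inverse-size weights keep large subnetworks from dominating."""
    if weights is None:
        weights = [1.0 / A.size for A, _, _, _ in subnetworks]
    return sum(w * block_loglik(A, zr, zc, P)
               for w, (A, zr, zc, P) in zip(weights, subnetworks))
```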
Interval-censored failure time data frequently occur in many areas, and a great deal of literature on their analysis has been established. In this article, we discuss the situation where one faces bivariate interval-censored data arising from case-cohort studies, which are commonly used to save costs when disease incidence is low and covariates are difficult to obtain. For this problem, a class of copula-based semiparametric models is presented, and for estimation, a sieve weighted maximum likelihood estimation procedure is developed. The resulting estimators of the regression parameters are shown to be strongly consistent and asymptotically normal. Furthermore, the proposed method is generalized to the situation of non-rare diseases. A simulation study assessing the finite-sample performance of the proposed method suggests that it performs well in practice.
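A minimal sketch of the weighting step alone, assuming the classical case-cohort design in which every case is included and non-cases are subsampled with a known probability p; the copula model and sieve approximation of the actual procedure are omitted:

```python
import numpy as np
from scipy.optimize import minimize

def case_cohort_weights(is_case, in_subcohort, p):
    """Inverse-probability weights for the classical case-cohort design:
    cases get weight 1, sampled non-cases weight 1/p, everyone else 0."""
    w = np.where(is_case, 1.0, 0.0)
    return np.where(~is_case & in_subcohort, 1.0 / p, w)

def weighted_mle(neg_loglik_i, theta0, weights):
    """Weighted maximum likelihood: each subject's negative log-likelihood
    term is multiplied by its sampling weight before minimization."""
    objective = lambda theta: np.sum(weights * neg_loglik_i(theta))
    return minimize(objective, theta0, method="Nelder-Mead").x
```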
Summary
In this paper we revisit the weighted likelihood bootstrap, a method that generates samples from an approximate Bayesian posterior of a parametric model. We show that the same method can be derived, without approximation, under a Bayesian nonparametric model in which the parameter of interest is defined by minimizing an expected negative log-likelihood under an unknown sampling distribution. This interpretation enables us to extend the weighted likelihood bootstrap to posterior sampling for parameters that minimize an expected loss. We call this method the loss-likelihood bootstrap, and we connect it to general Bayesian updating, a way of updating prior belief distributions that does not require the construction of a global probability model but does require the calibration of two forms of loss function. The loss-likelihood bootstrap is used to calibrate the general Bayesian posterior by matching asymptotic Fisher information. We demonstrate the proposed method on a number of examples.
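The procedure is compact enough to sketch directly: each posterior draw minimizes a Dirichlet-weighted empirical loss, and taking the loss to be a negative log-likelihood recovers the weighted likelihood bootstrap. The function names and the toy squared-error example below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def loss_likelihood_bootstrap(loss_i, theta0, n_obs, n_draws=500, seed=0):
    """Loss-likelihood bootstrap: each posterior draw minimizes a
    Dirichlet-weighted empirical loss. If loss_i returns per-observation
    negative log-likelihoods, this is the weighted likelihood bootstrap."""
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_draws):
        w = rng.dirichlet(np.ones(n_obs))       # random weights, sum to 1
        draws.append(minimize(lambda th: np.sum(w * loss_i(th)),
                              theta0, method="Nelder-Mead").x)
    return np.asarray(draws)

# Toy example: pseudo-posterior for a mean under squared-error loss.
y = np.random.default_rng(1).normal(2.0, 1.0, size=100)
draws = loss_likelihood_bootstrap(lambda th: (y - th[0]) ** 2,
                                  np.array([0.0]), len(y))
print(draws.mean(), draws.std())
```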
In modern radar detection systems, constant false alarm rate (CFAR) control is a key technique for automatic target detection in environments with unknown clutter levels. The maximum likelihood criterion is often used to design the CFAR detector, which attains the best detection performance at the expected probability of false alarm. However, heterogeneous environments, namely multiple-target scenarios and clutter power transitions, deteriorate the performance of the optimal CFAR detector. In this paper, we propose a novel CFAR detector for a Weibull-distributed background with known shape parameter. The new detector is based on a robust weighted likelihood estimator and is robust to interference. In addition, we invoke invariance theory to prove its CFAR property. Computational analysis and simulation results verify the effectiveness and superiority of the proposed detector.
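A hedged sketch of one way a robust weighted likelihood estimate of the Weibull scale can be built when the shape k is known: under the model, z = (x/lambda)^k is unit exponential, so observations with unusually large z (likely interferers) are iteratively downweighted. The weight function and cutoff c are illustrative choices, not the paper's exact estimator:

```python
import numpy as np

def robust_weibull_scale(x, k, c=3.0, n_iter=20):
    """Robust weighted-likelihood-style estimate of the Weibull scale with
    known shape k. Under the model z = (x/lam)**k is unit exponential, so
    samples with large z (likely interferers) get weight c/z instead of 1.
    The weight function and cutoff c are illustrative choices."""
    lam = np.mean(x ** k) ** (1.0 / k)          # plain MLE as the start
    for _ in range(n_iter):
        z = (x / lam) ** k
        w = np.minimum(1.0, c / z)              # Huber-style downweighting
        lam = (np.sum(w * x ** k) / np.sum(w)) ** (1.0 / k)
    return lam
```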
We present 10 different strength-based statistical models that we use to model soccer match outcomes with the aim of producing a new ranking. The models are of four main types: Thurstone–Mosteller, Bradley–Terry, independent Poisson, and bivariate Poisson. Their common aspect is that the parameters are estimated via weighted maximum likelihood, the weights being a match importance factor and a time-depreciation factor that gives less weight to matches played long ago. Since our goal is to build a ranking reflecting the teams' current strengths, we compare the 10 models on the basis of their predictive performance via the Rank Probability Score, at the level of both domestic leagues and national teams. We find that the best models are the bivariate and independent Poisson models. We then illustrate the versatility and usefulness of our new rankings by means of three examples where the existing rankings fail to provide enough information or lead to peculiar results.
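A minimal sketch of the weighting and of the resulting weighted independent Poisson log-likelihood; the exponential time depreciation with a 500-day half-life and the form of the importance factor are illustrative assumptions, not the paper's calibrated choices:

```python
import numpy as np
from scipy.stats import poisson

def match_weight(days_ago, importance, half_life=500.0):
    """Weight of a past match: an importance factor times an exponential
    time-depreciation factor (half-life in days; both are illustrative)."""
    return importance * 0.5 ** (days_ago / half_life)

def weighted_poisson_loglik(home_goals, away_goals, mu_home, mu_away, w):
    """Weighted log-likelihood of the independent Poisson model: each
    match contributes its two Poisson terms scaled by its weight; the
    rates mu_* would be functions of the team strength parameters."""
    ll = poisson.logpmf(home_goals, mu_home) + poisson.logpmf(away_goals, mu_away)
    return np.sum(w * ll)
```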
In this paper, a new item-weighted scheme is proposed to assess examinees' growth in longitudinal analysis. A multidimensional Rasch model for measuring learning and change (MRMLC) and its polytomous extension are used to fit the longitudinal item response data. The new item-weighted likelihood estimation method is suitable not only for complex longitudinal IRT models but also for unidimensional IRT models, for example the combination of the two-parameter logistic (2PL) model and the partial credit model (PCM; Masters, 1982) with a varying number of categories. Two simulation studies are carried out to illustrate the advantages of the item-weighted likelihood estimation method over the traditional Maximum a Posteriori (MAP) estimation method, the maximum likelihood estimation (MLE) method, Warm's (1989) weighted likelihood estimation (WLE) method, and the type-weighted maximum likelihood estimation (TWLE) method. Simulation results indicate that the item-weighted likelihood estimation method recovers examinees' true ability levels better than the existing likelihood-based methods (MLE, WLE, and TWLE) and the MAP method, for both complex longitudinal IRT models and unidimensional IRT models, with smaller bias, root-mean-square errors, and root-mean-square differences, especially at low and high ability levels.
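A minimal sketch of an item-weighted ability estimate under the 2PL model, with the item weights v_j taken as given; how the weights are constructed is the paper's contribution and is not reproduced here:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def item_weighted_ability(y, a, b, v):
    """Ability estimate maximizing an item-weighted 2PL log-likelihood:
    y are 0/1 responses, a/b item discriminations and difficulties, and
    v the item weights (taken as given here)."""
    def neg_ll(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL response curve
        return -np.sum(v * (y * np.log(p) + (1 - y) * np.log(1 - p)))
    return minimize_scalar(neg_ll, bounds=(-4.0, 4.0), method="bounded").x
```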