Due to the increased use of areal measurement techniques, such as laser scanning, in geodetic monitoring tasks, areal analysis strategies have gained considerably in importance over the last decade. Although a variety of approaches that quasi-continuously model deformations have already been proposed in the literature, a multitude of challenges remain. One of the major interests of engineering geodesy within monitoring tasks is the detection of absolute distortions with respect to a stable reference frame. Determining distortions and simultaneously establishing the joint geodetic datum can be realised by modelling the differences between point clouds acquired in different measuring epochs by means of a rigid body movement that is superimposed by distortions. In a previous study, we discussed the possibility of estimating these rigid body movements from the control points of B-spline surfaces approximating the acquired point clouds. In this study, we focus instead on estimating them by means of constructed points on B-spline surfaces. This strategy has the advantage of larger redundancy compared to the control point–based strategy. Furthermore, the strategy introduced allows for the detection of rigid body movements between point clouds of different epochs and for the simultaneous localisation of areas in which the rigid body movement is superimposed by distortions. The developed approach is based on B-spline models of epoch-wise acquired point clouds, the surface parameters of which define point correspondences on different B-spline surfaces. Using these point correspondences, a RANSAC approach is used to robustly estimate the parameters of the rigid body movement. The resulting consensus set initially defines the non-distorted areas of the object under investigation, which are extended and statistically verified in a second step.
The developed approach is applied to simulated data sets, revealing that distorted areas can be reliably detected and that the parameters of the rigid body movement can be precisely and accurately determined by means of the strategy.
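The correspondence-based RANSAC estimation described in this abstract can be sketched as follows: a minimal-sample rigid fit (here via the Kabsch/SVD algorithm) is repeated, and the largest consensus set marks the correspondences consistent with a pure rigid body movement. This is an illustrative reconstruction with synthetic 3-D correspondences; the function names, sample size, and tolerance are assumptions, not the authors' implementation.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch/SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_rigid(P, Q, n_iter=200, tol=0.05, rng=np.random.default_rng(0)):
    """Robustly estimate the rigid body movement between corresponding points.
    The consensus set marks correspondences consistent with a pure rigid
    motion, i.e. the (initially) non-distorted areas."""
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = rigid_fit(P[idx], Q[idx])
        resid = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_fit(P[best_inliers], Q[best_inliers])    # refit on consensus set
    return R, t, best_inliers
```

In this sketch, correspondences flagged as outliers are the candidate distorted areas that the second (extension and statistical verification) step would then examine.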
In this work we fit an epidemiological model SEIAQR (Susceptible - Exposed - Infectious - Asymptomatic - Quarantined - Removed) to the data of the first COVID-19 outbreak in Rio de Janeiro, Brazil. Particular emphasis is given to the unreported rate, that is, the proportion of infected individuals that is not detected by the health system. The evaluation of the parameters of the model is based on a combination of an error-weighted least squares method and appropriate B-splines. The structural and practical identifiability is analyzed to support the feasibility and robustness of the parameter estimation. We use the Bootstrap method to quantify the uncertainty of the estimates. For the outbreak of March–July 2020 in Rio de Janeiro, we estimate about 90% of unreported cases, with a 95% confidence interval (85%, 93%).
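A compartmental model of this kind can be integrated numerically before any fitting takes place. The sketch below is a minimal SEIAQR forward simulation; the flow structure (E splits into symptomatic I and asymptomatic A; I is quarantined; A and Q are removed) and all rate values are illustrative assumptions, not the calibrated model of the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seiaqr(t, y, beta, sigma, p, gamma, q):
    """SEIAQR right-hand side (hypothetical flows and rates):
    S -> E (force of infection from I and A), E -> I (fraction p) or A,
    I -> Q (quarantine), A and Q -> R (removal)."""
    S, E, I, A, Q, R = y
    N = y.sum()
    lam = beta * (I + A) / N          # quarantined individuals do not transmit
    dS = -lam * S
    dE = lam * S - sigma * E
    dI = p * sigma * E - q * I
    dA = (1 - p) * sigma * E - gamma * A
    dQ = q * I - gamma * Q
    dR = gamma * (A + Q)
    return [dS, dE, dI, dA, dQ, dR]

# Rio de Janeiro-scale population with 10 seed infections (illustrative numbers)
y0 = [6.7e6 - 10, 0.0, 10.0, 0.0, 0.0, 0.0]
sol = solve_ivp(seiaqr, (0.0, 150.0), y0,
                args=(0.4, 1 / 5, 0.3, 1 / 10, 1 / 7))
```

With a detected (symptomatic) fraction `p = 0.3`, roughly 70% of infections in this toy run would be unreported; the paper's 90% estimate comes from fitting such a fraction to the observed case data.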
Methods for fitting nonhomogeneous Markov models to panel-observed data using direct numerical solution to the Kolmogorov Forward equations are developed. Nonhomogeneous Markov models occur most commonly when baseline transition intensities depend on calendar time, but may also occur with deterministic time-dependent covariates such as age. We propose transition intensities based on B-splines as a smooth alternative to piecewise constant intensities and also as a generalization of time transformation models. An expansion of the system of differential equations allows first derivatives of the likelihood to be obtained, which can be used in a Fisher scoring algorithm for maximum likelihood estimation. The method is evaluated through a small simulation study and demonstrated on data relating to the development of cardiac allograft vasculopathy in posttransplantation patients.
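The direct numerical solution referred to here can be illustrated on a toy model: the Kolmogorov forward equations dP(t)/dt = P(t)Q(t) are integrated as an ODE system. The 3-state progressive structure and the (linearly time-varying rather than B-spline) intensities below are assumptions for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def intensity(t):
    """Hypothetical nonhomogeneous generator for a progressive 3-state model
    (e.g. healthy -> diseased -> dead) whose intensities grow with time t;
    a smooth stand-in for the paper's B-spline intensities."""
    q12 = 0.10 * (1 + 0.5 * t)
    q23 = 0.05 * (1 + t)
    return np.array([[-q12,  q12,  0.0],
                     [ 0.0, -q23,  q23],
                     [ 0.0,  0.0,  0.0]])

def forward_rhs(t, p_flat):
    """Kolmogorov forward equations dP/dt = P(t) Q(t), flattened for solve_ivp."""
    P = p_flat.reshape(3, 3)
    return (P @ intensity(t)).ravel()

P0 = np.eye(3).ravel()                      # P(0) = I
sol = solve_ivp(forward_rhs, (0.0, 2.0), P0, rtol=1e-8)
P = sol.y[:, -1].reshape(3, 3)              # transition probability matrix over [0, 2]
```

Because each row of the generator sums to zero, the row sums of P(t) stay at one along the integration, which is a convenient numerical sanity check.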
Type-I censored sequential k-out-of-n systems
Burkschat, M.; Cramer, E.; Górny, J.
Applied Mathematical Modelling, October 2016, Volume 40, Issue 19–20
Journal Article · Peer reviewed · Open access
•Type-I censored sequential order statistics are introduced.
•The case of underlying exponential distributions is treated in detail.
•The exact conditional distribution of the MLE for a scale parameter is derived.
•Monotonicity of the survival function of the MLE w.r.t. the scale parameter is shown.
•Illustrative examples of the results are given for different settings.
Type-I censored sequential order statistics are introduced and their distribution is examined. In the case of underlying exponential distributions, the conditional joint and marginal distributions of the normalized spacings given the number of observed failure times are obtained. Furthermore, the conditional distribution of the maximum likelihood estimator of a scale parameter based on a Type-I censored sample of sequential order statistics from exponential distributions is derived. A monotonicity property and limits of the survival function of this estimator with respect to the scale parameter are shown. These properties are important for the construction of an exact confidence interval for the scale parameter.
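For the classical i.i.d. special case of this setting (all model parameters of the sequential order statistics equal to one), the scale MLE under Type-I censoring has the familiar closed form: total time on test divided by the number of observed failures. The sketch below shows that special case only; it is not the sequential-order-statistics estimator of the paper.

```python
import numpy as np

def exp_scale_mle(lifetimes, T):
    """MLE of the exponential scale theta from a Type-I censored i.i.d. sample:
    units failing after the censoring time T contribute T each (censored),
    and theta_hat = total time on test / number of observed failures."""
    lifetimes = np.asarray(lifetimes)
    failures = lifetimes[lifetimes <= T]           # observed failure times
    d = len(failures)
    if d == 0:
        raise ValueError("no failures observed before T; the MLE does not exist")
    ttt = failures.sum() + (len(lifetimes) - d) * T   # total time on test
    return ttt / d
```

The conditioning on the number of observed failures that the abstract emphasizes is visible even here: the estimator is undefined when no failure occurs before T, which is why the paper works with conditional distributions given that count.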
We consider simultaneous semiparametric estimation of conditional quantiles for multiple responses using a dynamic single-index structure. Motivated by a financial application, a market factor index is constructed that is shared among different portfolios, which results in a more interpretable and efficient model compared to separately building multiple conditional quantiles. On the other hand, the link functions are allowed to be different across portfolios. The asymptotic normality of the index parameter is established, as well as the convergence rate of the nonparametric functions. Monte Carlo studies demonstrate the advantages of the proposed estimator, and an application to financial data is used to illustrate the method.
Regression of a scalar response on signal predictors, such as near-infrared (NIR) spectra of chemical samples, presents a major challenge when, as is typically the case, the dimension of the signals far exceeds their number. Most solutions to this problem reduce the dimension of the predictors either by regressing on components (e.g., principal component regression (PCR) and partial least squares (PLS)) or by smoothing methods, which restrict the coefficient function to the span of a spline basis. This article introduces functional versions of PCR and PLS, which combine both of the foregoing dimension-reduction approaches. Two versions of functional PCR are developed, both using B-splines and roughness penalties. The regularized-components version applies such a penalty to the construction of the principal components (i.e., it uses functional principal components), whereas the regularized-regression version incorporates a penalty in the regression. For the latter form of functional PCR, the penalty parameter may be selected by generalized cross-validation, restricted maximum likelihood (REML), or a minimum mean integrated squared error criterion. Proceeding similarly, we develop two versions of functional PLS. Asymptotic convergence properties of regularized-regression functional PCR are demonstrated. A simulation study and split-sample validation with several NIR spectroscopy data sets indicate that functional PCR and functional PLS, especially the regularized-regression versions with REML, offer advantages over existing methods in terms of both estimation of the coefficient function and prediction of future observations.
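The classical (unpenalized) PCR that the functional versions generalize can be sketched compactly via the SVD: regress the centred response on the first k principal component scores and map the coefficients back to the predictor grid. This baseline omits the B-spline basis and roughness penalty that are the paper's contribution.

```python
import numpy as np

def pcr_fit(X, y, k):
    """Classical principal component regression: regress y on the first k
    principal component scores of X (n samples x p grid points, p >> n is fine).
    The unpenalized baseline that functional PCR/PLS extend."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]                         # n x k component scores
    gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    beta = Vt[:k].T @ gamma                           # coefficient evaluated on the grid
    intercept = y.mean() - X.mean(axis=0) @ beta
    return beta, intercept

def pcr_predict(X, beta, intercept):
    return X @ beta + intercept
```

In the functional versions, `beta` would additionally be constrained to a B-spline span and shrunk toward smoothness, which is what makes the estimated coefficient function interpretable for spectra.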
Enriching tensor-product B-spline control nets by allowing T-gons (where strips of quadrilaterals start or end) and irregular nodes (where n≠4 quadrilaterals meet) reduces the requirements on quad-meshing and increases the flexibility for polyhedral design with associated smooth surfaces. This paper introduces a family of piecewise polynomial, geometrically continuous surface constructions that yield good highlight line distributions also in the presence of irregular nodes next to a T-gon. Such tight juxtaposition can further reduce the quad-meshing requirements and increase the space of polyhedral design control structures. The surfaces can be chosen to cover T-gons with G1 caps of degree bi-4, or with caps of degree bi-3 that are almost G1 and preserve the good highlight line distribution of the bi-4 G1 surfaces.
Overdispersed count data arise commonly in disease mapping and infectious disease studies. Typically, the level of overdispersion is assumed to be constant over time and space. In some applications, however, this assumption is violated, and in such cases, it is necessary to model the dispersion as a function of time and space in order to obtain valid inferences. Motivated by a study examining spatiotemporal patterns in COVID-19 incidence, we develop a Bayesian negative binomial model that accounts for heterogeneity in both the incidence rate and degree of overdispersion. To fully capture the heterogeneity in the data, we introduce region-level covariates, smooth temporal effects, and spatially correlated random effects in both the mean and dispersion components of the model. The random effects are assigned bivariate intrinsic conditionally autoregressive priors that promote spatial smoothing and permit the model components to borrow information, which is appealing when the mean and dispersion are spatially correlated. Through simulation studies, we show that ignoring heterogeneity in the dispersion can lead to biased and imprecise estimates. For estimation, we adopt a Bayesian approach that combines full-conditional Gibbs sampling and Metropolis–Hastings steps. We apply the model to a study of COVID-19 incidence in the state of Georgia, USA from March 15 to December 31, 2020.
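The role of the dispersion parameter can be seen directly from the negative binomial variance function, Var(Y) = mu + mu²/r: smaller r means more overdispersion relative to a Poisson with the same mean (Var = mu). A minimal simulation via the Poisson-gamma mixture (parameterization chosen for illustration; not the paper's model or code):

```python
import numpy as np

def nb_sample(mu, r, size, rng):
    """Negative binomial draws with mean mu and dispersion r via the
    Poisson-gamma mixture, so Var(Y) = mu + mu**2 / r."""
    lam = rng.gamma(shape=r, scale=mu / r, size=size)  # gamma-distributed rates
    return rng.poisson(lam)                            # Poisson given the rate

rng = np.random.default_rng(0)
low_disp  = nb_sample(mu=10.0, r=50.0, size=200_000, rng=rng)  # near-Poisson
high_disp = nb_sample(mu=10.0, r=2.0,  size=200_000, rng=rng)  # overdispersed
```

Two regions with identical incidence rates but different r values thus produce visibly different count variability, which is why assuming a single constant dispersion can bias the inference.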
Bayesian simultaneous estimation of nonparametric quantile curves is a challenging problem, requiring a flexible and robust data model whilst satisfying the monotonicity or noncrossing constraints on the quantiles. The pyramid quantile regression method for simultaneous linear quantile fitting is adapted for the spline regression setting. In high dimensional problems, the choice of the pyramid locations becomes crucial for a robust parameter estimation. The optimal pyramid locations are derived which then allows for an efficient adaptive block-update MCMC scheme to be proposed for posterior computation. Simulation studies show that the proposed method provides estimates with significantly smaller errors and better empirical coverage probability when compared to existing alternative approaches. The method is illustrated with three real applications.
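Underlying any quantile-fitting method, Bayesian or not, is the check (pinball) loss, whose minimiser over a constant is the sample quantile. The toy grid-search below illustrates that loss and the noncrossing property that simultaneous methods must enforce; it is a didactic aid, not the pyramid method itself.

```python
import numpy as np

def pinball_loss(u, tau):
    """Check (pinball) loss for quantile level tau:
    tau * u for u >= 0 and (tau - 1) * u for u < 0."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def quantile_by_loss(y, tau, grid):
    """Minimise the average pinball loss over candidate constants; since the
    loss is convex in the candidate, the minimiser is a sample tau-quantile."""
    losses = [pinball_loss(y - q, tau).mean() for q in grid]
    return grid[int(np.argmin(losses))]
```

For separate fits per level, nothing forces estimated quantile curves at different tau to stay ordered as covariates vary; the pyramid construction is one way of building that noncrossing constraint into a simultaneous posterior.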