Tailoring fiber orientation is an attractive approach to improving the efficiency of composite structures. In the discrete angle selection approach, previous methods use formulations that require many variables, increasing the computational cost, and they cannot guarantee total fiber convergence (the selection of only one candidate angle). This paper proposes a novel fiber orientation optimization method based on the optimized selection of discrete angles, an approach commonly used to avoid the multiple-local-minima problem found in methods that treat the fiber angle itself as the design variable. The proposed method uses the normal distribution function as the angle selection function, which requires only one variable to select the optimized angle among any number of discrete candidate angles. By adjusting a parameter of the normal distribution function, total fiber convergence can be achieved. In addition, a common problem in fiber angle optimization methods is that, because fibers can be arbitrarily oriented, structural problems may arise at the intersections of discontinuous fiber paths. Moreover, composite manufacturing technologies, such as Advanced Fiber Placement (AFP), produce better results when fiber paths are continuous. These problems can be avoided by considering continuously varying fiber paths. In the proposed method, fiber continuity is also achieved by using a spatial filter, which improves the fiber path and avoids structural problems. Numerical examples are presented to illustrate the proposed method.
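The selection mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate angles, the single design variable mu, and the width parameter sigma are all assumed names, and sigma plays the role of the parameter whose reduction drives total fiber convergence.

```python
import numpy as np

def angle_weights(mu, candidates, sigma):
    """Normal-distribution selection weights over discrete candidate angles.

    mu: the single design variable per element (degrees)
    sigma: width parameter; shrinking it concentrates all weight on one
    candidate angle, i.e. "total fiber convergence".
    """
    w = np.exp(-0.5 * ((candidates - mu) / sigma) ** 2)
    return w / w.sum()          # normalized weights over the candidates

candidates = np.array([-45.0, 0.0, 45.0, 90.0])

# Wide sigma: the weights are spread over several candidate angles.
w_wide = angle_weights(10.0, candidates, sigma=60.0)

# Small sigma: essentially all weight lands on the nearest candidate (0 deg),
# so only one angle is effectively selected.
w_sharp = angle_weights(10.0, candidates, sigma=5.0)
```

As sigma is driven down during the optimization, the weighted angle field continuously approaches a pure discrete selection, which is consistent with the convergence behavior the abstract describes.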
I start by providing an updated summary of the penalized pixel-fitting (ppxf) method that is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies, via full spectrum fitting. I then focus on the problem of extracting the kinematics when the velocity dispersion ... is smaller than the velocity sampling Delta V, which is generally, by design, close to the instrumental dispersion sigma_inst. The standard approach consists of convolving templates with a discretized kernel while fitting for its parameters. This is obviously very inaccurate when sigma ... Delta V/2, due to undersampling. Oversampling can prevent this, but it has drawbacks. Here I present a more accurate and efficient alternative. It avoids the evaluation of the undersampled kernel and instead directly computes its well-sampled analytic Fourier transform, for use with the convolution theorem. A simple analytic transform exists when the kernel is described by the popular Gauss-Hermite parametrization (which includes the Gaussian as a special case) of the line-of-sight velocity distribution. I describe how this idea was implemented in a significant upgrade to the publicly available ppxf software. The key advantage of the new approach is that it provides accurate velocities regardless of ... This is important, e.g., for spectroscopic surveys targeting galaxies with sigma << sigma_inst, for galaxy redshift determinations, or for measuring line-of-sight velocities of individual stars. The proposed method could also be used to fix Gaussian convolution algorithms used in today's popular software packages. (ProQuest: ... denotes formulae/symbols omitted.)
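The core idea, for the pure-Gaussian special case, can be sketched with NumPy. This is an illustration of the analytic-Fourier-transform trick on a toy periodic signal, not the ppxf implementation; the function names and the toy template are assumptions, and velocities/dispersions are expressed in pixels.

```python
import numpy as np

def losvd_analytic_ft(n, v, sigma):
    """Analytic Fourier transform of a Gaussian LOSVD (v, sigma in pixels).

    Returned in the numpy rfft convention, so the kernel itself is never
    sampled: multiply by rfft(template) and invert to convolve.
    """
    w = 2 * np.pi * np.fft.rfftfreq(n)              # angular frequency per pixel
    return np.exp(-1j * w * v - 0.5 * (sigma * w) ** 2)

def convolve_analytic(spec, v, sigma):
    return np.fft.irfft(np.fft.rfft(spec) * losvd_analytic_ft(len(spec), v, sigma), len(spec))

n = 256
template = 1.0 + np.sin(2 * np.pi * np.arange(n) / n) ** 2   # smooth periodic toy spectrum

# Cross-check against a discretized kernel where sampling IS adequate (sigma = 3 pix):
x = np.arange(n) - n // 2
kern = np.exp(-0.5 * ((x - 0.7) / 3.0) ** 2)
kern /= kern.sum()
direct = np.fft.irfft(np.fft.rfft(template) * np.fft.rfft(np.roll(kern, -(n // 2))), n)
analytic = convolve_analytic(template, 0.7, 3.0)

# The analytic route remains valid for sigma far below one pixel, where a
# discretized kernel would be badly undersampled:
tiny = convolve_analytic(template, 0.2, 0.1)
```

In the well-sampled regime the two routes agree, while only the analytic transform stays meaningful for sigma << 1 pixel, which is exactly the regime the abstract targets.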
In this paper, a new family of continuous random variables with not necessarily symmetric densities is introduced. Its density function can incorporate unimodality and bimodality features. Special attention is paid to the normal distribution, which is included as a particular case. The density function is given in closed form, which makes it easy to compute probabilities, moments, and other related measures such as the skewness and kurtosis coefficients. A stochastic representation of the family, which enables us to generate random variates from this model, is also presented. This new family of distributions is applied to explain the incidence of Hodgkin's disease by age. Other applications include the implications of bimodality in geoscience. Finally, the multivariate counterpart of this distribution is briefly discussed.
Fusion excitation functions of the stable odd-A targets 147,149Sm with a 16O projectile are theoretically analyzed within the symmetric-asymmetric Gaussian barrier distribution (SAGBD) formalism. For the purposes of this study, we assumed that the bombarding energies of the 16O + 147,149Sm reactions lie around the nominal barrier. For these reactions, Wong-based computations fail to reproduce the fusion data, whereas the SAGBD predictions reproduce them fairly well over the entire bombarding energy range. The values of the channel coupling parameter (λ) and V_CB^RED evaluated from the SAGBD outcomes are found to be larger for the heavier (16O + 149Sm) system than for the lighter (16O + 147Sm) one, which suggests that the heavier system possesses extra fusion enhancement in the sub-barrier domain. The present theoretical investigation highlights the significant contributions of intrinsic channels that emerge from the structure of the reacting nuclei; such effects are empirically included in the SAGBD method.
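The general idea of a Gaussian barrier distribution folded with a single-barrier formula can be sketched as below. This is not the paper's SAGBD parametrization: the barrier height B0, Gaussian width, curvature hw, and barrier radius Rb are all illustrative values, and a plain symmetric Gaussian is used in place of the symmetric-asymmetric distribution.

```python
import numpy as np

hw, Rb = 4.0, 10.0        # MeV barrier curvature and fm barrier radius (assumed)
B0, width = 60.0, 2.0     # MeV mean barrier height and Gaussian spread (illustrative)

def wong(E, B):
    """Wong single-barrier fusion cross section, in mb (1 fm^2 = 10 mb)."""
    return 10.0 * hw * Rb**2 / (2.0 * E) * np.log1p(np.exp(2.0 * np.pi * (E - B) / hw))

def sigma_folded(E, n=401):
    """Fold the Wong formula over a Gaussian distribution of barrier heights."""
    B = np.linspace(B0 - 5 * width, B0 + 5 * width, n)
    D = np.exp(-0.5 * ((B - B0) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))
    return float(np.sum(D * wong(E, B)) * (B[1] - B[0]))   # simple quadrature

energies = (55.0, 58.0, 60.0, 62.0, 66.0)
vals = [sigma_folded(e) for e in energies]
```

Because the Wong expression is convex in the barrier height, the folded cross section always exceeds the single-barrier value below the nominal barrier, which is the sub-barrier enhancement the abstract attributes to channel couplings.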
Monocular 3D object detection is an important task for autonomous driving, considering its advantage of low cost. It is much more challenging than conventional 2D cases due to its inherently ill-posed nature, which is mainly reflected in the lack of depth information. Recent progress in 2D detection offers opportunities to better solve this problem. However, it is non-trivial to make a generic adapted 2D detector work in this 3D task. In this paper, we study this problem with a method built on a fully convolutional single-stage detector and propose a general framework, FCOS3D. Specifically, we first transform the commonly defined 7-DoF 3D targets to the image domain and decouple them into 2D and 3D attributes. Then the objects are distributed to different feature levels with consideration of their 2D scales, and assigned only according to the projected 3D center during training. Furthermore, the center-ness is redefined with a 2D Gaussian distribution based on the 3D center to fit the 3D target formulation. All of this makes the framework simple yet effective, getting rid of any 2D detection or 2D-3D correspondence priors. Our solution achieves 1st place among all vision-only methods in the nuScenes 3D detection challenge of NeurIPS 2020. Code and models are released at https://github.com/open-mmlab/mmdetection3d.
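The redefined center-ness can be sketched as a 2D Gaussian of the offset to the projected 3D center. This is a minimal illustration, not the released implementation: the sharpness alpha and the stride normalization are assumed values.

```python
import math

def gaussian_centerness(px, py, cx, cy, alpha=2.5, stride=8.0):
    """2D Gaussian center-ness target built around the projected 3D center.

    (px, py): image-plane location of a feature-map point.
    (cx, cy): the object's projected 3D center in pixels.
    alpha and the stride normalization are illustrative, not necessarily
    the values used by FCOS3D.
    """
    dx = (px - cx) / stride
    dy = (py - cy) / stride
    return math.exp(-alpha * (dx * dx + dy * dy))

at_center = gaussian_centerness(320.0, 240.0, 320.0, 240.0)   # exactly 1 at the 3D center
near = gaussian_centerness(324.0, 240.0, 320.0, 240.0)
far = gaussian_centerness(352.0, 240.0, 320.0, 240.0)
```

The target peaks at 1 on the projected 3D center and decays smoothly with image-plane distance, so locations far from the center contribute little during training.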
Simulation from the truncated multivariate normal distribution in high dimensions is a recurrent problem in statistical computing and is typically only feasible by using approximate Markov chain Monte Carlo sampling. We propose a minimax tilting method for exact independently and identically distributed data simulation from the truncated multivariate normal distribution. The new methodology provides both a method for simulation and an efficient estimator for hitherto intractable Gaussian integrals. We prove that the estimator possesses a rare vanishing-relative-error asymptotic property. Numerical experiments suggest that the proposed scheme is accurate in a wide range of set-ups for which competing estimation schemes fail. We give an application to exact independently and identically distributed data simulation from the Bayesian posterior of the probit regression model.
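Why approximate sampling is usually the only option can be seen from the crude baseline that minimax tilting improves upon. The sketch below is not the paper's method; it shows naive rejection sampling from N(0, I_d) restricted to the orthant {x : x_i > a}, whose acceptance probability Phi(-a)^d collapses as the dimension grows. The function name and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_acceptance(d, a, n=200_000):
    """Empirical acceptance rate of crude rejection sampling from N(0, I_d)
    truncated to the region where every coordinate exceeds a.

    The exact acceptance probability is Phi(-a)**d, so the cost of producing
    one exact i.i.d. draw grows exponentially with the dimension d.
    """
    x = rng.standard_normal((n, d))
    return float(np.mean(np.all(x > a, axis=1)))

acc2 = naive_acceptance(2, 1.0)    # workable in 2 dimensions
acc10 = naive_acceptance(10, 1.0)  # essentially never accepts in 10 dimensions
```

Minimax exponential tilting replaces this untilted proposal with one optimized against the worst-case truncation region, which is what makes exact i.i.d. simulation feasible where the naive scheme stalls.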
Purpose
To survey the use of Pearson's correlation coefficient (r) and related statistical methods in the ophthalmic literature, to consider the limitations of r, and to suggest suitable alternative ...methods of analysis.
Recent findings
Searching the Ophthalmic and Physiological Optics (OPO), Optometry and Vision Science (OVS), and Clinical and Experimental Optometry (CXO) online archives using correlation and Pearson's r as search terms resulted in 4057 and 281 hits, respectively. Coefficient of determination, r square, and r squared received fewer hits (65, 8, and 22 hits, respectively). The assumption that the data follow a bivariate normal distribution was rarely encountered (3 hits), although several studies applied Spearman's rank correlation (70 hits). The intra‐class correlation coefficient (ICC) was widely used (178 hits), but fewer hits were recorded for partial correlation (43 hits) and multiple correlation (13 hits). There was little evidence that the problem of sample size was addressed in correlation studies.
Summary
Investigators should be alert to whether: (1) the relationship between two variables could be non‐linear, (2) the data are bivariate normal, (3) r accounts for a significant proportion of the variance in Y, (4) outliers are present, the data are clustered, or have a restricted range, (5) the sample size is appropriate, and (6) a significant correlation indicates causality. In addition, the number of significant digits used to express r and the problems of multiple testing should be addressed. The problems and limitations of r suggest a more cautious approach regarding its use and the application of alternative methods where appropriate.
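Point (1) above, the risk of applying r to a non-linear relationship, is easy to demonstrate. The sketch below, with assumed toy data, compares Pearson's r with Spearman's rank correlation (one of the alternatives the survey found in use) on a perfectly monotone but strongly non-linear relationship.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson's r on the ranks (no ties here)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson_r(rank(x), rank(y))

x = np.linspace(0.1, 5.0, 50)
y = np.exp(x)            # strongly non-linear, but perfectly monotone

r = pearson_r(x, y)      # well below 1: understates the relationship's strength
rho = spearman_rho(x, y) # equals 1 for any strictly monotone relationship
```

Pearson's r measures only the linear component of association, so it understates a deterministic monotone relationship that a rank-based coefficient captures fully, which is precisely why checking linearity before reporting r matters.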
Plane segmentation is a basic task in the automatic reconstruction of indoor and urban environments from unorganized point clouds acquired by laser scanners. As one of the most common plane-segmentation methods, standard Random Sample Consensus (RANSAC) is often used to detect planes one after another. However, it suffers from the spurious-plane problem when noise and outliers exist, due to the uncertainty of randomly sampling a minimum subset of 3 points. An improved RANSAC method based on Normal Distribution Transformation (NDT) cells is proposed in this study to avoid spurious planes in 3D point-cloud plane segmentation. A planar NDT cell is selected as the minimal sample in each iteration to ensure that the sampled points lie on the same plane surface. The 3D NDT represents the point cloud with a set of NDT cells and models the observed points with a normal distribution within each cell. The geometric appearance of each NDT cell is used to classify it as planar or non-planar. The proposed method is verified on three indoor scenes. The experimental results show that the correctness exceeds 88.5% and the completeness exceeds 85.0%, which indicates that the proposed method identifies more reliable and accurate planes than standard RANSAC. It also executes faster. These results validate the suitability of the method.
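One common way to judge the "geometric appearance" of an NDT cell is from the eigenvalues of its covariance: a cell is planar when its points spread in two directions but are thin in the third. The sketch below illustrates this idea; the threshold value and function name are assumptions, not the paper's settings.

```python
import numpy as np

def is_planar_cell(points, flat_ratio=0.05):
    """Classify an NDT cell as planar from its covariance eigenvalues.

    points: (N, 3) array of the points falling in the cell.
    The cell is treated as planar when the smallest eigenvalue is much
    smaller than the middle one (2D spread, thin in the normal direction).
    flat_ratio is an illustrative threshold, not the paper's value.
    """
    cov = np.cov(points.T)                     # 3x3 covariance of the cell
    lam = np.sort(np.linalg.eigvalsh(cov))     # lam[0] <= lam[1] <= lam[2]
    return bool(lam[0] < flat_ratio * lam[1])

rng = np.random.default_rng(1)
plane = rng.uniform(-1, 1, (200, 3)) * [1.0, 1.0, 0.001]   # thin slab, plane-like
blob = rng.normal(size=(200, 3))                            # isotropic cluster
```

Sampling a whole planar cell instead of 3 random points then guarantees that each RANSAC hypothesis comes from points already known to be locally coplanar, which is what suppresses spurious planes.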
In this paper, we discuss the distribution of the t-statistic under the assumption that the underlying discrete-time process is a normal autoregressive process. This result generalizes the classical result for the traditional t-distribution, where the underlying discrete-time process is an uncorrelated normal. For an AR(1) process, however, the underlying observations are correlated; all traditional results break down, and the resulting t-statistic follows a new distribution that converges asymptotically to a normal. We give an explicit formula for this new distribution, obtained as the ratio of two dependent random variables (a normal and the norm of another normal). We also provide a modified statistic that follows a non-central t-distribution; its derivation comes from finding an orthogonal basis for the initial circulant Toeplitz covariance matrix. Our findings are consistent with the asymptotic distribution of the t-statistic derived for the case of a large number of observations or zero correlation. This exact distribution has applications in multiple fields and, in particular, provides a way to derive the exact distribution of the Sharpe ratio under normal AR(1) assumptions. Mathematics Subject Classification: 62E10; 62E15
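That the classical t-distribution breaks down under AR(1) correlation is easy to verify by Monte Carlo. The sketch below is an illustration of the phenomenon, not the paper's exact distribution: the sample size, autocorrelation phi, and replication count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1(n, phi):
    """One stationary Gaussian AR(1) path of length n with unit innovations."""
    x = np.empty(n)
    x[0] = rng.standard_normal() / np.sqrt(1.0 - phi**2)   # stationary start
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def t_stat(x):
    """Usual one-sample t-statistic for a zero mean."""
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))

n, phi, reps = 30, 0.6, 5_000
t_iid = np.array([t_stat(rng.standard_normal(n)) for _ in range(reps)])
t_ar1 = np.array([t_stat(ar1(n, phi)) for _ in range(reps)])

# Under i.i.d. normality the t-statistic has the textbook Student spread;
# positive AR(1) autocorrelation visibly inflates its dispersion, so the
# classical t quantiles are no longer valid.
```

This inflation is exactly why an exact finite-sample distribution (and, downstream, an exact Sharpe-ratio distribution) under AR(1) is useful rather than a curiosity.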