The usefulness of a hidden truncated Pareto (type II) model, along with its inference under both the classical and Bayesian paradigms, has been discussed in the literature in great detail. In the multivariate set-up, the existing discussions are primarily based on constructing multivariate hidden truncated Pareto (type II) models with truncation in a single variable or in more than one variable. However, in all such previous discussions of bivariate hidden truncated Pareto models under the classical estimation set-up, large bias and standard error values have been observed for the truncation parameter(s) as well as for the other parameters, and this issue has not been addressed. In this article, we address the issue of large bias values by considering constrained optimization via linear/non-linear transformations of the parameters, following the strategy proposed in the reference given in Section 3, to efficiently implement the Newton-Raphson optimization algorithm in R. This is the major motivation for the present paper. We also derive the observed Fisher information matrix. For illustrative purposes, we provide a simulation study. A real-life data set is also re-analyzed to study the utility of such two-sided hidden truncated Pareto (type II) models.
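As a concrete illustration of the reparameterization idea, the following minimal sketch (ours, not the authors' code, and in Python rather than the R implementation the abstract mentions) fits a plain Pareto (type II)/Lomax likelihood by optimizing over log-parameters, so that a Newton-type (here quasi-Newton) update runs unconstrained while the shape and scale stay positive:

```python
# Minimal sketch of the reparameterization idea: optimize over
# log-parameters so an unconstrained Newton-type update keeps
# alpha, sigma > 0. The plain Lomax (Pareto type II) likelihood is
# a simplified stand-in for the paper's hidden truncated model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lomax

rng = np.random.default_rng(0)
x = lomax.rvs(c=2.5, scale=1.5, size=500, random_state=rng)

def neg_log_lik(theta):
    alpha, sigma = np.exp(theta)  # inverse of the log transform
    n = len(x)
    # Lomax log-density: log(alpha/sigma) - (alpha+1) log(1 + x/sigma)
    return -(n * np.log(alpha) - n * np.log(sigma)
             - (alpha + 1) * np.log1p(x / sigma).sum())

res = minimize(neg_log_lik, x0=np.log([1.0, 1.0]), method="BFGS")
alpha_hat, sigma_hat = np.exp(res.x)
print(alpha_hat, sigma_hat)
```

On the log scale, the inverse Hessian returned by the optimizer plays the role of the inverse observed information for the transformed parameters, which is where the observed Fisher information matrix mentioned in the abstract enters.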
In this paper, we focus on the coherently coupled nonlinear Schrödinger (NLS) equations with variable coefficients, including four-wave mixing, external potential, and gain/loss terms, and derive three types of similarity transformations under different constraint conditions. Based on these transformations, one can transform the variable-coefficient coupled NLS equations into constant-coefficient coupled NLS equations that can be decoupled by a linear transformation. Thus, various composite waves superposed from different nonlinear waves can be investigated in homogeneous and inhomogeneous systems with the aid of the solutions of the NLS equation and these transformations. The diversity of the composite waves and the energy exchange between the two coherently coupled components in a homogeneous fiber system are demonstrated. Furthermore, based on the three similarity transformations obtained, the characteristics of the composite waves are investigated in a tunneling system and a periodic perturbation system, respectively. These results could be helpful for exploring the diverse dynamics of composite waves in birefringent fibers.
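Since the exact transformations are derived in the paper under specific constraint conditions, the following schematic ansatz only indicates the generic shape such a similarity transformation takes; the amplitude $\rho$, effective coordinates $Z$, $T$, and phase $\varphi$ are placeholders, not the paper's derived functions:

```latex
% Generic similarity-transformation ansatz (schematic only):
\[
  q_j(z,t) \;=\; \rho(z)\, Q_j\!\big(Z(z),\, T(z,t)\big)\,
  e^{\,i\varphi(z,t)}, \qquad j = 1,2,
\]
% with rho, Z, T, varphi chosen so that the variable-coefficient
% system for q_j reduces to a constant-coefficient coupled NLS
% system for Q_j.
```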
Over the past few decades, many new neural network architectures and deep learning (DL)-based models have been developed to tackle problems more efficiently, rapidly, and accurately. For classification problems, it is typical to use fully connected layers as the network head. The dense layers used in such architectures have always remained the same: they use a linear transformation function, a sum of products of output vectors with weight vectors plus a trainable linear bias. In this study, we explore a different mechanism for computing a neuron's output. By adding a new feature, a product of higher-order output vectors with their respective weight vectors, we transform the conventional linear function into higher-order functions involving powers of two and above. We compare and analyze the results obtained from six different transformation functions in terms of training and validation accuracies, on a custom neural network architecture and with two benchmark datasets for image classification (CIFAR-10 and CIFAR-100). While the dense layers perform better in all epochs with the new functions, the best performance is observed with a quadratic transformation function. Although the final accuracy achieved by the existing and new models remains the same, initial convergence to higher accuracies is always much faster in the proposed approach, significantly reducing the computational time and resources required. This model can improve the performance of every DL architecture that uses a dense layer, with remarkably higher improvement in larger architectures that incorporate a very high number of parameters and output classes.
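As one plausible reading of the quadratic case (a sketch, not the authors' implementation; layer and variable names are illustrative), a dense layer can be augmented with a second-order term so the output becomes W1·x + W2·x² + b:

```python
# Sketch of a quadratic transformation function for a dense layer:
# the usual linear term W1 x + b is augmented with a second-order
# term W2 (x ** 2). Illustrative PyTorch code, not the paper's.
import torch
import torch.nn as nn

class QuadraticDense(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)            # W1 x + b
        self.quad = nn.Linear(in_features, out_features, bias=False)  # W2 x^2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + self.quad(x.pow(2))

head = QuadraticDense(512, 10)       # e.g. a CIFAR-10 classification head
logits = head(torch.randn(8, 512))   # batch of 8 feature vectors
```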
Creating inclusive cities requires meaningful responses to inequality and segregation. We build an agent-based model of interactions between the wealth and ethnicity of agents to investigate 'dual' segregations: segregation due to ethnicity and segregation due to wealth. When agents are initially allowed to move into neighbourhoods they cannot afford, we find a regime with a marginal increase in both wealth segregation and ethnic segregation. However, as more agents are progressively allowed entry into unaffordable neighbourhoods, both wealth and ethnic segregation undergo sharp, non-linear transformations, but in opposite directions: wealth segregation shows a dramatic decline, while ethnic segregation shows an equally sharp upsurge. We argue that the decrease in wealth segregation does not merely accompany, but actually drives, the increase in ethnic segregation. Essentially, as agents are progressively allowed into neighbourhoods in contravention of affordability, they create wealth configurations that enable a sharp decline in wealth segregation while at the same time allowing co-ethnics to spatially congregate despite differences in wealth, resulting in the abrupt worsening of ethnic segregation.
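The following toy sketch (ours, with heavily simplified rules and illustrative names) shows how the key control described above, the probability of entering an unaffordable neighbourhood, can be wired into a Schelling-style model; sweeping `p_unaffordable` from 0 to 1 is where the abstract's non-linear transitions would be looked for:

```python
# Toy Schelling-style model: agents carry an ethnicity and a wealth
# level; p_unaffordable controls how often an agent may move into a
# neighbourhood it cannot afford. All rules here are illustrative
# simplifications of the paper's model.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 400, 2000
ethnicity = rng.integers(0, 2, N)      # two ethnic groups
wealth = rng.lognormal(0.0, 1.0, N)    # heavy-tailed wealth
location = rng.integers(0, 20, N)      # 20 neighbourhoods
p_unaffordable = 0.3                   # entry tolerance to sweep

def neighbourhood_price(loc):
    # price proxy: mean wealth of current residents (illustrative)
    members = wealth[location == loc]
    return members.mean() if members.size else 0.0

for _ in range(STEPS):
    i = rng.integers(N)
    target = rng.integers(20)
    affordable = wealth[i] >= neighbourhood_price(target)
    same = ethnicity[location == target] == ethnicity[i]
    likes_it = same.mean() >= 0.4 if same.size else True
    if likes_it and (affordable or rng.random() < p_unaffordable):
        location[i] = target
```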
Nitrate transfer from agricultural sources via river networks remains a serious, unresolved, and complex issue. This article proposes an economic analysis of the optimal reduction of this nitrate transfer. A linear transformation-and-transport model of nitrogen inputs from agricultural sources, in the form of nitrate, from five agricultural areas towards a hydrographic network in France is used to calculate the optimal effort to reduce nitrogen inputs on the basis of a cost-benefit analysis (CBA). A sensitivity study is implemented with different damage scenarios. In addition, uniform and spatialized input-reduction efforts are compared. In particular, our results show the determining role of the magnitude of the damage. The ratio of 1 to 3 between the low and high ranges of its estimation would make it possible to attain good status, as specified by the Water Framework Directive (WFD), without having to resort to the exemption procedure, decreasing the average optimal nitrate concentration from 47 mg/l to 42 mg/l. Moreover, this would increase the absolute and relative benefits of spatialization by factors of 9 and 2, respectively.
• Optimal nitrate reduction for a catchment is calculated using a cost-benefit analysis.
• Uniform and discriminating reductions of agricultural nitrogen are compared.
• The impact of the different damage-related hypotheses on the results is discussed.
• The hypothesis concerning the global damage level plays a crucial role in the results.
• Greater damage magnitude tends to increase the advantage of spatialization.
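For orientation, the following minimal sketch shows the cost-benefit logic in miniature, with placeholder quadratic cost and linear damage functions (not the paper's calibrated transfer model), including the abstract's 1-to-3 damage range:

```python
# Minimal CBA sketch: choose the input-reduction effort e in [0, 1]
# that minimises abatement cost plus residual damage. The functional
# forms and constants below are placeholders, not the paper's model.
from scipy.optimize import minimize_scalar

def abatement_cost(e):
    return 5000.0 * e**2          # convex: later efforts cost more

def residual_damage(e, damage_scale=1.0):
    return damage_scale * 2000.0 * (1.0 - e)  # damage falls with effort

for scale in (1.0, 3.0):          # the abstract's 1-to-3 damage range
    res = minimize_scalar(
        lambda e: abatement_cost(e) + residual_damage(e, scale),
        bounds=(0.0, 1.0), method="bounded")
    print(f"damage scale {scale}: optimal reduction = {res.x:.2f}")
```

Raising the damage scale shifts the optimal effort upward, which is the mechanism behind the abstract's claim that the damage magnitude determines whether good status is attainable.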
Financial institutions in China are undertaking an ongoing digital transformation as they upgrade. At the same time, risks accumulate as new fintech institutions establish differentiated competitive advantages through information technology innovations. In this study, a fuzzy analytical hierarchy process is adopted to carry out risk evaluation of the new fintech institutions in China and to identify those at risk as early as possible. First, several level 1 indicators of the risk evaluation system of the new fintech institutions and their corresponding subordinate level 2 indicators are determined; the level 2 indicators of each new fintech institution are then rated for risk evaluation ranking, which leads to the risk evaluation matrix of each level 1 indicator. Second, the new fintech institutions are classified into theoretically ideal "optimal," "medium," and "worst" categories by establishing the membership matrix of each level 1 indicator through the application of a linear transformation formula. Third, the degree of proximity is used to compare the fuzzy sets in pairs, forming the fuzzy recognition model of each level 1 indicator and identifying the new fintech institutions that are least risky with regard to each level 1 indicator. Finally, the fuzzy recognition models of the level 1 indicators are integrated to construct the fuzzy recognition model of the whole risk evaluation system and achieve the risk ranking of the new fintech institutions. This study aims to provide a theoretical ground and an applied method for national regulators to monitor fintech risks so that enterprises and individuals can avoid them.
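Two of the steps named above, the linear transformation to membership degrees and the degree-of-proximity comparison, can be sketched as follows (illustrative scoring only, not the paper's exact formulas):

```python
# Sketch of two steps from the abstract: a linear transformation
# maps raw level-2 ratings into [0, 1] membership degrees, and a
# degree-of-proximity score compares each institution's fuzzy set
# with the ideal "optimal" set. Scoring details are illustrative.
import numpy as np

ratings = np.array([[7.0, 9.0, 6.0],   # institution A, three level-2 indicators
                    [4.0, 5.0, 8.0],   # institution B
                    [9.0, 3.0, 5.0]])  # institution C

lo, hi = ratings.min(axis=0), ratings.max(axis=0)
membership = (ratings - lo) / (hi - lo)      # linear transformation to [0, 1]

optimal = np.ones(membership.shape[1])       # ideal "optimal" fuzzy set
proximity = 1.0 - np.abs(membership - optimal).mean(axis=1)
ranking = np.argsort(-proximity)             # least risky first
print(proximity, ranking)
```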
Breast cancer is one of the major causes of death in women. Computer Aided Diagnosis (CAD) systems are being developed to assist radiologists in early diagnosis. Micro-calcifications can be an early symptom of breast cancer. Besides detection, classification of micro-calcifications as benign or malignant is essential in a complete CAD system. We have developed a novel method for the classification of benign and malignant micro-calcifications using an improved Fisher Linear Discriminant Analysis (LDA) approach for the linear transformation of segmented micro-calcification data, in combination with a Support Vector Machine (SVM) variant to classify between the two classes. The results indicate an average accuracy of 96%, which is comparable to state-of-the-art methods in the literature.
Graphical abstract: Classification of Micro-calcification in Mammograms using Scalable Linear Fisher Discriminant Analysis.
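For readers who want the shape of this pipeline, here is a minimal sketch using scikit-learn's standard LDA and SVC in place of the paper's improved/scalable variants, on synthetic stand-in features:

```python
# Sketch of the LDA-then-SVM pipeline: a Fisher LDA projection of
# feature vectors followed by an SVM classifier. Synthetic data is
# a stand-in for segmented micro-calcification features.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_classes=2,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# With two classes, LDA projects onto at most one discriminant axis.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1), SVC())
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```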
A Posteriori Error Control for DPG Methods. Carstensen, Carsten; Demkowicz, Leszek; Gopalakrishnan, Jay. SIAM Journal on Numerical Analysis, Volume 52, Issue 3, 2014.
A combination of ideas in least-squares finite element methods with those of hybridized methods recently led to discontinuous Petrov–Galerkin (DPG) finite element methods. They minimize a residual inherited from a piecewise ultraweak formulation in a nonstandard, locally computable, dual norm. This paper establishes a general a posteriori error analysis for the natural norms of the DPG schemes under conditions equivalent to a priori stability estimates. It is proven that the locally computable residual norm of any discrete function is a lower and an upper error bound up to explicit data approximation errors. The presented abstract framework for a posteriori error analysis applies to known DPG discretizations of Laplace and Lamé equations and to a novel DPG method for the stress-velocity formulation of Stokes flow with symmetric stress approximations. Since the error control does not rely on the discrete equations, it applies to inexactly computed or otherwise perturbed solutions within the discrete spaces of the functional framework. Numerical illustrations show that the error control is practically feasible.
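Schematically, the two-sided control asserted above takes the familiar reliability/efficiency form, up to data-approximation (oscillation) terms; the exact norms and constants are as in the paper:

```latex
% Schematic reliability and efficiency bounds, where eta(u_h) is the
% locally computable residual norm and osc(f) the data oscillation:
\[
  \|u - u_h\| \;\le\; C_{\mathrm{rel}}\bigl(\eta(u_h) + \mathrm{osc}(f)\bigr),
  \qquad
  \eta(u_h) \;\le\; C_{\mathrm{eff}}\bigl(\|u - u_h\| + \mathrm{osc}(f)\bigr).
\]
```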
This paper considers a new generalization of the cumulative residual extropy (CRJ) introduced by Jahanshahi et al. [On cumulative residual extropy. Probab. Eng. Inf. Sci. 2019. DOI:10.1017/S0269964819000196], called the weighted cumulative residual extropy (WCRJ). This paper studies some properties of the WCRJ of continuous lifetime distributions. Several results, including various bounds, inequalities, and the effects of linear transformations, are obtained. The conditional WCRJ and some of its properties are discussed. Related studies in survival analysis are covered. We also propose an empirical version of the WCRJ to estimate this measure of uncertainty. Based on the asymptotic distribution of the empirical WCRJ, a new test statistic is given for testing the equality of two cumulative distribution functions. The power of the proposed test statistic is compared to other traditional and new competing approaches. Some simulations are carried out to show that the newly proposed method is more powerful than the others for moderate to large sample sizes.
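For orientation, the cumulative residual extropy of Jahanshahi et al. is usually written as below; the weighted analogue shown next to it, with the weight $x$ inside the integral, is the standard choice in this literature and is stated here as an assumption about the WCRJ's exact form ($\bar{F}$ is the survival function):

```latex
% CRJ of Jahanshahi et al., and a weighted analogue (assumed form):
\[
  \xi J(X) = -\frac{1}{2}\int_{0}^{\infty} \bar{F}^{2}(x)\,dx ,
  \qquad
  \xi J^{w}(X) = -\frac{1}{2}\int_{0}^{\infty} x\,\bar{F}^{2}(x)\,dx .
\]
```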
• Fractional Fourier transforms are introduced as sparsifying transforms.
• Linear canonical transforms are introduced as sparsifying transforms.
• Various approaches for compressing three-dimensional images are suggested.
Sparse recovery aims to reconstruct signals that are sparse in a linear transform domain from a heavily underdetermined set of measurements. The success of sparse recovery relies critically on knowledge of transform domains that give compressible representations of the signal of interest. Here we consider two- and three-dimensional images and investigate various multi-dimensional transforms in terms of the compressibility of the resultant coefficients. Specifically, we compare the fractional Fourier transform (FRT) and linear canonical transforms (LCT), which are generalized versions of the Fourier transform (FT), as well as the Hartley and simplified fractional Hartley transforms, which differ from the corresponding Fourier transforms in that they produce real outputs for real inputs. We also examine a cascade approach to improve transform-domain sparsity, where the Haar wavelet transform is applied following an initial Hartley transform. To compare the various methods, images are recovered from a subset of coefficients in the respective transform domains. The number of coefficients retained in the subset is varied systematically to examine the level of signal sparsity in each transform domain. Recovery performance is assessed via the structural similarity index (SSIM) and mean squared error (MSE) with reference to the original images. Our analyses show that the FRT and LCT yield the sparsest representations among the tested transforms, as indicated by the improved quality of the recovered images. Furthermore, the cascade approach improves transform-domain sparsity among techniques applied to small image patches.
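A minimal version of this retention experiment can be sketched as follows, with the 2-D FFT standing in for the FRT/LCT (whose implementations are not in the standard numerical libraries); the random array is only a stand-in for a test image:

```python
# Sketch of the coefficient-retention experiment: transform, keep
# the largest-magnitude coefficients, invert, and measure MSE.
# The 2-D FFT stands in for the FRT/LCT of the paper.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for a test image

coeffs = np.fft.fft2(image)
k = int(0.10 * coeffs.size)             # retain 10% of coefficients
thresh = np.sort(np.abs(coeffs).ravel())[-k]
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0)

recovered = np.real(np.fft.ifft2(sparse))
mse = np.mean((image - recovered) ** 2)
print("MSE at 10% retention:", mse)     # SSIM: skimage.metrics, if available
```

Sweeping the retention fraction and comparing MSE/SSIM curves across transforms reproduces, in miniature, the comparison protocol the abstract describes.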