Microstructures are critical to the physical properties of materials. Stochastic microstructures are commonly observed in many kinds of materials (e.g., composite polymers, multiphase alloys, and ceramics), and traditional descriptor-based image analysis of them can be challenging. In this paper, we introduce a powerful and versatile score-based framework for analyzing nonstationarity in stochastic materials microstructures. The framework involves training a parametric supervised learning model to predict a pixel value from neighboring pixels in images of microstructures (also known as micrographs); this predictive model provides an implicit characterization of the stochastic nature of the microstructure. The basis for our approach is the Fisher score vector, defined as the gradient of the log-likelihood with respect to the parameters of the predictive model, evaluated at each micrograph pixel. A fundamental property of the score vector is that it has zero mean if the predictive relationship in the vicinity of a pixel remains unchanged, which we equate with the local stochastic nature of the microstructure remaining unchanged. Conversely, if the local stochastic nature changes, then the mean of the score vector generally differs from zero. In light of this, our framework analyzes how the local mean of the score vector varies across one or more image samples to: (1) monitor for nonstationarity by indicating whether new samples are statistically different from reference samples and where they may differ, and (2) diagnose nonstationarity by identifying the distinct types of stochastic microstructures present over a set of samples and labeling the corresponding regions accordingly. Unlike feature-based methods, our approach is almost completely general and requires no prior knowledge of the nature of the nonstationarities or of the microstructure itself.
Using a number of real and simulated micrographs, including polymer composites and multiphase alloys, we demonstrate the power and versatility of the approach.
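The zero-mean property at the heart of this framework can be illustrated with a toy one-parameter pixel predictor. Everything below is an illustrative assumption, not the paper's actual model: a pixel value y is predicted from a single neighbor x via a linear Gaussian model y ~ N(theta*x, 1), and the score of theta is averaged over a region. Where the data follow the fitted relationship the mean score is near zero; where the relationship has drifted it is not.

```python
import random

def score(theta, x, y, sigma2=1.0):
    # Fisher score of the one-parameter Gaussian predictor y ~ N(theta*x, sigma2):
    # d/dtheta log p(y | x) = (y - theta*x) * x / sigma2
    return (y - theta * x) * x / sigma2

random.seed(0)
theta_hat = 0.8  # parameter "fitted" on reference micrographs (assumed value)

# Region whose stochastic structure matches the reference model:
stationary = [(x, 0.8 * x + random.gauss(0, 1))
              for x in (random.gauss(0, 1) for _ in range(5000))]
# Region where the local predictive relationship has drifted:
shifted = [(x, 1.3 * x + random.gauss(0, 1))
           for x in (random.gauss(0, 1) for _ in range(5000))]

mean_s0 = sum(score(theta_hat, x, y) for x, y in stationary) / len(stationary)
mean_s1 = sum(score(theta_hat, x, y) for x, y in shifted) / len(shifted)
```

The local mean score acts as a change detector: `mean_s0` hovers near zero while `mean_s1` does not, which is exactly the signal the framework maps across a micrograph.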
Brain tumors are among the deadliest diseases today. A tumor is a cluster of abnormal cells grouped in the inner portion of the human brain. It affects the brain by squeezing and damaging healthy tissue, and it raises intracranial pressure; as a result, tumor cell growth accelerates rapidly, which may lead to death. It is therefore desirable to diagnose and detect brain tumors at an early stage, which may increase the patient survival rate. The major objective of this research work is to present a new technique for tumor detection. The proposed architecture accurately segments and classifies benign and malignant tumor cases. Different spatial-domain methods are applied to enhance and accurately segment the input images. Moreover, AlexNet and GoogLeNet are utilized for classification, from which two score vectors are obtained after the softmax layer. Both score vectors are then fused and supplied to multiple classifiers along with the softmax layer. The proposed model is evaluated on top medical image computing and computer-assisted intervention (MICCAI) challenge datasets, namely multimodal brain tumor segmentation (BRATS) 2013, 2014, 2015, and 2016 and ischemic stroke lesion segmentation (ISLES) 2018.
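The score-vector fusion step can be sketched as follows. The two-class logits and the element-wise averaging rule are illustrative assumptions (the abstract does not specify the exact fusion operator); the point is only that each network emits a softmax score vector and the fused vector feeds the downstream classifiers.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(score_a, score_b):
    # Element-wise average of the two networks' softmax score vectors;
    # averaging is one common fusion choice, assumed here for illustration.
    return [(a + b) / 2 for a, b in zip(score_a, score_b)]

# Hypothetical logits from the two networks for one input image:
fused = fuse(softmax([2.0, 0.5]), softmax([1.0, 1.5]))
```

The fused vector is again a probability vector (its entries sum to one), so it can be passed to any classifier expecting class scores.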
This paper considers the online detection problem for a parameter change in the linear regression model. A procedure based on the efficient score vector is proposed. Under the null hypothesis, the detector sequence is proved to converge to a Brownian motion. Under the alternative hypothesis, taking the coefficient as an example, the detector sequence is proved to converge to a Wiener process with a drift term. The simulation results demonstrate the empirical level, empirical power, and stopping time of the procedure, and further indicate the efficiency of our approach.
•Monitoring procedures for changes in the linear regression model are proposed.
•The limiting distributions are obtained under the null and alternative hypotheses.
•A real example is given to demonstrate the applicability of the method.
The Choquet integral is a powerful aggregation function, especially for merging finite real inputs. In real life, however, many inputs exist in a continuum, e.g., the Riemann-integrable functions. The standard Choquet integral formulas cannot accommodate such inputs. This study proposes a new expression that enables merging Riemann-integrable inputs using a discrete Choquet integral. Relevant properties arising therein are discussed. A few application domains are identified, including time-dependent multicriteria decision aid and dynamic fuzzy cooperative games.
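For readers unfamiliar with the discrete case the paper builds on, here is a minimal sketch of the standard discrete Choquet integral over finite inputs. The two-criterion capacity is a toy example; any monotone set function with mu(empty) = 0 and mu(full set) = 1 works.

```python
def choquet(x, mu):
    """Discrete Choquet integral of inputs x = [x_1, ..., x_n] w.r.t. a
    capacity mu, given as a dict from frozensets of criterion indices to
    [0, 1]."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])  # sort criteria by input value
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])          # criteria with x_j >= x_i
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

# Toy capacity on two criteria (hypothetical numbers):
mu = {
    frozenset(): 0.0,
    frozenset({0}): 0.3,
    frozenset({1}): 0.5,
    frozenset({0, 1}): 1.0,
}
```

With an additive capacity this reduces to a weighted mean; a non-additive capacity, as above, lets the aggregation reward or penalize coalitions of criteria, which is what the continuum extension must preserve.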
Image classification is a multi-class problem that is usually tackled with ensembles of binary classifiers. One of the most important challenges in this field is to find a set of highly discriminative image features that yields good classification performance. In this work we propose weighted ensembles as a method for feature combination. First, a set of binary classifiers is trained on one set of features; then, the scores are weighted with distances obtained from another set of feature vectors. We present two different approaches to weighting the score vector: (1) directly multiplying each score by the weights, and (2) fusing the score values and the distances through a neural network. The experiments show that the proposed methodology improves the classification accuracy of simple ensembles and, moreover, achieves classification accuracy similar to state-of-the-art methods while using far fewer parameters.
•Feature combination with weighted ensembles avoids multi-class ensemble weaknesses.
•Extension of distance-based combination, based on Dynamic Classifier Weighting.
•Weights can be obtained from experimentally derived expressions or learned.
•Our methodology improves the results w.r.t. other feature combination methods.
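Approach (1), multiplying scores by distance-derived weights, can be sketched as below. The inverse-distance weighting rule and all numbers are illustrative assumptions, not the expressions the paper obtains experimentally.

```python
def weight_scores(scores, dists):
    # Turn distances in a second feature space into weights (smaller
    # distance to a class reference -> larger weight) and rescale each
    # binary classifier's score. Inverse-distance weighting is an
    # assumed, illustrative choice.
    weights = [1.0 / (1.0 + d) for d in dists]
    return [s * w for s, w in zip(scores, weights)]

scores = [0.6, 0.55, 0.2]   # one score per binary classifier (hypothetical)
dists = [2.0, 0.1, 1.0]     # distances from the second feature set (hypothetical)

weighted = weight_scores(scores, dists)
winner = max(range(len(weighted)), key=weighted.__getitem__)
```

Here the raw scores favor class 0, but the second feature set places the sample much closer to class 1's reference, so the weighted decision flips, which is the behavior the weighting is designed to enable.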
Autoregressive time series models of order p have p + 2 parameters: the mean, the variance of the white noise, and the p autoregressive parameters. A change in any of these over time is a sign of disturbance that is important to detect. The methods of this paper can test for change in any one of these p + 2 parameters separately, or in any collection of them. They are available in forms that make one-sided tests possible; furthermore, they can be used to test for a temporary change. The test statistics are based on the efficient score vector. The large-sample properties of the change-point estimator are also explored.
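To make the efficient-score idea concrete, here is a minimal sketch for a single parameter of the simplest case, an AR(1) model with a possible change in the mean. The model, the score formula for the mean, and the CUSUM-style statistic are standard textbook forms assumed for illustration, not the paper's exact test statistics.

```python
import random

def ar1_mean_score(y, mu, phi, sigma2=1.0):
    # Efficient score for the mean of an AR(1) model
    #   y_t - mu = phi * (y_{t-1} - mu) + e_t:
    # the contribution at time t is (1 - phi) * e_t / sigma2.
    return [(1 - phi) * ((y[t] - mu) - phi * (y[t - 1] - mu)) / sigma2
            for t in range(1, len(y))]

def cusum_stat(s):
    # Max absolute partial sum of the scores, normalized by sd * sqrt(n).
    n = len(s)
    sd = (sum(v * v for v in s) / n) ** 0.5
    partial, best = 0.0, 0.0
    for v in s:
        partial += v
        best = max(best, abs(partial))
    return best / (sd * n ** 0.5)

random.seed(1)
phi = 0.5

def simulate(n, mu):
    y, prev = [], 0.0
    for _ in range(n):
        prev = mu + phi * (prev - mu) + random.gauss(0, 1)
        y.append(prev)
    return y

stable = simulate(600, 0.0)
changed = simulate(300, 0.0) + simulate(300, 2.0)  # mean shifts halfway

stat_stable = cusum_stat(ar1_mean_score(stable, 0.0, phi))
stat_changed = cusum_stat(ar1_mean_score(changed, 0.0, phi))
```

Under no change the scores are zero-mean and the statistic stays moderate; a mean shift makes the score partial sums drift and the statistic explode, which is the mechanism behind score-based change tests for any of the p + 2 parameters.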
A recursive formula for computing the exact value of the score vector is proposed for a general form of the linear Gaussian state space model; exact values are more desirable than approximate values in some statistical analyses. Unlike most extant methods, our formula calculates all components of the score vector simultaneously. This approach significantly simplifies its programming, in particular with matrix-oriented programming languages such as MATLAB. We also consider a way of handling initial conditions that depend on unknown parameters. This issue has not yet been explicitly addressed in the existing literature in the context of exact score computation for a case as general as the one we consider in this paper. It is also shown that our formula is especially useful for calculating score tests with an outer-product-of-gradient asymptotic covariance matrix estimator.
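To show what quantity is being computed, here is a sketch for a 1-D local level model: the Kalman-filter log-likelihood, differentiated by central finite differences. This is precisely the *approximate* score the paper's exact recursion is designed to replace; the model, initialization, and step size are all illustrative assumptions.

```python
import math

def local_level_loglik(y, params):
    # Kalman-filter log-likelihood of the local level model
    #   y_t = a_t + eps_t,  a_{t+1} = a_t + eta_t,
    # with params = (observation variance, state variance).
    var_eps, var_eta = params
    a, p = y[0], 10.0 * (var_eps + var_eta)  # crude diffuse-ish start (assumed)
    ll = 0.0
    for t in range(1, len(y)):
        f = p + var_eps                      # one-step prediction variance
        v = y[t] - a                         # one-step prediction error
        ll += -0.5 * (math.log(2 * math.pi * f) + v * v / f)
        k = p / f                            # Kalman gain
        a += k * v                           # filtered (= predicted) state
        p = p * (1 - k) + var_eta            # next prediction variance
    return ll

def score_fd(y, params, h=1e-6):
    # Finite-difference score vector: one likelihood pair per parameter.
    # The paper's recursion instead yields the exact gradient, for all
    # components simultaneously, in a single filter pass.
    grad = []
    for i in range(len(params)):
        up = list(params); up[i] += h
        dn = list(params); dn[i] -= h
        grad.append((local_level_loglik(y, up) - local_level_loglik(y, dn)) / (2 * h))
    return grad

y = [1.0, 1.2, 0.9, 1.4, 1.1, 1.3]
g = score_fd(y, [1.0, 0.5])
```

Note the cost and accuracy problem this exposes: finite differences need 2 likelihood evaluations per parameter and inherit truncation error, which is what motivates an exact simultaneous recursion.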
This study considers the residual-based CUSUM test for location-scale time series models with heteroscedasticity. Estimate- and score vector-based CUSUM tests are widely used for detecting abrupt changes in time series models. However, their performance is often unsatisfactory, with severe size distortions, when the underlying model is complicated and the sample size is small. To circumvent this defect, the residual-based CUSUM test has been suggested as an alternative. However, this test can only detect scale parameter changes and suffers severe power loss against location parameter changes. To remedy this, we introduce a modified residual-based CUSUM test and demonstrate its validity for both location and scale parameter changes. We conduct a simulation study and a data analysis for illustration.
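The location-versus-scale distinction can be sketched with a generic CUSUM statistic applied to raw residuals (sensitive to location changes) and squared residuals (sensitive to scale changes). Taking the larger of the two is a simplified stand-in for the paper's modified test, not its exact form; the change magnitudes are illustrative.

```python
import random

def cusum(z):
    # Normalized CUSUM: max absolute centered partial sum / (sd * sqrt(n)).
    n = len(z)
    zbar = sum(z) / n
    sd = (sum((v - zbar) ** 2 for v in z) / n) ** 0.5
    partial, best = 0.0, 0.0
    for v in z:
        partial += v - zbar
        best = max(best, abs(partial))
    return best / (sd * n ** 0.5)

def modified_stat(res):
    # Monitor both raw and squared residuals, reacting to whichever
    # channel (location or scale) shows the stronger drift.
    return max(cusum(res), cusum([r * r for r in res]))

random.seed(7)
stable = [random.gauss(0, 1) for _ in range(800)]
shifted = [random.gauss(0, 1) for _ in range(400)] + \
          [random.gauss(1.5, 1) for _ in range(400)]   # location change
rescaled = [random.gauss(0, 1) for _ in range(400)] + \
           [random.gauss(0, 2) for _ in range(400)]    # scale change
```

A squared-residual monitor alone would be the classical residual-based test; adding the raw-residual channel is what restores power against location changes, which is the gap the paper's modification addresses.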
An assistive system for persons with vocal impairment due to dysarthria converts dysarthric speech to normal speech or text. Because of the articulatory deficits, dysarthric speech recognition needs a robust learning technique. Representation learning is significant for complex tasks such as dysarthric speech recognition. We focus on robust representations for dysarthric speech recognition, which involves recognizing sequential patterns in utterances of varying length. We propose a hybrid framework that combines a generative-learning-based data representation with a discriminative-learning-based classifier. In this hybrid framework, we propose to use Example-Specific Hidden Markov Models (ESHMMs) to obtain log-likelihood scores for a dysarthric speech utterance, forming a fixed-dimensional score vector representation. This representation is used as input to a discriminative classifier such as a support vector machine. The performance of the proposed approach is evaluated on the UA-Speech database. The recognition accuracy is much better than that of the conventional hidden Markov model approach and the Deep Neural Network-Hidden Markov Model (DNN-HMM). The efficiency of the discriminative nature of the score vector representation is demonstrated for "very low" intelligibility words.
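The key trick, mapping variable-length utterances to a fixed-dimensional score vector, can be sketched as follows. The paper scores each utterance against ESHMMs; to keep the sketch self-contained, per-class 1-D Gaussians stand in for the HMMs, and the model parameters are invented for illustration.

```python
import math

# Stand-in generative models (one per vocabulary word); the paper uses
# example-specific HMMs here instead. (mean, sd) values are hypothetical.
models = {"yes": (0.0, 1.0), "no": (3.0, 1.0)}

def log_likelihood(frames, mean, sd):
    # Sum of per-frame Gaussian log-densities for one model.
    return sum(-0.5 * math.log(2 * math.pi * sd * sd)
               - (v - mean) ** 2 / (2 * sd * sd) for v in frames)

def score_vector(utterance):
    # One log-likelihood per model: the representation has a fixed
    # dimension (here 2) regardless of utterance length, so it can feed
    # an SVM-style discriminative classifier directly.
    return [log_likelihood(utterance, m, s) for m, s in models.values()]

v_short = score_vector([0.1, -0.2])
v_long = score_vector([0.0, 0.3, -0.1, 0.2, 0.1])
```

Both vectors have the same dimension even though the utterances differ in length, which is what lets a fixed-input classifier sit on top of the generative scores.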
The Industrial Internet of Things (IIoT) has become very popular in recent years. However, IIoT remains an attractive and vulnerable target for attackers to exploit with different types of attacks. To confront this problem, the research community began exploring novel systems to protect the network. However, some of these systems require increasing levels of human interaction, which impacts their efficiency. Recently, machine learning techniques have gained much interest in security applications, as they exhibit fast processing capabilities with real-time predictions. One of the significant challenges in implementing these techniques is obtaining sufficient training data for each new potential attack category, which is usually unfeasible. Hence, these techniques may suffer from low detection rates for attacks with relatively little training data (minority classes). In this article, we propose a novel machine-learning-based algorithm to alleviate the class imbalance problem by computing an optimized weight for each machine-learning-based decision. In particular, a supervised machine learning algorithm is first used to classify the attack categories for each node. The decisions made by the machine learning classifier are then stored in a private database. A specially designed best-effort iterative weighted attack classification algorithm exploits this collected data to enhance the accuracy for rarely detectable attack types. For each class, the weight that maximizes the diagonal-to-maximum ratio of the confusion matrix is computed iteratively. Such an approach is shown to enhance the overall classification performance and detection accuracy, even for the rarely detectable classes. Both the UNSW and NSL-KDD datasets are used in this article to validate the proposed model and verify its efficiency in detecting intrusions.
The simulation results show that the proposed model can effectively detect intrusion attacks, with a higher detection rate and a lower false alarm rate than state-of-the-art techniques.
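The diagonal-to-maximum idea can be sketched as below. The interpretation (diagonal count of each class's confusion-matrix column divided by that column's maximum), the inverse-ratio boost, and the numbers are all assumptions made for illustration; the paper's algorithm searches the weights iteratively rather than applying a fixed boost.

```python
def diag_to_max_ratios(confusion):
    # confusion[i][j] = count of samples of actual class i predicted as j.
    # For each class j, the ratio of the correct count to the largest
    # count in column j; 1.0 means predictions of j are mostly right.
    ratios = []
    for j in range(len(confusion)):
        col = [row[j] for row in confusion]
        ratios.append(confusion[j][j] / max(col) if max(col) else 0.0)
    return ratios

def reweight(scores, ratios, alpha=1.0):
    # Boost classes whose ratio is low (rare, poorly detected classes).
    return [s * (1.0 + alpha * (1.0 - r)) for s, r in zip(scores, ratios)]

# Hypothetical 2-class confusion matrix with a poorly detected minority class:
confusion = [[80, 20],
             [5, 10]]
ratios = diag_to_max_ratios(confusion)          # class 1 is under-detected
weighted = reweight([0.5, 0.45], ratios)        # decision flips to class 1
```

On this toy matrix the minority class's ratio is 0.5, so its score is boosted and a borderline decision flips in its favor, mirroring how the optimized weights raise detection rates for minority attack classes.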