The Savage-Dickey density ratio is a specific expression of the Bayes factor when testing a precise (equality constrained) hypothesis against an unrestricted alternative. The expression greatly simplifies the computation of the Bayes factor at the cost of assuming a specific form of the prior under the precise hypothesis as a function of the unrestricted prior. A generalization was proposed by Verdinelli and Wasserman such that the priors can be freely specified under both hypotheses while keeping the computational advantage. This article presents an extension of this generalization when the hypothesis has equality as well as order constraints on the parameters of interest. The methodology is used for a constrained multivariate t-test using the JZS Bayes factor and a constrained hypothesis test under the multinomial model.
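For context, the basic Savage-Dickey identity (stated here in generic notation, before the Verdinelli-Wasserman generalization or the order-constrained extension described above) tests H_0: θ = θ_0 nested within the unrestricted H_1 via

\[
BF_{01} \;=\; \frac{p(\theta = \theta_0 \mid y, H_1)}{p(\theta = \theta_0 \mid H_1)},
\]

that is, the ratio of the posterior to the prior density under H_1, evaluated at the constrained value. This is what makes the computation cheap: only the unrestricted model needs to be fitted.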
The need for automatic text summarization is natural: the huge volume of information available online has created widespread interest in extracting relevant information in a concise and understandable manner. In the proposed system, automated text summarization is treated as an extractive single-document summarization problem. To solve this problem, a particle swarm optimisation (PSO) algorithm-based approach is suggested, with the goal of producing good summaries in terms of content coverage, informativeness, and readability.
This paper introduces XSumm-PSO, a new approach for extractive summarization based on the PSO optimization technique trained in a supervised manner. Further, this paper also contributes a new feature, “incorrect word”, that captures misspelled words in candidate sentences. This feature is combined with nine existing features used by the proposed model to generate error-free summaries. As a result, the proposed XSumm-PSO framework achieves improvements of +2.7%, +0.8%, and +0.8% in ROUGE-1, ROUGE-2, and ROUGE-L scores, respectively, on the DUC 2002 dataset over state-of-the-art techniques. The corresponding improvements on the CNN/DailyMail dataset are +0.97%, +0.25%, and +0.49%.
We also performed a sample t-test, showing that the proposed approach is statistically consistent across various runs.
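To illustrate the general mechanism only (not the paper's feature set, training procedure, or tuned parameters), a minimal binary-PSO sentence selector might look like the following sketch; the unigram-overlap fitness is a crude stand-in for the ROUGE-1 objective, and every name and value here is an assumption.

```python
# Minimal sketch of binary-PSO sentence selection for extractive summarization.
# The unigram-overlap fitness is a crude stand-in for a ROUGE-1 objective; the
# XSumm-PSO feature set (including "incorrect word") is not reproduced, and all
# parameter values below are illustrative assumptions.
import math
import random

def unigram_overlap(candidate_tokens, reference_tokens):
    """Rough ROUGE-1-recall-style score between two token lists."""
    ref = set(reference_tokens)
    return len(set(candidate_tokens) & ref) / len(ref) if ref else 0.0

def pso_summarize(sentences, reference_tokens, max_sentences=3,
                  n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    n = len(sentences)

    def fitness(selection):
        chosen = [t for i, s in enumerate(sentences) if selection[i] for t in s.split()]
        over_length = max(0, sum(selection) - max_sentences)  # soft length penalty
        return unigram_overlap(chosen, reference_tokens) - 0.1 * over_length

    # Each particle is a binary selection vector with a real-valued velocity.
    positions = [[random.randint(0, 1) for _ in range(n)] for _ in range(n_particles)]
    velocities = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in positions]
    pbest_fit = [fitness(p) for p in positions]
    best = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[best][:], pbest_fit[best]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = random.random(), random.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest[d] - positions[i][d]))
                # Sigmoid transfer function maps the velocity to a bit-flip probability.
                positions[i][d] = 1 if random.random() < 1 / (1 + math.exp(-velocities[i][d])) else 0
            f = fitness(positions[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = positions[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = positions[i][:], f
    return [s for i, s in enumerate(sentences) if gbest[i]]
```

The sigmoid transfer step is the standard device for running PSO over a binary search space: the velocity is mapped to a flip probability rather than added to the position directly.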
•A PSO-based technique optimized in a supervised manner using ROUGE-1 is proposed.
•The suggested model solves a single-document extractive text summarization task.
•A new feature “incorrect word” is also introduced in this work.
•We evaluate our proposed model on the DUC-2002 and CNN/DailyMail benchmark datasets.
•The suggested model generalizes better and produces better accuracy than SOTA.
•We introduce a moderated t-statistic for performing group-level fMRI analysis.
•The approach helps alleviate problems related to small sample sizes.
•The approach outperforms several standard approaches.
•An R-package is introduced for application of the method to fMRI data.
In recent years, there has been significant criticism of functional magnetic resonance imaging (fMRI) studies with small sample sizes. The argument is that such studies have low statistical power, as well as a reduced likelihood that statistically significant results reflect true effects. The prevalence of these studies has led to a situation where a large number of published results are not replicable and likely false. Despite this growing body of evidence, small-sample fMRI studies continue to be performed regularly, likely because of the high cost of scanning. In this report, we investigate the use of a moderated t-statistic for performing group-level fMRI analysis to help alleviate problems related to small sample sizes. The proposed approach, implemented in the popular R-package LIMMA (linear models for microarray data), has found wide usage in the genomics literature for dealing with similar issues. Utilizing task-based fMRI data from the Human Connectome Project (HCP), we compare the performance of the moderated t-statistic with the standard t-statistic, as well as the pseudo t-statistic commonly used in non-parametric fMRI analysis. We find that the moderated t-test significantly outperforms both alternative approaches for studies with sample sizes of fewer than 40 subjects. Further, we find that the results are consistent with both voxel-based and cluster-based thresholding. We also introduce an R-package, LIMMI (linear models for medical images), that provides a quick and convenient way to apply the method to fMRI data.
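As a pointer to the mechanics (generic notation following Smyth's empirical-Bayes formulation in LIMMA, not this report's own symbols), the moderated t-statistic shrinks each voxel's residual variance toward a pooled prior value:

\[
\tilde t_g = \frac{\hat\beta_g}{\tilde s_g \sqrt{v_g}}, \qquad
\tilde s_g^2 = \frac{d_0 s_0^2 + d_g s_g^2}{d_0 + d_g},
\]

where \(s_g^2\) is the usual residual variance for voxel \(g\) with \(d_g\) degrees of freedom, \(v_g\) is the unscaled variance of the contrast estimate \(\hat\beta_g\), and the prior degrees of freedom \(d_0\) and prior variance \(s_0^2\) are estimated from all voxels by empirical Bayes. The extra \(d_0\) degrees of freedom are what stabilize the denominator in small samples.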
We study the solar flare index (SFI) for Solar Cycles 18 – 24. We find that the SFI has a deeper Gnevyshev gap (GG) in its first principal component than other atmospheric parameters. The GG is extremely clear, especially in the even cycles.
The GG of the SFI appears about half a year later as a drop in the interplanetary magnetic field (IMF) near the Earth and in the geomagnetic Ap-index. The instantaneous response of the magnetic field to solar flares, however, shows up about two to three days after the eruption as a high, sharp peak in the cross-correlation of the SFI and the Ap-index, and as a lower peak in the cross-correlation of the SFI and IMF B. We confirm these rapid responses using superposed-epoch analysis.
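A cross-correlation lag of this kind can in principle be estimated from the daily series alone; the sketch below (series names, normalization, and the lag window are assumptions for illustration, not the paper's procedure) returns the lag at which two daily indices correlate most strongly.

```python
# Minimal sketch: find the lag (in days) at which a daily activity index
# (e.g., SFI) best correlates with a daily geomagnetic index (e.g., Ap).
# Series names and the lag window are illustrative assumptions.
import numpy as np

def best_lag(x, y, max_lag=30):
    """Return (lag, correlation) maximizing the Pearson correlation of x leading y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corrs = []
    for lag in range(max_lag + 1):
        if lag == 0:
            corrs.append(np.corrcoef(x, y)[0, 1])
        else:
            corrs.append(np.corrcoef(x[:-lag], y[lag:])[0, 1])
    k = int(np.argmax(corrs))
    return k, corrs[k]
```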
The most active flare cycles during 1944 – 2020 are Cycles 19 and 21. Cycle 18 has as many very strong SFI days as Cycle 22, but it has the fewest nonzero SFI days in the whole interval. Interestingly, Cycle 20 is comparable to Cycles 23 and 24 in its low flare activity, although it lies between the most active SFI cycles.
Bayesian estimation for two groups provides complete distributions of credible values for the effect size, group means and their difference, standard deviations and their difference, and the normality of the data. The method handles outliers. The decision rule can accept the null value (unlike traditional t tests) when certainty in the estimate is high (unlike Bayesian model comparison using Bayes factors). The method also yields precise estimates of statistical power for various research goals. The software and programs are free and run on Macintosh, Windows, and Linux platforms.
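One common specification of such a two-group model (following Kruschke's BEST setup; the particular priors below are illustrative) is

\[
y_{ij} \sim t_\nu(\mu_j, \sigma_j), \quad j \in \{1, 2\}, \qquad
\mu_j \sim \mathrm{Normal}(M, S), \quad
\sigma_j \sim \mathrm{Uniform}(L, H), \quad
\nu - 1 \sim \mathrm{Exponential}(\text{mean} = 29),
\]

so that the posterior directly yields credible distributions for \(\mu_1 - \mu_2\), \(\sigma_1 - \sigma_2\), the effect size \((\mu_1 - \mu_2)/\sqrt{(\sigma_1^2 + \sigma_2^2)/2}\), and the normality parameter \(\nu\), whose low values accommodate outliers via heavy tails.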
To improve the competitiveness of air transport, this study proposed a method for devising a differentiated comfort enhancement strategy and compared the comfort of aircraft and high‐speed trains ...(HST). Structural equation modeling (SEM) was used to analyze the similarities and differences in the factors influencing comfort in aircraft cabins and HST compartments. The results showed that comfort in wide‐body aircraft cabins was mainly influenced by “food & beverage” and “personal in‐flight entertainment” (personal IFE), whereas comfort in narrow‐body aircraft and HSTs was mainly influenced by the “passenger interface.” The results also showed no significant difference in the overall comfort evaluation of the passenger interface between narrow‐body aircraft and HSTs. Furthermore, aircraft passenger ratings were significantly lower than those of HST passengers with regard to “spaciousness,” “seat comfort,” and “seat and cabin and compartment aesthetics.” The study results suggest that given competition from HSTs, airlines should use different comfort enhancement strategies for wide‐body and narrow‐body aircraft cabins.
Load event detection is fundamental to event-based nonintrusive load monitoring (NILM) solutions. It is directly related to whether the transient and steady-state signatures of appliances can be accurately extracted. In the real world, the load composition is random and complicated, the power consumption patterns of appliances are diverse, and multiple events may occur simultaneously or close to each other; all of this makes it difficult for any single event detection method to achieve satisfactory robustness. To cope with this, an adaptive two-stage event detection method is proposed in this article. First, an adaptive detection threshold, whose value is adjusted according to load fluctuations, is adopted, thus improving the ability to detect events of different amplitudes. Then, considering the different geometric features of events, an improved edge detection method and a window-based detection method combining a moving average with a moving t-test are proposed for step-like events and long-transient events, respectively; they are executed consecutively to capture the complete transient process of appliances. Furthermore, an event separation step is taken to locate and separate the near-simultaneous events that often appear in low-frequency data. Specific event detection performance evaluation metrics are also designed. Comparison tests on private and public datasets show that the proposed method achieves high detection accuracy for different events of various appliances and maintains strong robustness in different operation scenarios.
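As one way to picture the window-based stage (illustrative only: the window length, threshold, and Welch form of the statistic are assumptions, not the paper's tuned design), a moving two-sample t-test over the aggregate power signal can flag candidate events where adjacent windows have significantly different means.

```python
# Hedged sketch: a window-based change detector combining a moving average with
# a moving two-sample t-test over an aggregate power signal. Window length,
# threshold, and minimum event spacing are illustrative assumptions.
import numpy as np

def moving_ttest_events(power, win=10, t_thresh=5.0, min_gap=10):
    """Return sample indices where the pre- and post-window means differ significantly."""
    power = np.asarray(power, dtype=float)
    events, last = [], -min_gap
    for i in range(win, len(power) - win):
        pre, post = power[i - win:i], power[i:i + win]
        # Welch's t-statistic between the two adjacent windows.
        se = np.sqrt(pre.var(ddof=1) / win + post.var(ddof=1) / win)
        if se == 0:
            continue
        t = abs(post.mean() - pre.mean()) / se
        if t > t_thresh and i - last >= min_gap:
            events.append(i)
            last = i
    return events
```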