In many experiments, and especially in translational and preclinical research, sample sizes are (very) small. In addition, data designs are often high dimensional, i.e. more dependent replications than independent replications of the trial are observed. The present paper discusses the applicability of max t-test-type statistics (multiple contrast tests) in high-dimensional designs (repeated measures or multivariate) with small sample sizes. A randomization-based approach is developed to approximate the distribution of the maximum statistic. Extensive simulation studies confirm that the new method is particularly suitable for analyzing data sets with small sample sizes. A real data set illustrates the application of the methods.
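The randomization idea can be illustrated with a minimal sign-flipping sketch, assuming one-sample multivariate data; this is not the paper's exact procedure (the multiple-contrast machinery is omitted), only the core idea of approximating the null distribution of the maximum t-statistic:

```python
import numpy as np

def max_t_randomization(data, n_rand=2000, seed=0):
    """Approximate the null distribution of the maximum absolute
    one-sample t-statistic across d endpoints by random sign flips.

    data : (n, d) array; n subjects, d (possibly d > n) endpoints.
    Returns the observed max |t| and a randomization p-value.
    """
    rng = np.random.default_rng(seed)
    n, d = data.shape

    def max_abs_t(x):
        m = x.mean(axis=0)
        s = x.std(axis=0, ddof=1)
        return np.max(np.abs(m) / (s / np.sqrt(n)))

    t_obs = max_abs_t(data)
    count = 0
    for _ in range(n_rand):
        # one sign per subject, shared across endpoints, preserves
        # the dependence structure between endpoints
        signs = rng.choice([-1.0, 1.0], size=(n, 1))
        if max_abs_t(data * signs) >= t_obs:
            count += 1
    return t_obs, (count + 1) / (n_rand + 1)
```

Because the same sign is applied to all endpoints of a subject, the correlation between endpoints is retained under the randomization, which is what makes the max-statistic approach workable when d exceeds n.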
Rhubarb is one of the most popular traditional Chinese medicines and has been used for thousands of years in many Asian countries. Prepared rhubarb is obtained by steaming raw rhubarb with glutinous rice wine until it turns black both inside and outside. After processing, the therapeutic effects of prepared rhubarb change considerably. To determine the exact changes in the chemical profile of a rhubarb decoction after processing, and to clarify the material basis of the changed therapeutic effects, an ultra‐high performance liquid chromatography with quadrupole time‐of‐flight mass spectrometry method coupled with automated data analysis software and a statistical strategy was developed. As a result, 63 peaks in raw rhubarb and 54 peaks in prepared rhubarb were detected, and a total of 45 chemical compounds were identified. The analysis data were subjected to a principal component analysis and a t‐test. Based on the results, 16 peaks were found to be the main contributors to the significant difference (p < 0.05) between raw and prepared rhubarb. Compared with raw rhubarb, the content of 15 components in prepared rhubarb was lower, while only rhein (1,8‐dihydroxy‐3‐carboxy anthraquinone) showed a higher intensity.
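The t-test screening step of such a strategy can be sketched as follows (a minimal illustration, assuming per-peak intensity matrices for the two groups; the study's actual pipeline also includes PCA and automated peak alignment, which are not shown):

```python
import numpy as np
from scipy import stats

def screen_peaks(raw, prepared, alpha=0.05):
    """Flag peaks whose mean intensity differs between raw and
    prepared samples, using a Welch two-sample t-test per peak.

    raw, prepared : (n_samples, n_peaks) intensity matrices.
    Returns indices of peaks with p < alpha.
    """
    # axis=0 runs one t-test per peak column; equal_var=False (Welch)
    # avoids assuming equal variances between the two preparations
    t, p = stats.ttest_ind(raw, prepared, axis=0, equal_var=False)
    return np.where(p < alpha)[0]
```

In practice the p-values from such a per-peak screen would usually be corrected for multiple testing before declaring peaks significant.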
Only a few studies have investigated the long-term health-related quality of life (HRQoL) in patients with an idiopathic scoliosis. The aim of this study was to investigate the overall HRQoL and employment status of patients with an idiopathic scoliosis 40 years after diagnosis, to compare it with that of the normal population, and to identify possible predictors for a better long-term HRQoL.
We reviewed the full medical records and radiological reports of patients referred to our hospital with a scoliosis of childhood between April 1972 and April 1982. Of 129 eligible patients with a juvenile or adolescent idiopathic scoliosis, 91 took part in the study (71%). They were evaluated with full-spine radiographs and HRQoL questionnaires and compared with normative data. We compared the HRQoL between observation (n = 27), bracing (n = 46), and surgical treatment (n = 18), and between thoracic and thoracolumbar/lumbar (TL/L) curves.
The mean time to follow-up was 40.8 years (SD 2.6) and the mean age of patients was 54.0 years (SD 2.7). Of the 91 patients, 86 were female (95%) and 51 had a main thoracic curve (53%). We found a significantly lower HRQoL measured on all the Scoliosis Research Society 22r instrument (SRS-22r) subdomains (p < 0.001) with the exception of mental health, than in an age-matched normal population. Incapacity to work was more prevalent in scoliosis patients (21%) than in the normal population (11%). The median SRS-22r subscore was 4.0 (interquartile range (IQR) 3.3 to 4.4) for TL/L curves and 4.1 (IQR 3.8 to 4.4) for thoracic curves (p = 0.300). We found a significantly lower self-image score for braced (median 3.6 (IQR 3.0 to 4.0)) and surgically treated patients (median 3.6 (IQR 3.2 to 4.3)) than for those treated by observation (median 4.0 (IQR 4.1 to 4.8); p = 0.010), but no statistically significant differences were found for the remaining subdomains.
In this long-term follow-up study, we found a significantly decreased HRQoL and capacity to work in patients with an idiopathic scoliosis 40 years after diagnosis. Cite this article: 2023;105-B(2):166-171.
Summary
Despite being among the cancers with the highest worldwide incidence, oral cancer remains under-researched. Studies on computer‐aided analysis of pathological slides of oral cancer contribute substantially to the diagnosis and treatment of the disease. Some research in this direction has been carried out on oral submucous fibrosis. In this work, an approach for analysing abnormality based on textural features present in squamous cell carcinoma histological slides is considered. Histogram and grey‐level co‐occurrence matrix approaches are used to extract textural features from biopsy images with normal and malignant cells. Further, a linear support vector machine classifier is used for automated diagnosis of oral cancer, which gives 100% accuracy.
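The grey-level co-occurrence matrix (GLCM) feature-extraction step can be sketched in plain NumPy (a minimal single-offset illustration, not the study's implementation; the classifier stage is omitted):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix (GLCM) textural features.

    img : 2-D integer array of grey levels in [0, levels).
    Returns (contrast, energy, homogeneity) for the offset (dy, dx).
    """
    glcm = np.zeros((levels, levels))
    # pair each pixel with its neighbour at the given offset
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    for gi, gj in zip(a.ravel(), b.ravel()):
        glcm[gi, gj] += 1
    glcm /= glcm.sum()  # normalize to joint probabilities

    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity
```

Feature vectors like these, computed per image patch, would then be fed to a linear SVM for the normal-vs-malignant decision.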
This study compares the original pair‐matching osteometric sorting model (J Forensic Sci 2003;48:717) against two new models, providing validation and performance testing across three samples. The samples include the Forensic Data Bank, USS Oklahoma, and the osteometric sorting reference used within the Defense POW/MIA Accounting Agency. A computer science solution to generating dynamic statistical models across a commingled assemblage is presented. The issue of normality is investigated, showing relative robustness against non‐normality, and a data transformation to control for normality is presented. A case study shows the relative exclusion power of all three models on an active commingled case within the Defense POW/MIA Accounting Agency. In total, 14,357,220 osteometric t‐tests were conducted. The results indicate that osteometric sorting performs as expected despite reference samples deviating from normality. The two new models outperform the original, and one of those is recommended to supersede the original for future osteometric sorting work.
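The t-test at the heart of pair-matching osteometric sorting can be sketched as follows (an illustration of the general form of the original 2003 model; the published model's exact measurement sums and standardization may differ):

```python
import numpy as np
from scipy import stats

def pair_match_t(d_candidate, d_reference):
    """t-test of a candidate pair's measurement difference against
    a reference sample of differences from documented matched pairs.

    d_candidate : summed-measurement difference for the pair in question.
    d_reference : 1-D array of differences from known matched pairs.
    Returns (t, two-sided p); a small p supports excluding the match.
    """
    n = len(d_reference)
    mean, sd = d_reference.mean(), d_reference.std(ddof=1)
    # sqrt(1 + 1/n) inflates the SE for predicting a new observation
    t = (d_candidate - mean) / (sd * np.sqrt(1 + 1.0 / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p
```

Running this over every candidate pairing of bones in an assemblage is what produces test counts on the scale of the millions of t-tests reported above.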
The effects of Open Access (OA) upon journal performance are investigated. The key research question is: how do the citation impact and publication output of journals switching ("flipping") from non-OA to Gold-OA develop after their switch to Gold-OA? A review is given of the literature, with an emphasis on studies dealing with flipping journals. Two study sets with 119 and 100 flipping journals, derived from two different OA data sources (DOAJ and OAD), are compared with two control groups, one based on a standard bibliometric criterion, and a second controlling for a journal's national orientation. Comparing post-switch indicators with pre-switch ones in paired t-tests, evidence was obtained of an OA citation advantage but not of an OA publication advantage. Shifts in the affiliation countries of publishing and citing authors are characterized in terms of countries' income class and geographical world region. Suggestions are made for qualitative follow-up studies to obtain more insight into OA flipping or reverse-flipping.
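The paired pre/post comparison described above reduces to a standard paired t-test per indicator (a minimal sketch with illustrative inputs, not the study's code):

```python
import numpy as np
from scipy import stats

def flip_effect(pre, post):
    """Paired t-test of a per-journal indicator before vs. after
    the flip to Gold-OA.

    pre, post : arrays of the indicator for the same journals,
    in the same order.
    Returns (mean difference post - pre, t, two-sided p).
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    t, p = stats.ttest_rel(post, pre)
    return (post - pre).mean(), t, p
```

The pairing matters here: each journal serves as its own control, so between-journal differences in baseline citation level do not inflate the error term.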
Community surveys have been widely used to investigate local residents' perceptions and behaviors related to natural resource issues. However, most existing community survey research relies on cross-sectional data and is thus unable to capture the temporal dynamics of community processes. Longitudinal analysis has received increasing interest in recent natural resource social science literature. Trend and panel studies are two typical approaches in longitudinal community survey research. Due to limited sampling frames, research design, and respondent attrition, longitudinal community surveys often involve both paired and independent observations across different survey waves. Using previous survey data on community responses to forest insect disturbance in Alaska as an example, this research note shows that the corrected z-test is a more appropriate approach to analyze partially correlated longitudinal data than conventional statistical techniques such as the paired and independent t-tests.
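A sketch of the corrected z-statistic for partially paired two-wave data, following the form commonly attributed to Looney and Jones (2003); the note's exact variance estimator may differ, so treat this as an illustration:

```python
import numpy as np
from scipy import stats

def corrected_z(x_paired, y_paired, x_only, y_only):
    """Corrected z-test for partially paired two-wave survey data.

    All observations from each wave enter the means and variances,
    and the covariance of the paired subset corrects the standard
    error for the correlation between waves.
    """
    x = np.concatenate([x_paired, x_only])   # all wave-1 responses
    y = np.concatenate([y_paired, y_only])   # all wave-2 responses
    n1, n2, n12 = len(x), len(y), len(x_paired)
    s12 = np.cov(x_paired, y_paired, ddof=1)[0, 1]
    se = np.sqrt(x.var(ddof=1) / n1 + y.var(ddof=1) / n2
                 - 2.0 * n12 * s12 / (n1 * n2))
    z = (x.mean() - y.mean()) / se
    return z, 2 * stats.norm.sf(abs(z))
```

Unlike a paired t-test (which discards the unpaired respondents) or an independent t-test (which ignores the pairing), this statistic uses all observations while crediting the positive correlation of the paired subset.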
Informed Bayesian t-Tests. Gronau, Quentin F.; Ly, Alexander; Wagenmakers, Eric-Jan. The American Statistician, 04/2020, Volume 74, Issue 2. Journal article; peer reviewed; open access.
Across the empirical sciences, few statistical procedures rival the popularity of the frequentist t-test. In contrast, the Bayesian versions of the t-test have languished in obscurity. In recent years, however, the theoretical and practical advantages of the Bayesian t-test have become increasingly apparent, and various Bayesian t-tests have been proposed, both objective ones (based on general desiderata) and subjective ones (based on expert knowledge). Here, we propose a flexible t-prior for the standardized effect size that allows computation of the Bayes factor by evaluating a single numerical integral. This specification contains previous objective and subjective t-test Bayes factors as special cases. Furthermore, we propose two measures for informed prior distributions that quantify the departure from the objective Bayes factor desiderata of predictive matching and information consistency. We illustrate the use of informed prior distributions based on an expert prior elicitation effort.
Supplementary materials for this article are available online.
In this article, I introduce a new package with five commands to perform econometric convergence analysis and club clustering as proposed by Phillips and Sul (2007, Econometrica 75: 1771–1855). The logtreg command performs the log t regression test. The psecta command implements the clustering algorithm to identify convergence clubs. The scheckmerge command conducts the log t regression test for all pairs of adjacent clubs. The imergeclub command tries to iteratively merge adjacent clubs. The pfilter command extracts the trend and cyclical components of a time series of each individual in panel data. I provide an example from Phillips and Sul (2009, Journal of Applied Econometrics 24: 1153–1185) to illustrate the use of these commands. Additionally, I use Monte Carlo simulations to exemplify the effectiveness of the clustering algorithm.
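The log t regression test underlying these commands can be sketched as follows (a simplified illustration of the Phillips–Sul construction; plain OLS standard errors are used here, whereas the published test uses a HAC estimator, and the trend-filtering step is omitted):

```python
import numpy as np

def log_t_test(X, r=0.3):
    """Sketch of the Phillips-Sul (2007) log t regression test.

    X : (N, T) panel of a positive variable for N units over T periods.
    Regresses log(H_1/H_t) - 2*log(log t) on log t over the last
    (1 - r) fraction of the sample; returns (b_hat, t_stat).
    Convergence is rejected (one-sided) when t_stat < -1.65.
    """
    N, T = X.shape
    h = N * X / X.sum(axis=0)                 # relative transition paths
    H = ((h - 1.0) ** 2).mean(axis=0)         # cross-sectional variance
    ts = np.arange(int(np.floor(r * T)), T + 1)   # t = [rT], ..., T
    y = np.log(H[0] / H[ts - 1]) - 2.0 * np.log(np.log(ts))
    x = np.log(ts)

    A = np.column_stack([np.ones_like(x), x])
    beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    s2 = resid @ resid / (len(y) - 2)
    se_b = np.sqrt(s2 * np.linalg.inv(A.T @ A)[1, 1])
    return beta[1], beta[1] / se_b
```

Under convergence the cross-sectional variance H_t shrinks, the dependent variable trends upward in log t, and the slope estimate stays above the rejection threshold; under divergence H_t grows and the t-statistic becomes strongly negative.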