Bifactor measurement models are increasingly being applied to personality and psychopathology measures (Reise, 2012). In this work, authors generally have emphasized model fit, and their typical conclusion is that a bifactor model provides a superior fit relative to alternative subordinate models. Often unexplored, however, are important statistical indices that can substantially improve the psychometric analysis of a measure. We provide a review of the particularly valuable statistical indices one can derive from bifactor models, including omega reliability coefficients, factor determinacy, construct reliability, explained common variance, and the percentage of uncontaminated correlations. We describe how these indices can be calculated and used to inform (a) the quality of unit-weighted total and subscale score composites, as well as factor score estimates, and (b) the specification and quality of a measurement model in structural equation modeling.
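As a rough illustration (not taken from the article), most of the indices named above can be computed directly from a standardized bifactor loading matrix; the loadings below are invented for demonstration, and factor determinacy is omitted because it additionally requires the model-implied item correlation matrix.

```python
import numpy as np

# Hypothetical standardized bifactor loadings: 9 items, one general
# factor (column 0) and three group factors of 3 items each.
loadings = np.array([
    [.70, .40, .00, .00],
    [.65, .35, .00, .00],
    [.60, .45, .00, .00],
    [.55, .00, .50, .00],
    [.60, .00, .40, .00],
    [.50, .00, .45, .00],
    [.45, .00, .00, .55],
    [.55, .00, .00, .40],
    [.50, .00, .00, .50],
])

gen = loadings[:, 0]                      # general-factor loadings
grp = loadings[:, 1:]                     # group-factor loadings
err = 1.0 - (loadings ** 2).sum(axis=1)   # item uniquenesses

# Omega (total) and omega hierarchical for the total score
denom = gen.sum() ** 2 + (grp.sum(axis=0) ** 2).sum() + err.sum()
omega_total = (gen.sum() ** 2 + (grp.sum(axis=0) ** 2).sum()) / denom
omega_h = gen.sum() ** 2 / denom

# ECV: share of common variance explained by the general factor
ecv = (gen ** 2).sum() / (loadings ** 2).sum()

# PUC: proportion of item correlations uncontaminated by group
# factors, i.e., pairs of items that do not share a group factor
k = loadings.shape[0]
group_sizes = (grp != 0).sum(axis=0)
total_pairs = k * (k - 1) / 2
within_pairs = (group_sizes * (group_sizes - 1) / 2).sum()
puc = (total_pairs - within_pairs) / total_pairs

# Construct reliability (Hancock-Mueller H) for the general factor,
# treating all non-general variance as error
h = 1 / (1 + 1 / (gen ** 2 / (1 - gen ** 2)).sum())

print(f"omega={omega_total:.3f} omegaH={omega_h:.3f} "
      f"ECV={ecv:.3f} PUC={puc:.3f} H={h:.3f}")
```

High ECV and PUC together suggest a unit-weighted total score is dominated by the general factor, which is the kind of judgment these indices are meant to support.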
The Pearson product-moment correlation coefficient (rp) and the Spearman rank correlation coefficient (rs) are widely used in psychological research. We compare rp and rs on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes, we show that, for normally distributed variables, rp and rs have similar expected values, but rs is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, rp is more variable than rs. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, rp had lower variability than rs in the psychometric dataset. In the survey dataset with heavy-tailed variables in particular, rs had lower variability than rp and often corresponded more accurately to the population Pearson correlation coefficient (Rp) than rp did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing rs instead of rp. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of rs and rp. In conclusion, rp is suitable for light-tailed distributions, whereas rs is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research.
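A minimal re-creation of this kind of comparison (not the authors' code): repeatedly sample rp and rs and compare their standard deviations under light- and heavy-tailed bivariate data. The sample size, correlation, and the cubing trick used to induce kurtosis are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n=86, rho=0.3, reps=10_000, heavy_tails=False):
    """Return SD(rp) and SD(rs) over repeated samples."""
    rp, rs = np.empty(reps), np.empty(reps)
    cov = [[1, rho], [rho, 1]]
    for i in range(reps):
        xy = rng.multivariate_normal([0, 0], cov, size=n)
        if heavy_tails:
            # Cubing the margins raises kurtosis while preserving
            # rank order (an illustrative choice, not from the article)
            xy = xy ** 3
        rp[i] = stats.pearsonr(xy[:, 0], xy[:, 1])[0]
        rs[i] = stats.spearmanr(xy[:, 0], xy[:, 1])[0]
    return rp.std(), rs.std()

for heavy in (False, True):
    sd_p, sd_s = simulate(heavy_tails=heavy)
    print(f"heavy_tails={heavy}: SD(rp)={sd_p:.4f}  SD(rs)={sd_s:.4f}")
```

Under the normal condition SD(rs) tends to exceed SD(rp); with the cubed (high-kurtosis) margins the ordering reverses, matching the pattern the abstract reports.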
Antibodies against epitopes in S1 give the most accurate correlates of protection (CoP) against infection by the SARS-CoV-2 coronavirus. Measurement of those antibodies by either neutralization or binding assays has predictive value, with binding antibody titers giving the highest statistical correlation. However, antibodies have multiple protective functions, and functions other than neutralization also influence efficacy. Cellular responses play a discernible role as well: CD4+ T cells augment antibody responses, and CD8+ T cells help control viral replication, particularly in the presence of insufficient antibody. More information is needed on mucosal responses.
Braeken, Johan; van Assen, Marcel A. L. M. (2017). An Empirical Kaiser Criterion. Psychological Methods, 22(3). Journal article, peer-reviewed, open access.
In exploratory factor analysis (EFA), the most popular methods for dimensionality assessment, such as the scree plot, the Kaiser criterion, and parallel analysis (the current gold standard), are based on eigenvalues of the correlation matrix. To further the understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings with multiple scales that are desired to be relatively short but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method because it is based on the distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA.
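A sketch of the reference-eigenvalue logic behind the Empirical Kaiser Criterion as described by Braeken and van Assen; the code is a reconstruction, not the authors' implementation, and the simulated-data example uses invented loadings.

```python
import numpy as np

def empirical_kaiser_criterion(eigenvalues, n):
    """Number of factors to retain under the EKC.

    `eigenvalues` are the eigenvalues of the correlation matrix of
    p variables from n observations. Each reference value is the
    Marchenko-Pastur upper bound, rescaled by the variance left
    unexplained by earlier factors and floored at 1 (the classical
    Kaiser criterion); retention stops at the first eigenvalue that
    fails to exceed its reference.
    """
    l = np.sort(np.asarray(eigenvalues, dtype=float))[::-1]
    p = l.size
    mp_bound = (1 + np.sqrt(p / n)) ** 2
    retained = 0
    for j in range(p):
        ref = max(((p - l[:j].sum()) / (p - j)) * mp_bound, 1.0)
        if l[j] > ref:
            retained += 1
        else:
            break
    return retained

# Example: three correlated 4-item scales (factor corr .3), n = 200
rng = np.random.default_rng(1)
n, loading = 200, 0.6
factors = rng.standard_normal((n, 3)) @ np.linalg.cholesky(
    np.full((3, 3), 0.3) + 0.7 * np.eye(3)).T
items = (np.repeat(factors, 4, axis=1) * loading
         + rng.standard_normal((n, 12)) * np.sqrt(1 - loading ** 2))
eig = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
print(empirical_kaiser_criterion(eig, n))  # typically 3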
Introduction: The involvement of parents in their children's academic performance has been extensively researched in education and has received increasing attention. This article aims to examine the impact of parental contribution on children's scholastic accomplishments, concentrating on the statistical correlation between the two. Methods: This study used qualitative and quantitative methods to examine the association between parents' involvement and academic results. Results: The results show that parental involvement boosts academic performance. Discussion: Understanding the impact of parental involvement on children's academic performance is essential for educators, policymakers, and families alike, as it highlights the importance of fostering an educationally rich environment in which children can thrive. Limitations: The data were limited to a single survey of 356 parents from different schools in Saudi Arabia in 2023. Conclusions: The study's findings show that parental involvement positively impacts students' academic outcomes by 42.1%.
We demonstrate that all conventional meta-analyses of correlation coefficients are biased, explain why, and offer solutions. Because the standard error of a correlation coefficient depends on the size of the coefficient, inverse-variance weighted averages will be biased even under ideal meta-analytical conditions (i.e., absence of publication bias, p-hacking, or other biases). Transformation to Fisher's z often greatly reduces these biases but still does not mitigate them entirely. Although all are small-sample biases (N < 200), they will often have practical consequences in psychology, where the typical sample size of correlational studies is 86. We offer two solutions: the well-known Fisher's z-transformation and a new small-sample adjustment of Fisher's z that renders any remaining bias scientifically trivial.
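A minimal sketch (not from the article) of the standard Fisher's z route: transform each correlation, take the inverse-variance weighted mean in z space, and back-transform. The authors' small-sample adjustment is not reproduced here.

```python
import numpy as np

def meta_analytic_r(rs, ns):
    """Fixed-effect meta-analysis of correlations via Fisher's z.

    Each r is transformed with z = atanh(r), whose sampling variance
    is approximately 1/(n - 3); the weighted mean z is back-transformed
    with tanh. Toy illustration, not the article's adjusted estimator.
    """
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)          # Fisher's z-transformation
    w = ns - 3                  # inverse of var(z) = 1/(n - 3)
    z_bar = (w * z).sum() / w.sum()
    return np.tanh(z_bar)

# Toy example: five studies around the field-typical n of 86
print(meta_analytic_r(rs=[.21, .35, .18, .40, .27],
                      ns=[86, 120, 60, 95, 74]))
```

Averaging in z space rather than r space is what removes most, though per the abstract not all, of the small-sample bias.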
Effect size information is essential for the scientific enterprise and plays an increasingly central role in the scientific process. We extracted 147,328 correlations and developed a hierarchical taxonomy of variables reported in the Journal of Applied Psychology and Personnel Psychology from 1980 to 2010 to produce empirical effect size benchmarks at the omnibus level, for 20 common research domains, and at an even finer grained level of generality. Results indicate that the usual interpretation and classification of effect sizes as small, medium, and large bear almost no resemblance to findings in the field, because distributions of effect sizes exhibit tertile partitions at values approximately one-half to one-third those intuited by Cohen (1988). Our results offer information that can be used for research planning and design purposes, such as producing better informed non-nil hypotheses and estimating statistical power and planning sample size accordingly. We also offer information useful for understanding the relative importance of the effect sizes found in a particular study in relation to others, and for identifying which research domains have advanced more or less, given that larger effect sizes indicate a better understanding of a phenomenon. Our study also identifies research domains for which the investigation of moderating effects may be more fruitful, and provides information that is likely to facilitate the implementation of Bayesian analysis. Finally, our study offers information that practitioners can use to evaluate the relative effectiveness of various types of interventions.
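As a simple illustration of what "tertile partitions" means here (invented numbers, not the 147,328 extracted correlations), empirical benchmarks of this kind are just distributional tertiles of observed effect sizes:

```python
import numpy as np

# Invented, right-skewed stand-in for a research domain's observed
# correlations; the article's benchmarks come from coded journal data.
rng = np.random.default_rng(2)
observed_r = rng.beta(1.5, 6.0, size=5000)

# Tertile cut points define empirical "small/medium/large" benchmarks
small_cut, large_cut = np.quantile(observed_r, [1/3, 2/3])
print(f"small <= {small_cut:.2f} < medium <= {large_cut:.2f} < large")
```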
Using a database of 378 hail days between 1981 and 2020, the climatic characteristics of 23 convective parameters derived from sounding data and ERA5 data were statistically analysed. The goal of this work is to evaluate the usefulness and representativeness of convective parameters derived from sounding data and reanalysis data for operational forecasting of hail. The average 12:00 UTC CAPE was 433 J/kg from ERA5 data and 505 J/kg from rawinsonde data. A Spearman correlation matrix of the parameter values indicates high correlations among the parameters calculated from parcel theory, the humidity indices, and the complex indices. The probability of large hail increases with high values of low-level and boundary-layer moisture, high CAPE, and a high lifting condensation level (LCL) height.
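A minimal sketch of the kind of Spearman correlation matrix described above; the parameter columns and the relationships among them are invented for illustration, the article uses 23 parameters over the 378 hail days.

```python
import numpy as np
import pandas as pd

# Hypothetical daily values of a few convective parameters
rng = np.random.default_rng(3)
n_days = 378
cape = rng.gamma(shape=2.0, scale=250.0, size=n_days)       # J/kg
lcl_height = rng.normal(1200, 300, size=n_days)             # m
mixr = 8 + 0.004 * cape + rng.normal(0, 1, n_days)          # g/kg, tied to CAPE

df = pd.DataFrame({"CAPE": cape, "LCL": lcl_height, "MIXR": mixr})
print(df.corr(method="spearman").round(2))  # rank-based correlation matrix
```

Rank-based correlation is a natural choice here because convective parameters such as CAPE are strongly right-skewed, which distorts Pearson correlations.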