Exploratory and confirmatory factor analyses are two important consecutive steps in an overall analysis process. The overall analysis should start with an exploratory factor analysis that explores the data and establishes a hypothesis for the factor model in the population. Then, the analysis should continue with a confirmatory factor analysis to assess whether the hypothesis proposed in the exploratory step is plausible in the population. To carry out the analysis, researchers usually collect a single sample and then split it into two halves. As no specific splitting methods have been proposed to date in the context of factor analysis, researchers use a random split approach. In this paper we propose a method to split samples into equivalent subsamples, similar to one that has already been proposed in the context of multivariate regression analysis. The method was tested in simulation studies and on real datasets.
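The conventional random half-split that the abstract describes as the current default (the baseline the proposed equivalent-subsamples method is meant to replace) can be sketched in a few lines of numpy; the function name and demo data below are illustrative only, not code from the paper:

```python
import numpy as np

def random_split(data, seed=0):
    """Randomly split the rows of `data` into two disjoint halves.

    This is the conventional random-split baseline the abstract refers to:
    one half for the exploratory factor analysis, the other for the
    confirmatory step.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))
    half = len(data) // 2
    return data[idx[:half]], data[idx[half:]]

# Example: 100 respondents x 6 items (arbitrary demo values)
X = np.arange(100 * 6, dtype=float).reshape(100, 6)
efa_half, cfa_half = random_split(X)
```

Each respondent lands in exactly one half, so the exploratory and confirmatory analyses are carried out on independent subsamples.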
Full text
Available for:
EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, KILJ, KISLJ, MFDPS, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UKNU, UL, UM, UPUK, VKSCE, ZAGLJ
Exploratory graph analysis (EGA) is a new technique that was recently proposed within the framework of network psychometrics to estimate the number of factors underlying multivariate data. Unlike other methods, EGA produces a visual guide (a network plot) that not only indicates the number of dimensions to retain, but also which items cluster together and their level of association. Although previous studies have found EGA to be superior to traditional methods, they are limited in the conditions considered. These issues are addressed through an extensive simulation study that incorporates a wide range of plausible structures that may be found in practice, including continuous and dichotomous data, and unidimensional and multidimensional structures. Additionally, two new EGA techniques are presented: one that extends EGA to also deal with unidimensional structures, and the other based on the triangulated maximally filtered graph approach (EGAtmfg). Both EGA techniques are compared with 5 widely used factor analytic techniques. Overall, EGA and EGAtmfg are found to perform as well as the most accurate traditional method, parallel analysis, and to produce the best large-sample properties of all the methods evaluated. To facilitate the use and application of EGA, we present a straightforward R tutorial on how to apply and interpret EGA, using scores from a well-known psychological instrument: the Marlowe-Crowne Social Desirability Scale.
Translational Abstract
Understanding the structure and composition of data is an important undertaking for a wide range of scientific domains. An initial step in this endeavor is to determine how the data can be summarized into a smaller set of meaningful variables (i.e., dimensions). In this article, we extend a state-of-the-art network science approach, called exploratory graph analysis (EGA), used to identify the dimensions that exist in multivariate data. Using Monte Carlo methods, we compared EGA with several traditional eigenvalue-based approaches that are commonly used in the psychological literature including parallel analysis. Additionally, the simulation study evaluated the performance of new variants of the EGA method and considered a wider set of realistic conditions, such as unidimensional structures and variables of continuous and categorical levels of measurement. We found that EGA performed as well as or better than the most accurate traditional method (i.e., parallel analysis). Importantly, EGA offers a few advantages over traditional methods: (a) it provides an intuitive visual representation of the results, (b) this representation offers a more complex understanding of the data's structure, and (c) the algorithm is deterministic meaning there are fewer researcher degrees of freedom. In sum, our study demonstrates that EGA can accurately identify the underlying structure of multivariate data, while retaining the complexity of the data's structure. This implies that researchers can meaningfully summarize their data without sacrificing the finer details.
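Actual EGA estimates a regularized partial-correlation network and applies a community-detection algorithm (walktrap in the original proposal). As a rough pure-numpy illustration of the core idea of counting clusters in an item network, the sketch below substitutes a much simpler rule: connect items whose absolute correlation exceeds a threshold and count connected components. This is an assumption-laden stand-in, not the EGA algorithm itself:

```python
import numpy as np

def count_components(adj):
    """Number of connected components in an undirected adjacency matrix."""
    n = len(adj)
    seen, comps = set(), 0
    for start in range(n):
        if start in seen:
            continue
        comps += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(u for u in range(n) if adj[v, u] and u not in seen)
    return comps

def crude_dimension_count(X, threshold=0.3):
    """Crude stand-in for EGA (NOT the real algorithm): link items whose
    absolute correlation exceeds `threshold`, then count the clusters."""
    R = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(R, 0.0)
    adj = np.abs(R) > threshold
    return count_components(adj)

# Demo: six items driven by two independent latent factors (three items each)
rng = np.random.default_rng(1)
f1, f2 = rng.standard_normal(500), rng.standard_normal(500)
items = [f1 + 0.4 * rng.standard_normal(500) for _ in range(3)]
items += [f2 + 0.4 * rng.standard_normal(500) for _ in range(3)]
X = np.column_stack(items)
```

On this demo data the two item clusters fall out as two components; the real EGA replaces both the thresholding and the component count with statistically principled machinery.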
Full text
Available for:
CEKLJ, FFLJ, NUK, ODKLJ, PEFLJ, UPUK
We aim to provide a conceptual view of the origins, development and future directions of FACTOR, a popular free program for fitting the factor analysis (FA) model.
The study is organized into three parts. In the first part we discuss FACTOR in its initial period (2006-2012) as a traditional FA program with many new and cutting-edge features. The second part discusses the present period (2013-2016) in which FACTOR has developed into a comprehensive program embedded in the framework of structural equation modelling and item response theory. The third part discusses expected future developments.
At present, FACTOR has attained a degree of technical development comparable to commercial software, and offers options not available elsewhere.
We discuss several shortcomings as well as points that require changes or improvements. We also discuss the functioning of FACTOR within its community of users.
Full text
Available for:
IZUM, KILJ, NUK, PILJ, PNG, SAZU, UL, UM, UPUK
There currently exist no self-report measures of social camouflaging behaviours (strategies used to compensate for or mask autistic characteristics during social interactions). The Camouflaging Autistic Traits Questionnaire (CAT-Q) was developed from autistic adults’ experiences of camouflaging, and was administered online to 354 autistic and 478 non-autistic adults. Exploratory factor analysis suggested three factors, comprising 25 items in total. Good model fit was demonstrated through confirmatory factor analysis, with measurement invariance analyses demonstrating equivalent factor structures across gender and diagnostic group. Internal consistency (α = 0.94) and preliminary test–retest reliability (r = 0.77) were acceptable. Convergent validity was demonstrated through comparison with measures of autistic traits, wellbeing, anxiety, and depression. The present study provides robust psychometric support for the CAT-Q.
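The reported internal consistency (α = 0.94) is Cronbach's alpha, which follows a textbook formula: α = k/(k−1) × (1 − Σ item variances / variance of the total score). A small numpy sketch (ours, not code from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)   # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly parallel items yield α = 1, while independent items drive α toward 0; the CAT-Q's 0.94 thus indicates very high item consistency.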
Full text
Available for:
DOBA, EMUNI, FIS, FZAB, GEOZS, GIS, IJS, IMTLJ, IZUM, KILJ, KISLJ, MFDPS, NLZOH, NUK, OBVAL, ODKLJ, OILJ, PILJ, PNG, SAZU, SBCE, SBJE, SBMB, SBNM, UILJ, UKNU, UL, UM, UPUK, VKSCE, VSZLJ, ZAGLJ
Exploratory factor analyses are commonly used to determine the underlying factors of multiple observed variables. Many criteria have been suggested to determine how many factors should be retained. In this study, we present an extensive Monte Carlo simulation to investigate the performance of extraction criteria under varying sample sizes, numbers of indicators per factor, loading magnitudes, underlying multivariate distributions of observed variables, as well as how the performance of the extraction criteria are influenced by the presence of cross-loadings and minor factors for unidimensional, orthogonal, and correlated factor models. We compared several variants of traditional parallel analysis (PA), the Kaiser-Guttman Criterion, and sequential χ2 model tests (SMT) with 4 recently suggested methods: revised PA, comparison data (CD), the Hull method, and the Empirical Kaiser Criterion (EKC). No single extraction criterion performed best for every factor model. In unidimensional and orthogonal models, traditional PA, EKC, and Hull consistently displayed high hit rates even in small samples. Models with correlated factors were more challenging, where CD and SMT outperformed other methods, especially for shorter scales. Whereas the presence of cross-loadings generally increased accuracy, non-normality had virtually no effect on most criteria. We suggest researchers use a combination of SMT and either Hull, the EKC, or traditional PA, because the number of factors was almost always correctly retrieved if those methods converged. When the results of this combination rule are inconclusive, traditional PA, CD, and the EKC performed comparatively well. However, disagreement also suggests that factors will be harder to detect, increasing sample size requirements to N ≥ 500.
Translational Abstract
Exploratory factor analysis (EFA) is a statistical tool commonly used in psychological research to determine the underlying factors of questionnaire items. One of the key issues in EFA is deciding how many underlying factors researchers need to assume to account for different responses to these items. In this simulation study, we compared different extraction criteria, designed to determine this number, under conditions that are realistic in empirical practice. We investigated conditions with one underlying factor, multiple uncorrelated factors, and multiple correlated factors. In addition, we also violated two assumptions of the extraction criteria. First, we included conditions with minor underlying factors that represent systematic measurement errors, for example, when different questionnaire items are phrased in a similar way. Second, many extraction criteria assume a normal distribution of responses to the questionnaire items and we included conditions where this distribution was non-normal. We found that (1) some criteria perform better in conditions with one factor or multiple uncorrelated factors, whereas other criteria perform well in conditions with multiple correlated factors, (2) the latter criteria perform worse when minor factors are present, and (3) non-normality did not impact the performance of most criteria. We suggest that researchers use two criteria in conjunction, one suited for single/uncorrelated factors and one suited for correlated factors. If both criteria suggest the same number of factors, the result is likely correct. Otherwise, the sample size should be at least 500 because the number of underlying factors is harder to detect.
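Traditional parallel analysis, the benchmark criterion in the study above, is simple to sketch: retain factors whose observed correlation-matrix eigenvalues exceed a chosen quantile of eigenvalues obtained from random normal data of the same shape. A minimal numpy illustration (the function name and defaults are ours, not from the study):

```python
import numpy as np

def parallel_analysis(X, n_sim=100, quantile=0.95, seed=0):
    """Horn-style parallel analysis: compare observed correlation-matrix
    eigenvalues against the `quantile` of eigenvalues from random normal
    data of the same (n, p) shape; retain leading factors that exceed it."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sim, p))
    for s in range(n_sim):
        Z = rng.standard_normal((n, p))
        sims[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    ref = np.quantile(sims, quantile, axis=0)
    k = 0  # count leading eigenvalues that beat the simulated reference
    for o, r in zip(obs, ref):
        if o > r:
            k += 1
        else:
            break
    return k
```

Production implementations (e.g., the PA variants compared in the study) differ in details such as resampling scheme and quantile choice, but this is the core comparison.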
Full text
Available for:
CEKLJ, FFLJ, NUK, ODKLJ, PEFLJ, UPUK
A goal of developmental research is to examine individual changes in constructs over time. The accuracy of the models answering such research questions hinges on the assumption of longitudinal ...measurement invariance: The repeatedly measured variables need to represent the same construct in the same metric over time. Measurement invariance can be studied through factor models examining the relations between the observed indicators and the latent constructs. In longitudinal research, ordered-categorical indicators such as self- or observer-report Likert scales are commonly used, and these measures often do not approximate continuous normal distributions. The present didactic article extends previous work on measurement invariance to the longitudinal case for ordered-categorical indicators. We address a number of problems that commonly arise in testing measurement invariance with longitudinal data, including model identification and interpretation, sparse data, missing data, and estimation issues. We also develop a procedure and associated R program for gauging the practical significance of the violations of invariance. We illustrate these issues with an empirical example using a subscale from the Mexican American Cultural Values scale. Finally, we provide comparisons of the current capabilities of 3 major latent variable programs (lavaan, Mplus, OpenMx) and computer scripts for addressing longitudinal measurement invariance.
Full text
Available for:
CEKLJ, FFLJ, NUK, ODKLJ, PEFLJ, UPUK
An Empirical Kaiser Criterion. Braeken, Johan; van Assen, Marcel A. L. M. Psychological Methods, 09/2017, Volume 22, Issue 3. Journal Article; Peer reviewed; Open access.
In exploratory factor analysis (EFA), the most popular methods for dimensionality assessment, such as the scree plot, the Kaiser criterion, or (the current gold standard) parallel analysis, are based on eigenvalues of the correlation matrix. To further understanding and development of factor retention methods, results on population and sample eigenvalue distributions are introduced based on random matrix theory and Monte Carlo simulations. These results are used to develop a new factor retention method, the Empirical Kaiser Criterion. The performance of the Empirical Kaiser Criterion and parallel analysis is examined in typical research settings, with multiple scales that are desired to be relatively short, but still reliable. Theoretical and simulation results illustrate that the new Empirical Kaiser Criterion performs as well as parallel analysis in typical research settings with uncorrelated scales, but much better when scales are both correlated and short. We conclude that the Empirical Kaiser Criterion is a powerful and promising factor retention method, because it is based on distribution theory of eigenvalues, shows good performance, is easily visualized and computed, and is useful for power analysis and sample size planning for EFA.
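As the abstract notes, the criterion is easily computed: each sample eigenvalue is compared against a reference value derived from random-matrix theory, corrected for sample size and for the variance already absorbed by earlier factors. The sketch below encodes our reading of the published rule and should be checked against Braeken and van Assen (2017) before serious use:

```python
import numpy as np

def empirical_kaiser_criterion(X):
    """Sketch of the Empirical Kaiser Criterion (our reading of
    Braeken & van Assen, 2017): retain factor j while its sample
    eigenvalue exceeds a reference that rescales the random-matrix
    null bound (1 + sqrt(p/n))^2 by the variance left unexplained,
    floored at the classic Kaiser value of 1."""
    n, p = X.shape
    lam = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    null_bound = (1.0 + np.sqrt(p / n)) ** 2  # largest null eigenvalue, asymptotically
    k = 0
    for j in range(p):
        remaining = (p - lam[:j].sum()) / (p - j)  # average variance left per factor
        reference = max(remaining * null_bound, 1.0)
        if lam[j] > reference:
            k += 1
        else:
            break
    return k
```

The sample-size correction is what lets the criterion outperform the classic Kaiser rule, which simply uses the constant reference 1.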
Full text
Available for:
CEKLJ, FFLJ, NUK, ODKLJ, PEFLJ, UPUK
This article provides a summary and discussion of major challenges and pitfalls in factor analysis as observed in psychological assessment research, as well as our recommendations within each of these areas. More specifically, we discuss a need to be more careful about item distribution properties in light of their potential impact on model estimation as well as providing a very strong caution against item parceling in the evaluation of psychological test instruments. Moreover, we consider the important issue of estimation, with a particular emphasis on selecting the most appropriate estimator to match the scaling properties of test item indicators. Next, we turn our attention to the issues of model fit and comparison of alternative models with the strong recommendation to allow for theoretical guidance rather than being overly influenced by model fit indices. In addition, since most models in psychological assessment research involve multidimensional items that often do not map neatly onto a priori confirmatory models, we provide recommendations about model respecification. Finally, we end our article with a discussion of alternative forms of model specification that have become particularly popular recently: exploratory structural equation modeling (ESEM) and bifactor modeling. We discuss various important areas of consideration for the applied use of these model specifications, with a conclusion that, whereas ESEM models can offer a useful avenue for the evaluation of internal structure of test items, researchers should be very careful about using bifactor models for this purpose. Instead, we highlight other, more appropriate applications of such models.
Public Significance Statement
This article discusses important issues for psychological assessment research using a specific form of statistical analysis: factor analysis. Researchers and the practitioners who consume the research are made aware of common challenges and potential pitfalls in using factor analysis in psychological assessment research as well as recommendations for appropriate decision making.
Full text
Available for:
CEKLJ, FFLJ, NUK, ODKLJ, PEFLJ, UPUK
Confirmatory factor analysis (CFA) has been frequently applied to executive function measurement since first used to identify a three-factor model of inhibition, updating, and shifting; however, subsequent CFAs have supported inconsistent models across the life span, ranging from unidimensional to nested-factor models (i.e., bifactor without inhibition). This systematic review summarized CFAs on performance-based tests of executive functions and reanalyzed summary data to identify best-fitting models. Eligible CFAs involved 46 samples (N = 9,756). The most frequently accepted models varied by age (i.e., preschool = one/two-factor; school-age = three-factor; adolescent/adult = three/nested-factor; older adult = two/three-factor), and most often included updating/working memory, inhibition, and shifting factors. A bootstrap reanalysis simulated 5,000 samples from 21 correlation matrices (11 child/adolescent; 10 adult) from studies including the three most common factors, fitting seven competing models. Model results were summarized as the mean percent accepted (i.e., average rate at which models converged and met fit thresholds: CFI ≥ .90/RMSEA ≤ .08) and mean percent selected (i.e., average rate at which a model showed superior fit to other models: ΔCFI ≥ .005/.010/ΔRMSEA ≤ −.010/−.015). No model consistently converged and met fit criteria in all samples. Among adult samples, the nested-factor was accepted (41-42%) and selected (8-30%) most often. Among child/adolescent samples, the unidimensional model was accepted (32-36%) and selected (21-53%) most often, with some support for two-factor models without a differentiated shifting factor. Results show some evidence for greater unidimensionality of executive function among child/adolescent samples and both unity and diversity among adult samples.
However, low rates of model acceptance/selection suggest possible bias toward the publication of well-fitting but potentially nonreplicable models with underpowered samples.
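The acceptance and selection rules used in the reanalysis reduce to simple threshold checks on CFI and RMSEA. The sketch below encodes the cutoffs quoted in the abstract; the function names and the (CFI, RMSEA) tuple layout are ours:

```python
def accepted(cfi, rmsea):
    """Acceptance rule from the review: CFI >= .90 and RMSEA <= .08."""
    return cfi >= 0.90 and rmsea <= 0.08

def selected_over(candidate, other, strict=False):
    """Selection rule: `candidate` (cfi, rmsea) beats `other` when CFI
    improves by at least the delta-CFI cutoff AND RMSEA drops by at least
    the delta-RMSEA cutoff. The abstract reports two threshold sets:
    lenient (.005 / .010) and strict (.010 / .015)."""
    d_cfi, d_rmsea = (0.010, 0.015) if strict else (0.005, 0.010)
    return (candidate[0] - other[0] >= d_cfi) and (candidate[1] - other[1] <= -d_rmsea)
```

Applying both rules across 5,000 bootstrap samples per correlation matrix yields the "mean percent accepted" and "mean percent selected" summaries reported above.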
Public Significance Statement
Previous research has explored whether executive functions are best described as a single self-regulatory ability (i.e., unity) or a diverse set of abilities related to control over thoughts and behaviors (i.e., diversity). This systematic review identified three abilities most frequently evaluated in psychological research (i.e., inhibition, shifting, and updating/working memory), and a reanalysis of previous studies identified greater unity of executive functions during childhood and greater diversity arising from adolescence into adulthood.
Full text
Available for:
CEKLJ, FFLJ, NUK, ODKLJ, PEFLJ, UPUK