During heat treatment and other production processes, gradients of temperature and other observable quantities may vary rapidly in narrow regions, while in other parts of the workpiece these quantities behave quite smoothly. Nevertheless, it is important to capture these fine structures during numerical simulations, and local mesh refinement in these regions is needed to resolve the behaviour with sufficient accuracy. On the other hand, the regions of special interest change during the process, so the refined mesh regions must move with them. Adaptive finite element methods provide a tool that automatically derives criteria for local mesh refinement from the computed solution (rather than relying only on a priori knowledge of the expected behaviour).
We present examples from the heat treatment of steel, including phase transitions with transformation-induced plasticity and stress-dependent phase transformations. On a mesoscopic scale of grains, similar methods can be used to efficiently and accurately compute phase-field models for phase transformations.
Adaptive finite element simulations for macroscopic and mesoscopic models of steel
During heat treatment and other production steps, the gradients of temperature and other quantities can vary strongly in narrow (boundary) regions of a workpiece, while they are relatively smooth over large regions. Nevertheless, resolving these fine structures in a numerical simulation is very important. Local mesh refinement is necessary to resolve the behaviour with sufficient accuracy. Moreover, these regions change during the process, which also makes it necessary to adapt the refined mesh regions. Adaptive finite element methods are a tool that, based on the computed solution (and not on a priori knowledge about the behaviour), provides automatic criteria for local mesh refinement.
We give examples from the heat treatment of steel with phase transformations, transformation-induced plasticity, and stress-dependent transformation behaviour. On the mesoscopic scale of grains, similar methods can be used to compute phase-field models for phase transformations efficiently and with sufficient accuracy.
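As a toy illustration of the adaptivity idea described above (not the authors' code, and for a 1D model problem only), the following sketch bisects elements wherever a simple gradient-jump indicator, standing in for a residual-based a posteriori error estimator, exceeds a tolerance. The mesh then automatically concentrates nodes around a steep front such as a narrow temperature layer:

```python
# Toy 1D adaptive refinement (illustrative only): elements whose error
# indicator exceeds a tolerance are bisected. The indicator used here is
# the jump of the discrete gradient across interior nodes, a crude
# stand-in for a residual-based a posteriori error estimator.
import numpy as np

def refine(nodes, u, tol=0.05, max_iter=10):
    """Repeatedly bisect elements with a large gradient-jump indicator."""
    for _ in range(max_iter):
        grads = np.diff(u(nodes)) / np.diff(nodes)   # per-element gradients
        jumps = np.abs(np.diff(grads))               # jumps at interior nodes
        eta = np.zeros(len(nodes) - 1)               # indicator per element
        eta[:-1] = np.maximum(eta[:-1], jumps)       # node jump -> left element
        eta[1:] = np.maximum(eta[1:], jumps)         # node jump -> right element
        marked = np.where(eta > tol)[0]
        if marked.size == 0:
            break
        mids = 0.5 * (nodes[marked] + nodes[marked + 1])  # bisect marked elements
        nodes = np.sort(np.concatenate([nodes, mids]))
    return nodes

# steep interior layer around x = 0.5, mimicking a narrow temperature front
front = lambda x: np.tanh(50.0 * (x - 0.5))
mesh = refine(np.linspace(0.0, 1.0, 11), front)
# nodes cluster near the front while the smooth regions stay coarse
```

In a real simulation the marking would be driven by an error estimator for the discretized PDE, and the refined regions would be updated in every time step as the front moves.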
Multinomial processing tree (MPT) models are a family of stochastic models for psychology and related sciences that can be used to model observed categorical frequencies as a function of a sequence of latent states. For the analysis of such models, the present article presents a platform-independent computer program called multiTree, which simplifies the creation and analysis of MPT models, making them more convenient to implement and analyze. multiTree also offers advanced modeling features: it provides estimates of the parameters and their variability, goodness-of-fit statistics, hypothesis tests, identifiability checks, parametric and nonparametric bootstrapping, and power analyses. In this article, the algorithms underlying multiTree are given, and a user guide is provided. The multiTree program can be downloaded from http://psycho3.uni-mannheim.de/multitree.
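The core of any MPT analysis, as performed by tools like multiTree, is maximizing a (product-)multinomial likelihood whose category probabilities are sums of branch products of the model parameters. A minimal sketch for a two-high-threshold recognition model (hypothetical frequencies, and generic numerical optimization rather than multiTree's own algorithm):

```python
# Sketch of the MPT idea for a two-high-threshold recognition model:
# category probabilities are sums over tree branches, each branch a product
# of parameters (D = detection, g = guessing). Frequencies are hypothetical,
# and generic numerical optimization replaces multiTree's own algorithm.
import numpy as np
from scipy.optimize import minimize

def category_probs(theta):
    D, g = theta
    return np.array([
        D + (1 - D) * g,        # hit: detect old item, or guess "old"
        (1 - D) * (1 - g),      # miss
        (1 - D) * g,            # false alarm: fail to detect new, guess "old"
        D + (1 - D) * (1 - g),  # correct rejection
    ])

def neg_log_lik(theta, counts):
    return -np.sum(counts * np.log(category_probs(theta)))

counts = np.array([75, 25, 30, 70])  # hits, misses, false alarms, correct rejections
res = minimize(neg_log_lik, x0=[0.5, 0.5], args=(counts,),
               bounds=[(1e-6, 1 - 1e-6)] * 2)
D_hat, g_hat = res.x  # analytically: D = .75 - .30 = .45, g = .30 / .55
```

For this simple model the estimates can be checked against the closed-form solution; for larger trees, numerical estimation of exactly this kind of likelihood is what a dedicated program automates.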
The size of a model has been shown to critically affect the goodness of approximation of the model fit statistic T to the asymptotic chi-square distribution in finite samples. It is not clear, however, whether this "model size effect" is a function of the number of manifest variables, the number of free parameters, or both. By means of two Monte Carlo simulation studies, it is demonstrated that neither the number of free parameters to be estimated nor the model degrees of freedom systematically affects the T statistic when the number of manifest variables is held constant. Increasing the number of manifest variables, however, is associated with a severe bias. These results imply that model fit depends strongly on the size of the covariance matrix, and that future studies involving goodness-of-fit statistics should always consider the number of manifest variables but can safely neglect the influence of particular model specifications.
Many constructs in personality psychology assume a hierarchical structure positing a general factor along with several narrower subdimensions or facets. Different approaches are commonly used to model such a structure, including higher-order factor models, bifactor models, single-factor models based on the responses to the observed items, and single-factor models based on parcels computed from the mean observed scores on the subdimensions. The present article investigates the consequences of adopting a particular approach for the validity of conclusions derived from the resulting correlation between the most general factor and a covariate. Any of the considered approaches may closely approximate the true correlation when its underlying assumptions are met or when model misspecifications only pertain to the measurement model of the hierarchical construct. However, when misspecifications involve nonmodeled covariances between parts of the hierarchically structured construct and the covariate, higher-order models, single-factor representations, and facet-parcel approaches can yield severely biased estimates, sometimes grossly misrepresenting the true correlation and even incurring sign changes. In contrast, a bifactor approach proved to be most robust and to provide rather unbiased results under all conditions. The implications are discussed and recommendations are provided.
Exploratory factor analyses are commonly used to determine the underlying factors of multiple observed variables. Many criteria have been suggested to determine how many factors should be retained. In this study, we present an extensive Monte Carlo simulation to investigate the performance of extraction criteria under varying sample sizes, numbers of indicators per factor, loading magnitudes, and underlying multivariate distributions of observed variables, as well as how the performance of the extraction criteria is influenced by the presence of cross-loadings and minor factors for unidimensional, orthogonal, and correlated factor models. We compared several variants of traditional parallel analysis (PA), the Kaiser-Guttman Criterion, and sequential χ2 model tests (SMT) with four recently suggested methods: revised PA, comparison data (CD), the Hull method, and the Empirical Kaiser Criterion (EKC). No single extraction criterion performed best for every factor model. In unidimensional and orthogonal models, traditional PA, the EKC, and the Hull method consistently displayed high hit rates even in small samples. Models with correlated factors were more challenging; here, CD and SMT outperformed the other methods, especially for shorter scales. Whereas the presence of cross-loadings generally increased accuracy, non-normality had virtually no effect on most criteria. We suggest that researchers use a combination of SMT and either the Hull method, the EKC, or traditional PA, because the number of factors was almost always correctly retrieved when these methods converged. When the results of this combination rule are inconclusive, traditional PA, CD, and the EKC performed comparatively well. However, disagreement also suggests that factors will be harder to detect, increasing sample size requirements to N ≥ 500.
Translational Abstract
Exploratory factor analysis (EFA) is a statistical tool commonly used in psychological research to determine the underlying factors of questionnaire items. One of the key issues in EFA is deciding how many underlying factors researchers need to assume to account for different responses to these items. In this simulation study, we compared different extraction criteria, designed to determine this number, under conditions that are realistic in empirical practice. We investigated conditions with one underlying factor, multiple uncorrelated factors, and multiple correlated factors. In addition, we also violated two assumptions of the extraction criteria. First, we included conditions with minor underlying factors that represent systematic measurement errors, for example, when different questionnaire items are phrased in a similar way. Second, many extraction criteria assume a normal distribution of responses to the questionnaire items and we included conditions where this distribution was non-normal. We found that (1) some criteria perform better in conditions with one factor or multiple uncorrelated factors, whereas other criteria perform well in conditions with multiple correlated factors, (2) the latter criteria perform worse when minor factors are present, and (3) non-normality did not impact the performance of most criteria. We suggest that researchers use two criteria in conjunction, one suited for single/uncorrelated factors and one suited for correlated factors. If both criteria suggest the same number of factors, the result is likely correct. Otherwise, the sample size should be at least 500 because the number of underlying factors is harder to detect.
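Traditional parallel analysis, one of the criteria compared in the study, is easy to state algorithmically: retain as many factors as there are observed eigenvalues exceeding the corresponding eigenvalues of random data. A sketch under simplified assumptions (mean rather than quantile comparison, normal random data, simulated example data; not the study's simulation code):

```python
# Sketch of traditional parallel analysis (illustrative, not the study's
# simulation code): retain as many factors as there are eigenvalues of the
# observed correlation matrix exceeding the mean eigenvalues obtained from
# random normal data of the same dimensions.
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    ref = np.zeros(p)
    for _ in range(n_sims):
        r = rng.standard_normal((n, p))
        ref += np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    ref /= n_sims
    return int(np.sum(obs > ref))

# hypothetical data: 6 indicators (loadings .7) on two factors correlated .3
rng = np.random.default_rng(1)
n = 500
F = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=n)
L = np.zeros((6, 2))
L[:3, 0] = 0.7
L[3:, 1] = 0.7
X = F @ L.T + np.sqrt(1 - 0.7 ** 2) * rng.standard_normal((n, 6))
n_factors = parallel_analysis(X)  # should recover the two factors
```

The study's harder conditions (stronger factor correlations, minor factors, short scales) are exactly the settings in which the observed eigenvalues beyond the first approach the random-data reference line, which is why criteria of this family then begin to disagree.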
One of the most important issues in structural equation modeling concerns testing model fit. We propose to retain the likelihood ratio test in combination with decision criteria that increase with sample size. Specifically, rooted in Neyman-Pearson hypothesis testing, we advocate balancing α- and β-error risks. This strategy has a number of desirable consequences and addresses several objections that have been raised against the likelihood ratio test in model evaluation. First, balancing error risks avoids logical problems with Fisher-type hypothesis tests when predicting the null hypothesis (i.e., model fit). Second, both types of statistical decision errors are controlled. Third, larger samples are encouraged (rather than penalized) because both error risks diminish as the sample size increases. Finally, the strategy addresses the concern that structural equation models cannot necessarily be expected to provide an exact description of real-world phenomena.
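The balancing idea can be made concrete with the noncentral chi-square distribution: choose the critical value at which the α-risk of rejecting a correctly specified model equals the β-risk of accepting a model whose misspecification corresponds to a given noncentrality. A sketch with illustrative numbers (the effect size F0 is defined here via an RMSEA of .05, an assumption made only for this example):

```python
# Sketch of the balanced-risk decision rule (illustrative numbers): choose
# the chi-square critical value c at which the alpha-risk of rejecting a
# correct model equals the beta-risk of accepting a model whose
# misspecification implies noncentrality nc = N * F0.
import numpy as np
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def balanced_critical_value(df, n_obs, f0):
    nc = n_obs * f0
    balance = lambda c: chi2.sf(c, df) - ncx2.cdf(c, df, nc)  # alpha - beta
    c = brentq(balance, 1e-6, df + nc + 100.0 * np.sqrt(2.0 * df))
    return c, chi2.sf(c, df)

df, n_obs = 40, 1000
f0 = df * 0.05 ** 2                  # F0 implied by RMSEA = .05 (assumption)
c, alpha = balanced_critical_value(df, n_obs, f0)
# alpha (= beta) shrinks as n_obs grows, encouraging larger samples
```

Rerunning this with a larger n_obs yields a smaller balanced α, which illustrates the third point above: both error risks diminish as the sample size increases.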
Structural equation modeling (SEM) is a widespread and commonly used approach to test substantive hypotheses in the social and behavioral sciences. When performing hypothesis tests, it is vital to rely on a sufficiently large sample size to achieve an adequate degree of statistical power to detect the hypothesized effect. However, applications of SEM rarely consider statistical power when informing sample size decisions, or determine the statistical power of the focal hypothesis tests performed. One reason is the difficulty of translating substantive hypotheses into the specific effect size values required to perform power analyses, as well as the lack of user-friendly software to automate this process. This paper introduces the second version of the R package semPower, which includes comprehensive functionality for various types of power analyses in SEM. Specifically, semPower 2 performs both analytical and simulated a priori, post hoc, and compromise power analyses for structural equation models with or without latent variables. It also supports multigroup settings and provides user-friendly convenience functions for many common model types (e.g., standard confirmatory factor analysis (CFA) models, regression models, autoregressive moving average (ARMA) models, and cross-lagged panel models) to simplify power analyses when a model-based definition of the effect in terms of model parameters is desired.
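An a priori power analysis of the kind such tools automate can be sketched as follows (standard noncentral chi-square logic for the likelihood-ratio test, not semPower's actual API; the effect size F0 defined via RMSEA = .05 is an illustrative assumption):

```python
# Sketch of an a priori power analysis for the likelihood-ratio test
# (standard noncentral chi-square logic, not semPower's actual API):
# find the smallest N at which the power to detect a misspecification of
# size F0 reaches the target, given df and alpha.
from scipy.stats import chi2, ncx2

def required_n(f0, df, alpha=0.05, power=0.80, n_max=100_000):
    crit = chi2.ppf(1 - alpha, df)
    for n in range(10, n_max):
        if ncx2.sf(crit, df, (n - 1) * f0) >= power:  # P(T > crit | misspecified)
            return n
    raise ValueError("no N up to n_max reaches the target power")

df = 40
f0 = df * 0.05 ** 2        # effect defined via RMSEA = .05 (illustrative)
n_req = required_n(f0, df)
```

Post hoc power analysis inverts the same relation (power for a given N), and compromise power analysis chooses the critical value so that the α- and β-risks stand in a desired ratio.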
Model selection is an omnipresent issue in structural equation modeling (SEM). When deciding among competing theories instantiated as formal statistical models, a trade-off is often sought between goodness-of-fit and model parsimony. Whereas traditional fit assessment in SEM quantifies parsimony solely as the number of free parameters, the ability of a model to account for diverse data patterns, known as fitting propensity, also depends on the functional form of the model. The present investigation provides a systematic assessment of the fitting propensity of models typically considered and compared in SEM, namely exploratory and confirmatory factor analysis models positing different numbers of latent factors or different hierarchical structures (single-factor, correlated-factors, higher-order, and bifactor models). Furthermore, the behavior of commonly used fit indices (CFI, SRMR, RMSEA, TLI) and information criteria (AIC, BIC) in accounting for fitting propensity was assessed. Although the results demonstrated varying degrees of fitting propensity for the models under scrutiny, these differences were mostly driven by the number of free parameters. There was little evidence for additional differences due to the functional form of the compared models. Fit indices that adjust for the number of free parameters, such as the RMSEA and the TLI, thus adequately accounted for differences in fitting propensity.
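Most of the indices and criteria examined here are simple functions of the test statistic T, the model degrees of freedom, the sample size, and the number of free parameters. A sketch of the standard textbook formulas with illustrative input values (not the study's code; the SRMR, which requires the residual matrices themselves, is omitted):

```python
# Standard textbook formulas for common fit indices (not the study's code).
# Inputs: test statistic T, model df, sample size N, number of free
# parameters q, and the baseline (independence) model's statistic T_b
# with df_b. All input values below are illustrative.
import numpy as np

def fit_indices(T, df, N, q, T_b, df_b):
    rmsea = np.sqrt(max(T - df, 0.0) / (df * (N - 1)))
    cfi = 1.0 - max(T - df, 0.0) / max(T_b - df_b, T - df, 0.0)
    tli = ((T_b / df_b) - (T / df)) / ((T_b / df_b) - 1.0)
    aic = T + 2 * q               # chi-square-based variant of the AIC
    bic = T + np.log(N) * q       # chi-square-based variant of the BIC
    return {"RMSEA": rmsea, "CFI": cfi, "TLI": tli, "AIC": aic, "BIC": bic}

fi = fit_indices(T=55.0, df=40, N=500, q=25, T_b=1200.0, df_b=55)
```

The formulas make the parsimony adjustments visible: the RMSEA and TLI divide by the model df, and the AIC/BIC penalize q directly, whereas any fitting propensity stemming from functional form is not reflected in these inputs at all.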
Abstract
Guidelines to evaluate the fit of structural equation models can only offer meaningful insights to the extent that they apply equally to a wide range of situations. However, a number of previous studies found that statistical power to reject a misspecified model increases and descriptive fit indices deteriorate when loadings are high, thereby inappropriately penalizing indicators of high reliability. Based on both theoretical considerations and empirical simulation studies, we show that these previous results only hold for a particular definition and a particular type of model error. At a constant degree of misspecification (as measured through the minimum of the fit function), statistical power to reject a wrong model and noncentrality-based fit indices (such as the root mean square error of approximation; RMSEA) are independent of loading magnitude. If the degree of model error is controlled through the average residuals, higher loadings are associated with increased statistical power and a higher RMSEA when the measurement model is misspecified, but with decreased power and a lower RMSEA when the structural model is misspecified. In effect, inconsistencies among noncentrality-based and residual-based fit indices can provide information about possible sources of misfit that would be obscured when considering either measure in isolation.
Translational Abstract
Statistical models have to be compared with the observed data to gauge the adequacy of the model. The fit of structural equation models can be evaluated in different ways. One way is to consider the average residual, that is, the average deviation between the observed and the model-implied covariance structure. A common index reflecting such an unweighted measure of the residuals is the standardized root-mean-square residual (SRMR). Another way to evaluate model fit is to consider the minimum of the fitting function used to estimate the parameters of the model. When maximum likelihood is used to estimate the model parameters, the minimum of this function amounts to a weighting of the residuals. A common index based on the minimum of the fit function (and hence reflecting a weighted measure of the residuals) is the root-mean-squared error of approximation (RMSEA). Weighted and unweighted measures of the residuals can provide incongruent information about the validity of the model, depending on the loading magnitude and the type of model error. When the loadings are high, the RMSEA is more sensitive to misspecified measurement models, whereas the SRMR is more sensitive to misspecified structural models. This behavior reverses when the loadings are low, so that the RMSEA is more sensitive to misspecified structural models, whereas the SRMR is more sensitive to misspecified measurement models. Inconsistencies among weighted and unweighted measures of the residuals thus provide information about sources of misfit that would be obscured when considering either measure in isolation.
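The two residual summaries contrasted above can be computed directly from an observed and a model-implied covariance matrix. A sketch using the standard formulas and a deliberately tiny illustrative example (two variables with a hypothetical model df of 1; not the study's code):

```python
# Standard formulas for the two residual summaries: the SRMR averages
# unweighted standardized residuals, while the RMSEA is based on the
# (weighted) minimum of the ML discrepancy function. S is the "observed",
# Sigma the model-implied covariance (here: correlation) matrix.
import numpy as np

def srmr(S, Sigma):
    """Unweighted summary: root mean square of standardized residuals."""
    d = np.sqrt(np.diag(S))
    resid = (S - Sigma) / np.outer(d, d)
    idx = np.tril_indices_from(resid)        # lower triangle incl. diagonal
    return np.sqrt(np.mean(resid[idx] ** 2))

def ml_fit_function(S, Sigma):
    """Weighted summary: minimum of the ML discrepancy function."""
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma)) - np.log(np.linalg.det(S))
            + np.trace(S @ np.linalg.inv(Sigma)) - p)

def rmsea(F_min, df, N):
    T = (N - 1) * F_min                      # the chi-square test statistic
    return np.sqrt(max(T - df, 0.0) / (df * (N - 1)))

S = np.array([[1.0, 0.5], [0.5, 1.0]])       # observed correlations
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])   # model-implied correlations
s = srmr(S, Sigma)                           # about .058
eps = rmsea(ml_fit_function(S, Sigma), df=1, N=500)
```

Because the ML fit function weights the residuals by the model-implied covariances, the same raw residual contributes differently to the RMSEA depending on where in the model it occurs, which is the mechanism behind the divergent sensitivities described above.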