Cronbach’s Coefficient Alpha
Cho, Eunseong; Kim, Seonghoon
Organizational Research Methods, 04/2015, Volume 18, Issue 2
Journal Article
Peer reviewed
This study disproves the following six common misconceptions about coefficient alpha: (a) Alpha was first developed by Cronbach. (b) Alpha equals reliability. (c) A high value of alpha is an indication of internal consistency. (d) Reliability will always be improved by deleting items using “alpha if item deleted.” (e) Alpha should be greater than or equal to .7 (or, alternatively, .8). (f) Alpha is the best choice among all published reliability coefficients. This study discusses the inaccuracy of each of these misconceptions and provides a correct statement. This study recommends that the assumptions of unidimensionality and tau-equivalency be examined before the application of alpha and that structural equation modeling (SEM)–based reliability estimators be substituted for alpha when one of these conditions is not satisfied. This study also provides formulas for SEM-based reliability estimators that do not rely on matrix notation and step-by-step explanations for the computation of SEM-based reliability estimates.
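As a point of reference for the coefficient under discussion, the classical alpha formula is easy to compute directly; the following is a minimal Python sketch of standard coefficient alpha (not the SEM-based estimators the paper derives), where the respondents-by-items matrix `scores` is a hypothetical input:

```python
import numpy as np

def cronbach_alpha(scores):
    """Standard coefficient alpha for a respondents-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```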
In this paper, the consistency of the nearest neighbor estimator of the density function based on widely orthant dependent (WOD, in short) samples is investigated. The convergence rate of strong consistency is established, together with the complete consistency, uniform complete consistency, and uniform strong consistency of the nearest neighbor estimator of the density function based on WOD samples. The results established in this paper generalize or improve the corresponding results for independent samples and some negatively dependent samples.
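For orientation, the estimator whose consistency is being studied has, in its classical one-dimensional form, the value f_n(x) = k_n / (2 n R_{k_n}(x)), where R_{k_n}(x) is the distance from x to its k_n-th nearest sample point. A minimal Python sketch under that standard definition (the choice of `k` is a hypothetical smoothing parameter):

```python
import numpy as np

def nn_density(x, sample, k):
    """k-nearest-neighbor density estimate at a point x (one-dimensional):
    f_n(x) = k / (2 * n * R_k(x)), where R_k(x) is the distance from x
    to its k-th nearest sample point."""
    sample = np.asarray(sample, dtype=float)
    n = sample.size
    r_k = np.sort(np.abs(sample - x))[k - 1]  # distance to k-th nearest point
    return k / (2 * n * r_k)
```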
It is crucial to distinguish mislabeled samples when dealing with noisy labels. Previous methods such as "Co-teaching" and "JoCoR" introduce two different networks to select clean samples out of the noisy ones and only use these clean samples to train the deep models. Different from these methods, which require training two networks simultaneously, we propose a simple and effective method to identify clean samples using only a single network. We observe that clean samples tend to yield consistent predictions for the original images and the transformed images, while noisy samples usually suffer from inconsistent predictions. Motivated by this observation, we propose a noisy label detection approach, named Transform Consistency Network (TC-Net), which constrains the transform consistency (i.e., category consistency and visual attention consistency) between the original images and the transformed images during network training. We can then select small-loss samples to update the parameters of the network. Furthermore, in order to mitigate the negative influence of noisy labels, we design a classification loss that uses off-line hard labels and on-line soft labels to provide more reliable supervision for training a robust model. We conduct comprehensive experiments on the CIFAR-10, CIFAR-100, and Clothing1M datasets. Compared with the clean sample selection baselines, we achieve state-of-the-art performance; in most cases, our proposed method outperforms the baselines by a large margin.
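To make the single-network selection idea concrete, here is a schematic PyTorch sketch, not the authors' TC-Net implementation: the symmetric-KL consistency term, the use of cross-entropy for the small-loss criterion, and the `keep_ratio` parameter are all illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def select_presumed_clean(model, images, transformed, labels, keep_ratio=0.7):
    """Schematic clean-sample selection with one network: penalize
    disagreement between predictions on the original and transformed
    views, then keep the smallest-loss fraction of the batch."""
    logits = model(images)
    p_orig = F.softmax(logits, dim=1)
    p_trans = F.softmax(model(transformed), dim=1)
    # category consistency: symmetric KL divergence between the two views
    consistency = (F.kl_div(p_orig.log(), p_trans, reduction="none").sum(1)
                   + F.kl_div(p_trans.log(), p_orig, reduction="none").sum(1))
    loss = F.cross_entropy(logits, labels, reduction="none") + consistency
    n_keep = max(1, int(keep_ratio * loss.numel()))
    return torch.topk(-loss, n_keep).indices  # indices treated as clean
```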
It is shown that the accepted proof of the anti-symmetrised geminal power (AGP) wave function's lack of size consistency is not general enough to settle the size consistency of the AGP wave function. The origin of the size consistency problem for the AGP wave function in previous proofs is shown to stem from the perceived notion that the natural orbitals of the AGP can always be localised or guessed a priori. We show here that, by applying different constraints on a more general geminal coefficient matrix, the ionised/electron-attached determinants can be eliminated in different ways in a spin-restricted basis, which is not possible in the accepted proof. Furthermore, it is shown how different constraints on the coefficients in the geminal coefficient matrix can lead to different ionisation channels upon dissociation. We discuss the consequences of generating natural orbitals from the solution of the AGP using a more general coefficient matrix. Finally, the modern use of the natural AGP as a reference function for another correlation method is discussed, and improvements to the orbitals used in the modern AGP are suggested.
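For readers outside the geminal literature, the AGP ansatz at issue is conventionally written as an N_p-fold power of a single geminal creation operator; the following standard form is assumed from the general literature rather than quoted from the paper:

```latex
\lvert \mathrm{AGP} \rangle \;=\; \bigl( \hat{\Gamma}^{\dagger} \bigr)^{N_p} \lvert \mathrm{vac} \rangle,
\qquad
\hat{\Gamma}^{\dagger} \;=\; \sum_{pq} C_{pq}\, a_{p}^{\dagger} a_{q}^{\dagger},
```

where C_{pq} is the geminal coefficient matrix; bringing C_{pq} to a paired form in a natural orbital basis appears to be the step whose a priori availability the authors question.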
Abstract
Empirical studies in psychology commonly report Cronbach's alpha as a measure of internal consistency reliability despite the fact that many methodological studies have shown that Cronbach's alpha is riddled with problems stemming from unrealistic assumptions. In many circumstances, violating these assumptions yields estimates of reliability that are too small, making measures look less reliable than they actually are. Although methodological critiques of Cronbach's alpha are being cited with increasing frequency in empirical studies, in this tutorial we discuss how the trend is not necessarily improving the methodology used in the literature. That is, many studies continue to use Cronbach's alpha without regard for its assumptions or merely cite methodological articles advising against its use to rationalize unfavorable Cronbach's alpha estimates. This tutorial first provides evidence that recommendations against Cronbach's alpha have not appreciably changed how empirical studies report reliability. Then, we summarize the drawbacks of Cronbach's alpha conceptually, without relying on mathematical or simulation-based arguments, so that these arguments are accessible to a broad audience. We continue by discussing several alternative measures that make less rigid assumptions and provide justifiably higher estimates of reliability compared to Cronbach's alpha. We conclude with empirical examples to illustrate the advantages of alternative measures of reliability, including omega total, Revelle's omega total, the greatest lower bound, and Coefficient H. A detailed software appendix is also provided to help researchers implement alternative methods.
Translational Abstract
Scales are commonly used in psychological research to measure directly unobservable constructs like motivation or depression. These scales comprise multiple items, each aiming to provide information about various aspects of the construct of interest. Whenever a scale is used in a psychological study, it is important to report on its reliability. Since the 1950s, the primary method for capturing reliability has been Cronbach's alpha, a method whose status is perhaps best exemplified by its place as one of the most cited scientific articles of all time, in any field. Despite its overwhelming popularity, the underlying assumptions of Cronbach's alpha have been questioned recently in the statistical literature because these assumptions were commonplace 65 years ago but have largely disappeared from more modern statistical methods for constructing scales. Though the ideas in these statistical articles have the potential to significantly alter how psychological research is conducted and reported, recommendations from the statistical literature have yet to permeate the psychological literature. In this article, the goal is to demonstrate why Cronbach's alpha is no longer the optimal method for reporting on reliability. To differentiate this article from articles appearing in the statistical literature, we approach the issues with Cronbach's alpha with very little focus on mathematical or computational detail, illustrating the deficiencies of Cronbach's alpha in words and examples rather than proofs and simulations, so that these ideas can reach a larger group of researchers, namely the researchers who most often report Cronbach's alpha.
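As a concrete companion to the alternatives named in the abstract above, here is a minimal Python sketch of one of them, omega total under a single-factor model; the formula is the standard one from the reliability literature, and the example loadings are hypothetical (the article's own software appendix covers the full set of estimators):

```python
import numpy as np

def omega_total(loadings, error_variances):
    """McDonald's omega total for a single-factor model:
    omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
    """
    loadings = np.asarray(loadings, dtype=float)
    common = loadings.sum() ** 2                 # variance due to the common factor
    return common / (common + np.sum(error_variances))

# Hypothetical four-item example with unequal loadings
print(omega_total([0.8, 0.7, 0.6, 0.5], [0.36, 0.51, 0.64, 0.75]))  # ~0.75
```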
In this paper, we develop two regression methods that transform hesitant fuzzy preference relations (HFPRs) into fuzzy preference relations (FPRs). On the basis of complete consistency, reduced FPRs with the highest consistency levels can be derived from HFPRs. Compared with a straightforward method, this regression method is more efficient in the MATLAB environment. Based on weak consistency, another regression method is developed to transform HFPRs into reduced FPRs that satisfy weak consistency. Two algorithms are proposed for the two regression methods, and some examples are provided to verify the practicality and superiority of the proposed methods.
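The two consistency notions invoked above are usually defined as follows for an FPR P = (p_{ij}); these are the standard definitions from the preference relation literature, assumed rather than quoted from the paper:

```latex
\text{additive (complete) consistency:}\quad p_{ij} + p_{jk} + p_{ki} = \tfrac{3}{2} \;\; \forall\, i,j,k;
\qquad
\text{weak consistency:}\quad p_{ij} \ge \tfrac{1}{2} \;\wedge\; p_{jk} \ge \tfrac{1}{2} \;\Rightarrow\; p_{ik} \ge \tfrac{1}{2}.
```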
Void lensing as a test of gravity
Baker, Tessa; Clampitt, Joseph; Jain, Bhuvnesh, et al.
Physical Review D, 07/2018, Volume 98, Issue 2
Journal Article
Peer reviewed
Open access
We propose a consistency test of gravity based on the weak lensing signal of cosmic voids. For a given void profile, as traced by galaxies, the lensing signal can vary in different gravity theories. Thus the comparison of the lensing shear profile of such voids with the general relativistic prediction can test for deviations from general relativity (GR). For concreteness, we calculate the expected lensing signal in two gravity theories involving scalar fields with derivative couplings. We find that the scalar field has the potential to boost the tangential shear both within and outside the void radius. Reversing the method, one can infer the void central density parameter from the lensing signal, and compare to the value estimated independently using the galaxy tracer profiles of voids. Hence, one can check for consistency between the behavior of light and matter under the assumption of GR. We use voids traced by luminous red galaxies in SDSS to demonstrate our methodology, finding that the void central density parameter can shift from its GR value by up to 20% in some Galileon gravity models. Although Galileon gravity is now disfavored as a source of cosmic acceleration by other data sets, the methods we demonstrate here can be used to test for more general fifth force effects with upcoming void lensing data.
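For orientation, the lensing observable being compared against the GR prediction is the standard tangential shear profile; the conventions below are the usual weak-lensing ones, not notation taken from the paper:

```latex
\gamma_t(R) \;=\; \frac{\Delta\Sigma(R)}{\Sigma_{\mathrm{c}}},
\qquad
\Delta\Sigma(R) \;=\; \bar{\Sigma}(<R) - \Sigma(R),
```

where Σ(R) is the projected surface mass density at transverse radius R, Σ̄(<R) is its mean interior to R, and Σ_c is the critical surface density of the lens-source configuration. Inside an underdense void, Σ̄(<R) − Σ(R) is negative, so the tangential shear carries the opposite sign to that of a cluster lens.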
In order to extend the best-worst method (BWM) to uncertain circumstances, in this paper we propose an intuitionistic fuzzy multiplicative best-worst method (IFMBWM) with intuitionistic fuzzy multiplicative preference relations (IFMPRs) for multi-criteria group decision making. First of all, we aggregate the individual IFMPRs provided by the decision makers into a collective one by using the intuitionistic fuzzy multiplicative weighted geometric aggregation (IFMWGA) operator. Afterwards, we design an algorithm to rank the criteria according to the membership degrees of the intuitionistic fuzzy assessments, which can be used to identify the best and worst criteria by calculating the out-degrees and in-degrees of the directed network of the collective IFMPR. Furthermore, based on the new definition of the multiplicative consistent IFMPR, we develop several max-min programming models to derive the weights of criteria, and then propose a consistency ratio to check the reliability of the derived results. The procedure of the IFMBWM is provided for the convenience of practical applications. Finally, a numerical example concerning the evaluation of the severity of pulmonary emphysema is given to illustrate the proposed method.
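As a rough illustration of the degree-based identification step described above, here is a schematic Python sketch that works with plain scalar membership degrees; this is a deliberate simplification, since the paper operates on full intuitionistic fuzzy multiplicative values rather than scalars:

```python
import numpy as np

def best_and_worst(mu):
    """Schematic criterion ranking from a collective preference matrix.
    mu[i, j] is a (scalar) membership degree that criterion i is
    preferred to criterion j; draw an edge i -> j when mu[i, j] > 0.5.
    The best criterion maximizes out-degree, the worst maximizes in-degree."""
    edges = np.asarray(mu) > 0.5
    out_degree = edges.sum(axis=1)   # how many criteria i dominates
    in_degree = edges.sum(axis=0)    # how many criteria dominate i
    return int(np.argmax(out_degree)), int(np.argmax(in_degree))
```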
In this paper, we present a new method for group decision making with incomplete fuzzy preference relations based on additive consistency and order consistency. We estimate unknown preference values based on additive consistency and then construct the consistency matrix, which satisfies additive consistency and order consistency simultaneously, for aggregation. The existing group decision making methods may not satisfy order consistency for aggregation in some situations; the proposed method overcomes this drawback. It provides a useful way for group decision making with incomplete fuzzy preference relations based on additive consistency and order consistency.
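A minimal sketch of the estimation step described above, using the additive-consistency relation p_{ik} = p_{ij} + p_{jk} − 0.5 averaged over intermediate alternatives; this is a common scheme in the incomplete-FPR literature, and the paper's aggregation and order-consistency machinery is omitted:

```python
import numpy as np

def estimate_missing(p, i, k):
    """Estimate an unknown preference value p[i, k] via additive
    consistency, averaging p[i, j] + p[j, k] - 0.5 over every
    intermediate j whose entries are known (np.nan marks unknowns)."""
    n = p.shape[0]
    candidates = [p[i, j] + p[j, k] - 0.5 for j in range(n)
                  if j not in (i, k)
                  and not np.isnan(p[i, j]) and not np.isnan(p[j, k])]
    return float(np.mean(candidates)) if candidates else np.nan
```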