Q-interactive is a relatively new technology-based individualized testing platform developed by Pearson, Inc. for use by practitioners as an alternative to the traditional paper-and-pencil method of individualized testing. The potential utility of this assessment format for both practicing psychologists and trainers of psychologists is explored, including positive and negative initial reactions to the program and first impressions from a number of first-time users. The new technology was implemented as part of a testing course for graduate students from 3 different graduate programs, and data were collected over 1.5 years to investigate the utility of Q-interactive as a test administration method, identify potential problems with this testing format, and explore graduate student user impressions. No differences were noted among graduate student ratings of test administration experiences, regardless of the administration method learned initially. Significant differences were found, however, in students' impressions of volunteer client engagement, eagerness to participate, and enjoyment of testing, with volunteer clients rated as more engaged, more eager, and having more fun when presented with technology-based materials. Interestingly, although the majority of students indicated a strong preference for one administration format over the other, the number preferring a technology-enhanced administration was only slightly higher, and most preferred to learn using the paper-and-pencil format initially. Implications for practitioners, supervisors, and instructors are discussed.
Emotions measures represent an important means of obtaining construct validity evidence for emotional intelligence (EI) tests because they have the same theoretical underpinnings. Additionally, the extent to which both emotions and EI measures relate to intelligence is poorly understood. The current study was designed to address these issues. Participants (N = 138) completed the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), two emotions measures, and four intelligence tests. Results provide mixed support for the model hypothesized to underlie the MSCEIT, with the emotions and EI measures failing to load on the same factor. The emotions measures loaded on the same factor as the intelligence measures. The validity of certain EI components (in particular, Emotion Perception), as currently assessed, appears equivocal.
Decline in cognitive ability is a core diagnostic criterion for dementia. Knowing the extent of decline requires a baseline score from which change can be reckoned. In the absence of prior cognitive ability scores, vocabulary-based cognitive tests are used to estimate premorbid cognitive ability. It is important that such tests are short yet informative, to maximize information and practicability. The National Adult Reading Test (NART) is commonly used to estimate premorbid intelligence. People are asked to pronounce 50 words ranging from easy to difficult, but whether its words conform to a hierarchy is unknown. Five hundred eighty-seven healthy community-dwelling older people with known age-11 IQ scores completed the NART as part of the Lothian Birth Cohort 1936 study. Mokken analysis was used to explore item responses for unidimensional, ordinal, and hierarchical scales. A strong hierarchical scale (the "mini-NART") of 23 of the 50 items was identified; these items are invariantly ordered across all ability levels. The validity of interpreting this briefer scale's score as an estimate of premorbid ability was examined using the actual age-11 IQ score. The mini-NART accounted for a similar amount of the variance in age-11 IQ as the full NART (NART = 46.5%, mini-NART = 44.8%). The mini-NART is proposed as a useful short clinical tool to estimate prior cognitive ability: its highly discriminatory, invariantly ordered items allow sensitive measurement and adaptive testing, reducing administration time and patient stress.
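The scalability idea behind the Mokken analysis described above can be sketched with Loevinger's H coefficient for dichotomous items: a value near 1 indicates few Guttman errors (passing a harder item while failing an easier one), and the hierarchy holds. This is a simplified illustration under stated assumptions, not the full Mokken procedure (which also involves item-selection searches, standard errors, and invariant-ordering checks); the function name and the toy data are hypothetical.

```python
def loevinger_H(data):
    """Scalability coefficient H for dichotomous item-response data.

    data: list of respondent rows, each a list of 0/1 item scores in the
    same item order. A Guttman error is passing a harder item (lower pass
    rate) while failing an easier one; H = 1 - observed / expected errors,
    where "expected" assumes the two items are independent.
    """
    n = len(data)
    k = len(data[0])
    pass_rate = [sum(row[j] for row in data) / n for j in range(k)]
    # Order items from easiest (highest pass rate) to hardest.
    order = sorted(range(k), key=lambda j: -pass_rate[j])
    obs = exp = 0.0
    for a in range(k):
        for b in range(a + 1, k):
            easy, hard = order[a], order[b]
            # Observed Guttman errors: failed the easy item, passed the hard one.
            obs += sum(1 for row in data if row[easy] == 0 and row[hard] == 1)
            # Expected error count under independence of the item pair.
            exp += (1 - pass_rate[easy]) * pass_rate[hard] * n
    return 1 - obs / exp

# A perfect Guttman pattern (no errors) yields H = 1; Mokken practice treats
# H >= 0.5 as a "strong" scale, the criterion behind the mini-NART.
perfect = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
h_perfect = loevinger_H(perfect)
```

In practice this analysis is done with dedicated software (e.g., the R `mokken` package), which adds the item-selection and invariance machinery omitted here.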
This study compared 3 measures of the team emotional intelligence (EI) construct: an individual-referent subjective measure, an individual-referent performance measure, and a team-referent measure. Results showed that when using emotion-related variables (e.g., team relationship conflict and cohesion) as criterion variables, the team-referent EI measure was the strongest predictor and demonstrated incremental validity over both of the individual-referent measures. Furthermore, the individual-referent subjective measure demonstrated marginal incremental validity over the individual-referent performance measure. These results were not found when task-related variables, such as task conflict and performance, were used as criteria. Implications of the results are discussed.
Global composites (e.g., IQs) calculated in intelligence tests are interpreted as indexes of the general factor of intelligence, or psychometric g. It is therefore important to understand the proportion of variance in those global composites that is explained by g. In this study, we calculated this value, referred to as hierarchical omega, using large-scale, nationally representative norming sample data from 3 popular individually administered tests of intelligence for children and adolescents. We also calculated the proportion of variance explained in the global composites by g and the group factors, referred to as omega total, or composite reliability, for comparison purposes. Within each battery, g was measured equally well. Using total sample data, we found that 82%-83% of the total test score variance was explained by g. The group factors were also measured in the global composites, with both g and group factors explaining 89%-91% of the total test score variance for the total samples. Global composites are primarily indexes of g, but the group factors, as a whole, also explain a meaningful amount of variance.
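The omega coefficients named above can be illustrated with a small computation. This is a minimal sketch under standard bifactor assumptions (standardized items, uncorrelated factors, an equally weighted composite); the loadings and factor names below are hypothetical, not taken from any of the batteries studied.

```python
# Hypothetical standardized bifactor loadings for a 10-subtest battery:
# every subtest loads on g and on exactly one of three group factors.
g_loadings = [0.75, 0.70, 0.72, 0.68, 0.65, 0.80, 0.78, 0.60, 0.62, 0.70]
group_loadings = {
    "verbal":     {0: 0.40, 1: 0.35, 2: 0.30},
    "perceptual": {3: 0.45, 4: 0.38, 5: 0.32},
    "memory":     {6: 0.30, 7: 0.50, 8: 0.42, 9: 0.36},
}

def omega_coefficients(g, groups):
    """Return (omega_hierarchical, omega_total) for the sum composite.

    omega_hierarchical = variance due to g alone / total composite variance;
    omega_total adds the group factors' variance to the numerator.
    """
    n = len(g)
    # Map each item to its single group-factor loading.
    group_of = {}
    for items in groups.values():
        group_of.update(items)
    # Unique (error) variance per standardized item.
    uniq = [1 - g[i] ** 2 - group_of.get(i, 0.0) ** 2 for i in range(n)]
    general_var = sum(g) ** 2
    group_var = sum(sum(items.values()) ** 2 for items in groups.values())
    total_var = general_var + group_var + sum(uniq)
    return general_var / total_var, (general_var + group_var) / total_var

omega_h, omega_t = omega_coefficients(g_loadings, group_loadings)
# omega_h < omega_t by construction: the gap is the group factors' share.
```

The abstract's pattern (82%-83% for g alone vs. 89%-91% with group factors included) corresponds to the gap between omega hierarchical and omega total; real analyses estimate the loadings from a fitted bifactor or higher-order model rather than assuming them.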
Intelligence Gottfredson, Linda; Saklofske, Donald H
Canadian psychology = Psychologie canadienne,
08/2009, Volume 50, Issue 3
Journal Article
Peer-reviewed
Open access
There is no more central topic in psychology than intelligence and intelligence testing. With a history as long as psychology itself, intelligence is the most studied and likely the best understood construct in psychology, albeit still with many "unknowns." The psychometric sophistication employed in creating intelligence tests is at the highest level. The authors provide an overview of the history, theory, and assessment of intelligence. Five questions are proposed and discussed that focus on key areas of confusion or misunderstanding associated with the measurement and assessment of intelligence.
No topic is more central to psychology than intelligence and its measurement. With a history as old as psychology itself, intelligence is the most studied and perhaps the best understood construct in psychology, even if many "unanswered questions" remain. The psychometric sophistication of intelligence tests has reached unequalled levels. The authors provide an overview of the history, theory, and measurement of intelligence. Five questions bearing on key areas of confusion or misunderstanding associated with the measurement and assessment of intelligence are raised and discussed.
Intelligence Neisser, Ulric; Boodoo, Gwyneth; Bouchard, Thomas J ...
The American psychologist,
02/1996, Volume 51, Issue 2
Journal Article
Peer-reviewed
In the fall of 1994, the publication of Herrnstein and Murray's book The Bell Curve sparked a new round of debate about the meaning of intelligence test scores and the nature of intelligence. The debate was characterized by strong assertions as well as by strong feelings. Unfortunately, those assertions often revealed serious misunderstandings of what has (and has not) been demonstrated by scientific research in this field. Although a great deal is now known, the issues remain complex and in many cases still unresolved. Another unfortunate aspect of the debate was that many participants made little effort to distinguish scientific issues from political ones. Research findings were often assessed not so much on their merits or their scientific standing as on their supposed political implications. In such a climate, individuals who wish to make their own judgments find it hard to know what to believe.
Reviewing the intelligence debate at its meeting of November 1994, the Board of Scientific Affairs (BSA) of the American Psychological Association (APA) concluded that there was urgent need for an authoritative report on these issues, one that all sides could use as a basis for discussion. Acting by unanimous vote, BSA established a Task Force charged with preparing such a report. Ulric Neisser, Professor of Psychology at Emory University and a member of BSA, was appointed Chair. The APA Board on the Advancement of Psychology in the Public Interest, which was consulted extensively during this process, nominated one member of the Task Force; the Committee on Psychological Tests and Assessment nominated another; a third was nominated by the Council of Representatives. Other members were chosen by an extended consultative process, with the aim of representing a broad range of expertise and opinion.
The Task Force met twice, in January and March of 1995. Between and after these meetings, drafts of the various sections were circulated, revised, and revised yet again. Disputes were resolved by discussion. As a result, the report presented here has the unanimous support of the entire Task Force.
With the introduction of the iPad-based Q-interactive platform for cognitive ability and achievement test administration, psychology training programs need to adapt to effectively train doctoral-level psychologists to be competent in administering, scoring, and interpreting assessment instruments. This article describes the implications for graduate training of moving to iPad-mediated administration of the Wechsler intelligence tests using the Q-interactive program by Pearson. We enumerate differences between Q-interactive and traditional assessment administration, including cost structure, technological requirements, and approach to administration. Changes to coursework, practicum, and supervision and evaluation of assessment competencies are discussed. The benefits of Q-interactive include reduced testing and training time and the decrease or elimination of many types of administration and scoring errors. However, new training challenges are introduced, including the need to be proficient at troubleshooting technology, changes in rapport-building with clients, and assessing and facilitating clients' comfort with the platform. Challenges for course instructors and practicum supervisors include deciding which testing modality to use, increased difficulty evaluating some aspects of administration and scoring competency, and the potential for more frequent updates requiring additional training and updating of skills. We discuss the training implications of this new platform and make specific suggestions for how training programs may respond to these changes and integrate iPad administration into their courses and practicum.
CHC Model According to Weiss Pezzuti, Lina; Lang, Margherita; Rossetti, Serena ...
Journal of individual differences,
2018, Volume 39, Issue 1
Journal Article
Peer-reviewed
The Italian version of the Wechsler Adult Intelligence Scale - Fourth Edition (WAIS-IV) was standardized using a sample of 2,174 participants aged between 16 and 90 years. The WAIS-IV consists of 10 core subtests and 5 supplemental subtests. While the 70-90 year age group is usually excluded from three of the five supplemental subtests (Letter-Number Sequencing, Figure Weights, and Cancellation), we administered all 15 subtests to both adults and elderly people. The aim of the present study was to investigate the factorial invariance of Weiss and colleagues' hierarchical five-factor Cattell-Horn-Carroll (CHC) model in Italian adults and elders. The overall results of this study generally support both the configural and factorial invariance of the WAIS-IV, and hence Weiss's five-factor CHC model is equivalent in adults and elderly people. However, for the elderly sample we found higher loadings of WAIS-IV subtests on the second-order g factor.
The detrimental effects of anxiety on cognitive performance have been explained by the activation of worry, which diverts attention away from the task at hand. However, recent research showed that anxiety is related to performance only when self-control capacity is low (i.e., ego depletion). The present work extends these findings by showing that activation of worry interferes with cognitive performance more strongly when self-control capacity is momentarily depleted as compared to intact. After manipulations of self-control capacity and worry activation, 70 undergraduates completed a standardized intelligence test. As expected, activation of worry was associated with poorer performance when self-control capacity was depleted, but had no effect on performance when self-control capacity was intact. The findings indicate that worry may play a causal role in the anxiety-performance relationship, but only when its regulation by self-control is momentarily hindered.