The name “SAT” has become synonymous with college admissions testing; it has been dubbed “the gold standard.” Numerous studies of its reliability and predictive validity show that the SAT predicts college performance beyond high school grade point average. Surprisingly, studies of the factorial structure of the current version of the SAT, revised in 2005, have not been reported, if conducted. One purpose of this study was to examine the factorial structure of two administrations of the SAT (October 2010 and May 2011), testing competing models (e.g., one-factor—general ability; two-factor—mathematics and “literacy”; three-factor—mathematics, critical reading, and writing). We found support for the two-factor model, with revise-in-context writing items loading equally on the reading and writing factors, thereby bridging them into a single literacy factor. A second purpose was to draw tentative implications of this finding for the “next generation” SAT and other college readiness exams in light of Common Core State Standards Consortia efforts, suggesting that combining critical reading and writing (including the essay) would offer unique revision opportunities. More specifically, a combined reading-and-writing construct might pose a relevant problem or issue with multiple documents, which examinees would use to answer questions about the issue(s) (multiple-choice, short answer) and to write an argumentative/analytical essay based on the documents provided. In this way, there may be an opportunity to measure not only students’ literacy but also their critical thinking—key factors in assessing college readiness.
The authors examine methodological characteristics of summative evaluations in informal science education (ISE), asking: What are the major types of designs used in summative evaluations, and what kinds of questions can they answer? What are the types of data collection methods and measures used, and how many are self-reports or direct measures? They reviewed all summative reports from the year 2012 on informalscience.org, the online resources portal from the Center for Advancement of Informal Science Education. They found reliance on nonexperimental evaluation designs and heavy use of self-report instruments. If a primary function of summative evaluations is to assess impact, and impact is a causal question, then these findings are problematic; the field needs to move beyond the mostly descriptive studies found in the sample. Interviews with nine leaders in ISE and ISE evaluation help explain evaluation challenges in ISE and generate ideas for advancing the field.
Summative evaluation plays a critical role in documenting the impacts of informal science education (ISE), potentially contributing to the ISE knowledge base and informing ongoing improvements in practice and decision-making. In response to the growing demand for capacity-building in ISE evaluation, this article presents a framework for summative evaluation based on an extensive review of literature and research-based refinements. The framework synthesizes key elements of high-quality summative evaluation into three dimensions: (a) Intervention Rationale, (b) Methodological Rigor and Appropriateness, and (c) Evaluation Uses. Judgment of the value of the intervention (e.g., program, exhibition) should draw upon all three dimensions.
Learning progressions—descriptions of increasingly sophisticated ways of thinking about or understanding a topic (National Research Council, 2007)—represent a promising framework for developing organized curricula and meaningful assessments in science. In addition, well-grounded learning progressions may allow for coherence between cognitive models of how understanding develops in a given domain, classroom instruction, professional development, and classroom and large-scale assessments.
Learning progressions have captured the imagination and the rhetoric of school reformers and education researchers as one possible elixir for getting K-12 education “on track” (Corcoran et al.’s metaphor, 2009, p. 8). Indeed, the train has left the station and is rapidly gathering speed in the education reform and research communities. Because we are concerned about this enthusiasm—and the potential for internecine warfare in a competitive market for ideas—we share the Center on Continuous Instructional Improvement’s view of the state of learning progressions as quoted above. Even more, we fear that learning progressions will be adapted to fit various Procrustean beds made by researchers and reformers who seek to fix educational problems.