Advances in experimental methods have resulted in the generation of enormous volumes of data across the life sciences. Hence clustering and classification techniques that were once predominantly the domain of ecologists are now being used more widely. This 2006 book provides an overview of these important data analysis methods, from long-established statistical methods to more recent machine learning techniques. It aims to provide a framework that will enable the reader to recognise the assumptions and constraints that are implicit in all such techniques. Important generic issues are discussed first, and then the major families of algorithms are described. Throughout, the focus is on explanation and understanding, and readers are directed to other resources that provide additional mathematical rigour when it is required. Examples taken from across the whole of biology, including bioinformatics, are provided throughout the book to illustrate the key concepts and each technique's potential.
Engaging and accessible, this book offers students a complete guide to using NVivo for qualitative data analysis. Drawing on their wealth of expertise, the authors offer detailed, practical advice that relates to students' own experience and research projects. Packed with real-world examples and case studies, the book supports students through every stage of qualitative data analysis. The third edition contains fully integrated instructions for using NVivo on both Mac and PC, with screenshots and click-by-click guidance; seamlessly interweaves theory and practice in easy-to-follow steps; and empowers students to develop their critical thinking. Accompanied by video tutorials for both Mac and PC, web links and a host of other helpful online resources, this step-by-step book removes students' anxiety about tackling data analysis. Whether for advanced researchers or those approaching the task for the first time, this clear yet comprehensive guide is the perfect companion for anyone doing qualitative data analysis with NVivo.
This invaluable manual from world-renowned expert Johnny Saldana illuminates the process of qualitative coding and provides clear, insightful guidance for qualitative researchers at all levels. The fourth edition includes a range of updates that build upon the huge success of the previous editions: * A structural reformat has increased accessibility; the three sections of the previous edition are now spread over 15 chapters for easier sectional reference * Two new first cycle coding methods, Metaphor Coding and Themeing the Data: Categorically, join the 33 others in the collection * A brand new companion website provides links to SAGE journal articles, sample transcripts, links to CAQDAS sites, student exercises, and links to video and digital content * Analytic software screenshots and academic references have been updated, and several new figures have been added throughout the manual. It remains the only book that looks specifically at coding qualitative data, a core but often neglected skill that researchers and students alike need in order to make sense of their data and to identify patterns before they can analyse the material. Saldana presents a range of coding options with their advantages and disadvantages to help researchers choose the most appropriate approach for their project, reinforcing his perspective with real-world examples used to show step-by-step processes and to demonstrate important skills.
Provides a coherent and comprehensive account of the theory and practice of real-time human disease outbreak detection, explicitly recognizing the revolution in practices of infection control and public health surveillance. * Reviews the current mathematical, statistical, and computer science systems for early detection of disease outbreaks * Provides extensive coverage of existing surveillance data * Discusses experimental methods for data measurement and evaluation * Addresses engineering and practical implementation of effective early detection systems * Includes real case studies
In 1948 the first randomized controlled trial was published by the English Medical Research Council in the British Medical Journal. Until then, observations had been uncontrolled. Initially, trials frequently did not confirm the hypotheses to be tested. This phenomenon was attributed to low sensitivity due to small samples, as well as to inappropriate hypotheses based on biased prior trials. Additional flaws were recognized and subsequently better accounted for. Remedies for such flaws, which were mainly of a technical nature, have been largely implemented and after 1970 led to trials of significantly better quality than before. The past decade focused, in addition to technical aspects, on the need for circumspection in the planning and conduct of clinical trials. As a consequence, prior to approval, clinical trial protocols are now routinely scrutinized by various oversight bodies, including ethics committees, institutional and federal review boards, national and international scientific organizations, and monitoring committees charged with conducting interim analyses. This third edition not only explains classical statistical analyses of clinical trials, but also addresses relatively novel issues, including equivalence testing, interim analyses, sequential analyses and meta-analyses, and provides a framework of the best statistical methods currently available for these purposes.
This text details the basics and the latest developments in densitometry. This edition, updated and expanded, covers new applications and includes new material on radiation safety and an entire appendix devoted to the recent ISCD Position Development Conference.
Introduction to Statistical Analysis of Laboratory Data presents a detailed discussion of important statistical concepts and methods of data presentation and analysis. * Provides detailed discussions of statistical applications, including a comprehensive package of statistical tools that are specific to the laboratory experiment process * Introduces terminology used in many applications, such as the interpretation of assay design and validation, as well as "fit for purpose" procedures, including real-world examples * Includes a rigorous review of statistical quality control procedures in laboratory methodologies and influences on capabilities * Presents methodologies used in areas such as method comparison procedures, limit and bias detection, outlier analysis and detecting sources of variation * Introduces analysis of robustness and ruggedness, including multivariate influences on response, to account for controllable/uncontrollable laboratory conditions
The explanation of psychological phenomena is a central aim of psychological science. However, the nature of explanation and the processes by which we evaluate whether a theory explains a phenomenon are often unclear. Consequently, it is often unknown whether a given psychological theory indeed explains a phenomenon. We address this shortcoming by proposing a productive account of explanation: a theory explains a phenomenon to some degree if and only if a formal model of the theory produces the statistical pattern representing the phenomenon. Using this account, we outline a workable methodology of explanation: (a) explicating a verbal theory into a formal model, (b) representing phenomena as statistical patterns in data, and (c) assessing whether the formal model produces these statistical patterns. In addition, we provide three major criteria for evaluating the goodness of an explanation (precision, robustness, and empirical relevance), and examine some cases of explanatory breakdowns. Finally, we situate our framework within existing theories of explanation from philosophy of science and discuss how our approach contributes to constructing and developing better psychological theories. (PsycInfo Database Record (c) 2024 APA, all rights reserved) (Source: journal abstract)
Multicollinearity represents a high degree of linear intercorrelation between explanatory variables in a multiple regression model and leads to incorrect results of regression analyses. Diagnostic tools for multicollinearity include the variance inflation factor (VIF), the condition index and condition number, and the variance decomposition proportion (VDP). Multicollinearity can be expressed by the coefficient of determination (R_h²) of a multiple regression model with one explanatory variable (X_h) as the model's response variable and the others (X_i, i ≠ h) as its explanatory variables. The variance (σ_h²) of each regression coefficient in the final regression model is proportional to the VIF. Hence, an increase in R_h² (strong multicollinearity) increases σ_h². A larger σ_h² produces unreliable probability values and confidence intervals for the regression coefficients. The square root of the ratio of the maximum eigenvalue to each eigenvalue of the correlation matrix of the standardized explanatory variables is referred to as the condition index; the condition number is the maximum condition index. Multicollinearity is present when the VIF is higher than 5 to 10 or the condition indices are higher than 10 to 30. However, these measures alone cannot identify which explanatory variables are multicollinear. VDPs obtained from the eigenvectors can identify the multicollinear variables by showing the extent of the inflation of σ_h² attributable to each condition index. When two or more VDPs that correspond to a common condition index higher than 10 to 30 are higher than 0.8 to 0.9, their associated explanatory variables are multicollinear. Excluding multicollinear explanatory variables leads to statistically stable multiple regression models.
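The diagnostics described above can be sketched numerically. A minimal illustration using NumPy (the data and variable names are hypothetical, constructed so that one predictor is nearly collinear with another): the VIFs appear on the diagonal of the inverse correlation matrix, the condition indices come from its eigenvalues, and the VDPs from its eigenvectors.

```python
import numpy as np

# Hypothetical data: x3 is nearly collinear with x1, x2 is independent.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = 0.95 * x1 + 0.05 * rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# Standardize the explanatory variables and form their correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
R = np.corrcoef(Z, rowvar=False)

# VIF_h = 1 / (1 - R_h^2), which equals the diagonal of the inverse
# correlation matrix.
vif = np.diag(np.linalg.inv(R))

# Condition indices: sqrt(lambda_max / lambda_i); the condition number
# is the largest condition index.
eigvals, eigvecs = np.linalg.eigh(R)
cond_idx = np.sqrt(eigvals.max() / eigvals)
cond_number = cond_idx.max()

# VDPs: share of each coefficient's variance attributable to each
# eigenvalue; row h holds the proportions for variable h, normalized to 1.
phi = eigvecs**2 / eigvals
vdp = phi / phi.sum(axis=1, keepdims=True)

print("VIF:", np.round(vif, 2))
print("condition number:", round(float(cond_number), 1))
```

With this construction, the VIFs of x1 and x3 far exceed the 5-to-10 threshold and the condition number exceeds 10-30, while the VIF of x2 stays near 1; the large VDPs of x1 and x3 on the smallest eigenvalue then single them out as the multicollinear pair.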
Statistical mechanics relies on the maximization of entropy in a system at thermal equilibrium. However, an isolated quantum many-body system initialized in a pure state remains pure during Schrödinger evolution, and in this sense it has static, zero entropy. We experimentally studied the emergence of statistical mechanics in a quantum state and observed the fundamental role of quantum entanglement in facilitating this emergence. Microscopy of an evolving quantum system indicates that the full quantum state remains pure, whereas thermalization occurs on a local scale. We directly measured entanglement entropy, which assumes the role of the thermal entropy in thermalization. The entanglement creates local entropy that validates the use of statistical physics for local observables. Our measurements are consistent with the eigenstate thermalization hypothesis.
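As background (a standard definition, not taken from the abstract itself): the entanglement entropy measured here is the von Neumann entropy of a subsystem's reduced density matrix,

```latex
S_A = -\operatorname{Tr}\!\left(\rho_A \ln \rho_A\right),
\qquad
\rho_A = \operatorname{Tr}_B \, \lvert\psi\rangle\langle\psi\rvert .
```

For a global pure state |ψ⟩, S_A is nonzero only when subsystem A is entangled with the remainder B; this is the sense in which entanglement supplies the local entropy that permits a thermal description of local observables even though the full state stays pure.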