Aim: Higher-elevation areas on islands and continental mountains tend to be separated by longer distances, predicting higher endemism at higher elevations; our study is the first to test the generality of this predicted pattern. We also compare it empirically with contrasting expectations from hypotheses invoking higher speciation with area, temperature and species richness. Location: Thirty-two insular and 18 continental elevational gradients from around the world. Methods: We compiled entire floras with elevation-specific occurrence information and calculated the proportion of native species that are endemic ('percent endemism') in 100-m bands for each of the 50 elevational gradients. Using generalized linear models, we tested the relationships between percent endemism and elevation, isolation, temperature, area and species richness. Results: Percent endemism consistently increased monotonically with elevation, globally. This was independent of richness–elevation relationships, which had varying shapes but decreased with elevation at high elevations. The endemism–elevation relationships were consistent with isolation-related predictions but inconsistent with hypotheses related to area, richness and temperature. Main conclusions: Higher per-species speciation rates caused by increasing isolation with elevation are the most plausible and parsimonious explanation for the globally consistent pattern of higher endemism at higher elevations that we identify. We suggest that topography-driven isolation increases speciation rates in mountainous areas, across all elevations and increasingly towards the equator. If so, it represents a mechanism that may contribute to generating latitudinal diversity gradients in a way that is consistent with both present-day and palaeontological evidence.
In psychology, attempts to replicate published findings are less successful than expected. For properly powered studies, the replication rate should be around 80%, whereas in practice less than 40% of studies selected from different areas of psychology can be replicated. Researchers in cognitive psychology are hindered in estimating the power of their studies because the designs they use present a sample of stimulus materials to a sample of participants, a situation not covered by most power formulas. To remedy the situation, we review the literature on the topic and introduce recent software packages, which we apply to the data of two masked priming studies with high power. We examined how the power of each study could be estimated and how far the designs could be scaled down while remaining adequately powered. On the basis of this analysis, we recommend that a properly powered reaction time experiment with repeated measures include at least 1,600 word observations per condition (e.g., 40 participants, 40 stimuli). This is considerably more than current practice. We also show that researchers must include the number of observations in meta-analyses, because the effect sizes currently reported depend on the number of stimuli presented to the participants. Our analyses can easily be applied to newly gathered datasets.
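The core idea, estimating power by simulation when both participants and stimuli are sampled, can be illustrated with a small Monte Carlo sketch. All parameter values below (effect size, variance components, the normal approximation to the by-participant t-test) are illustrative assumptions, not figures from the study, and real designs would also need random slopes, which this sketch omits.

```python
# Monte Carlo power estimate for a repeated-measures priming design with
# crossed random effects (participants x stimuli). Parameters are invented.
import math
import random
import statistics

def simulate_power(n_participants=40, n_stimuli=40, effect_ms=20.0,
                   sd_participant=40.0, sd_stimulus=20.0, sd_residual=100.0,
                   n_sims=200, seed=1):
    """Estimate power to detect a priming effect using a by-participant
    test on condition-mean differences (normal approximation to the t-test)."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided alpha = .05
    hits = 0
    for _ in range(n_sims):
        stim_fx = [rng.gauss(0, sd_stimulus) for _ in range(n_stimuli)]
        diffs = []
        for _p in range(n_participants):
            p_fx = rng.gauss(0, sd_participant)
            primed = [effect_ms + p_fx + s + rng.gauss(0, sd_residual)
                      for s in stim_fx]
            control = [p_fx + s + rng.gauss(0, sd_residual) for s in stim_fx]
            # per-participant mean condition difference (participant and
            # stimulus effects cancel; only residual noise and effect remain)
            diffs.append(statistics.mean(primed) - statistics.mean(control))
        t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
        if abs(t) > z_crit:
            hits += 1
    return hits / n_sims

power = simulate_power()
```

With 40 participants and 40 stimuli (1,600 observations per condition, matching the recommendation above) and these assumed variances, the simulated power is high; shrinking either sample reduces it, which is the trade-off the paper quantifies.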
This Tutorial serves as both an approachable theoretical introduction to mixed-effects modeling and a practical introduction to how to implement mixed-effects models in R. The intended audience is researchers who have some basic statistical knowledge but little or no experience implementing mixed-effects models in R using their own data. In an attempt to increase the accessibility of this Tutorial, I deliberately avoid using mathematical terminology beyond what a student would learn in a standard graduate-level statistics course, but I reference articles and textbooks that provide more detail for interested readers. This Tutorial includes snippets of R code throughout; the data and R script used to build the models described in the text are available via OSF at https://osf.io/v6qag/, so readers can follow along if they wish. The goal of this practical introduction is to provide researchers with the tools they need to begin implementing mixed-effects models in their own research.
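The data structure such tutorials target, repeated observations nested within subjects, can be made concrete with a short simulation. The Tutorial itself works in R (with lme4-style formulas such as `y ~ x + (1 | subject)`); the pure-Python sketch below only illustrates the data-generating process with by-subject random intercepts, using made-up parameter values, and shows that a pooled slope estimate recovers the fixed effect when intercepts are independent of the predictor.

```python
# Simulate data with by-subject random intercepts, the core structure a
# mixed-effects model captures. All parameters are invented for illustration.
import random
import statistics

rng = random.Random(42)
b0, b1 = 2.0, 0.5               # fixed effects: intercept and slope
sd_subject, sd_resid = 1.0, 0.5  # random-intercept SD and residual SD

data = []                        # (subject, x, y) triples
for subj in range(30):
    u = rng.gauss(0, sd_subject)      # this subject's random intercept
    for _ in range(20):
        x = rng.uniform(0, 10)
        y = b0 + u + b1 * x + rng.gauss(0, sd_resid)
        data.append((subj, x, y))

# Pooled OLS slope: cov(x, y) / var(x). It is unbiased for b1 here because
# the random intercepts are independent of x, but its standard error would
# be understated without modeling the subject-level clustering.
xs = [x for _, x, _ in data]
ys = [y for _, _, y in data]
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - mx) * (y - my) for _, x, y in data)
         / sum((x - mx) ** 2 for x in xs))
```

The point of moving from this naive fit to a mixed-effects model is precisely the clustering noted in the comment: ignoring it gives overconfident inference, which is what the Tutorial's R models address.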
There is a growing demand for high‐quality soil data. However, soil measurements are subject to many error sources. We aimed to quantify uncertainties in synthetic and real‐world wet chemistry soil data through a linear mixed‐effects model including batch and laboratory effects. The use of synthetic data allowed us to investigate how accurately the model parameters were estimated under various experimental measurement designs, whereas the real‐world case served to explore whether estimates of the random‐effect variances remained accurate for unbalanced datasets with few replicates. The variance estimates for synthetic pH(H2O) data were unbiased, but limited laboratory information led to imprecise estimates. The same was observed for unbalanced synthetic datasets in which 20, 50 and 80% of the data were removed at random. Removal led to a sharp increase in the interquartile range (IQR) of the variance estimates for the batch effect and the residual. The model was also fitted to real‐world pH(H2O) and total organic carbon (TOC) data provided by the Wageningen Evaluating Programmes for Analytical Laboratories (WEPAL). For pH(H2O), the model yielded unbiased estimates with relatively small IQRs. However, the limited number of batches with replicate measurements (5.8%) caused the estimated batch effect to be larger than expected. A strong negative correlation between the batch‐effect and residual variances suggested that the model could not distinguish well between these two random effects. For TOC, the batch effect was removed from the model because no replicates were available within batches. Again, unbiased model estimates were obtained. However, the IQRs were relatively large, which could be attributed to the smaller dataset with only a single replicate measurement. Our findings demonstrate the importance of experimental measurement design and replicate measurements in the quantification of uncertainties in wet chemistry soil data.
Highlights
Accurate uncertainty quantification depends on the experimental measurement design.
Linear mixed‐effects models can be used as a tool to quantify uncertainty in wet chemistry soil data.
Lack of replicate measurements leads to poor estimates of error variance components.
Measurement error in wet chemistry soil data should not be ignored.
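The role of replicates in separating a batch variance component from residual error can be sketched with a simple method-of-moments (one-way ANOVA) estimator, a special case of what the paper's linear mixed-effects model does. The pH-like values and variance components below are invented; without within-batch replicates (n_reps = 1), the within-batch mean square is undefined and the two components cannot be separated, mirroring the TOC case above.

```python
# Method-of-moments estimation of batch and residual variance components
# from replicated measurements. All parameter values are invented.
import random
import statistics

rng = random.Random(7)
true_batch_sd, true_resid_sd = 0.15, 0.10
n_batches, n_reps = 50, 4

batches = []
for _ in range(n_batches):
    b = rng.gauss(0, true_batch_sd)              # batch effect
    batches.append([6.0 + b + rng.gauss(0, true_resid_sd)
                    for _ in range(n_reps)])     # replicate measurements

batch_means = [statistics.mean(bt) for bt in batches]
msb = n_reps * statistics.variance(batch_means)  # between-batch mean square
msw = statistics.mean([statistics.variance(bt) for bt in batches])  # within

var_batch = max((msb - msw) / n_reps, 0.0)       # batch variance component
var_resid = msw                                  # residual variance
```

With more batches and replicates the estimates tighten; with few replicated batches their sampling distributions widen sharply, which is the IQR inflation the synthetic experiments document.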
Aims
Linezolid is often used for infections caused by drug‐resistant Gram‐positive bacteria. Recent studies suggest that large between‐subject variability (BSV) and within‐subject variability could alter drug pharmacokinetics (PK) during linezolid therapy due to pathophysiological changes. This review synthesized information from linezolid population PK studies and summarized the significant covariates that influence linezolid PK.
Methods
A literature search was performed using PubMed, Web of Science and Embase from their inception to 30 September 2021. Published studies were included if they contained data analysing linezolid PK parameters in humans using a population approach with a nonlinear mixed‐effects model.
Results
Twenty‐five studies conducted in adults and five in paediatric patients were included. One‐ and two‐compartment models were the most commonly used structural models for linezolid. Body size (weight, lean body weight and body surface area), creatinine clearance (CLcr) and age significantly influenced linezolid PK. The median clearance (CL) values (ranges) in infants (0.128 L/h/kg, 0.121–0.135) and children (0.107 L/h/kg, 0.088–0.151) were higher than in adults (0.098 L/h/kg, 0.044–0.237). For patients with severe renal impairment (CLcr ≤ 30 mL/min), CL was 37.2% (15.2–55.3%) lower than in patients with normal renal function.
Conclusion
The optimal linezolid dosage should be adjusted based on the patient's body size, renal function and age. More studies are needed to explore the exact mechanism of linezolid elimination and evaluate the PK characteristics in paediatric patients.
Climate change and habitat loss are both key threatening processes driving the global loss in biodiversity. Yet little is known about their synergistic effects on biological populations due to the complexity underlying both processes. If the combined effects of habitat loss and climate change are greater than the effects of each threat individually, current conservation management strategies may be inefficient and at worst ineffective. Therefore, there is a pressing need to identify whether interacting effects between climate change and habitat loss exist and, if so, quantify the magnitude of their impact. In this article, we present a meta‐analysis of studies that quantify the effect of habitat loss on biological populations and examine whether the magnitude of these effects depends on current climatic conditions and historical rates of climate change. We examined 1319 papers on habitat loss and fragmentation, published over the past 20 years, representing a range of taxa, landscapes, land‐uses, geographic locations and climatic conditions. We find that current climate and climate change are important factors determining the negative effects of habitat loss on species density and/or diversity. The most important determinant of habitat loss and fragmentation effects, averaged across species and geographic regions, was current maximum temperature, with mean precipitation change over the last 100 years of secondary importance. Habitat loss and fragmentation effects were greatest in areas with high maximum temperatures. Conversely, they were lowest in areas where average rainfall has increased over time. To our knowledge, this is the first study to conduct a global terrestrial analysis of existing data to quantify and test for interacting effects between current climate, climatic change and habitat loss on biological populations.
Understanding the synergistic effects between climate change and other threatening processes has critical implications for our ability to support and incorporate climate change adaptation measures into policy development and management response.
Second language acquisition researchers often face particular challenges when attempting to generalize study findings to the wider learner population. For example, language learners constitute a heterogeneous group, and it is not always clear how a study's findings may generalize to other individuals who may differ in terms of language background and proficiency, among many other factors. In this paper, we provide an overview of how mixed‐effects models can be used to help overcome these and other issues in the field of second language acquisition. We provide an overview of the benefits of mixed‐effects models and a practical example of how mixed‐effects analyses can be conducted. Mixed‐effects models provide second language researchers with a powerful statistical tool in the analysis of a variety of different types of data.
Despite the emergence of SARS-CoV-2 variants and waning immunity after initial vaccination, data on antibody kinetics following booster doses, particularly those adapted to Omicron subvariants like XBB.1.5, remain limited. This study assesses the kinetics of anti-spike protein receptor-binding domain (S-RBD) IgG antibody titers post-booster vaccination in a Japanese population during the Omicron variant epidemic.
A prospective cohort study was conducted in Bizen City, Japan, from November 2023 to January 2024. Participants included residents and workers aged ≥18 years, with at least three COVID-19 vaccinations. Antibody levels were measured from venous blood samples. The study analyzed 424 participants and 821 antibody measurements, adjusting for variables such as age, sex, underlying conditions, and prior infection status. Mixed-effects models were employed to describe the kinetics of log-transformed S-RBD antibody titers.
The study found that S-RBD antibody titers declined over time but increased with the number of booster vaccinations, particularly those adapted to Omicron and its subvariant XBB.1.5 (Pfizer-BioNTech Omicron-compatible: 0.156, 95%CI −0.032 to 0.344; Pfizer-BioNTech XBB-compatible: 0.226, 95%CI −0.051 to 0.504; Moderna Omicron-compatible: 0.279, 95%CI 0.012 to 0.546; and Moderna XBB-compatible: 0.338, 95%CI −0.052 to 0.728). Previously infected individuals maintained higher antibody titers, which declined more gradually compared to uninfected individuals (coefficient for interaction with time 0.006, 95%CI 0.001 to 0.011). Sensitivity analyses using generalized estimating equations and an interval-censored random-intercept model confirmed the robustness of these findings.
The study provides specific data on antibody kinetics post-booster vaccination, including the XBB.1.5-adapted vaccine, in a highly vaccinated Japanese population. The results highlight the importance of considering individual demographics and prior infection history in optimizing vaccination strategies.
Highlights
Modeled SARS-CoV-2 antibody kinetics after boosters in a Japanese population.
Antibody levels declined over time but increased with more boosters.
Variant-adapted vaccines resulted in substantially higher antibody levels.
Previously infected individuals maintained higher, more durable antibody titers post-vaccination.
Support individualized booster strategies based on age, sex, and prior infection.
Repeatability (more precisely the common measure of repeatability, the intra‐class correlation coefficient, ICC) is an important index for quantifying the accuracy of measurements and the constancy of phenotypes. It is the proportion of phenotypic variation that can be attributed to between‐subject (or between‐group) variation. As a consequence, the non‐repeatable fraction of phenotypic variation is the sum of measurement error and phenotypic flexibility. There are several ways to estimate repeatability for Gaussian data, but there are no formal agreements on how repeatability should be calculated for non‐Gaussian data (e.g. binary, proportion and count data). In addition to point estimates, appropriate uncertainty estimates (standard errors and confidence intervals) and statistical significance for repeatability estimates are required regardless of the types of data. We review the methods for calculating repeatability and the associated statistics for Gaussian and non‐Gaussian data. For Gaussian data, we present three common approaches for estimating repeatability: correlation‐based, analysis of variance (ANOVA)‐based and linear mixed‐effects model (LMM)‐based methods, while for non‐Gaussian data, we focus on generalised linear mixed‐effects models (GLMM) that allow the estimation of repeatability on the original and on the underlying latent scale. We also address a number of methods for calculating standard errors, confidence intervals and statistical significance; the most accurate and recommended methods are parametric bootstrapping, randomisation tests and Bayesian approaches. We advocate the use of LMM‐ and GLMM‐based approaches mainly because of the ease with which confounding variables can be controlled for. Furthermore, we compare two types of repeatability (ordinary repeatability and extrapolated repeatability) in relation to narrow‐sense heritability.
This review serves as a collection of guidelines and recommendations for biologists to calculate repeatability and heritability from both Gaussian and non‐Gaussian data.
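The ANOVA-based approach the review describes reduces, for balanced Gaussian data, to a short calculation on between- and within-individual mean squares. The sketch below simulates repeated measures with made-up variance components (true ICC = 4 / (4 + 1) = 0.8) and computes the point estimate; a real analysis would add uncertainty estimates, e.g. via the parametric bootstrap the review recommends.

```python
# ANOVA-based point estimate of repeatability (ICC) for balanced Gaussian
# data with k repeated measures per individual. Parameters are invented.
import random
import statistics

rng = random.Random(3)
sd_between, sd_within = 2.0, 1.0   # true ICC = 4 / (4 + 1) = 0.8
n_individuals, k = 60, 5           # k repeated measures per individual

groups = []
for _ in range(n_individuals):
    mu = rng.gauss(10, sd_between)         # individual's true phenotype
    groups.append([mu + rng.gauss(0, sd_within) for _ in range(k)])

group_means = [statistics.mean(g) for g in groups]
ms_between = k * statistics.variance(group_means)            # between-individual MS
ms_within = statistics.mean([statistics.variance(g) for g in groups])  # within

# ICC: proportion of phenotypic variance attributable to between-individual
# variance, via the one-way ANOVA estimator.
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

The LMM-based route the review advocates estimates the same two variance components by restricted maximum likelihood instead, which is what makes it easy to add confounding covariates.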
A central challenge in global change research is the projection of the future behavior of a system based upon past observations. Tree‐ring data have been used increasingly over the last decade to project tree growth and forest ecosystem vulnerability under future climate conditions. But how can the response of tree growth to past climate variation predict the future, when the future does not look like the past? Space‐for‐time substitution (SFTS) is one way to overcome the problem of extrapolation: the response at a given location in a warmer future is assumed to follow the response at a warmer location today. Here we evaluated an SFTS approach to projecting future growth of Douglas‐fir (Pseudotsuga menziesii), a species that occupies an exceptionally large environmental space in North America. We fit a hierarchical mixed‐effects model to capture ring‐width variability in response to spatial and temporal variation in climate. We found opposing gradients for productivity and climate sensitivity, with highest growth rates and weakest response to interannual climate variation in the mesic coastal part of Douglas‐fir's range; narrower rings and stronger climate sensitivity occurred across the semi‐arid interior. Ring‐width response to spatial versus temporal temperature variation was opposite in sign, suggesting that spatial variation in productivity, caused by local adaptation and other slow processes, cannot be used to anticipate changes in productivity caused by rapid climate change. We thus substituted only climate sensitivities when projecting future tree growth. Growth declines were projected across much of Douglas‐fir's distribution, with largest relative decreases in the semi‐arid U.S. Interior West and smallest in the mesic Pacific Northwest.
We further highlight the strengths of mixed‐effects modeling for reviving a conceptual cornerstone of dendroecology, Cook's 1987 aggregate growth model, and the great potential to use tree‐ring networks and results as a calibration target for next‐generation vegetation models.