The ongoing release of large-scale sequencing data in the UK Biobank allows for the identification of associations between rare variants and complex traits. SAIGE-GENE+ is a valid approach to conducting set-based association tests for quantitative and binary traits. However, for ordinal categorical phenotypes, applying SAIGE-GENE+ by treating the trait as quantitative or by binarizing it can cause inflated type I error rates or power loss. In this study, we propose POLMM-GENE, a scalable and accurate method for rare-variant association tests that uses a proportional odds logistic mixed model to characterize ordinal categorical phenotypes while adjusting for sample relatedness. POLMM-GENE fully utilizes the categorical nature of phenotypes and thus controls type I error rates well while remaining powerful. In analyses of UK Biobank 450k whole-exome-sequencing data for five ordinal categorical traits, POLMM-GENE identified 54 gene-phenotype associations.
To analyze rare variants, Bi et al. proposed POLMM-GENE, an approach that is scalable for large-scale sequencing datasets. POLMM-GENE fully utilizes the categorical nature of phenotypes, which avoids inflated type I error rates or power loss. It can identify gene-phenotype associations, providing valuable insights into missing trait heritability.
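The cumulative logit link at the heart of a proportional odds model such as POLMM can be sketched as P(Y ≤ j) = logistic(κ_j − η), where η is the linear predictor. The snippet below is a minimal illustration with made-up cutpoints and η; it is not the authors' implementation, which additionally includes a polygenic random effect to adjust for sample relatedness.

```python
import math

def ordinal_probs(eta, cutpoints):
    """Category probabilities under a proportional odds (cumulative logit)
    model: P(Y <= j) = logistic(cutpoint_j - eta).  In POLMM eta would also
    contain a random effect for relatedness; here it is a plain number.
    Cutpoints and eta below are illustrative, not estimates from the paper.
    """
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

probs = ordinal_probs(eta=0.5, cutpoints=[-1.0, 0.0, 1.5])  # 4 categories
```

Treating such a phenotype as quantitative discards the cutpoint structure, and binarizing collapses categories, which is the information loss the abstract refers to.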
Sharks are known to contain high levels of mercury in their meat. However, few studies have directly assessed how mercury concentrations in the human body change with shark meat intake. One hundred and ninety-seven participants who traditionally consume shark meat during the Chuseok holiday were recruited from two areas of Gyeongsangbuk-do, South Korea, and their blood mercury levels were measured before and after the holiday season. Characteristics such as shark meat consumption, intake amount, and the effect on mercury concentration were assessed during the survey. Univariable and multivariable analyses (linear mixed model) were performed to assess the association between holiday-season shark meat consumption and blood mercury level. Of the total participants, 83 consumed shark meat during the holiday. In the univariable analysis, a significant increase in blood mercury levels between the measurements before and after Chuseok was observed only in the group that consumed shark meat during the holiday. The multivariable analysis (adjusted for identified confounders affecting both exposure and outcome, accounting for repeated measurements) showed that consuming shark meat was significantly associated with an increase in blood mercury levels of 3.56 μg/L (95% confidence interval [CI], 2.64–4.67 μg/L). In the model treating the amount consumed as two groups, the increase was 2.61 μg/L (95% CI, 1.63–3.58 μg/L) for those consuming <100 g and 6.20 μg/L (95% CI, 4.77–7.62 μg/L) for those consuming ≥100 g, compared with the group that did not consume shark meat. When the amount consumed was treated as a continuous variable, each 1 g consumed was associated with a 0.02 μg/L (95% CI, 0.01–0.02 μg/L) increase in blood mercury. Consumption of shark meat significantly elevated blood mercury levels, exceeding commonly suggested reference concentrations in less than 2 weeks.
These findings suggest the need for public health warnings and regulations regarding shark meat consumption.
•Consuming shark meat significantly increased blood mercury levels in participants
•Short-term consumption of shark meat may be associated with adverse health effects
•These results highlight the need for public health warnings on shark meat consumption
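For a rough sense of scale, the continuous-dose estimate reported above can be turned into a back-of-the-envelope predictor. This is a hypothetical linear extrapolation using only the reported point estimate; the study's linear mixed model adjusts for confounders and repeated measurements, which this does not.

```python
def predicted_increase_ug_per_l(grams):
    """Predicted blood-mercury increase (ug/L) from consuming `grams` of
    shark meat, using the abstract's continuous-dose point estimate of
    0.02 ug/L per gram (95% CI 0.01-0.02).  Illustrative only: a plain
    linear extrapolation, not the paper's adjusted mixed-model estimate.
    """
    SLOPE_UG_PER_G = 0.02  # point estimate from the abstract
    return SLOPE_UG_PER_G * grams

increase_100g = predicted_increase_ug_per_l(100)  # 2.0 ug/L
```

Note that the two-group model reports a larger increase (6.20 μg/L) for the ≥100 g group than this linear slope implies, so the dose-response relationship is unlikely to be strictly linear over the observed range.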
Standardized drought indices such as the Standardized Precipitation Index (SPI) or the Standardized Precipitation and Evapotranspiration Index (SPEI) are frequently used around the world to assess drought severity across a continent or a larger region covering different meteorological regimes. But how standard are these standardized indices? In this paper we quantify the uncertainty of SPI and SPEI based on an Austrian dataset to shed light on the main sources of uncertainty in the study area. Five factors are considered that either defy the control of the analyst (record length, observation period) or must be decided subjectively during the calculation (choice of distribution, parameter estimation method, and goodness-of-fit (GOF) test of the fitted distribution). We use the root mean squared error (ERMS) to estimate the typical error for different calculation algorithms of SPI and SPEI. The total and relative uncertainty components for each factor are analysed with a linear mixed model (LMM), and the significance of each model parameter is tested by the Akaike information criterion (AIC) and the restricted likelihood ratio test. The ERMS indicates that computational variations of standardized drought indices lead to highly variable results. From the LMM, the choice of distribution and the observation period are the most important sources of uncertainty. On average, they control between 19% and 63% (choice of distribution) and between 24% and 70% (observation period) of the total variance of the SPI across all stations and months of the year, with similar values observed for the SPEI. The parameter estimation method and the GOF tests, however, have almost no effect on the standardized indices. Total errors and observation-period uncertainty typically decrease with record length, as one would expect, while the distribution uncertainty is almost independent of record length.
An additional assessment shows that the uncertainties are similar at the pan-European scale, leading to uncertain characterizations of major events such as the drought of 2015. Overall, the uncertainty of standardized drought indices is substantial. Alternative approaches such as nonparametric methods, ensemble approaches, or probability-based indices built on established methods of extreme-value statistics should be considered to make the indices more accurate.
•A novel error model sheds light on the accuracy of drought indices SPI and SPEI.
•Main sources of uncertainty are observation period and choice of distribution.
•Parameter estimation methods and GOF tests have almost no effect.
•Errors are substantial and may lead to false classifications of drought events.
•Concepts are discussed to make drought indices more accurate.
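Parametric SPI algorithms fit a distribution to the precipitation record and transform the fitted probability to a standard-normal quantile; the choice of that distribution is one of the main uncertainty sources identified above. The nonparametric alternative mentioned in the outlook replaces the fitted distribution with an empirical plotting position, and can be sketched with the standard library alone (the record below is made up):

```python
from statistics import NormalDist

def empirical_spi(precip, value):
    """Nonparametric SPI: map a precipitation value to a standard-normal
    quantile via its empirical (Weibull) plotting position.  This sketches
    the nonparametric alternative the study points to, not the paper's own
    parametric, distribution-fitting algorithms.
    """
    n = len(precip)
    rank = sum(1 for x in precip if x <= value)   # count values at or below
    p = rank / (n + 1)                            # Weibull plotting position
    return NormalDist().inv_cdf(p)                # SPI = standard-normal quantile

record = [31, 55, 12, 78, 44, 60, 25, 90, 38, 50, 67]  # made-up monthly totals
spi_mid = empirical_spi(record, 50)  # median of the record -> SPI of 0
```

Sidestepping the distribution choice in this way removes one of the two dominant uncertainty factors, at the cost of resolution in the tails for short records.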
Predictions of soil hydraulic properties by pedotransfer functions (PTFs) must be treated with caution when they are used in an application domain which differs from the domain of their original development and calibration. However, in some settings, scientists may have little alternative but to use PTFs calibrated elsewhere. In this paper we consider how legacy data can be used to evaluate PTFs in new regions, paying particular attention to the challenges that arise when, as is often the case, the legacy data are not obtained by independent random sampling and may be clustered at multiple scales. We undertook this work in southern Africa (Zimbabwe, Zambia and Malawi), where PTFs have been little used despite the scarcity of direct measurements of the soil properties of interest. We evaluated the extent to which existing PTFs provide a useful tool for the prediction of soil moisture content at field capacity (−33 kPa) and permanent wilting point (−1500 kPa) at different spatial scales. Soil legacy data for Zambia, Zimbabwe and Malawi were collated from various sources, and PTFs from temperate and tropical domains were evaluated. We examined error variance components of predictions at within-profile, within-site and between-site scales, and estimated their mean errors. In general, the better-performing PTFs (with respect to bias and the size of the error variance components) were ones calibrated with data from a tropical domain. This was most apparent at −1500 kPa. However, not all PTFs calibrated with data on tropical soils performed well, and predictions from some PTFs calibrated over a temperate domain were better at −33 kPa. The observations were spatially clustered, with data from different depth intervals in the same profile, from profiles in the same experimental site or farm, and from clusters across the region.
This enabled us to show, with an appropriate mixed model analysis, that PTFs which effectively capture regional-scale variation may be less useful for predicting variation within a profile. We propose that such studies, based on legacy data, and with a suitable linear mixed model, should be used to screen PTFs of any provenance before their wider application.
•We showed how correlated and clustered soil legacy data can be used to evaluate PTFs.
•Linear mixed models were used, and show the scale-dependence of PTF performance.
•The geographical calibration domain and ranges of predictor values should be considered.
•For water content at field capacity, a PTF from a temperate domain had advantages.
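The idea of splitting prediction error across nested scales can be illustrated with a simple sums-of-squares decomposition over sites and profiles. This is a method-of-moments sketch on made-up errors, not the REML-based linear mixed model analysis the study uses, which also accommodates unbalanced clustering.

```python
from collections import defaultdict

def nested_ss(errors):
    """Split PTF prediction errors into between-site, within-site
    (between-profile) and within-profile sums of squares, mirroring the
    scales at which the study evaluates PTFs.  `errors` maps a
    (site, profile) pair to a list of per-depth errors.  Illustrative only.
    """
    all_e = [e for v in errors.values() for e in v]
    grand = sum(all_e) / len(all_e)
    site_vals = defaultdict(list)
    for (site, _), v in errors.items():
        site_vals[site].extend(v)
    site_mean = {s: sum(v) / len(v) for s, v in site_vals.items()}
    prof_mean = {k: sum(v) / len(v) for k, v in errors.items()}
    ss_site = sum(len(v) * (site_mean[s] - grand) ** 2
                  for s, v in site_vals.items())
    ss_prof = sum(len(v) * (prof_mean[(s, p)] - site_mean[s]) ** 2
                  for (s, p), v in errors.items())
    ss_resid = sum((e - prof_mean[k]) ** 2 for k, v in errors.items() for e in v)
    return ss_site, ss_prof, ss_resid

data = {("A", 1): [0.1, 0.3], ("A", 2): [-0.2, 0.0],
        ("B", 1): [0.4, 0.6], ("B", 2): [0.2, 0.8]}
ss = nested_ss(data)  # the three components sum to the total sum of squares
```

A PTF whose error sits mostly in `ss_site` captures within-profile and within-site variation but misplaces regional means, and vice versa, which is the scale-dependence the highlights refer to.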
Industry trends such as product customization, radical innovation, and local production accelerate the adoption of mixed-model assembly lines (MMALs) that must cope with a widening gap between model processing times while providing true build-to-order capability. The high work-content deviations on such assembly lines stress production planning, especially assembly line sequencing. Most manufacturers launch all products onto the line at a fixed launching rate, resulting in rising utility work and idle time as system load increases. We present an "ideal" variable rate launching (VRL) case that requires minimal computation and achieves 100% productivity (full elimination of idle time and utility work) for balanced assembly times and homogeneous station lengths. Managers should foster these ideal circumstances, in which operators need not wait for a preceding task to be completed and product sequence restrictions are eliminated, thus enabling unmatched production flexibility. Furthermore, we present a mixed-integer model to analyze both closed and open workstations on an MMAL for fixed rate launching and VRL. This model incorporates costs not only for labor inefficiencies but also for extending the line length. We present a heuristic solution method for heterogeneous process times and station lengths and demonstrate that the variable takt dominates the fixed takt. In a numerical, industrial benchmark study, we illustrate that a VRL strategy with open stations has significantly lower labor costs as well as a substantially reduced total line length and thus lower throughput time.
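The contrast between fixed rate launching and VRL can be seen in a toy single-station simulation. This is a deliberately simplified sketch under assumed processing times, not the paper's mixed-integer model: one operator, units become available at their launch times, and queueing past the launch time stands in for utility work.

```python
def line_idle_and_wait(proc_times, launch_intervals):
    """Single-station launching sketch: unit k becomes available at its
    cumulative launch time; the operator starts it at max(launch, previous
    finish).  Returns (operator idle time, time units spend queued past
    their launch, a stand-in for utility work).  Illustrative only.
    """
    t = idle = wait = launch = 0.0
    for p, gap in zip(proc_times, launch_intervals):
        start = max(t, launch)
        idle += start - t             # operator waits for the next unit
        wait += max(0.0, t - launch)  # unit queues while operator is busy
        t = start + p
        launch += gap                 # next unit launches `gap` later
    return idle, wait

times = [4.0, 6.0, 5.0, 5.0]                    # heterogeneous work content
fixed = line_idle_and_wait(times, [5.0] * 4)    # fixed rate = mean work content
variable = line_idle_and_wait(times, times)     # VRL: gap tracks work content
```

With the launch interval tied to each unit's work content, both idle time and queueing vanish in this toy setting, which is the intuition behind the 100% productivity of the ideal VRL case.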
Identification of the majority of organisms present in human-associated microbial communities is feasible with the advent of high-throughput sequencing technology. As substantial variability in microbiota communities is seen across subjects, the use of longitudinal study designs is important to better understand variation of the microbiome within individual subjects. Complex study designs with longitudinal sample collection require analytic approaches that account for this additional source of variability. A common approach to assessing community changes is to evaluate the change in alpha diversity (the variety and abundance of organisms in a community) over time. However, there are several commonly used alpha diversity measures, and the use of different measures can result in different estimates of the magnitude of change and different inferences. It has recently been proposed that diversity profile curves are useful for clarifying these differences and may provide a more complete picture of the community structure. However, it is unclear how to utilize these curves when interest is in evaluating changes in community structure over time. We propose the use of a bi-exponential function in a longitudinal model that accounts for repeated measures on each subject to compare diversity profiles over time. Furthermore, it is possible that no change in alpha diversity (single community/sample) may be observed despite the presence of a highly divergent community composition. Thus, it is also important to use a beta diversity measure (similarity between multiple communities/samples) that captures changes in community composition. Ecological methods developed to evaluate temporal turnover have currently only been applied to investigate changes of a single community over time. We illustrate the extension of this approach to multiple communities of interest (i.e., subjects) by modeling the beta diversity measure over time.
With this approach, a rate of change in community composition is estimated. There is a need for the extension and development of analytic methods for longitudinal microbiota studies. In this paper, we discuss different approaches to model alpha and beta diversity indices in longitudinal microbiota studies and provide both a review of current approaches and a proposal for new methods.
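A diversity profile curve plots a family of diversity values against an order parameter q, so that several common alpha diversity measures appear as points on one curve. A standard way to build such a profile is with Hill numbers, sketched below on made-up abundance counts (this computes a single profile; the longitudinal bi-exponential modeling described above is a separate fitting step).

```python
import math

def hill_number(abundances, q):
    """Diversity of order q (Hill number): q=0 gives species richness,
    q->1 gives exp(Shannon entropy), q=2 the inverse Simpson index.
    Plotting this against q yields a diversity profile curve.
    """
    total = sum(abundances)
    p = [a / total for a in abundances if a > 0]
    if abs(q - 1.0) < 1e-9:                     # limit case: exp(Shannon)
        return math.exp(-sum(pi * math.log(pi) for pi in p))
    return sum(pi ** q for pi in p) ** (1.0 / (1.0 - q))

counts = [40, 30, 20, 10]                       # illustrative taxon counts
profile = [hill_number(counts, q) for q in (0, 1, 2)]
```

Increasing q down-weights rare taxa, which is why two measures (say richness and inverse Simpson) can disagree about the magnitude, or even direction, of change in a community over time.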
Stable carbon isotopes (δ13C) in lake sediments can record changes in CO2 emissions from fossil fuel (FF) combustion; however, the exact proportion of δ13C changes caused by FF combustion remains unclear. Here, taking the Huguangyan Maar Lake (HGY), located at low latitude in southern China in a region of intense human activity, as an example, we used the MixSIAR Bayesian stable isotope mixing model to trace the source changes of δ13C in the HGY sediments over the past 130 years. The δ13C value, ranging from −20.23‰ to −24.29‰ with an average of −22.39 ± 1.5‰, displayed a continuous downward trend from the bottom to the top of the core, with the fastest decline occurring between 1950 AD and 1996 AD. Quantitative source apportionment indicates that the contribution of FF sources to δ13C depletion in the HGY sediments has increased nearly three-fold over the past 130 years, from 18.6% to 67.8%, which may be related to the Suess Effect of 13C-depleted FF burning. The increase in δ13C in the HGY sediments over the past decade was consistent with the increase in δ13C–CO2 emitted by global fossil-fuel consumption and was also related to the decrease in fossil-fuel consumption brought about by China's Clean Air Act. This study demonstrates that the isotope traceability model is helpful for source apportionment of δ13C in lake sediments and that δ13C in maar lake sediments can be used to trace CO2 emissions from global FF combustion. It therefore provides new insights into the carbon cycle during the Anthropocene.
•A Bayesian stable isotope mixing model was used to trace the source change of δ13C.
•δ13C shows a continuous downward trend from bottom to top throughout the core.
•The contribution of fossil-fuel sources to δ13C depletion increased nearly three-fold over 130 years.
•The Suess Effect from 13C-depleted fossil-fuel burning may contribute to the δ13C depletion.
•Decreased fossil-fuel usage resulted in the rapid increase in δ13C over the past decade.
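The logic behind isotope mixing models can be illustrated with the simplest possible case, a two-end-member mass balance. The end-member δ13C values below are purely illustrative, not those used in the study; MixSIAR generalizes this idea to many sources and propagates end-member and measurement uncertainty in a Bayesian framework.

```python
def fossil_fuel_fraction(d13c_sample, d13c_ff=-28.0, d13c_background=-20.0):
    """Two-end-member isotope mass balance: fraction of the sedimentary
    carbon signal attributable to the fossil-fuel end member, assuming the
    sample is a linear mix of two sources.  End-member values here are
    hypothetical placeholders, not the study's source signatures.
    """
    return (d13c_sample - d13c_background) / (d13c_ff - d13c_background)

f = fossil_fuel_fraction(-24.0)  # sample halfway between the end members
```

With these placeholder end members, a sample at −24.0‰ sits midway between the two sources, giving a fossil-fuel fraction of 0.5; a more 13C-depleted sample would yield a larger fraction, mirroring the increasing FF contribution reported in the abstract.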
Pancreatitis has the potential to occur more often with the increasing incidence of hyperlipidemia and alcohol consumption. Meanwhile, pharmacotherapy for pancreatitis has so far emphasized pain relief and lowering triglyceride levels (for HFD-induced pancreatitis). The polyphenol quercetin has several biological activities, including anti-cancer, anti-allergic, anti-inflammatory, and anti-diabetic effects. This study aimed to evaluate the potential of quercetin to treat pancreatitis. After verification, 19 relevant articles were included in the analysis, and a linear mixed model analysis was applied to parameters including blood glucose, pancreatic islet number, insulin level, TBARS level, nuclear factor kappa B (NF-κB), endogenous antioxidants (GSH, SOD, CAT, GPx), and TNFα, IL−6, caspase 3 (Casp3) and IL−1b gene expression. Quercetin level was used as a fixed effect and study setting as a random effect. The administration of quercetin significantly increased tissue insulin production, restored the antioxidant defense system (increased SOD and GSH, decreased TBARS), and inhibited the progression of inflammation based on the expression of NF-κB, TNFα and IL−6 genes. Therefore, the results of the meta-analysis support the hypothesis that quercetin can restore pancreatic function and has the potential to be developed to treat pancreatitis.
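The pooling step at the core of any meta-analysis can be sketched with fixed-effect inverse-variance weighting on made-up effect sizes. This is a simplification of the approach above, which instead fits a linear mixed model with quercetin level as a fixed effect and study setting as a random effect; the sketch ignores between-study heterogeneity.

```python
def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooling of per-study effect sizes:
    each study is weighted by the inverse of its variance, so precise
    studies dominate the pooled estimate.  Returns (estimate, standard
    error).  The numbers below are illustrative, not from the paper.
    """
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, se

est, se = pooled_effect([0.8, 1.2, 1.0], [0.04, 0.04, 0.08])
```

A mixed model extends this by letting the true effect vary across study settings, which widens the interval when studies disagree more than their within-study variances would predict.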
In the automotive industry, a great challenge of production scheduling is to sequence cars on assembly lines. Among a wide variety of scheduling approaches, academics and manufacturers pay close attention to two specific models: Mixed-Model Sequencing (MMS) and Car Sequencing (CS). Whereas MMS explicitly considers the assembly line balance, CS operates with sequencing rules to find the best car sequence fulfilling the assembly plant requirements, such as minimising work overload for assembly workers. Meanwhile, automakers including Renault Group are increasingly willing to consider other requirements, such as end-to-end supply chain matters, in production planning and scheduling. In this context, this study compares the MMS- and CS-feasible solution spaces to determine which workload-oriented sequencing model would be the most appropriate basis for later integrating new optimisation criteria. We introduce two exact methods based on Dynamic Programming to assess the gap between the two models. Numerical experiments are carried out on real-life manufacturing features from a Renault Group assembly plant. They show that MMS generates more feasible sequences than CS regardless of the sequencing rule calculation method. Only the sequencing rules used by real-life production schedulers result in a higher number of distinct feasible sequences for CS, highlighting that the plant might select a sequence with work overload situations.
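The sequencing rules that define CS feasibility are classically written as n:p ratio constraints: in every window of p consecutive cars, at most n may require a given option. A minimal checker is sketched below on a hypothetical sequence; the encoding (sets of option names per car) is illustrative and not Renault's production data model.

```python
def respects_ratio_rule(sequence, option, n, p):
    """Car Sequencing n:p rule check: in every window of p consecutive
    cars, at most n may require `option`.  `sequence` is a list of sets of
    options per car.  Illustrative encoding, not a plant data model.
    """
    for i in range(len(sequence) - p + 1):
        window = sequence[i:i + p]
        if sum(option in car for car in window) > n:
            return False
    return True

seq = [{"sunroof"}, set(), {"sunroof"}, set(), {"sunroof"}]
ok = respects_ratio_rule(seq, "sunroof", 1, 2)        # at most 1 per 2 cars
violated = respects_ratio_rule(seq, "sunroof", 1, 3)  # at most 1 per 3 cars
```

Because such rules only approximate the underlying workload, a sequence can satisfy every rule yet still overload a station, which is why comparing the CS-feasible space against the workload-explicit MMS-feasible space is informative.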