Prenatal exposures to endocrine-disrupting chemicals (EDCs) during critical developmental windows have been implicated in the etiologies of a wide array of adverse perinatal and pediatric outcomes. Epidemiological studies have concentrated on the health effects of individual chemicals, despite the understanding that EDCs act together via common mechanisms, that pregnant women are exposed to multiple EDCs simultaneously, and that substantial toxicological evidence of adverse developmental effects has been documented. There is a move toward multipollutant models in environmental epidemiology; however, there is no current consensus on appropriate statistical methods.
We aimed to review the statistical methods used in these studies, to identify additional applicable methods, and to determine the strengths and weaknesses of each method for addressing the salient statistical and epidemiological challenges.
We searched Embase, MEDLINE, and Web of Science for epidemiological studies of endocrine-sensitive outcomes in the children of mothers exposed to EDC mixtures during pregnancy and identified alternative statistical methods from the wider literature.
We identified 74 studies and analyzed the methods used to estimate mixture health effects, identify important mixture components, account for nonmonotonicity in exposure–response relationships, assess interactions, and identify windows of exposure susceptibility. We identified both frequentist and Bayesian methods that are robust to multicollinearity, performing shrinkage, variable selection, dimension reduction, statistical learning, or smoothing, including methods that were not used by the studies included in our review.
Compelling motivation exists for analyzing EDCs as mixtures, yet many studies make simplifying assumptions about EDC additivity, relative potency, and linearity, or overlook the potential for bias due to asymmetries in chemical persistence. We discuss the potential impacts of these choices and suggest alternative methods to improve analyses of prenatal exposure to EDC mixtures. https://doi.org/10.1289/EHP2207.
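The multicollinearity-robust shrinkage methods the review points to can be illustrated with a minimal ridge-regression sketch on simulated, highly correlated exposures. All data, coefficients, and the penalty value below are hypothetical assumptions for illustration, not results from the review:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 5

# Simulate five highly correlated "exposures" (e.g., co-occurring EDCs)
common = rng.normal(size=n)
X = common[:, None] + 0.3 * rng.normal(size=(n, p))
beta_true = np.array([0.5, 0.0, 0.0, 0.3, 0.0])
y = X @ beta_true + rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge estimate: (X'X + lam*I)^-1 X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

beta_ols = ridge(X, y, 0.0)      # ordinary least squares (unstable here)
beta_ridge = ridge(X, y, 50.0)   # shrinkage stabilises the estimates

# Shrinkage pulls coefficients toward zero, reducing their variance
assert np.sum(beta_ridge**2) < np.sum(beta_ols**2)
```

With strongly correlated exposures, the OLS coefficients trade off wildly against each other, while the penalized fit remains stable; the same motivation underlies the Bayesian shrinkage and variable-selection methods discussed in the review.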
Full text
Available at:
DOBA, IZUM, KILJ, NUK, OILJ, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK, VSZLJ
Background: Papers describing the results of a randomised trial should include a baseline table that compares the characteristics of randomised groups. Researchers who fraudulently generate trials often unwittingly create baseline tables that are implausibly similar (under-dispersed) or have large differences between groups (over-dispersed). I aimed to create an automated algorithm to screen for under- and over-dispersion in the baseline tables of randomised trials.
Methods: Using a cross-sectional study I examined 2,245 randomised controlled trials published in health and medical journals on PubMed Central. I estimated the probability that a trial's baseline summary statistics were under- or over-dispersed using a Bayesian model that examined the distribution of t-statistics for the between-group differences, and compared this with an expected distribution without dispersion. I used a simulation study to test the ability of the model to find under- or over-dispersion and compared its performance with an existing test of dispersion based on a uniform test of p-values. My model combined categorical and continuous summary statistics, whereas the uniform test used only continuous statistics.
Results: The algorithm had relatively good accuracy for extracting the data from baseline tables, matching well on the size of the tables and sample size. Using t-statistics in the Bayesian model outperformed the uniform test of p-values, which had many false positives for skewed, categorical, and rounded data that were not under- or over-dispersed. For trials published on PubMed Central, some tables appeared under- or over-dispersed because they had an atypical presentation or had reporting errors. Some trials flagged as under-dispersed had groups with strikingly similar summary statistics.
Conclusions: Automated screening for fraud of all submitted trials is challenging due to the widely varying presentation of baseline tables. The Bayesian model could be useful in targeted checks of suspected trials or authors.
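The core screening idea can be sketched by reconstructing between-group test statistics from each baseline row's published summary statistics. This is a deliberately simplified stand-in for the paper's Bayesian model: the rows below are invented, and a large-sample z-statistic replaces the exact t:

```python
from statistics import NormalDist
import math

def z_statistic(m1, sd1, n1, m2, sd2, n2):
    """Large-sample z for the between-group difference of one baseline row."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return (m1 - m2) / se

# Hypothetical baseline rows: (mean, SD, n) per randomised group
rows = [
    ((50.1, 10.0, 120), (50.2, 10.1, 118)),   # age
    ((27.0, 4.0, 120), (27.1, 4.1, 118)),     # BMI
    ((130.0, 15.0, 120), (129.8, 15.2, 118)), # systolic blood pressure
]

zs = [z_statistic(*g1, *g2) for g1, g2 in rows]
ps = [2 * (1 - NormalDist().cdf(abs(z))) for z in zs]

# Under genuine randomisation, p-values are ~Uniform(0,1); a cluster of
# p-values near 1 (tiny z-statistics) flags possible under-dispersion.
print([round(p, 3) for p in ps])
```

In this invented table every p-value sits near 1, which is exactly the "implausibly similar groups" signature the algorithm screens for.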
Appropriate descriptions of statistical methods are essential for evaluating research quality and reproducibility. Despite continued efforts to improve reporting in publications, inadequate descriptions of statistical methods persist. At times, reading statistical methods sections can conjure feelings of déjà vu, with content resembling cut-and-pasted or "boilerplate" text from already published work. Instances of boilerplate text suggest a mechanistic approach to statistical analysis, where the same default methods are being used and described using standardized text. To investigate the extent of this practice, we analyzed text extracted from published statistical methods sections from PLOS ONE and the Australian and New Zealand Clinical Trials Registry (ANZCTR). Topic modeling was applied to analyze data from 111,731 papers published in PLOS ONE and 9,523 studies registered with the ANZCTR. PLOS ONE topics emphasized definitions of statistical significance, software, and descriptive statistics. One in three PLOS ONE papers contained at least one sentence that was a direct copy from another paper. 12,675 papers (11%) closely matched the sentence "a p-value < 0.05 was considered statistically significant". Common topics across ANZCTR studies differentiated between study designs and analysis methods, with matching text found in approximately 3% of sections. Our findings quantify a serious problem affecting the reporting of statistical methods and shed light on perceptions about the communication of statistics as part of the scientific process. The results further emphasize the importance of rigorous statistical review to ensure that adequate descriptions of methods are prioritized over relatively minor details, such as p-values and software, when reporting research outcomes.
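A naive exact-match version of the copy detection can be sketched in a few lines (far simpler than the topic modelling and matching pipeline the study used; both "papers" below are invented):

```python
import re

def sentences(text):
    """Crude sentence splitter + normaliser for exact-match comparison."""
    parts = re.split(r"(?<=[.!?])\s+", text.lower())
    return {re.sub(r"\s+", " ", p).strip() for p in parts if p.strip()}

paper_a = ("Data were analysed in R. A p-value < 0.05 was considered "
           "statistically significant.")
paper_b = ("We used Stata 15. A p-value < 0.05 was considered "
           "statistically significant.")

# Sentences present verbatim in both methods sections
shared = sentences(paper_a) & sentences(paper_b)
print(shared)
```

The set intersection returns exactly the boilerplate significance sentence, mirroring the most common matched sentence reported in the abstract.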
Full text
Available at:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
Some acronyms are useful and are widely understood, but many of the acronyms used in scientific papers hinder understanding and contribute to the increasing fragmentation of science. Here we report the results of an analysis of more than 24 million article titles and 18 million article abstracts published between 1950 and 2019. There was at least one acronym in 19% of the titles and 73% of the abstracts. Acronym use has also increased over time, but the re-use of acronyms has declined. We found that from more than one million unique acronyms in our data, just over 2,000 (0.2%) were used regularly, and most acronyms (79%) appeared fewer than 10 times. Acronyms are not the biggest current problem in science communication, but reducing their use is a simple change that would help readers and potentially increase the value of science.
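Acronym detection of this kind can be sketched with a simple regular expression. The pattern and the example titles below are illustrative assumptions, not the authors' actual definition (note that a crude all-capitals pattern misses mixed alphanumeric forms like "PM2.5"):

```python
import re
from collections import Counter

ACRONYM = re.compile(r"\b[A-Z]{2,}s?\b")  # 2+ capitals, optional plural 's'

titles = [
    "EDC mixtures and DNA methylation in the UK",
    "A randomised trial of exercise for depression",
    "PM2.5, DNA damage and COPD risk",
]

counts = Counter(a for t in titles for a in ACRONYM.findall(t))
share_with_acronym = sum(bool(ACRONYM.search(t)) for t in titles) / len(titles)
print(counts.most_common(), round(share_with_acronym, 2))
```

Applied at scale, counting how often each unique acronym recurs gives exactly the re-use statistics reported in the abstract.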
Many previous studies have found seasonal patterns in birth outcomes, but with little agreement about which season poses the highest risk. Some of the heterogeneity between studies may be explained by a previously unknown bias. The bias occurs in retrospective cohorts that include all births occurring within a fixed start and end date, which means shorter pregnancies are missed at the start of the study, and longer pregnancies are missed at the end. Our objective was to show the potential size of this bias and how to avoid it.
To demonstrate the bias we simulated a retrospective birth cohort with no seasonal pattern in gestation and used a range of cohort end dates. As a real example, we used a cohort of 114,063 singleton births in Brisbane between 1 July 2005 and 30 June 2009 and examined the bias when estimating changes in gestation length associated with season (using month of conception) and a seasonal exposure (temperature). We used survival analyses with temperature as a time-dependent variable.
We found strong artificial seasonal patterns in gestation length by month of conception, which depended on the end date of the study. The bias was avoided when the day and month of the start date fell just before the day and month of the end date (regardless of year), so that the longer gestations at the start of the study were balanced by the shorter gestations at the end. After removing the fixed cohort bias there was a noticeable change in the effect of temperature on gestation length: the adjusted hazard ratios were flatter at the extremes of temperature but steeper between 15 and 25°C.
Studies using retrospective birth cohorts should account for the fixed cohort bias by removing selected births to get unbiased estimates of seasonal health effects.
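The mechanism of the fixed cohort bias can be reproduced in a short simulation. This is a sketch of the idea only, not the paper's simulation: the window, gestation distribution, and grouping cut-offs below are all hypothetical:

```python
import random
from datetime import date, timedelta

random.seed(1)
start, end = date(2005, 7, 1), date(2005, 12, 31)  # hypothetical fixed cohort
span = (end - start).days

early, middle, late = [], [], []
total = 320 + span - 250   # conceptions from start-320d through end-250d
for _ in range(200_000):
    offset = random.randrange(total)
    conception = start - timedelta(days=320) + timedelta(days=offset)
    gestation = random.gauss(280, 12)          # days; NO true seasonality
    birth = conception + timedelta(days=round(gestation))
    if not (start <= birth <= end):            # fixed cohort inclusion rule
        continue
    if offset < 30:
        early.append(gestation)    # conceived well before the start date
    elif offset >= total - 30:
        late.append(gestation)     # conceived close to the end date
    else:
        middle.append(gestation)

mean = lambda xs: sum(xs) / len(xs)
# Short pregnancies are missed at the start and long ones at the end,
# creating an artificial gradient in mean gestation length by conception
# date even though none was simulated:
print(round(mean(early), 1), round(mean(middle), 1), round(mean(late), 1))
```

The early-conception group appears to have much longer gestations than the late group purely because of the inclusion window, which is the artifact the paper shows can masquerade as a seasonal effect.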
Full text
Available at:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
BACKGROUND: Heat-related mortality is a matter of great public health concern, especially in the light of climate change. Although many studies have found associations between high temperatures and mortality, more research is needed to project the future impacts of climate change on heat-related mortality. OBJECTIVES: We conducted a systematic review of research and methods for projecting future heat-related mortality under climate change scenarios. DATA SOURCES AND EXTRACTION: A literature search was conducted in August 2010, using the electronic databases PubMed, Scopus, ScienceDirect, ProQuest, and Web of Science. The search was limited to peer-reviewed journal articles published in English from January 1980 through July 2010. DATA SYNTHESIS: Fourteen studies fulfilled the inclusion criteria. Most projections showed that climate change would result in a substantial increase in heat-related mortality. Projecting heat-related mortality requires understanding historical temperature-mortality relationships and considering the future changes in climate, population, and acclimatization. Further research is needed to provide a stronger theoretical framework for projections, including a better understanding of socioeconomic development, adaptation strategies, land-use patterns, air pollution, and mortality displacement. CONCLUSIONS: Scenario-based projection research will meaningfully contribute to assessing and managing the potential impacts of climate change on heat-related mortality.
Extreme heat is a leading weather-related cause of illness and death in many locations across the globe, including subtropical Australia. The possibility of increasingly frequent and severe heat waves warrants continued efforts to reduce this health burden, which could be accomplished by targeting intervention measures toward the most vulnerable communities.
We sought to quantify spatial variability in heat-related morbidity in Brisbane, Australia, to highlight regions of the city with the greatest risk. We also aimed to find area-level social and environmental determinants of high risk within Brisbane.
We used a series of hierarchical Bayesian models to examine city-wide and intracity associations between temperature and morbidity using a 2007-2011 time series of geographically referenced hospital admissions data. The models accounted for long-term time trends, seasonality, and day of week and holiday effects.
On average, a 10°C increase in daily maximum temperature during the summer was associated with a 7.2% increase in hospital admissions (95% CI: 4.7, 9.8%) on the following day. Positive statistically significant relationships between admissions and temperature were found for 16 of the city's 158 areas; negative relationships were found for 5 areas. High-risk areas were associated with a lack of high income earners and higher population density.
Geographically targeted public health strategies for extreme heat may be effective in Brisbane, because morbidity risk was found to be spatially variable. Emergency responders, health officials, and city planners could focus on short- and long-term intervention measures that reach communities in the city with lower incomes and higher population densities, including reduction of urban heat island effects.
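The kind of exposure–outcome estimate reported above (a percent increase in admissions per 10°C) can be sketched with a deliberately simplified, non-hierarchical Poisson regression on simulated data. The study's actual models were hierarchical and Bayesian; every number and covariate below is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
tmax = rng.uniform(20, 35, n)              # simulated daily max temperature, °C
weekend = (rng.integers(0, 7, n) >= 5).astype(float)

beta_temp_true = np.log(1.072) / 10        # ~7.2% rise per 10°C, on log scale
mu = np.exp(3.0 + beta_temp_true * tmax + 0.05 * weekend)
y = rng.poisson(mu)                        # simulated daily admissions

# Poisson GLM fitted by iteratively reweighted least squares (no external deps)
X = np.column_stack([np.ones(n), tmax, weekend])
beta = np.zeros(3)
beta[0] = np.log(y.mean())                 # sensible starting intercept
for _ in range(25):
    eta = X @ beta
    w = np.exp(eta)                        # Poisson: variance = mean
    z = eta + (y - w) / w                  # working response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

# Convert the log-scale temperature coefficient to a percent change per 10°C
pct_per_10C = 100 * (np.exp(10 * beta[1]) - 1)
print(round(pct_per_10C, 1))
```

The exponentiated-coefficient step at the end is how a fitted log-linear slope becomes the "7.2% increase per 10°C" style of estimate quoted in the abstract.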
Full text
Available at:
DOBA, IZUM, KILJ, NUK, OILJ, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK, VSZLJ
The "publish or perish" incentive drives many researchers to increase the quantity of their papers at the cost of quality. Lowering quality increases the number of false positive errors, which is a key cause of the reproducibility crisis. We adapted a previously published simulation of the research world where labs that produce many papers are more likely to have "child" labs that inherit their characteristics. This selection creates a competitive spiral that favours quantity over quality. To try to halt the competitive spiral we added random audits that could detect and remove labs with a high proportion of false positives, and also improved the behaviour of "child" and "parent" labs, who increased their effort and so lowered their probability of making a false positive error. Without auditing, only 0.2% of simulations did not experience the competitive spiral, defined by a convergence to the highest possible false positive probability. Auditing 1.35% of papers avoided the competitive spiral in 71% of simulations, and auditing 1.94% of papers in 95% of simulations. Audits worked best when they were only applied to established labs with 50 or more papers compared with labs with 25 or more papers. Adding a ±20% random error to the number of false positives to simulate peer reviewer error did not reduce the audits' efficacy. The main benefit of the audits was via the increase in effort in "child" and "parent" labs. Audits improved the literature by reducing the number of false positives from 30.2 per 100 papers to 12.3 per 100 papers. Auditing 1.94% of papers would cost an estimated $15.9 million per year if applied to papers produced by National Institutes of Health funding. Our simulation greatly simplifies the research world, and there are many unanswered questions about whether and how audits would work that can only be addressed by a trial of an audit.
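A toy analogue of the selection-plus-audit logic can be written in a few lines. This is much simplified from the published model, and all parameters (drift, audit threshold, population size) are invented for illustration:

```python
import random

random.seed(7)

def simulate(audit_prob, generations=500, n_labs=50):
    """Toy analogue of the selection model: labs with a higher
    false-positive probability publish more and are copied more often."""
    fp = [random.uniform(0.05, 0.30) for _ in range(n_labs)]
    for _ in range(generations):
        # Selection: the highest-output (highest-fp) lab spawns a "child"
        # that inherits its false-positive probability with small drift.
        child = min(0.95, max(0.01, max(fp) + random.gauss(0, 0.01)))
        fp[fp.index(min(fp))] = child      # child replaces the weakest lab
        # A random audit detects labs with many false positives and
        # replaces them with careful new labs.
        if random.random() < audit_prob:
            fp = [0.05 if f > 0.30 else f for f in fp]
    return sum(fp) / len(fp)

no_audit, with_audit = simulate(0.0), simulate(0.2)
print(round(no_audit, 2), round(with_audit, 2))
# Without audits the mean false-positive probability drifts toward its
# ceiling (the "competitive spiral"); audits hold it down.
```

Even this crude version reproduces the qualitative finding: unchecked selection ratchets the false-positive rate upward, while periodic audits break the spiral.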
Full text
Available at:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
China has experienced increasing numbers of days with serious air pollution in recent years, and has the highest lung cancer burden in the world.
To examine the associations between lung cancer incidence and fine particles (PM2.5) and ozone in China.
We used lung cancer incidence data for 75 communities from the National Cancer Registration of China from 1990 to 2009. Annual concentrations of fine particles (PM2.5) and ozone at 0.1°×0.1° spatial resolution were generated by combining remote sensing, global chemical transport models, and improvements in coverage of surface measurements. A spatial age-period-cohort model was used to examine the relative risks of lung cancer incidence associated with the air pollutants, after adjusting for the effects of age, period, birth cohort, sex, and community type (rural or urban), as well as spatial variation in lung cancer incidence.
The relative risks of lung cancer incidence associated with a 10 µg/m³ increase in 2-year average PM2.5 were 1.055 (95% confidence interval (CI): 1.038, 1.072) for men, 1.149 (1.120, 1.178) for women, 1.060 (1.044, 1.075) for urban communities, 1.037 (0.998, 1.078) for rural communities, 1.074 (1.052, 1.096) for people aged 30–65 years, and 1.111 (1.077, 1.146) for those aged over 75 years. Ozone was also significantly associated with lung cancer incidence.
Increased risks of lung cancer incidence were associated with PM2.5 and ozone air pollution. Control measures to reduce air pollution would likely lower the future incidence of lung cancer.
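Because the relative risks are reported per 10 µg/m³, rescaling them to other increments is a one-line calculation under the log-linear exposure–response assumption the model implies (the increments below are arbitrary examples):

```python
# Rescaling a relative risk reported per 10 µg/m³ to other increments,
# assuming a log-linear exposure-response relationship
rr_per_10 = 1.055                      # men, 2-year average PM2.5 (from text)
for delta in (5, 10, 25, 50):
    rr = rr_per_10 ** (delta / 10)     # RR scales multiplicatively on log scale
    print(f"{delta:>2} µg/m³: RR = {rr:.3f}")
```

For example, a 25 µg/m³ increase corresponds to 1.055^2.5 ≈ 1.143, i.e., roughly a 14% higher incidence for men under this assumption.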
BACKGROUND: The effect of extreme temperature has become an increasing public health concern. Evaluating the impact of ambient temperature on morbidity has received less attention than its impact on mortality.
METHODS: We performed a systematic literature review and extracted quantitative estimates of the effects of hot temperatures on cardiorespiratory morbidity. There were too few studies on effects of cold temperatures to warrant a summary. Pooled estimates of effects of heat were calculated using a Bayesian hierarchical approach that allowed multiple results to be included from the same study, particularly results at different latitudes and with varying lagged effects.
RESULTS: Twenty-one studies were included in the final meta-analysis. The pooled results suggest a 3.2% increase (95% posterior interval = −3.2% to 10.1%) in respiratory morbidity per 1°C increase on hot days. No apparent association was observed for cardiovascular morbidity (−0.5%; 95% posterior interval = −3.0% to 2.1%). The length of lags had inconsistent effects on the risk of respiratory and cardiovascular morbidity, whereas latitude had little effect on either.
CONCLUSIONS: The effects of temperature on cardiorespiratory morbidity seemed to be smaller and more variable than previous findings related to mortality.
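The pooling step can be illustrated with classical inverse-variance (fixed-effect) weighting, a simpler stand-in for the Bayesian hierarchical model the review actually used; the per-study estimates below are invented:

```python
import math

# Hypothetical per-study estimates: percent change in respiratory
# admissions per 1°C on hot days, with standard errors
studies = [(2.1, 1.5), (4.8, 2.0), (1.0, 3.1), (6.2, 2.5)]

# Inverse-variance weighting: more precise studies count for more
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled = {pooled:.1f}% (95% CI {lo:.1f} to {hi:.1f})")
```

The hierarchical Bayesian version generalizes this by adding between-study variation and allowing correlated results from the same study, which is why the review's pooled intervals are wider than a naive fixed-effect interval would be.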