Alcohol's impact on telomere length, a proposed marker of biological aging, is unclear. We performed the largest observational study to date (n = 245,354 UK Biobank participants) and compared findings with Mendelian randomization (MR) estimates. Two-sample MR used data from 472,174 participants in a recent genome-wide association study (GWAS) of telomere length. Genetic variants were selected on the basis of associations with alcohol consumption (n = 941,280) and alcohol use disorder (AUD) (n = 57,564 cases). Non-linear MR employed UK Biobank individual data. MR analyses suggested a causal relationship between alcohol traits, more strongly AUD, and telomere length. Higher genetically predicted AUD (inverse variance-weighted (IVW) β = -0.06, 95% confidence interval (CI): -0.10 to -0.02, p = 0.001) was associated with shorter telomere length. There was a weaker association with genetically predicted alcoholic drinks per week (IVW β = -0.07, CI: -0.14 to -0.01, p = 0.03). Results were consistent across methods and independent of smoking. Non-linear analyses indicated a potential threshold relationship between alcohol and telomere length. Our findings indicate that alcohol consumption may shorten telomere length, with implications for age-related diseases.
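The fixed-effect inverse variance-weighted estimator behind the reported IVW β values is standard and can be sketched as follows (a minimal illustration; the function and variable names are ours, not the study's analysis pipeline):

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect inverse variance-weighted (IVW) MR estimate.

    Regresses variant-outcome effects on variant-exposure effects through
    the origin, weighting each variant by the inverse variance of its
    outcome association.
    """
    bx = np.asarray(beta_exposure, dtype=float)
    by = np.asarray(beta_outcome, dtype=float)
    w = 1.0 / np.asarray(se_outcome, dtype=float) ** 2
    beta = np.sum(w * bx * by) / np.sum(w * bx ** 2)
    se = np.sqrt(1.0 / np.sum(w * bx ** 2))
    return beta, se
```

For example, two variants whose outcome effects are exactly -0.1 times their exposure effects yield an IVW β of -0.1 regardless of the (equal) weights.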
In recent years, analyses of event-related potentials/fields have moved from the selection of a few components and peaks to a mass-univariate approach in which the whole data space is analyzed. Such extensive testing increases the number of false positives, and correction for multiple comparisons is needed.
Here we review all cluster-based multiple-comparison correction methods (cluster-height, cluster-size, cluster-mass, and threshold-free cluster enhancement, TFCE), in conjunction with two computational approaches (permutation and bootstrap).
Data-driven Monte Carlo simulations comparing two conditions within subjects (two-sample Student's t-test) showed that, on average, all cluster-based methods, using permutation and bootstrap alike, control the family-wise error rate (FWER) well, with a few caveats.
(i) A minimum of 800 iterations is necessary to obtain stable results; (ii) below 50 trials, bootstrap methods are too conservative; (iii) for low critical family-wise error rates (e.g., p = 1%), permutation can be too liberal; (iv) TFCE controls the type I error rate best with an attenuated extent parameter (i.e., power < 1).
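The cluster-mass approach with a within-subject permutation (sign-flipping) null can be sketched as follows (a minimal one-dimensional illustration; thresholds, shapes, and function names are ours, and a real analysis would use far more iterations, per caveat (i) above):

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_masses(tvals, threshold):
    """Sum of supra-threshold t-values within each contiguous cluster."""
    masses, current = [], 0.0
    for t in tvals:
        if t > threshold:
            current += t
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses

def cluster_mass_test(diff, threshold=2.0, n_perm=1000):
    """Permutation null for the maximum cluster mass.

    diff: (n_subjects, n_timepoints) within-subject condition differences.
    Randomly sign-flipping each subject's difference builds the null
    distribution of the maximum cluster mass, giving FWER control.
    """
    n_sub = diff.shape[0]
    def tmap(d):
        return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n_sub))
    observed = cluster_masses(tmap(diff), threshold)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        null_max[i] = max(cluster_masses(tmap(signs * diff), threshold),
                          default=0.0)
    # FWER-corrected p-value for each observed cluster
    return [(m, (null_max >= m).mean()) for m in observed]
```

Each observed cluster is compared against the null distribution of the *maximum* cluster mass across the whole data space, which is what makes the correction family-wise rather than per-cluster.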
A radio counterpart to a neutron star merger. Hallinan, G.; Corsi, A.; Mooley, K. P. ...
Science (American Association for the Advancement of Science), 12/2017, Vol. 358, Issue 6370.
Journal Article, Peer-reviewed, Open access.
Gravitational waves have been detected from a binary neutron star merger event, GW170817. The detection of electromagnetic radiation from the same source has shown that the merger occurred in the outskirts of the galaxy NGC 4993, at a distance of 40 megaparsecs from Earth. We report the detection of a counterpart radio source that appears 16 days after the event, allowing us to diagnose the energetics and environment of the merger. The observed radio emission can be explained by either a collimated ultrarelativistic jet, viewed off-axis, or a cocoon of mildly relativistic ejecta. Within 100 days of the merger, the radio light curves will enable observers to distinguish between these models, and the angular velocity and geometry of the debris will be directly measurable by very long baseline interferometry.
We performed a whole-transcriptome correlation analysis, followed by pathway enrichment analysis and testing of innate immune response pathways, to evaluate the hypothesis that transcriptional activity can predict cortical gray matter thickness (GMT) variability during normal cerebral aging.
Transcriptome and GMT data were available for 379 community-dwelling individuals (age range = 28–85) from large extended Mexican American families. Collection of transcriptome data preceded that of neuroimaging data by 17 years. Genome-wide transcriptome data consisted of 20,413 heritable lymphocyte-based transcripts. GMT measurements were performed from high-resolution (isotropic 800 μm) T1-weighted MRI. Transcriptome-wide and pathway enrichment analysis was used to classify genes correlated with GMT. Transcripts for sixty genes from seven innate immune pathways were tested as specific predictors of GMT variability.
Transcripts for eight genes (IGFBP3, LRRN3, CRIP2, SCD, IDS, TCF4, GATA3, and HN1) passed the transcriptome-wide significance threshold. Four orthogonal factors extracted from this set predicted 31.9% of the variability in whole-brain GMT and between 23.4% and 35% of regional GMT measurements. Pathway enrichment analysis identified six functional categories, including cellular proliferation, aggregation, differentiation, viral infection, and metabolism. The integrin signaling pathway was significantly (p < 10⁻⁶) enriched with GMT. Finally, three innate immune pathways (complement signaling, toll and scavenger receptors, and immunoglobulins) were significantly associated with GMT.
Expression activity for the genes that regulate cellular proliferation, adhesion, differentiation and inflammation can explain a significant proportion of individual variability in cortical GMT. Our findings suggest that normal cerebral aging is the product of a progressive decline in regenerative capacity and increased neuroinflammation.
•Transcriptome activity predicted gray matter thickness in aging.
•Transcriptome-wide association and enrichment analyses were used.
•Cellular proliferation and differentiation led to higher cerebral integrity.
•Upregulation of neuroinflammation led to lower cerebral integrity.
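The "orthogonal factors predicting GMT variance" step can be sketched as principal components of the expression matrix feeding a linear model (a minimal illustration under our own assumptions; the study's actual factor extraction and regression details are not specified here):

```python
import numpy as np

def factor_r2(expr, trait, n_factors=4):
    """R^2 of a trait regressed on the first n_factors principal
    components (orthogonal factors) of a gene-expression matrix.

    expr: (n_subjects, n_genes) expression values
    trait: (n_subjects,) phenotype, e.g. regional gray matter thickness
    """
    Z = (expr - expr.mean(0)) / expr.std(0)      # standardize genes
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    factors = Z @ vt[:n_factors].T               # orthogonal factor scores
    X = np.column_stack([factors, np.ones(len(trait))])
    coef, *_ = np.linalg.lstsq(X, trait, rcond=None)
    resid = trait - X @ coef
    return 1.0 - resid.var() / trait.var()
```

With eight gene transcripts and four factors, the returned R² plays the role of the "proportion of variability predicted" reported above.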
Schippers, Renken and Keysers (NeuroImage, 2011) present a simulation of multi-subject lag-based causality estimation. We fully agree that single-subject evaluations (e.g., Smith et al., 2011) need to be revisited in the context of multi-subject studies, and Schippers' paper is a good example, including detailed multi-level simulation and cross-subject statistical modelling. The authors conclude that "the average chance to find a significant Granger causality effect when no actual influence is present in the data stays well below the p-level imposed on the second level statistics" and that "when the analyses reveal a significant directed influence, this direction was accurate in the vast majority of the cases". Unfortunately, we believe that the general meaning that may be taken from these statements is not supported by the paper's results, as there may in reality be a systematic (group-average) difference in haemodynamic delay between two brain areas. While many statements in the paper (e.g., the final two sentences) do refer to this problem, we fear that the overriding message that many readers may take from the paper could cause misunderstanding.
► Group-level FMRI simulations can be useful to test methods such as Granger causality.
► Simulation results need careful evaluation and interpretation.
► There is ample evidence of haemodynamic variability across regions and voxels.
► Lag-based FMRI causality analysis may be biased by such variation.
► This confound should be considered when reporting lag-based results.
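The haemodynamic-delay confound described above can be shown in a toy simulation (a minimal sketch; the `granger_gain` statistic, lag structure, and noise levels are our illustrative assumptions, not the authors' analysis): two regions share one neural drive with no causal link between them, yet a lag-based statistic reports a directed "influence" from the fast-responding region to the delayed one.

```python
import numpy as np

rng = np.random.default_rng(42)

def granger_gain(x, y, lag=1):
    """Reduction in y's residual variance from adding x's past to y's own
    past (a minimal Granger-style statistic)."""
    n = len(y) - lag
    restricted = np.column_stack([y[:-lag], np.ones(n)])
    full = np.column_stack([y[:-lag], x[:-lag], np.ones(n)])
    def rss(X):
        coef, *_ = np.linalg.lstsq(X, y[lag:], rcond=None)
        return np.sum((y[lag:] - X @ coef) ** 2)
    return (rss(restricted) - rss(full)) / rss(restricted)

# One shared neural drive; region B's haemodynamic response lags region A's
# by two samples, but neither region drives the other.
drive = 0.1 * rng.normal(size=2002).cumsum()      # slow common signal
a = drive[2:] + 0.1 * rng.normal(size=2000)       # fast response
b = drive[:-2] + 0.1 * rng.normal(size=2000)      # delayed response

gain_ab = granger_gain(a, b)  # A's past "explains" B: spurious influence
gain_ba = granger_gain(b, a)  # near zero
```

Here `gain_ab` comes out clearly larger than `gain_ba` purely because of the response delay, which is exactly the bias the comment warns about.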
Non-white noise in fMRI: Does modelling have an impact? Lund, Torben E.; Madsen, Kristoffer H.; Sidaros, Karam ...
NeuroImage (Orlando, Fla.), 2006, Vol. 29, Issue 1.
Journal Article, Peer-reviewed.
The sources of non-white noise in Blood Oxygenation Level Dependent (BOLD) functional magnetic resonance imaging (fMRI) are many. Familiar sources include low-frequency drift due to hardware imperfections, oscillatory noise due to respiration and cardiac pulsation and residual movement artefacts not accounted for by rigid body registration. These contributions give rise to temporal autocorrelation in the residuals of the fMRI signal and invalidate the statistical analysis as the errors are no longer independent. The low-frequency drift is often removed by high-pass filtering, and other effects are typically modelled as an autoregressive (AR) process. In this paper, we propose an alternative approach: Nuisance Variable Regression (NVR). By inclusion of confounding effects in a general linear model (GLM), we first confirm that the spatial distribution of the various fMRI noise sources is similar to what has already been described in the literature. Subsequently, we demonstrate, using diagnostic statistics, that removal of these contributions reduces first and higher order autocorrelation as well as non-normality in the residuals, thereby improving the validity of the drawn inferences. In addition, we also compare the performance of the NVR method to the whitening approach implemented in SPM2.
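The core of the NVR idea, fitting nuisance regressors alongside the task design in one GLM, can be sketched as follows (a minimal illustration; the function name and design layout are ours, and a real analysis would build physiological and motion regressors from recorded traces):

```python
import numpy as np

def nvr_glm(y, task_regressors, nuisance_regressors):
    """Nuisance Variable Regression: estimate task and nuisance effects
    jointly in one GLM so that nuisance structure (drift, physiology,
    motion) is removed from the residuals rather than modelled as AR noise.

    y: (n_timepoints,) voxel time series
    task_regressors, nuisance_regressors: (n_timepoints, k) design columns
    """
    X = np.column_stack([task_regressors, nuisance_regressors,
                         np.ones(len(y))])  # intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return beta[:task_regressors.shape[1]], residuals
```

Typical nuisance columns would be a linear/low-frequency drift set, cardiac and respiratory phase regressors, and the six rigid-body motion parameters; the task betas are read off the first columns of the joint fit.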
Targeted knockout of genes in primary human cells using CRISPR-Cas9-mediated genome-editing represents a powerful approach to study gene function and to discern molecular mechanisms underlying complex human diseases. We used lentiviral delivery of CRISPR-Cas9 machinery and conditional reprogramming culture methods to knockout the MUC18 gene in human primary nasal airway epithelial cells (AECs). Massively parallel sequencing technology was used to confirm that the genome of essentially all cells in the edited AEC populations contained coding region insertions and deletions (indels). Correspondingly, we found mRNA expression of MUC18 was greatly reduced and protein expression was absent. Characterization of MUC18 knockout cell populations stimulated with TLR2, 3 and 4 agonists revealed that IL-8 (a proinflammatory chemokine) responses of AECs were greatly reduced in the absence of functional MUC18 protein. Our results show the feasibility of CRISPR-Cas9-mediated gene knockouts in AEC culture (both submerged and polarized), and suggest a proinflammatory role for MUC18 in airway epithelial response to bacterial and viral stimuli.
•Meta-analyses require a consistent approach but specific guidelines are lacking.
•Best-practice recommendations for conducting neuroimaging meta-analyses are proposed.
•We set standards regarding which information should be reported for meta-analyses.
•The guidelines should improve transparency and replicability of meta-analytic results.
Neuroimaging has evolved into a widely used method to investigate the functional neuroanatomy, brain-behaviour relationships, and pathophysiology of brain disorders, yielding a literature of more than 30,000 papers. With such an explosion of data, it is increasingly difficult to sift through the literature and distinguish spurious from replicable findings. Furthermore, due to the large number of studies, it is challenging to keep track of the wealth of findings. A variety of meta-analytical methods (coordinate-based and image-based) have been developed to help summarise and integrate the vast amount of data arising from neuroimaging studies. However, the field lacks specific guidelines for the conduct of such meta-analyses. Based on our combined experience, we propose best-practice recommendations that researchers from multiple disciplines may find helpful. In addition, we provide specific guidelines and a checklist that will hopefully improve the transparency, traceability, replicability and reporting of meta-analytical results of neuroimaging data.
Sound decision making in environmental research and management requires an understanding of causal relationships between stressors and ecological responses. However, demonstrating cause–effect relationships in natural systems is challenging because of natural variability, difficulties in performing experiments, lack of replication, and the presence of confounding influences. Thus, even the best-designed study may not establish causality. We describe a method that uses evidence available in the extensive published ecological literature to assess support for cause–effect hypotheses in environmental investigations. Our method, called Eco Evidence, is a form of causal criteria analysis, a technique developed in the 1960s by epidemiologists, who faced similar difficulties in attributing causation. The Eco Evidence method is an 8-step process in which the user conducts a systematic review of the evidence for one or more cause–effect hypotheses to assess the level of support for an overall question. In contrast to causal criteria analyses in epidemiology, users of Eco Evidence use a subset of criteria most relevant to environmental investigations and weight each piece of evidence according to its study design. Stronger studies contribute more to the assessment of causality, but weaker evidence is not discarded. This feature is important because environmental evidence is often scarce. The outputs of the analysis are a guide to the strength of evidence for or against the cause–effect hypotheses. They strengthen confidence in the conclusions drawn from that evidence, but cannot ever prove causality. They also indicate situations where knowledge gaps signify insufficient evidence to reach a conclusion. The method is supported by the freely available Eco Evidence software package, which produces a standard report, maximizing the transparency and repeatability of any assessment.
Environmental science has lagged behind other disciplines in systematic assessment of evidence to improve research and management. Using the Eco Evidence method, environmental scientists can better use the extensive published literature to guide evidence-based decisions and undertake transparent assessments of ecological cause and effect.
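The design-weighted evidence tally at the heart of such causal criteria analysis can be sketched as follows (a minimal illustration; the design categories and weight values here are hypothetical, not the actual Eco Evidence weighting scheme or thresholds):

```python
# Hypothetical study-design weights: stronger designs count for more,
# but no evidence is discarded.
DESIGN_WEIGHTS = {
    "before-after-control-impact": 4,
    "control-impact": 2,
    "gradient": 2,
    "after-impact-only": 1,
}

def evidence_score(studies):
    """Tally weighted evidence for and against a cause-effect hypothesis.

    studies: iterable of (design, supports_hypothesis) pairs, where
    design is a key of DESIGN_WEIGHTS and supports_hypothesis is a bool.
    Returns (weight supporting, weight opposing).
    """
    support = sum(DESIGN_WEIGHTS[d] for d, s in studies if s)
    against = sum(DESIGN_WEIGHTS[d] for d, s in studies if not s)
    return support, against
```

Comparing the two totals against preset thresholds then yields the "support for / against / insufficient evidence" verdicts the method reports; a weak study can never single-handedly outweigh a strong one, but several weak studies together still move the tally.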