Schizophrenia is a devastating mental disorder with an apparent disruption in the highly associative default mode network (DMN). Interplay between this canonical network and others probably contributes to goal-directed behavior, so its disturbance is a candidate neural fingerprint underlying schizophrenia psychopathology. Previous research has reported both hyperconnectivity and hypoconnectivity within the DMN, and both increased and decreased DMN coupling with the multimodal salience network (SN) and dorsal attention network (DAN). This study systematically revisited network disruption in patients with schizophrenia using data-derived network atlases and multivariate pattern-learning algorithms in a multisite dataset (n = 325). Resting-state fluctuations in unconstrained brain states were used to estimate functional connectivity, and local volume differences between individuals were used to estimate structural co-occurrence within and between the DMN, SN, and DAN. In brain structure and function, sparse inverse covariance estimates of network coupling were used to characterize healthy participants and patients with schizophrenia, and to identify statistically significant group differences. Evidence did not confirm that the backbone of the DMN was the primary driver of brain dysfunction in schizophrenia. Instead, functional and structural aberrations were frequently located outside of the DMN core, such as in the anterior temporoparietal junction and precuneus. Additionally, functional covariation analyses highlighted dysfunctional DMN-DAN coupling, while structural covariation results highlighted aberrant DMN-SN coupling. Our findings reframe the role of the DMN core and its relation to canonical networks in schizophrenia. We thus underline the importance of large-scale neural interactions as effective biomarkers and indicators of how to tailor psychiatric care to single patients.
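The sparse inverse covariance estimation mentioned above can be illustrated with scikit-learn's graphical lasso. The following is a minimal sketch on synthetic data, not the study's actual pipeline; the number of regions and the signals are placeholders.

```python
# Illustrative sketch: sparse inverse covariance (graphical lasso) as an
# estimate of direct coupling between network nodes. Synthetic data only.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n_subjects, n_regions = 100, 6  # placeholder sizes, e.g. DMN/SN/DAN nodes
signals = rng.standard_normal((n_subjects, n_regions))

model = GraphicalLassoCV().fit(signals)
precision = model.precision_  # sparse inverse covariance matrix

# Nonzero off-diagonal entries of the precision matrix indicate direct
# (partial-correlation) coupling between the corresponding regions.
coupling = np.abs(precision) > 1e-6
```

In practice one such matrix would be estimated per group, and group differences tested on the resulting coupling values.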
Altered brain connectivity has been described in people with Parkinson's disease and in response to dopaminergic medications. However, it is unclear whether dopaminergic medications primarily 'normalize' disease-related connectivity changes or if they induce unique alterations in brain connectivity. Further, it is unclear how these disease- and medication-associated changes in brain connectivity relate differently to specific motor manifestations of disease, such as bradykinesia/rigidity and tremor. In this study, we applied a novel covariance projection approach in combination with a bootstrapped permutation test to resting-state functional MRI data from 57 Parkinson's disease and 20 healthy control participants to determine the Parkinson's medication-state and disease-state connectivity changes associated with different motor manifestations of disease. First, we identified brain connections that best classified Parkinson's disease ON versus OFF dopamine and Parkinson's disease versus healthy controls, achieving 96.9 ± 5.9% and 72.7 ± 12.4% classification accuracy, respectively. Second, we investigated the connections that significantly contribute to the classifications. We found that the connections greater in Parkinson's disease OFF compared to ON dopamine are primarily between motor (cerebellum and putamen) and posterior cortical regions, such as the posterior cingulate cortex. By contrast, connections that are greater in ON compared to OFF dopamine are between the right and left medial prefrontal cortex. We also identified the connections that are greater in healthy control compared to Parkinson's disease and found the most significant connections are associated with primary motor regions, such as the striatum and the supplementary motor area. Notably, these are different connections than those identified in Parkinson's disease OFF compared to ON.
Third, we determined which of the Parkinson's medication-state and disease-state connections are associated with the severity of different motor symptoms. We found that two connections correlate with both bradykinesia/rigidity severity and tremor severity, whereas four connections correlate with only bradykinesia/rigidity severity, and five connections correlate with only tremor severity. Connections that correlate with only tremor severity are anchored by the cerebellum and the supplementary motor area, but only those connections that include the supplementary motor area predict dopaminergic improvement in tremor. Our results suggest that dopaminergic medications do not simply 'normalize' abnormal brain connectivity associated with Parkinson's disease; rather, dopamine drives distinct connectivity changes, only some of which are associated with improved motor symptoms. In addition, the dissociation between connections related to severity of bradykinesia/rigidity versus tremor highlights the distinct abnormalities in brain circuitry underlying these specific motor symptoms.
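The label-permutation logic behind such classification analyses can be sketched as follows. This is an illustrative stand-in using synthetic features and a logistic-regression classifier, not the paper's covariance projection approach; the sample sizes mirror the study's 57 patients and 20 controls.

```python
# Hedged sketch of a permutation test for classification accuracy:
# compare the observed cross-validated accuracy against a null
# distribution obtained by shuffling the group labels. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.standard_normal((77, 20))   # placeholder connectivity features
y = np.array([1] * 57 + [0] * 20)   # disease-state labels (PD vs control)
X[y == 1] += 0.5                    # inject a synthetic group difference

clf = LogisticRegression(max_iter=1000)
observed = cross_val_score(clf, X, y, cv=5).mean()

null = []
for _ in range(100):                # permutation null distribution
    y_perm = rng.permutation(y)
    null.append(cross_val_score(clf, X, y_perm, cv=5).mean())
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
```

The permutation p-value asks how often shuffled labels yield an accuracy at least as high as the observed one; the paper additionally bootstraps to identify which individual connections drive the classification.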
Ultrafast synchrotron microtomography has been used to study in situ and in real time the initial stages of silicate glass melt formation from crystalline granular raw materials. Significant and unexpected rearrangements of grains occur below the nominal eutectic temperature, and several drastically different solid-state reactions are observed to take place at different types of intergranular contacts. These reactions have a profound influence on the formation and composition of the liquids produced, and control the formation of heterogeneities.
Scientists are skilled with computers, and many of them understand the intricacies of numerical computing. Yet designing the sophisticated software architecture that controls an experiment requires different skills, and small- and mid-sized experimental labs often lack a software engineering culture. Bad design choices plague experimental labs even though the real experimental difficulty seldom lies in the software itself. In this article, I give some guidelines for designing an experiment's control software, based on my experience in various Bose-Einstein condensation labs. I explore the tools and patterns that lead to successful projects, in particular a flexible and reliable code base that lets scientists cope with a research lab's ever-changing goals and resources.
We report on light-shift tomography of a cloud of 87Rb atoms in a far-detuned optical-dipole trap at 1565 nm. Our method is based on standard absorption imaging, but takes advantage of the strong light-shift of the excited state of the imaging transition, which is due to a quasi-resonance of the trapping laser with a higher excited level. We use this method to (i) map the equipotentials of a crossed optical-dipole trap, and (ii) study the thermalisation of an atomic cloud by following the evolution of the potential energy of atoms during the free-evaporation process.
Are two sets of observations drawn from the same distribution? This problem is a two-sample test. Kernel methods lead to many appealing properties. Indeed, state-of-the-art approaches use the $L^2$ distance between kernel-based distribution representatives to derive their test statistics. Here, we show that $L^p$ distances (with $p \geq 1$) between these distribution representatives give metrics on the space of distributions that are well-behaved for detecting differences between distributions, as they metrize weak convergence. Moreover, for analytic kernels, we show that the $L^1$ geometry gives improved testing power for scalable computational procedures. Specifically, we derive a finite-dimensional approximation of the metric given as the $\ell_1$ norm of a vector which captures differences of expectations of analytic functions evaluated at spatial locations or frequencies (i.e., features). The features can be chosen to maximize the differences between the distributions and give interpretable indications of how they differ. Using an $\ell_1$ norm gives better detection because differences between representatives are dense when we use analytic kernels (non-zero almost everywhere). The tests are consistent, while much faster than state-of-the-art quadratic-time kernel-based tests. Experiments on artificial and real-world problems demonstrate a better power/time tradeoff than the state of the art, based on $\ell_2$ norms, and in some cases better outright power than even the most expensive quadratic-time tests.
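The finite-dimensional $\ell_1$ statistic described above can be sketched as follows, assuming a Gaussian kernel and a handful of random test locations. This is a simplified stand-in for the full test procedure (no calibration of the rejection threshold is shown), on synthetic data.

```python
# Illustrative sketch: l1 norm of differences between kernel mean
# embeddings evaluated at a few test locations ("features").
import numpy as np

def l1_me_statistic(X, Y, locations, gamma=1.0):
    """l1 distance between Gaussian-kernel mean embeddings at given locations."""
    def mean_embedding(Z):
        # k(z, t) = exp(-gamma * ||z - t||^2), averaged over the sample
        sq_dists = ((Z[:, None, :] - locations[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq_dists).mean(axis=0)
    return np.abs(mean_embedding(X) - mean_embedding(Y)).sum()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(1.0, 1.0, size=(500, 2))     # mean-shifted distribution
T = rng.normal(0.5, 1.0, size=(5, 2))       # J = 5 random test locations

stat_diff = l1_me_statistic(X, Y, T)        # distributions differ
stat_same = l1_me_statistic(X, rng.normal(0.0, 1.0, size=(500, 2)), T)
```

The statistic is larger when the two samples come from different distributions; in the actual method the locations can also be optimized to maximize test power, making them interpretable.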
Scikit-learn. Varoquaux, G.; Buitinck, L.; Louppe, G.; ... GetMobile (New York, N.Y.), 06/2015, Volume 19, Issue 1. Journal article, peer-reviewed.
Machine learning is a pervasive development at the intersection of statistics and computer science. While it can benefit many data-related applications, the technical nature of the research literature and the corresponding algorithms slows down its adoption. Scikit-learn is an open-source software project that aims at making machine learning accessible to all, whether it be in academia or in industry. It benefits from the general-purpose Python language, which is both broadly adopted in the scientific world, and supported by a thriving ecosystem of contributors. Here we give a quick introduction to scikit-learn as well as to machine-learning basics.
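In that spirit, the canonical scikit-learn workflow is a short fit/predict pattern; a minimal example on the bundled iris dataset:

```python
# Minimal scikit-learn workflow: split data, fit an estimator, score it.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction of correct predictions
```

Every estimator exposes the same `fit`/`predict`/`score` interface, which is what makes the library approachable: swapping in another model changes one line.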
Medical segmentation models are evaluated empirically. As such an evaluation is based on a limited set of example images, it is unavoidably noisy. Beyond a mean performance measure, reporting confidence intervals is thus crucial. However, this is rarely done in medical image segmentation. The width of the confidence interval depends on the test set size and on the spread of the performance measure (its standard deviation across the test set). For classification, many test images are needed to avoid wide confidence intervals. Segmentation, however, has not been studied in this respect, and it differs in the amount of information brought by a given test image. In this paper, we study the typical confidence intervals in medical image segmentation. We carry out experiments on 3D image segmentation using the standard nnU-net framework, two datasets from the Medical Decathlon challenge, and two performance measures: the Dice accuracy and the Hausdorff distance. We show that parametric confidence intervals are reasonable approximations of the bootstrap estimates for varying test set sizes and spreads of the performance metric. Importantly, we show that the test set size needed to achieve a given precision is often much lower than for classification tasks. Typically, a 1% wide confidence interval requires about 100-200 test samples when the spread is low (standard deviation around 3%). More difficult segmentation tasks may lead to higher spreads and require over 1000 samples.
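The comparison between parametric and bootstrap confidence intervals can be sketched on synthetic Dice scores; the numbers below are illustrative, chosen to match the low-spread regime described above (standard deviation around 3%, about 150 test images), not results from the paper.

```python
# Hedged sketch: parametric CI (mean +/- 1.96 * standard error) versus a
# percentile bootstrap CI, on synthetic per-image Dice scores.
import numpy as np

rng = np.random.default_rng(0)
dice = np.clip(rng.normal(0.85, 0.03, size=150), 0, 1)  # 150 test images

# Parametric 95% CI for the mean Dice
se = dice.std(ddof=1) / np.sqrt(len(dice))
parametric = (dice.mean() - 1.96 * se, dice.mean() + 1.96 * se)

# Percentile bootstrap 95% CI (resample images with replacement)
boot_means = [rng.choice(dice, size=len(dice), replace=True).mean()
              for _ in range(2000)]
bootstrap = tuple(np.percentile(boot_means, [2.5, 97.5]))

width = parametric[1] - parametric[0]  # about 1% wide in this regime
```

With a 3% spread and 150 images the interval is roughly 1% wide, and the two constructions agree closely, which is the paper's point about parametric intervals being a reasonable approximation.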
It has been known for at least a decade that functional MRI time series display long-memory properties, such as power-law scaling in the frequency spectrum. Concomitantly, multivariate model-free analysis of spatial patterns, such as spatial Independent Component Analysis (sICA), has been successfully used to segment from spontaneous activity Resting-State Networks (RSNs) that correspond to known brain function. As recent neuroscientific studies suggest a link between spectral properties of brain activity and cognitive processes, a burning question emerges: can temporal scaling properties offer new markers of brain states encoded in these large-scale networks? In this paper, we combine two recent methodologies: group-level canonical ICA for multi-subject segmentation of brain networks, and the wavelet leader-based multifractal formalism for the analysis of RSN scaling properties. We identify the brain networks that elicit self-similarity or multifractality and explore which spectral properties correspond specifically to known functionally-relevant processes in spontaneous activity.
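The power-law scaling at the heart of this abstract can be illustrated with a much simpler estimator than wavelet leaders: a log-log fit of the Welch periodogram of a synthetic 1/f signal. This is an illustrative sketch only; the multifractal formalism used in the paper is considerably richer than a single spectral slope.

```python
# Illustrative sketch: estimate the power-law exponent of a synthetic
# 1/f signal from the slope of its log-log Welch periodogram.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
n = 2 ** 14

# Synthesize a signal with power spectrum S(f) ~ 1/f by shaping
# white Gaussian noise in the Fourier domain.
freqs_full = np.fft.rfftfreq(n)
spectrum = (rng.standard_normal(len(freqs_full))
            + 1j * rng.standard_normal(len(freqs_full)))
spectrum[1:] /= np.sqrt(freqs_full[1:])  # amplitude ~ f^(-1/2) -> power ~ 1/f
spectrum[0] = 0.0
signal = np.fft.irfft(spectrum, n)

# Welch periodogram and log-log linear fit of its slope
f, pxx = welch(signal, nperseg=1024)
mask = f > 0
slope = np.polyfit(np.log(f[mask]), np.log(pxx[mask]), 1)[0]
# slope is expected to be close to -1 for 1/f scaling
```

A single slope like this captures monofractal (self-similar) scaling; distinguishing self-similarity from genuine multifractality is precisely what the wavelet-leader formalism adds.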