Stress granules (SGs) form during cellular stress and are implicated in neurodegenerative diseases such as amyotrophic lateral sclerosis and frontotemporal dementia (ALS/FTD). To yield insights into the role of SGs in pathophysiology, we performed a high-content screen to identify small molecules that alter SG properties in proliferative cells and human iPSC-derived motor neurons (iPS-MNs). One major class of active molecules contained extended planar aromatic moieties, suggesting a potential to intercalate in nucleic acids. Accordingly, we show that several hit compounds can prevent the RNA-dependent recruitment of the ALS-associated RNA-binding proteins (RBPs) TDP-43, FUS, and HNRNPA2B1 into SGs. We further demonstrate that transient SG formation contributes to persistent accumulation of TDP-43 into cytoplasmic puncta and that our hit compounds can reduce this accumulation in iPS-MNs from ALS patients. We propose that compounds with planar moieties represent a promising starting point to develop small-molecule therapeutics for treating ALS/FTD.
• ∼100 small-molecule compounds modulate SGs in HEK293xT cells, NPCs, and iPS-MNs
• ALS-associated RBPs accumulate in SGs during prolonged stress
• Molecules with planar moieties disrupt accumulation of ALS-associated RBPs in SGs
• Compounds reduce TDP-43 accumulation in cytoplasmic puncta in ALS mutant iPS-MNs
Using high-content screening, we identified a class of planar small molecules that can (1) modulate the dynamics of neurodegeneration-linked stress granules (SGs), (2) reduce SG association of ALS-linked RNA-binding proteins, and (3) prevent accumulation of TDP-43 within persistent cytoplasmic puncta.
Context is known to affect how a stimulus is perceived. A variety of illusions, from orientation tilt effects to chromatic induction phenomena, have been attributed to contextual processing, but their neural underpinnings remain poorly understood. Here, we present a recurrent network model of classical and extraclassical receptive fields that is constrained by the anatomy and physiology of the visual cortex. A key feature of the model is the postulated existence of near- versus far-extraclassical regions with complementary facilitatory and suppressive contributions to the classical receptive field. The model accounts for a variety of contextual illusions, reveals commonalities between seemingly disparate phenomena, and helps organize them into a novel taxonomy. It explains how center-surround interactions may shift from attraction to repulsion in tilt effects, and from contrast to assimilation in induction phenomena. The model further explains enhanced perceptual shifts generated by a class of patterned background stimuli that activate the two opponent extraclassical regions cooperatively. Overall, the ability of the model to account for the variety and complexity of contextual illusions provides computational evidence for a novel canonical circuit that is shared across visual modalities.
Skeletal muscle contractions are initiated by an increase in Ca2+ released during excitation–contraction (EC) coupling, and defects in EC coupling are associated with human myopathies. EC coupling requires communication between voltage-sensing dihydropyridine receptors (DHPRs) in the transverse tubule membrane and the Ca2+ release channel ryanodine receptor 1 (RyR1) in the sarcoplasmic reticulum (SR). The Stac3 protein (SH3 and cysteine-rich domain 3) is an essential component of the EC coupling apparatus, and a mutation in human STAC3 causes the debilitating Native American myopathy (NAM), but how Stac3 acts on the DHPR and/or RyR1 is unknown. Using electron microscopy, electrophysiology, and dynamic imaging of zebrafish muscle fibers, we find significantly reduced DHPR levels, functionality, and stability in stac3 mutants. Furthermore, stac3NAM myofibers exhibited increased caffeine-induced Ca2+ release across a wide range of concentrations in the absence of altered caffeine sensitivity, as well as increased Ca2+ in internal stores, consistent with increased SR luminal Ca2+. These findings define critical roles for Stac3 in EC coupling and human disease.
Neurotoxicity can be detected in live microscopy by morphological changes such as retraction of neurites, fragmentation, blebbing of the neuronal soma, and ultimately the disappearance of fluorescently labeled neurons. However, quantification of these features is often difficult, low-throughput, and imprecise due to the overreliance on human curation. Recently, we showed that convolutional neural network (CNN) models can outperform human curators in the assessment of neuronal death from images of fluorescently labeled neurons, suggesting that there is information within the images that indicates toxicity but that is not apparent to the human eye. In particular, the CNN's decision strategy indicated that information within the nuclear region was essential for its superhuman performance. Here, we systematically tested this prediction by comparing images of fluorescent neuronal morphology from nuclear-localized fluorescent protein to those from freely diffused fluorescent protein for classifying neuronal death. We found that biomarker-optimized (BO-) CNNs could learn to classify neuronal death from nuclear-localized fluorescent protein morphology (mApple-NLS-CNN) alone, with superhuman accuracy. Furthermore, leveraging methods from explainable artificial intelligence, we identified novel features within the nuclear-localized fluorescent protein signal that were indicative of neuronal death. Our findings suggest that the use of a nuclear morphology marker in live imaging, combined with computational models such as mApple-NLS-CNN, can provide an optimal readout of neuronal death, a common result of neurotoxicity.
Cellular events underlying neurodegenerative disease may be captured by longitudinal live microscopy of neurons. While the advent of robot-assisted microscopy has helped scale such efforts to high-throughput regimes with the statistical power to detect transient events, time-intensive human annotation is required. We addressed this fundamental limitation with biomarker-optimized convolutional neural networks (BO-CNNs): interpretable computer vision models trained directly on biosensor activity. We demonstrate the ability of BO-CNNs to detect cell death, which is typically measured by trained annotators. BO-CNNs detected cell death with superhuman accuracy and speed by learning to identify subcellular morphology associated with cell vitality, despite receiving no explicit supervision to rely on these features. These models also revealed an intranuclear morphology signal that is difficult to spot by eye and had not previously been linked to cell death, but that reliably indicates death. BO-CNNs are broadly useful for analyzing live microscopy and essential for interpreting high-throughput experiments.
Visual understanding requires comprehending complex visual relations between objects within a scene. Here, we seek to characterize the computational demands for abstract visual reasoning. We do this by systematically assessing the ability of modern deep convolutional neural networks (CNNs) to learn to solve the synthetic visual reasoning test (SVRT) challenge, a collection of 23 visual reasoning problems. Our analysis reveals a novel taxonomy of visual reasoning tasks, which can be primarily explained by both the type of relations (same-different versus spatial-relation judgments) and the number of relations used to compose the underlying rules. Prior cognitive neuroscience work suggests that attention plays a key role in humans' visual reasoning ability. To test this hypothesis, we extended the CNNs with spatial and feature-based attention mechanisms. In a second series of experiments, we evaluated the ability of these attention networks to learn to solve the SVRT challenge and found the resulting architectures to be much more efficient at solving the hardest of these visual reasoning tasks. Most important, the corresponding improvements on individual tasks partially explained our novel taxonomy. Overall, this work provides a granular computational account of visual reasoning and yields testable neuroscience predictions regarding the differential need for feature-based versus spatial attention depending on the type of visual reasoning problem.
The pathologic diagnosis and Gleason grading of prostate cancer are time-consuming, error-prone, and subject to interobserver variability. Machine learning offers opportunities to improve the diagnosis, risk stratification, and prognostication of prostate cancer.
To develop a state-of-the-art deep learning algorithm for the histopathologic diagnosis and Gleason grading of prostate biopsy specimens.
A total of 85 prostate core biopsy specimens from 25 patients were digitized at 20× magnification and annotated for Gleason 3, 4, and 5 prostate adenocarcinoma by a urologic pathologist. From these virtual slides, we sampled 14,803 image patches of 256×256 pixels, approximately balanced for malignancy.
We trained and tested a deep residual convolutional neural network to classify each patch at two levels: (1) coarse (benign vs malignant) and (2) fine (benign vs Gleason 3 vs 4 vs 5). Model performance was evaluated using fivefold cross-validation. Randomization tests were used for hypothesis testing of model performance versus chance.
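The fivefold cross-validation scheme described above can be illustrated with a minimal index-splitting sketch. This is a generic illustration under assumed conventions, not the authors' code; the function name and seed are hypothetical, and in practice splits for biopsy patches should be stratified by patient to avoid leakage.

```python
# Minimal fivefold cross-validation index splitting (illustrative sketch only,
# not the authors' implementation; a real pipeline would split by patient).
import random

def five_fold_splits(n_samples, seed=0):
    """Yield (train_indices, test_indices) for 5 shuffled, disjoint folds."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    k = 5
    for fold in range(k):
        test = indices[fold::k]                      # every 5th shuffled index
        held_out = set(test)
        train = [i for i in indices if i not in held_out]
        yield train, test

# Each sample appears in exactly one test fold across the 5 iterations.
for train, test in five_fold_splits(10):
    assert len(train) + len(test) == 10
```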
The model demonstrated 91.5% accuracy (p<0.001) at coarse-level classification of image patches as benign versus malignant (0.93 sensitivity, 0.90 specificity, and 0.95 average precision). The model demonstrated 85.4% accuracy (p<0.001) at fine-level classification of image patches as benign versus Gleason 3 versus Gleason 4 versus Gleason 5 (0.83 sensitivity, 0.94 specificity, and 0.83 average precision), with the greatest number of confusions in distinguishing between Gleason 3 and 4, and between Gleason 4 and 5. Limitations include the small sample size and the need for external validation.
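The headline numbers for the coarse (benign versus malignant) task follow from a standard binary confusion matrix. As a minimal sketch of how such metrics are computed (the labels below are invented toy data, not the study's results):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives), and specificity
    from binary labels, with 1 = malignant and 0 = benign."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# Toy example: 4 true positives, 3 true negatives, 1 false negative, 1 false positive.
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 1]
acc, sens, spec = binary_metrics(y_true, y_pred)  # sens = 0.8, spec = 0.75
```

Average precision additionally requires the model's continuous scores (it summarizes the precision-recall curve), so it is omitted from this hard-label sketch.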
In this study, a deep learning-based computer vision algorithm demonstrated excellent performance for the histopathologic diagnosis and Gleason grading of prostate cancer.
We developed a deep learning algorithm that demonstrated excellent performance for the diagnosis and grading of prostate cancer.
In this pilot study, a deep learning-based computer vision algorithm demonstrated excellent accuracy for the histopathologic diagnosis and Gleason grading of prostate cancer. These results are encouraging for the future clinical application of automated histopathologic diagnosis via deep learning.