CT perfusion imaging is commonly used for infarct core quantification in acute ischemic stroke patients. However, the outcomes and perfusion maps of CT perfusion software show many discrepancies between vendors. We aim to perform infarct core segmentation directly from CT perfusion source data using machine learning, removing the need for perfusion maps from standard CT perfusion software. To this end, we present a symmetry-aware spatio-temporal segmentation model that encodes the micro-perfusion dynamics in the brain while decoding a static segmentation map for infarct core assessment. Our proposed spatio-temporal PerfU-Net employs an attention module on the skip connections to match the dimensions of the encoder and decoder. We train and evaluate the method on 94 and 62 scans, respectively, using the Ischemic Stroke Lesion Segmentation (ISLES) 2018 challenge data. We achieve state-of-the-art results compared to methods that only use CT perfusion source imaging, with a Dice score of 0.46. We are almost on par with methods that use perfusion maps from third-party software, even though these maps are known to vary considerably between vendors. Moreover, we achieve improved performance compared to the simple perfusion map analysis used in clinical practice.
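The Dice score reported above is the standard overlap metric for comparing a predicted segmentation against a reference mask. A minimal sketch of its computation for binary masks (the convention of returning 1.0 for two empty masks is an assumption, not stated in the abstract):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (reference)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated here as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy example: two 2x3 masks overlapping in two voxels.
pred = np.array([[1, 1, 0], [0, 1, 0]])
ref  = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_score(pred, ref))  # 2*2 / (3+3) ≈ 0.667
```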
• Baseline infarct segmentation directly from CT perfusion source data.
• Independent of discrepant perfusion maps from external software.
• Symmetry-aware model exploits the infarcted and healthy hemispheres simultaneously.
• PerfU-Net encodes dynamic CT perfusion source data and decodes static segmentations.
• PerfU-Net employs attention to propagate only the most informative features.
CT perfusion imaging is important in the imaging workup of acute ischemic stroke for evaluating affected cerebral tissue. CT perfusion analysis software produces cerebral perfusion maps from commonly noisy spatio-temporal CT perfusion data. High levels of noise can influence the results of CT perfusion analysis, necessitating software tuning. This work proposes a novel approach for CT perfusion analysis that uses physics-informed learning, an optimization framework that is robust to noise. In particular, we propose SPPINN: Spatio-temporal Perfusion Physics-Informed Neural Network, and research spatio-temporal physics-informed learning. SPPINN learns implicit neural representations of contrast attenuation in CT perfusion scans from the spatio-temporal coordinates of the data and employs these representations to estimate a continuous representation of the cerebral perfusion parameters. We validate the approach on simulated data to quantify perfusion parameter estimation performance. Furthermore, we apply the method to in-house patient data and the public Ischemic Stroke Lesion Segmentation 2018 benchmark data to assess the correspondence between the perfusion maps and reference standard infarct core segmentations. Our method achieves accurate perfusion parameter estimates even with high noise levels and differentiates healthy tissue from infarcted tissue. Moreover, SPPINN perfusion maps accurately correspond with reference standard infarct core segmentations. Hence, we show that spatio-temporal physics-informed learning for cerebral perfusion estimation is accurate, even in noisy CT perfusion data. The code for this work is available at https://github.com/lucasdevries/SPPINN.
•We present SPPINN: A new approach to CT perfusion analysis in acute ischemic stroke.•We use spatio-temporal physics-informed learning with implicit neural representations.•SPPINN is consistent across noise levels and distinguishes healthy/infarcted tissue.•Perfusion maps accurately correspond to reference standard infarct core segmentations.
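For context, the conventional noise-sensitive analysis that map-producing CT perfusion software typically performs is deconvolution of the tissue attenuation curve with an arterial input function (AIF). A minimal truncated-SVD sketch of that classical baseline is below; it is not the SPPINN method, and the curve shapes and the `lambda_rel` truncation threshold are illustrative assumptions:

```python
import numpy as np

def svd_deconvolution(aif, tissue, dt, lambda_rel=0.2):
    """Estimate k(t) = CBF * R(t) from an AIF and a tissue curve via
    truncated-SVD deconvolution (classical block-circulant-free variant)."""
    n = len(aif)
    # Lower-triangular Toeplitz convolution matrix built from the AIF.
    A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)]) * dt
    U, s, Vt = np.linalg.svd(A)
    # Zero small singular values to suppress noise amplification.
    s_inv = np.where(s > lambda_rel * s.max(), 1.0 / s, 0.0)
    k = Vt.T @ (s_inv * (U.T @ tissue))
    cbf = k.max()                    # flow estimate: peak of residue function
    cbv = tissue.sum() / aif.sum()   # rough blood-volume estimate (area ratio)
    return cbf, cbv

# Noise-free simulation: tissue = dt * (AIF convolved with CBF * R).
dt = 1.0
t = np.arange(60.0)
aif = np.exp(-t / 4.0)                 # idealized exponentially decaying AIF
cbf_true, mtt = 0.6, 4.0
residue = cbf_true * np.exp(-t / mtt)  # exponential residue model
tissue = np.convolve(aif, residue)[:len(t)] * dt
# Tiny threshold here because the simulation is noise-free.
cbf_est, cbv_est = svd_deconvolution(aif, tissue, dt, lambda_rel=1e-6)
print(round(cbf_est, 2))  # ≈ 0.6, recovering cbf_true
```

With noisy measured curves, a larger `lambda_rel` trades bias for noise suppression, which is exactly the tuning sensitivity the abstract describes.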
Intravenous thrombolysis (IVT) before endovascular treatment (EVT) for acute ischemic stroke might induce intracerebral hemorrhages, which could negatively affect patient outcomes. Measuring white matter lesion size using deep learning (DL-WML) might help safely guide IVT administration. We aimed to develop, validate, and evaluate a DL-WML volume measure on CT, compared to the Fazekas scale (WML-Faz), as a risk factor and IVT effect modifier in patients receiving EVT directly after IVT.
We developed a deep-learning model for WML segmentation on CT and validated it with internal and external test sets. In a post hoc analysis of the MR CLEAN No-IV trial, we associated DL-WML volume and WML-Faz with symptomatic intracerebral hemorrhage (sICH) and 90-day functional outcome according to the modified Rankin Scale (mRS). We used multiplicative interaction terms between the WML measures and IVT administration to evaluate IVT treatment effect modification. Regression models were used to report unadjusted and adjusted common odds ratios (cOR/acOR).
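The multiplicative interaction term mentioned above enters the regression as a product column in the design matrix, and effect modification is read off the exponentiated interaction coefficient. A hypothetical sketch with simulated data (not trial data; all coefficients and names are illustrative) using a plain Newton-Raphson logistic fit:

```python
import numpy as np

# Simulated cohort: a continuous WML measure, a binary IVT indicator,
# and an outcome generated with a true interaction of 0.4 on the log-odds scale.
rng = np.random.default_rng(0)
n = 2000
wml = rng.normal(size=n)           # standardized WML volume (simulated)
ivt = rng.integers(0, 2, size=n)   # IVT given: yes/no
logit = -2 + 0.5 * wml + 0.3 * ivt + 0.4 * wml * ivt
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Design matrix with the multiplicative interaction column wml * ivt.
X = np.column_stack([np.ones(n), wml, ivt, wml * ivt])
beta = np.zeros(4)
for _ in range(25):  # Newton-Raphson iterations for logistic regression
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

odds_ratios = np.exp(beta)  # [intercept, wml, ivt, wml:ivt]
print(np.round(odds_ratios, 2))
```

An interaction odds ratio above 1 would indicate that the WML measure amplifies the association between IVT and the outcome, which is the effect modification the analysis tests.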
In total, 516 patients from the MR CLEAN No-IV trial (291 male/225 female; median age 71, IQR 62-79) were analyzed. Both DL-WML volume and WML-Faz were associated with sICH (DL-WML volume: acOR 1.78, 95% CI 1.17-2.70; WML-Faz: acOR 1.53, 95% CI 1.02-2.31) and mRS (DL-WML volume: acOR 0.70, 95% CI 0.55-0.87; WML-Faz: acOR 0.73, 95% CI 0.60-0.88). Only in the unadjusted IVT effect-modification analysis was WML-Faz associated with more sICH if IVT was given (p = 0.046). Neither WML measure was associated with worse mRS if IVT was given.
DL-WML volume and WML-Faz had a similar relationship with functional outcome and sICH. Although more sICH might occur in patients with more severe WML-Faz receiving IVT, no worse functional outcome was observed.
White matter lesion severity on baseline CT in acute ischemic stroke patients has a similar predictive value if measured with deep learning or the Fazekas scale. Safe administration of intravenous thrombolysis using white matter lesion severity should be further studied.
White matter damage is a predisposing risk factor for intracranial hemorrhage in patients with acute ischemic stroke but remains difficult to measure on CT. White matter lesion volume on CT measured with deep learning had a similar association with symptomatic intracerebral hemorrhages and worse functional outcome as the Fazekas scale. A patient-level meta-analysis is required to study the benefit of white matter lesion severity-based selection for intravenous thrombolysis before endovascular treatment.
While Deep Neural Networks (DNNs) achieve state-of-the-art performance in many fields, e.g., object recognition, they rely on deep networks with millions or even billions of parameters. Accelerating DNNs by reducing their number of parameters is crucial for real-time object recognition. This paper presents an evolutionary approach to evolve efficient DNNs that can run on Low-Performance Computing Hardware (LPCH) for real-time object recognition at the fastest possible speed with an accuracy of more than 95%. The approach achieves this goal through two design choices. First, NeuroEvolution of Augmenting Topologies (NEAT) is applied to evolve both the weights and the topology of DNNs starting from a simple initial topology, which reduces the number of parameters from millions to thousands. Second, we propose novel fitness functions to further select the evolved DNNs for lower computation time while maintaining high accuracy. We test the approach on the well-known MNIST benchmark dataset and a self-defined modular robots dataset. Furthermore, compared with most current studies, we not only evolve DNNs on the datasets but also deploy the best evolved DNN on LPCH to recognize objects in real time in the real world. The experimental results show that the best evolved DNN recognizes the modular robots on a microcomputer, a Raspberry Pi 3, with an accuracy of 95.6% and a speed of 5.3 fps. This work can be extended to obtain efficient DNNs for other real-time tasks. We published the source code 1 used to evolve the efficient DNNs and a video 2 in which the best evolved DNN runs on a Raspberry Pi 3 to recognize two modular robots simultaneously in the real world.
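One way to realize a fitness function that "selects for lower computation time while maintaining high accuracy" is a lexicographic scheme: candidates below the accuracy target compete on accuracy alone, and only those above it compete on speed. This is a hedged sketch, not the paper's exact formulation; the 95% threshold comes from the abstract, while `time_scale` and the functional form are assumptions:

```python
def fitness(accuracy, inference_time_s, acc_threshold=0.95, time_scale=1.0):
    """Fitness for an evolved network: climb toward the accuracy threshold
    first, then reward faster inference. Constants are illustrative."""
    if accuracy < acc_threshold:
        return accuracy  # still below target: rank by accuracy alone
    # Past the threshold: add a bonus that grows as inference time shrinks.
    return acc_threshold + time_scale / (1.0 + inference_time_s)

# At ~5.3 fps, one frame takes ~0.19 s; a slower but slightly more
# accurate network loses once both networks clear the 95% bar.
fast = fitness(0.956, 0.19)
slow = fitness(0.970, 1.00)
print(fast > slow)  # True: the faster network is preferred
```

Capping the accuracy contribution at the threshold prevents evolution from trading large amounts of speed for marginal accuracy gains beyond the requirement.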
Energy use in developing countries is heterogeneous across households. Present-day global energy models are mostly too aggregate to account for this heterogeneity. Here, a bottom-up model for residential energy use that starts from key dynamic concepts on energy use in developing countries is presented and applied to India. Energy use and fuel choice are determined for five end-use functions (cooking, water heating, space heating, lighting and appliances) and for five income quintiles in rural and urban areas. The paper specifically explores the consequences of different assumptions for income distribution and rural electrification on residential sector energy use and CO2 emissions, finding that results are clearly sensitive to variations in these parameters. As a result of population and economic growth, total Indian residential energy use is expected to increase by around 65-75% in 2050 compared to 2005, but residential carbon emissions may increase by up to 9-10 times the 2005 level. While a more equal income distribution and rural electrification enhance the transition to commercial fuels and reduce poverty, there is a trade-off in terms of higher CO2 emissions via increased electricity use.
► A bottom-up model for residential energy use was developed and applied to India.
► The model distinguishes five end-use functions and rural/urban income quintiles.
► We explore consequences of income distribution and electrification on energy use.
► Equal income and electrification enhance the transition to commercial fuels.
► Higher CO2 emissions from increased electricity use are a trade-off.
Up to 80% of wheelchair users are affected by shoulder pain. The Clinical Practice Guidelines for preservation of upper limb function following spinal cord injury suggest that using a proper wheelchair propulsion technique could minimize the risk of shoulder injury. Yet, the exact relationship between wheelchair propulsion technique and shoulder load is not well understood.
This study aimed to examine the changes in shoulder loading accompanying the typical changes in propulsion technique following 80 min of low-intensity wheelchair practice distributed over 3 weeks.
Seven able-bodied participants performed the pre- and the post-test and 56 min of visual feedback-based low-intensity wheelchair propulsion practice. Kinematics and kinetics of propulsion technique were recorded during the pre- and the post-test. A musculoskeletal model was used to calculate muscle force and glenohumeral reaction force.
Participants decreased push frequency (51→36 pushes/min, p = 0.04) and increased contact angle (68→94°, p = 0.02) between the pre- and the post-test. The excursion of the upper arm increased, approaching significance (297→342 mm, p = 0.06). Range of motion of the hand, trunk and shoulder remained unchanged. The mean glenohumeral reaction force per cycle decreased by 13%, approaching significance (268→232 N, p = 0.06).
Despite homogeneous changes in propulsion technique, the kinematic solution to the task varied among the participants. Participants exhibited two glenohumeral reaction force distribution patterns: 1) two individuals developed high force at the onset of the push, leading to increased peak and mean glenohumeral forces; 2) five individuals distributed the force more evenly over the cycle, lowering both peak and mean glenohumeral forces.
Targeted DNA double-strand breaks (DSBs) with CRISPR-Cas9 have revolutionized genetic modification by enabling efficient genome editing in a broad range of eukaryotic systems. Accurate gene editing is possible with near-perfect efficiency in haploid or (predominantly) homozygous genomes. However, genomes exhibiting polyploidy and/or high degrees of heterozygosity are less amenable to genetic modification. Here, we report an up to 99-fold lower gene editing efficiency when editing individual heterozygous loci in the yeast genome. Moreover, Cas9-mediated introduction of a DSB resulted in large-scale loss of heterozygosity affecting DNA regions of up to 360 kb and up to 1700 heterozygous nucleotides, due to replacement of sequences on the targeted chromosome by corresponding sequences from its non-targeted homolog. The observed patterns of loss of heterozygosity were consistent with homology-directed repair. The extent and frequency of loss of heterozygosity represent a novel mutagenic side effect of Cas9-mediated genome editing, which would have to be taken into account in eukaryotic gene editing. In addition to contributing to the limited genetic amenability of heterozygous yeasts, Cas9-mediated loss of heterozygosity could be particularly deleterious for human gene therapy, as loss of heterozygous functional copies of anti-proliferative and pro-apoptotic genes is a known path to cancer.