COVID-19 is a life-threatening disease with an enormous global impact. As the cause of the disease is a novel coronavirus whose genetic information was initially unknown, drugs and vaccines are yet to be found. In the present situation, disease-spread analysis and prediction with the help of mathematical and data-driven models will be of great help in initiating prevention and control actions such as lockdown and quarantine. Various mathematical and machine-learning models have been proposed for analyzing and predicting the spread, each with its own advantages and limitations for a particular scenario. This article reviews the state-of-the-art mathematical models for COVID-19, including compartment models, statistical models, and machine-learning models, to provide insight so that an appropriate model can be adopted for disease-spread analysis. Furthermore, accurate diagnosis of COVID-19 is another essential process for identifying infected persons and controlling further spread. As the spread is fast, there is a need for a quick, automated diagnosis mechanism to handle a large population. Deep-learning- and machine-learning-based diagnostic mechanisms are well suited to this purpose. In this respect, a comprehensive review of deep-learning models for the diagnosis of the disease is also provided in this article.
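The compartment models surveyed above can be illustrated with the classic SIR model, in which a population is split into susceptible, infected, and recovered fractions. The sketch below integrates the SIR equations with forward Euler; the parameter values are illustrative assumptions, not fitted to any real outbreak data.

```python
# Minimal SIR compartment-model sketch (illustrative parameters, not fitted to data).
def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I
    with forward Euler; s, i, r are population fractions summing to 1."""
    s, i, r = s0, i0, r0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Basic reproduction number R0 = beta/gamma = 3 here, so an epidemic occurs.
s, i, r = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0, days=160)
```

With these parameters most of the population passes through the infected compartment by day 160, which is the kind of trajectory such models use to time interventions like lockdowns.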
Neuromorphic computing has emerged as a promising avenue towards building the next generation of intelligent computing systems. It has been proposed that memristive devices, which exhibit history-dependent conductivity modulation, could efficiently represent the synaptic weights in artificial neural networks. However, precise modulation of the device conductance over a wide dynamic range, necessary to maintain high network accuracy, is proving to be challenging. To address this, we present a multi-memristive synaptic architecture with an efficient global counter-based arbitration scheme. We focus on phase change memory devices, develop a comprehensive model and demonstrate via simulations the effectiveness of the concept for both spiking and non-spiking neural networks. Moreover, we present experimental results involving over a million phase change memory devices for unsupervised learning of temporal correlations using a spiking neural network. The work presents a significant step towards the realization of large-scale and energy-efficient neuromorphic computing systems.
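The multi-memristive synapse with counter-based arbitration can be sketched as follows: each synapse is realized by N devices whose conductances are summed at read time, and a single global counter selects which device receives each programming pulse, spreading updates across devices. The class names, pulse granularity, and the idealized (noise-free) device model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of a multi-memristive synapse with global counter arbitration.
class MultiMemristiveSynapse:
    def __init__(self, n_devices, g_max=1.0, dg=0.1):
        self.g = np.zeros(n_devices)  # per-device conductances
        self.g_max = g_max            # conductance saturation level
        self.dg = dg                  # conductance change per pulse
        self.counter = 0              # global arbitration counter

    def read(self):
        # Effective synaptic weight is the sum of the device conductances.
        return self.g.sum()

    def potentiate(self):
        # The counter picks which device to program, then advances,
        # so successive pulses are distributed across all devices.
        idx = self.counter % len(self.g)
        self.g[idx] = min(self.g[idx] + self.dg, self.g_max)
        self.counter += 1

syn = MultiMemristiveSynapse(n_devices=4)
for _ in range(8):
    syn.potentiate()   # 8 pulses spread evenly: 2 per device
```

Because updates rotate across devices, each device absorbs only a fraction of the pulses, which is what relaxes the per-device precision requirement.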
In-memory computing using resistive memory devices is a promising non-von Neumann approach for making energy-efficient deep learning inference hardware. However, due to device variability and noise, the network needs to be trained in a specific way so that transferring the digitally trained weights to the analog resistive memory devices will not result in significant loss of accuracy. Here, we introduce a methodology to train ResNet-type convolutional neural networks that results in no appreciable accuracy loss when transferring weights to phase-change memory (PCM) devices. We also propose a compensation technique that exploits the batch normalization parameters to improve the accuracy retention over time. We achieve a classification accuracy of 93.7% on CIFAR-10 and a top-1 accuracy of 71.6% on ImageNet benchmarks after mapping the trained weights to PCM. Our hardware results on CIFAR-10 with ResNet-32 demonstrate an accuracy above 93.5% retained over a one-day period, where each of the 361,722 synaptic weights is programmed on just two PCM devices organized in a differential configuration.
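The differential configuration mentioned above stores each weight on a device pair, with the effective weight proportional to the conductance difference G+ − G−. A minimal sketch of that mapping, assuming a simple linear scaling to a hypothetical maximum conductance and ignoring programming noise and drift:

```python
import numpy as np

# Sketch of differential weight mapping: w ∝ (G_plus − G_minus).
# The g_max value and linear scaling are illustrative assumptions.
def map_weights_differential(w, g_max=25.0):
    """Map a weight array to two conductance arrays (arbitrary units):
    positive weights land on G_plus, negative weights on G_minus."""
    scale = g_max / np.abs(w).max()          # fit largest |w| to g_max
    g_plus = np.clip(w, 0.0, None) * scale   # positive part
    g_minus = np.clip(-w, 0.0, None) * scale # negative part
    return g_plus, g_minus, scale

def read_weights(g_plus, g_minus, scale):
    # Effective weight recovered from the conductance difference.
    return (g_plus - g_minus) / scale

w = np.array([[0.5, -0.25], [0.0, 1.0]])
g_plus, g_minus, scale = map_weights_differential(w)
w_read = read_weights(g_plus, g_minus, scale)
```

In real hardware the read-back weights would additionally be perturbed by programming errors and conductance drift, which is what the paper's batch-normalization-based compensation targets.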
Memristive devices, whose conductance depends on previous programming history, are of significant interest for building nonvolatile memory and brain-inspired computing systems. Here, we report half-integer quantized conductance transitions G = (n/2)(2e²/h) for n = 1, 2, 3, etc., in Cu/SiO2/W memristive devices observed below 300 mV at room temperature. This is attributed to the nanoscale filamentary nature of Cu conductance pathways formed inside SiO2. Retention measurements also show spontaneous filament decay with quantized conductance levels. Numerical simulations shed light on the dynamics underlying the data retention loss mechanisms and provide new insights into the nanoscale physics of memristive devices and trade-offs involved in engineering them for computational applications.
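The quantized levels follow directly from the conductance quantum G₀ = 2e²/h ≈ 77.5 µS; the half-integer series reported above sits at multiples of G₀/2. A quick numeric check using the SI-exact values of e and h:

```python
# Half-integer quantized conductance levels G = (n/2)(2e^2/h), in siemens.
E = 1.602176634e-19   # elementary charge (C), SI-exact
H = 6.62607015e-34    # Planck constant (J*s), SI-exact
G0 = 2 * E**2 / H     # conductance quantum, ~77.5 microsiemens

# First few half-integer levels: G0/2, G0, 3*G0/2, 2*G0
levels = [(n / 2) * G0 for n in range(1, 5)]
```

The spacing between adjacent levels is G₀/2 ≈ 38.7 µS, which is what the filament decay in the retention measurements steps through.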
Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications but only approximately, due to nonidealities that are often nondeterministic or nonlinear. This can adversely impact the achievable inference accuracy. Here, we develop a hardware-aware retraining approach to systematically examine the accuracy of analog in-memory computing across multiple network topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, we improve significantly on earlier retraining approaches. We show that many larger-scale deep neural networks—including convnets, recurrent networks, and transformers—can in fact be successfully retrained to show iso-accuracy with the floating point implementation. Our results further suggest that nonidealities that add noise to the inputs or outputs, not the weights, have the largest impact on accuracy, and that recurrent networks are particularly robust to all nonidealities.
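The core idea of hardware-aware retraining is to inject analog-like perturbations into the matrix-vector product during training so the network learns weights that tolerate them. A minimal sketch, assuming simple Gaussian weight and output noise; the noise magnitudes and the scaling by the maximum absolute value are illustrative assumptions, not the paper's crossbar model.

```python
import numpy as np

# Hedged sketch of a noisy matrix-vector product for hardware-aware retraining.
def noisy_matvec(W, x, weight_noise=0.02, output_noise=0.01, rng=None):
    """Compute y = W @ x with additive Gaussian perturbations mimicking
    analog nonidealities: noise on the weights and noise on the outputs."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Perturb the weights (device programming variability).
    W_pert = W + weight_noise * np.abs(W).max() * rng.standard_normal(W.shape)
    y = W_pert @ x
    # Perturb the outputs (read-out / ADC noise).
    return y + output_noise * np.abs(y).max() * rng.standard_normal(y.shape)

rng = np.random.default_rng(42)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
y_noisy = noisy_matvec(W, x)
y_clean = noisy_matvec(W, x, weight_noise=0.0, output_noise=0.0)
```

Using such a perturbed forward pass during training is what lets the retrained network remain iso-accurate when the real analog nonidealities appear at inference time.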
Analogue memory-based deep neural networks provide energy-efficiency and per-area throughput gains relative to state-of-the-art digital counterparts such as graphics processing units. Recent advances focus largely on hardware-aware algorithmic training and improvements to circuits, architectures, and memory devices. Optimal translation of software-trained weights into analogue hardware weights—given the plethora of complex memory non-idealities—represents an equally important task. We report a generalised computational framework that automates the crafting of complex weight programming strategies to minimise accuracy degradations during inference, particularly over time. The framework is agnostic to network structure and generalises well across recurrent, convolutional, and transformer neural networks. As a highly flexible numerical heuristic, the approach accommodates arbitrary device-level complexity, making it potentially relevant for a variety of analogue memories. By quantifying the limit of achievable inference accuracy, it also enables analogue memory-based deep neural network accelerators to reach their full inference potential.
Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 172× improvement in energy efficiency of the architecture when used for training a multilayer perceptron compared with a dedicated fully digital 32-bit implementation.
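The mixed-precision update scheme can be sketched per weight: gradient contributions are accumulated in a high-precision digital variable, and only when that accumulator crosses the device-update granularity ε is a coarse conductance pulse applied, with the residual retained digitally. The class below is a minimal sketch under the assumption of a noiseless device that moves in exact steps of ε; real PCM updates are stochastic.

```python
# Hedged sketch of the mixed-precision weight-update scheme.
class MixedPrecisionWeight:
    def __init__(self, epsilon=0.01):
        self.w_device = 0.0   # coarse weight held in computational memory
        self.chi = 0.0        # high-precision accumulator (digital unit)
        self.eps = epsilon    # device-update granularity

    def accumulate(self, dw):
        """Add a gradient update; flush whole multiples of eps to the device."""
        self.chi += dw
        n_pulses = int(self.chi / self.eps)   # truncates toward zero
        if n_pulses != 0:
            self.w_device += n_pulses * self.eps  # apply coarse pulses
            self.chi -= n_pulses * self.eps       # keep sub-eps residual

w = MixedPrecisionWeight(epsilon=1.0)
w.accumulate(2.5)   # two pulses applied, 0.5 kept in the accumulator
w.accumulate(0.6)   # accumulator reaches 1.1: one more pulse, ~0.1 remains
```

Because sub-ε information is never discarded, small gradients eventually take effect, which is what preserves training accuracy despite the imprecise device updates.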
Bacterial diversity in endodontic infections has not been sufficiently studied. The use of modern pyrosequencing technology should allow for more comprehensive analysis than traditional Sanger sequencing. This study investigated bacterial diversity in endodontic infections through taxonomic classification based on 16S rRNA gene sequences generated by 454 GS-FLX pyrosequencing and conventional Sanger capillary sequencing technologies. Sequencing was performed on 7 specimens from endodontic infections. On average, 47 vs. 28,590 sequences were obtained per sample for Sanger sequencing vs. pyrosequencing, representing a 600-fold difference in “depth-of-coverage”. Based on Ribosomal Database Project (RDP II) Classifier analysis, pyrosequencing identified 179 bacterial genera in 13 phyla, which was significantly more than Sanger sequencing. The phylum Bacteroidetes was the most prevalent bacterial phylum. These results indicate that bacterial communities in endodontic infections are more diverse than previously demonstrated. In addition, deep-coverage pyrosequencing of the 16S rRNA gene revealed low-abundance micro-organisms with potential clinical implications.
The 5G and beyond wireless networks will be more dynamic and heterogeneous, requiring operation over multiple waveform standards. One of the most significant challenges in such a dynamic network, especially in non-cooperative cases, is identifying the particular modulation type the transmitter uses at a given time so that the data can be decoded successfully. This research proposes a modulation-classification algorithm based on a combined architecture of modified convolutional neural networks. The proposed deep-learning architecture combines a convolutional neural network, a dense network, and a long short-term memory (LSTM) network, and is named the convolutional LSTM dense neural network (CLDNN). Moreover, a mean cumulative sum (MCS) metric is introduced in the pooling layer for improved classification accuracy. Dimensionality reduction through principal component analysis (PCA) is also applied to minimize the training time, making the proposed architecture practical to deploy. The simulation results show that the presented CLDNN outperforms an ordinary CNN while requiring less training time.
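The PCA pre-processing step mentioned above can be sketched with a plain SVD: center the feature matrix and project onto the top-k principal components before feeding the classifier, shrinking input dimensionality and hence training time. The sample shapes and choice of k below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Hedged sketch of PCA dimensionality reduction via SVD.
def pca_reduce(X, k):
    """X: (n_samples, n_features) feature matrix.
    Returns the (n_samples, k) projection onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # project onto top-k directions

# Toy stand-in for signal feature vectors (e.g. flattened I/Q samples).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))
Z = pca_reduce(X, 8)   # 64 features reduced to 8 components
```

The projected components are mutually uncorrelated, so the downstream network trains on a much smaller, decorrelated input without discarding the dominant variance in the signal features.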