Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. Recent efforts in SNNs have focused on implementing deeper networks with multiple hidden layers to capture exponentially more complex functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent Plasticity (STDP), to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN comprises alternating convolutional and pooling layers followed by fully-connected layers, all populated with leaky integrate-and-fire (LIF) spiking neurons. We train the deep SNNs in two phases: the convolutional kernels are first pre-trained layer-wise with unsupervised learning, and the synaptic weights are then fine-tuned with spike-based supervised gradient-descent backpropagation. Our experiments on digit recognition demonstrate that STDP-based pre-training followed by gradient-based optimization provides improved robustness, faster (~2.5×) training, and better generalization compared with purely gradient-based training without pre-training.
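As a rough illustration of the two primitives this two-phase scheme combines, the following Python/NumPy sketch implements a discrete-time LIF neuron update and a pairwise STDP weight change; all names and constants (v_th, leak, a_plus, a_minus) are illustrative assumptions, not the authors' settings.

import numpy as np

def lif_step(v, i_in, v_th=1.0, leak=0.95):
    # One discrete-time leaky integrate-and-fire update: leaky
    # integration of the input current, threshold test, reset on spike.
    v = leak * v + i_in
    spikes = (v >= v_th).astype(float)
    v = v * (1.0 - spikes)
    return v, spikes

def stdp_update(w, pre_trace, post_spikes, a_plus=0.01, a_minus=0.005):
    # Pairwise STDP: when a post-synaptic neuron fires, potentiate
    # synapses from recently active inputs (high pre-synaptic trace)
    # and depress the rest; weights are kept in [0, 1].
    dw = post_spikes[:, None] * (a_plus * pre_trace[None, :] - a_minus)
    return np.clip(w + dw, 0.0, 1.0)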
Spiking Neural Networks (SNNs) have recently emerged as a prominent neural computing paradigm. However, the typical shallow SNN architectures have limited capacity for expressing complex representations, while training deep SNNs using input spikes has not been successful so far. Diverse methods have been proposed to circumvent this issue, such as converting off-the-shelf trained deep Artificial Neural Networks (ANNs) to SNNs. However, the ANN-SNN conversion scheme fails to capture the temporal dynamics of a spiking system. On the other hand, directly training deep SNNs using input spike events remains difficult due to the discontinuous, non-differentiable nature of the spike generation function. To overcome this problem, we propose an approximate derivative method that accounts for the leaky behavior of leaky integrate-and-fire (LIF) neurons. This method enables training deep convolutional SNNs directly (with input spike events) using spike-based backpropagation. Our experiments show the effectiveness of the proposed spike-based learning on deep networks (VGG and Residual architectures), achieving the best classification accuracies on the MNIST, SVHN, and CIFAR-10 datasets among SNNs trained with spike-based learning. Moreover, we analyze sparse event-based computations to demonstrate the efficacy of the proposed SNN training method for inference in the spiking domain.
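In code, such an approximate derivative is typically wired into backpropagation as a surrogate gradient for the spike function. The PyTorch sketch below uses a generic triangular pseudo-derivative as a stand-in; it illustrates the technique only and is not the paper's LIF-aware derivative.

import torch

class SpikeFn(torch.autograd.Function):
    # Heaviside spike generation in the forward pass; a smooth
    # surrogate replaces its zero-almost-everywhere derivative
    # in the backward pass.

    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v_minus_th,) = ctx.saved_tensors
        # Illustrative triangular pseudo-derivative around the
        # threshold; the paper derives its own LIF-aware form.
        return grad_out * torch.clamp(1.0 - v_minus_th.abs(), min=0.0)

v_mem = torch.randn(8, requires_grad=True)  # example membrane potentials
spikes = SpikeFn.apply(v_mem - 1.0)         # threshold at 1.0 (illustrative)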
In this work, we propose ReStoCNet, a residual stochastic multilayer convolutional Spiking Neural Network (SNN) composed of binary kernels, to reduce the synaptic memory footprint and enhance the computational efficiency of SNNs for complex pattern recognition tasks. ReStoCNet consists of an input layer followed by stacked convolutional layers for hierarchical input feature extraction, pooling layers for dimensionality reduction, and a fully-connected layer for inference. In addition, we introduce residual connections between the stacked convolutional layers to improve the hierarchical feature learning capability of deep SNNs. We propose a Spike-Timing-Dependent Plasticity (STDP) based probabilistic learning algorithm, referred to as Hybrid-STDP (HB-STDP), which incorporates Hebbian and anti-Hebbian learning mechanisms, to train the binary kernels forming ReStoCNet in a layer-wise unsupervised manner. We demonstrate the efficacy of ReStoCNet and the presented HB-STDP based unsupervised training methodology on the MNIST and CIFAR-10 datasets. We show that residual connections enable the deeper convolutional layers to self-learn useful high-level input features and mitigate the accuracy loss observed in deep SNNs devoid of residual connections. The proposed ReStoCNet offers >20× kernel memory compression compared to a full-precision (32-bit) SNN while maintaining high classification accuracy on the chosen pattern recognition tasks.
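To make probabilistic learning on binary kernels concrete, here is a minimal NumPy sketch in which a post-synaptic spike switches a binary weight to +1 with a probability scaled by the pre-synaptic trace (Hebbian) and to -1 with a probability favoring inactive inputs (anti-Hebbian). The probability expressions and constants are assumptions for illustration, not the HB-STDP rule itself.

import numpy as np

rng = np.random.default_rng(0)

def hb_stdp_step(w_bin, pre_trace, post_spikes, p_pot=0.1, p_dep=0.05):
    # Probabilistic update of binary (+1/-1) kernels: when a
    # post-synaptic neuron fires, each of its synapses switches to +1
    # with probability proportional to the pre-synaptic trace (Hebbian)
    # and to -1 with a probability favoring inactive inputs (anti-Hebbian).
    fired = post_spikes[:, None].astype(bool)
    p_up = p_pot * pre_trace[None, :]
    p_down = p_dep * (1.0 - pre_trace[None, :])
    w_bin = np.where(fired & (rng.random(w_bin.shape) < p_up), 1, w_bin)
    w_bin = np.where(fired & (rng.random(w_bin.shape) < p_down), -1, w_bin)
    return w_bin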
Spiking neural networks (SNNs) have emerged as a promising brain-inspired neuromorphic computing paradigm for cognitive system design due to their inherent event-driven processing capability. The fully connected (FC) shallow SNNs typically used for pattern recognition require a large number of trainable parameters to achieve competitive classification accuracy. In this paper, we propose a deep spiking convolutional neural network (SpiCNN) composed of a hierarchy of stacked convolutional layers followed by a spatial-pooling layer and a final FC layer. The network is populated with biologically plausible leaky integrate-and-fire (LIF) neurons interconnected by shared synaptic weight kernels. We train the convolutional kernels layer-by-layer in an unsupervised manner using spike-timing-dependent plasticity (STDP), which enables them to self-learn the characteristic features making up the input patterns. To further improve the feature learning efficiency, we propose using smaller 3×3 kernels trained with STDP-based synaptic weight updates performed over a mini-batch of input patterns. Our deep SpiCNN, consisting of two convolutional layers trained using the unsupervised convolutional STDP learning methodology, achieved classification accuracies of 91.1% and 97.6%, respectively, for inferring handwritten digits from the MNIST data set and a subset of natural images from the Caltech data set.
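A minimal NumPy sketch of an STDP update for a shared 3×3 kernel averaged over a mini-batch, in the spirit of the proposal above; the update form and learning rates are assumptions, not the paper's exact rule.

import numpy as np

def conv_stdp_minibatch(kernel, pre_patches, post_spikes,
                        a_plus=0.004, a_minus=0.003):
    # STDP update for one shared 3x3 kernel, averaged over a mini-batch.
    # pre_patches: (batch, locations, 3, 3) input spike patches
    # post_spikes: (batch, locations) spikes of this kernel's feature map
    post = post_spikes[..., None, None]  # broadcast to patch shape
    # Potentiate weights whose inputs were active at firing locations,
    # depress those whose inputs were silent.
    dw = post * (a_plus * pre_patches - a_minus * (1.0 - pre_patches))
    return np.clip(kernel + dw.mean(axis=(0, 1)), 0.0, 1.0)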
In this work, we propose a stochastic Binary Spiking Neural Network (sBSNN) composed of stochastic spiking neurons and binary synapses (stochastic only during training) that computes probabilistically with one-bit precision for power-efficient and memory-compressed neuromorphic computing. We present an energy-efficient implementation of the proposed sBSNN using the 'stochastic bit' as the core computational primitive to realize the stochastic neurons and synapses, fabricated in a 90 nm CMOS process, to achieve efficient on-chip training and inference for image recognition tasks. The measured data shows that the 'stochastic bit' can be programmed to mimic spiking neurons and to implement the stochastic Spike-Timing-Dependent Plasticity (sSTDP) rule for training the binary synaptic weights without expensive random number generators. Our results indicate that the proposed sBSNN realization offers up to 32× neuronal and synaptic memory compression compared to a full-precision (32-bit) SNN, and an energy efficiency of 89.49 TOPS/W for a two-layer fully-connected SNN.
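A toy NumPy sketch of a stochastic spiking neuron of the kind described: the measured switching-probability curve of the fabricated 'stochastic bit' is replaced here by an assumed sigmoid, so this is purely illustrative.

import numpy as np

rng = np.random.default_rng(1)

def stochastic_lif_step(v, i_in, leak=0.9):
    # Stochastic spiking neuron: the firing probability grows with the
    # membrane potential. A sigmoid stands in for the measured switching
    # probability of the hardware 'stochastic bit'.
    v = leak * v + i_in
    p_fire = 1.0 / (1.0 + np.exp(-4.0 * (v - 1.0)))
    spikes = (rng.random(v.shape) < p_fire).astype(float)
    v = v * (1.0 - spikes)  # reset neurons that fired
    return v, spikes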
Liquid state machine (LSM), a bio-inspired computing model consisting of an input sparsely connected to a randomly interlinked reservoir (or liquid) of spiking neurons followed by a readout layer, finds utility in a range of applications varying from robot control and sequence generation to action, speech, and image recognition. LSMs stand out among other Recurrent Neural Network (RNN) architectures due to their simple structure and lower training complexity. A plethora of recent efforts have focused on mimicking certain characteristics of biological systems to enhance the performance of modern artificial neural networks. It has been shown that biological neurons are more likely to be connected to other neurons in close proximity and tend to be disconnected from neurons that are spatially far apart. Inspired by this, we propose a group of locally connected neuron reservoirs, or an ensemble-of-liquids approach, for LSMs. We analyze how segmenting a single large liquid into an ensemble of multiple smaller liquids affects the latency and accuracy of an LSM. In our analysis, we quantify the ability of the proposed ensemble approach to provide an improved representation of the input using the Separation Property (SP) and Approximation Property (AP). Our results illustrate that the ensemble approach enhances class discrimination (quantified as the ratio between the SP and AP), leading to better accuracy in speech and image recognition tasks compared to a single large liquid. Furthermore, we obtain performance benefits in terms of improved inference time and reduced memory requirements, due to the lower number of connections and the freedom to parallelize the liquid evaluation process.
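One common way to compute such a class-discrimination ratio from recorded liquid state vectors is sketched below in NumPy, taking separation as the mean distance between class centroids and approximation as the mean within-class spread; the paper's exact SP and AP definitions may differ.

import numpy as np

def discrimination_ratio(states, labels):
    # Class discrimination of liquid states: mean distance between
    # class centroids (separation) over mean distance of states from
    # their own class centroid (approximation).
    classes = np.unique(labels)
    centroids = np.stack([states[labels == c].mean(axis=0) for c in classes])
    sp = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                  for i in range(len(classes))
                  for j in range(i + 1, len(classes))])
    ap = np.mean([np.linalg.norm(states[labels == c] - centroids[k],
                                 axis=1).mean()
                  for k, c in enumerate(classes)])
    return sp / ap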
Nutritional and functional roles of millets—A review. Nithiyanantham, Srinivasan; Kalaiselvi, Palanisamy; Mahomoodally, Mohamad Fawzi; et al. Journal of Food Biochemistry, July 2019, Volume 43, Issue 7. Journal article, peer-reviewed, open access.
The available cultivable plant-based food resources in developing tropical countries are inadequate to supply proteins for both humans and animals. This limitation of available plant food sources is due to shrinking agricultural land, rapid urbanization, climate change, and tough competition between the food and feed industries for existing food and feed crops. However, the cheapest food materials are those derived from plant sources, which, although they occur in abundance in nature, are still underutilized. At this juncture, the identification, evaluation, and introduction of underexploited millet crops, including crops of tribal utility, which are generally rich in protein, is one of the viable long-term solutions for a sustainable supply of food and feed materials. In view of the above, the present review endeavors to highlight the nutritional and functional potential of underexploited millet crops.
Practical applications
Millets are an important food crop at the global level with a significant economic impact on developing countries. Millets have advantageous characteristics as drought- and pest-resistant grains. They are considered high-energy, nourishing foods that help address malnutrition. Millet-based foods are considered potential prebiotics and probiotics with prospective health benefits. Grains of these millet species are widely consumed as a source of traditional medicine and as an important food for preserving health.
Deep neural networks are a biologically inspired class of algorithms that have recently demonstrated state-of-the-art accuracy in large-scale classification and recognition tasks. Hardware acceleration of deep networks is of paramount importance to ensure their ubiquitous presence in future computing platforms. Indeed, a major landmark enabling efficient hardware accelerators for deep networks is the recent advance from the machine learning community demonstrating the viability of aggressively scaled deep binary networks. In this paper, we demonstrate how deep binary networks can be accelerated in modified von Neumann machines by enabling binary convolutions within the static random access memory (SRAM) arrays. In general, binary convolutions consist of bit-wise exclusive-NOR (XNOR) operations followed by a population count (popcount). We present two proposals: one based on a charge-sharing approach to perform vector XNOR and approximate popcount, and another based on bit-wise XNOR followed by a digital bit-tree adder for accurate popcount. We highlight the various tradeoffs in terms of circuit complexity, speed-up, and classification accuracy for both approaches. Key techniques presented in this manuscript include the use of a low-precision, low-overhead analog-to-digital converter (ADC) to achieve a fairly accurate popcount for the charge-sharing scheme, and sectioning of the SRAM array by adding switches onto the read-bitlines, thereby achieving improved parallelism. Our results on the benchmark image classification datasets CIFAR-10 and SVHN on a binarized neural network architecture show energy improvements of up to 6.1× and 2.3× for the two proposals, compared to conventional SRAM banks. In terms of latency, improvements of up to 15.8× and 8.1× were achieved for the two respective proposals.
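The arithmetic identity behind XNOR-popcount acceleration is easy to check in code: with ±1 values encoded as {0, 1} bits, a dot product equals 2·popcount(XNOR) − n. The NumPy sketch below shows the functional operation the SRAM arrays compute in-memory; it does not model the circuits themselves.

import numpy as np

def xnor_popcount_dot(a_bits, w_bits):
    # Binary dot product as XNOR + popcount: with +/-1 values encoded
    # as {0, 1} bits, the dot product equals 2*popcount(XNOR) - n.
    xnor = ~(a_bits ^ w_bits) & 1  # 1 wherever the two bits agree
    return 2 * int(xnor.sum()) - a_bits.size

a = np.array([1, 0, 1, 1])  # encodes [+1, -1, +1, +1]
w = np.array([1, 1, 0, 1])  # encodes [+1, +1, -1, +1]
assert xnor_popcount_dot(a, w) == ((2 * a - 1) * (2 * w - 1)).sum()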
Spiking neural networks (SNNs), with their inherent capability to learn sparse spike-based input representations over time, offer a promising solution for enabling the next generation of intelligent autonomous systems. Nevertheless, end-to-end training of deep SNNs is both compute- and memory-intensive because of the need to backpropagate error gradients through time. We propose BlocTrain, a scalable and complexity-aware incremental algorithm for memory-efficient training of deep SNNs. We divide a deep SNN into blocks, where each block consists of a few convolutional layers followed by a classifier. We train the blocks sequentially using local errors from the classifier. Once a given block is trained, our algorithm dynamically identifies easy vs. hard classes using the class-wise accuracy, and trains the deeper block only on the hard-class inputs. In addition, we incorporate a hard-class detector (HCD) per block that is used during inference to exit early for easy-class inputs and activate the deeper blocks only for hard-class inputs. We trained a ResNet-9 SNN divided into three blocks, using BlocTrain, on CIFAR-10 and obtained 86.4% accuracy, achieved with up to 2.95× lower memory requirement during the course of training and 1.89× compute efficiency per inference (due to the early-exit strategy), with a 1.45× memory overhead (primarily due to classifier weights) compared to the end-to-end network. We also trained a ResNet-11, divided into four blocks, on CIFAR-100 and obtained 58.21% accuracy, one of the first reported accuracies for an SNN trained entirely with spike-based backpropagation on CIFAR-100.
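A minimal Python sketch of the early-exit inference flow described above; blocks, classifiers, and hcds are assumed callables (the per-block feature extractors, local classifiers, and hard-class detectors), and the threshold is illustrative.

def bloctrain_inference(x, blocks, classifiers, hcds, hard_threshold=0.5):
    # Early-exit inference: run blocks in sequence; after each block,
    # exit through its local classifier unless the block's hard-class
    # detector (HCD) judges the input hard enough to need deeper blocks.
    for block, clf, hcd in zip(blocks, classifiers, hcds):
        x = block(x)
        if hcd(x) < hard_threshold:  # input judged "easy": exit early
            return clf(x)
    return classifiers[-1](x)  # deepest classifier handles the hardest inputs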
Perovskite neural trees. Zhang, Hai-Tian; Park, Tae Joon; Zaluzhnyy, Ivan A.; et al. Nature Communications, May 2020, Volume 11, Issue 1. Journal article, peer-reviewed, open access.
Trees are used by animals, humans, and machines to classify information and make decisions. The natural tree structures displayed by synapses of the brain involve potentiation and depression capable of branching and are essential for survival and learning. Demonstrating such features in synthetic matter is challenging due to the need to host a complex energy landscape capable of learning, memory, and electrical interrogation. We report the experimental realization of tree-like conductance states at room temperature in strongly correlated perovskite nickelates by modulating the proton distribution with high-speed electric pulses. This demonstration represents a physical realization of ultrametric trees, a concept from number theory applied to the study of spin glasses in physics that inspired early neural network theory dating back almost forty years. We apply the tree-like memory features in spiking neural networks to demonstrate high-fidelity object recognition, which in the future could open new directions for neuromorphic computing and artificial intelligence.