In this paper, we propose an adaptive quantization method that can easily transfer weights, trained in a software network with floating-point operations, to real synaptic devices in hardware-based neural networks while maintaining high performance. An n-type gated Schottky diode is investigated as a synaptic device, and its conductance behavior is modeled successfully. Max value normalization and $3\sigma$ normalization are applied to weights trained to an accuracy of 98.29% on a fully connected neural network ($784\times 256\times 10$) in software. The weights are then quantized using the adaptive quantization method and transferred by adjusting the number of identical pulses applied to the synaptic devices. After applying the adaptive quantization method, MNIST classification accuracies of 98.09% and 97.20% are obtained for max value normalization and $3\sigma$ normalization, respectively. The proposed quantization method works well even in the presence of synaptic-device nonidealities such as nonlinear conductance behavior, limited conductance levels, and conductance variation.
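The normalization and pulse-count mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the clipping-then-scaling form of the $3\sigma$ normalization, and the choice of 32 conductance levels are all assumptions for demonstration.

```python
def three_sigma_normalize(weights):
    """Illustrative 3-sigma normalization (assumed form): clip weights
    to +/- 3*sigma of their distribution, then scale into [-1, 1]."""
    n = len(weights)
    mean = sum(weights) / n
    sigma = (sum((w - mean) ** 2 for w in weights) / n) ** 0.5
    bound = 3 * sigma
    return [max(-bound, min(bound, w)) / bound for w in weights]


def quantize_to_pulses(w, n_levels=32):
    """Map a normalized weight in [-1, 1] to a pulse count in
    [0, n_levels - 1]; each identical pulse is assumed to raise the
    device conductance by one step. n_levels is a hypothetical choice."""
    return round((w + 1.0) / 2.0 * (n_levels - 1))
```

With 32 levels, a fully depressed device (w = -1) receives 0 pulses and a fully potentiated one (w = +1) receives 31, so the transfer reduces to counting identical programming pulses per device.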
Hardware-based spiking neural networks (SNNs) inspired by the biological nervous system are regarded as an innovative computing system with very low power consumption and massively parallel operation. To train SNNs with supervision, we propose an efficient on-chip training scheme that approximates the backpropagation algorithm and is suitable for hardware implementation. By exploiting the stochastic characteristics of neurons, we show that the accuracy of the proposed scheme for SNNs is close to that of conventional artificial neural networks. In the hardware configuration, gated Schottky diodes (GSDs), whose current saturates with respect to the input voltage, are used as synaptic devices. We design the SNN system by adopting the proposed on-chip training scheme with GSDs that can update their conductance in parallel, speeding up the overall system. The performance of the on-chip training SNN system is validated through MNIST classification as a function of network size and total time steps. The SNN systems achieve accuracies of 97.83% with one hidden layer and 98.44% with four hidden layers in fully connected neural networks. We then evaluate the effect of nonlinearity and asymmetry in the conductance response for long-term potentiation (LTP) and long-term depression (LTD) on the performance of the on-chip training SNN system. In addition, the impact of device variations on its performance is evaluated.
With the recently increasing prevalence of deep learning, both academia and industry have shown substantial interest in neuromorphic computing, which mimics the functional and structural features of the human brain. To realize neuromorphic computing, an energy‐efficient and reliable artificial synapse must be developed. In this study, a synaptic ferroelectric field‐effect‐transistor (FeFET) array is fabricated as a component of a neuromorphic convolutional neural network. Beyond the single-transistor level, long‐term potentiation and depression of synaptic weights are achieved at the array level, and a successful program‐inhibiting operation is demonstrated in the synaptic array, achieving a learning accuracy of 79.84% on the Canadian Institute for Advanced Research (CIFAR)‐10 dataset. Furthermore, an efficient self‐curing method is proposed that improves the endurance of the FeFET array tenfold by utilizing the punch‐through current inherent to the device. Low‐frequency noise spectroscopy is employed to quantitatively evaluate the curing efficiency of the proposed self‐curing method. The results of this study provide a method to fabricate and operate reliable synaptic FeFET arrays, thereby paving the way for further development of ferroelectric‐based neuromorphic computing.
The primary challenge facing ferroelectric field‐effect transistors is their vulnerability to repeated program/erase cycles. To solve this issue, an efficient self‐curing method is presented. The proposed method successfully recovers synaptic fatigue damage, enhancing learning accuracy in the convolutional neural network.
Reinforcement learning (RL), which exhibits outstanding performance in various fields, requires large amounts of data to perform well. While exploration techniques address this requirement, conventional exploration methods have limitations: complex hardware implementation and a significant hardware burden. Herein, in‐memory RL systems that leverage the intrinsic 1/f noise of synaptic ferroelectric field‐effect‐transistors (FeFETs) for efficient exploration are proposed. The electrical characteristics of the fabricated FeFETs, with their low‐power operation capability, verify their suitability for neuromorphic systems. The proposed system achieves performance comparable to the conventional exploration method without additional circuits. The intrinsic 1/f noise of the FeFETs facilitates efficient exploration and offers significant advantages: efficient hardware implementation and simple adjustment of the 1/f noise level for optimal performance. This approach effectively addresses the challenges of conventional exploration methods. The operation mechanism of the exploration method utilizing the 1/f noise is systematically analyzed. The proposed in‐memory RL system demonstrates robustness and reliability against device‐to‐device variation and the initial conductance distribution. This work provides further insights into exploration methods for RL, paving the way for advanced in‐memory RL systems.
Exploration techniques play a significant role in reinforcement learning (RL). This study introduces an exploration approach that enhances in‐memory RL using the intrinsic 1/f noise of synaptic ferroelectric field‐effect‐transistors. The proposed approach minimizes hardware requirements and makes it simple to tune the noise to an optimal level. This research paves the way for efficient and flexible hardware‐based RL applications.
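The noise-driven exploration idea above can be sketched in software: instead of an epsilon-greedy schedule, each Q-value readout is perturbed by a 1/f-like noise sample before the argmax, so exploration emerges from the noise itself. This is a hypothetical illustration using a Voss-McCartney pink-noise generator as a stand-in for the device's intrinsic conductance noise; the class names, row count, and noise scale are all assumptions.

```python
import random


class PinkNoise:
    """Voss-McCartney 1/f-like noise generator: an illustrative software
    stand-in for the FeFET's intrinsic 1/f conductance noise."""

    def __init__(self, n_rows=8, scale=0.1, rng=None):
        self.rng = rng or random.Random(0)
        self.rows = [self.rng.uniform(-1.0, 1.0) for _ in range(n_rows)]
        self.count = 0
        self.scale = scale  # analogous to adjusting the device noise level

    def sample(self):
        self.count += 1
        # Row k is refreshed every 2**k samples, giving a 1/f-like spectrum.
        for k in range(len(self.rows)):
            if self.count % (1 << k) == 0:
                self.rows[k] = self.rng.uniform(-1.0, 1.0)
        return self.scale * sum(self.rows) / len(self.rows)


def select_action(q_values, noise):
    """Noisy readout: add a 1/f noise sample to each Q value, then take
    the argmax. No extra exploration circuit or epsilon schedule is needed."""
    noisy = [q + noise.sample() for q in q_values]
    return max(range(len(noisy)), key=noisy.__getitem__)
```

Setting `scale` to zero recovers purely greedy action selection, which mirrors how tuning the physical noise level would trade exploration against exploitation.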
In recent years, neuromorphic computing has been rapidly developed to overcome the limitations of the von Neumann architecture. In this regard, the demand for high‐performance synaptic devices with high switching speeds, low power consumption, and multilevel conductance is increasing. Among the various synaptic devices, ferroelectric tunnel junctions (FTJs) are promising candidates. While previous studies have focused on improving the reliability of FTJs to enhance their synaptic behavior, the low‐frequency noise (LFN) of FTJs has not been characterized, and its impact on learning accuracy in neuromorphic computing remains unknown. Herein, the LFN characteristics of FTJs fabricated on n‐ and p‐type Si are investigated, along with the impact of 1/f noise on the learning accuracy of convolutional neural networks (CNNs). The results indicate that the FTJ on p‐type Si exhibits far lower 1/f noise than that on n‐type Si. Owing to its low‐noise properties, the FTJ on p‐type Si achieves a significantly higher learning accuracy (86.26%) than that on n‐type Si (78.70%). This study provides valuable insights into the LFN characteristics of FTJs and a solution for improving the performance of synaptic devices by significantly reducing the 1/f noise.
A comprehensive investigation of 1/f noise in highly reliable synaptic ferroelectric tunnel junctions is provided.
Hardware-based spiking neural networks (SNNs) are regarded as promising candidates for cognitive computing systems due to their low power consumption and highly parallel operation. In this paper, we train an SNN in which the firing time carries information, using temporal backpropagation. The temporally encoded SNN with 512 hidden neurons achieves an accuracy of 96.90% on the MNIST test set. Furthermore, the effect of device variation on the accuracy of the temporally encoded SNN is investigated and compared with that of a rate-encoded network. In the hardware configuration of our SNN, NOR-type analog memory with an asymmetric floating gate is used as the synaptic device. In addition, we propose a neuron circuit, including a refractory-period generator, for the temporally encoded SNN. The performance of the two-layer neural network composed of these synapses and the proposed neurons is evaluated through circuit simulation using SPICE based on the BSIM3v3 model with $0.35~\mu\text{m}$ technology. The network with 128 hidden neurons achieves an accuracy of 94.9% on the MNIST dataset, a 0.1% reduction compared to the system simulation. Finally, the latency and power consumption of each block constituting the temporal network are analyzed and compared with those of the rate-encoded network as a function of the total time steps. Assuming the network has 256 total time steps, the temporal network consumes 15.12 times less power than the rate-encoded network and makes decisions 5.68 times faster.
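The temporal coding described above, where firing time rather than firing rate carries information, is commonly realized as time-to-first-spike encoding. The sketch below is an assumed illustration of that idea (the function name, the linear intensity-to-time mapping, and the 256-step window taken from the abstract's comparison are all choices for demonstration, not the paper's exact scheme).

```python
def ttfs_encode(intensity, t_max=256):
    """Time-to-first-spike encoding: a brighter input fires earlier.

    intensity: pixel value normalized to [0, 1].
    t_max:     total number of time steps in the window (assumed 256,
               matching the total-time-step comparison in the abstract).
    Returns the time step at which the input neuron emits its single spike.
    """
    intensity = max(0.0, min(1.0, intensity))  # clamp out-of-range inputs
    return round((1.0 - intensity) * (t_max - 1))
```

Because each input neuron spikes at most once per window, a decision can be made as soon as an output neuron fires, which is one intuition behind the latency and power advantages reported over the rate-encoded network.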