Recently, brain-inspired computing models have shown great potential to outperform today's deep learning solutions in terms of robustness and energy efficiency. In particular, Spiking Neural Networks (SNNs) and HyperDimensional Computing (HDC) have shown promising results in enabling efficient and robust cognitive learning. Despite this success, the two brain-inspired models have different strengths. While SNNs mimic the physical properties of the human brain, HDC models the brain on a more abstract and functional level. Their design philosophies demonstrate complementary patterns that motivate their combination. Guided by the classical psychological model of memory, we propose SpikeHD, the first framework that fundamentally combines spiking neural networks and hyperdimensional computing. SpikeHD generates a scalable and strong cognitive learning system that better mimics brain functionality. SpikeHD exploits spiking neural networks to extract low-level features while preserving the spatial and temporal correlation of raw event-based spike data. It then utilizes HDC to operate over the SNN output by mapping the signal into high-dimensional space, learning the abstract information, and classifying the data. Our extensive evaluation on a set of benchmark classification problems shows that, compared to an SNN architecture alone, SpikeHD (1) significantly enhances learning capability by exploiting two-stage information processing, (2) provides substantial robustness to noise and failure, and (3) reduces the network size and the parameters required to learn complex information.
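The two-stage pipeline described above can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the SNN front end is replaced by plain feature vectors, and the dimensionality, the random-projection encoder, bundling-based training, and all names are assumptions.

```python
import random
import math

random.seed(0)
D = 1000   # hypervector dimensionality (illustrative)
F = 32     # number of SNN feature outputs (placeholder for the SNN front end)

# Random bipolar projection matrix: maps F-dimensional feature vectors into HD space.
proj = [[random.choice((-1, 1)) for _ in range(F)] for _ in range(D)]

def encode(features):
    """Project a feature vector into a bipolar hypervector (sign of random projection)."""
    return [1 if sum(p * f for p, f in zip(row, features)) >= 0 else -1
            for row in proj]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def train(samples_by_class):
    """'Training': bundle (element-wise add) the encoded samples of each class."""
    model = {}
    for label, samples in samples_by_class.items():
        acc = [0] * D
        for s in samples:
            acc = [a + h for a, h in zip(acc, encode(s))]
        model[label] = acc
    return model

def classify(model, features):
    """Associative lookup: return the class hypervector most similar to the query."""
    q = encode(features)
    return max(model, key=lambda label: cosine(model[label], q))

# Toy usage: two synthetic "spike feature" prototypes with disjoint active features.
proto_a = [1.0] * 16 + [0.0] * 16
proto_b = [0.0] * 16 + [1.0] * 16
model = train({"spike_class_a": [proto_a], "spike_class_b": [proto_b]})
```

In a real SpikeHD-style system the `features` argument would be the SNN's spiking output rather than a hand-built vector; the HD stage above is agnostic to where the features come from.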
Memorization is an essential functionality that enables today's machine learning algorithms to provide a high quality of learning and reasoning for each prediction. Memorization gives algorithms prior knowledge to keep the context and define confidence for their decisions. Unfortunately, existing deep learning algorithms have a weak and nontransparent notion of memorization. Brain-inspired HyperDimensional Computing (HDC) was introduced as a model of human memory. It mimics several important functionalities of brain memory by operating with vectors that are computationally tractable and mathematically rigorous in describing human cognition. In this manuscript, we introduce a brain-inspired system that realizes HDC memorization capability over a graph of relations. We propose GrapHD, hyperdimensional memorization that represents graph-based information in high-dimensional space. GrapHD defines an encoding method that represents complex graph structure while supporting both weighted and unweighted graphs. Our encoder spreads the information of all nodes and edges across a full holistic representation, so that no component is more responsible for storing any piece of information than another. GrapHD then defines several important cognitive functionalities over the encoded graph memory. These operations include memory reconstruction, information retrieval, graph matching, and shortest path. Our extensive evaluation shows that GrapHD (1) significantly enhances learning capability by giving learning algorithms a notion of short- and long-term memorization, (2) enables cognitive computing and reasoning over the memorized graph, and (3) enables holographic brain-like computation with substantial robustness to noise and failure.
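A common way to realize this kind of holistic graph memory in HDC, sketched below, is to assign each node a random bipolar hypervector, represent an edge as the element-wise binding of its endpoints, and superimpose (bundle) all edges into one memory vector. This is a minimal sketch of the general technique, not GrapHD's specific encoder; the dimensionality and all names are assumptions.

```python
import random

random.seed(1)
D = 2048  # hypervector dimensionality (illustrative)

def node_hv():
    """Random bipolar hypervector for one node."""
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Element-wise product: binds two hypervectors (commutative, so edges are undirected here)."""
    return [x * y for x, y in zip(a, b)]

def bundle(vectors):
    """Element-wise sum: superimposes many hypervectors into one memory."""
    return [sum(col) for col in zip(*vectors)]

nodes = {name: node_hv() for name in "ABCD"}
edges = [("A", "B"), ("B", "C"), ("C", "D")]

# Graph memory: a single hypervector holding all edges holographically.
graph = bundle([bind(nodes[u], nodes[v]) for u, v in edges])

def edge_score(u, v):
    """Normalized dot product of a candidate edge with the graph memory:
    close to 1 if the edge is stored, close to 0 otherwise."""
    probe = bind(nodes[u], nodes[v])
    return sum(p * g for p, g in zip(probe, graph)) / D
```

Because every edge is spread across all D components, querying `edge_score` still works after corrupting a fraction of the memory, which is the robustness property the abstract refers to. For a weighted graph, each bound edge could simply be scaled by its weight before bundling.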
•The electrical load series is analyzed in a two-dimensional matrix form.
•Univariate load and temperature data are expanded to multidimensional features.
•2-D convolutional layers are used for hidden feature extraction.
•A network containing LSTM and GRU units is proposed for load forecasting.
The consumed electrical load is affected by many external factors such as weather, season of the year, weekday versus weekend, and holidays. In this paper, we aim to provide a highly accurate forecasting model for hourly load consumption that takes these external variables into account. First, the electrical load and temperature time series are rearranged into separate two-dimensional matrices. Convolutional neural networks (CNNs) are utilized to extract the load and temperature features. The autocorrelation coefficients of the load and temperature sequences are used to determine the kernel size of the convolutional layers. At this stage, the convolutional layers convert the univariate data to multidimensional features by applying two-dimensional convolutional kernels, which potentially increases the forecasting capability of recurrent neural networks. In turn, long short-term memory (LSTM) and gated recurrent unit (GRU) cells are able to hold short-term and long-term memories. Therefore, in the next stage, the multidimensional features extracted by the 2-D CNNs are fed as input to bidirectional GRU and LSTM units to perform hourly electrical load forecasting. The results of experiments on two datasets show the superiority of the proposed method compared to recent works in the field of short-term load forecasting.
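The autocorrelation-driven kernel-size choice mentioned above can be made concrete. The sketch below computes the standard sample autocorrelation and picks the candidate lag with the strongest correlation; the selection rule and candidate lags are assumptions for illustration, not the paper's exact procedure.

```python
import math

def autocorr(series, lag):
    """Sample autocorrelation coefficient of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return cov / var

def pick_kernel_size(series, candidate_lags=(12, 24, 48)):
    """Hypothetical rule: use the candidate periodicity (in hours) with the
    strongest autocorrelation as the convolutional kernel size."""
    return max(candidate_lags, key=lambda k: autocorr(series, k))

# Synthetic hourly load with a clean 24-hour daily cycle (10 days of data).
series = [math.sin(2 * math.pi * t / 24) for t in range(240)]
```

For a signal with a dominant daily pattern, the lag-24 coefficient dominates, so the rule selects a kernel spanning one day of samples.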
Brain-inspired computing models have shown great potential to outperform today's deep learning solutions in terms of robustness and energy efficiency. In particular, Spiking Neural Networks (SNNs) and HyperDimensional Computing (HDC) have shown promising results in enabling efficient and robust cognitive learning. Despite this success, the two brain-inspired models have different strengths. While SNNs mimic the physical properties of the human brain, HDC models the brain on a more abstract and functional level. Their design philosophies demonstrate complementary patterns that motivate their combination. With the help of the classical psychological model of memory, we aim to explore the differences between spiking neural networks and hyperdimensional computing and how they can be combined to develop a more advanced cognitive learning model.
Background
The emergence of COVID-19 and its consequences have led to fears, worries, and anxiety among individuals worldwide. The present study developed the Fear of COVID-19 Scale (FCV-19S) to complement clinical efforts in preventing the spread of COVID-19 and treating cases.
Methods
The sample comprised 717 Iranian participants. The items of the FCV-19S were constructed based on an extensive review of existing fear scales, expert evaluations, and participant interviews. Several psychometric tests were conducted to ascertain its reliability and validity.
Results
After panel review and corrected item-total correlation testing, seven items with acceptable corrected item-total correlations (0.47 to 0.56) were retained and further confirmed by significant and strong factor loadings (0.66 to 0.74). Also, other properties evaluated using both classical test theory and the Rasch model were satisfactory for the seven-item scale. More specifically, reliability values such as internal consistency (α = .82) and test–retest reliability (ICC = .72) were acceptable. Concurrent validity was supported by the Hospital Anxiety and Depression Scale (depression, r = 0.425; anxiety, r = 0.511) and the Perceived Vulnerability to Disease Scale (perceived infectability, r = 0.483; germ aversion, r = 0.459).
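The reported internal consistency (α = .82) uses the standard Cronbach's alpha formula, which can be reproduced from per-item scores. The sketch below is generic psychometric arithmetic with toy data, not the study's code or data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha.
    items: one list of scores per scale item, all of equal length
    (the number of respondents). alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Toy example: two perfectly consistent items over four respondents -> alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

With real questionnaire data the item lists would hold each respondent's answer to each of the seven FCV-19S items.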
Conclusion
The Fear of COVID-19 Scale, a seven-item scale, has robust psychometric properties. It is reliable and valid in assessing fear of COVID-19 among the general population and will also be useful in allaying COVID-19 fears among individuals.
The recent integration of imaging technology with additive manufacturing (AM) has led to a plethora of in-process, high-dimensional data. Machine learning (ML) methods have been implemented to improve understanding of defect formation in AM-built parts and to control process variability in real time. However, modern ML methods, in particular deep neural networks, are empowered by massive high-quality labeled data, which are limited in AM for the following reasons. First, large-scale data labeling is often tedious, costly, and requires substantial human effort and considerable expertise. Second, the performance of the learning methods depends to a great extent on the presence of positive data instances (i.e., defective ones), as they are more informative for monitoring. Third, the rarity of positives results in a severely imbalanced dataset, which poses critical challenges in training ML methods designed with the assumption that the input contains an equal number of instances from each class. In this research, we propose novel annotation and learning with a limited amount of data through the integration of active search and hyperdimensional computing (HDC). The active search is developed to benefit from a single bandit model to learn about the data distribution (exploration) while sampling from the regions potentially containing more positives (exploitation). HDC is introduced as an alternative computing method that mimics important brain functionalities and encodes data with high-dimensional vectors, thereby enabling single-pass learning with just a few samples. Experimental results on a real-world case study of a drag link joint build show that the proposed model locates the rare positives thoroughly and detects lack-of-fusion defects with an accuracy of 89.58%, a training time of 3.221 ± 0.029 seconds, and only 66 data samples.
The joint active search and neuromorphic computing framework is shown to have strong potential for general application in a diverse set of domains with in-situ imaging data.
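The exploration/exploitation trade-off described above can be sketched with a simple epsilon-greedy bandit over sampling regions. This is a generic bandit sketch under assumptions, not the paper's single-bandit model; region names, the budget, and the toy labeling functions are all illustrative.

```python
import random

random.seed(3)

def active_search(regions, budget, eps=0.2):
    """Epsilon-greedy active search sketch.
    regions: maps a region id to a labeling function returning 1 (positive,
    e.g. defective) or 0. Spends `budget` labels; returns (positives found, per-region stats)."""
    stats = {r: [0, 0] for r in regions}   # region -> [positives, trials]
    found = 0
    for _ in range(budget):
        if random.random() < eps or all(t == 0 for _, t in stats.values()):
            r = random.choice(list(regions))               # explore a random region
        else:                                              # exploit best observed rate
            r = max(stats, key=lambda k: stats[k][0] / stats[k][1]
                    if stats[k][1] else 0.0)
        y = regions[r]()
        stats[r][0] += y
        stats[r][1] += 1
        found += y
    return found, stats

# Toy world: all defects sit in the "hot" region, none in "cold".
regions = {"hot": lambda: 1, "cold": lambda: 0}
found, stats = active_search(regions, budget=200)
```

After a brief exploration phase, the sampler concentrates its budget on the region with the higher observed positive rate, which is how active search surfaces rare defective instances under a tight labeling budget.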
Brain-inspired hyperdimensional (HD) computing emulates cognition by computing with long vectors. HD computing consists of two main modules: an encoder and an associative search. The encoder module maps inputs into high-dimensional vectors, called hypervectors. The associative search finds the closest match between the trained model (a set of class hypervectors) and a query hypervector by calculating a similarity metric. To perform reasoning for practical classification problems, HD needs to store a non-binary model and uses costly similarity metrics such as cosine. In this article, we propose an FPGA-based acceleration of HD computing exploiting Computational Reuse (HD-Core), which significantly improves the computation efficiency of both the encoding and associative search modules. HD-Core enables computation reuse in both the encoding and associative search modules. We observed that consecutive inputs have high similarity, which can be used to reduce the complexity of the encoding step: the previously encoded hypervector is reused to eliminate the redundant operations in encoding the current input.
HD-Core additionally eliminates the majority of multiplication operations by clustering the class hypervector values and sharing the values among all the class hypervectors. Our evaluations on several classification problems show that HD-Core can provide 4.4× energy efficiency improvement and 4.8× speedup over the optimized GPU implementation while ensuring the same quality of classification.
HD-Core provides 2.4× more throughput than the state-of-the-art FPGA implementation; on average, 40 percent of this improvement comes directly from enabling computation reuse in the encoding module and the rest comes from the computation reuse in the associative search module.
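The encoding-side computation reuse can be illustrated with a simple linear encoding. This is a sketch under assumptions, not the paper's encoder or its FPGA mapping: when consecutive inputs differ in only a few features, the previous hypervector can be updated incrementally instead of being re-encoded from scratch.

```python
import random

random.seed(7)
D, F = 1024, 16  # hypervector dimensionality and feature count (illustrative)
base = [[random.choice((-1, 1)) for _ in range(D)] for _ in range(F)]

def encode_full(x):
    """Full linear encoding: H = sum_i x[i] * B_i (one base hypervector per feature)."""
    h = [0.0] * D
    for i, xi in enumerate(x):
        if xi:
            for d in range(D):
                h[d] += xi * base[i][d]
    return h

def encode_reuse(x, prev_x, prev_h):
    """Computation reuse: since consecutive inputs are similar, update the
    previous hypervector only for the features that actually changed."""
    h = list(prev_h)
    for i, (xi, pi) in enumerate(zip(x, prev_x)):
        if xi != pi:
            delta = xi - pi
            for d in range(D):
                h[d] += delta * base[i][d]
    return h

# Consecutive inputs differing in 2 of 16 features: the reuse path touches
# only those 2 features but yields exactly the full encoding.
x1 = [1] * 16
x2 = [1] * 14 + [3, 0]
h1 = encode_full(x1)
```

With this linear encoder the incremental update is exact, and the work drops from O(F·D) to O(changed·D), which is the intuition behind reusing the previously encoded hypervector.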
Increased antibiotic resistance of microorganisms, as well as the need to reduce health-care costs, necessitates the production of new antimicrobials at lower costs. For this reason, this study aimed to optimize the synthesis of magnesium oxide nanoparticles with the greatest antibacterial activity. Nine experiments containing different proportions of the factors effective in the synthesis of magnesium oxide nanoparticles (magnesium nitrate, NaOH, and stirring time) were designed using the Taguchi method. Magnesium oxide nanoparticles were synthesized using the coprecipitation method, and their antibacterial activity was evaluated using colony-forming unit (CFU) counts and disk diffusion. The morphology, crystalline structure, and size of the synthesized nanoparticles were investigated using Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD), and scanning electron microscopy (SEM). The optimum conditions for the synthesis of magnesium oxide nanoparticles with the greatest antibacterial activity (0.2 M magnesium nitrate, 2 M NaOH, and 90 min stirring time) were determined using the Taguchi method. The CFU and disk diffusion results revealed the optimal antibacterial activity of the synthesized nanoparticles against Staphylococcus aureus and Escherichia coli. The results obtained from the FTIR and XRD analyses confirmed the synthesis of nanoparticles under the favorable conditions. Also, according to the SEM image, the average size of the synthesized nanoparticles was about 21 nm. According to these results, magnesium oxide nanoparticles can significantly reduce the number of Gram-positive and Gram-negative bacteria and can be used as an appropriate alternative to commonly used antibacterial compounds in order to tackle drug resistance among pathogens.
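Taguchi optimization of this kind typically scores each run with a larger-the-better signal-to-noise ratio and then picks, for each factor, the level with the best mean S/N. The sketch below uses a hypothetical 4-run mini-design with made-up responses, not the study's L9 array or measurements; factor names and data are illustrative only.

```python
import math

def sn_larger_is_better(ys):
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def best_levels(design, responses, factors):
    """design: one dict (factor -> level) per run; responses: one list of
    replicate measurements per run. Returns the best level of each factor
    by mean S/N ratio."""
    sn = [sn_larger_is_better(r) for r in responses]
    best = {}
    for f in factors:
        levels = {}
        for row, s in zip(design, sn):
            levels.setdefault(row[f], []).append(s)
        best[f] = max(levels, key=lambda lv: sum(levels[lv]) / len(levels[lv]))
    return best

# Hypothetical mini-design (NOT the study's data): two factors at two levels,
# responses are imagined inhibition-zone diameters in mm, two replicates each.
design = [
    {"Mg_nitrate_M": 0.1, "NaOH_M": 1},
    {"Mg_nitrate_M": 0.1, "NaOH_M": 2},
    {"Mg_nitrate_M": 0.2, "NaOH_M": 1},
    {"Mg_nitrate_M": 0.2, "NaOH_M": 2},
]
responses = [[5, 6], [7, 8], [9, 10], [14, 15]]
best = best_levels(design, responses, ["Mg_nitrate_M", "NaOH_M"])
```

The same level-averaging logic extends directly to an L9 orthogonal array with three factors at three levels, as used in the study.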
Deep Fingerprinting. Sirinam, Payap; Imani, Mohsen; Juarez, Marc; ...
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 10/2018. Conference Proceeding.
Website fingerprinting enables a local eavesdropper to determine which websites a user is visiting over an encrypted connection. State-of-the-art website fingerprinting attacks have been shown to be effective even against Tor. Recently, lightweight website fingerprinting defenses for Tor have been proposed that substantially degrade existing attacks: WTF-PAD and Walkie-Talkie. In this work, we present Deep Fingerprinting (DF), a new website fingerprinting attack against Tor that leverages a type of deep learning called Convolutional Neural Networks (CNN) with a sophisticated architecture design, and we evaluate this attack against WTF-PAD and Walkie-Talkie. The DF attack attains over 98% accuracy on Tor traffic without defenses, better than all prior attacks, and it is also the only attack that is effective against WTF-PAD, with over 90% accuracy. Walkie-Talkie remains effective, holding the attack to just 49.7% accuracy. In the more realistic open-world setting, our attack remains effective, with 0.99 precision and 0.94 recall on undefended traffic. Against traffic defended with WTF-PAD in this setting, the attack still achieves 0.96 precision and 0.68 recall. These findings highlight the need for effective defenses that protect against this new attack and that could be deployed in Tor.
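The open-world precision and recall figures quoted above follow the standard definitions, where a "positive" is a visit to a monitored site. The sketch below is generic metric arithmetic with toy labels, not the paper's evaluation code.

```python
def precision_recall(y_true, y_pred):
    """Standard binary precision and recall.
    y_true / y_pred: 1 = monitored (positive) site, 0 = unmonitored."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)       # hits
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)   # false alarms
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)   # misses
    return tp / (tp + fp), tp / (tp + fn)

# Toy traces: three monitored visits, two unmonitored; the classifier
# misses one monitored visit and raises one false alarm.
prec, rec = precision_recall([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

High precision matters most in the open-world setting, since false alarms on the vast unmonitored web quickly overwhelm an eavesdropper.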