Spiking neural networks (SNNs) are promising for realizing brain-like behaviors, since spikes can encode spatio-temporal information. Recent schemes, e.g., pre-training from artificial neural networks (ANNs) or direct training based on backpropagation (BP), make high-performance supervised training of SNNs possible. However, these methods focus primarily on information in the spatial domain and pay less attention to the dynamics in the temporal domain. This can lead to a performance bottleneck and makes many additional training techniques necessary. Another underlying problem is that spike activity is inherently non-differentiable, which further complicates the supervised training of SNNs. In this paper, we propose a spatio-temporal backpropagation (STBP) algorithm for training high-performance SNNs. To address the non-differentiability of SNNs, we propose an approximated derivative for spike activity that is suitable for gradient-descent training. The STBP algorithm combines the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD), and does not require any additional complicated techniques. We evaluate the method with both fully connected and convolutional architectures on the static MNIST dataset, a custom object detection dataset, and the dynamic N-MNIST dataset. The results show that our approach achieves the best accuracy compared with existing state-of-the-art algorithms on spiking networks. This work provides a new perspective on investigating high-performance SNNs for future brain-like computing paradigms with rich spatio-temporal dynamics.
Artificial neural networks (ANNs), a popular path towards artificial intelligence, have achieved remarkable success via mature models, various benchmarks, open-source datasets, and powerful computing platforms. Spiking neural networks (SNNs), a category of promising models that mimic the neuronal dynamics of the brain, have gained much attention for brain-inspired computing and have been widely deployed on neuromorphic devices. However, there are long-standing debates and skepticism about the value of SNNs in practical applications. Apart from the low-power benefit of spike-driven processing, SNNs usually perform worse than ANNs, especially in terms of application accuracy. Recently, researchers have attempted to address this issue by borrowing learning methodologies from ANNs, such as backpropagation, to train high-accuracy SNN models. The rapid progress in this domain continuously produces impressive results with ever-increasing network sizes, following a growth path similar to that of deep learning. Although these approaches let SNNs approach the accuracy of ANNs, the natural advantages of SNNs, and the means to outperform ANNs, are potentially lost due to the use of ANN-oriented workloads and simplistic evaluation metrics.
In this paper, we take the visual recognition task as a case study to answer two questions: what workloads are ideal for SNNs, and how should SNNs be evaluated? We design a series of contrast tests using different types of datasets (ANN-oriented and SNN-oriented), diverse processing models, signal conversion methods, and learning algorithms. We propose comprehensive metrics covering application accuracy and memory & compute cost to evaluate these models, and conduct extensive experiments. We show that on ANN-oriented workloads, SNNs fail to beat their ANN counterparts, while on SNN-oriented workloads, SNNs can indeed perform better. We further demonstrate that in SNNs there exists a trade-off between application accuracy and execution cost, which is affected by the simulation time window and the firing threshold. Based on these analyses, we recommend the most suitable model for each scenario. To the best of our knowledge, this is the first work to use systematic comparisons to explicitly reveal that straightforward workload porting from ANNs to SNNs is unwise, although many works do so, and that comprehensive evaluation genuinely matters. Finally, we highlight the urgent need to build a benchmarking framework for SNNs with broader tasks, datasets, and metrics.
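The accuracy–cost trade-off governed by the simulation time window and firing threshold can be made concrete with a toy leaky integrate-and-fire (LIF) neuron: a longer window yields more spikes (a finer-grained rate code) at proportionally higher compute cost, while a higher threshold suppresses firing. All parameter values below are illustrative assumptions, not taken from the paper.

```python
def lif_spike_count(current, T, v_th, tau=0.9):
    """Simulate one leaky integrate-and-fire neuron for T time steps
    with constant input current, hard reset to 0 on firing, and leak
    factor tau. Returns the spike count. Parameters are illustrative
    assumptions, not the paper's settings."""
    v, count = 0.0, 0
    for _ in range(T):
        v = tau * v + current          # leaky integration
        if v >= v_th:                  # threshold crossing -> spike
            count += 1
            v = 0.0                    # hard reset
    return count

# Longer windows T increase both spike count and simulation cost;
# higher thresholds v_th trade firing activity for sparsity.
for T in (10, 100):
    for v_th in (0.5, 2.0):
        print(f"T={T:4d}  v_th={v_th}  spikes={lif_spike_count(0.3, T, v_th)}")
```

This is only a sketch of the mechanism behind the trade-off; the paper's actual measurements involve full networks and workload-level cost metrics.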
The accurate prediction of protein–ligand binding free energies is a primary objective in computer-aided drug design. The solvation free energy of a small molecule provides a surrogate for the desolvation of the ligand in the thermodynamic process of protein–ligand binding. Here, we use explicit-solvent molecular dynamics free energy perturbation to predict the absolute solvation free energies of a set of 239 small molecules, spanning diverse chemical functional groups commonly found in drugs and drug-like molecules. We also compare the performance of absolute solvation free energies obtained with the OPLS_2005 force field against two other commonly used small-molecule force fields: the general AMBER force field (GAFF) with AM1-BCC charges and CHARMm-MSI with CHelpG charges. Using the OPLS_2005 force field, we obtain high correlation with experimental solvation free energies (R² = 0.94) and low average unsigned errors for a majority of the functional groups compared to AM1-BCC/GAFF or CHelpG/CHARMm-MSI. However, OPLS_2005 has errors of over 1.3 kcal/mol for certain classes of polar compounds. We show that predictions for these compound classes can be improved by using a semiempirical charge assignment method with an implicit bond charge correction.
Dilute alloying is an effective strategy to tune the properties of solid catalysts but is rarely leveraged in complex reactions beyond small-molecule conversion. In this work, dilute dopants are demonstrated to serve as activating centers that construct multiatom catalytic domains in metal nitride electrocatalysts for lithium–sulfur (Li–S) batteries, whose sulfur cathode suffers from sluggish and complex conversion reactions. With titanium nitride (TiN) as a model system, dilute cobalt alloying is shown to greatly improve the reaction kinetics while inducing negligible catalyst reconstruction. Compared to pristine TiN, the dilute nitride alloy catalyst enables a onefold (100%) increase in the high-rate (2.0 C) capacities of Li–S batteries, as well as an impressively low cyclic decay rate of 0.17% at a sulfur loading of 4.0 mgS cm−2. This work opens up new opportunities for the rational design of Li–S electrocatalysts by dilute alloying and also advances the understanding of complex domain-catalyzed reactions in energy applications.
Dilute alloying implants “activating” centers in nitride alloy electrocatalysts to boost lithium–sulfur (Li–S) batteries. Dilute Co dopants activate the surrounding N and Ti atoms to construct multiatom active domains for efficient bidirectional catalysis of S redox reactions. The corresponding dilute nitride alloy improves the reaction kinetics and electrochemical performance of Li–S batteries.
Perovskite-type materials are cathode systems known for their stability in solid oxide fuel cells (SOFCs). Pr0.5Sr0.5FeO3−δ (PSF) exhibits excellent electrode performance among perovskite cathode systems at high temperatures. By modifying the B-site with group-VB metals (V, Nb, and Ta), the oxidation and spin states of the iron can be adjusted, thereby tuning the cathode's physicochemical properties. Theoretical predictions indicate that PSF has poor stability, but the relative arrangement of the three elements on the B-site can significantly improve the material's properties. Nb modification has the largest effect on the stability of PSF cathode materials, reaching a level of −2.746 eV. The surface structure of PSF becomes slightly more stable as the proportion of oxygen-vacancy structures increases, but the structural instability persists. Furthermore, the differential charge density distributions and adsorption densities of states of the three modified cathode materials validate our adsorption-energy predictions. The initial and final states of the group-VB metal-doped PSF indicate that PSFN is the most likely to complete the cathode surface adsorption reaction. XRD and EDX characterization of the synthesized pure and Nb-doped PSF materials confirm the orthorhombic crystal system of the theoretical model structure and the expected experimental composition. Although PSF exhibits strong catalytic activity, it is highly prone to decomposition and instability at high temperatures. In contrast, PSFN, with the introduction of Nb, shows greater stability and can maintain its activity for the ORR. EIS testing clearly indicates that Nb improves the cathode most significantly. The consistency between the theoretical predictions and experimental validations indicates that Nb-doped PSF is a stable, highly active cathode material with excellent catalytic activity.
Designing tight-binding ligands is a primary objective of small-molecule drug discovery. Over the past few decades, free-energy calculations have benefited from improved force fields and sampling algorithms, as well as the advent of low-cost parallel computing. However, it has proven challenging to reliably achieve the level of accuracy needed to guide lead optimization (∼5× in binding affinity) across a wide range of ligands and protein targets. Not surprisingly, widespread commercial application of free-energy simulations has been limited by the lack of large-scale validation coupled with the technical challenges traditionally associated with running these calculations. Here, we report an approach that achieves an unprecedented level of accuracy across a broad range of target classes and ligands, with retrospective results encompassing 200 ligands and a wide variety of chemical perturbations, many involving significant changes in ligand chemical structure. In addition, we have applied the method in prospective drug discovery projects and found a significant improvement in the quality of the synthesized compounds that had been predicted to be potent. Compounds predicted to be potent by this approach show a substantial reduction in false positives relative to compounds synthesized on the basis of other computational or medicinal chemistry approaches. Furthermore, the results are consistent with those of our retrospective studies, demonstrating the robustness and broad applicability of this approach, which can be used to drive decisions in lead optimization.
Considering the flexible chemical composition, tunable electronic properties, and unique two-dimensional structure of layered double hydroxides (LDHs), we constructed NiFe-LDH/Cu2O heterostructure photocatalysts. Their photocatalytic performance was evaluated by methyl blue (MB) degradation and CO2 reduction under visible-light illumination. The removal efficiency of MB improved from 20% for Cu2O and 45% for NiFe-LDH to 93% for NiFe-LDH/Cu2O after 30 min of adsorption and 240 min of visible-light irradiation. Moreover, the CH4 yield from CO2 reduction over NiFe-LDH/Cu2O is about 5.6 and 6.9 times that of NiFe-LDH and Cu2O, respectively. Based on a detailed study of the structural, electronic, optical, and electrochemical properties, a Z-scheme photocatalytic mechanism is proposed to explain the enhanced photocatalytic performance of NiFe-LDH/Cu2O. This work presents an inexpensive and flexible strategy for manufacturing heterostructure photocatalysts from earth-abundant elements.
•NiFe-LDH/Cu2O heterostructure photocatalysts were successfully prepared by a co-precipitation method.
•MB removal efficiency was improved from 20% for Cu2O and 45% for NiFe-LDH to 93% for NiFe-LDH/Cu2O.
•CH4 yield from CO2 photoreduction over NiFe-LDH/Cu2O is 5.6 and 6.9 times that of NiFe-LDH and Cu2O, respectively.
•A Z-scheme mechanism is proposed, which is responsible for promoted charge separation and higher redox potentials.
The perception of facial emotion is determined not only by the physical features of the face itself but also by the emotional information of the background or surroundings. However, the details of this effect are not fully understood. Here, the authors tested the perceived emotion of a target face surrounded by stimuli with different levels of emotional valence. In Experiment 1, four types of objects were divided into three groups (negative: unpleasant flowers and unpleasant animals; mildly negative (neutral): houses; positive: pleasant flowers). In Experiment 2, three groups of surrounding faces with different social–emotional valence (negative, neutral, and positive) were formed on the basis of memorized affective personal knowledge. The data from the two experiments showed that the perception of facial emotion can be influenced and modulated by the emotional valence of the surrounding stimuli, which can be explained by assimilation: positive stimuli increased the valence of a target face, while negative stimuli comparatively decreased it. Furthermore, the neutral stimuli also increased the valence of the target, which could be explained by the social positive effect. Therefore, the process of assimilation is likely to be a high-level emotional cognition rather than a low-level visual perception. The results of this study may help us better understand face perception in realistic scenarios.
Exposure to extreme cold or heat is a leading cause of weather-associated mortality and morbidity in animals. Emerging studies demonstrate that the microbiota residing in the gut is an integral factor in modulating host tolerance to cold or heat exposure, but the common and unique patterns of animal–temperature associations under cold versus heat have not been examined simultaneously. We therefore investigated the roles of the gut microbiota in modulating tolerance to cold or heat exposure in mice.
The results showed that both cold and heat acutely change the body temperature of mice, but mice efficiently maintain their body temperature under chronic extreme temperatures. Mice adapt to extreme temperatures by adjusting body weight gain, food intake, and energy harvest. Interestingly, 16S rRNA sequencing shows that extreme temperatures produce a differential shift in the gut microbiota. Moreover, transplantation of the extreme-temperature microbiota is sufficient to enhance host tolerance to cold and heat, respectively. Metagenomic sequencing shows that the microbiota assists the host in resisting extreme temperatures by regulating the host insulin pathway.
Our findings highlight that the microbiota is a key factor orchestrating overall energy homeostasis under extreme temperatures, providing insight into the interaction and coevolution of hosts and their gut microbiota.
This study evaluates the uncertainties of turbulent flux calculation using the eddy covariance (EC) and wavelet analysis (WA) methods. First, a non‐stationary data set is constructed by adding periodic waves and random perturbations, which mimic large eddies, turbulent intermittency, and asymmetry, to an observed stationary data set; the theoretical "true" fluxes are then used to quantitatively evaluate the accuracy of these methods. Results show that EC and the Morlet wavelet generate biases ranging from 50% to 100% of the "true" values at different non‐stationarity grades, whereas the Mexican hat (Mexhat) wavelet has a bias of about half of that. Furthermore, the Mexhat‐derived fluxes correlate highly with the benchmark values, and the regression slope between the two can be brought to almost 1 by applying a correction coefficient. These results suggest the potential of the Mexhat‐wavelet method for calculating turbulent fluxes with high accuracy under non‐stationary conditions.
Plain Language Summary
The eddy covariance (EC) method is the well‐accepted technique for calculating turbulent fluxes under stationary conditions. However, observed turbulence data sometimes show non‐stationarity; in this case the EC method is not applicable, and wavelet analysis (WA) is frequently used instead. However, because turbulent fluxes are calculated rather than directly measured, the accuracy of WA‐calculated fluxes remains unknown. In this study, we constructed a non‐stationary data set and used its theoretical true values to evaluate the accuracy of the EC and WA methods for flux calculation under non‐stationary conditions. We find that the EC and Morlet‐wavelet fluxes deviate from the true values by 50%–100% at different non‐stationarity grades, while the Mexican hat (Mexhat) wavelet has about half that bias. Moreover, the Mexhat‐derived fluxes correlate highly with the true values and can be corrected to near‐true values by applying a correction coefficient. The Mexhat‐wavelet method therefore has the potential to calculate turbulent fluxes under non‐stationary conditions.
Key Points
A method to construct non‐stationary data series is proposed
Eddy covariance and wavelet analysis methods underestimate turbulent momentum flux under non‐stationary conditions by about 50%
The Mexican hat wavelet method has the potential to accurately calculate fluxes of non‐stationary turbulence after correction
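The two flux estimators discussed above can be sketched numerically. Below is a minimal Python illustration of the standard EC covariance and a Mexhat-based cross-scalogram per scale. The kernel length, the choice of scales, and the omitted scale-integration normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ec_flux(w, c):
    """Eddy-covariance flux: mean product of the fluctuations w', c'
    after removing the block mean (Reynolds decomposition)."""
    return np.mean((w - w.mean()) * (c - c.mean()))

def mexhat(n, s):
    """Discrete Mexican hat (Ricker) wavelet of length n at scale s
    (unnormalized; normalization is omitted in this sketch)."""
    t = np.arange(n) - (n - 1) / 2.0
    x = t / s
    return (1 - x**2) * np.exp(-x**2 / 2)

def wavelet_cross_scalogram(w, c, scales):
    """Per-scale mean cross products of the Mexhat-filtered series.
    Summing these over scales with an appropriate normalization
    constant (omitted here) yields a wavelet flux estimate. This is
    an illustrative sketch of the method family, not the authors'
    exact algorithm."""
    out = []
    for s in scales:
        k = mexhat(int(10 * s) | 1, s)          # odd-length kernel, ~10 scales wide
        Ww = np.convolve(w - w.mean(), k, mode="same")
        Wc = np.convolve(c - c.mean(), k, mode="same")
        out.append(np.mean(Ww * Wc))
    return np.array(out)
```

For stationary data the scale-integrated wavelet flux and the EC covariance should agree; under non-stationarity they diverge, which is the regime the study targets.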