On-site production of hydrogen peroxide (H2O2) by electrochemical methods could be more efficient than the current industrial process. However, owing to the scaling relations governing the adsorption of reaction intermediates, there is a long-established trade-off between catalyst activity and selectivity: enhanced catalytic activity is typically accompanied by the four-electron O2 reduction reaction (ORR), reducing selectivity for H2O2 production. Herein, by means of density functional theory (DFT) computations, we report the feasibility of several classes of important, representative, and experimentally achievable single-atom catalysts (SACs) for the two-electron ORR, paying attention to their stability, selectivity, and activity in acidic media. Starting from 210 two-dimensional (2D) SACs, we demonstrate that 31 SACs can break the metal-based scaling relations and simultaneously achieve high activity and selectivity toward H2O2 production, and we screen out 7 SACs with higher activity than PtHg4 in acidic media. In particular, a noble-metal-free SAC, a single Zn atom centered in phthalocyanine (Zn@Pc-N4), shows a remarkable activity improvement with a small overpotential of 0.15 V. Moreover, using multivariable analysis and machine-learning techniques, we provide a comprehensive understanding of the underlying origin of the selectivity and activity of SACs and unveil the intrinsic correlations between structure and catalytic performance. This work may pave the way to the design and discovery of more promising materials for H2O2 production.
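The overpotential figure quoted above can be made concrete. Under the standard computational hydrogen electrode model used in such DFT screening studies, the thermodynamic overpotential for the two-electron ORR follows from the free energies of the two proton-electron transfer steps. The sketch below illustrates that bookkeeping; the input free-energy value is an illustrative placeholder, not a number from the paper.

```python
# Sketch: 2e-ORR thermodynamic overpotential from step free energies under the
# computational hydrogen electrode (CHE) model. Input values are illustrative.

U_EQ = 0.70  # V, equilibrium potential for O2 + 2(H+ + e-) -> H2O2

def overpotential_2e_orr(dg1_zero_u):
    """dg1_zero_u: free energy (eV) of O2 + * + H+ + e- -> OOH* at U = 0 V.
    The second step, OOH* + H+ + e- -> H2O2 + *, is fixed by the overall
    reaction free energy of -2 * 0.70 eV at U = 0 V."""
    dg2_zero_u = -2 * U_EQ - dg1_zero_u
    # Each step shifts by +eU per transferred electron; the limiting potential
    # is the highest U at which both steps remain downhill.
    u_limiting = min(-dg1_zero_u, -dg2_zero_u)
    return U_EQ - u_limiting

print(round(overpotential_2e_orr(-0.55), 2))
```

A hypothetical site with a first-step free energy of exactly -0.70 eV would sit at the top of the activity volcano with zero overpotential; deviations in either direction raise it symmetrically.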
Hyperspectral microscopy is used in biology and mineralogy, and unsupervised deep-learning neural networks can denoise SRS images: for hyperspectral resolution enhancement and denoising, a single hyperspectral image is enough to train the unsupervised method. An intuitive chemical species map of a lithium-ore sample is produced using k-means clustering. Many researchers are now interested in biosignals, but uncertainty limits the algorithms' capacity to extract further information from these signals. Even though AI systems can solve puzzles, they remain limited. Deep learning is used where classical machine learning is inefficient, and it is vital in modern AI; however, supervised learning requires a large labeled dataset, and careful parameter selection is needed to prevent over- or underfitting. Unsupervised learning (performed here by the clustering algorithm) is used to overcome these challenges. To accomplish this, two processing steps were used: (1) nonlinear deep-learning networks transform the data into a latent feature space (Z). The Kullback-Leibler divergence is used to test convergence of the objective function. This article explores novel research on hyperspectral microscopic images using deep learning and effective unsupervised learning.
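The chemical species map mentioned above comes from clustering each pixel's spectrum. A minimal sketch of that idea, using a synthetic data cube in place of a real SRS hyperspectral image (the shapes, signatures, and cluster count are all assumptions for illustration):

```python
# Sketch: k-means clustering of hyperspectral pixels into a chemical species
# map. The data cube here is synthetic; a real one would be
# (height, width, n_spectral_bands) from SRS microscopy.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w, bands = 32, 32, 50
cube = rng.normal(size=(h, w, bands))
cube[:, :16, :] += np.linspace(0, 2, bands)   # fake spectral signature A
cube[:, 16:, :] += np.linspace(2, 0, bands)   # fake spectral signature B

pixels = cube.reshape(-1, bands)              # one spectrum per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
species_map = labels.reshape(h, w)            # each pixel -> cluster index
print(species_map.shape)
```

In practice the clustering would run on the denoised latent features (Z) rather than raw spectra, and the number of clusters would be chosen from the known mineralogy.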
Machine Learning Methods for Attack Detection in the Smart Grid. Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay ...
IEEE Transactions on Neural Networks and Learning Systems, August 2016, Vol. 27, No. 8.
Journal Article
Open access
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and to surmount constraints arising from the sparse structure of the problem. Well-known batch and online learning algorithms (supervised and semi-supervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between the statistical and geometric properties of the attack vectors employed in the attack scenarios and the learning algorithms are analyzed so that unobservable attacks can be detected with statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that, within the proposed framework, machine learning algorithms can detect attacks with higher performance than attack detection algorithms that employ state vector estimation methods.
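The core formulation above, classifying measurement vectors as secure or attacked, can be sketched in a few lines. Everything below is a synthetic stand-in: the attack model (a sparse additive bias, loosely echoing the sparse structure the paper mentions) and the classifier choice are assumptions, not the paper's IEEE test-system data or algorithms.

```python
# Sketch: attack detection as binary classification of measurement vectors.
# Data and the sparse-bias attack model are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, d = 400, 20
secure = rng.normal(size=(n, d))                     # nominal measurements
bias = np.where(rng.random((n, d)) < 0.3, 2.0, 0.0)  # sparse injected bias
attacked = rng.normal(size=(n, d)) + bias

X = np.vstack([secure, attacked])
y = np.array([0] * n + [1] * n)                      # 0 = secure, 1 = attacked
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=1)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

A supervised kernel classifier is only one of the batch options the paper covers; the same scaffold would accept semi-supervised or online learners.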
Goal: This minireview aims to highlight recent important aspects to consider and evaluate when passive brain-computer interface (pBCI) systems are developed and used in operational environments, and remarks on future directions of their applications. Methods: Electroencephalography (EEG)-based pBCI has become an important tool for real-time analysis of brain activity, since it can potentially provide information about the operator's cognitive state covertly (without distracting the user from the main task) and objectively (not affected by the subjective judgment of an observer or of the user). Results: Different examples of pBCI applications in operational environments and new adaptive interface solutions are presented and described. In addition, a general overview of the correct use of machine learning techniques in the pBCI field (e.g., which algorithm to use, common pitfalls to avoid, etc.) is provided. Conclusion: Despite recent innovations in algorithms and neurotechnology, pBCI systems are not yet ready to enter the market, mainly because of limitations of EEG electrode technology and of algorithm reliability and capability in real settings. Significance: Highly complex and safety-critical systems (e.g., airplanes, ATM interfaces) should adapt their behavior and functionality to the user's actual mental state. Thus, technologies (i.e., pBCIs) able to measure the user's mental state in real time would be very useful in such "high-risk" environments to enhance human-machine interaction and so increase overall safety.
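One of the machine-learning pitfalls alluded to above is evaluating a mental-state classifier with folds that mix trials from the same person, which inflates accuracy. A minimal sketch of the subject-wise cross-validation that avoids this, with random placeholder features standing in for real EEG band powers (the feature dimensions, labels, and classifier are assumptions):

```python
# Sketch: subject-wise cross-validation for an EEG mental-state classifier,
# so trials from one subject never appear in both training and test folds.
# Features and labels are random placeholders, not real EEG data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(120, 8))            # e.g. band-power features per trial
y = rng.integers(0, 2, size=120)         # e.g. high vs. low workload
subjects = np.repeat(np.arange(6), 20)   # 6 subjects, 20 trials each

scores = cross_val_score(LogisticRegression(), X, y,
                         cv=GroupKFold(n_splits=6), groups=subjects)
print(scores.shape)  # one score per held-out subject
```

With random labels the scores hover near chance, which is itself the point: a leaky split would report much higher numbers on the same data.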
Ternary organic solar cells (OSCs) have progressed significantly in recent years owing to the efficient photon harvesting of blend photoactive layers comprising three absorption-complementary materials. With the rapid development of highly efficient ternary OSCs in photovoltaics, the precise energy-level alignment of the three active components within ternary OSC devices should be taken into account. Machine learning is a computational technique that can effectively learn from historical data to build predictive models. In this study, a dataset of 124 fullerene derivative-based ternary OSCs is manually curated from a diverse range of literature, together with their frontier molecular orbital energy levels and device structures. Different machine-learning algorithms are trained on these electronic parameters to predict photovoltaic efficiency. Among the machine-learning algorithms tested on this dataset, the Random Forest approach provides the best predictive capability. Furthermore, the Random Forest algorithm yields valuable insights into the crucial role of the lowest unoccupied molecular orbital energy levels of the organic donors in the performance of ternary OSCs. The outcome of this study demonstrates a smart strategy for extracting underlying complex correlations in fullerene derivative-based ternary OSCs, thereby accelerating the development of ternary OSCs and related research fields.
Machine-learning approaches are utilized to build models for the prediction of efficiency, using the important frontier molecular orbital energy levels of the organic materials as features. Furthermore, a versatile Random Forest model reveals that the lowest unoccupied molecular orbital energy of the donor can be considered a critical feature in the design of ternary organic solar cells.
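The modeling strategy described above can be sketched end to end. The dataset below is synthetic: random orbital energies with an invented linear dependence stand in for the 124 literature-curated devices, and the feature layout (HOMO/LUMO of three components) is an assumption for illustration.

```python
# Sketch: Random Forest regression from frontier-orbital energy levels to
# device efficiency, with feature importances indicating the dominant level.
# All values are synthetic placeholders, not the study's curated dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-6.0, -3.0, size=(124, 6))   # eV: HOMO/LUMO of 3 components
# Invented ground truth: efficiency driven mostly by feature 1 (a "LUMO").
pce = 10 + 2 * X[:, 1] - X[:, 3] + rng.normal(scale=0.3, size=124)

model = RandomForestRegressor(n_estimators=200, random_state=7).fit(X, pce)
print(model.feature_importances_.round(2))   # which level drives predictions
```

Reading off `feature_importances_` is the same mechanism by which the study attributes a critical role to the donor LUMO level, though tree importances should be cross-checked (e.g., with permutation importance) on correlated features.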
Deep learning (DL) is playing an increasingly important role in our lives. It has already made a huge impact in areas such as cancer diagnosis, precision medicine, self-driving cars, predictive forecasting, and speech recognition. The painstakingly handcrafted feature extractors used in traditional learning, classification, and pattern recognition systems are not scalable for large-sized datasets. In many cases, depending on the problem complexity, DL can also overcome the limitations of earlier shallow networks that prevented efficient training and the abstraction of hierarchical representations of multi-dimensional training data. A deep neural network (DNN) uses multiple (deep) layers of units with highly optimized algorithms and architectures. This paper reviews several optimization methods that improve training accuracy and reduce training time. We delve into the math behind the training algorithms used in recent deep networks. We describe current shortcomings, enhancements, and implementations. The review also covers different types of deep architectures, such as deep convolutional networks, deep residual networks, recurrent neural networks, reinforcement learning, variational autoencoders, and others.
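As a taste of the optimizer math such a review covers, here is classical SGD with momentum written out in NumPy on a toy quadratic loss. The learning rate and momentum coefficient are illustrative defaults, not values recommended by the paper.

```python
# Sketch: minibatch SGD with classical momentum on a toy quadratic loss.
import numpy as np

def sgd_momentum(grad, w0, lr=0.1, beta=0.9, steps=100):
    w = np.asarray(w0, dtype=float).copy()
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v + grad(w)   # accumulate exponentially decayed gradients
        w = w - lr * v           # step against the accumulated direction
    return w

# Toy loss L(w) = 0.5 * ||w||^2, so grad(w) = w; the minimum is the origin.
w_star = sgd_momentum(lambda w: w, w0=[4.0, -2.0])
print(np.round(w_star, 4))
```

The momentum buffer `v` damps oscillation across steep directions and accelerates along shallow ones, the standard intuition behind its faster convergence on ill-conditioned losses.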
This paper introduces GPTFX, a novel AI-based mental health detection approach built on GPT frameworks. The approach leverages GPT embeddings and fine-tuning of GPT-3, and it exhibits superior performance both in classifying mental health disorders and in generating explanations, with a classification accuracy of around 87% and a Rouge-L score of around 0.75. We utilized GPT embeddings with machine learning models for the classification of mental health disorders, and GPT-3 was fine-tuned to generate explanations for the predictions made by these machine learning models. Notably, the proposed algorithm is well suited for real-time mental health monitoring when deployed on AI-IoMT devices, as it has demonstrated greater reliability than traditional algorithms.
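The classification half of that pipeline, an ordinary classifier over embedding vectors, can be sketched as below. Random vectors stand in for the GPT embeddings, the labels are invented, and the embedding API call and the GPT-3 explanation generator are deliberately omitted.

```python
# Sketch: classifying precomputed text-embedding vectors with a simple
# linear model. Embeddings and labels here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, dim = 200, 16                   # stand-ins for GPT embedding vectors
emb = rng.normal(size=(n, dim))
y = (emb[:, 0] + emb[:, 1] > 0).astype(int)   # placeholder disorder label

Xtr, Xte, ytr, yte = train_test_split(emb, y, test_size=0.3, random_state=3)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

In the paper's setting the same slot would be filled by whichever classical model performed best over the real GPT embeddings, with the fine-tuned GPT-3 consuming the prediction to produce an explanation.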
Learning and predicting the dynamics of physical systems requires a profound understanding of the underlying physical laws. Recent works on learning physical laws extend equation discovery frameworks to the discovery of the Hamiltonian and Lagrangian of physical systems. While existing methods parameterize the Lagrangian using neural networks, we propose an alternative framework for learning interpretable Lagrangian descriptions of physical systems from limited data using a sparse Bayesian approach. Unlike existing neural network-based approaches, the proposed approach (a) yields an interpretable description of the Lagrangian, (b) exploits Bayesian learning to quantify the epistemic uncertainty due to limited data, (c) automates the distillation of the Hamiltonian from the learned Lagrangian using the Legendre transformation, and (d) provides ordinary (ODE) and partial differential equation (PDE) based descriptions of the observed systems. Six examples involving both discrete and continuous systems illustrate the efficacy of the proposed approach.
• A probabilistic framework is introduced for discovering interpretable Lagrangians.
• It is applicable to both discrete and continuous systems.
• The proposed framework can discover the Lagrangian from limited data.
• The identified models have long-term predictive ability and generalize to high-dimensional systems.
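Step (c) above, distilling the Hamiltonian from a learned Lagrangian via the Legendre transformation, is mechanical once the Lagrangian is interpretable. A symbolic sketch for a harmonic oscillator (an illustrative system, not one of the paper's six examples):

```python
# Sketch: Legendre transform from a Lagrangian to a Hamiltonian, done
# symbolically for L = (1/2) m qdot^2 - (1/2) k q^2.
import sympy as sp

m, k = sp.symbols("m k", positive=True)
q, qdot, p = sp.symbols("q qdot p")

L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2
p_expr = sp.diff(L, qdot)                        # conjugate momentum p = dL/dqdot
qdot_of_p = sp.solve(sp.Eq(p, p_expr), qdot)[0]  # invert: qdot = p / m
H = sp.simplify(p * qdot_of_p - L.subs(qdot, qdot_of_p))
print(H)
```

The result is the familiar H = p^2/(2m) + k q^2/2; because the discovered Lagrangian is a sparse symbolic expression rather than a neural network, this inversion step can be automated exactly as above.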