We present a physics‐informed deep neural network (DNN) method for estimating hydraulic conductivity in saturated and unsaturated flows governed by Darcy's law. For saturated flow, we approximate hydraulic conductivity and head with two DNNs and use Darcy's law in addition to measurements of hydraulic conductivity and head to train these DNNs. For unsaturated flow, we approximate the unsaturated conductivity function and capillary pressure with DNNs and train these DNNs using measurements of capillary pressure and the Richards equation. Because it is difficult to measure unsaturated conductivity in the field, we assume that no measurements of unsaturated conductivity are available. The proposed approach enforces the partial differential equation (PDE) constraints (the Darcy or Richards equation) by minimizing the PDE residual at selected points in the simulation domain. We demonstrate that physics constraints increase the accuracy of DNN approximations of sparsely observed functions and allow for training DNNs when no direct measurements of the functions of interest are available. For the saturated conductivity estimation problem, we show that the physics‐informed DNN method is more accurate than the state‐of‐the‐art maximum a posteriori probability method. For unsaturated flow in homogeneous porous media, we find that the proposed method can accurately estimate the pressure‐conductivity relationship from capillary pressure measurements alone, even in the presence of measurement noise.
Key Points
Physics constraints improve the accuracy of machine learning methods, especially when learning from sparse data
Physics constraints allow learning constitutive relationships without direct observations of the quantities of interest
For the considered examples, the proposed physics‐informed neural networks provide more accurate parameter estimation than the maximum a posteriori probability method
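The training objective described in the abstract, a data-misfit term plus a PDE-residual term evaluated at collocation points, can be sketched in a few lines. The snippet below is a minimal one-dimensional illustration, not the paper's implementation: two simple closed-form functions stand in for the conductivity and head DNNs, and automatic differentiation is replaced by central finite differences.

```python
import numpy as np

# Hypothetical 1-D stand-ins for the two DNNs: conductivity K(x) and head h(x).
# With K = 1 + x and h = ln(1 + x), the Darcy flux -K dh/dx is constant,
# so this pair satisfies d/dx( K dh/dx ) = 0 exactly.
def K(x):
    return 1.0 + x

def h(x):
    return np.log(1.0 + x)

def pde_residual(x, dx=1e-4):
    # Darcy residual d/dx( K(x) * dh/dx ) at collocation points x,
    # approximated with central finite differences instead of autodiff.
    flux = lambda y: K(y) * (h(y + dx) - h(y - dx)) / (2 * dx)
    return (flux(x + dx) - flux(x - dx)) / (2 * dx)

def physics_informed_loss(x_data, k_obs, h_obs, x_colloc):
    # data-misfit terms for K and h plus the PDE-residual penalty
    loss_data = np.mean((K(x_data) - k_obs) ** 2) + np.mean((h(x_data) - h_obs) ** 2)
    loss_pde = np.mean(pde_residual(x_colloc) ** 2)
    return loss_data + loss_pde

x_c = np.linspace(0.1, 0.9, 50)
print(physics_informed_loss(x_c, K(x_c), h(x_c), x_c))  # ~0: exact pair satisfies Darcy
```

In the actual method both `K` and `h` would be trainable networks and this loss would be minimized by gradient descent; the sketch only shows how the physics constraint enters the objective.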
The logic of graph neural networks
Grohe, Martin
36th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 06/2021
Conference Proceeding
Open Access
Graph neural networks (GNNs) are deep learning architectures for machine learning problems on graphs. It has recently been shown that the expressiveness of GNNs can be characterised precisely by the combinatorial Weisfeiler-Leman algorithms and by finite variable counting logics. The correspondence has even led to new, higher-order GNNs corresponding to the WL algorithm in higher dimensions.
The purpose of this paper is to explain these descriptive characterisations of GNNs.
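For readers unfamiliar with the algorithm the abstract refers to, the sketch below implements 1-dimensional Weisfeiler-Leman (colour refinement), the combinatorial procedure whose distinguishing power matches that of standard message-passing GNNs. The adjacency-list representation, fixed round count, and example graphs are illustrative choices.

```python
from collections import Counter

def wl_refine(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman colour refinement on an adjacency list.
    Returns the histogram of final colours, the invariant GNNs can match."""
    colours = {v: 0 for v in adj}  # uniform initial colouring
    for _ in range(rounds):
        # new colour of v = (own colour, sorted multiset of neighbour colours)
        signatures = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
                      for v in adj}
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colours = {v: relabel[signatures[v]] for v in adj}
    return Counter(colours.values())

# A triangle and a 3-vertex path get different colour histograms:
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_refine(triangle), wl_refine(path))
```

The algorithm's limits show up on regular graphs: a 6-cycle and two disjoint triangles receive identical colour histograms, which is exactly why standard GNNs cannot distinguish them either.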
“The most exciting thing about my research is that it works! Our machine learning models are general and robust enough to ‘discover’ new chemistry that we have not thought about beforehand, but which often makes a lot of sense a posteriori … I celebrate success by starting to think about what to do next.” Find out more about Lukáš Grajciar in his Introducing … Profile.
Machine learning has evolved into an enabling technology for a wide range of highly successful applications. The potential for this success to continue and accelerate has placed machine learning (ML) at the top of research, economic, and political agendas. Such unprecedented interest is fuelled by a vision of ML applicability extending to healthcare, transportation, defence, and other domains of great societal importance. Achieving this vision requires the use of ML in safety-critical applications that demand levels of assurance beyond those needed for current ML applications. Our article provides a comprehensive survey of the state of the art in the assurance of ML, i.e., in the generation of evidence that ML is sufficiently safe for its intended use. The survey covers the methods capable of providing such evidence at different stages of the machine learning lifecycle, i.e., of the complex, iterative process that starts with the collection of the data used to train an ML component for a system, and ends with the deployment of that component within the system. The article begins with a systematic presentation of the ML lifecycle and its stages. We then define assurance desiderata for each stage, review existing methods that contribute to achieving these desiderata, and identify open challenges that require further research.
With the development of scientific research techniques, drug discovery has shifted from the serendipitous approach of the past to more targeted models based on an understanding of the underlying biological mechanisms of disease. However, known drugs are associated with hundreds of mechanism of action (MoA) labels, which turns this process into a complicated multi-label classification of text data. Traditional multi-label text classification algorithms increase model complexity and lose accuracy as the number of labels grows. Although deep learning algorithms can address the problem of model complexity, they are currently best suited to processing data in image format. To overcome these problems, this study proposes a multi-label classification method based on Bayesian deep learning, which converts non-image data into an image format so that it meets the input requirements of convolutional neural networks. Then, in the PyTorch environment, the Bayesian deep learning algorithm and the EfficientNet convolutional neural network are combined using the BLiTZ library to construct a Bayesian convolutional neural network model named BCNNM. This method not only improves classification efficiency but also addresses the imbalanced classification of multi-label data and fully accounts for the uncertainty in the neural network. In the process of drug development, this method has important practical significance for the multi-label classification of MoA data.
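As a rough illustration of the weight uncertainty that such Bayesian layers introduce, the sketch below implements a single Bayesian linear layer directly in NumPy rather than with BLiTZ or PyTorch, with hypothetical parameter values: each forward pass samples the weights from a Gaussian, so repeated passes yield both a prediction and an uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(42)

def bayesian_linear(x, mu, rho, n_samples=200):
    """Monte-Carlo forward passes through a linear layer whose weights are
    Gaussian with mean `mu` and std softplus(`rho`) -- the mechanism a
    Bayesian layer uses to represent weight uncertainty."""
    sigma = np.log1p(np.exp(rho))  # softplus keeps the std positive
    samples = [x @ (mu + sigma * rng.normal(size=mu.shape))
               for _ in range(n_samples)]
    preds = np.stack(samples)
    # mean prediction and its spread across weight samples
    return preds.mean(axis=0), preds.std(axis=0)

x = np.array([[1.0, 2.0]])
mu = np.array([[0.5], [0.25]])          # hypothetical weight means
rho = np.full((2, 1), -3.0)             # small weight uncertainty
mean, std = bayesian_linear(x, mu, rho)
print(mean, std)  # mean near 1.0, std small but non-zero
```

In BCNNM this sampling happens inside every Bayesian layer of the convolutional network; the per-prediction `std` is what lets the model flag uncertain MoA label assignments.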
Machine learning (ML) is a burgeoning field of medicine, with huge resources being applied to fuse computer science and statistics for medical problems. Proponents of ML extol its ability to deal with large, complex, and disparate data, often found within medicine, and believe that ML is the future for biomedical research, personalized medicine, and computer‐aided diagnosis, significantly advancing global health care. However, the concepts of ML are unfamiliar to many medical professionals, and there is untapped potential in the use of ML as a research tool. In this article, we provide an overview of the theory behind ML, explore the common ML algorithms used in medicine, including their pitfalls, and discuss the potential future of ML in medicine.
Detecting and recognizing deepfakes is a pressing issue in the digital age. In this study, we first collected a dataset of pristine images and fake ones generated by nine different Generative Adversarial Network (GAN) architectures and four Diffusion Models (DM). The dataset contained a total of 83,000 images, with equal distribution between the real and deepfake data. Then, to address different deepfake detection and recognition tasks, we proposed a hierarchical multi-level approach. At the first level, we classified real images from AI-generated ones. At the second level, we distinguished between images generated by GANs and DMs. At the third level (composed of two additional sub-levels), we recognized the specific GAN and DM architectures used to generate the synthetic data. Experimental results demonstrated that our approach achieved more than 97% classification accuracy, outperforming existing state-of-the-art methods. The models obtained at the different levels proved robust to various attacks such as JPEG compression (with different quality-factor values) and resizing, among others, demonstrating that the framework can be applied in real-world contexts (such as the analysis of multimedia data shared on social platforms) to support forensic investigations and counter the illicit use of these powerful, modern generative models. We are able to identify the specific GAN or DM architecture used to generate an image, which is critical in tracking down the source of a deepfake. Our hierarchical multi-level approach shows promising results in identifying deepfakes, letting each level focus on its own task and improving on standard flat multiclass detection systems by about 2% on average. The proposed method has the potential to enhance the performance of deepfake detection systems, aid in the fight against the spread of fake images, and safeguard the authenticity of digital media.
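The hierarchical routing the abstract describes can be sketched as follows. The three `level*` functions are placeholder stand-ins for the trained classifiers, driven here by a toy dictionary rather than real images; only the routing logic is the point.

```python
# Hypothetical stand-ins for the three trained classifiers in the hierarchy.
def level1_real_vs_fake(img):
    return "fake" if img.get("synthetic") else "real"

def level2_gan_vs_dm(img):
    return img.get("family")          # "GAN" or "DM"

def level3_architecture(img, family):
    return img.get("arch")            # e.g. "StyleGAN2", "StableDiffusion"

def classify(img):
    """Route an image through the hierarchy: real/fake, then GAN/DM,
    then the specific generator architecture."""
    if level1_real_vs_fake(img) == "real":
        return ("real",)              # later levels never see real images
    family = level2_gan_vs_dm(img)
    return ("fake", family, level3_architecture(img, family))

print(classify({"synthetic": True, "family": "DM", "arch": "StableDiffusion"}))
# -> ('fake', 'DM', 'StableDiffusion')
```

The design choice the paper exploits is visible here: each downstream classifier only ever sees inputs its parent has already filtered, which is what allows each level to specialize on a narrower task than a single flat multiclass model.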
Emerging technologies and applications including Internet of Things, social networking, and crowd-sourcing generate large amounts of data at the network edge. Machine learning models are often built from the collected data, to enable the detection, classification, and prediction of future events. Due to bandwidth, storage, and privacy concerns, it is often impractical to send all the data to a centralized location. In this paper, we consider the problem of learning model parameters from data distributed across multiple edge nodes, without sending raw data to a centralized place. Our focus is on a generic class of machine learning models that are trained using gradient-descent-based approaches. We analyze the convergence bound of distributed gradient descent from a theoretical point of view, based on which we propose a control algorithm that determines the best tradeoff between local update and global parameter aggregation to minimize the loss function under a given resource budget. The performance of the proposed algorithm is evaluated via extensive experiments with real datasets, both on a networked prototype system and in a larger-scale simulated environment. The experimentation results show that our proposed approach performs near to the optimum with various machine learning models and different data distributions.
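The local-update/global-aggregation tradeoff the paper analyzes can be sketched in a toy least-squares setting with made-up data: each edge node runs `tau` local gradient steps on its own data, then the nodes average their parameters. The paper's control algorithm tunes `tau` under a resource budget; here it is simply a fixed argument.

```python
import numpy as np

def local_sgd(w, X, y, steps, lr=0.1):
    # `steps` local gradient-descent updates on one edge node's data
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w = w - lr * grad
    return w

def distributed_gd(datasets, tau, rounds, dim):
    """Alternate `tau` local updates on each node with global averaging."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local_models = [local_sgd(w.copy(), X, y, tau) for X, y in datasets]
        w = np.mean(local_models, axis=0)      # global parameter aggregation
    return w

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
data = []
for _ in range(3):                             # three simulated edge nodes
    X = rng.normal(size=(40, 2))
    data.append((X, X @ w_true))               # noiseless local observations

print(distributed_gd(data, tau=5, rounds=20, dim=2))  # close to [1, -2]
```

With noiseless data every node shares the same optimum, so any `tau` converges; the interesting regime the paper studies is when local optima differ and larger `tau` saves communication but biases the aggregate.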
Machine learning (ML) models are increasingly being used to aid decision-making in high-risk applications. However, these models can perpetuate biases present in their training data or the systems in which they are integrated. When unaddressed, these biases can lead to harmful outcomes, such as misdiagnoses in healthcare [11], wrongful denials of loan applications [9], and over-policing of minority communities [2, 4]. Consequently, the fair ML community is dedicated to developing algorithms that minimize the influence of data and model bias.