•A PCE method for SWAT is developed to assess propagation of parameter uncertainty.•A PCE-ANN approach is proposed to enable PCE to generate probabilistic forecasts.•PCE-ANN is compared with Monte Carlo simulation for uncertainty quantification.•PCE-ANN can provide similar results but requires much less computational effort.
Soil and Water Assessment Tool (SWAT) is one of the most widely used semi-distributed hydrological models. Assessment of the uncertainties in SWAT outputs is a popular but challenging topic due to the significant number of parameters. The purpose of this study is to investigate the use of Polynomial Chaos Expansion (PCE) in assessing uncertainty propagation in SWAT under the impact of significant parameter sensitivity. Furthermore, for the first time, a machine learning technique (i.e., artificial neural network, ANN) is integrated with PCE to expand its capability in generating probabilistic forecasts of daily flow. The traditional PCE and the proposed PCE-ANN methods are applied to a case study in the Guadalupe watershed in Texas, USA to assess the uncertainty propagation in SWAT for flow prediction during the historical and forecasting periods. The results show that PCE provides results similar to the traditional Monte Carlo (MC) method, with a coefficient of determination (R2) value of 0.99 for the mean flow during the historical period, while the proposed PCE-ANN method reproduces the MC output with an R2 value of 0.84 for the mean flow during the forecasting period. The results also indicate that PCE and PCE-ANN are as reliable as, but much more efficient than, MC. PCE takes about 1% of the computational time required by MC; PCE-ANN takes only a few minutes to produce a probabilistic forecast, while MC requires running the model dozens, hundreds, or even thousands of times. Notably, the development of the PCE-ANN framework is the first attempt to explore PCE's probabilistic forecasting capability using machine learning. PCE-ANN is a promising uncertainty assessment and probabilistic forecasting technique, as it is more efficient in terms of computation time and does not lose essential uncertainty information.
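The core PCE idea used in the abstract above can be illustrated with a minimal, dependency-light sketch (not the paper's SWAT workflow): a one-parameter Hermite polynomial chaos surrogate is fit to a toy model by least squares, and the output mean and variance are read directly off the coefficients, then cross-checked against Monte Carlo. The toy model and all settings are hypothetical.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

# Toy stand-in for a single SWAT output as a function of one uncertain
# parameter xi ~ N(0, 1). (Hypothetical; the paper uses the full simulator.)
def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

rng = np.random.default_rng(0)
xi_train = rng.standard_normal(200)          # parameter samples
y_train = model(xi_train)

# Fit PCE coefficients by least squares on the probabilists' Hermite basis He_k
degree = 4
Psi = np.column_stack([hermeval(xi_train, np.eye(degree + 1)[k])
                       for k in range(degree + 1)])
coeffs, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# Output statistics come directly from the coefficients:
# mean = c_0, variance = sum_{k>=1} k! * c_k^2 (since E[He_j He_k] = k! if j=k)
pce_mean = coeffs[0]
pce_var = sum(math.factorial(k) * coeffs[k]**2 for k in range(1, degree + 1))

# Cross-check against plain Monte Carlo
y_mc = model(rng.standard_normal(100_000))
print(pce_mean, y_mc.mean())
print(pce_var, y_mc.var())
```

Once the coefficients are known, no further model runs are needed to evaluate the surrogate or its statistics, which is the source of the efficiency gain reported in the study.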
This work presents two novel approaches for the symplectic model reduction of high-dimensional Hamiltonian systems using data-driven quadratic manifolds. Classical symplectic model reduction approaches employ linear symplectic subspaces for representing the high-dimensional system states in a reduced-dimensional coordinate system. While these approximations respect the symplectic nature of Hamiltonian systems, linear basis approximations can suffer from slowly decaying Kolmogorov N-width, especially in wave-type problems, which then requires a large basis size. We propose two different model reduction methods based on recently developed quadratic manifolds, each presenting its own advantages and limitations. The addition of quadratic terms to the state approximation, which sits at the heart of the proposed methodologies, enables us to better represent intrinsic low-dimensionality in the problem at hand. Both approaches are effective for issuing predictions in settings well outside the range of their training data while providing more accurate solutions than the linear symplectic reduced-order models.
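The quadratic-manifold idea behind the abstract above can be sketched in plain (non-symplectic) form: approximate snapshots as x ≈ V q + W (q ⊗ q), with V from POD and W fit by least squares to the residual the linear basis leaves behind. The synthetic data and all sizes below are illustrative, not the paper's Hamiltonian test cases.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic snapshots lying near a quadratic manifold (illustrative only)
n, m, r = 50, 200, 2
V_true = np.linalg.qr(rng.standard_normal((n, r)))[0]
W_true = 0.1 * rng.standard_normal((n, r * r))
Q = rng.standard_normal((r, m))
K = np.einsum('im,jm->ijm', Q, Q).reshape(r * r, m)   # columns q ⊗ q
X = V_true @ Q + W_true @ K

# Step 1: linear basis from POD (truncated SVD of the snapshot matrix)
U, s, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]

# Step 2: reduced coordinates, then quadratic correction by least squares
Qh = V.T @ X                                   # projected coordinates
Kh = np.einsum('im,jm->ijm', Qh, Qh).reshape(r * r, m)
R = X - V @ Qh                                 # residual of the linear basis
W = R @ np.linalg.pinv(Kh)                     # x ≈ V q + W (q ⊗ q)

lin_err = np.linalg.norm(X - V @ Qh) / np.linalg.norm(X)
quad_err = np.linalg.norm(X - V @ Qh - W @ Kh) / np.linalg.norm(X)
# The quadratic term is a least-squares correction, so it never increases
# the reconstruction error relative to the linear basis alone.
print(lin_err, quad_err)
```

The symplectic variants in the paper additionally constrain the reduced dynamics to preserve the Hamiltonian structure, which this sketch deliberately omits.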
The force modulation of robotic manipulators has been extensively studied for several decades. However, it is not yet commonly used in safety-critical applications due to a lack of accurate interaction contact modeling and weak performance guarantees, many of which concern the modulation of interaction forces. This study presents a high-level framework for simultaneous trajectory optimization and force control of the interaction between a manipulator and soft environments, which is prone to external disturbances. Sliding friction and normal contact force are taken into account. The dynamics of the soft contact model and the manipulator are simultaneously incorporated in a trajectory optimizer to generate desired motion and force profiles. A constrained optimization framework based on the alternating direction method of multipliers has been employed to efficiently generate real-time optimal control inputs and high-dimensional state trajectories in a model-predictive control fashion. The experimental validation of the model performance is conducted on a soft substrate with known material properties using a Cartesian space force control mode. Results show a comparison of ground truth and real-time model-based contact force and motion tracking for multiple Cartesian motions in the valid range of the friction model. It is shown that a contact-model-based motion planner can compensate for frictional forces and motion disturbances and improve the overall motion and force tracking accuracy. The proposed high-level planner has the potential to facilitate the automation of medical tasks involving the manipulation of compliant, delicate, and deformable tissues.
With the predicted depletion of natural resources and alarming environmental issues, sustainable development has become a popular as well as a much-needed concept in modern process industries. Hence, manufacturers are quite keen on adopting novel process monitoring techniques to enhance product quality and process efficiency while minimizing possible adverse environmental impacts. Hardware sensors are employed in process industries to aid process monitoring and control, but they are associated with many limitations such as disturbances to the process flow, measurement delays, frequent need for maintenance, and high capital costs. As a result, soft sensors have become an attractive alternative for predicting quality-related parameters that are 'hard-to-measure' using hardware sensors. Due to their promising features over hardware counterparts, they have been employed across different process industries. This article attempts to explore the state-of-the-art artificial intelligence (AI)-driven soft sensors designed for process industries and their role in achieving the goal of sustainable development. First, a general introduction is given to soft sensors, their applications in different process industries, and their significance in achieving sustainable development goals. AI-based soft sensing algorithms are then introduced. Next, a discussion on how AI-driven soft sensors contribute toward different sustainable manufacturing strategies of process industries is provided. This is followed by a critical review of the most recent state-of-the-art AI-based soft sensors reported in the literature. Here, the use of powerful AI-based algorithms to address the limitations of traditional algorithms that restrict soft sensor performance is discussed.
Finally, the challenges and limitations associated with current soft sensor design, application, and maintenance are discussed, along with possible future directions for designing more intelligent and smart soft sensing technologies to cater to future industrial needs.
•This paper provides a detailed description of the state of the art of soft sensors.•This work discusses how industry can become sustainable via advanced monitoring.•This provides a good overview of soft sensing across different industries.•This identifies the current challenges in soft sensing for the manufacturing industry.•This work provides some possible directions for improving soft sensors.
We present a computational technique for modeling the evolution of partial differential equations (PDEs) with incomplete data. It is a significant extension of recent work on data-driven learning of PDEs, in the sense that we consider two forms of partial data: data observed only on a subset of the domain, and data observed only on a subset of the state variables. Both cases resemble more realistic data collection scenarios in real-world applications. Leveraging recent work on modeling partially observed dynamical systems, we present a deep neural network (DNN) structure that is suitable for PDE modeling with such kinds of incomplete data. In addition to the mathematical motivation for the DNN structure, we present an extensive set of numerical examples in both one and two dimensions to demonstrate the effectiveness of the proposed DNN modeling. In one example, the method can accurately predict the solution when data are available on less than half (40%) of the domain.
•Proposed a new numerical method and DNN structure for modeling PDEs with incomplete data.•Considered two cases: missing variables and missing domain.•Conducted extensive numerical tests to demonstrate the effectiveness of the method, e.g., with up to 60% of the domain missing.
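The missing-domain setting can be illustrated with a deliberately simplified, dependency-free sketch: a 1-D heat equation is simulated on a full grid, but "measurements" are kept only on 40% of it, and a memory-based linear flow map (a linear stand-in for the paper's DNN, chosen to keep the example short) is fit to advance the observed block. All numbers here are hypothetical.

```python
import numpy as np

# 1-D heat equation, explicit Euler: u <- A u, with fixed boundary values.
n, r = 50, 0.25                      # grid points, diffusion number nu*dt/dx^2
A = np.eye(n) + r * (np.diag(np.ones(n - 1), 1)
                     - 2 * np.eye(n) + np.diag(np.ones(n - 1), -1))
A[0, :] = 0.0; A[0, 0] = 1.0
A[-1, :] = 0.0; A[-1, -1] = 1.0      # freeze Dirichlet boundary rows

obs = slice(0, int(0.4 * n))         # data observed on 40% of the domain
mem = 5                              # memory steps compensate the unseen part

rng = np.random.default_rng(2)
X, Y = [], []
for _ in range(400):                 # trajectories from random initial data
    u = rng.standard_normal(n); u[0] = u[-1] = 0.0
    hist = []
    for _ in range(mem + 1):
        hist.append(u[obs].copy())
        u = A @ u
    X.append(np.concatenate(hist[:-1]))   # mem observed snapshots (features)
    Y.append(hist[-1])                    # next observed snapshot (target)
X, Y = np.array(X), np.array(Y)

# Least-squares flow map on the observed block with memory
G, *_ = np.linalg.lstsq(X, Y, rcond=None)
rel_err = np.linalg.norm(X @ G - Y) / np.linalg.norm(Y)
print("relative training error:", rel_err)
```

The memory window plays the same role as in the partially observed dynamical systems literature the abstract cites: the unobserved part of the state is compensated by conditioning on a history of the observed part.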
Recently, various algorithms for data-driven simulation and control have been proposed based on Willems' fundamental lemma. However, when the collected data are noisy, these methods lead to ill-conditioned data-driven model structures. In this article, we present a maximum likelihood framework to obtain an optimal data-driven model, the signal matrix model, in the presence of output noise. Data compression and noise-level estimation schemes are also proposed to apply the algorithm efficiently to large datasets and unknown noise-level scenarios. Two approaches in system identification and receding horizon control are developed based on the derived optimal estimator. The first one identifies a finite impulse response model. This approach improves the least-squares estimator with less restrictive assumptions. The second one applies the signal matrix model as the predictor in predictive control. The control performance is shown to be better than existing data-driven predictive control algorithms, especially under high noise levels. Both approaches demonstrate that the derived estimator provides a promising framework to apply data-driven algorithms to noisy data.
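A minimal noiseless sketch of the fundamental-lemma machinery that the signal matrix model builds on: a Hankel matrix of one measured trajectory is used to predict the future outputs of a new trajectory. The toy first-order system and all settings are hypothetical, and the noise handling that is the article's actual contribution is omitted.

```python
import numpy as np

def hankel(w, L):
    """Depth-L Hankel matrix of a scalar sequence w."""
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

# Data-generating SISO system (hypothetical): y[t+1] = a*y[t] + b*u[t]
a, b = 0.8, 1.0
rng = np.random.default_rng(3)
T = 60
u = rng.standard_normal(T)           # persistently exciting random input
y = np.zeros(T)
for t in range(T - 1):
    y[t + 1] = a * y[t] + b * u[t]

# Willems' fundamental lemma: every length-L trajectory of the system is a
# linear combination of the columns of the joint input/output Hankel matrix.
L_init, L_pred = 2, 5                # past window fixes the initial condition
L = L_init + L_pred
H = np.vstack([hankel(u, L), hankel(y, L)])

# New test trajectory: match the past, ask for the future outputs
u_test = rng.standard_normal(L)
y_test = np.zeros(L); y_test[0] = 1.0
for t in range(L - 1):
    y_test[t + 1] = a * y_test[t] + b * u_test[t]

known = np.concatenate([u_test,                 # all inputs
                        y_test[:L_init]])       # only the past outputs
rows = np.concatenate([np.arange(L), L + np.arange(L_init)])
g, *_ = np.linalg.lstsq(H[rows], known, rcond=None)
y_pred = (H[L:] @ g)[L_init:]
print(np.max(np.abs(y_pred - y_test[L_init:])))   # ~0 for exact data
```

With output noise, the least-squares step above becomes ill-conditioned, which is precisely the problem the maximum likelihood signal matrix model addresses.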
High fidelity (HF) mathematical models describing the generation of active force in the cardiac muscle tissue typically feature a large number of state variables to capture the intrinsically complex underlying subcellular mechanisms. With the aim of drastically reducing the computational burden associated with the numerical solution of these models, we propose a machine learning method that builds a reduced order model (ROM); this is obtained as the best approximation of the HF model within a class of candidate differential equations based on Artificial Neural Networks (ANNs). Within a semiphysical (gray-box) approach, an ANN learns the dynamics of the HF model from input–output pairs generated by the HF model itself (i.e. non-intrusively), being additionally informed with some a priori knowledge about the HF model. The ANN-based ROM, with just two internal variables, can accurately reproduce the results of the HF model, which instead features more than 2000 variables, under several physiological and pathological working regimes of the cell. We then propose a multiscale 3D cardiac electromechanical model, wherein active force generation is described by means of the previously trained ANN. We achieve a very favorable balance between accuracy of the results (on the order of 10^-3 for the main cardiac biomarkers) and computational efficiency (with a speedup of about one order of magnitude), still relying on a biophysically detailed description of the microscopic force generation phenomenon.
•We derive a Reduced Order Model (ROM) for active force generation in cardiomyocytes.•The ROM is built by Machine Learning of a physics-based high-fidelity model.•An Artificial Neural Network is trained within a gray-box approach.•We validate the ROM under several physiological and pathological cell conditions.•We present a 3D cardiac electromechanical model based on the ROM at the microscale.•We obtain accurate results with a drastic reduction of computational time.
We introduce a data-driven hair capture framework based on example strands generated through hair simulation. Our method can robustly reconstruct faithful 3D hair models from unprocessed input point clouds with large amounts of outliers. Current state-of-the-art techniques use geometrically-inspired heuristics to derive global hair strand structures, which can yield implausible hair strands for hairstyles involving large occlusions, multiple layers, or wisps of varying lengths. We address this problem using a voting-based fitting algorithm to discover structurally plausible configurations among the locally grown hair segments from a database of simulated examples. To generate these examples, we exhaustively sample the simulation configurations within the feasible parameter space constrained by the current input hairstyle. The number of necessary simulations can be further reduced by leveraging symmetry and constrained initial conditions. The final hairstyle can then be structurally represented by a limited number of examples. To handle constrained hairstyles such as a ponytail, of which realistic simulations are more difficult, we allow the user to sketch a few strokes to generate strand examples through an intuitive interface. Our approach focuses on robustness and generality. Since our method is structurally plausible by construction, we ensure an improved control during hair digitization and avoid implausible hair synthesis for a wide range of hairstyles.
Although integrated simulation-optimization modeling can provide a comprehensive and reliable analysis for water quality management (WQM), it is usually not easy to implement in practice. This study proposed a new efficient simulation-optimization modeling approach by leveraging the power of data-driven modeling, to support WQM under various uncertainties. A water quality simulation model is integrated with the optimization model, and then substituted by a series of numerical surrogate models based on inexact linear regression. The transformation can significantly reduce the computational burden and make it possible to implement uncertainty quantification through hybrid inexact programming. The proposed model incorporates interval quadratic programming and credibility constrained programming to deal with nonlinearity and various uncertainties associated with the management system. The proposed approach is applied to a real case study of the Grand River watershed in Canada for controlling phosphorus concentration in river water. The Grand River Simulation Model (GRSM) is employed as the physical simulation model to estimate the total phosphorus concentration in the river. Interval solutions under different confidence levels of violating the effluent standards were obtained, which can be used to generate optimal phosphorus control strategies. The results indicate the proposed data-driven interval credibility constrained quadratic programming (DICCQP) model is able to provide reliable and robust solutions for WQM by considering nonlinearity and various uncertainties while maintaining high computational efficiency. The proposed new framework can be extended and applied to other watersheds. The high efficiency of the proposed model makes it possible to solve large-scale complex water quality management and planning problems.
•A new data-driven modeling based simulation-optimization framework is proposed for WQM.•Various uncertainties are quantified by hybrid inexact programming.•The proposed model improves the optimization efficiency.•The model is applied in the Grand River to control and manage TP loadings.•The model can be extended to other watersheds to solve complex large-scale problems.
Metasurfaces (MSs) show great promise in efficient electromagnetic energy harvesting (EMEH) due to their compactness, high efficiency, and long-distance transmission capabilities. Nonetheless, the conventional iterative and time-consuming solving process of MSs significantly escalates computational demands. Furthermore, once processed, the MS shape remains fixed and cannot be adapted to changing requirements. Accordingly, a critical challenge is the development of a new efficient solver for MS real-time tuning. Here, we introduce a class of digital coded MS databases that include multiple MSs with pre-defined resonant frequencies. The combination of multiple MS base functions from the database enables swift resonance frequency adjustments to adapt to changing environmental conditions. A topology optimization method based on data-driven modeling is employed to rapidly acquire the optimal digital coding for the corresponding MS at various operating frequencies, facilitating the construction of a database. This approach integrates a convolutional neural network and genetic algorithm (CNNGA). It not only enables more accurate and expedited forward prediction of MSs' electromagnetic (EM) response but also facilitates inverse design based on specified requirements. We employ this method to design a MS that achieves perfect energy harvesting (EH) over a broad range of incident angles and polarization directions. In addition, data-driven modeling is used to establish an EH efficiency predictive model corresponding to the MS combination. This model serves as a guide for real-time MS adjustments as requirements change. Compared to previously designed MSs, this model achieves rapid design and adaptive adjustment capabilities. Through the incorporation of various functional MS base functions into the database, this method can be universally applied to MS combinations tailored to specific functions, including EM cloaking, ultra-thin flat lenses, and computational MSs.
•Constructing a digital coded metasurface database for fast frequency adjustment.•Developing a data-driven modeling method for fast digital coded metasurface design.•Realizing rapid combination of metasurfaces for real-time adaptive tuning.•Proposing a universal modular design method for adaptive adjustment of combined metasurfaces.
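As a schematic of the genetic-algorithm half of a CNNGA-style pipeline, the sketch below evolves a binary metasurface coding against a toy fitness function that stands in for the CNN-predicted EM response. Everything here (bit length, operators, the fitness itself) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bits, pop_size, n_gen = 24, 40, 60
target = rng.integers(0, 2, n_bits)          # toy "ideal" coding

def fitness(pop):
    # Stand-in for the CNN forward model scoring each coded metasurface
    return (pop == target).sum(axis=1)

pop = rng.integers(0, 2, (pop_size, n_bits))
for _ in range(n_gen):
    f = fitness(pop)
    elite = pop[np.argmax(f)].copy()
    # Tournament selection: each slot keeps the fitter of two random rows
    i, j = rng.integers(0, pop_size, (2, pop_size))
    parents = np.where((f[i] >= f[j])[:, None], pop[i], pop[j])
    # One-point crossover between paired parents
    cut = rng.integers(1, n_bits, pop_size)
    mask = np.arange(n_bits)[None, :] < cut[:, None]
    children = np.where(mask, parents, parents[::-1])
    # Bit-flip mutation
    flip = rng.random(children.shape) < 0.02
    pop = np.where(flip, 1 - children, children)
    pop[0] = elite                           # elitism: keep the best coding

best_score = fitness(pop).max()
print(best_score, "of", n_bits)
```

In the actual pipeline, the fitness call would query the trained CNN surrogate, which is what makes the search fast enough for real-time database construction.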