In this work, we present a constrained batch-parallel Bayesian optimization (BO) framework, termed pBO-2GP-3B, to accelerate the optimization of high-dimensional and computationally expensive problems with known and unknown constraints. Two Gaussian processes (GPs) are constructed simultaneously: one models the objective function, whereas the other models the unknown constraints. Known constraints are penalized directly in the acquisition function. At every iteration, three batches are built in sequential order: the first two are the acquisition hallucination batch and the exploration batch for the objective GP, respectively, and the third is the exploration batch for the classification GP. The pBO-2GP-3B framework is demonstrated on three synthetic examples (2D and 6D), as well as a 33D multi-phase solid–liquid computational fluid dynamics (CFD) model for the design optimization of a centrifugal slurry pump impeller.
• Constrained and parallel Bayesian optimization is developed for high-dimensional problems.
• Two Gaussian processes, for the objective and for the classification of unknown constraints, are constructed simultaneously.
• Known constraints are penalized in regularized acquisition functions.
• Three batches for acquisition hallucination, exploration, and classification are run in parallel.
• Design of a centrifugal slurry pump impeller based on multi-phase CFD is demonstrated.
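The interaction of the two GPs with the known-constraint penalty can be sketched in a few lines. The function below is an illustrative assumption of how a feasibility-weighted, penalized expected improvement might look, not the paper's exact acquisition: `p_feasible` stands in for the classification GP's output and `known_ok` for the known-constraint check.

```python
import numpy as np
from scipy.stats import norm

def penalized_ei(mu, sigma, f_best, p_feasible, known_ok):
    """Expected improvement (minimization convention) weighted by the
    probability of satisfying the unknown constraints (classification GP)
    and penalized to zero wherever the known constraint is violated."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (f_best - mu) / sigma
    ei = sigma * (z * norm.cdf(z) + norm.pdf(z))
    return ei * p_feasible * known_ok.astype(float)
```

A point that violates the known constraint contributes exactly zero, while a point with uncertain unknown-constraint feasibility is merely down-weighted.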
High-fidelity complex engineering simulations are often predictive but computationally expensive, and the computational burden is usually mitigated through parallelism on high-performance computing (HPC) clusters. Optimization problems built on these simulations are challenging because of the high computational cost of each high-fidelity evaluation. In this paper, an asynchronous parallel constrained Bayesian optimization method is proposed to efficiently solve computationally expensive simulation-based optimization problems on an HPC platform under a budgeted computational resource, where the maximum number of simulations is fixed. The advantages of this method are three-fold. First, the efficiency of Bayesian optimization is improved: multiple input locations are evaluated in parallel and asynchronously to accelerate optimization convergence with respect to physical runtime, and as soon as any evaluation finishes, another input is queried without waiting for the whole batch to complete. Second, the proposed method can handle both known and unknown constraints. Third, the proposed method samples among several acquisition functions based on their rewards using a modified GP-Hedge scheme. The proposed framework is termed aphBO-2GP-3B, which stands for asynchronous parallel hedge Bayesian optimization with two Gaussian processes and three batches. The numerical performance of the proposed framework aphBO-2GP-3B is comprehensively benchmarked on 16 numerical examples against 6 other parallel Bayesian optimization variants and 1 parallel Monte Carlo baseline, and demonstrated using two real-world high-fidelity expensive industrial applications. The first engineering application is based on finite element analysis (FEA) and the second one is based on computational fluid dynamics (CFD) simulations.
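The modified GP-Hedge scheme mentioned above amounts to sampling an acquisition function with probability tied to its accumulated reward. A minimal sketch follows; the softmax gain `eta` and the exact reward bookkeeping are assumptions for illustration, not the paper's calibrated settings.

```python
import numpy as np

def gp_hedge_choose(rewards, eta=1.0, rng=None):
    """Sample the index of an acquisition function with probability
    proportional to exp(eta * cumulative reward), as in GP-Hedge."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.asarray(rewards, dtype=float)
    p = np.exp(eta * (g - g.max()))   # shift by max for numerical stability
    p /= p.sum()
    return rng.choice(len(g), p=p), p
```

Acquisition functions that have recently proposed good points accumulate reward and are sampled more often, while poorly performing ones retain a small but nonzero probability of being explored.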
Computational fluid dynamics (CFD)-based wear predictions are computationally expensive to evaluate, even with a high-performance computing infrastructure. Thus, it is difficult to provide accurate local wear predictions in a timely manner. Data-driven approaches provide a more computationally efficient way to approximate CFD wear predictions without running the actual CFD wear models. In this paper, a machine learning (ML) approach, termed WearGP, is presented to approximate 3D local wear predictions, using numerical wear predictions from steady-state CFD simulations as training and testing datasets. The proposed framework is built on Gaussian processes (GPs) and is used to predict wear in a much shorter time. The WearGP framework consists of three stages. In the first stage, the training dataset is built from a number of CFD simulations on the order of O(10²). In the second stage, data cleansing and data mining are performed, where the nodal wear solutions are extracted from the solution database to build the training dataset. In the third stage, wear predictions are made using the trained GP models. Two CFD case studies, a 3D slurry pump impeller and a casing, are used to demonstrate the WearGP framework, with 144 training and 40 testing data points used to train and test the proposed method, respectively. The numerical accuracy, computational efficiency, and effectiveness of the WearGP framework and the CFD wear model are compared for both slurry pump impellers and casings. It is shown that the WearGP framework can achieve highly accurate results comparable with the CFD results using a relatively small training dataset, with a computational time reduction on the order of 10⁵ to 10⁶.
• Gaussian process predicts erosive wear with significant reduction of computational costs.
• Gaussian process predictions based on a small dataset can be as accurate as CFD simulations.
• Demonstration is in a real-world industrial setting of slurry pump impeller and casing design.
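The GP regression at the heart of a WearGP-style surrogate can be sketched with plain linear algebra. The RBF kernel, length scale, and noise level below are illustrative assumptions, not the trained models from the paper:

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel: the
    surrogate predicts at X_test from a small training dataset."""
    def k(A, B):
        # squared-exponential (RBF) covariance between point sets
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    return k(X_test, X_train) @ np.linalg.solve(K, y_train)
```

Once the O(n³) solve over the (small) training set is done, each new prediction is a cheap matrix-vector product, which is the source of the speed-up over rerunning the CFD model.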
Recent loop testing performed at the GIW Hydraulic Lab [1, 2] has provided pump performance data for two highly non‐Newtonian slurries with significantly different characteristics: a high clay content slurry with minimal coarse solids, and a typical, low clay content, two‐component tailings slurry. The importance of air removal in the sump and pipe loop was demonstrated using a simple, yet novel de‐aeration system. In addition to the measurement of performance losses, the upper limit of “pumpability” for these slurries relative to their concentration and associated yield stress was investigated. However, once the slurry was de‐aerated, no limits could be found other than those dictated by suction side losses (NPSHA) or excessive pipeline friction gradients, indicating that the only true limit in practice is one of system economics, i.e. pump operating and capital cost.
Experimentally measured pump head and efficiency were compared against corresponding predictions from two different models, the Walker and Goulas technique [3] and the Graham et al. technique [4], with special focus given to the dependence of the losses on pump rotary speed.
A pipeline slurry friction loss model consisting of three regimes was initially proposed by Wilson and later extended to four components: fluid, pseudo‐homogeneous, heterogeneous, and fully stratified regimes. The weighting technique using up to four regime‐related components often works well for friction loss estimations based on simple input data and model parameters. The same holds for the Hydraulic Institute's pump performance derating procedure for settling slurries. The comparisons and discussion focus on coarse particle slurries and on some cases where the modelling estimates for pipeline and pump performance were not particularly accurate.
Hydrocyclone separators are some of the most widely used instruments in a variety of industrial applications. Their major functions are to sort, classify, and separate solid particles and liquid droplets within a multiphase flow system. Research studies employing both experimental and numerical methods recognize that the performance of a cyclone is related to its geometric dimensions as well as the inner flow pattern. Although many successes have been achieved over the past years, the majority of investigations have focused only on simplified cyclone geometries and on single-phase flow. Therefore, this study set out to investigate the flow field in a newly designed hydrocyclone, using particle image velocimetry (PIV) to explore the complex swirling flow behavior inside the cyclone with an air core. The flow field inside the cyclone was also studied by computational fluid dynamics (CFD) simulation using a commercial CFD package. Different turbulence models were tested with a moderate-size mesh, and the interface between the liquid phase and the air phase was predicted by the volume of fluid (VOF) model.
High-fidelity complex engineering simulations are highly predictive but also computationally expensive, often requiring substantial computational effort. The computational burden is usually mitigated through parallelism on high-performance computing (HPC) clusters. In this paper, an asynchronous constrained batch-parallel Bayesian optimization method is proposed to efficiently solve computationally expensive simulation-based optimization problems on an HPC platform under a budgeted computational resource, where the maximum number of simulations is fixed. The advantages of this method are three-fold. First, the efficiency of Bayesian optimization is improved: multiple input locations are evaluated massively in parallel and asynchronously to accelerate optimization convergence with respect to physical runtime, and as soon as any evaluation finishes, another input is queried without waiting for the whole batch to complete. Second, the method can handle both known and unknown constraints. Third, the proposed method considers several acquisition functions at the same time and samples among them from an evolving probability mass function, updated by a modified GP-Hedge scheme whose parameters correspond to the performance of each acquisition function. The proposed framework is termed aphBO-2GP-3B, which corresponds to asynchronous parallel hedge Bayesian optimization with two Gaussian processes and three batches. The aphBO-2GP-3B framework is demonstrated using two high-fidelity expensive industrial applications, where the first is based on finite element analysis (FEA) and the second on computational fluid dynamics (CFD) simulations.
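The asynchronous querying policy described here, where a new input is submitted as soon as any evaluation finishes rather than waiting for the whole batch, can be sketched with a thread pool. `evaluate` and `propose` are hypothetical stand-ins for the expensive simulation and the acquisition-function maximizer:

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def async_optimize(evaluate, propose, budget, n_workers=3):
    """Keep up to n_workers evaluations in flight; as soon as any one
    finishes, immediately submit a new proposal (no batch barrier)
    until the fixed simulation budget is exhausted."""
    results, submitted, pending = [], 0, set()
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        while submitted < budget or pending:
            # refill the in-flight set without waiting for stragglers
            while submitted < budget and len(pending) < n_workers:
                pending.add(pool.submit(evaluate, propose(results)))
                submitted += 1
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                results.append(fut.result())
    return results
```

The `FIRST_COMPLETED` wait is what distinguishes this from synchronous batch BO: with heterogeneous simulation runtimes, no worker ever idles at a batch barrier, which is the runtime advantage the abstract describes.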
Descartes' interests extended to diverse subjects, and one of the most striking subjects he studied was artificial intelligence. At least, the contention that he was considering artificial intelligence theory (in the early seventeenth century) is one of the main contentions of this dissertation. In other words, when Descartes talks of ‘thinking machines’ in the Discourse on Method and talks of machines in other places in the Cartesian corpus, Descartes shows that he has an artificial intelligence theory—one in which the very possibility of thinking machines is discounted, and one in which the very term ‘thinking machine’ is held to be a misnomer. I begin by giving a short introduction to the problem, and then I give an exposition of artificial intelligence theories of the ‘micro-world’ variety. I show how these theories typically disallow the possibility of artificial intelligence on the view that, while computers can do some things much better than humans, the success of machines is always limited to one particular micro-world (the world of chess, for example) or other. Whereas humans typically have a wide-ranging ability to perform all sorts of tasks, computers can do only certain particular tasks well, and whereas humans excel in performing tasks in which intuition is required, computers can do no tasks at all where intuition is required. Citing the Discourse on Method as a starting point, I extract an artificial intelligence theory (of the micro-world variety) from Descartes' works. In his body of work, Descartes is theorizing on the difference between minds and machines. I show how this diverse work is truly an artificial intelligence theory. Last, by way of conclusion I engage in a short critique of Descartes' artificial intelligence theory.
The critique questions the essential difference between mind and body in the Cartesian philosophy and the acceptability of Descartes' answers to objections on that subject in the Objections and Replies to the Meditations, and questions the Cartesian artificial intelligence theory from a reductionistic and physicalistic point of view.
Background: Treatment of patients with early Lyme disease has trended toward longer duration despite the absence of supporting clinical trials.
Objective: To evaluate different durations of oral doxycycline treatment and the combination of oral doxycycline and a single intravenous dose of ceftriaxone for treatment of patients with early Lyme disease.
Design: Randomized, double-blind, placebo-controlled trial.
Setting: Single-center university hospital.
Patients: 180 patients with erythema migrans.
Intervention: Ten days of oral doxycycline, with or without a single intravenous dose of ceftriaxone, or 20 days of oral doxycycline.
Measurements: Outcome was based on clinical observations and neurocognitive testing. Efficacy was assessed at 20 days, 3 months, 12 months, and 30 months.
Results: At all time points, the complete response rate was similar for the three treatment groups in both on-study and intention-to-treat analyses. In the on-study analysis, the complete response rate at 30 months was 83.9% in the 20-day doxycycline group, 90.3% in the 10-day doxycycline group, and 86.5% in the doxycycline-ceftriaxone group (P > 0.2). The only patient with treatment failure (10-day doxycycline group) developed meningitis on day 18. There were no significant differences in the results of neurocognitive testing among the three treatment groups and a separate control group without Lyme disease. Diarrhea occurred significantly more often in the doxycycline-ceftriaxone group (35%) than in either of the other two groups (P < 0.001).
Conclusions: Extending treatment with doxycycline from 10 to 20 days or adding one dose of ceftriaxone to the beginning of a 10-day course of doxycycline did not enhance therapeutic efficacy in patients with erythema migrans. Regardless of regimen, objective evidence of treatment failure was extremely rare.