This work develops a framework for building machine learning models and machine-learning-based predictive control schemes for batch crystallization processes. We consider a seeded fesoterodine fumarate cooling crystallization and dissolution process in a batch reactor and present the methodology and implementation of simulation, modeling, and controller design. Specifically, to address the problem of experimental data scarcity, we first develop a one-dimensional population balance model, based on empirically obtained published kinetic parameters, to describe the formation of crystals via nucleation, growth, and agglomeration. Then, recurrent neural network (RNN) and autoencoder–RNN (AERNN) models are developed using data from extensive open-loop simulations of the semi-empirical population balance model under various operating conditions to capture the dynamic behavior of the process. Two model predictive control (MPC) schemes using the respective RNN and AERNN models are developed to optimize the crystallization process with respect to product yield, crystal size, number of fines in the final product, and energy consumption, while accounting for constraints on the manipulated inputs. Through open- and closed-loop simulations, it is demonstrated that the RNN and AERNN models capture the process dynamics well and that the RNN- and AERNN-based MPCs achieve the desired product yield and crystal size with significantly improved computational efficiency.
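The population balance modeling step can be illustrated with a minimal sketch: a one-dimensional model with nucleation and size-independent growth, discretized on a size grid with an upwind scheme. The grid, growth rate G, and nucleation rate B below are illustrative placeholders, not the fesoterodine fumarate kinetics used in the paper, and agglomeration is omitted.

```python
import numpy as np

# Illustrative 1-D population balance (nucleation + size-independent growth),
# discretized with an upwind scheme; all kinetic values are placeholders.
L = np.linspace(0.0, 500e-6, 101)      # crystal size grid [m]
dL = L[1] - L[0]
n = np.zeros_like(L)                   # number density along the size grid
G = 1e-7                               # growth rate [m/s] (placeholder)
B = 1e8                                # nucleation rate [#/(m^3 s)] (placeholder)
dt = 0.5 * dL / G                      # CFL-stable time step

for _ in range(200):
    flux = G * n                       # size-space flux
    n[1:] -= dt * (flux[1:] - flux[:-1]) / dL   # upwind transport in size
    n[0] = B / G                       # nucleation boundary condition

moment0 = (n * dL).sum()               # total crystal number per volume
moment3 = (n * L**3 * dL).sum()        # proportional to solid volume fraction
print(moment0, moment3)
```

Open-loop simulations of such a model under varied cooling profiles would then supply the training data for the RNN/AERNN surrogates.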
A new algorithm for the detection and evaluation of spherical particle objects (called SPODE) has been developed. It can be used to automatically analyze images of disperse flows (e.g., bubbles, droplets, sprays) obtained with a shadowgraphic inline probe. The algorithm contains a convolutional neural network (CNN) trained with synthetically generated particles. Using synthetic particles has the advantage that the exact size and position of each particle are known, so the CNN can be validated. The synthetic images are generated using a physical model of light transport in the particle-loaded fluid (i.e., ray tracing). The algorithm was applied to real data of droplets in a pump-mixer and bubbles in an aerated stirred tank. Particles were detected, and their size distributions were determined. The results clearly show that the algorithm can automatically and reliably analyze images of disperse multiphase flows.
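The synthetic-ground-truth idea can be sketched in a toy form: render particles of known size into an image, measure them back, and compare against the known truth. The flat-disc rendering below stands in for the paper's ray-traced shadowgraph model and is purely illustrative.

```python
import numpy as np

# Toy version of the synthetic-ground-truth idea: draw discs of known radius,
# then recover each radius from its pixel area (A = pi * r^2) and compare.
img = np.zeros((200, 200))
truth = [(50, 60, 12), (140, 90, 20), (100, 160, 8)]   # (row, col, radius)
rr, cc = np.mgrid[0:200, 0:200]
for r0, c0, rad in truth:
    img[(rr - r0) ** 2 + (cc - c0) ** 2 <= rad ** 2] = 1.0

# "measure" each particle back from the rendered image
areas = []
for r0, c0, rad in truth:
    mask = (rr - r0) ** 2 + (cc - c0) ** 2 <= (rad + 2) ** 2
    areas.append(img[mask].sum())
radii = np.sqrt(np.array(areas) / np.pi)
print(radii)
```

The recovered radii agree with the known ground truth to within pixelization error, which is exactly the kind of validation the synthetic images enable for the CNN.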
The traditional industrial production of functionalized polydimethylsiloxane (PDMS) consists of a two-stage batch process, i.e., the ring-opening polymerization (ROP)/equilibrium of a cyclic siloxane monomer followed by attaching functional groups to PDMS through hydrosilylation. In this study, we developed a continuous tandem flow process combining the two stages for the synthesis of functionalized PDMS using octamethylcyclotetrasiloxane (D4) as the monomer. We investigated the kinetics of the ROP/equilibrium reaction of D4 using a flow tube reactor filled with the cationic resin catalyst Amberlyst 35. The reaction was found to be complete within a short residence time, 2–3 min, giving a steady outflow of the reaction mixture containing ca. 80% nonvolatile content for a period of up to 90 min. The catalyst was reusable up to 80 times by rinsing with methyl isobutyl ketone (MIBK). The resulting PDMS-SiH, bearing two terminal -SiH groups, was used for the subsequent hydrosilylation reaction with different vinyl compounds in the presence of the Speier catalyst. PDMSs possessing functionalities such as ethylene glycol (EG), epoxy (EPO), methacrylate (MA), and hydrocarbon chains were prepared through the continuous tandem ROP/equilibrium and hydrosilylation reaction. These products exhibit excellent defoaming performance, comparable to that of commercial products made by the batch process. The present study paves the way to a more efficient production of functionalized silicone oil through a flow reaction process.
Renewable energy sources are carbon neutral and are viewed as an important approach to solving the energy crisis. Compared with other storage media, green ammonia has the highest total energy efficiency and is an important raw material in the chemical industry. The fluctuation of renewable energy is a huge challenge for green ammonia production; thus, a high-precision prediction model is required to enable the building of an advanced control system for green ammonia production. In this study, a transformer-based multivariable multistep time-series prediction model that includes the temporal-scale features of renewable energy, called MSPTST, is proposed as the first step toward solving the problem of unstable green ammonia production. The model outperformed other state-of-the-art models, with R² values of 0.986, 0.9998, and 0.990 under high-load, low-renewable-energy, and natural conditions, respectively. Stable operation and swift load transitions are facilitated through model predictive control based on the MSPTST framework. The impact of energy fluctuations on green ammonia synthesis is addressed by further integrating control theory and algorithms.
Probability distributions are often used to characterize the randomness of nature. In stochastic model predictive control (SMPC), disturbances are described by a probability distribution that is used within a stochastic optimization problem to construct a feedback control law. While powerful, these probability distributions are themselves subject to their own type of uncertainty, often called distributional uncertainty. In this work, we establish that SMPC, under suitable assumptions, provides a nonzero margin of robustness to this distributional uncertainty. This inherent distributional robustness is afforded by feedback and careful algorithm design. Through a small example, we demonstrate the implications of this result for incorrectly modeled, out-of-sample, and even unmodeled disturbances. This result also covers scenario-based approximations of stochastic optimal control problems and unifies the description of robustness for nominal and stochastic model predictive control.
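The scenario-based approximation mentioned above can be sketched on a toy one-step problem (not the paper's example): sample disturbance scenarios from the assumed distribution, average the cost over them, and pick the input that minimizes the empirical expectation.

```python
import numpy as np

# Scenario-based approximation of a one-step stochastic OCP for the scalar
# system x+ = a*x + b*u + w with cost E[(x+)^2 + r*u^2].  All numbers are
# illustrative placeholders.
rng = np.random.default_rng(0)
a, b, r, x = 0.9, 1.0, 0.1, 2.0
w = rng.normal(0.0, 0.2, size=1000)          # disturbance scenarios

us = np.linspace(-3.0, 0.0, 601)             # candidate inputs (grid search)
cost = np.array([np.mean((a * x + b * u + w) ** 2) + r * u**2 for u in us])
u_scen = us[np.argmin(cost)]                 # empirical-expectation optimum

# analytic optimum for exactly zero-mean noise: u* = -a*b*x / (b^2 + r)
u_star = -a * b * x / (b**2 + r)
print(u_scen, u_star)
```

With enough scenarios the empirical optimum approaches the analytic one; the paper's robustness result concerns what happens when the scenarios come from a distribution that does not match the true disturbance.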
A five-step method is proposed for determining the design space of mesenchymal stem cell (MSC) cultivation processes, incorporating system dynamics and variabilities. The method uses mechanistic models to capture the MSC cultivation dynamics while performing stochastic simulation to account for variabilities at the raw-material, cell, process-specification, and operation levels. Simulated time-dependent changes in critical quality attributes (CQAs) are used to determine a dynamic and probabilistic design space, followed by a dynamic global sensitivity analysis. The method was applied to an MSC cultivation case study in which a time-dependent design space could be obtained for a given set of CQAs and critical process parameters. Sensitivity analysis revealed that the maximum cell density was the dominant parameter of interest. The input probability distributions were re-evaluated on the basis of a scenario in which the substrate material specification was changed, which could affect the maximum cell density. The updated calculation yielded a broader and viable design space.
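The stochastic-simulation step can be sketched as a Monte Carlo probability map: sample a source of variability, simulate a growth model, and keep the process settings whose CQA success probability exceeds a threshold. The logistic model, target, grid, and distributions below are illustrative placeholders, not the paper's MSC model.

```python
import numpy as np

# Toy probabilistic design space: for each candidate growth rate (CPP),
# sample maximum-cell-density variability and estimate the probability that
# the harvest density (CQA) meets its target.
rng = np.random.default_rng(1)

def final_density(mu, x_max, t_end=120.0, x0=1e4):
    # closed-form logistic growth evaluated at the harvest time t_end
    return x_max / (1.0 + (x_max / x0 - 1.0) * np.exp(-mu * t_end))

target = 5e5                                   # CQA target: cells/mL at harvest
mu_grid = np.linspace(0.02, 0.08, 13)          # candidate CPP values [1/h]
prob_ok = []
for mu in mu_grid:
    x_max = rng.normal(1e6, 1.5e5, size=500)   # raw-material variability
    prob_ok.append(np.mean(final_density(mu, x_max) >= target))

# keep settings with >= 90% probability of meeting the CQA
design_space = mu_grid[np.array(prob_ok) >= 0.9]
print(design_space)
```

The surviving grid points form a probabilistic design space; re-evaluating the input distribution (e.g., a tighter substrate specification) would shift these probabilities and thus the space.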
Establishing relations between variables and real-time prediction of quality variables or other key indicators are critical for dynamic processes, including industrial and biological processes. In this study, a novel multivariate statistical modeling method, the kernel-regularized latent variable regression (KrLVR) approach, is proposed to capture process dynamics by building KrLVR models from process and quality data. First, a regularization term based on a kernel matrix is incorporated into the objective of the latent variable regression model. Consequently, the proposed KrLVR method can overcome the potential ill-conditioning resulting from collinearity in process data and has stronger predictive power. Furthermore, the inner model is consistent with the outer model, which enables the proposed method to predict quality data with fewer latent variables. Second, prior knowledge of dynamic processes, such as exponential stability and smoothness, can be integrated into the modeling process by choosing an appropriate kernel matrix. In particular, to meet the requirement of exponential stability, the weights of the model should decay exponentially, which coincides with determining the number of historical observations used for data augmentation (i.e., identifying the model structure). The problem of tuning model complexity is therefore avoided and reduces to finding appropriate hyperparameters of the kernel matrix. Moreover, the empirical Bayesian method is utilized to estimate the hyperparameters of the kernel matrix from augmented process and quality data. Three case studies illustrate the performance of the proposed KrLVR method through comparisons with several other relevant methods.
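The role of the kernel matrix as a regularizer can be illustrated with plain kernel ridge regression, a simpler relative of KrLVR without the latent-variable structure: the kernel-regularized solve remains well-posed even when process variables are nearly collinear. All data and hyperparameters below are synthetic.

```python
import numpy as np

# Kernel ridge regression on nearly collinear process data: the kernel matrix
# plus a regularization term keeps the solve well-conditioned.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
X[:, 4] = X[:, 3] + 1e-3 * rng.normal(size=100)   # nearly collinear columns
y = X @ np.array([1.0, -2.0, 0.5, 1.0, 1.0]) + 0.05 * rng.normal(size=100)

def rbf(A, B, gamma=0.1):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # regularized fit

pred = K @ alpha                                      # fitted quality values
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)
```

Despite the collinear inputs, the regularized system is invertible and the fit is accurate; KrLVR adds the latent-variable structure and kernel choices that encode dynamics on top of this basic mechanism.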
Causal discovery approaches are gaining popularity in industrial processes. Existing causal discovery algorithms can indeed find some important causal relationships from industrial data but, at the same time, may also produce incorrect causal relationships. To deal with this problem, we define four kinds of process knowledge according to the special characteristics of complex industrial processes. Causal discovery algorithms yield more accurate results and deeper insights when this process knowledge is properly incorporated. Using data from a commercial-scale fluid catalytic cracking unit, we validate the effectiveness of the proposed methods with several state-of-the-art causal discovery algorithms.
In this work, an optimized process for methanol production using syngas from bi-reforming is proposed. The feed ratio (CH4/CO2/H2O) in the bi-reforming step, the purge stream quantity, and the heat recovery were optimized with the overall objective of reducing direct and indirect CO2 emissions in the process. The effect of the feed ratio on the rates of the simultaneous reactions involved in bi-reforming (i.e., DR, SMR, and WGS) was investigated to understand the balance between the consumption and production of CO2 relative to CH4. Compared to the conventionally used feed ratio of 3:1:2, this study found that a 1:1:2 ratio resulted in 100% CH4 conversion and higher CO2 consumption per mole of CH4 in the bi-reforming step. A plant-wide heat integration approach using pinch analysis was adopted to design a network of 27 heat exchangers, whose implementation recovered 221 MW of heat from process streams within the plant. With these complementary optimization strategies, the proposed process emits ∼0.31 tonnes of CO2 per tonne of methanol produced, one of the lowest values among processes published in the literature.
Invariant features characterize the essential nature of things behind apparently rapid and noisy changes; thus, learning invariances has become one of the key problems of machine learning. Slow-feature analysis (SFA) is one such method. However, slowness in the original SFA, which is used as the learning criterion, is defined under a linear temporal dependency assumption. To overcome this limitation, a new learning principle is introduced in this paper that defines a notion of slowness suitable for nonlinear dynamic systems from an information perspective. This new principle is called EVOLVE·INFOMAX, as it seeks to maximize the information preserved about system states during dynamic evolution while constraining each feature to have the same uncertainty and the features to be quasi-independent. The theoretical properties of this new principle are rigorously justified, characterizing the model behavior, the optimality of the induced estimator, and the relationship with maximum likelihood estimation. The equivalence to the original definition of SFA is also analyzed, and the existence of a solution is shown. Two case studies show the potential capabilities and flexibility of the proposed method in both slow-feature extraction and downstream tasks such as process monitoring.
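The original SFA slowness criterion that the paper generalizes can be sketched in its classical linear form: whiten the data, then find the unit-variance projection whose time differences have minimal variance. The two-channel toy signals below are synthetic.

```python
import numpy as np

# Classical linear SFA on a toy mixture of a slow and a fast source: the
# slowest extracted feature should recover the slow source (up to sign).
t = np.linspace(0, 2 * np.pi, 2000)
slow = np.sin(t)                         # slowly varying latent source
fast = np.sin(29 * t)                    # rapidly varying latent source
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
X -= X.mean(0)

# step 1: whiten the observations
C = X.T @ X / len(X)
d, E = np.linalg.eigh(C)
Z = X @ (E / np.sqrt(d))                 # whitened data, unit covariance

# step 2: diagonalize the covariance of the whitened time differences;
# the eigenvector with the smallest eigenvalue is the slowest direction
dZ = np.diff(Z, axis=0)
Cd = dZ.T @ dZ / len(dZ)
dd, P = np.linalg.eigh(Cd)               # eigenvalues in ascending order
sfa_feature = Z @ P[:, 0]                # slowest feature

corr = np.corrcoef(sfa_feature, slow)[0, 1]
print(abs(corr))
```

The slowest feature is almost perfectly correlated with the slow source, illustrating the linear temporal criterion that EVOLVE·INFOMAX replaces with an information-theoretic one for nonlinear dynamics.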