The sintering process, as a primary operation in blast furnace ironmaking, has enormous economic value and environmental significance for iron and steel enterprises. Recently, with the emergence of artificial intelligence and big data, data‐driven modelling methods for the sintering process have increasingly received researchers' attention. However, there is still no systematic review of data‐driven modelling approaches for the sintering process. Therefore, in this article, we present a comprehensive overview of, and outlook on, data‐driven models for intelligent sintering. First, the mechanism and characteristics of the sintering process are introduced and analyzed in detail. Second, the research status of the sintering process is illustrated from four aspects: prediction of key parameters, control, optimization, and others. Finally, several challenges and promising modelling methods for the sintering process, such as deep learning, are outlined and discussed for future research.
Green ammonia is a candidate fuel to decarbonise shipping and other industries. However, ammonia is less reactive than conventional fuels and is therefore difficult to burn. To resolve this issue, thermo-catalytic cracking of ammonia using waste heat is often employed to produce NH3/H2/N2 blends as fuel. However, on-site operational variations in this process can become sources of uncertainty in the fuel composition, causing randomness in the flame's physicochemical properties and challenging flame stability. In the present work, a surrogate model is built using the polynomial chaos expansion (PCE) method to investigate the impact of fuel composition variability on combustion characteristics at different operating conditions. The impacts of a 1.5% deviation in the fuel composition on the flame properties for different initial pressures (Pi) and unburnt fuel temperatures (Tu) are investigated for a wide range of equivalence ratios covering lean and rich mixtures. The uncertainty effects, quantified by the coefficient of variation (COV), fluctuate for equivalence ratios greater than 1.1, while no fluctuation in COV is observed for near-stoichiometric combustion conditions. It is shown that H2 variation in the fuel blend has the strongest effect (over 80%) on the uncertainty of all investigated physicochemical properties of the flame. The least affected property is the adiabatic flame temperature, with variations of about 2.5% in richer fuel conditions. The results further show that preheating the reactants can significantly reduce the COV of the laminar flame speed. The consequences of these uncertainties for different combustion technologies are then discussed, and it is argued that moderate and intense low oxygen dilution (MILD) and colourless distributed combustion (CDC) technologies may remain resilient.
•Fuel-rich mixtures are more affected by small variations in blend composition.
•Uncertainties in the blend composition can significantly increase NO formation.
•Hydrogen content of the blend is most influential upon combustion uncertainties.
•Adiabatic flame temperature is least affected by the uncertainties in blend composition.
•MILD and CDC technologies are expected to be resilient to the effects of uncertainties.
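As a minimal illustration of the PCE workflow described above (not the paper's chemistry calculations), the sketch below projects a hypothetical flame-property response of a standard-normal composition perturbation onto probabilists' Hermite polynomials and recovers the mean, variance, and COV directly from the expansion coefficients. The quadratic response function and all numbers are invented for the example.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

# Toy "flame property" as a function of a normalized fuel-composition
# perturbation xi ~ N(0, 1); a hypothetical stand-in for a chemistry solver.
def flame_property(xi):
    return 2200.0 + 15.0 * xi + 4.0 * xi**2   # e.g. a flame temperature [K]

order = 4
pts, wts = hermegauss(order + 4)              # Gauss-Hermite quadrature nodes
wts = wts / np.sqrt(2.0 * np.pi)              # normalize to the N(0,1) measure

# Galerkin projection onto He_k: c_k = E[f(xi) He_k(xi)] / k!
coeffs = np.array([
    np.sum(wts * flame_property(pts) * hermeval(pts, np.eye(order + 1)[k]))
    / factorial(k)
    for k in range(order + 1)
])

# Mean and variance follow directly from the PC coefficients
mean = coeffs[0]
var = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
cov = np.sqrt(var) / mean                     # coefficient of variation
print(f"mean = {mean:.1f}, COV = {cov:.4f}")
```

Because the response is polynomial, the quadrature-based projection is exact here; for a real flame calculation the same machinery is applied to solver evaluations at the quadrature nodes.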
Data‐driven soft sensing approaches have been a hot research field for decades and are increasingly used in industrial processes due to their advantages of easy implementation and high efficiency. However, nonlinear and time‐varying problems widely exist in practical industrial processes. Just‐in‐time learning (JITL) was proposed to solve these problems and has attracted great attention in practical applications. To present a comprehensive review of JITL‐based soft sensor studies and provide detailed technical guidance for new researchers, this paper introduces recent research on JITL‐based soft sensor modelling methods in industrial processes from three aspects: the similarity criterion, the sample subset, and the local model, which together cover the whole process of establishing a JITL‐based soft sensor. Moreover, future research and innovation directions for JITL‐based soft sensors in industrial processes are also discussed.
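The three ingredients named above (similarity criterion, sample subset, local model) can be sketched in a few lines. The Euclidean distance, the neighbourhood size, and the toy process below are illustrative choices, not those of any particular study.

```python
import numpy as np

def jitl_predict(X_hist, y_hist, x_query, k=20):
    """Just-in-time learning: rank historical samples by a similarity
    criterion, fit a local model on the selected subset, predict for the
    query, then discard the model (it is rebuilt for every new query)."""
    d = np.linalg.norm(X_hist - x_query, axis=1)     # similarity criterion
    idx = np.argsort(d)[:k]                          # local sample subset
    Xk = np.hstack([X_hist[idx], np.ones((k, 1))])   # local model: linear + bias
    theta, *_ = np.linalg.lstsq(Xk, y_hist[idx], rcond=None)
    return np.append(x_query, 1.0) @ theta

# Toy demonstration on a nonlinear, noise-free process
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
pred = jitl_predict(X, y, np.array([0.5, 1.0]))
print(pred)
```

The appeal for time‐varying processes is that the "model" is only ever the most recent, most similar data; adapting to drift amounts to updating the historical database.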
•A conceptual and accessible tutorial introduction to ML for biochemical engineers.
•Exploration of field-specific use cases of ML algorithms over the past 30 years.
•Identification of common approaches and challenges to application.
•Insight into the nature of the core challenges driving application research.
•Introduction to advanced and novel techniques that may be translated in the future.
The field of machine learning comprises techniques that have proven to be powerful approaches to knowledge discovery and to the construction of ‘digital twins’ in the high-dimensional, nonlinear and stochastic domains common to biochemical engineering. We review the use of machine learning within biochemical engineering over the last 20 years. The most prevalent machine learning methods are demystified, and their impact across individual biochemical engineering subfields is outlined. In doing so, we provide insight into the true benefits of each technique and the obstacles to their wider deployment. Finally, core challenges in the application of machine learning to biochemical engineering are thoroughly discussed, and further insight is provided into the adoption of innovative hybrid modelling and transfer learning strategies for the development of new digital biotechnologies.
•A realistic method for DSM was developed for a district heating network.
•The method was applied in a real DHN in Denmark.
•Sensitivity of the optimal solution to energy, comfort and pumping costs is analyzed.
•Results show up to 11% energy cost saving.
This paper proposes a realistic demand side management mechanism for an urban district heating network (DHN) to improve system efficiency and manage congestion issues. Comprehensive models, including the circulating pump, the distribution network, and the building space heating (SH) and domestic hot water (DHW) demand, were employed to support day-ahead hourly energy schedule optimization for district heating substations. Flexibility in both SH and DHW was fully exploited, and the impacts of both weekly patterns and building type were modelled and identified in detail. The energy consumption scheduling problem was formulated for both the individual substations and the district heating operator. Three main features were considered in the formulation: user comfort, the heat market, and network congestion. A case study was performed on a representative urban DHN with a 3.5 MW peak thermal load including both residential and commercial buildings. Results show up to an 11% reduction in energy costs. A sensitivity analysis was conducted which provides decision makers with insight into how sensitive the optimal solution is to changes in energy, user comfort or pumping costs.
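The paper's day-ahead scheduling is a full optimization with comfort, market, and congestion terms. As a deliberately simplified toy, the sketch below shifts only a flexible DHW energy block to the cheapest hours under an hourly capacity cap, with invented prices and demands, to show where cost savings of this kind come from; it ignores comfort, pumping, and congestion entirely.

```python
import numpy as np

# Hypothetical day-ahead data: hourly heat price [EUR/MWh]; values are
# illustrative only, not taken from the paper's case study.
price = np.array([20, 18, 17, 17, 19, 25, 40, 55, 50, 45, 40, 38,
                  36, 35, 34, 36, 42, 55, 60, 50, 40, 32, 26, 22.0])
dhw_energy = 6.0          # flexible daily DHW demand to schedule [MWh]
cap = 1.5                 # substation hourly capacity for DHW [MWh]

# Baseline: spread the DHW demand flat across the day
flat = np.full(24, dhw_energy / 24)
base_cost = float(price @ flat)

# Optimized: fill the cheapest hours first, respecting the hourly cap
sched = np.zeros(24)
remaining = dhw_energy
for h in np.argsort(price):
    q = min(cap, remaining)
    sched[h] = q
    remaining -= q
    if remaining <= 0:
        break
opt_cost = float(price @ sched)
print(f"saving: {100 * (1 - opt_cost / base_cost):.1f}%")
```

In the real problem the same load-shifting principle operates within comfort bounds and network capacity constraints, which is why the achievable saving (11% in the paper) is far below this unconstrained toy figure.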
•LSTM models and DRL provide effective data-driven district energy management.
•The proposed approach reduces computational cost compared to forward modelling.
•The coordinated management achieves a 23% peak reduction compared to the baseline RBC.
•The DRL controller is capable of optimizing comfort, cost and peaks at district level.
Demand side management at district scale plays a crucial role in the energy transition process, being an ideal candidate to balance the needs of both users and the grid by managing the volatility of renewable sources and increasing energy flexibility. The presented study aims to explore the benefits of a coordinated approach to the energy management of a cluster of buildings, to optimise electrical demand profiles and provide services to the grid without penalising indoor comfort conditions. The proposed methodology makes use of a fully data-driven control scheme which exploits Long Short-Term Memory (LSTM) Neural Networks and Deep Reinforcement Learning (DRL). A simulation environment is introduced to train a DRL controller to manage the operation of heat pumps and chilled and domestic hot water storage for a cluster of four buildings. LSTM models are trained on a synthetic data set created in EnergyPlus and are integrated into the simulation environment to evaluate the indoor temperature dynamics in each building. The developed DRL controller is tested against a manually optimised Rule Based Controller (RBC). Results show that the DRL algorithm is able to reduce the overall cluster electricity costs while decreasing the peak energy demand by 23% and the Peak to Average Ratio (PAR) by 20%, without penalising indoor temperature control.
To meet the ambitious emission-reduction targets of the Paris Agreement, the energy-efficient transition of the building sector requires building retrofit methodologies as a critical part of any greenhouse-gas (GHG) emissions mitigation plan, since a high proportion of the current global building stock will still be in use in 2050. This paper reviews current retrofit methodologies with a focus on the contrast between data-driven approaches that utilize measured building data acquired through either 1) on-site sensor deployment or 2) pre-aggregated national repositories of building data. Differentiating between 1) bottom-up approaches, which can be divided into white-, grey- and black-box modelling, and 2) top-down approaches, which utilize analytical methods of clustering and regression, this paper presents the state-of-the-art in current building retrofit methodologies; outlines their strengths and weaknesses; briefly highlights the challenges in their implementation; and concludes by identifying a hybrid approach - of lean in-situ measurements supplemented by modelling for verification - as a potential strategy to develop and implement more robust retrofit methodologies for the building stock.
•A state-of-the-art review on data-driven building modelling techniques is presented.
•The models are classified into top-down and bottom-up approaches.
•Comparative discussion on white-, grey- and black-box models is included.
•An outlook on the latest building data collection technologies is also included.
Data-driven prediction and physics-agnostic machine-learning methods have attracted increased interest in recent years, achieving forecast horizons going well beyond those to be expected for chaotic dynamical systems. In a separate strand of research, data assimilation has been successfully used to optimally combine forecast models, and their inherent uncertainty, with incoming noisy observations. The key idea in our work here is to achieve increased forecast capabilities by judiciously combining machine-learning algorithms and data assimilation. We use the physics-agnostic data-driven approach of random feature maps as a forecast model within an ensemble Kalman filter data assimilation procedure. The machine-learning model is learned sequentially by incorporating incoming noisy observations. We show that the obtained forecast model has remarkably good forecast skill while being computationally cheap once trained. Going beyond the task of forecasting, we show that our method can be used to generate reliable ensembles for probabilistic forecasting as well as to learn effective model closures in multi-scale systems.
•Introduces RAFDA for learning dynamical models from noisy observations.
•RAFDA allows for markedly increased forecast capabilities over several Lyapunov times.
•RAFDA lends itself to probabilistic forecasting with reliable ensembles.
•RAFDA is capable of learning closure models from noisy observations.
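A rough sketch of the forecast-model half of this approach: a random feature map with a linear readout, trained on Lorenz-63 data. In the full method the readout is learned sequentially inside an ensemble Kalman filter from noisy observations; here, for brevity, it is fit offline by ridge regression on noise-free data, and all sizes and scales are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def lorenz_step(x, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    """One RK4 step of the Lorenz-63 system (the stand-in 'truth')."""
    def f(v):
        return np.array([sigma * (v[1] - v[0]),
                         v[0] * (rho - v[2]) - v[1],
                         v[0] * v[1] - beta * v[2]])
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    return x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Generate a trajectory (transient discarded) and standardize it
x = np.ones(3)
for _ in range(1000):
    x = lorenz_step(x)
traj = np.empty((5000, 3))
for i in range(5000):
    traj[i] = x
    x = lorenz_step(x)
mu, sd = traj.mean(0), traj.std(0)
s = (traj - mu) / sd

# Random feature map phi(x) = tanh(A x + b) with fixed random A, b
Dr = 300
A = rng.uniform(-1.0, 1.0, size=(Dr, 3))
b = rng.uniform(-0.5, 0.5, size=Dr)
Phi = np.tanh(s[:-1] @ A.T + b)

# Linear readout W by ridge regression: s_{n+1} ~ W phi(s_n)
lam = 1e-6
W = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Dr), Phi.T @ s[1:]).T

one_step_err = np.sqrt(np.mean((Phi @ W.T - s[1:]) ** 2))
print(f"one-step RMSE (standardized units): {one_step_err:.4f}")

# Autonomous rollout: only the cheap surrogate is iterated
z, truth = s[-1].copy(), traj[-1].copy()
for _ in range(50):
    z = W @ np.tanh(A @ z + b)
    truth = lorenz_step(truth)
print("50-step forecast error:", np.linalg.norm(z * sd + mu - truth))
```

Once trained, forecasting is just the matrix-vector recursion in the final loop, which is the "computationally cheap" property the abstract refers to.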
This manuscript addresses the problem of controlling a bioreactor to maximize the production of a desired product while respecting the constraints imposed by the nature of the bio‐process. The approach is demonstrated by first building a data‐driven model and then formulating a model predictive controller (MPC), with the results illustrated on a detailed monoclonal antibody production model (the test bed) created by Sartorius Inc. In particular, a recently developed data‐driven modelling approach, an adaptation of subspace identification techniques, is utilized that enables the incorporation of known physical relationships into the data‐driven model development (constrained subspace model identification), making the data‐driven model process aware. The resultant controller implementation demonstrates a significant improvement in production compared to the existing proportional integral (PI) control strategy used in monoclonal antibody production. Simulation results also demonstrate the superiority of the process‐aware (constrained) subspace MPC over traditional subspace MPC. Finally, the robustness of the controller design is illustrated via the implementation of a model developed using data from a test bed with a different set of parameters, showing the ability of the controller design to maintain good performance under changes such as a different cell line or feed characteristics.
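A deliberately simplified sketch of the identify-then-control pipeline, not the Sartorius test bed or the paper's constrained subspace identification: a linear state-space model is fit from input/state data by ordinary least squares (a one-step stand-in for subspace identification), then used inside a receding-horizon quadratic tracking controller. The plant, horizon, and penalties are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" plant (unknown to the controller): a stable 2-state system
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [0.5]])

# 1) Identify a model from input/state data by least squares
N = 400
U = rng.normal(size=(N, 1))
X = np.zeros((N + 1, 2))
for k in range(N):
    X[k + 1] = A_true @ X[k] + B_true @ U[k] + 0.01 * rng.normal(size=2)
Z = np.hstack([X[:-1], U])                      # regressors [x_k, u_k]
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_id, B_id = Theta[:2].T, Theta[2:].T

# 2) Receding-horizon MPC using the identified model
H, r_pen = 10, 0.01
ref = np.array([1.0, 1.0])                      # reachable setpoint

def mpc_input(x):
    # Prediction matrices: x_{k+i} = A^i x + sum_j A^{i-1-j} B u_j
    Phi = np.vstack([np.linalg.matrix_power(A_id, i + 1) for i in range(H)])
    G = np.zeros((2 * H, H))
    for i in range(H):
        for j in range(i + 1):
            G[2*i:2*i+2, j] = (np.linalg.matrix_power(A_id, i - j) @ B_id).ravel()
    # Quadratic tracking cost with input penalty, solved as least squares
    Als = np.vstack([G, np.sqrt(r_pen) * np.eye(H)])
    bls = np.concatenate([np.tile(ref, H) - Phi @ x, np.zeros(H)])
    u, *_ = np.linalg.lstsq(Als, bls, rcond=None)
    return u[0]                                  # apply only the first input

x = np.zeros(2)
for _ in range(50):
    x = A_true @ x + (B_true * mpc_input(x)).ravel()
print(x)   # settles near the reference
```

The paper's contribution is, roughly, constraining step 1 so the identified model respects known physical relationships; this sketch leaves the identification unconstrained.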
Machine learning (ML) expands traditional data analysis and presents a range of opportunities in ecosystem service (ES) research, offering rapid processing of ‘big data’ and enabling significant advances in data description and predictive modelling. Descriptive ML techniques group data with little or no prior domain-specific assumption; they can generate hypotheses and automatically sort data prior to other analyses. Predictive ML techniques allow for the predictive modelling of highly non-linear systems where causal mechanisms are poorly understood, as is often the case for ES. We conducted a review to explore how ML is used in ES research and to identify and quantify trends in the different ML approaches that are used. We reviewed 308 peer-reviewed publications and identified that ES studies implemented machine learning techniques in data description (64%; n = 308) and predictive modelling (44%), with some papers containing both categories. Classification and Regression Trees were the most popular techniques (60%), but unsupervised learning techniques were also used for descriptive tasks such as clustering to group or split data without prior assumptions (19%). Whilst there are examples of ES publications that apply ML with rigour, many studies do not have robust or repeatable methods. Some studies fail to report model settings (43%) or software used (28%), and many studies do not report carrying out any form of model hyperparameter tuning (67%) or testing of model generalisability (59%). Whilst studies use ML to analyse very large and complex datasets, ES research is generally not taking full advantage of the capacity of ML to model big data (median of 1138 data points; median of 13 variables). There is great further opportunity to utilise ML in ES research, to make better use of big data and to develop detailed modelling of spatial-temporal dynamics that meets stakeholder demands.
•Machine learning (ML) is increasingly being used in ecosystem service research.
•ML is used for describing data and predictive modelling.
•Many ecosystem service (ES) studies lack rigour in how ML is used.
•Capacity to use ML on big ES data has not been fully realised.
•We highlight best practice for ongoing use of machine learning in ES research.
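The two reporting gaps the review highlights (hyperparameter tuning and generalisability testing) take only a few lines to close. The sketch below tunes a ridge penalty by k-fold cross-validation on a training split and reports error on a held-out test set; the synthetic data stand in for real ES observations, and the model is a simple ridge regression rather than the regression trees the review found most common.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "ecosystem service" dataset: response driven by a few predictors
X = rng.normal(size=(200, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.5, 0.0]) + 0.3 * rng.normal(size=200)

# 1) Hold out a test set to assess generalisability
n_test = 50
X_tr, y_tr = X[:-n_test], y[:-n_test]
X_te, y_te = X[-n_test:], y[-n_test:]

def ridge_fit(Xm, ym, alpha):
    # closed-form ridge regression weights
    return np.linalg.solve(Xm.T @ Xm + alpha * np.eye(Xm.shape[1]), Xm.T @ ym)

def cv_error(alpha, folds=5):
    # 2) k-fold cross-validation on the training split only
    idx = np.arange(len(X_tr))
    errs = []
    for f in range(folds):
        val = idx[f::folds]
        trn = np.setdiff1d(idx, val)
        w = ridge_fit(X_tr[trn], y_tr[trn], alpha)
        errs.append(np.mean((X_tr[val] @ w - y_tr[val]) ** 2))
    return np.mean(errs)

# 3) Tune the hyperparameter, refit, and report held-out test error
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
best = min(alphas, key=cv_error)
w = ridge_fit(X_tr, y_tr, best)
test_mse = np.mean((X_te @ w - y_te) ** 2)
print(f"best alpha = {best}, test MSE = {test_mse:.3f}")
```

Reporting `best`, the candidate grid, and `test_mse` alongside software versions would satisfy the repeatability criteria the review checks for.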