Electrical transport properties of saturated porous media, such as soils, rocks and fracture networks, typically composed of a non-conductive solid matrix and a conductive brine in the pore space, have numerous applications in reservoir engineering and petrophysics. One of the most widely used electrical conductivity models is the empirical Archie's law, which has practical application in well-log interpretation of reservoir rocks. The Archie equation does not take into account the contributions of clay minerals, isolated porosity, heterogeneity in grains and pores and their distributions, or anisotropy. In the literature, either modifications were proposed to apply Archie's law to tight and clay-rich reservoirs, or more modern models were developed to describe electrical conductivity in such reservoirs. In the former, a number of empirically derived parameters were proposed, which typically vary from one reservoir to another. In the latter, theoretical improvements that include detailed characteristics of pore-space morphology led to more complex electrical conductivity models. Such models make it possible to address the electrical properties of a wider range of potential reservoir rocks through theoretical parameters related to key reservoir-defining petrophysical properties. This paper presents a review of the electrical conductivity models developed using fractal, percolation and effective medium theories. Key results obtained by comparing empirical and theoretical models with experiments and simulations, as well as the advantages and drawbacks of each model, are analyzed. Approaches to obtaining more realistic electrical conductivity models are discussed. Experiments suggest more complex relationships between electrical conductivity and porosity than empirical models predict, particularly in low-porosity formations.
However, the available theoretical models, combined with simulations, do provide insight into how microscale physics affects macroscale electrical conductivity in porous media.
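Archie's law discussed above has a compact closed form; a minimal sketch follows. The default exponents and the test values are typical textbook choices for sandstones, not parameters taken from any specific model in the review:

```python
def archie_conductivity(sigma_w, phi, m=2.0, a=1.0, S_w=1.0, n=2.0):
    """Effective conductivity of a brine-saturated rock via Archie's law:

        sigma = sigma_w * phi**m * S_w**n / a

    sigma_w : brine conductivity (S/m)
    phi     : porosity (fraction)
    m       : cementation exponent (commonly ~1.8-2.0 for sandstones)
    a       : tortuosity factor
    S_w     : water saturation (fraction)
    n       : saturation exponent
    The formation factor F = a / phi**m relates rock and brine conductivity.
    """
    return sigma_w * phi ** m * S_w ** n / a

# Fully saturated rock, 20% porosity, 5 S/m brine:
sigma = archie_conductivity(sigma_w=5.0, phi=0.2)
print(sigma)  # 5.0 * 0.2**2 = 0.2 S/m
```

The absence of clay-conduction and isolated-porosity terms in this expression is exactly the limitation that motivates the modified and theoretical models the review surveys.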
•Neural network (NN) models in membrane processes for water treatment.
•Limitations of and improvements to the NN models developed over the last two and a half decades.
•Comparison of network topology, training algorithm, and performance.
Freshwater scarcity is a major challenge owing to the growing global population. Brackish water and seawater are the largest sources of water on the planet; therefore, household and industrial demands can be met using desalination and water treatment techniques. Microfiltration (MF), ultrafiltration (UF), nanofiltration (NF), reverse osmosis (RO), membrane bioreactors (MBR), and membrane distillation (MD) are some of the membrane processes used in water and wastewater treatment. Artificial intelligence models, such as artificial neural networks (ANN), have recently become a popular alternative for modeling these processes due to several advantages over conventional models. This paper therefore presents a review of the ANN models developed over the last two and a half decades for the membrane processes used in wastewater treatment and desalination. Moreover, a complete procedure for the development of two types of ANN models is provided. The study also discusses development strategies and compares different sorts of ANN models. These models have been applied to several lab-scale, pilot, and commercial plants for simulation, optimization, and process control. This work may aid the development of new ANN models for membrane processes by considering recent improvements in the field.
•A neural-network model is trained to predict the collector contact efficiency of each pore throat (ηt).
•The influence of the pore structure of porous media is described by pore-network models, where ηt occurs as a distribution.
•Upscaled values of the deposition rate coefficient are provided by PNMs and compared with the prediction of colloid filtration theory.
•The high-velocity region of the flow field neglected in colloid filtration theory leads to the difference in the predicted deposition rate coefficient.
Conventional colloid filtration theory (CFT) uses the single-collector contact efficiency (η) to describe the mass transfer of colloids to a collector surface. However, this approach neglects the full complexity of the pore structure and flow field of real porous media. In this study, the porous medium geometry, flow field, and colloid mass transfer are quantified using a pore-network model (PNM). A database of pore-scale η is established by the finite-element method to train a neural-network model (NNM). The reasonable prediction of η indicates the potential of the developed NNMs as an alternative to correlation equations, which can free users from repeated numerical simulation. In contrast to the prediction of conventional CFT, η in the PNM occurs as a distribution that depends on the geometry parameters of the PNM. The mean value of η increases with the standard deviation of the pore radius and decreases with the curvature number, but its dependence on the coordination number is more complex. Upscaled values of the deposition rate coefficient (kd) corresponding to the distribution of η are calculated from the breakthrough curves produced by the PNMs. The prediction of kd by the PNM is then compared with that by CFT. Results show that kd predicted by the PNM responds more strongly to velocity change, and less strongly to colloid density change, than kd predicted by CFT. A comparison of the flow velocity distributions of the PNM and CFT shows that the high-velocity region of the flow field in the porous media is neglected in CFT, which can lead to insufficient consideration of convection. The results of this work imply that it is necessary to consider the influence of the complex pore structure of porous media on the collection of colloids.
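For context on the upscaling comparison above, classical CFT relates the deposition rate coefficient kd to the single-collector contact efficiency η through a standard relation. A minimal sketch, with parameter values that are illustrative only (not taken from this study):

```python
def cft_deposition_rate(porosity, d_c, alpha, eta, v):
    """Classical CFT deposition rate coefficient:

        k_d = 3 (1 - porosity) / (2 d_c) * alpha * eta * v

    porosity : bed porosity (fraction)
    d_c      : collector (grain) diameter, m
    alpha    : attachment efficiency (fraction of collisions that stick)
    eta      : single-collector contact efficiency
    v        : approach (Darcy) velocity, m/s
    Returns k_d in 1/s.
    """
    return 3.0 * (1.0 - porosity) / (2.0 * d_c) * alpha * eta * v

# Illustrative values: 0.4 porosity, 0.5 mm grains, favorable attachment
k_d = cft_deposition_rate(porosity=0.4, d_c=5e-4, alpha=1.0, eta=0.01, v=1e-5)
print(k_d)  # ~1.8e-4 1/s
```

Because this relation folds the whole pore space into a single grain diameter and a single η, it cannot represent the distribution of η that the PNM produces, which is the gap the study addresses.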
Future products will have a higher degree of intelligence and more complex, changing use environments, so resilience has been introduced into the design and operation of products as a concept that helps them cope with high-impact shocks and the damage they cause. With ongoing work on product resilience, methods are needed that can objectively reflect it. Existing methods for evaluating product resilience are usually based on performance curves, but these have difficulty reflecting the multifactorial and stochastic character of the product resilience process. Therefore, this paper first reviews resilience-related research and analyzes the factors affecting product resilience and their interrelationships layer by layer to construct an index system of product resilience. Then, a Bayesian network model is established based on the results of this analysis, and a corresponding calculation method is proposed. Finally, the proposed method is illustrated with a case study of a complex-terrain drilling rig and its improvement program. The proposed method can be applied to the quantitative evaluation of product resilience, and possible design directions for resilient products can be suggested using this method.
The applied social science literature using factor and network models continues to grow rapidly. Most work reads like an exercise in model fitting, and falls short of theory building and testing in three ways. First, statistical and theoretical models are conflated, leading to invalid inferences such as the existence of psychological constructs based on factor models, or recommendations for clinical interventions based on network models. I demonstrate this inferential gap in a simulation: excellent model fit does little to corroborate a theory, regardless of quality or quantity of data. Second, researchers fail to explicate theories about psychological constructs, but use implicit causal beliefs to guide inferences. These latent theories have led to problematic best practices. Third, explicated theories are often weak theories: imprecise descriptions vulnerable to hidden assumptions and unknowns. Such theories do not offer precise predictions, and it is often unclear whether statistical effects actually corroborate weak theories or not. I demonstrate that these three challenges are common and harmful, and impede theory formation, failure, and reform. Matching theoretical and statistical models is necessary to bring data to bear on theories, and a renewed focus on theoretical psychology and formalizing theories offers a way forward.
•An advanced network model was developed to analyze PM2.5 and O3 transport dynamics.
•Spatiotemporal differences in PM2.5 and O3 transport dynamics were revealed.
•Spillover pathways of PM2.5 and O3 among cities and provinces were identified.
•PM2.5 and O3 zones were delineated using network weights and the GN algorithm.
•The model's accuracy was validated by comparison with the WRF-CAMx simulation.
Air pollution exhibits significant spatial spillover effects, complicating regional governance. This study applied and optimized a statistics-based complex-network method in the atmospheric environment field. The methodology was enhanced through improved edge weighting and threshold calculations, leading to an advanced pollutant transport network model. This model integrates pollution, meteorological, and geographical data, thereby comprehensively revealing the dynamic characteristics of PM2.5 and O3 transport among cities in China. Research findings indicated that, throughout the year, the O3 transport network surpassed the PM2.5 network in edge count, average degree, and average weighted degree, showing a higher network density, broader city connections, and greater transmission strength. During the warm period in particular, these characteristics of the O3 network were more pronounced, indicating significant transport potential. Furthermore, the model successfully identified key influential cities in different periods; it also provided detailed descriptions of the interprovincial spillover flux and pathways of PM2.5 and O3 across various time scales. It pinpointed major pollution spillover and receiving provinces, with primary spillover pathways concentrated in crucial areas such as the Beijing-Tianjin-Hebei (BTH) region and its surroundings, the Yangtze River Delta, and the Fen-Wei Plain. Building on this, the model divided the O3, PM2.5, and synergistic pollution transmission regions in China into 6, 7, and 8 zones, respectively, based on network weights and the Girvan-Newman (GN) algorithm. This division offers novel perspectives and strategies for regional joint prevention and control. The validity of the model was further corroborated by source-analysis results from the WRF-CAMx model in the BTH area.
Overall, this research provides valuable insights for local and regional atmospheric pollution control strategies. Additionally, it offers a robust analytical tool for research in the field of atmospheric pollution.
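The zone division above relies on the Girvan-Newman algorithm, which repeatedly removes the edge with the highest betweenness until the network splits into the desired number of communities. A minimal unweighted sketch using only the standard library (the paper's model additionally uses edge weights, which this illustration omits):

```python
from collections import defaultdict, deque

def edge_betweenness(adj):
    """Edge betweenness on an unweighted, undirected graph
    (Brandes-style shortest-path counting from every source node)."""
    bet = defaultdict(float)
    for s in adj:
        dist, order = {s: 0}, []
        sigma = defaultdict(float); sigma[s] = 1.0   # shortest-path counts
        preds = defaultdict(list)
        q = deque([s])
        while q:                                     # BFS from s
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)                   # dependency accumulation
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                bet[frozenset((v, w))] += c
                delta[v] += c
    return {e: b / 2.0 for e, b in bet.items()}      # each pair counted twice

def components(adj):
    """Connected components of the adjacency map."""
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = set(), deque([s])
        while q:
            v = q.popleft()
            if v in comp:
                continue
            comp.add(v); q.extend(adj[v])
        seen |= comp; comps.append(comp)
    return comps

def girvan_newman(adj, n_zones):
    """Remove highest-betweenness edges until the graph splits into n_zones parts."""
    adj = {v: set(ws) for v, ws in adj.items()}      # work on a copy
    while len(components(adj)) < n_zones:
        bet = edge_betweenness(adj)
        u, w = max(bet, key=bet.get)
        adj[u].discard(w); adj[w].discard(u)
    return components(adj)

# Two tightly knit clusters joined by one bridge edge (2-3):
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
zones = girvan_newman(g, 2)
print(zones)  # the bridge is cut first, yielding {0,1,2} and {3,4,5}
```

Applied to a city-to-city transport network, the resulting components correspond to the joint prevention-and-control zones the study derives.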
The water retention curve and relative permeability are critical for predicting gas and water production from hydrate-bearing sediments. However, values for key parameters that characterize gas and water flows during hydrate dissociation have not been identified owing to experimental challenges. This study combines micro-focus X-ray computed tomography (CT) and pore-network model simulation to identify proper values for those key parameters, such as gas entry pressure, residual water saturation, and curve-fitting values. Hydrates with various saturations and morphologies are realized in the pore network extracted from micron-resolution CT images of sediments recovered from the hydrate deposit at the Mallik site, and the processes of gas invasion, hydrate dissociation, gas expansion, and gas and water permeability are then simulated. Results show that greater hydrate saturation in sediments leads to higher gas entry pressure, higher residual water saturation, and a steeper water retention curve. An increase in hydrate saturation decreases gas permeability but has marginal effects on water permeability in sediments with uniformly distributed hydrate. Hydrate morphology has more significant impacts than hydrate saturation on relative permeability. Sediments with heterogeneously distributed hydrate tend to exhibit lower residual water saturation and higher gas and water permeability. In this sense, the Brooks-Corey model, which uses two fitting parameters individually for gas and water permeability, properly captures the effect of hydrate saturation and morphology on gas and water flows in hydrate-bearing sediments.
Key Points:
Water retention curve becomes steeper with increasing hydrate saturation
Relative water and gas permeability is affected by hydrate saturation and morphology
Brooks‐Corey model predicts permeability very well for heterogeneously distributed hydrate system
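The Brooks-Corey relations referenced in the key points have standard closed forms; a minimal sketch follows. The two-exponent relative-permeability form mirrors the abstract's description of separate fitting parameters for gas and water; the numerical values below are illustrative only, not the fitted values from this study:

```python
def effective_saturation(S_w, S_wr, S_gr=0.0):
    """Normalized (effective) water saturation between residual endpoints."""
    return (S_w - S_wr) / (1.0 - S_wr - S_gr)

def brooks_corey_pc(S_w, p_entry, lam, S_wr):
    """Water retention (capillary pressure) curve:

        p_c = p_entry * Se**(-1/lambda)

    p_entry : gas entry pressure; lam : pore-size distribution index.
    Higher hydrate saturation raises p_entry and steepens the curve.
    """
    Se = effective_saturation(S_w, S_wr)
    return p_entry * Se ** (-1.0 / lam)

def brooks_corey_kr(S_w, S_wr, n_w=4.0, n_g=2.0):
    """Relative permeability with separate fitting exponents:

        k_rw = Se**n_w,   k_rg = (1 - Se)**n_g
    """
    Se = effective_saturation(S_w, S_wr)
    return Se ** n_w, (1.0 - Se) ** n_g

# Illustrative point: S_w = 0.6 with residual water saturation 0.2 -> Se = 0.5
krw, krg = brooks_corey_kr(0.6, 0.2)
print(krw, krg)  # 0.5**4 = 0.0625 and 0.5**2 = 0.25
```

Fitting n_w and n_g independently is what lets the model track the differing responses of gas and water permeability to hydrate saturation and morphology.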
This research proposes a novel transfer function based on the hyperbolic tangent and the Khalil conformable exponential function. The non-integer-order transfer function offers a suitable neural-network configuration because of its ability to adapt. Consequently, this function was introduced into neural network models for three experimental cases: estimating the annular Nusselt number correlation for a helical double-pipe evaporator, the volumetric mass transfer coefficient in an electrochemical reaction, and the thermal efficiency of a solar parabolic trough collector. The new transfer function parameters were found during the training step of the neural networks; the weights and biases therefore depend on them. We assessed the models for the three cases using the determination coefficient, the adjusted determination coefficient, and the slope-intercept test. In addition, the MSE for the training set and for the whole database was computed to show that there is no overfitting problem. The best-assessed models showed agreement of 99%, 97%, and 95% with the experimental data for the first, second, and third cases, respectively. This proposal made it feasible to reduce the number of neurons in the hidden layer. We thus present a neural network with a conformable transfer function (ANN-CTF) that learns well enough with less information available from the experimental database during training.
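The abstract does not give the exact functional form of the conformable transfer function, so the sketch below is one plausible construction, not the authors' definition: it replaces the exponentials inside tanh with the Khalil conformable exponential E_alpha(x) = exp(x**alpha / alpha), extended to negative arguments by odd symmetry (both the extension and the composition are assumptions made for illustration):

```python
import math

def conformable_exp(x, alpha):
    """Khalil conformable exponential E_alpha(x) = exp(x**alpha / alpha),
    extended to x < 0 by odd symmetry (illustrative assumption)."""
    s = math.copysign(1.0, x)
    return math.exp(s * abs(x) ** alpha / alpha)

def conformable_tanh(x, alpha=1.0):
    """A tanh-like transfer function built from the conformable exponential;
    alpha = 1 recovers the ordinary hyperbolic tangent."""
    ep, en = conformable_exp(x, alpha), conformable_exp(-x, alpha)
    return (ep - en) / (ep + en)

# alpha = 1 reproduces plain tanh; other alpha values reshape the activation,
# which is the extra degree of freedom learned during training.
print(conformable_tanh(0.5, 1.0), math.tanh(0.5))
```

In the paper, the fractional order is trained alongside the weights and biases, which is how the added adaptability reduces the number of hidden neurons needed.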
•Mathematical and computational models are used to predict cases of COVID-19 in Mexico.
•The data are obtained from the Daily Technical Report issued by the Mexican Ministry of Health.
•Gompertz, Logistic, and Artificial Neural Network models fit the confirmed COVID-19 cases with an R2 > 0.999.
•The Logistic, Gompertz, and inverse Artificial Neural Network models predict the maximum number of new daily cases on May 8th, June 25th, and May 12th, 2020, respectively.
•The Gompertz, Logistic, and inverse Artificial Neural Network models predict different numbers of COVID-19 cases at the end of the epidemic.
This work presents the modeling and prediction of cases of COVID-19 infection in Mexico through mathematical and computational models, using only the confirmed cases provided by the daily technical report COVID-19 MEXICO up to May 8th. The mathematical models, Gompertz and Logistic, as well as the computational model, an Artificial Neural Network, were applied to model the number of cases of COVID-19 infection from February 27th to May 8th. The results show a good fit between the observed data and those obtained by the Gompertz, Logistic, and Artificial Neural Network models, with R2 values of 0.9998, 0.9996, and 0.9999, respectively. The same mathematical models and an inverse Artificial Neural Network were applied to predict the number of cases of COVID-19 infection from May 9th to 16th in order to analyze tendencies and extrapolate the projection to the end of the epidemic. The Gompertz model predicts a total of 47,576 cases, the Logistic model a total of 42,131 cases, and the inverse Artificial Neural Network model a total of 44,245 cases as of May 16th. Finally, to predict the total number of COVID-19 infections at the end of the epidemic, the Gompertz, Logistic, and inverse Artificial Neural Network models were used, predicting 469,917, 59,470, and 70,714 cases, respectively.
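The two growth models used above have standard closed forms, which explains why the end-of-epidemic projections diverge: as t grows, both curves approach their asymptote A, so the predicted total is simply the fitted A, and different fits give different totals. A minimal sketch (parameter values below are illustrative, not the fitted values from the study):

```python
import math

def gompertz(t, A, b, c):
    """Cumulative cases N(t) = A * exp(-b * exp(-c t)).
    A is the asymptotic (final) epidemic size; b, c shape the growth."""
    return A * math.exp(-b * math.exp(-c * t))

def logistic(t, A, k, t0):
    """Cumulative cases N(t) = A / (1 + exp(-k (t - t0))).
    A is the asymptote; t0 is the inflection day (peak of new daily cases)."""
    return A / (1.0 + math.exp(-k * (t - t0)))

# Illustrative parameters: asymptote 100, inflection at day 30
print(logistic(30, 100, 0.2, 30))   # exactly A/2 at the inflection point
print(gompertz(200, 100, 5, 0.1))   # far past the inflection, close to A
```

In practice A, b, c, k, and t0 would be fitted to the reported cumulative case counts by nonlinear least squares, and the maximum of new daily cases falls at each curve's inflection point.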
Accurate forecasting of energy prices has always played an important role in a country's energy security and in the environmental impacts of policies. This paper proposes a novel decomposition-ensemble model to predict energy prices, which fluctuate wildly. The procedure consists of the following steps: (1) The original energy prices are decomposed into sublayers with different frequencies by variational mode decomposition (VMD). (2) An autoregression model (AR) predicts the first low-frequency component and an Elman neural network (ELMAN) forecasts the last high-frequency component, while the improved bidirectional long short-term memory (IBiLSTM, an attention-based convolutional neural network combined with bidirectional long short-term memory) predicts the other sublayers. (3) The predictions of the sublayers from the different models are reconstructed into the final result with a non-linear integration approach. Combining econometric and artificial-intelligence methods according to the asymmetric features of the sublayers makes the forecasts more accurate. The novel model outperforms other related comparative models under different training-set lengths. Experiments on two cases of energy prices, natural gas and carbon futures prices, demonstrate the validity and reliability of the proposed AR-IBiLSTM-ELMAN model with VMD. The advanced model simultaneously exploits the unique advantages of each component model, providing an effective forecasting tool for governments and enterprises.
•Adopt a novel hybrid model with decomposition for the energy price prediction.
•Combine the traditional and neural network forecasting models comprehensively.
•Decompose series through variational mode decomposition with genetic algorithm.
•Use the nonlinear integration approach to improve the prediction performance.
•Show superiority in forecasting accuracy and robustness over compared models.