A sound evaluation of the cadmium (Cd) mass balance in agricultural soils needs accurate data on Cd leaching. Reported Cd concentrations from in situ studies are often one order of magnitude lower than predicted by empirical models, which were calibrated to pore water data from stored soils. It is hypothesized that this discrepancy is related to the preferential flow of water (non-equilibrium) and/or artefacts caused by drying and rewetting soils prior to pore water analysis. These hypotheses were tested on multiple soils (n = 27) with contrasting properties. Pore waters were collected by soil centrifugation from field fresh soil samples and again after incubating the same soils (28 days, 20 °C) following two drying-rewetting cycles, the idea being that chemical equilibrium in the soil is reached after incubation. Incubation increased pore water Cd by a factor of 4 on average, and by up to a factor of 16. That increase was statistically related to the decrease of pore water pH and the increase of nitrate, both mainly caused by incubation-induced nitrification. After correcting for both factors, the Cd rise was also highest at higher pore water Ca. This suggests that higher Ca in soil enlarges Cd concentration gradients among pore classes in field fresh soils, because high Ca promotes soil aggregation and the separation of mobile from immobile water. Several empirical models were used to predict pore water Cd. Predictions exceeded observations by up to a factor of 30 for the fresh pore waters but matched well with those of incubated soils; again, deviations from the 1:1 line in field fresh soils were largest in high-Ca (>0.8 mM) soils, suggesting that local equilibrium conditions are not reached in field fresh soils at higher Ca. Our results demonstrate that empirical models need recalibration with field fresh pore water data to support accurate soil Cd mass balances in risk assessments.
• Soil incubation after drying and rewetting alters pore water Cd.
• Incubation promotes chemical equilibrium and nitrification.
• Low pore water Ca is concurrent with high Cd in pore waters from field fresh soils.
• Empirical models based on incubated soils overestimate Cd in solution if equilibrium is unlikely.
Accurate bathymetric data are essential for studies of marine and coastal ecosystems. In past decades, many studies have investigated how to obtain bathymetric data in shallow waters using satellite remote sensing. Satellite multispectral imagery has been widely used to estimate shallow water depths with empirical models and physics-based models. However, in-situ water depth information is essential (as a priori data) to apply an empirical model in a specific area, which limits its application, especially for remote reefs. In this study, bathymetric maps in shallow waters were produced with empirical models using only satellite remotely sensed data (i.e., new ICESat-2 bathymetric points and Sentinel-2 multispectral imagery). Bathymetric points from the spaceborne ICESat-2 lidar were used in place of in-situ auxiliary bathymetric points to train classical empirical models (i.e., the linear model and the band ratio model). The bathymetric points were first extracted from noisy ICESat-2 raw data photons by an improved point cloud processing algorithm, and then corrected for bathymetric errors (caused by the refraction effect in the water column, the refraction effect on the water surface, and the fluctuation of the water surface). With the trained empirical models and four-date Sentinel-2 multispectral images, bathymetric maps were produced for Yongle Atoll in the South China Sea and for the lagoon near Acklins Island and Long Cay, southeast of the Bahamas. The bathymetry performance (including the accuracy and the consistency of multi-date data) was evaluated against in-situ measurements. The results indicate good bathymetric accuracy: the RMSE is lower than or close to 10% of the maximum depth for the two models with four-date images in both study areas. The consistency of multi-date data is also good, with a mean R2 of 0.97.
The main novelties of this study are that accurate bathymetric points can be obtained from ICESat-2 raw data using the proposed signal processing and error correction method, and that, with these ICESat-2 bathymetric points, empirical models applied to satellite multispectral imagery are no longer limited by local a priori measurements, which were essential in previous studies. Hence, in the future, with the help of free and open-access satellite data (i.e., ICESat-2 data and Sentinel-2 imagery), this approach can be extended to a larger scale to obtain bathymetric maps in the shallow waters of coastal areas, the surroundings of islands and reefs, and inland waters.
• Estimating bathymetric topography with only satellite remotely sensed data.
• Using new ICESat-2 lidar points and Sentinel-2 multispectral imagery.
• Proposing a signal detection and bathymetric error correction method for ICESat-2.
• Training empirical models with ICESat-2 bathymetric points to estimate water depths.
• Mapping and validating bathymetry in two study areas with multi-date datasets.
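The band ratio model referred to above can be trained directly on ICESat-2-derived depths in place of in-situ soundings. Below is a minimal sketch assuming the common Stumpf-style log-ratio formulation; the function names, reflectances, and coefficient values are illustrative, not the study's:

```python
import numpy as np

def band_ratio_feature(r_blue, r_green, n=1000.0):
    """Stumpf-style band ratio: ln(n * R_blue) / ln(n * R_green)."""
    return np.log(n * r_blue) / np.log(n * r_green)

def fit_band_ratio_model(r_blue, r_green, depth):
    """Fit depth = m1 * ratio + m0 by ordinary least squares, with the
    training depths coming from ICESat-2 instead of in-situ surveys."""
    x = band_ratio_feature(r_blue, r_green)
    A = np.column_stack([x, np.ones_like(x)])
    (m1, m0), *_ = np.linalg.lstsq(A, depth, rcond=None)
    return m1, m0

def predict_depth(r_blue, r_green, m1, m0):
    return m1 * band_ratio_feature(r_blue, r_green) + m0

# Synthetic illustration (reflectances and depths are made up):
rng = np.random.default_rng(0)
r_green = rng.uniform(0.02, 0.10, 200)
r_blue = r_green * rng.uniform(0.9, 1.3, 200)
depth = -25.0 * band_ratio_feature(r_blue, r_green) + 30.0
m1, m0 = fit_band_ratio_model(r_blue, r_green, depth)
```

In practice the fitted coefficients would then be applied pixel-wise to the Sentinel-2 scene via `predict_depth`.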
A new method, combining empirical modeling with time series Interferometric Synthetic Aperture Radar (InSAR) data, is proposed to assess potential landslide volume and area. The method was developed to evaluate potential landslides in the Heitai river terrace of the Yellow River in central Gansu Province, China. The elevated terrace has a substantial loess cover, and along the terrace edges many landslides have been triggered by gradually rising groundwater levels following continuous irrigation since 1968. These landslides can have significant impacts on communities, affecting lives and livelihoods. Developing effective landslide risk management requires a better understanding of potential landslide magnitude. Fifty mapped landslides were used to construct an empirical power-law relationship linking landslide area (A_L) to volume (V_L): V_L = 0.333 × A_L^1.399. InSAR-derived ground displacement ranges from −64 mm/y to 24 mm/y along the line of sight (LOS). Further interpretation of displacement patterns based on remote sensing (InSAR and optical imagery) and field survey enabled the identification of an additional 54 potential landslides (1.9 × 10^2 m^2 ≤ A_L ≤ 8.1 × 10^4 m^2). In turn, this enabled construction of a map showing the magnitude of potential landslide activity. This research provides significant further scientific insight to inform landslide hazard and risk management in a context of ongoing landscape evolution. It also provides further evidence that this methodology can be used to quantify the magnitude of potential landslides and thus contribute essential information towards landslide risk management.
• A new approach combining time-series InSAR with an empirical model is proposed.
• The volume and area of potential landslides are forecast.
• The approach is validated against recent landslides.
• The approach contributes essential information to landslide risk assessment.
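The area–volume power law above can be applied directly to a mapped landslide polygon. A minimal sketch using the study's fitted coefficients and the reported area range (the helper name is ours):

```python
import numpy as np

def landslide_volume(area_m2, alpha=0.333, gamma=1.399):
    """Empirical power law from the study: V_L = alpha * A_L**gamma
    (A_L in m^2, V_L in m^3)."""
    return alpha * np.power(area_m2, gamma)

# Volumes implied for the smallest and largest identified potential landslides:
v_small = landslide_volume(1.9e2)   # A_L = 1.9 x 10^2 m^2
v_large = landslide_volume(8.1e4)   # A_L = 8.1 x 10^4 m^2
```

Because the exponent exceeds 1, volume grows faster than area, so the largest mapped features dominate the total potential volume.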
Freight train air brake models
Wu, Qing; Cole, Colin; Spiryagin, Maksym
International Journal of Rail Transportation (Online), 01/2023, Volume 11, Issue 1
Journal Article; Peer reviewed; Open access
This paper is an outcome of an international collaborative research initiative. Researchers from 24 institutions across 12 countries were invited to discuss the state of the art in railway train air brake modelling, with an emphasis on freight trains. The models discussed are classified as empirical, fluid dynamics, and fluid-empirical dynamics models. Empirical models are widely used, and advanced versions have been used for train dynamics simulations. Fluid dynamics models are better suited to studying brake system behaviour but are more complex and slower in computation. Fluid-empirical dynamics models combine fluid dynamics brake pipe models with empirical brake valve models; they balance model fidelity and computational speed. Depending on research objectives, detailed models of brake rigging, friction blocks, and wheel-rail adhesion are also available. To spark new ideas and more research in this field, the challenges and research gaps in air brake modelling are discussed.
The protagonists of the last great phase transition of the universe, cosmic reionization, remain elusive. Faint star-forming galaxies are leading candidates because they are found to be numerous and may have significant ionizing photon escape fractions (f_esc). Here we update this picture via an empirical model that successfully predicts the latest observations (e.g., the rapid drop in star-formation density at z > 8). We generate an ionizing spectrum for each galaxy in our model and constrain f_esc by leveraging the latest measurements of the reionization timeline (e.g., Lyα damping of quasars and galaxies at z > 7). Assuming a constant f_esc across all sources at z > 6, we derive the escape fraction that galaxies brighter than M_UV = −13.5 need to complete reionization. The inferred intergalactic medium neutral fraction is 0.9, 0.5, and 0.1 at successively later epochs; that is, the bulk of reionization transpires rapidly, in 300 Myr, driven by the z > 8 SFR and favored by the high neutral fractions (∼60%-90%) measured at z ∼ 7-8. Inspired by the emergent sample of Lyman Continuum (LyC) leakers spanning z ∼ 0-6.6, which overwhelmingly displays higher-than-average star-formation surface density (Σ_SFR), we propose a physically motivated model relating f_esc to Σ_SFR. Since Σ_SFR falls by ∼2.5 dex between z = 8 and z = 0, our model explains the humble upper limits on f_esc at lower redshifts and its required evolution to ∼0.2 at z > 6. Within this model, strikingly, <5% of galaxies with M_UV < −18 and log(M*/M⊙) > 8 (the "oligarchs") account for 80% of the reionization budget, a stark departure from the canonical "democratic" reionization led by copious faint sources. In fact, faint sources (M_UV > −16) must be relegated to a limited role in order to ensure high neutral fractions at z = 7-8. Shallow faint-end slopes of the UV luminosity function (α > −2) and/or Σ_SFR distributions skewed toward massive galaxies produce the required late and rapid reionization.
We predict that LyC leakers like COLA1 (z = 6.6, f_esc ∼ 30%, M_UV = −21.5) will become increasingly common toward z ∼ 6 and that the drivers of reionization do not lie hidden at the faint end of the luminosity function but are already known to us.
Abstract
We present a new flexible Bayesian framework for directly inferring the fraction of neutral hydrogen in the intergalactic medium (IGM) during the Epoch of Reionization (EoR, z ∼ 6–10) from detections and non-detections of Lyman Alpha (Lyα) emission from Lyman Break galaxies (LBGs). Our framework combines sophisticated reionization simulations with empirical models of the interstellar medium (ISM) radiative transfer effects on Lyα. We assert that the Lyα line profile emerging from the ISM has an important impact on the resulting transmission of photons through the IGM, and that these line profiles depend on galaxy properties. We model this effect by considering the peak velocity offset of Lyα lines from host galaxies' systemic redshifts, which are empirically correlated with UV luminosity and redshift (or halo mass at fixed redshift). We use our framework on the sample of LBGs presented in Pentericci et al. and infer a global neutral fraction at z ∼ 7 consistent with other robust probes of the EoR, confirming that reionization is ongoing ∼700 Myr after the Big Bang. We show that using the full distribution of Lyα equivalent width detections and upper limits from LBGs places tighter constraints on the evolving IGM than the standard Lyα emitter fraction, and that larger samples are within reach of deep spectroscopic surveys of gravitationally lensed fields and James Webb Space Telescope NIRSpec.
A population of binary black hole mergers has now been observed in gravitational waves by Advanced LIGO and Virgo. The masses of these black holes appear to show evidence for a pileup between 30 and 45 M⊙ and a cutoff above ∼45 M⊙. One possible explanation for such a pileup and subsequent cutoff is pulsational pair-instability supernovae (PPISNe) and pair-instability supernovae (PISNe) in massive stars. We investigate the plausibility of this explanation in the context of isolated massive binaries. We study a population of massive binaries using the rapid population synthesis software COMPAS, incorporating models for PPISNe and PISNe. Our models predict a maximum black hole mass of 40 M⊙. We expect ∼10% of all binary black hole mergers at redshift z = 0 to include at least one component that went through a PPISN (with mass 30-40 M⊙), constituting ∼20%-50% of binary black hole mergers observed during the first two observing runs of Advanced LIGO and Virgo. Empirical models based on fitting the gravitational-wave mass measurements to a combination of a power law and a Gaussian find a fraction too large to be associated with PPISNe in our models. The rates of PPISNe and PISNe track the low-metallicity star formation rate, increasing out to redshift z = 2. These predictions may be tested both with future gravitational-wave observations and with observations of superluminous supernovae.
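The "power law plus Gaussian" empirical mass model mentioned above can be sketched as a simple mixture density: a bounded power law for the bulk of the population plus a Gaussian "pileup" component. All parameter values below are illustrative placeholders, not fitted results:

```python
import numpy as np

def powerlaw_pdf(m, alpha, m_min, m_max):
    """Normalized power law p(m) ∝ m**(-alpha) on [m_min, m_max]."""
    norm = (m_max**(1 - alpha) - m_min**(1 - alpha)) / (1 - alpha)
    p = m**(-alpha) / norm
    return np.where((m >= m_min) & (m <= m_max), p, 0.0)

def gaussian_pdf(m, mu, sigma):
    return np.exp(-0.5 * ((m - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

def mass_model(m, lam=0.1, alpha=2.3, m_min=5.0, m_max=45.0, mu=35.0, sigma=3.0):
    """Mixture: (1 - lam) * power law + lam * Gaussian 'pileup'.
    The parameter values here are illustrative, not fitted."""
    return (1 - lam) * powerlaw_pdf(m, alpha, m_min, m_max) \
        + lam * gaussian_pdf(m, mu, sigma)

# Fraction of the mass distribution between 30 and 40 Msun in this toy model,
# via a simple Riemann sum:
grid = np.linspace(5.0, 45.0, 4001)
dx = grid[1] - grid[0]
pdf = mass_model(grid)
total = float(pdf.sum() * dx)
mask = (grid >= 30.0) & (grid <= 40.0)
frac_30_40 = float(pdf[mask].sum() * dx)
```

Comparing a fraction like `frac_30_40` with the PPISN fraction predicted by population synthesis is the kind of consistency check the abstract describes.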
This paper presents a new methodology to adjust map-based models to experimental data and reports the main results of a comprehensive experimental campaign of a Dual Source Heat Pump (DSHP) prototype. The prototype tested incorporates variable speed components (compressor, circulation pumps, and fan). The novelty of this prototype lies in its ability to select between two possible heat sources: air or ground. Thus, it can operate as a geothermal or aerothermal heat pump, as well as a chiller, thanks to the additional capacity to reverse the cycle. With this hybrid approach, several advantages can be obtained compared to conventional equipment, such as higher efficiency, the requirement of smaller borehole heat exchangers, or the absence of defrost cycles. In a prior study, polynomial models were developed to accurately characterize the DSHP's performance (i.e., condenser and evaporator capacities and electrical energy consumption). These models were obtained by taking the variables external to the unit as independent variables, to facilitate their applicability using variables commonly measured in real installations. Due to the complexity of heat pump performance, which in current equipment can be influenced by up to 5 or 6 independent variables, the search for suitable polynomial models required the availability of a complete working map including more than 3000 working points. Thus, this previous work developed these models based only on simulation results. In this sense, this paper concludes the development of these models by focusing on two critical issues concerning empirical model development. The first aspect involves determining the minimum number and location of testing points needed to define the experimental sample for the model adjustment. The reported experimental data were obtained by analyzing the most suitable experimental design methodology to create the experimental matrices in each operating mode of the DSHP.
The second aspect focuses on the final adjustment of models using experimental data. A novel fitting approach for empirical models is introduced in the last part of this study. The developed methodology enables the integration of simulation and experimental results for the final fitting of empirical models through a two-step adjustment. The first step involves analyzing and defining polynomial functionals from the complete working maps generated by simulation. Subsequently, in a second step, the polynomial models are refitted to a suitable experimental sample using the methodology presented in this work. The latter increases the accuracy of the models while minimizing experimental costs. This novel approach ensures a robust characterization of systems with many independent variables using a minimum amount of experimental data. Significant benefits can be obtained from its application, such as reduced experimental cost and increased model accuracy through an effective combination of experimental and simulated information. Furthermore, it can be considered of general applicability to other engineering problems where the characterization of physical systems influenced by a high number of independent variables is required.
• This paper reports aerothermal and geothermal heat pump performance data.
• Characterizing systems with 5 control variables at 5 levels involves 3125 points.
• The CCD methodology defines suitable experimental samples with only 30 test points.
• The new fitting method introduced allows combining experimental and simulated data.
• Precise polynomial models are reported to predict HP performance from external variables.
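The two-step adjustment can be illustrated on a toy problem: a polynomial functional is first identified on a dense simulated working map, then refitted to a small experimental sample. The performance map, variable ranges, and coefficients below are invented for illustration and are not the paper's:

```python
import numpy as np

# Toy stand-in for a heat-pump performance map: capacity as a function of
# two external variables (e.g., source and load water temperatures).
def simulated_map(t_src, t_load):
    return 5.0 + 0.12 * t_src - 0.08 * t_load + 0.004 * t_src * t_load

def design_matrix(t_src, t_load):
    """Polynomial functional chosen in step 1: [1, t_src, t_load, t_src*t_load]."""
    return np.column_stack([np.ones_like(t_src), t_src, t_load, t_src * t_load])

# Step 1: identify the functional from a dense simulated working map.
ts, tl = np.meshgrid(np.linspace(-5, 25, 50), np.linspace(25, 55, 50))
ts, tl = ts.ravel(), tl.ravel()
coef_sim, *_ = np.linalg.lstsq(design_matrix(ts, tl), simulated_map(ts, tl), rcond=None)

# Step 2: refit the same functional to a small experimental sample (here 30
# synthetic "measurements" with a systematic offset plus noise).
rng = np.random.default_rng(1)
ts_exp = rng.uniform(-5, 25, 30)
tl_exp = rng.uniform(25, 55, 30)
q_exp = simulated_map(ts_exp, tl_exp) + 0.3 + rng.normal(0, 0.02, 30)
coef_exp, *_ = np.linalg.lstsq(design_matrix(ts_exp, tl_exp), q_exp, rcond=None)

# The experimental refit captures the bias the simulation-only model misses:
mean_offset = float(np.mean(q_exp - design_matrix(ts_exp, tl_exp) @ coef_sim))
```

The key design choice is that step 1 fixes *which* polynomial terms to use (cheap, from simulation), so step 2 only has to estimate their coefficients from a few dozen tests.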
This article focuses on the transformation of current global problems into mathematical data and the drawing of conclusions from the results. After a theoretical introduction that clarifies the philosophy of this approach and provides an innovative description of the mathematical model concept, a method for describing and analyzing crises is presented, with the goal of measuring their influence on the observed system, whether at the EU, global, or other levels. Converting social issues into quantitative data provides a systematic assessment of the severity of crises and allows for comparisons across different crises and systems. The main conclusion of the article is that a mathematical model, such as the one introduced here, could, to some extent, describe global problems and their interrelationships as a first approximation, and thus be useful to policy makers.
Reference evapotranspiration (ET0) is one of the most important parameters required in many fields, such as hydrological, agricultural, and climatological studies. Therefore, its estimation via reliable and accurate techniques is a necessity. The present study aims to estimate the monthly ET0 time series of six stations located in Iran. To achieve this objective, gene expression programming (GEP) and support vector regression (SVR) were used as standalone models. A novel hybrid model was then introduced by coupling the classical SVR with an optimization algorithm, namely intelligent water drops (IWD), giving SVR−IWD. Two types of scenarios were considered: climatic data-based and antecedent ET0 data-based patterns. In the climatic data-based models, the effective climatic parameters were identified using two pre-processing techniques, Kendall's τ and entropy. It is worth mentioning that developing the hybrid SVR−IWD model, as well as utilizing the Kendall's τ and entropy approaches to discern the most influential weather parameters on ET0, are the innovations of the current research. The results illustrated that the applied pre-processing methods introduced different climatic inputs to feed the models. The overall results of the present study revealed that the proposed hybrid SVR−IWD model outperformed the standalone SVR model under both scenarios when estimating monthly ET0. In addition to the mentioned models, two types of empirical equations were also used: the Hargreaves−Samani (H−S) and Priestley−Taylor (P−T) equations, in their original and calibrated versions. It was concluded that the calibrated versions performed better than the original ones.
• Monthly ET0 time series were estimated at six stations located in Iran.
• Applied models were standalone GEP, SVR, and two different empirical models.
• A novel hybrid model was proposed by coupling SVR and IWD.
• Two pre-processing techniques, Kendall's τ and entropy, were used to determine the most effective weather parameters.
• The superior performance of the proposed hybrid SVR−IWD model was confirmed.
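For reference, the Hargreaves−Samani equation mentioned above has a compact closed form. The sketch below evaluates its standard formulation for made-up inputs (these are not the study's station data, and the study's calibrated coefficients would replace 0.0023):

```python
import math

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Hargreaves-Samani reference evapotranspiration (mm/day).
    ra: extraterrestrial radiation in MJ m-2 day-1; the factor 0.408
    converts radiation to its evaporation equivalent in mm/day."""
    return 0.0023 * 0.408 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

# Illustrative warm-season day (all values invented):
et0 = hargreaves_samani(t_mean=25.0, t_max=32.0, t_min=18.0, ra=40.0)
# roughly 6 mm/day for these inputs
```

Calibration, as done in the study, typically means refitting the leading coefficient (and sometimes the exponent on the temperature range) against local data.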