A coal-based energy consumption structure significantly increases carbon emissions, and as per-capita income rises, this structure constrains sustainable economic growth. Sustainable economic growth is measured by green total factor productivity (GTFP), calculated with a DEA-SBM super-efficiency model incorporating undesirable outputs; the energy consumption structure is represented by the share of coal in total energy consumption. The panel data cover Chinese provinces and municipalities from 2005 to 2017 (excluding Tibet). In the pooled panel regression, the effect of the energy consumption structure on sustainable economic growth is not significantly positive; after controlling for time and province fixed effects, the overall effect becomes positive and significant at the 5% level, and its effect on carbon dioxide emissions is significant at the 1% level, indicating that the coal-based energy consumption structure is a significant driver of carbon emissions. Accounting for provincial heterogeneity, the energy consumption structure significantly reduces sustainable economic growth at the 1% level in developed provinces and municipalities, significantly increases it at the 5% level in less developed ones, and has an uncertain effect in moderately developed ones. For developed provinces, reducing the share of coal consumption and improving the energy consumption structure can raise GTFP, that is, promote sustainable economic growth; for other provinces, strategies to change the energy consumption structure should be tailored to the stage of economic development. A mediating-effect model shows that, in both developed and underdeveloped provinces, the energy consumption structure affects sustainable economic growth through carbon dioxide emissions.
Therefore, controlling carbon dioxide emissions is very important for sustainable economic growth.
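The mediating-effect analysis described above can be illustrated with a minimal Baron-Kenny-style decomposition: regress the mediator (CO2 emissions) on the explanatory variable (coal share), then the outcome (GTFP) on both, and compare total, direct, and indirect effects. The sketch below uses synthetic data and hypothetical variable names, not the paper's panel data or estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

def ols(X, y):
    """Return OLS coefficients for y ~ X (with intercept prepended)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Synthetic data (hypothetical): coal share drives CO2,
# and CO2 in turn lowers green total factor productivity (GTFP).
coal_share = rng.uniform(0.3, 0.9, n)
co2 = 2.0 * coal_share + rng.normal(0, 0.1, n)
gtfp = 1.5 - 0.8 * co2 + rng.normal(0, 0.1, n)

total = ols(coal_share, gtfp)[1]                      # total effect c
a = ols(coal_share, co2)[1]                           # X -> mediator
b = ols(np.column_stack([coal_share, co2]), gtfp)[2]  # mediator -> Y given X
direct = ols(np.column_stack([coal_share, co2]), gtfp)[1]
indirect = a * b                                      # effect routed through CO2

print(f"total {total:.2f} = direct {direct:.2f} + indirect {indirect:.2f}")
```

In linear OLS the decomposition is exact: the total effect equals the direct effect plus the product of the two mediation coefficients.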
The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pretraining to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside the range of the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when the pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes the PB or DL models were more accurate). This case study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.
Key Points
Process‐Guided Deep Learning (PGDL) models integrate advanced empirical techniques with process knowledge
We used PGDL to accurately predict lake water temperatures under various conditions
PGDL performance improved significantly when pretraining data included diverse conditions generated by an existing process‐based model
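The three PGDL components combine into a single training objective: a supervised error term plus a penalty for violating conservation of energy. The sketch below is a hypothetical NumPy illustration of such a composite loss; the function name, arguments, and toy numbers are assumptions, not the authors' implementation:

```python
import numpy as np

def pgdl_loss(pred_temps, obs_temps, obs_mask, pred_energy_change,
              net_surface_flux, lambda_energy=0.1):
    """Hypothetical composite loss: supervised RMSE on observed
    depth/time points, plus a penalty when the predicted change in lake
    heat content departs from the net surface energy flux (the
    conservation-of-energy constraint)."""
    # Supervised term: RMSE over non-missing observations only.
    err = (pred_temps - obs_temps)[obs_mask]
    rmse = np.sqrt(np.mean(err ** 2))
    # Physics term: mean absolute violation of the energy budget.
    energy_penalty = np.mean(np.abs(pred_energy_change - net_surface_flux))
    return rmse + lambda_energy * energy_penalty

# Toy usage with made-up numbers (one missing observation):
pred = np.array([4.0, 5.0, 6.0])
obs = np.array([4.2, np.nan, 5.5])
mask = ~np.isnan(obs)
loss = pgdl_loss(pred, obs, mask,
                 pred_energy_change=np.array([10.0]),
                 net_surface_flux=np.array([9.0]))
print(round(loss, 3))  # 0.481
```

In practice the energy-balance term would compare modeled lake heat-content change against net surface fluxes; here both are passed in as precomputed arrays to keep the sketch self-contained.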
Fungal phytotoxic secondary metabolites are substances poisonous to plants that fungi produce through naturally occurring biochemical reactions. These metabolites are highly diverse in their structures, phytotoxic activities, and modes of toxicity. They are mainly isolated from phytopathogenic fungal species across several genera. Phytotoxins are either host-specific or non-host-specific. To date, at least 545 fungal phytotoxic secondary metabolites have been reported, including 207 polyketides, 46 phenols and phenolic acids, 135 terpenoids, 146 nitrogen-containing metabolites, and 11 others. Among them, aromatic polyketides and sesquiterpenoids are the main phytotoxic compounds. This review summarizes their chemical structures, sources, and phytotoxic activities. We also discuss their phytotoxic mechanisms and structure-activity relationships to lay a foundation for the future development and application of these promising metabolites as herbicides.
We present several numerical methods and establish their error estimates for the discretization of the nonlinear Dirac equation (NLDE) in the nonrelativistic limit regime, which involves a small dimensionless parameter 0 < ε ≪ 1 that is inversely proportional to the speed of light. In this limit regime the solution is highly oscillatory in time: there are propagating waves with wavelength O(ε²) in time and O(1) in space. We begin with the conservative Crank-Nicolson finite difference (CNFD) method and rigorously establish its error estimate, which depends explicitly on the mesh size h and time step τ as well as on the small parameter 0 < ε ≤ 1. Based on this error bound, in order to obtain 'correct' numerical solutions in the nonrelativistic limit regime, the CNFD method requires the ε-scalability τ = O(ε³) and h = O(√ε). We then propose and analyze two numerical methods for the discretization of the NLDE, using Fourier spectral discretization for the spatial derivatives combined with an exponential wave integrator and a time-splitting technique, respectively, for the temporal derivatives. Rigorous error bounds for the two methods show that their ε-scalability improves to τ = O(ε²) and h = O(1) when 0 < ε ≪ 1. Extensive numerical results are reported to confirm our error estimates.
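The reported meshing strategies can be summarized side by side (EWI and TSFP here denote the exponential-wave-integrator and time-splitting Fourier spectral methods, respectively):

```latex
\begin{aligned}
\text{CNFD:} &\quad \tau = O(\varepsilon^{3}), \quad h = O(\sqrt{\varepsilon}),\\
\text{EWI / TSFP:} &\quad \tau = O(\varepsilon^{2}), \quad h = O(1),
\qquad 0 < \varepsilon \ll 1 .
\end{aligned}
```

The improvement matters because the temporal oscillations have wavelength O(ε²): the CNFD time-step restriction is a full factor of ε more severe than what the wavelength itself demands, while the two spectral methods resolve it at the natural scale.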
• Annual lake phosphorus dynamics explained mostly by recycling and sedimentation
• Mass balance unable to fully reproduce lake phosphorus cycling
• Process-guided machine learning outperforms neural network or process model alone
• Missing long-term trend in lake phosphorus likely due to changes in loading
Phosphorus (P) loading to lakes is degrading the quality and usability of water globally. Accurate predictions of lake P dynamics are needed to understand whole-ecosystem P budgets, as well as the consequences of changing lake P concentrations for water quality. However, complex biophysical processes within lakes, along with limited observational data, challenge our capacity to reproduce the short-term lake dynamics needed for water quality predictions, as well as the long-term dynamics needed to understand broad-scale controls over lake P. Here we use an emerging paradigm in modeling, process-guided machine learning (PGML), to produce a phosphorus budget for Lake Mendota (Wisconsin, USA) and to accurately predict epilimnetic phosphorus over a time range of days to decades. In our implementation of PGML, which we term a Process-Guided Recurrent Neural Network (PGRNN), we combine a process-based model for lake P with a recurrent neural network, and then constrain the predictions with ecological principles. We independently test the process-based model, the recurrent neural network, and the PGRNN to evaluate the overall approach. The process-based model accounted for most of the observed pattern in lake P; however, it missed the long-term trend in lake P and had the worst performance in predicting winter and summer P in surface waters. The root mean square error (RMSE) for the process-based model, the recurrent neural network, and the PGRNN was 33.0 μg P L−1, 22.7 μg P L−1, and 20.7 μg P L−1, respectively. All models performed better during summer, with RMSE values for the three models (same order) equal to 14.3 μg P L−1, 10.9 μg P L−1, and 10.7 μg P L−1. Although the PGRNN had only marginally better RMSE during summer, it had lower bias and reproduced long-term decreases in lake P missed by the other two models.
For all seasons and all years, the recurrent neural network had better predictions than process alone, with root mean square error (RMSE) of 23.8 μg P L−1 and 28.0 μg P L−1, respectively. The output of PGRNN indicated that new processes related to water temperature, thermal stratification, and long term changes in external loads are needed to improve the process model. By using ecological knowledge, as well as the information content of complex data, PGML shows promise as a technique for accurate prediction in messy, real-world ecological dynamics, while providing valuable information that can improve our understanding of process.
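The "ecological principles" used to constrain the PGRNN can be illustrated with a minimal lake phosphorus mass balance; predictions that drift from this budget (or go negative) would be penalized during training. This is a hypothetical simplification with made-up coefficients, not the paper's process model:

```python
def mass_balance_step(P, inflow_load, outflow_rate, sed_rate,
                      recycle_flux, dt=1.0):
    """One explicit-Euler step of a minimal lake phosphorus budget
    (hypothetical units: P in ug/L, fluxes in ug/L per day):
        dP/dt = external load + internal recycling
                - outflow loss - sedimentation loss
    First-order outflow and sedimentation losses scale with P itself."""
    dPdt = inflow_load + recycle_flux - outflow_rate * P - sed_rate * P
    # Ecological constraint: concentration can never go negative.
    return max(P + dt * dPdt, 0.0)

# Toy step: 30 ug/L epilimnetic P, modest loading and recycling.
P_next = mass_balance_step(P=30.0, inflow_load=1.0, outflow_rate=0.01,
                           sed_rate=0.02, recycle_flux=0.5)
print(round(P_next, 2))  # 30.6
```

In a PGRNN-style setup, the network's sequence of P predictions would incur an extra loss term whenever successive values are inconsistent with a step of this balance, analogous to the energy-conservation penalty used for temperature models.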
A series of symmetrical donor–acceptor–donor (D–A–D) chromophores bearing various electron-withdrawing groups, such as quinoxaline (Qx), benzo[g]quinoxaline (BQ), phenazine (Pz), benzo[b]phenazine (BP), thieno[3,4-b]pyrazine (TP), and thieno[3,4-b]quinoxaline (TQ), has been designed and synthesized. Intramolecular charge transfer (ICT) interactions are found for all the chromophores because of the electron-withdrawing character of the two imine nitrogens in the pyrazine ring and the electron-donating character of the two amine nitrogens in the triphenylamines. Upon fusion of either a benzene or a thiophene ring onto the pyrazine acceptor unit, the ICT interactions are strengthened, resulting in a bathochromically shifted ICT band. Moreover, the thiophene ring is superior to the benzene ring in enhancing the ICT interaction and broadening the absorption spectrum. Notably, when a thiophene ring is fused onto the Qx unit of DQxD, a near-infrared dye is realized in the simple chromophore DTQD, which displays a maximum absorption wavelength of 716 nm with an absorption threshold beyond 900 nm. This is probably due to the enhanced charge density on the acceptor moiety and better orbital overlap, as revealed by theoretical calculations. These results suggest that extending the conjugation of a pyrazine acceptor in the direction orthogonal to the D–A–D backbone can dramatically strengthen the ICT interactions.
Accurate climate data at fine spatial resolution are essential for scientific research and for the development and planning of crucial social systems, such as energy and agriculture. Among such data, sea surface temperature plays a critical role, as the associated El Niño–Southern Oscillation (ENSO) is considered a significant signal of the global interannual climate system. In this paper, we propose an implicit neural representation-based interpolation method with temporal information (T_INRI) to reconstruct climate data at high spatial resolution, with sea surface temperature as the research object. Traditional deep learning models for generating high-resolution climate data apply only to fixed enhancement scales. In contrast, the proposed T_INRI method is not limited to the enhancement scale provided during training, and our results indicate that it can enhance low-resolution input by arbitrary scales. Additionally, we discuss the impact of temporal information on the generation of high-resolution climate data, specifically the influence of the calendar month from which the low-resolution sea surface temperature data are obtained. Our experimental results indicate that T_INRI outperforms traditional interpolation methods at different enhancement scales, and that the temporal information improves T_INRI performance across different calendar months. We also examine the potential of T_INRI for recovering missing grid values. These results demonstrate that the proposed T_INRI is a promising method for generating high-resolution climate data, with significant implications for climate research and related applications.
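The arbitrary-scale property of an implicit neural representation comes from treating the field as a function of continuous coordinates rather than as a fixed grid. The sketch below is a toy coordinate-MLP with random, untrained weights standing in for learned parameters; all shapes and names are assumptions, not the T_INRI architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tiny coordinate-MLP: maps continuous (lat, lon, month)
# coordinates to an SST value. Random weights stand in for trained ones.
W1 = rng.normal(size=(3, 32)); b1 = np.zeros(32)
W2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)

def sst_field(coords):
    """Query the implicit field at ANY coordinates; resolution is chosen
    at inference time, not baked into the model."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

# Any output grid can be requested, since the field is continuous.
# Here: a 7 x 11 grid over a toy equatorial Pacific box, for month 1.
lat, lon = np.meshgrid(np.linspace(-5, 5, 7), np.linspace(120, 280, 11))
month = np.full_like(lat, 1.0)   # the temporal input T_INRI conditions on
coords = np.column_stack([lat.ravel(), lon.ravel(), month.ravel()])
print(sst_field(coords).shape)   # one value per queried point
```

Doubling the requested grid density only changes the `linspace` arguments, which is what distinguishes this family of models from fixed-scale super-resolution networks.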
River systems are critical for the future sustainability of our planet but are constantly under pressure from food, water, and energy demands. Recent advances in machine learning bring great potential for automatic river mapping using satellite imagery. Surface river mapping can provide accurate and timely water-extent information that is highly valuable for sound policy and management decisions. However, accurate large-scale river mapping remains challenging given limited labels, spatial heterogeneity, and noise in satellite imagery (e.g., clouds and aerosols). In this paper, we propose a new multi-source data-driven method for large-scale river mapping that combines multi-spectral imagery and synthetic aperture radar data. In particular, we build a multi-source data segmentation model that uses contrastive learning to extract the information common to multiple data sources while preserving the distinct knowledge of each source. Moreover, we create the first large-scale multi-source river imagery dataset, based on Sentinel-1 and Sentinel-2 satellite data, along with 1013 handmade, accurate river segmentation masks (which will be released to the public). On this dataset, our method produces superior performance (F1-score of 91.53%) over multiple state-of-the-art segmentation algorithms. We also demonstrate the effectiveness of the proposed contrastive learning model in mapping river extent when data are limited and noisy.
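Contrastive learning across the two sensors can be sketched with a symmetric InfoNCE-style objective, in which co-located Sentinel-1 and Sentinel-2 patch embeddings form positive pairs and all other pairings in the batch act as negatives. The implementation below is a generic NumPy illustration of that objective, not the paper's model:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE-style loss: row i of z1 (e.g., a Sentinel-1 patch
    embedding) should match row i of z2 (the co-located Sentinel-2
    embedding) more strongly than any other row."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # cross-entropy on matches

# Sanity check with toy embeddings: when the two sources agree and rows
# are correctly paired, the loss is lower than with mismatched pairing.
z = np.eye(4) + 0.01
loss_aligned = info_nce(z, z)
loss_shuffled = info_nce(z, z[::-1])
print(loss_aligned < loss_shuffled)  # True
```

Minimizing this loss pulls the common (water-extent) information of the two sensors together in embedding space, which is the mechanism the abstract credits for robustness to limited and noisy labels.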
Accurate and cost-effective quantification of the carbon cycle for agroecosystems at decision-relevant scales is critical to mitigating climate change and ensuring sustainable food production. However, conventional process-based or data-driven modeling approaches alone carry large prediction uncertainties, owing to the complexity of the biogeochemical processes being modeled and the lack of observations to constrain many key state and flux variables. Here we propose a Knowledge-Guided Machine Learning (KGML) framework that addresses these challenges by integrating knowledge embedded in a process-based model, high-resolution remote sensing observations, and machine learning (ML) techniques. Using the U.S. Corn Belt as a testbed, we demonstrate that KGML can outperform conventional process-based and black-box ML models in quantifying carbon cycle dynamics. Our high-resolution approach quantitatively reveals 86% more spatial detail of soil organic carbon changes than conventional coarse-resolution approaches. Moreover, we outline a protocol for improving KGML via various paths, which can be generalized to develop hybrid models that better predict complex earth system dynamics.
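One common way to inject process-model knowledge into an ML model, in the spirit of the pretraining path above, is to fit on abundant simulated data first and then fine-tune on scarce observations while shrinking toward the pretrained weights rather than toward zero. The ridge-based sketch below is a deliberately simple stand-in for that idea; all names, coefficients, and sample sizes are hypothetical:

```python
import numpy as np

def fit_ridge(X, y, w0=None, lam=1.0):
    """Ridge regression that shrinks toward prior weights w0 (default 0).
    Using pretrained weights as w0 is a simple knowledge-transfer device."""
    n, d = X.shape
    w0 = np.zeros(d) if w0 is None else w0
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * w0)

rng = np.random.default_rng(2)
w_true = np.array([1.0, -2.0, 0.5])

# Step 1: "pretrain" on abundant process-model simulations.
X_sim = rng.normal(size=(500, 3))
y_sim = X_sim @ w_true
w_pre = fit_ridge(X_sim, y_sim, lam=1e-6)

# Step 2: fine-tune on scarce, noisy observations, anchored to the prior.
X_obs = rng.normal(size=(5, 3))
y_obs = X_obs @ w_true + rng.normal(0, 0.05, 5)
w_kgml = fit_ridge(X_obs, y_obs, w0=w_pre, lam=10.0)
print(np.round(w_kgml, 1))
```

With only five observations, an unanchored fit would be fragile; the strong shrinkage toward the pretrained weights keeps the fine-tuned model close to the process-model knowledge while still adjusting to the data.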
Superalloy Inconel 718 is an important material for aircraft manufacturing because of its excellent high-temperature performance. However, when cutting Inconel 718, a large amount of cutting heat is generated, resulting in excessive tool temperature and severe wear, which accelerates tool failure. To address this problem, the influence of tool angle on heat-assisted machining was studied with a simulation model combined with heat-assisted machining technology. During the cutting process, the workpiece preheating temperature was raised from room temperature (20 °C) to 500 °C, the rake angle ranged from −5° to 10°, and the flank angle ranged from 4° to 16°. Analysis of the various parameters showed that a smaller rake angle can effectively reduce tool temperature. Additionally, a flank angle of around 12° was found to decrease the maximum tool wear area by approximately 10.5%. Moreover, heat-assisted machining was observed to reduce tool temperature significantly, by 11.1%, and to decrease cutting force by 18-22%, particularly at temperatures exceeding 500 °C.