•Uptake of lead(II) was likely dominated by surface complexation.
•Kinetics and isotherms were clarified for lead(II) adsorption on nylon microplastics.
•Lead(II) adsorption capacity decreased with increasing fulvic acid (FA) concentration.
Both heavy metals and microplastic pollutants are ubiquitous in the aquatic environment. The uptake of lead(II) ions from aqueous solution onto aged nylon microplastics was investigated in batch studies as a function of pH, contact time, temperature, supporting electrolyte concentration and fulvic acid concentration. The effect of surface properties on the adsorption behavior of lead(II) was investigated with scanning electron microscopy equipped with energy-dispersive X-ray spectroscopy (SEM-EDAX), Fourier transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA), X-ray diffraction (XRD) and differential scanning calorimetry (DSC). The adsorption kinetics conformed well to the pseudo-second-order equation, the Elovich equation and the intraparticle diffusion model. The experimental data were fitted to the Langmuir and Freundlich adsorption isotherms and the model parameters were estimated. The lead(II) uptake on aged nylon microplastics was spontaneous and endothermic in nature. Lead(II) adsorption depended significantly on the sodium chloride concentration, initial solution pH and fulvic acid concentration. The results highlight the importance of the surface carboxyl functional groups of aged nylon microplastics in controlling lead(II) adsorption.
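As an illustration of the isotherm fitting described above, the sketch below fits the Langmuir and Freundlich models with scipy; the equilibrium data points are placeholders, not measurements from the study.

```python
# Illustrative fit of Langmuir and Freundlich adsorption isotherms;
# the Ce/qe values below are placeholders, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    """Freundlich isotherm: qe = KF * Ce^(1/n)."""
    return kf * ce ** (1.0 / n)

ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc. (mg/L)
qe = np.array([1.8, 3.1, 4.9, 7.6, 9.2, 10.4])    # uptake (mg/g), placeholder

(qmax, kl), _ = curve_fit(langmuir, ce, qe, p0=[10.0, 0.5])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[2.0, 2.0])
print(f"Langmuir:   qmax={qmax:.2f} mg/g, KL={kl:.3f} L/mg")
print(f"Freundlich: KF={kf:.2f}, n={n:.2f}")
```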
Deep reinforcement learning (RL) combines the psychological mechanisms of "trial and error" and "reward and punishment" in RL with the powerful feature representation and nonlinear mapping capabilities of deep learning. It currently plays an essential role in artificial intelligence and machine learning. Since an RL agent needs to interact constantly with its surroundings, the deep Q network (DQN) inevitably has to learn numerous network parameters, which results in low learning efficiency. In this paper, a multisource transfer double DQN (MTDDQN) based on actor learning is proposed. Transfer learning is integrated with deep RL so that the RL agent can collect, summarize, and transfer action knowledge, including policy mimicry and feature regression, to the training of related tasks. DQN suffers from action overestimation, i.e., the lower probability limit of the action corresponding to the maximum Q value is nonzero. Therefore, the transfer network is trained with double DQN to eliminate the error accumulation caused by action overestimation. In addition, to avoid negative transfer, i.e., to ensure strong correlations between source and target tasks, a multisource transfer learning mechanism is applied. Atari 2600 games are tested on the Arcade Learning Environment platform to evaluate the feasibility and performance of MTDDQN against mainstream approaches such as DQN and double DQN. Experiments show that MTDDQN achieves not only human-like actor-learning transfer capability, but also the desired learning efficiency and testing accuracy on the target task.
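The overestimation issue that motivates the use of double DQN can be made concrete with a minimal sketch of the two target computations; the Q-value arrays below are placeholders standing in for the outputs of the online and target networks.

```python
# Minimal NumPy sketch of DQN vs. double DQN target computation
# (terminal-state handling omitted for brevity).
import numpy as np

def dqn_target(rewards, q_target_next, gamma=0.99):
    # Standard DQN: the target network both selects and evaluates the
    # next action, which biases the target upward (action overestimation).
    return rewards + gamma * q_target_next.max(axis=1)

def double_dqn_target(rewards, q_online_next, q_target_next, gamma=0.99):
    # Double DQN: the online network selects the action, the target
    # network evaluates it, decoupling selection from evaluation.
    best_actions = q_online_next.argmax(axis=1)
    batch = np.arange(len(rewards))
    return rewards + gamma * q_target_next[batch, best_actions]

rewards = np.array([1.0, 0.0])
q_online_next = np.array([[0.2, 0.9], [0.5, 0.4]])
q_target_next = np.array([[0.3, 0.7], [0.6, 0.1]])
print(dqn_target(rewards, q_target_next))                       # biased up
print(double_dqn_target(rewards, q_online_next, q_target_next)) # decoupled
```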
Interrupted-sampling repeater jamming (ISRJ) provides a novel coherent-jamming mode against wideband radar. ISRJ allows a single-antenna jammer to periodically sample and repeat fractions of the intercepted signal, which reduces the required sampling rate and achieves transmit-receive isolation. The coherent jamming signal generated by ISRJ can form multiple verisimilar false targets when received and processed by the victim radar receiver, and some false targets can even precede the real target. This paper surveys the use of ISRJ against linear frequency modulated (LFM) radar. The theory and application of ISRJ have been researched for more than a decade, but a complete summary of the framework of this technique has been missing. Here, the mathematical principles of ISRJ against LFM radars employing matched-filter processing, stretch processing, and range-Doppler processing are developed. We focus on the unique jamming effects that arise when the interrupted-sampling frequency of the jammer is smaller than the bandwidth of the radar signal. Specifically, the false-target characteristics, including amplitude, spatial distribution, and phase, are discussed. On this basis, the key jamming parameters that determine these false-target characteristics are identified and analyzed in detail. Finally, simulations and real data are used to verify the correctness of the analyses. Experimental results highlight the potential application of the proposed jamming mode.
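A simplified simulation conveys the core mechanism: gating an intercepted LFM pulse with a periodic sampling window and passing the result through the victim's matched filter. All parameters are illustrative, and the zero-delay direct-repeat model below is a simplification of a practical ISRJ jammer.

```python
# Sketch of interrupted-sampling (sample-and-direct-repeat) jamming
# against an LFM pulse; parameter values are illustrative only.
import numpy as np

fs = 100e6             # simulation sampling rate (Hz)
T = 20e-6              # LFM pulse width (s)
B = 10e6               # LFM bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
lfm = np.exp(1j * np.pi * (B / T) * t**2)    # baseband LFM chirp

Ts = 4e-6              # interrupted-sampling period; fs_j = 1/Ts = 0.25 MHz < B
tau = 1e-6             # slice duration within each period
gate = ((t % Ts) < tau).astype(float)        # periodic sampling gate
isrj = lfm * gate                            # gated (sampled-and-repeated) signal

# Matched filtering in the victim receiver: the periodic gating yields a
# main false target flanked by secondary false targets whose spacing is
# set by the interrupted-sampling frequency and the chirp rate.
mf_out = np.abs(np.correlate(isrj, lfm, mode="full"))
print("main false-target index:", mf_out.argmax(), "of", len(mf_out))
```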
•Sensitivity of dust emissions to wind variations is higher than to variations in vegetation and soil moisture.
•The residual effect weakens the sensitivity of dust emissions to driving factors.
•The key driving factor of dust emissions is spatially differentiated across Earth's main drylands.
The identification of driving factors that contribute to dust emissions holds great significance for studying global climate change. In the present study, we constructed a new index, the Dust Sensitivity Index (DSI), which allowed us to quantify the sensitivity of dust emissions to variations in driving factors over 2003–2017 in Earth's main drylands. We found that dust emissions were sensitive to driving-factor variability in eastern Brazil, the southern drylands of the Sahel, eastern Africa, eastern Australia and parts of northern Eurasia, where the aridity index (AI) is relatively high. The main factor affecting the DSI varies geographically over Earth's main dryland regions. Overall, wind speed made the largest relative contribution to dust optical depth (DOD) sensitivity (11.3%), followed by soil moisture (10.8%) and vegetation (10.4%). In addition, wind, vegetation, and soil moisture interact to impose complex and varying limitations on dust activity: 39.3% of Earth's main drylands were limited by vegetation, 31.2% by wind and 29.5% by soil moisture. Our study also demonstrates that the residual effects of previous dust-driving factors influence contemporary conditions. We found that regions characterized by lower DSI values generally displayed the most prominent residual effects.
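The DSI construction itself is specific to the paper and is not reproduced here; as one hedged illustration of how relative contributions of driving factors to DOD variability can be apportioned, the sketch below uses standardized regression coefficients on synthetic anomaly series (all variables and values are hypothetical).

```python
# Hypothetical apportionment of DOD sensitivity among driving factors via
# standardized multiple regression; not the paper's actual DSI method.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 180  # e.g., monthly anomalies over 15 years
wind = rng.normal(size=n)
soil_moisture = rng.normal(size=n)
vegetation = rng.normal(size=n)
dod = 0.5 * wind - 0.4 * soil_moisture - 0.3 * vegetation \
      + rng.normal(scale=0.5, size=n)

X = np.column_stack([wind, soil_moisture, vegetation])
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize predictors
y = (dod - dod.mean()) / dod.std()

beta = LinearRegression().fit(X, y).coef_
contrib = np.abs(beta) / np.abs(beta).sum()   # relative contribution per factor
for name, c in zip(["wind", "soil moisture", "vegetation"], contrib):
    print(f"{name}: {100 * c:.1f}%")
```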
•A driving simulator was used to test drivers' collision avoidance behaviors.
•Effects of situational urgency on collision avoidance were investigated.
•As situational urgency increases, drivers brake faster and harder.
•Multi-stage braking behavior was observed at low situational urgency.
Rear-end collisions have been estimated to account for 20–30% of all crashes and about 10% of all fatal crashes. A thorough investigation of drivers' collision avoidance behaviors when exposed to rear-end collision risks is needed to help guide the development of effective countermeasures. The urgency, or criticality, of the situation affects drivers' collision avoidance behavior, but it has not been systematically investigated. A high-fidelity driving simulator was used to examine the effects of differing levels of situational urgency on drivers' collision avoidance behaviors. Drivers' braking and steering decisions, perception response times, throttle release response times, throttle-to-brake transition times, brake delays, maximum brake pedal pressures and peak decelerations were recorded under lead-vehicle decelerations of 0.3 g, 0.5 g, and 0.75 g and headways of 1.5 s and 2.5 s. Results showed that (1) as situational urgency increased, drivers released the accelerator and braked to maximum more quickly; (2) the transition time between initial throttle release and brake initiation was not affected by situational urgency; and (3) at low situational urgency, multi-stage braking behavior led to longer delays from brake initiation to full braking. These findings show that the effects of situational urgency on drivers' response times, braking delays, and braking intensity should be considered when developing forward collision warning systems.
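One common way to make "situational urgency" concrete is kinematic: the time-to-collision (TTC) at brake onset for a given headway and lead-vehicle deceleration. The sketch below is a hedged illustration of that relationship; the speed and reaction-time values are hypothetical, not the study's conditions.

```python
# Illustrative TTC at brake onset across the headway / lead-deceleration
# conditions mentioned above; all numeric assumptions are hypothetical.
G = 9.81  # m/s^2

def time_to_collision(gap_m, closing_speed_ms):
    """TTC = gap / closing speed; smaller TTC means higher urgency."""
    return float("inf") if closing_speed_ms <= 0 else gap_m / closing_speed_ms

v = 25.0                                    # both vehicles initially at 25 m/s
t_react = 1.0                               # assumed driver response delay (s)
for headway_s in (1.5, 2.5):
    for decel_g in (0.3, 0.5, 0.75):
        a = decel_g * G                     # lead-vehicle deceleration (m/s^2)
        closing = a * t_react               # speed the lead has lost by brake onset
        gap = headway_s * v - 0.5 * a * t_react**2  # remaining gap at brake onset
        print(f"headway {headway_s} s, lead decel {decel_g} g: "
              f"TTC ≈ {time_to_collision(gap, closing):.1f} s")
```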
•Fifteen autonomous vehicle (AV) crash patterns were identified using pre-crash scenarios.
•The proportion of AVs being rear-ended by conventional vehicles was 52.46%.
•The differences between AV and conventional vehicle scenarios were determined.
•An in-depth crash investigation was conducted for the main AV pre-crash scenarios.
•Perception and planning were important causes of AV crashes.
Data-based approaches to generating crash scenarios have mainly relied on conventional vehicle crashes and naturalistic driving data, and have not considered differences between autonomous vehicle (AV) and conventional vehicle crashes. As the AV's presence on roadways continues to grow, its crash scenarios take on new importance for traffic safety. This study therefore derived crash patterns using the United States Department of Transportation pre-crash scenario typology, and used statistical analysis to determine the differences between AV and conventional vehicle pre-crash scenarios. Analysis of 122 AV crashes and 2084 conventional vehicle crashes revealed 15 scenario types for AVs and 26 for conventional vehicles. The two groups differed both in the types of scenario and in the proportion of crashes within the same scenario. The most frequent AV pre-crash scenarios were rear-end collisions (52.46%) and lane-change collisions (18.85%), with AVs rear-ended by conventional vehicles 1.6 times as frequently as conventional vehicles. An in-depth investigation of the characteristics and causes of four AV pre-crash scenarios was conducted and summarized from the perspectives of perception and path planning. The perception-reaction time (PRT) difference between AVs and human drivers, the AV's inaccurate identification of other vehicles' lane-change intentions, and the AV's insufficient path planning across the time and space dimensions were found to be important causes of the AV crashes. By increasing understanding of the complex characteristics of AV pre-crash scenarios, this analysis will encourage cooperation with vehicle manufacturers and AV technology companies for further study of crash causation toward the goals of improved test scenario construction and optimization of the AV's automated driving system (ADS).
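The study's exact statistical tests are not reproduced here; a standard choice for comparing scenario frequency distributions between two crash groups is a chi-square test of independence, sketched below with hypothetical counts (not the study's data).

```python
# Chi-square comparison of pre-crash scenario distributions between AV and
# conventional vehicle crashes; the contingency table is hypothetical.
from scipy.stats import chi2_contingency

# Rows: crash group (AV, conventional); columns: scenario counts,
# e.g., rear-end, lane change, intersection, other (placeholder values).
table = [
    [64, 23, 10, 25],        # AV crashes (hypothetical counts)
    [500, 300, 600, 684],    # conventional vehicle crashes (hypothetical)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```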
A support vector machine (SVM) plays a prominent role in classic machine learning, especially classification and regression. Through structural risk minimization, it has earned a good reputation for effectively reducing overfitting, avoiding the curse of dimensionality, and not falling into local minima. Nevertheless, existing SVMs do not perform well when facing class imbalance and large-scale samples. Undersampling is a plausible way to address imbalanced problems, but it suffers from high computational complexity and reduced accuracy because of its many iterations and random sampling process. To improve classification performance on data imbalance problems, this work proposes a weighted undersampling (WU) scheme for SVM based on space geometry distance, yielding an improved algorithm named WU-SVM. In WU-SVM, majority samples are grouped into subregions (SRs) and assigned different weights according to their Euclidean distance to the hyperplane. Samples in an SR with higher weight have a greater chance of being sampled and used in each learning iteration, so as to retain the distribution information of the original data as much as possible. Comprehensive experiments test WU-SVM on 21 binary-class and six multiclass publicly available data sets. The results show that it clearly outperforms state-of-the-art methods in terms of three popular metrics for imbalanced classification, i.e., area under the curve, F-measure, and G-mean.
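The sketch below illustrates the weighted-undersampling idea on synthetic data: group majority samples into subregions by distance to an initial SVM hyperplane and sample them with region-dependent weights. WU-SVM's exact grouping and weighting rules may differ; here, closer subregions receive higher weight, on the assumption that boundary samples are more informative.

```python
# Hedged sketch of distance-weighted undersampling for an imbalanced SVM;
# the grouping/weighting scheme is an assumption, not the paper's exact rule.
import numpy as np
from sklearn.svm import SVC

def weighted_undersample(X_maj, clf, n_keep, n_regions=5, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.abs(clf.decision_function(X_maj))       # distance to hyperplane
    edges = np.quantile(dist, np.linspace(0, 1, n_regions + 1))
    region = np.clip(np.searchsorted(edges, dist) - 1, 0, n_regions - 1)
    weights = 1.0 / (1.0 + region)                    # closer region -> higher weight
    idx = rng.choice(len(X_maj), size=n_keep, replace=False,
                     p=weights / weights.sum())
    return X_maj[idx]

# Usage: fit an initial SVM on all data, undersample the majority class,
# then retrain on the balanced subset (synthetic 2-D data).
rng = np.random.default_rng(1)
X_maj = rng.normal(0.0, 1.0, size=(500, 2))
X_min = rng.normal(2.0, 1.0, size=(50, 2))
X = np.vstack([X_maj, X_min])
y = np.array([0] * 500 + [1] * 50)
clf = SVC(kernel="linear").fit(X, y)

X_maj_small = weighted_undersample(X_maj, clf, n_keep=50)
X_bal = np.vstack([X_maj_small, X_min])
y_bal = np.array([0] * 50 + [1] * 50)
clf_bal = SVC(kernel="linear").fit(X_bal, y_bal)
```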
•Developed a dual U-Net residual network (DURN) for super-resolution reconstruction of cardiac magnetic resonance images.
•Designed the dual U-Net module.
•The method effectively improves the super-resolution reconstruction of cardiac magnetic resonance images.
Heart disease is a serious threat to human health and a leading cause of death. Moreover, under the influence of recent health factors, its incidence continues to rise. Today, cardiac magnetic resonance (CMR) imaging can provide a full range of structural and functional information about the heart, and has become an important tool for the diagnosis and treatment of heart disease. Improving the image resolution of CMR therefore has important medical value for the diagnosis and assessment of heart disease. At present, most single-image super-resolution (SISR) reconstruction methods suffer from serious problems, such as insufficient mining of feature information, difficulty in modeling the dependence between channels of the feature maps, and reconstruction errors in the high-resolution output.
To address these problems, we propose and implement a dual U-Net residual network (DURN) for super-resolution of CMR images. Specifically, we first propose a U-Net residual network (URN) model, which is divided into an up-branch and a down-branch. The up-branch is composed of residual blocks and up-blocks to extract and upsample deep features; the down-branch is composed of residual blocks and down-blocks to extract and downsample deep features. Building on the URN model, we construct the dual U-Net residual network (DURN) model, which combines the deep features extracted at the same positions in the first and second URNs through residual connections. This lets the second URN make full use of the features extracted by the first URN to extract deeper features from low-resolution images.
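A heavily simplified PyTorch sketch of this structure follows: two URN-style branches in sequence, with residual connections linking features at matching positions. Channel counts, depths, and block designs are assumptions for illustration, not the paper's exact configuration, and the final upscaling module is omitted.

```python
# Simplified sketch of a dual U-Net residual structure (assumed layout).
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, x):
        return x + self.body(x)

class URN(nn.Module):
    """One U-Net-like unit: down-branch (extract + downsample), then
    up-branch (extract + upsample)."""
    def __init__(self, ch=64):
        super().__init__()
        self.down = nn.Sequential(ResBlock(ch),
                                  nn.Conv2d(ch, ch, 3, stride=2, padding=1))
        self.up = nn.Sequential(ResBlock(ch),
                                nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1))
    def forward(self, x, prev=None):
        d = self.down(x)
        if prev is not None:
            d = d + prev[0]          # residual link to first URN, same position
        u = self.up(d)
        if prev is not None:
            u = u + prev[1]
        return u, (d, u)

class DURN(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)   # single-channel CMR input
        self.urn1, self.urn2 = URN(ch), URN(ch)
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)
    def forward(self, x):
        f = self.head(x)
        u1, feats1 = self.urn1(f)
        u2, _ = self.urn2(u1, prev=feats1)           # reuse first URN's features
        return self.tail(u2) + x                     # global residual learning

y = DURN()(torch.randn(1, 1, 64, 64))   # -> torch.Size([1, 1, 64, 64])
```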
When the scale factors are 2, 3, and 4, our DURN obtains PSNR values of 37.86 dB, 33.96 dB, and 31.65 dB on the Set5 dataset, which represents (i) a maximum improvement of 4.17 dB, 3.55 dB, and 3.22 dB over the bicubic algorithm, and (ii) a minimum improvement of 0.34 dB, 0.14 dB, and 0.11 dB over the LapSRN algorithm.
Comprehensive experiments on benchmark datasets demonstrate that the proposed DURN not only achieves higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values than other state-of-the-art SR algorithms, but also reconstructs clearer super-resolution CMR images with richer details, edges, and textures.
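For reference, the two reported metrics can be computed with scikit-image as sketched below; the arrays are random placeholders standing in for a ground-truth slice and its reconstruction.

```python
# Computing PSNR and SSIM with scikit-image (placeholder images).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
gt = rng.random((128, 128))                                    # ground-truth HR slice
sr = np.clip(gt + rng.normal(scale=0.05, size=gt.shape), 0, 1) # reconstruction

psnr = peak_signal_noise_ratio(gt, sr, data_range=1.0)
ssim = structural_similarity(gt, sr, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```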
Microplastics (MPs) are becoming a major concern due to their great potential to sorb and transport pollutants in the aquatic environment; hexabromocyclododecane (HBCD) is a common chemical additive in polystyrene (PS) MPs. However, the mechanisms underlying the interaction of tetracycline (TC) with HBCD-PS composite MPs (HBCD-PS MPs) are still not well documented. Our findings showed that the addition of HBCD resulted in a relatively higher hydrophobicity of PS MPs and significantly enhanced the sorption ability of HBCD-PS MPs for TC. Kinetic models suggested that the sorption of TC onto PS and HBCD-PS MPs was mainly controlled by film diffusion and intra-particle diffusion, respectively. Statistical physics models showed that the sorption of TC onto PS and HBCD-PS MPs was associated with monolayer formation, and the results indicated that TC was sorbed onto both MPs through multi-molecular and non-parallel processes. TC sorption was solution-pH-dependent, while the effect of NaCl content on TC sorption was negligible. The presence of Cu(II), Pb(II), Cd(II), and Zn(II) ions had different influences on TC sorption onto both MPs. Overall, various mechanisms, including π-π and hydrophobic interactions, jointly regulated the sorption of TC onto both MPs. Our results provide new insights into the sorption behavior and interaction mechanisms of TC onto both MPs, and highlight that the addition of HBCD likely increases the enrichment capacity of MPs for pollutants in the environment.
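The film- versus intra-particle-diffusion distinction drawn above is commonly assessed with the Weber-Morris model, qt = kid·t^0.5 + C; the sketch below fits it by linear regression on placeholder data, not the study's measurements.

```python
# Hedged sketch of Weber-Morris intra-particle diffusion fitting
# (qt = kid * sqrt(t) + C); data points are placeholders.
import numpy as np

t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)   # contact time (min)
qt = np.array([0.8, 1.2, 1.7, 2.3, 2.7, 3.4, 3.9])         # uptake (mg/g)

kid, C = np.polyfit(np.sqrt(t), qt, 1)   # slope = rate constant, intercept = C
print(f"kid = {kid:.3f} mg/(g·min^0.5), C = {C:.3f} mg/g")
# A nonzero intercept C suggests that film (boundary-layer) diffusion also
# contributes alongside intra-particle diffusion.
```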
•Tetracycline (TC) sorption was mainly dominated by π-π and hydrophobic interactions.
•TC exhibited a relatively higher sorption affinity for HBCD-PS than for PS MPs.
•TC sorption onto both MPs involved multi-molecular and non-parallel processes.
•The addition of HBCD altered the effect of heavy metal ions on TC sorption onto MPs.
For zero-shot image classification with relative attributes (RAs), the traditional method requires not only that all seen and unseen images obey a Gaussian distribution, but also that classification of testing samples be performed by maximum likelihood estimation. We therefore propose a novel zero-shot image classifier: a random forest based on relative attributes. First, based on ordered and unordered pairs of images from the seen classes, the ranking support vector machine is used to learn a ranking function for each attribute. Then, according to the relative relationships between seen and unseen classes, an RA ranking-score model per attribute is built for each unseen image, with the appropriate seen classes automatically selected to participate in the modeling process. Third, a random forest (RF) classifier is trained on the RA ranking scores of the attributes for all seen and unseen images. Finally, the class labels of testing images are predicted with the trained RF. Experiments on the Outdoor Scene Recognition, PubFig, and Shoes data sets show that the proposed method is superior to several state-of-the-art methods in classification capability for zero-shot learning problems.
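The first and third steps can be sketched with the standard RankSVM reduction (classifying pairwise feature differences) followed by a random forest on the resulting ranking scores. Features, pairs, and labels below are synthetic placeholders, and a single attribute stands in for the full attribute set.

```python
# Hedged sketch: RankSVM-style attribute ranking + random forest on scores.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))              # image features (placeholder)
true_w = rng.normal(size=16)
strength = X @ true_w                       # latent attribute strength

# Ordered pairs (i, j) with attribute(i) > attribute(j)
i = rng.integers(0, 200, 300)
j = rng.integers(0, 200, 300)
keep = strength[i] > strength[j]
diffs = X[i[keep]] - X[j[keep]]

# RankSVM reduction: differences of ordered pairs are positive examples,
# their negations are negative examples.
pair_X = np.vstack([diffs, -diffs])
pair_y = np.hstack([np.ones(len(diffs)), -np.ones(len(diffs))])
ranker = LinearSVC(C=1.0).fit(pair_X, pair_y)
scores = X @ ranker.coef_.ravel()           # per-image attribute ranking scores

# Random forest on ranking scores (one score per attribute in the full
# method; a single attribute here for brevity, with stand-in labels).
labels = (strength > np.median(strength)).astype(int)
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(scores[:, None], labels)
```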