Molecular communication (MC) is a new communication engineering paradigm where molecules are employed as information carriers. MC systems are expected to enable revolutionary new applications, such as sensing of target substances in biotechnology, smart drug delivery in medicine, and monitoring of oil pipelines or chemical reactors in industrial settings. As for any other kind of communication, simple yet sufficiently accurate channel models are needed for the design, analysis, and efficient operation of MC systems. In this paper, we provide a tutorial review of mathematical channel modeling for diffusive MC systems. The considered end-to-end MC channel models incorporate the effects of the release mechanism, the MC environment, and the reception mechanism on the observed information molecules. Thereby, the various existing models for the different components of an MC system are presented within a common framework, and the underlying biological, chemical, and physical phenomena are discussed. Deterministic models characterizing the expected number of molecules observed at the receiver and statistical models characterizing the actual number of observed molecules are developed. In addition, we provide channel models for time-varying MC systems with moving transmitters and receivers, which are relevant for advanced applications such as smart drug delivery with mobile nanomachines. For complex scenarios, where simple MC channel models cannot be obtained from first principles, we investigate simulation- and experiment-driven channel models. Finally, we provide a detailed discussion of potential challenges, open research problems, and future directions in channel modeling for diffusive MC systems.
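In the simplest deterministic setting such tutorials consider (an impulsive point release, unbounded free diffusion, and a small transparent receiver), the expected observation reduces to the Green's function of the 3-D diffusion equation. A minimal sketch of that textbook case; all numerical values below are purely illustrative, not taken from the paper:

```python
import math

def expected_molecules(n_tx, d, t, D, v_rx):
    """Expected number of molecules inside a small passive (transparent)
    receiver of volume v_rx at distance d from an instantaneous point
    release of n_tx molecules, under unbounded free diffusion with
    diffusion coefficient D (Green's function of the 3-D diffusion
    equation): c(d, t) = n_tx / (4*pi*D*t)^(3/2) * exp(-d^2 / (4*D*t))."""
    c = n_tx / (4.0 * math.pi * D * t) ** 1.5 * math.exp(-d ** 2 / (4.0 * D * t))
    return v_rx * c

# Illustrative values: d in m, D in m^2/s, a receiver of ~100 nm radius.
v_rx = 4.0 / 3.0 * math.pi * (1e-7) ** 3
n_obs = expected_molecules(n_tx=1e4, d=1e-6, t=1e-3, D=1e-10, v_rx=v_rx)
```

Differentiating c(d, t) with respect to t shows the expected observation peaks at t = d^2 / (6 D), a useful sanity check for any such model.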
Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well‐established if the set of independent variables to consider is fixed and small, so we can assume that effect estimates are unbiased and the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and we are often confronted with 10–30 candidate variables, a number frequently too large for all of them to be considered in a statistical model. We provide an overview of the available variable selection methods that are based on significance or information criteria, penalized likelihood, the change‐in‐estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of the linear regression model and then transferred to generalized linear models or models for censored survival data. Variable selection, in particular if used in explanatory modeling where effect estimates are of central interest, can compromise the stability of the final model, the unbiasedness of regression coefficients, and the validity of p‐values and confidence intervals. We therefore give pragmatic recommendations for the practicing statistician on applying variable selection methods in general (low‐dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities, based on resampling the entire variable selection process, that should be routinely reported by software packages offering automated variable selection algorithms.
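As a concrete instance of selection driven by an information criterion, backward elimination under AIC for a linear model can be sketched as follows. This is a generic illustration, not the procedure of any specific package; the variable names and data are invented:

```python
import numpy as np

def aic(y, X):
    """AIC of an OLS fit under a Gaussian likelihood, up to an additive
    constant: n*log(RSS/n) + 2k, with k the number of fitted columns."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def backward_eliminate(y, X, names):
    """Repeatedly drop the single variable whose removal lowers AIC
    the most; stop when no removal improves AIC."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        current = aic(y, X[:, keep])
        candidates = [(aic(y, X[:, [j for j in keep if j != i]]), i)
                      for i in keep]
        best, drop = min(candidates)
        if best >= current:          # no candidate improves the criterion
            break
        keep.remove(drop)
    return [names[i] for i in keep]
```

Resampling this entire loop (e.g., refitting on bootstrap samples and tabulating how often each variable is selected) yields exactly the kind of stability quantities the abstract recommends reporting.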
Underwater images often suffer severe quality degradation and distortion due to light absorption and scattering in the water medium. A hazy image formation model is widely used to restore image quality; it depends on two optical parameters: the background light (BL) and the transmission map (TM). Underwater images can also be enhanced by color and contrast correction from an image processing perspective. In this paper, we propose an effective underwater image enhancement method that combines underwater image restoration and color correction. First, a manually annotated background lights (MABLs) database is developed. With reference to the relationship between MABLs and the histogram distributions of various underwater images, robust statistical models for BL estimation are provided. Next, the TM of the R channel is roughly estimated based on the new underwater dark channel prior (NUDCP) derived from statistics of clear, high-resolution (HD) underwater images; a scene depth map based on the underwater light attenuation prior (ULAP) and an adjusted reversed saturation map (ARSM) are then applied to compensate and refine the coarse TM of the R channel. The TMs of the G-B channels are subsequently estimated from the difference in attenuation ratios between the R and G-B channels. Finally, to improve the color and contrast of the restored image with a dehazed and natural appearance, a variant of white balance is introduced as post-processing. To guide priorities in underwater image enhancement, extensive evaluations are conducted to discuss the impact of the key parameters, including BL and TM, and the importance of the color correction. Comparisons with state-of-the-art methods demonstrate that our proposed underwater image enhancement method achieves more accurate BL estimates, lower computation time, overall superior performance, and better information retention.
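The hazy image formation model referred to here is, per channel c, I_c(x) = J_c(x)·t_c(x) + B_c·(1 − t_c(x)); once BL and TM are estimated, restoration inverts it. A minimal sketch of that inversion (the lower clipping threshold on t is a common heuristic to avoid noise amplification, not a value from the paper):

```python
import numpy as np

def restore(I, B, t, t_min=0.1):
    """Invert the hazy image formation model I = J*t + B*(1 - t) for one
    channel: J = (I - B) / t + B. The transmission t is clipped below at
    t_min so that near-zero transmission does not blow up sensor noise."""
    t = np.clip(t, t_min, 1.0)
    return (I - B) / t + B
```

Running the forward model on a known scene radiance J and then calling `restore` recovers J exactly wherever t stays above the clipping threshold, which is a convenient unit test for any implementation.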
•Notable spatio-temporal patterns of meteorological influences on PM2.5 concentrations.
•Comparison of major methods for quantifying PM2.5-meteorology interactions.
•Interaction mechanisms between PM2.5 concentrations and eight meteorological factors.
•Challenges for better understanding meteorological influences on PM2.5 concentrations.
•Major meteorological means for reducing PM2.5 concentrations.
Air pollution over China has attracted wide interest from the public and the academic community. PM2.5 is the primary air pollutant across China, and quantifying the interactions between meteorological conditions and PM2.5 concentrations is essential for understanding the variability of PM2.5 and for seeking methods to control it. Since 2013, PM2.5 has been widely measured at 1436 stations across the country, and more than 300 papers focusing on PM2.5-meteorology interactions have been published. This article is a comprehensive review of the meteorological influences on PM2.5 concentrations. We start with an introduction to general meteorological conditions and PM2.5 concentrations across China, followed by the seasonal and spatial variations of meteorological influences on PM2.5 concentrations. Next, the major methods used to quantify meteorological influences on PM2.5 concentrations are examined and compared. We find that causality analysis methods are more suitable for extracting the influence of individual meteorological factors, whilst statistical models are good at quantifying the overall effect of multiple meteorological factors on PM2.5 concentrations. Chemical Transport Models (CTMs) have the potential to provide dynamic estimation of PM2.5 concentrations by considering anthropogenic emissions and the transport and evolution of pollutants. We then comprehensively examine the mechanisms by which major meteorological factors may impact PM2.5 concentrations, including the dispersion, growth, chemical production, photolysis, and deposition of PM2.5. The feedback effects of PM2.5 concentrations on meteorological factors are also carefully examined. Based on this review, we finally make suggestions on future research and on the major meteorological approaches for mitigating PM2.5 pollution.
Over the last decade, the Super Dual Auroral Radar Network (SuperDARN) has undergone a dramatic expansion in the Northern Hemisphere with the addition of more than a dozen radars offering improved coverage at mid‐latitudes (50°–60° magnetic latitude) and in the polar cap (80°–90° magnetic latitude). In this study, we derive a statistical model of ionospheric convection (TS18) using line‐of‐sight velocity measurements from the complete network of mid‐latitude, high‐latitude, and polar radars for the years 2010–2016. These climatological patterns are organized by solar wind, interplanetary magnetic field (IMF), and dipole tilt angle conditions. We find that for weak solar wind driving conditions the TS18 model patterns are largely similar to the average patterns obtained using high‐latitude radar data only. For stronger solar wind driving the inclusion of mid‐latitude radar data at the equatorward extent of the ionospheric convection can increase the measured cross‐polar cap potential (ΦPC) by as much as 40%. We also derive an alternative model organized by the Kp index to better characterize the statistical convection under a range of magnetic activity conditions. These Kp patterns exhibit similar IMF By dependencies as the TS18 model results and demonstrate a linear increase in ΦPC with increasing Kp for a given IMF orientation. Overall, the mid‐latitude radars provide a better specification of the flows within the nightside Harang reversal region for moderate to strong solar wind driving or geomagnetic activity, while the polar radars improve the quality of velocity measurements in the deep polar cap under all conditions.
Key Points
We derive an empirical model of ionospheric convection including mid‐latitude and polar SuperDARN HF radar velocity measurements for the first time
Inclusion of mid‐latitude radar data can increase the total measured cross‐polar cap potential drop by as much as 40%
Model provides a better specification of plasma flows in the deep polar cap and nightside Harang reversal region
A prediction model based on XGBoost is proposed for the kinetic constants of ultrasonic micropollutant degradation. After parameter optimization through iteration, the model achieves evaluation metrics of R2 = 0.99 and SMAPE = 2.06%. The impact of design parameters on predicting kinetic constants for ultrasonic degradation of trace pollutants was assessed using Shapley additive explanations (SHAP). Results indicate that power density and frequency significantly impact the predictive performance. The database was therefore sorted by power density and frequency values, and the 800 raw data points were split into small databases of 200 each. After confirming that reducing the database size does not affect prediction accuracy, ultrasonic degradation experiments were conducted for five pollutants, yielding experimental data. A small database whose experimental conditions fell within the relevant numerical range was selected; data meeting both feature conditions were filtered, resulting in an optimized 60-data group. After incorporating the experimental data, a model was trained for prediction. Degradation kinetic constants from the experiments (kE) were compared with the predicted constants (kP-800 for the model trained on 800 data and kP-60 for the model trained on 60 data). Results showed that ibuprofen, bisphenol A, carbamazepine, and 17β-estradiol performed better with the 60-data group (kP-60/kE: 1.00, 0.99, 1.00, 1.00), while caffeine suited the model trained on the 800-data group (kP-800/kE: 1.02).
•Database of kinetic constants for micropollutant degradation by ultrasound
•Comparative analysis of US degradation kinetic constant prediction using ML and MLR models
•Validation of ML models with experimental data
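The quoted fit quality rests on two scalar metrics, R2 and SMAPE. Their usual definitions can be computed as below; note that several SMAPE conventions exist, and the percentage form used here (symmetric mean of absolute values in the denominator) is an assumption rather than a definition taken from the paper:

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent:
    100 * mean( 2*|y_pred - y_true| / (|y_true| + |y_pred|) )."""
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) /
                           (np.abs(y_true) + np.abs(y_pred)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - RSS / TSS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction gives SMAPE = 0% and R2 = 1, matching the near-perfect figures reported for the optimized model.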
Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and that a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as the time interval between the two fingerprints in comparison, the subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease as the time interval between the two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings nevertheless tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty in the temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from the 10-finger fusion analysis coincide with those from the single-finger analysis.
Summary
The unique structure and foundation of a dam make its safety monitoring a complex task. Deformation, the most intuitive effect quantity of a dam, contains important information on its evolution; comparing the actual response with model predictions serves the purposes of diagnosis and early warning. Given the poor generalization ability of conventional statistical models, establishing an improved dam deformation monitoring model is essential. The prediction of concrete dam deformation using a statistical model and a random forest regression (RFR) model is studied. To build an optimized RFR model, the statistical model is used to establish the input variables, the appropriate parameters Mtry and Ntree are selected according to the out‐of‐bag error, and strong explanatory variables are extracted. The advantage of this model is that the influencing factors describe concrete dam deformation well, while RF serves as a sensible new data mining tool. The importance of variables for deformation prediction is measured by RF, and the RFR method can extract representative influencing factors based on variable importance. The methods are applied to an actual concrete dam. Results indicate that the RFR model can also be applied to the analysis and prediction of other structural behavior.
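Conventional statistical models for concrete dam deformation are typically of the hydrostatic-seasonal-time (HST) type, and the RFR input variables described above can be built from the same terms. A sketch of one common HST feature set (the exact polynomial degree and ageing terms vary between dams, so this particular choice is an assumption):

```python
import numpy as np

def hst_features(h, t):
    """Design matrix of a classical HST statistical model for dam
    deformation: hydrostatic terms (polynomial in reservoir level h),
    seasonal terms (annual and semi-annual harmonics of time t in days),
    and time-effect (ageing) terms."""
    h = np.asarray(h, dtype=float)
    t = np.asarray(t, dtype=float)
    w = 2.0 * np.pi * t / 365.25            # annual angular phase
    return np.column_stack([
        h, h**2, h**3, h**4,                # hydrostatic component
        np.sin(w), np.cos(w),               # annual harmonic
        np.sin(2 * w), np.cos(2 * w),       # semi-annual harmonic
        t / 365.25, np.log1p(t / 365.25),   # ageing component
    ])
```

Feeding these columns to an RFR and ranking them by variable importance reproduces the variable-screening step the abstract describes.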
Under periods of strong solar wind driving, the magnetopause can become compressed, playing a significant role in draining electrons from the outer radiation belt. Also termed “magnetopause shadowing,” this loss process has traditionally been attributed to a combination of magnetospheric compression and outward radial diffusion of electrons. However, the drift paths of relativistic electrons and the location of the magnetopause are usually calculated from statistical models and, as such, may not represent the time‐varying nature of this highly dynamic process. In this study, we construct a database of ∼20,000 spacecraft crossings of the dayside magnetopause to quantify the accuracy of the commonly used Shue et al. (1998, https://doi.org/10.1029/98JA01103) model. We find that, for the majority of events (74%), the magnetopause model can be used to estimate the magnetopause location to within ±1 RE. However, when the magnetopause is compressed below 8 RE, the observed magnetopause is on average more than 1 RE inside the model location. The observed magnetopause is also significantly displaced from the model location during storm sudden commencements, when measurements are on average 6% closer to the radiation belts, with a maximum of 42%. We find that the magnetopause is rarely close enough to the outer radiation belt to cause direct magnetopause shadowing, and hence rapid outward radial transport of electrons is also required. We conclude that statistical magnetopause parameterizations may not be appropriate during dynamic compressions, and we suggest that statistical models should only be used during quiescent solar wind conditions and supplemented by magnetopause observations wherever possible.
Key Points
Measured magnetopause location is statistically closer to the Earth than Shue et al. (1998) modeled for storm sudden commencements (SYM‐H ≥15 nT)
When the magnetopause is compressed below 8 RE, the average measured location is >1 RE inside of the Shue et al. (1998) model location
Extreme magnetopause compressions rarely reach the outer radiation belt, therefore rapid outward radial transport is required to fully explain most shadowing events
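The Shue et al. (1998) model evaluated above is a two-parameter functional form, r(θ) = r0·(2/(1 + cos θ))^α, with the standoff distance r0 and flaring α driven by IMF Bz and solar wind dynamic pressure Dp. A sketch using the coefficients as we recall them from the published model (verify against the original paper before relying on them):

```python
import math

def shue1998_r0_alpha(bz, dp):
    """Subsolar standoff distance r0 (Earth radii) and flaring parameter
    alpha of the Shue et al. (1998) magnetopause model, from IMF Bz (nT)
    and solar wind dynamic pressure Dp (nPa)."""
    r0 = (10.22 + 1.29 * math.tanh(0.184 * (bz + 8.14))) * dp ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * bz) * (1.0 + 0.024 * math.log(dp))
    return r0, alpha

def shue1998_r(theta, bz, dp):
    """Magnetopause radial distance (Earth radii) at solar zenith
    angle theta (radians): r = r0 * (2 / (1 + cos(theta)))**alpha."""
    r0, alpha = shue1998_r0_alpha(bz, dp)
    return r0 * (2.0 / (1.0 + math.cos(theta))) ** alpha
```

The model's qualitative behavior matches the physics discussed above: stronger dynamic pressure and southward (negative) Bz both pull r0 earthward, which is exactly the regime in which the study finds the parameterization underestimates the compression.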
Statistical model improvement comprises model calibration, validation, and refinement techniques and aims to increase the accuracy of computational models. Although engineers in industrial fields are expanding the use of computational models in the product development process, many field engineers still hesitate to perform statistical model improvement because of practical obstacles. This paper therefore describes research aimed at addressing three practical issues that hinder statistical model improvement in industrial fields: (1) a lack of experimental data for quantifying the uncertainties of true responses, (2) numerous input variables for propagating uncertainties through the computational model, and (3) model form uncertainties in the computational model. Issues 1 and 2 concern difficulties in the uncertainty quantification of experimental and computational responses. Issue 3 focuses on model form uncertainties, which arise from the excessive simplification of computational modeling employed to reduce calculation cost. The paper then outlines solutions to these three issues, specifically: (1) kernel density estimation with estimated bounded data, (2–1) variance-based variable screening, (2–2) surrogate modeling, and (3) a model refinement approach. By examining the computational model of an automobile steering column, these techniques are shown to enable efficient statistical model improvement. This case study shows that the suggested approaches can substantially reduce the burden of statistical model improvement and increase the accuracy of computational modeling, thereby encouraging its use in industry.