•MRI radiomics analysis based on T2- and T1-weighted images may be useful for the prediction of the 1p/19q status in LGG.•Cubic and linear interpolation methods showed similar performance for the prediction of the 1p/19q status in LGG.•The proposed algorithm has a satisfactory clinical utility value for screening patients with 1p/19q non-co-deletion status.
The 1p/19q co-deletion status has been demonstrated to be a prognostic biomarker in lower grade glioma (LGG). The objective of this study was to build a magnetic resonance imaging (MRI)-derived radiomics model to predict the 1p/19q co-deletion status.
A total of 209 pathology-confirmed LGG patients from two different datasets from The Cancer Imaging Archive were retrospectively reviewed: one dataset of 159 patients served as the training and discovery dataset, and the other of 50 patients served as the validation dataset.
Radiomics features were extracted from T2-weighted and post-contrast T1-weighted MRI data resampled using linear and cubic interpolation methods.
For each voxel resampling method, a three-step approach was used for feature selection, and a random forest (RF) classifier was trained on the training dataset. Model performance was evaluated on the training and validation datasets, and clinical utility indexes (CUIs) were computed. The distributions and intercorrelations of the selected features were analyzed.
Seven radiomics features were selected from the cubic-interpolated features and five from the linear-interpolated features on the training dataset. The RF classifier showed similar performance for the cubic and linear interpolation methods on the training dataset, with accuracies of 0.81 (0.75−0.86) and 0.76 (0.71−0.82), respectively; on the validation dataset, accuracy dropped to 0.72 (0.60−0.82) using cubic interpolation and 0.72 (0.60−0.84) using linear resampling. The CUIs showed that the model achieved satisfactory negative values (0.605 for cubic interpolation and 0.569 for linear interpolation).
MRI has the potential for predicting the 1p/19q status in LGGs. Both cubic and linear interpolation methods showed similar performance in external validation.
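As an illustration only (not the authors' pipeline), the linear-versus-cubic voxel resampling step compared in this study can be sketched with `scipy.ndimage.zoom` on a toy volume, assuming SciPy is available; the volume size and spacings below are made up:

```python
import numpy as np
from scipy.ndimage import zoom

# Toy 3D "MRI" volume; radiomics pipelines typically resample to isotropic
# voxels before feature extraction. Sizes and spacings here are illustrative.
rng = np.random.default_rng(0)
vol = rng.random((20, 20, 10))      # e.g. 1 x 1 x 2 mm voxels
factors = (1.0, 1.0, 2.0)           # resample to 1 x 1 x 1 mm

lin = zoom(vol, factors, order=1)   # linear interpolation
cub = zoom(vol, factors, order=3)   # cubic spline interpolation

print(lin.shape, cub.shape)         # both (20, 20, 20)
print(float(np.mean(np.abs(lin - cub))))  # small, but non-zero
```

The two methods produce volumes of identical shape that differ slightly voxel-wise, which is why the extracted radiomics features (and hence the selected feature sets) can differ between interpolators.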
The current coal spontaneous combustion fire risk evaluation methods suffer from a single evaluation dimension, incomplete evaluation indexes, and unreliable evaluation results, yet a sound coal spontaneous combustion fire risk evaluation can effectively guide coal mine fire prevention and control. Therefore, a multi-indicator quantitative risk evaluation method combining the analytic hierarchy process (AHP) and linear interpolation for the different periods of coal spontaneous combustion is proposed to improve the rationality of coal spontaneous combustion fire risk evaluation. A continuous segmental evaluation model was established based on the factors that influence the three periods of coal spontaneous combustion (latent period, self-heating period, and combustion period). The latent period evaluation model was established based on the fire-prone nature of the coal, coal seam occurrence, mining technology, and fire prevention measures. The self-heating period evaluation model was established based on the degree of self-heating of the coal, absolute CO generation, and the Graham index. In the combustion period, a spontaneous combustion fire has already occurred, so the fire risk score is set directly to 0. The score fully reflects the level of coal spontaneous combustion fire risk: the higher the score, the smaller the risk of coal spontaneous combustion, and vice versa. Using this risk evaluation model, the risk of coal spontaneous combustion in the 2−2 coal seam of a coal mine in Shaanxi Province was evaluated. The results show that the coal spontaneous combustion risk level of the 2−2 coal seam in this mine is in a relatively safe state, consistent with the actual situation. The multi-indicator quantitative evaluation of the different periods of coal spontaneous combustion effectively improves the accuracy of fire risk evaluation; the evaluation model therefore offers theoretical guidance for controlling coal spontaneous combustion risk in coal mines.
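The linear-interpolation scoring step described above can be sketched as follows. All indicator names, breakpoints, measured values, and AHP weights below are hypothetical illustrations, not values from the study:

```python
import numpy as np

def indicator_score(x, breakpoints, scores):
    """Map a measured indicator to a 0-100 risk score by linear interpolation
    between grading breakpoints (higher score = lower risk)."""
    return float(np.interp(x, breakpoints, scores))

# Hypothetical self-heating-period indicators and grading tables:
co_score     = indicator_score(25.0, [0, 50, 100], [100, 60, 0])    # absolute CO generation
graham_score = indicator_score(0.2, [0.0, 0.5, 1.0], [100, 50, 0])  # Graham index
temp_score   = indicator_score(40.0, [20, 70, 120], [100, 40, 0])   # degree of self-heating

# Hypothetical AHP-derived weights (normalized to sum to 1):
weights = np.array([0.4, 0.35, 0.25])
total = float(weights @ np.array([co_score, graham_score, temp_score]))
print(round(total, 1))  # 79.0 -> higher score, lower spontaneous-combustion risk
```

In the combustion period, the model above would be bypassed and the score set directly to 0, as the abstract describes.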
Spatial predictions of drift deposits on the soil surface were conducted using eight different spatial interpolation methods: classical approaches such as the Thiessen method and kriging, and more advanced methods such as spatial vine copulas, Karhunen-Loève expansion, and integrated nested Laplace approximation (INLA). To investigate the impact of the number of locations on the prediction, all spatial predictions were conducted using sets of 39 and 47 locations, respectively. The analysis revealed that taking more locations into account increases the accuracy of the prediction and better models the extreme behavior of the data. Leave-one-out cross-validation was used to assess prediction accuracy. The Thiessen method had the highest prediction errors among all tested methods. Linear interpolation methods were better able to reproduce the extreme behavior in the first meters from the sprayed border and exhibited lower prediction errors as the number of data points increased. The spatial copula method in particular showed an obvious increase in prediction accuracy. The Karhunen-Loève expansion provided results similar to universal kriging and inverse distance weighting (IDW), although it showed a stronger change in the prediction as the number of locations increased. INLA predicted the pesticide dispersion to be smooth over the whole study area. Using a Delaunay triangulation of the study area, the total pesticide concentration was estimated to be between 2.06% and 2.97% of the total Uranine applied. This work is a first attempt to fully understand and model the uncertainties of the mass balance, providing a basis for future studies.
•Airborne pesticide drift deposition is important in non-mechanized farming.•An accurate spatial distribution of airborne deposits is needed for the mass balance.•Eight methods are presented and compared to assess the drift spatial distribution.•The key to better modeling the extreme data is the number of locations.
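To make one of the compared classical interpolators concrete, here is a minimal inverse distance weighting (IDW) sketch together with the leave-one-out cross-validation used for method comparison; the coordinates and deposit values are made up:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighting prediction at a single query location."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if d.min() < eps:                    # query coincides with a data point
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_known) / np.sum(w))

# Made-up deposit measurements at four sampling locations:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([2.0, 4.0, 6.0, 8.0])

print(idw(pts, z, np.array([0.5, 0.5])))  # symmetric point -> mean = 5.0
print(idw(pts, z, pts[0]))                # exact data point -> 2.0

# Leave-one-out cross-validation error, as used to rank the methods:
loo = [abs(z[i] - idw(np.delete(pts, i, 0), np.delete(z, i), pts[i]))
       for i in range(len(z))]
print(round(float(np.mean(loo)), 3))
```

The same leave-one-out loop can wrap any of the eight interpolators, which is what allows a like-for-like comparison of prediction errors.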
Removal and restoration of hair and hair-like regions within skin lesion images is needed so that features within lesions can be more effectively analyzed for discriminating benign from cancerous lesions. The paper uses the preservation of melanoma texture as the rationale for the proposed hair detection and repair techniques, hair occlusion being an important obstacle in skin lesion analysis. A comparative study of state-of-the-art hair-repair methods is presented, together with a novel algorithm based on morphological and fast marching schemes. The hair-repair techniques are evaluated in terms of computational cost, detection performance, and tumor-disturb pattern (TDP). The comparisons cover (i) linear interpolation, inpainting by (ii) non-linear partial differential equation (PDE), and (iii) exemplar-based repairing techniques. The performance analysis of hair detection quality was based on the hair detection error (HDE), quantified by statistical metrics against hair lines manually delineated by a dermatologist as the ground truth. Results are presented on a set of 100 dermoscopic images. For the two characteristics measured in the experiments, the best method is the fast marching hair removal algorithm (HDE: 2.98%, TDP: 4.21%). The proposed algorithm repairs melanoma texture in a way that is consistent with human vision. The comparison results indicate that the hair-repair algorithm based on the fast marching method achieves accurate results.
Signal processing is an essential process in every mobile system. Standard signal processing runs at a fixed rate, which causes a pointless rise in system processing activity. Consequently, adaptive-rate signal acquisition, segmentation, and denoising tactics are proposed. The system regulates parameters such as the acquisition rate and the denoising filter order by following the temporal variations of the incoming signal, providing adequate tuning of the system processing activity. A speech database is employed to evaluate and compare the performance of the proposed solution with that of the traditional fixed-rate counterpart. Results demonstrate that the proposed method achieves a gain of more than two orders of magnitude over conventional counterparts while delivering similar output quality. This confirms the benefit of integrating the designed solution into current mobile systems to improve their computational efficiency.
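The adaptive idea, tuning the processing rate to the temporal variation of the incoming signal, can be sketched as follows; the activity measure, thresholds, and rate ladder are all illustrative assumptions, not the paper's design:

```python
import numpy as np

def pick_rate(frame, low_thr=0.01, high_thr=0.1):
    """Choose a relative processing rate from the frame's activity,
    estimated here as the spread of sample-to-sample differences."""
    activity = np.std(np.diff(frame))
    if activity < low_thr:
        return 0.25   # heavy decimation for near-silent frames
    if activity < high_thr:
        return 0.5
    return 1.0        # full rate for rapidly varying frames

t = np.linspace(0, 1, 1000)
quiet = np.full(500, 0.2)                   # flat (silence-like) segment
active = np.sin(2 * np.pi * 50 * t[:500])   # fast-varying segment
print(pick_rate(quiet), pick_rate(active))  # 0.25 1.0
```

A denoising filter order could be selected from the same activity estimate, so that quiet frames receive both fewer samples and cheaper filtering.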
Pulse shaping is an important step in the unfolding-synthesis technique. In this paper we present efficient digital pulse-shaping algorithms that utilize repeated-sum polynomials. These algorithms address the most common constraint in pulse shape synthesis: the finite duration of the pulses. The presented digital methods for efficient real-time synthesis use only basic digital signal processing functions (addition, constant multiplication, and shift), thereby minimizing required signal processing resources. A differentiation technique to decompose pulse shapes defined by polynomials is presented and used to synthesize arbitrary trapezoidal/triangular pulse shapes. The synthesis of rational, exponential, trigonometric and other non-polynomial defined pulse shapes can be approximated in real time. A methodology to approximate non-polynomial defined pulse shapes is described, and Gaussian and sinusoidal pulse shapes are synthesized via polynomial approximation and linear interpolation. The pulse shape synthesis algorithms are presented in recursive form and are suitable for efficient implementation by using integer only arithmetic.
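A classic instance of such addition-only shaping is a trapezoid built from repeated running sums: convolving two boxcar (rectangular) responses of lengths L and M yields a trapezoid with rise min(L, M) and a flat top of |L − M| + 1 samples, and each boxcar is just a running sum that needs one addition and one subtraction per sample. The sketch below (lengths are illustrative) uses `np.convolve` for brevity rather than the recursive form:

```python
import numpy as np

def trapezoid(L, M):
    """Impulse response of two cascaded running sums of lengths L and M.
    Equal lengths (L == M) give a triangle; unequal lengths give a trapezoid."""
    impulse = np.array([1.0])
    box = lambda x, n: np.convolve(x, np.ones(n))  # running sum of length n
    return box(box(impulse, L), M)

shape = trapezoid(4, 6)
print(shape)  # [1. 2. 3. 4. 4. 4. 3. 2. 1.]
```

In a real-time recursive implementation each running sum is updated as `acc += x[n] - x[n - L]`, which uses only integer additions, matching the resource constraints described above.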
This article introduces a new discrete-time (DT) charge-sharing (CS) low-pass filter (LPF) that achieves high-order filtering and improves its stopband rejection while maintaining a reasonable duty cycle of the main clock at 20%. It proposes two key innovations: 1) a linear interpolation of the sampling capacitor and 2) a charge re-circulation of the history capacitors for deep stopband rejection. Fabricated in 28-nm CMOS, the proposed IIR LPF demonstrates a 1-9.9-MHz bandwidth (BW) programmability and achieves a record-high 120-dB stopband rejection at 100 MHz while consuming merely 0.92 mW. The in/out-of-band IIP3 is +17.7 dBm/+26.6 dBm, and the input-referred noise is 3.5 nV/√Hz.
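As background for readers unfamiliar with charge-sharing filters, the core first-order recursion behind such DT IIR LPFs can be modeled as below; the capacitor values are illustrative, and the fabricated high-order filter with interpolated sampling and charge re-circulation is far more elaborate than this sketch:

```python
import numpy as np

def cs_lpf(x, C_H=9.0, C_S=1.0):
    """First-order charge-sharing IIR model: each clock cycle a sampling
    capacitor C_S shares charge with a history capacitor C_H, giving
    y[n] = (C_H*y[n-1] + C_S*x[n]) / (C_H + C_S)."""
    a = C_H / (C_H + C_S)          # pole location, here 0.9
    y, out = 0.0, []
    for v in x:
        y = a * y + (1.0 - a) * v  # charge-sharing update
        out.append(y)
    return np.array(out)

step = cs_lpf(np.ones(200))
print(round(float(step[-1]), 6))   # settles to the DC input value, ~1.0
```

The capacitor ratio sets the pole (and hence the bandwidth), which is how a switched-capacitor design achieves bandwidth programmability without precise absolute component values.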
The wideband ambiguity function (WBAF) plays an important role in radar, sonar, and GPS. However, the computational complexity of the conventional algorithms for the WBAF is high. Moreover, because the signal is discretized, the values between two sampling points of the sampled signal are unknown. These situations cause significant difficulty for the computation of the WBAF as the Doppler stretch changes. To solve these problems, an algorithm based on linear interpolation is proposed to estimate the WBAF. According to the values of the Doppler stretch and time delay, the interpolation locations can be determined. Based on these interpolation locations, the algorithm recovers the signal values that are unknown but essential for the WBAF by linear interpolation, and then uses the recovered and original sampled values to estimate the WBAF. Because of the linear interpolation, the algorithm needs neither a large bank of matched filters nor the conventional multirate sampling method, so its computational complexity can be reduced greatly. We analyzed the estimation error of the WBAF to examine the performance of the algorithm and found that it is mainly determined by the linear interpolation error, which in turn depends on the time delay, Doppler stretch, and sampling frequency. The estimation error is acceptable at high sampling frequencies. Numerical experiments verified the validity of the algorithm.
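A minimal sketch of the interpolation idea, under one common real-signal WBAF convention (√s scaling with x(s(t − τ))) and with an illustrative test signal; this is not the paper's algorithm, only the core trick of recovering the stretched/delayed samples with linear interpolation instead of a bank of matched filters:

```python
import numpy as np

def wbaf(x, fs, tau, s):
    """Estimate the wideband ambiguity function of a real signal at one
    (time delay tau, Doppler stretch s) point. The stretched/delayed samples
    x(s*(t - tau)) fall between sampling instants, so they are recovered by
    linear interpolation of the original samples."""
    t = np.arange(len(x)) / fs
    x_warp = np.interp(s * (t - tau), t, x, left=0.0, right=0.0)
    return np.sqrt(s) * np.sum(x * x_warp) / fs

# Illustrative test signal: a Gaussian-windowed 60 Hz tone.
fs = 1000.0
t = np.arange(1024) / fs
x = np.exp(-((t - 0.5) ** 2) / 0.01) * np.cos(2 * np.pi * 60 * t)

peak = wbaf(x, fs, tau=0.0, s=1.0)   # no delay, no stretch: signal energy
print(round(float(peak - np.sum(x ** 2) / fs), 12))  # matched point, diff ≈ 0
print(wbaf(x, fs, tau=0.0, s=1.1) < peak)            # mismatched stretch: smaller
```

Each (τ, s) evaluation costs one interpolation pass and one inner product over the samples, which is the source of the complexity reduction claimed above.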