Vegetation greening profoundly impacts the water cycle, and recent concerns about greening impacts have focused on various hydrological cycle components. However, the impacts of greening on catchment runoff signatures reflecting magnitude, low/high flow frequency, low/high flow duration, and flow dynamics remain poorly understood. To properly simulate these runoff signatures, we use five modified hydrological models incorporating vegetation dynamics and further derive three ensemble approaches to obtain eight runoff time series outputs in a major tributary of the Yellow River Basin. Multiple validations suggest that the log-based weighted ensemble (LWE) approach is robust for depicting the impact of greening on the selected runoff signatures, particularly for the low flow part of the runoff time series and the overall performance of the selected signatures, since LWE explicitly reduces the low flow bias. With this approach, five experiments were designed to isolate the impact of vegetation greening on runoff signatures. Comparisons among the experiments indicate that greening noticeably decreases runoff magnitude, increases low flow frequency/duration, and decreases high flow frequency/duration, whereas it has little influence on runoff dynamic signatures. Each percent increase in leaf area index results in (a) changes of −0.2 ± 0.1% for magnitude signatures; (b) changes of −0.34 ± 0.30% and 0.56 ± 0.28%, with wide ranges, for annual high flow days and annual low flow days, respectively; and (c) marginal changes in flow dynamic signatures. This study provides new insights by disentangling greening impacts on various runoff signatures using a trade-off ensemble method.
Key Points
Isolate vegetation greening impact on runoff signatures by comparing modeling experiments applying a log‐based weighted ensemble approach
Increasing leaf area index decreased flow magnitude, increased low flow duration, and resulted in little change in flow dynamic signatures
Flow frequency and duration signatures showed a greater response than magnitude signatures
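To make the log-based weighted ensemble concrete, the sketch below shows one plausible form of such a scheme in Python: each model's runoff series receives a weight derived from its error on log-transformed flows, which emphasizes low-flow skill. The inverse-error weighting, the function name, and the toy data are illustrative assumptions, not the exact formulation used in the study.

```python
import numpy as np

def log_weighted_ensemble(sim_runs, obs):
    """Combine model runoff series with weights based on log-flow skill.

    sim_runs : (n_models, n_time) simulated runoff from each model
    obs      : (n_time,) observed runoff over a calibration period

    Assumed scheme: weights are normalized inverse mean-squared errors
    of log-transformed flows, so low-flow bias is penalized strongly.
    """
    eps = 1e-6                            # avoid log(0) on zero-flow days
    log_obs = np.log(obs + eps)
    mse = np.array([np.mean((np.log(s + eps) - log_obs) ** 2)
                    for s in sim_runs])
    w = (1.0 / mse) / np.sum(1.0 / mse)   # inverse-error weights, sum to 1
    return w @ sim_runs                   # weighted ensemble runoff series

# toy example: three synthetic "models" scattered around observed flows
rng = np.random.default_rng(0)
obs = np.exp(rng.normal(1.0, 0.8, size=365))                  # skewed flows
sims = np.stack([obs * rng.lognormal(0.0, 0.2, 365) for _ in range(3)])
ens = log_weighted_ensemble(sims, obs)
```

Because errors are measured on log flows, a model that misses low flows badly is down-weighted even if its high-flow performance is good, which is consistent with the low-flow emphasis described above.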
Recently, LDPC codes have become a very important research area in wireless communication due to their ability to increase capacity in a wireless fading environment with low implementation complexity. In this paper, LDPC codes are combined with a Multi-User OFDM Orthogonal Chaotic Vector Shift Keying (MU-OFDM-OCVSK) communication system to improve BER performance over multi-path Rayleigh fading channels. Two types of LDPC decoder are introduced: the Log-Domain decoder and the Min-Sum decoder. The system is simulated in MATLAB R2019a for different scenarios, including different numbers of iterations, block lengths, numbers of users, and spreading factors. The results show that a coding gain in the range of 4.5–7 dB is achieved between the coded and uncoded MU-OFDM-OCVSK systems. The results also show that the Min-Sum decoder outperforms the Log-Domain decoder in all scenarios.
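For readers unfamiliar with the second decoder named above, the sketch below implements textbook min-sum decoding in Python (the paper's simulations are in MATLAB; this is an independent illustration, and the small parity-check matrix is a toy example, not the code used in the system).

```python
import numpy as np

def min_sum_decode(H, llr, max_iter=20):
    """Min-sum LDPC decoding (check degrees >= 2 assumed).

    H   : (m, n) binary parity-check matrix
    llr : (n,) channel log-likelihood ratios (positive favours bit 0)
    """
    m, n = H.shape
    E = np.zeros((m, n))                  # check-to-variable messages
    M = np.where(H == 1, llr, 0.0)        # variable-to-check messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        for i in range(m):                # check-node update: sign product
            vs = np.flatnonzero(H[i])     # times minimum magnitude
            for v in vs:
                others = vs[vs != v]
                E[i, v] = (np.prod(np.sign(M[i, others]))
                           * np.min(np.abs(M[i, others])))
        total = llr + E.sum(axis=0)       # a-posteriori LLRs
        hard = (total < 0).astype(int)
        if not np.any((H @ hard) % 2):    # all parity checks satisfied
            break
        for i in range(m):                # variable-node update
            vs = np.flatnonzero(H[i])
            M[i, vs] = total[vs] - E[i, vs]
    return hard

# toy (7,4) code: all-zero codeword with one unreliable bit is corrected
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([2.0, 2.0, 2.0, -0.4, 2.0, 2.0, 2.0])
print(min_sum_decode(H, llr))             # -> [0 0 0 0 0 0 0]
```

The min-sum rule replaces the hyperbolic-tangent computation of log-domain decoding with a minimum over message magnitudes, which is what makes it cheaper per iteration.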
Predicting system failures can be of great benefit to managers, who gain better command over system performance. The data that systems generate in the form of logs are a valuable source of information for predicting system reliability. As such, there is an increasing demand for tools to mine logs and provide accurate predictions. However, interpreting the information in logs poses some challenges. This study discusses how to effectively mine sequences of logs and provide correct predictions. The approach integrates different machine learning techniques to control for data brittleness, ensure sound model selection and validation, and increase the robustness of classification results. We apply the proposed approach to log sequences of 25 different applications of a software system for telemetry and performance of cars. On this system, we discuss the ability of three well-known support vector machine kernels (multilayer perceptron, radial basis function, and linear) to fit and predict defective log sequences. Our results show that a good analysis strategy provides stable, accurate predictions. Such a strategy must at least require a high fitting ability of the models used for prediction. We demonstrate that such models give excellent predictions both on individual applications (e.g., 1% false positive rate, 94% true positive rate, and 95% precision) and across system applications (on average, 9% false positive rate, 78% true positive rate, and 95% precision). We also show that these results hold across different degrees of sequence defectiveness. To put our results in context, we compare them with recent studies in system log analysis. We conclude with recommendations drawn from our study.
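As a minimal illustration of the classifier comparison described above, the sketch below trains support vector machines with the three kernels on synthetic log-sequence features and reports the same metrics (false positive rate, true positive rate, precision). The feature construction is hypothetical; scikit-learn's "sigmoid" kernel is used as the usual stand-in for the multilayer perceptron kernel.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# hypothetical features: one row per log sequence (e.g., event counts),
# with a binary defective / non-defective label
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))
y = (X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 400) > 0).astype(int)

for kernel in ("sigmoid", "rbf", "linear"):     # MLP, RBF, linear kernels
    clf = SVC(kernel=kernel, gamma="scale")
    pred = cross_val_predict(clf, X, y, cv=5)   # guards against overfitting
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"{kernel:8s} FPR={fp / (fp + tn):.2f} "
          f"TPR={recall_score(y, pred):.2f} "
          f"precision={precision_score(y, pred):.2f}")
```

Cross-validated predictions, rather than fit-set scores, are what make the reported rates comparable across kernels.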
In this article, we compare two parallel systems of heterogeneous, independent Log-Lindley distributed components using the concept of matrix majorization. The comparisons are carried out with respect to the usual stochastic ordering when each component receives a random shock. It is proved that, for two parallel systems with a common shape parameter vector, the majorized matrix of the scale and shock parameters leads to better system reliability. Results comparing two parallel systems having heterogeneous, dependent Log-Lindley components are also presented in terms of the usual stochastic ordering.
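For orientation, the two standard ingredients of the comparison above can be written down directly: a parallel system survives as long as at least one component survives, and the usual stochastic order compares survival functions pointwise. In standard notation (the product form holds for the independent case only):

```latex
\[
T_{\mathrm{sys}} = \max(X_1,\dots,X_n), \qquad
\bar F_{\mathrm{sys}}(t) \;=\; \Pr(T_{\mathrm{sys}} > t) \;=\; 1 - \prod_{i=1}^{n} F_i(t),
\]
\[
X \le_{\mathrm{st}} Y \iff \Pr(X > t) \le \Pr(Y > t) \quad \text{for all } t.
\]
```

Thus "better system reliability" in the stochastic-order sense means a larger survival probability at every mission time t.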
In pioneering work in integral geometry, Lutwak, Xi, Yang, and Zhang [19] recently established the variational formula for the chord integral I_q(K), for q>0, and defined the q-th chord measure. They also provided necessary and sufficient conditions for the existence of a solution to the chord Minkowski problem, for q>0, and sufficient conditions to solve the symmetric case of the chord log-Minkowski problem when 1≤q≤n+1. It is well known that for q>1, I_q(K) is the Riesz potential of the characteristic function of the convex body K. In the case 0<q<1, we discover a new relationship between nonlocal energy and the chord integral I_q(K), which yields a representation formula for I_q(K) in terms of Riesz potentials, and we use this representation to give a new definition of the chord measure for 0<q<1. Finally, we solve the symmetric case of the chord log-Minkowski problem under a sufficient condition for 0<q<1.
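For context, the Riesz potential representation mentioned above can be written schematically as follows, where c_{n,q} denotes a dimensional constant (our shorthand; the precise normalization is in the cited work):

```latex
\[
I_q(K) \;=\; c_{n,q} \int_K \int_K |x - y|^{\,q - n - 1} \, dx \, dy, \qquad q > 1.
\]
```

For 0<q<1 the exponent q-n-1 lies below -n, so this double integral diverges near the diagonal, which is why a different, nonlocal-energy representation is needed in that range.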
In this article, we concentrate on modelling heavy-tailed data which can be subject to left-truncation. We modify an existing procedure for modelling left-truncated data via a compound non-homogeneous Poisson process to make it systematically applicable in the context of heavy-tailed data. The introduced procedure can be applied when the underlying severities of the process follow Burr type XII, Generalised Pareto, and Generalised Extreme Value distributions, using the Maximum Product of Spacings (MPS) parameter estimation technique. As a natural consequence of the MPS technique, we consider how Moran's log-spacings statistic for testing the goodness-of-fit of the severity distributions can be adapted to suit left-truncated data. Thereafter, we compare the performance of this new fitting procedure against traditional maximum likelihood estimation in the context of natural catastrophe loss data, and find evidence in favour of MPS. Within the context of these data, we also compare our procedure to one that does not account for left-truncation. We end our contribution by proposing, for our modelling procedure, a Monte Carlo importance sampling algorithm which ensures that large losses are satisfactorily simulated. In closing, we illustrate the potential usage of both the new fitting and simulation procedures by presenting catastrophe bond prices with a trigger based on the analysed heavy-tailed data.
•A rigorous methodology for modelling heavy-tailed left-truncated data is introduced.
•A modification of the maximum product of spacings estimation technique is studied.
•An importance-sampling algorithm for heavy-tailed distributions is proposed.
•Catastrophe bond prices with the trigger based on the PCS loss index are presented.
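As a concrete illustration of the estimation technique highlighted above, the sketch below fits a Generalised Pareto distribution by the maximum product of spacings; Moran's log-spacings statistic is simply the optimised objective value. This is plain MPS on complete data: the left-truncation adjustment developed in the article is not included, and the starting values are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genpareto

def neg_log_mps(params, x):
    """Negative Moran objective: minus the sum of log CDF spacings."""
    c, scale = params
    if scale <= 0:
        return np.inf                                  # inadmissible scale
    u = genpareto.cdf(np.sort(x), c, loc=0.0, scale=scale)
    d = np.diff(np.concatenate(([0.0], u, [1.0])))     # n + 1 spacings
    return -np.sum(np.log(np.clip(d, 1e-300, None)))   # guard zero spacings

x = genpareto.rvs(0.3, scale=2.0, size=500, random_state=1)
fit = minimize(neg_log_mps, x0=[0.1, 1.0], args=(x,), method="Nelder-Mead")
c_hat, scale_hat = fit.x
moran_stat = fit.fun    # Moran's statistic at the MPS estimate
```

Unlike maximum likelihood, the spacings objective stays bounded even when the likelihood is unbounded, which is one reason MPS is attractive for heavy-tailed severities.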
Underwater towed vehicle navigation and localization in deep-sea environments are particularly challenging without high-precision inertial sensors. In this paper, a multi-sensor navigation fusion method for a towed underwater vehicle based on an ultrashort baseline (USBL) system, a Doppler velocity log (DVL), and a pressure gauge in the deep sea is investigated. Combined with GPS real-time tide detection technology, the positioning principle of the USBL based on the depth constraint is given, and a straight-line model is proposed to estimate the position of the vehicle using the average heading when the underwater towed vehicle is working normally in the deep sea without DVL. An interacting multiple model (IMM) algorithm based on a smoothing algorithm is proposed to improve the positioning accuracy in the deep sea, and the nonlinearity of the model and the precision and stability of filtering are considered simultaneously using an adaptive robust square-root cubature Kalman filter. The results of simulated and practical experiments at 4800 m verify that the new algorithm can make full use of high-precision observation information and greatly improve the positioning accuracy and robustness of a deep-sea underwater towed system.
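To make the straight-line model concrete, the toy sketch below dead-reckons the horizontal position from the average heading and an assumed tow speed between acoustic fixes, while the pressure-gauge depth pins down the vertical coordinate. The constant-speed assumption, frame convention, and names are illustrative, not the authors' exact formulation.

```python
import numpy as np

def straight_line_predict(p0, heading_deg, speed, dt, depth):
    """Dead-reckon a towed vehicle between USBL fixes (no DVL available).

    p0          : (x, y) last USBL fix in a local east-north frame [m]
    heading_deg : average heading over the gap [deg, clockwise from north]
    speed       : assumed tow speed over ground [m/s]
    dt          : time elapsed since the last fix [s]
    depth       : pressure-gauge depth, used as the vertical constraint [m]
    """
    h = np.deg2rad(heading_deg)
    x = p0[0] + speed * np.sin(h) * dt    # east displacement
    y = p0[1] + speed * np.cos(h) * dt    # north displacement
    return np.array([x, y, -depth])       # depth fixes the z coordinate

# example: 60 s after a fix at the origin, heading 045 deg, towing at 1.5 m/s
print(straight_line_predict((0.0, 0.0), 45.0, 1.5, 60.0, 4800.0))
```

In the full method this prediction would serve only as a motion model inside the IMM/filtering stage, with USBL fixes correcting the accumulated drift.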
We classify all finite energy solutions of an equation which arises as the Euler–Lagrange equation of a conformally invariant logarithmic Sobolev inequality on the sphere due to Beckner. Our proof uses an extension of the method of moving spheres from R^n to S^n and a classification result of Li and Zhu. Along the way we prove a small volume maximum principle and a strong maximum principle for the underlying operator, which is closely related to the logarithmic Laplacian.
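For context, the logarithmic Laplacian referred to above is commonly defined as the first-order term in the expansion of the fractional Laplacian at s = 0; we state this standard definition for orientation, without claiming it is the exact formulation used in the article:

```latex
\[
(-\Delta)^s u \;=\; u + s\, L_{\Delta} u + o(s) \qquad \text{as } s \to 0^{+}.
\]
```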
Geostatistics involves the fitting of spatially continuous models to spatially discrete data. Preferential sampling arises when the process that determines the data locations and the process being modelled are stochastically dependent. Conventional geostatistical methods assume, if only implicitly, that sampling is non-preferential. However, these methods are often used in situations where sampling is likely to be preferential. For example, in mineral exploration, samples may be concentrated in areas that are thought likely to yield high grade ore. We give a general expression for the likelihood function of preferentially sampled geostatistical data and describe how this can be evaluated approximately by using Monte Carlo methods. We present a model for preferential sampling and demonstrate through simulated examples that ignoring preferential sampling can lead to misleading inferences. We describe an application of the model to a set of biomonitoring data from Galicia, northern Spain, in which making allowance for preferential sampling materially changes the results of the analysis.
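To see why ignoring preferential sampling misleads, the short simulation below (our illustration, not the authors' code) draws a Gaussian process on a one-dimensional grid, samples locations with probability increasing in the process value, and compares the naive sample mean with the true field mean; the naive estimate is biased upward.

```python
import numpy as np

rng = np.random.default_rng(42)

# Gaussian random field on a 1-D grid with exponential covariance
grid = np.linspace(0.0, 1.0, 200)
cov = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 0.1)
S = rng.multivariate_normal(np.zeros(200), cov)

# preferential design: inclusion probability proportional to exp(beta * S)
beta = 2.0
p = np.exp(beta * S)
idx = rng.choice(200, size=30, replace=False, p=p / p.sum())

Y = S[idx] + rng.normal(0.0, 0.1, size=30)   # noisy measurements
print(f"true field mean:         {S.mean():+.3f}")
print(f"naive preferential mean: {Y.mean():+.3f}")   # biased upward
```

Here beta plays the role of the dependence parameter linking the sampling intensity to the latent field; beta = 0 recovers non-preferential sampling.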