In the above article <xref ref-type="bibr" rid="ref1">1</xref>, Table I(b) cited an incorrect reference number. Reference 12 should have been given as 13, provided here as <xref ref-type="bibr" rid="ref2">2</xref>.
In the above article <xref ref-type="bibr" rid="ref1">1</xref>, an error in the units of measurement in <xref rid="fig1" ref-type="fig">Fig. 12</xref> was identified. This error does not invalidate the discussion of the article's findings and conclusions, as they were put in the form of a comparison among different feeders. <xref rid="fig1" ref-type="fig">Fig. 12</xref> with the corrected measurement units is included in this correction document, together with an explanation of the error and a calculation example.
In this article, a novel direct geometric algorithm for the worst-case error analysis of six-port reflectometers is proposed. Due to the inevitable power measurement uncertainty, a six-port reflectometer shows an inherent deviation that can be represented as an irregular area based on the classical geometrical theory. The previous worst-case error analysis geometric algorithm approximates the boundary of the irregular area with straight lines and compares several special points. Without such approximation, points on the boundary of the irregular error distribution area can be obtained directly in the proposed algorithm. After traversing all points, the maximum errors of the reflection coefficient in magnitude and phase over the entire Smith Chart can be determined accurately. The algorithm is first verified in simulation by comparison with the state-of-the-art approximating algorithm. After plotting the error distributions of five classical six-port models, a common conclusion is obtained: for an arbitrary six-port reflectometer, the area with the minimum error in the entire Smith Chart can be found geometrically. A practical six-port reflectometer system is set up, calibrated, and used to verify the algorithm. The consistent results show the validity of the proposed algorithm and its fundamental guiding significance for the design of specific six-port reflectometer applications.
This paper presents a review of the literature on state estimation (SE) in power systems. While covering works related to SE in transmission systems, the main focus of this paper is distribution system SE (DSSE). The critical topics of DSSE, including mathematical problem formulation, application of pseudo-measurements, metering instrument placement, network topology issues, impacts of renewable penetration, and cyber-security, are discussed. Both conventional and modern data-driven and probabilistic techniques are reviewed. This paper can provide researchers and utility engineers with insights into the technical achievements, barriers, and future research directions of DSSE.
For heterogeneous data sets containing numerical and symbolic feature values, feature selection based on fuzzy neighborhood multigranulation rough sets (FNMRS) is a significant step for preprocessing data and improving classification performance. This article presents an FNMRS-based feature selection approach in neighborhood decision systems. First, some concepts of fuzzy neighborhood rough sets and neighborhood multigranulation rough sets are given, and the FNMRS model is investigated to construct uncertainty measures. Second, the optimistic and pessimistic FNMRS models are built by using fuzzy neighborhood multigranulation lower and upper approximations from the algebraic view, and some fuzzy neighborhood entropy-based uncertainty measures are developed from the information view. Inspired by both the algebraic and information views of the FNMRS model, the fuzzy neighborhood pessimistic multigranulation entropy is proposed. Third, the Fisher score model is utilized to delete irrelevant features and thus decrease the complexity of high-dimensional data sets, and a forward feature selection algorithm is then provided to promote the performance of heterogeneous data classification. Experimental results on 12 data sets show that the presented model is effective for selecting important features with higher classification stability in neighborhood decision systems.
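The prefilter-then-wrapper pipeline described above can be sketched generically. The following Python sketch is a hypothetical illustration, not the paper's FNMRS method: it substitutes a plain Fisher-score prefilter and a leave-one-out 1-NN wrapper criterion for the fuzzy neighborhood entropy measure, and all function names are invented for illustration.

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher score per feature: between-class scatter over within-class scatter."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def loo_1nn_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbour accuracy, used as the wrapper criterion."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a sample may not vote for itself
    return np.mean(y[np.argmin(d, axis=1)] == y)

def forward_select(X, y, pre_k=10):
    """Fisher-score prefilter followed by greedy forward wrapper selection."""
    cand = list(np.argsort(fisher_scores(X, y))[::-1][:pre_k])
    chosen, best = [], -1.0
    while cand:
        scored = [(loo_1nn_accuracy(X[:, chosen + [j]], y), j) for j in cand]
        acc, j = max(scored)
        if acc <= best:                  # stop when no candidate improves the criterion
            break
        best = acc
        chosen.append(j)
        cand.remove(j)
    return chosen, best
```

The prefilter keeps the wrapper loop cheap on high-dimensional data, which mirrors the role the Fisher score plays in the abstract.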
Prior studies on covert communication with noise uncertainty adopted a worst-case approach from the warden's perspective. That is, the worst-case detection performance of the warden is used to assess covertness, which is overly optimistic. Instead of simply considering the worst limit, we take the distribution of noise uncertainty into account to evaluate the overall covertness in a statistical sense. Specifically, we define new metrics for measuring the covertness, which are then adopted to analyze the maximum achievable rate for a given covertness requirement under both bounded and unbounded noise uncertainty models.
Omnidirectional video records a scene in all directions around one central position. It allows users to select viewing content freely in all directions. Assuming that viewing directions are uniformly distributed, the isotropic observation space can be regarded as a sphere. Omnidirectional video is commonly represented by different projection formats with one or multiple planes. To measure the objective quality of omnidirectional video in observation space more accurately, a weighted-to-spherically-uniform quality evaluation method is proposed in this letter. The error of each pixel on the projection planes is multiplied by a weight to ensure the equivalent spherical area in observation space, in which pixels with equal mapped spherical area have the same influence on distortion measurement. Our method makes the quality evaluation results more accurate and reliable since it avoids error propagation caused by the conversion from resampling representation space to observation space. As an example of such a quality evaluation method, the weighted-to-spherically-uniform peak signal-to-noise ratio is described and its performance is experimentally analyzed.
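For the equirectangular projection, the weighting idea can be illustrated with the standard cosine-of-latitude weights. The NumPy sketch below (function name and interface assumed for illustration, not taken from the letter) computes a WS-PSNR-style score under that assumption:

```python
import numpy as np

def ws_psnr(ref, dist, max_val=255.0):
    """WS-PSNR-style metric for an equirectangular (ERP) frame.

    Each pixel's squared error is weighted by the spherical area its
    projection-plane sample covers; for ERP that weight is the cosine
    of the latitude of the pixel centre.
    """
    ref = np.asarray(ref, dtype=np.float64)
    dist = np.asarray(dist, dtype=np.float64)
    h = ref.shape[0]
    # Latitude of each row's pixel centre: 0 at the equator, +/-pi/2 at the poles.
    lat = ((np.arange(h) + 0.5) / h - 0.5) * np.pi
    w = np.cos(lat)
    # Broadcast the per-row weight to the full frame shape (works for gray or colour).
    shape = [1] * ref.ndim
    shape[0] = h
    W = np.broadcast_to(w.reshape(shape), ref.shape)
    wmse = np.sum(W * (ref - dist) ** 2) / np.sum(W)
    return 10.0 * np.log10(max_val ** 2 / wmse)
```

Because the weights shrink toward the poles, an error near a pole (which maps to a small spherical area) penalizes the score less than the same error at the equator.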
A popular approach for estimating an unknown signal <inline-formula> <tex-math notation="LaTeX"> \mathbf {x}_{0}\in \mathbb {R} ^{n} </tex-math></inline-formula> from noisy, linear measurements <inline-formula> <tex-math notation="LaTeX"> \mathbf {y}= \mathbf {A} \mathbf {x} _{0}+ \mathbf {z}\in \mathbb {R}^{m} </tex-math></inline-formula> is via solving a so-called regularized <inline-formula> <tex-math notation="LaTeX">M </tex-math></inline-formula>-estimator: <inline-formula> <tex-math notation="LaTeX">\hat{\mathbf {x}} :=\arg \min _ \mathbf {x} \mathcal {L} (\mathbf {y}- \mathbf {A} \mathbf {x})+\lambda f(\mathbf {x}) </tex-math></inline-formula>. Here, <inline-formula> <tex-math notation="LaTeX"> \mathcal {L} </tex-math></inline-formula> is a convex loss function, <inline-formula> <tex-math notation="LaTeX">f </tex-math></inline-formula> is a convex (typically, non-smooth) regularizer, and <inline-formula> <tex-math notation="LaTeX">\lambda > 0 </tex-math></inline-formula> is a regularizer parameter. We analyze the squared-error performance <inline-formula> <tex-math notation="LaTeX">\|\hat{\mathbf {x}} - \mathbf {x}_{0}\|_{2}^{2} </tex-math></inline-formula> of such estimators in the high-dimensional proportional regime where <inline-formula> <tex-math notation="LaTeX">m,n\rightarrow \infty </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">m/n\rightarrow \delta </tex-math></inline-formula>. The design matrix <inline-formula> <tex-math notation="LaTeX"> \mathbf {A} </tex-math></inline-formula> is assumed to have i.i.d. Gaussian entries; only minimal and rather mild regularity conditions are imposed on the loss function, the regularizer, and on the noise and signal distributions. We show that the squared error converges in probability to a nontrivial limit that is given as the solution to a minimax convex-concave optimization problem over four scalar optimization variables.
We identify a new summary parameter, termed the expected Moreau envelope, which plays a central role in the error characterization. The precise nature of the results permits an accurate performance comparison between different instances of regularized <inline-formula> <tex-math notation="LaTeX">M </tex-math></inline-formula>-estimators and allows one to optimally tune the involved parameters (such as the regularizer parameter and the number of measurements). The key ingredient of our proof is the convex Gaussian min-max theorem, a tight and strengthened version of a classical Gaussian comparison inequality proved by Gordon in 1988.
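As a concrete, well-known instance of such a regularized M-estimator, taking the squared loss and the ℓ1 regularizer gives the LASSO. The NumPy sketch below (all names and parameter values are illustrative, not from the paper) sets up an i.i.d. Gaussian design in the proportional regime and solves the estimator by proximal gradient descent (ISTA):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(A, y, lam, n_iter=500):
    """Solve min_x 0.5 ||y - A x||_2^2 + lam ||x||_1 by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

rng = np.random.default_rng(0)
n, delta = 400, 2.0                            # proportional regime: m/n -> delta
m = int(delta * n)
x0 = np.zeros(n)
x0[rng.choice(n, 20, replace=False)] = 1.0     # sparse ground truth
A = rng.normal(size=(m, n)) / np.sqrt(m)       # i.i.d. Gaussian design
y = A @ x0 + 0.01 * rng.normal(size=m)         # noisy linear measurements
xhat = ista_lasso(A, y, lam=0.01)
err = np.sum((xhat - x0) ** 2)                 # squared error ||xhat - x0||_2^2
```

Sweeping `lam` or `delta` in such a simulation is exactly the kind of experiment the asymptotic characterization lets one predict without Monte Carlo runs.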
•Uncertainty evaluation of first-order kinetic constants from catalytic photodegradation.
•Monte Carlo simulation of asymmetric distribution of estimated kinetic constant values.
•Catalysts efficiency comparison using the uncertainty of kinetic constants difference.
•Synthesised TiO2 nanoparticles more efficient than purchased TiO2 for 99% confidence.
•Ruthenium-doped titanate nanowires more efficient than pristine titanate nanowires.
The comparison of the efficiency of a pair of catalysts for the photodegradation of a compound under the same irradiation conditions is affected by many independent and correlated random and systematic effects. These comparisons become objective and indisputable if the efficiency difference is estimated with uncertainty, producing an interval encompassing the “true” difference value with a known probability. This work presents a tool for the detailed evaluation of the uncertainty of estimated kinetic constant values and differences by Monte Carlo simulation of all relevant uncertainty components. First-order kinetics were quantified from regression of the concentrations of the studied compounds over various degradation times. The efficiency of methylene blue (MB) and sulfamethazine (SMZ) photodegradation using various solid-state catalysts was quantified and compared. Synthesised TiO2 nanoparticles are more efficient than the purchased TiO2 for MB degradation. For SMZ degradation, ruthenium-doped titanate nanowires and ruthenium-doped titanate nanotubes are equally efficient and more efficient than pristine titanate nanowires. The efficiency equivalence can be challenged by quantifying larger and/or less uncertain SMZ concentrations. First-order kinetics was tested by taking the simulated confidence limits of the intercept of the linear regression used to describe this kinetic order. Kinetic constants with asymmetric distribution and a relative expanded uncertainty between 3.6% and 17% for a 95% confidence level were quantified. Three successively more uncertain procedures for preparing calibrators for the quantification of MB by UV/Vis spectroscopy were tested, concluding that since the same calibration curve is used to quantify all MB solutions, kinetic quantification is not proportionally or significantly affected by this uncertainty component.
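A minimal sketch of this kind of Monte Carlo propagation, assuming only a relative standard uncertainty on each measured concentration (all numbers, names, and the single uncertainty component are illustrative, not the paper's full procedure):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_k(times, conc, u_rel, n_mc=5000):
    """Monte Carlo distribution of a first-order rate constant k.

    Perturbs each measured concentration by its relative standard
    uncertainty, refits ln(C/C0) = -k t each time, and returns the
    simulated k values; their distribution is generally asymmetric.
    """
    ks = np.empty(n_mc)
    for i in range(n_mc):
        c = conc * (1.0 + u_rel * rng.standard_normal(conc.size))
        slope, _ = np.polyfit(times, np.log(c / c[0]), 1)
        ks[i] = -slope
    return ks

t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])      # degradation times, min
c_a = 10.0 * np.exp(-0.050 * t)                  # catalyst A: true k = 0.050 /min
c_b = 10.0 * np.exp(-0.035 * t)                  # catalyst B: true k = 0.035 /min
k_a = simulate_k(t, c_a, u_rel=0.02)             # 2 % relative concentration uncertainty
k_b = simulate_k(t, c_b, u_rel=0.02)
diff = k_a - k_b
lo, hi = np.percentile(diff, [0.5, 99.5])        # 99 % coverage interval of the difference
# Catalyst A is significantly more efficient if this interval excludes zero.
```

The percentile interval of `diff` is the Monte Carlo analogue of the expanded-uncertainty interval used in the article to decide whether two catalysts differ at a stated confidence level.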