In the above article <xref ref-type="bibr" rid="ref1">1</xref>, Table I(b) cited an incorrect reference number. Reference 12 should have been given as 13, provided here as <xref ref-type="bibr" rid="ref2">2</xref>.
In the above article <xref ref-type="bibr" rid="ref1">1</xref>, an error in the units of measurement in <xref rid="fig1" ref-type="fig">Fig. 12</xref> was identified. This error does not invalidate the discussion of the article findings and conclusions, as they were put in the form of comparisons among different feeders. <xref rid="fig1" ref-type="fig">Fig. 12</xref> with the corrected measurement units is included in this correction document, together with an explanation of the error and a calculation example.
The focus in deep learning research has been mostly on pushing the limits of prediction accuracy. However, this was often achieved at the cost of increased complexity, raising concerns about the interpretability and reliability of deep networks. Recently, increasing attention has been given to untangling the complexity of deep networks and quantifying their uncertainty for different computer vision tasks. In contrast, the task of depth completion has not received enough attention despite the inherently noisy nature of depth sensors. In this work, we therefore focus on modeling the uncertainty of depth data in depth completion, starting from the sparse noisy input all the way to the final prediction. We propose a novel approach to identify disturbed measurements in the input by learning an input confidence estimator in a self-supervised manner based on normalized convolutional neural networks (NCNNs). Further, we propose a probabilistic version of NCNNs that produces a statistically meaningful uncertainty measure for the final prediction. When we evaluate our approach on the KITTI dataset for depth completion, we outperform all existing Bayesian deep learning approaches in terms of prediction accuracy, quality of the uncertainty measure, and computational efficiency. Moreover, our small network with 670k parameters performs on par with conventional approaches with millions of parameters. These results give strong evidence that separating the network into parallel uncertainty and prediction streams leads to state-of-the-art performance with accurate uncertainty estimates.
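The basic building block behind NCNNs, normalized convolution, is easy to sketch. Below is a minimal 1-D illustration (not the authors' network): each sample carries a confidence weight, missing samples get zero confidence, and the filter response is renormalized by the filtered confidence map. The box filter and all variable names are illustrative assumptions.

```python
import numpy as np

def normalized_conv1d(signal, confidence, kernel):
    """1-D normalized convolution: filter the confidence-weighted signal,
    then divide by the filtered confidence map."""
    num = np.convolve(signal * confidence, kernel, mode="same")
    den = np.convolve(confidence, kernel, mode="same")
    return np.where(den > 1e-12, num / np.maximum(den, 1e-12), 0.0)

# Sparse "depth" signal: the sample at index 2 is missing.
x = np.array([1.0, 1.0, 0.0, 1.0, 1.0])   # missing value stored as 0
c = np.array([1.0, 1.0, 0.0, 1.0, 1.0])   # zero confidence marks the gap
kernel = np.ones(3) / 3.0                  # simple box filter

y = normalized_conv1d(x, c, kernel)        # the gap is filled from confident neighbours
```

Because the zero-confidence sample is excluded from both numerator and denominator, the gap at index 2 is interpolated from its valid neighbours instead of being dragged toward zero by the stored placeholder.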
In this article, a novel direct geometric algorithm for worst-case error analysis of a six-port reflectometer is proposed. Due to the inevitable power measurement uncertainty, a six-port reflectometer shows an inherent deviation that can be represented as an irregular area based on the classical geometrical theory. The previous worst-case geometric error-analysis algorithm approximates the boundary of this irregular area with straight lines and utilizes several special points for comparison. Without approximation, points on the boundary of the irregular error distribution area can be obtained directly in the proposed algorithm. After traversing all points, the maximum errors of the reflection coefficient in magnitude and phase over the entire Smith chart can be determined accurately. The algorithm is first verified by comparison with the state-of-the-art approximating algorithm in simulation. After plotting the error distributions of five classical six-port models, a common conclusion is obtained: the region of minimum error on the entire Smith chart can be found geometrically for an arbitrary six-port reflectometer. A practical six-port reflectometer system is set up, calibrated, and used to verify the algorithm. The consistent results demonstrate the validity of the proposed algorithm and its guiding significance for the design of specific six-port reflectometer applications.
This paper presents a review of the literature on state estimation (SE) in power systems. While covering works related to SE in transmission systems, the main focus of this paper is distribution system SE (DSSE). The critical topics of DSSE, including mathematical problem formulation, application of pseudo-measurements, metering instrument placement, network topology issues, impacts of renewable penetration, and cyber-security are discussed. Both conventional and modern data-driven and probabilistic techniques have been reviewed. This paper can provide researchers and utility engineers with insights into the technical achievements, barriers, and future research directions of DSSE.
Prior studies on covert communication with noise uncertainty adopted a worst-case approach from the warden's perspective. That is, the worst-case detection performance of the warden is used to assess covertness, which is overly optimistic. Instead of simply considering the worst limit, we take the distribution of noise uncertainty into account to evaluate the overall covertness in a statistical sense. Specifically, we define new metrics for measuring the covertness, which are then adopted to analyze the maximum achievable rate for a given covertness requirement under both bounded and unbounded noise uncertainty models.
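The worst-case versus statistical distinction can be made concrete with a toy Monte Carlo sketch (not the paper's analysis): an energy detector at the warden, a bounded log-uniform noise-uncertainty model, and the total detection error P_FA + P_MD evaluated both at the noise level most favourable to the transmitter and averaged over the uncertainty distribution. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_tx, sigma2_nom, rho = 100, 0.5, 1.0, 2.0  # samples, signal power, nominal noise, uncertainty bound
tau = 1.2 * sigma2_nom                          # fixed radiometer threshold (illustrative)

def error_prob(sigma2, trials=4000):
    """P_FA + P_MD of an energy detector T = mean(y^2) compared with tau."""
    noise = rng.normal(0.0, np.sqrt(sigma2), (trials, n))
    t0 = (noise ** 2).mean(axis=1)                   # H0: noise only
    sig = rng.normal(0.0, np.sqrt(p_tx), (trials, n))
    t1 = ((noise + sig) ** 2).mean(axis=1)           # H1: covert signal present
    return (t0 > tau).mean() + (t1 <= tau).mean()

# Bounded log-uniform noise uncertainty: sigma^2 in [sigma2_nom/rho, rho*sigma2_nom].
sigmas = np.exp(rng.uniform(np.log(sigma2_nom / rho), np.log(sigma2_nom * rho), 200))
errs = np.array([error_prob(s) for s in sigmas])

worst_case = errs.max()    # warden's worst detection: optimistic for the transmitter
statistical = errs.mean()  # covertness averaged over the uncertainty distribution
```

The statistical figure is never larger than the worst-case one, which is exactly the gap the abstract argues should not be ignored.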
A popular approach for estimating an unknown signal <inline-formula> <tex-math notation="LaTeX"> \mathbf {x}_{0}\in \mathbb {R} ^{n} </tex-math></inline-formula> from noisy, linear measurements <inline-formula> <tex-math notation="LaTeX"> \mathbf {y}= \mathbf {A} \mathbf {x} _{0}+ \mathbf {z}\in \mathbb {R}^{m} </tex-math></inline-formula> is to solve a so-called regularized <inline-formula> <tex-math notation="LaTeX">M </tex-math></inline-formula>-estimator: <inline-formula> <tex-math notation="LaTeX">\hat{\mathbf {x}} :=\arg \min _ \mathbf {x} \mathcal {L} (\mathbf {y}- \mathbf {A} \mathbf {x})+\lambda f(\mathbf {x}) </tex-math></inline-formula>. Here, <inline-formula> <tex-math notation="LaTeX"> \mathcal {L} </tex-math></inline-formula> is a convex loss function, <inline-formula> <tex-math notation="LaTeX">f </tex-math></inline-formula> is a convex (typically non-smooth) regularizer, and <inline-formula> <tex-math notation="LaTeX">\lambda > 0 </tex-math></inline-formula> is a regularization parameter. We analyze the squared error performance <inline-formula> <tex-math notation="LaTeX">\|\hat{\mathbf {x}} - \mathbf {x}_{0}\|_{2}^{2} </tex-math></inline-formula> of such estimators in the high-dimensional proportional regime where <inline-formula> <tex-math notation="LaTeX">m,n\rightarrow \infty </tex-math></inline-formula> and <inline-formula> <tex-math notation="LaTeX">m/n\rightarrow \delta </tex-math></inline-formula>. The design matrix <inline-formula> <tex-math notation="LaTeX"> \mathbf {A} </tex-math></inline-formula> is assumed to have i.i.d. Gaussian entries; only minimal and rather mild regularity conditions are imposed on the loss function, the regularizer, and on the noise and signal distributions. We show that the squared error converges in probability to a nontrivial limit given as the solution of a minimax convex-concave optimization problem over four scalar variables.
We identify a new summary parameter, termed the expected Moreau envelope, that plays a central role in the error characterization. The precise nature of the results permits an accurate performance comparison between different instances of regularized <inline-formula> <tex-math notation="LaTeX">M </tex-math></inline-formula>-estimators and allows one to optimally tune the involved parameters (such as the regularization parameter and the number of measurements). The key ingredient of our proof is the convex Gaussian min-max theorem, a tight and strengthened version of a classical Gaussian comparison inequality proved by Gordon in 1988.
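A concrete instance of such an estimator takes the squared loss for L and the l1 norm for f, i.e. the LASSO. A minimal proximal-gradient (ISTA) solver in the proportional regime can be sketched as follows; this is an illustrative solver under assumed dimensions and noise levels, not the paper's analysis.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(A, y, lam, iters=500):
    """Minimize 0.5*||y - A x||_2^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(1)
m, n = 80, 200                                  # proportional regime: delta = m/n = 0.4
x0 = np.zeros(n); x0[:10] = rng.normal(size=10) # sparse ground truth
A = rng.normal(size=(m, n)) / np.sqrt(m)        # i.i.d. Gaussian design
y = A @ x0 + 0.05 * rng.normal(size=m)
x_hat = lasso_ista(A, y, lam=0.1)
err = np.sum((x_hat - x0) ** 2)                 # the squared error the abstract characterizes
```

The quantity `err` is exactly the squared error whose high-dimensional limit the paper pins down as a function of delta, lambda, and the noise and signal distributions.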
• Uncertainty evaluation of first-order kinetic constants from catalytic photodegradation.
• Monte Carlo simulation of the asymmetric distribution of estimated kinetic constant values.
• Catalyst efficiency comparison using the uncertainty of the kinetic constant difference.
• Synthesised TiO2 nanoparticles more efficient than purchased TiO2 at 99% confidence.
• Ruthenium-doped titanate nanowires more efficient than pristine titanate nanowires.
The comparison of the efficiency of a pair of catalysts for the photodegradation of a compound under the same irradiation conditions is affected by many independent and correlated random and systematic effects. These comparisons become objective and indisputable if the efficiency difference is estimated with uncertainty, producing an interval encompassing the “true” difference value with a known probability. This work presents a tool for the detailed evaluation of the uncertainty of estimated kinetic constant values and differences by Monte Carlo simulation of all relevant uncertainty components. First-order kinetics were quantified from regression of the concentrations of studied compounds over various degradation times. The efficiency of methylene blue (MB) and sulfamethazine (SMZ) photodegradation using various solid-state catalysts was quantified and compared. Synthesised TiO2 nanoparticles are more efficient than the purchased TiO2 for MB degradation. For SMZ degradation, ruthenium-doped titanate nanowires and ruthenium-doped titanate nanotubes are equally efficient and more efficient than pristine titanate nanowires. The efficiency equivalence can be challenged by quantifying larger and/or less uncertain SMZ concentrations. First-order kinetics were tested by taking the simulated confidence limits of the intercept of the linear regression used to describe this kinetic order. Kinetic constants with asymmetric distributions and a relative expanded uncertainty for a 95% confidence level between 3.6% and 17% were quantified. Three successively more uncertain procedures for preparing calibrators for the quantification of MB by UV/Vis spectroscopy were tested, concluding that since the same calibration curve is used to quantify all MB solutions, kinetic quantification is not proportionally or significantly affected by this uncertainty component.
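The core Monte Carlo idea can be sketched in a few lines (the actual tool propagates many more components, e.g. calibration and volumetric effects): perturb the measured concentrations within their uncertainty, refit the first-order model ln C(t) = ln C0 - k t each time, and take percentiles of the simulated k values as the possibly asymmetric confidence interval. The rate constant, times, and uncertainty magnitudes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
k_true, c0 = 0.12, 1.0                       # rate constant and initial conc. (illustrative)
t = np.array([0., 5., 10., 15., 20., 30.])   # irradiation times
c_meas = c0 * np.exp(-k_true * t) * (1 + 0.02 * rng.normal(size=t.size))  # 2% rel. noise

def fit_k(conc):
    """Slope of ln C versus t gives -k for first-order kinetics."""
    slope, _ = np.polyfit(t, np.log(conc), 1)
    return -slope

# Monte Carlo: perturb concentrations within their relative uncertainty and refit.
ks = np.array([fit_k(c_meas * (1 + 0.02 * rng.normal(size=t.size)))
               for _ in range(5000)])
k_hat = fit_k(c_meas)
lo, hi = np.percentile(ks, [2.5, 97.5])      # 95% interval; asymmetry is allowed
```

Comparing two catalysts then amounts to simulating the difference of their k values the same way and checking whether the resulting interval excludes zero.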
Multi-View Intact Space Learning
Xu, Chang; Tao, Dacheng; Xu, Chao
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1 Dec. 2015, Volume 37, Issue 12
Journal Article
Peer reviewed
Open access
It is practical to assume that an individual view is unlikely to be sufficient for effective multi-view learning. Therefore, integration of multi-view information is both valuable and necessary. In this paper, we propose the Multi-view Intact Space Learning (MISL) algorithm, which integrates the encoded complementary information in multiple views to discover a latent intact representation of the data. Even though each view on its own is insufficient, we show theoretically that by combining multiple views we can obtain abundant information for latent intact space learning. Employing the Cauchy loss (a technique used in statistical learning) as the error measurement strengthens robustness to outliers. We propose a new definition of multi-view stability and then derive the generalization error bound based on multi-view stability and Rademacher complexity, showing that the complementarity between multiple views is beneficial for both stability and generalization. MISL is efficiently optimized using a novel Iteratively Reweight Residuals (IRR) technique, whose convergence is theoretically analyzed. Experiments on synthetic data and real-world datasets demonstrate that MISL is an effective and promising algorithm for practical applications.
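The robustness mechanism of the Cauchy loss, and the flavour of reweighting residuals iteratively, can be sketched with a scalar robust regression solved by iteratively reweighted least squares; this illustrates the loss only, not the MISL algorithm itself. The scale parameter c and the data are assumed for the example.

```python
import numpy as np

def cauchy_irls(X, y, c=1.0, iters=50):
    """Robust linear regression with Cauchy loss rho(r) = (c^2/2)*log(1 + (r/c)^2),
    solved by iteratively reweighting residuals: w_i = 1 / (1 + (r_i/c)^2)."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares
        r = y - X @ beta
        w = 1.0 / (1.0 + (r / c) ** 2)                    # Cauchy weights shrink outliers
    return beta

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 60)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=60)
y[:5] += 8.0                                  # gross outliers
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = cauchy_irls(X, y)
```

Because the Cauchy weight decays as 1/r^2 for large residuals, the five corrupted points contribute almost nothing to the robust fit, while ordinary least squares is visibly pulled toward them.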