Standard formulations of prediction problems in high-dimensional regression models assume the availability of fully observed covariates and sub-Gaussian, homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non-sub-Gaussian and heterogeneous. We propose a weighted penalized corrected quantile estimator for the regression parameters in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions. We study these estimators in a high-dimensional sparse setup where the dimensionality can grow exponentially with the sample size. We provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby establishing the ℓ1-consistency of the proposed estimator. We also establish model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study investigating the finite-sample accuracy of the proposed estimator is also included in the paper.
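As a toy illustration of the penalized quantile-loss machinery underlying such estimators (ignoring the measurement-error correction, which is the paper's actual contribution), an ℓ1-penalized quantile regression can be minimized by simple subgradient descent. The function name, step size, and iteration count below are illustrative choices, not the authors' algorithm:

```python
import numpy as np

def l1_penalized_quantile(X, y, tau=0.5, lam=0.05, lr=0.02, n_iter=5000):
    """Minimize mean pinball loss + lam * ||beta||_1 by subgradient descent."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        r = y - X @ beta
        # subgradient of the pinball loss rho_tau(r) = r * (tau - 1{r < 0})
        g = -(X.T @ np.where(r >= 0, tau, tau - 1.0)) / n
        # subgradient of the l1 penalty (0 at beta_j = 0)
        g += lam * np.sign(beta)
        beta -= lr * g
    return beta
```

On synthetic sparse data, the large true coefficients are recovered while the true-zero coordinates stay near zero, which is the qualitative behavior the ℓ1-consistency and model-selection results formalize.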
We present a reduction-consistent and thermodynamically consistent formulation, and an associated numerical algorithm, for simulating the dynamics of an isothermal mixture consisting of N (N⩾2) immiscible incompressible fluids with different physical properties (densities, viscosities, and pairwise surface tensions). By reduction consistency we refer to the property that if only a set of M (1⩽M⩽N−1) fluids is present in the system, then the N-phase governing equations and boundary conditions exactly reduce to those of the corresponding M-phase system. By thermodynamic consistency we refer to the property that the formulation honors the thermodynamic principles. Our N-phase formulation is developed from a more general method that allows for the systematic construction of reduction-consistent formulations, and the method suggests the existence of many possible forms of reduction-consistent and thermodynamically consistent N-phase formulations. Extensive numerical experiments are presented for flow problems involving multiple fluid components and large density and viscosity ratios, and the simulation results are compared with physical theories or available physical solutions. The comparisons demonstrate that our method produces physically accurate results for this class of problems.
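As a toy illustration of what reduction consistency means at the level of mixture properties (using a hypothetical linear mixing rule, far simpler than the paper's full formulation): if one phase's volume fraction is zero, the N-phase expression must coincide with the (N−1)-phase one.

```python
import numpy as np

def mixture_property(c, prop):
    # volume-fraction-weighted mixing rule: prop(c) = sum_i c_i * prop_i.
    # Linear rules of this form are trivially reduction-consistent:
    # dropping a phase with c_i = 0 leaves the value unchanged.
    c = np.asarray(c, dtype=float)
    return float(np.dot(c / c.sum(), prop))
```

The paper's contribution is constructing formulations with this property for the full coupled governing equations and boundary conditions, which is far from automatic.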
We develop a method for modeling and simulating a class of two-phase flows consisting of two immiscible incompressible dielectric fluids and their interactions with imposed external electric fields in two and three dimensions. We first present a thermodynamically-consistent and reduction-consistent phase field model for two-phase dielectric fluids. The model honors the conservation laws and thermodynamic principles, and has the property that, if only one fluid component is present in the system, the two-phase formulation exactly reduces to that of the corresponding single-phase system. In particular, this model accommodates an equilibrium solution that is compatible with the zero-velocity requirement based on physics. This property leads to a simpler method for simulating the equilibrium state of two-phase dielectric systems. We further present an efficient numerical algorithm, together with a spectral-element (for two dimensions) or a hybrid Fourier-spectral/spectral-element (for three dimensions) discretization in space, for simulating this class of problems. This algorithm computes the different dynamic variables successively in an uncoupled fashion, and involves only coefficient matrices that are time-independent in the resultant linear algebraic systems upon discretization, even when the physical properties (e.g. permittivity, density, viscosity) of the two dielectric fluids are different. This property is crucial and enables us to employ fast Fourier transforms for three-dimensional problems. Ample numerical simulations of two-phase dielectric flows under imposed voltage are presented to demonstrate the performance of the method herein and to compare the simulation results with theoretical models and experimental data.
•Develop thermodynamically-consistent and reduction-consistent phase field model for two-phase dielectric fluid flows.
•Model leads to equilibrium solution compatible with zero-velocity requirement, suggesting simpler method for computing equilibrium states.
•Numerical algorithm de-couples dynamic variables and involves pre-computable coefficient matrices, under variable mixture properties.
•Method captures well the interaction between two-phase dielectric fluids and imposed external electric fields.
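The equilibrium-profile idea can be illustrated on the simplest scalar phase-field model. The sketch below relaxes a 1D Allen-Cahn equation (a standard toy model, not the paper's two-phase dielectric formulation) to its well-known tanh interface profile:

```python
import numpy as np

def relax_interface(n=201, L=20.0, eps=1.0, dt=0.002, steps=5000):
    # Explicit Euler time stepping for the 1D Allen-Cahn equation
    #   phi_t = eps^2 * phi_xx - (phi^3 - phi),
    # whose equilibrium interface profile is phi(x) = tanh(x / (sqrt(2) * eps)).
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    phi = np.tanh(x)                      # initial guess for the interface
    for _ in range(steps):
        lap = np.zeros_like(phi)
        lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
        phi += dt * (eps**2 * lap - (phi**3 - phi))
        phi[0], phi[-1] = -1.0, 1.0       # pin the bulk values at the ends
    return x, phi
```

The time step satisfies the explicit stability bound dt < dx²/(2 eps²), and the relaxed profile matches the analytical equilibrium closely, mirroring (in a much simpler setting) how a phase-field model's equilibrium state can be computed by relaxation.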
Owing to the advantages of low storage cost and high query efficiency, cross-modal hashing has received increasing attention recently. Because they fail to bridge the inherent modality gap, most existing cross-modal hashing methods have limited capability to explore the semantic consistency information between data of different modalities, leading to unsatisfactory search performance. To address this problem, we propose a novel deep hashing method named Multi-Task Consistency-Preserving Adversarial Hashing (CPAH) to fully explore the semantic consistency and correlation between different modalities for efficient cross-modal retrieval. First, we design a consistency refined module (CR) to divide the representation of each modality into two irrelevant parts, i.e., modality-common and modality-private representations. Then, a multi-task adversarial learning module (MA) is presented, which makes the modality-common representations of different modalities close to each other in both feature distribution and semantic consistency. Finally, compact and powerful hash codes can be generated from the modality-common representation. Comprehensive evaluations conducted on three representative cross-modal benchmark datasets demonstrate that our method is superior to the state-of-the-art cross-modal hashing methods.
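The final hashing step described above, i.e., binarizing a modality-common representation and ranking by Hamming distance, can be sketched as follows. The projection matrix and feature values are illustrative placeholders, not CPAH's learned networks:

```python
import numpy as np

def hash_codes(features, W):
    # project a (modality-common) representation and binarize to {-1, +1} codes
    codes = np.sign(features @ W)
    codes[codes == 0] = 1  # break ties deterministically
    return codes

def hamming_distance(query_code, db_codes):
    # for codes in {-1, +1}: d_H(a, b) = (bits - <a, b>) / 2
    bits = query_code.shape[0]
    return (bits - db_codes @ query_code) / 2
```

Because retrieval reduces to inner products of short binary codes, the query cost and storage are both small, which is the efficiency advantage the abstract refers to.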
Under working conditions, cell-to-cell consistency has a great impact on the overall performance of a battery pack. In order to build an accurate battery pack model, we therefore need to build a battery pack consistency model. Firstly, we used a Gaussian mixture model to fit the statistical characteristics of a single parameter. This method can accurately fit the skewness in the parameter distribution, as well as any multi-peak characteristics that may appear. Secondly, we constructed a nonparametric battery pack consistency model using a generative adversarial network (GAN). Our consistency model accurately describes the statistical characteristics of a single parameter and fits the correlation coefficients between parameters. The battery pack model populated with the GAN-generated battery parameters exhibits very high similarity to the experimental data: the relative errors of the simulation results are less than 0.6% for the terminal voltage and less than 0.3% for the energy utilization efficiency (EUE), demonstrating the advantages of the GAN consistency model in fitting the distribution of the battery parameters. Finally, we implemented the GAN consistency model in an embedded system with limited computing resources, which shows that our proposed model can run normally on existing battery management systems (BMS).
•Battery pack parameter distributions were fitted by Gaussian mixture distributions.
•Generative adversarial networks were used for battery pack parameter consistency modeling.
•The generated parameters have a high degree of similarity to the measured parameters.
•The consistency model was simplified and can run on an open-source embedded system.
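The first modeling step above, fitting a single parameter's (possibly skewed, multi-peaked) distribution with a Gaussian mixture, can be sketched with a small EM routine. The capacity-like values in the test are synthetic, not the paper's measured data:

```python
import numpy as np

def fit_gmm_1d(x, k=2, n_iter=200):
    # EM for a 1D Gaussian mixture: weights w, means mu, standard deviations sig.
    # Deterministic quantile-based initialization helps separate the modes.
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
               / (sig * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sig = np.maximum(np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk),
                         1e-9)
    return w, mu, sig
```

A mixture with enough components can capture skewness and multiple peaks that a single Gaussian cannot, which is why the paper uses it for per-parameter fits before the GAN handles cross-parameter correlations.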
In this paper, the thermal consistency and electrochemical performance of batteries are comprehensively considered in order to improve testing and ensure the consistency of power battery packs for automotive applications. At the same time, a safer and more efficient device is established for the testing and evaluation of battery consistency in automotive applications, achieving real-time monitoring, assessment, prediction, and control of consistency.
Weakly supervised object localization (WSOL) is a challenging and promising task that aims to localize objects solely based on the supervision of image category labels. In the absence of annotated bounding boxes, WSOL methods must exploit the intrinsic properties of the image classification task pipeline to generate object localizations. In this work, we propose a WSOL method that explores the Intrinsic Discrimination and Consistency in the image classification task pipeline, and call it IDC. First, we develop a Triplet Metrics Based Foreground Modeling (TMFM) framework to directly predict object foreground regions using intrinsic discrimination. Unlike Class Activation Map (CAM) based methods, which also rely on intrinsic discrimination, our TMFM framework alleviates the problem of focusing only on the most discriminative parts by optimizing foreground and background regions synergistically. Second, we design a Dual Geometric Transformation Consistency Constraints (DGTC2) training strategy that introduces additional supervision and regularization constraints for WSOL by leveraging intrinsic geometric transformation consistency. The proposed pixel-wise and object-wise consistency constraint losses cost-effectively provide spontaneous supervision for WSOL. Extensive experiments show that our IDC method achieves significant and consistent performance gains over existing state-of-the-art WSOL approaches. Code is available at: https://github.com/vignywang/IDC.
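The pixel-wise geometric-transformation consistency idea can be sketched as follows for a horizontal flip; `pred_fn` stands in for an arbitrary foreground predictor and is not the paper's TMFM network:

```python
import numpy as np

def flip_consistency_loss(pred_fn, image):
    # Pixel-wise consistency under a geometric transformation: the foreground
    # map predicted for a flipped image should equal the flipped foreground
    # map of the original image. The mean squared discrepancy is the loss.
    m = pred_fn(image)
    m_flip = pred_fn(image[:, ::-1])
    return float(np.mean((m[:, ::-1] - m_flip) ** 2))
```

A perfectly flip-equivariant predictor incurs zero loss; any equivariance violation is penalized, providing supervision without bounding-box labels, which is the spirit of the DGTC2 constraints.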
The great content diversity of real-world digital images poses a grand challenge to image quality assessment (IQA) models, which are traditionally designed and validated on a handful of commonly used IQA databases with very limited content variation. To test the generalization capability and to facilitate the wide usage of IQA techniques in real-world applications, we establish a large-scale database named the Waterloo Exploration Database, which in its current state contains 4744 pristine natural images and 94,880 distorted images created from them. Instead of collecting the mean opinion score for each image via subjective testing, which is extremely difficult if not impossible, we present three alternative test criteria to evaluate the performance of IQA models, namely, the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test (P-test). We compare 20 well-known IQA models using the proposed criteria, which not only provide a stronger test in a more challenging testing environment for existing models, but also demonstrate the additional benefits of using the proposed database. For example, in the P-test, even for the best performing no-reference IQA model, more than 6 million failure cases against the model are "discovered" automatically out of over 1 billion test pairs. Furthermore, we discuss how the new database may be exploited using innovative approaches in the future, to reveal the weaknesses of existing IQA models, to provide insights on how to improve the models, and to shed light on how the next-generation IQA models may be developed. The database and codes are made publicly available at: https://ece.uwaterloo.ca/~k29ma/exploration/.
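At scoring time, the pairwise preference consistency idea reduces to counting how often a model's quality scores agree with known preferences over image pairs. A minimal sketch (an illustration of the idea, not the database's official evaluation code):

```python
import numpy as np

def pairwise_preference_consistency(q_preferred, q_other):
    # Fraction of pairs on which the model's quality score agrees with the
    # known preference: the preferred image should receive the higher score.
    # Each disagreement is one "failure case" against the model.
    q_preferred = np.asarray(q_preferred, dtype=float)
    q_other = np.asarray(q_other, dtype=float)
    return float(np.mean(q_preferred > q_other))
```

Because the preferences are generated automatically rather than by subjective testing, this criterion scales to the billions of pairs mentioned in the abstract.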