In this paper we propose a new no-reference (NR) image quality assessment (IQA) metric that combines the recently developed free-energy-based brain theory with classical human visual system (HVS)-inspired features. The features used can be divided into three groups. The first comprises features inspired by the free-energy principle and the structural degradation model. The free-energy theory also reveals that the HVS always tries to infer the meaningful part of a visual stimulus. Based on this finding, we first predict the image that the HVS perceives from a distorted image using the free-energy theory; the second group of features then consists of HVS-inspired features (such as structural information and gradient magnitude) computed from the distorted and predicted images. The third group of features quantifies possible losses of "naturalness" in the distorted image by fitting a generalized Gaussian distribution to mean-subtracted contrast-normalized coefficients. After feature extraction, our algorithm uses a support vector machine-based regression module to derive the overall quality score. Experiments on the LIVE, TID2008, CSIQ, IVC, and Toyama databases confirm the effectiveness of the introduced NR IQA metric compared with the state of the art.
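The two statistical steps named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: a box window stands in for the Gaussian weighting usually applied when computing MSCN coefficients, and the generalized Gaussian shape parameter is estimated by BRISQUE-style moment matching; all function names are ours.

```python
import numpy as np
from math import gamma
from numpy.lib.stride_tricks import sliding_window_view

def mscn_coefficients(img, k=7, eps=1e-8):
    """Mean-subtracted contrast-normalized (MSCN) coefficients of a 2-D
    greyscale image, using a k x k box window as a simple stand-in for
    the Gaussian weighting usually employed."""
    img = np.asarray(img, float)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(2, 3))       # local mean
    sd = win.std(axis=(2, 3))        # local contrast
    return (img - mu) / (sd + eps)

def ggd_shape(x):
    """Moment-matching estimate of the generalized Gaussian shape
    parameter: match the sample ratio (E|x|)^2 / E[x^2] against the
    theoretical ratio over a grid of candidate shapes."""
    x = np.ravel(np.asarray(x, float))
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    alphas = np.arange(0.2, 10.0, 0.01)
    rho = np.array([gamma(2 / a) ** 2 / (gamma(1 / a) * gamma(3 / a))
                    for a in alphas])
    return float(alphas[np.argmin(np.abs(rho - r))])
```

In a full pipeline, the estimated shape (and scale) parameters over several scales would form the "naturalness" feature group fed to the SVM regressor.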
A fast, reliable computational quality predictor is eagerly desired in practical image/video applications, such as quality monitoring for real-time coding and transcoding. In this paper, we propose a new perceptual image quality assessment (IQA) metric based on the human visual system (HVS). The proposed IQA model operates efficiently using multiscale convolution operations, gradient magnitude, color information similarity, and perceptual pooling. Extensive experiments are conducted on four popular large-size image databases and two multiply-distorted image databases, and the results validate the superiority of our approach over modern IQA measures in both efficiency and efficacy. Our metric is built on the theoretical support of the HVS, with recently designed IQA methods as special cases.
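A gradient-magnitude similarity term of the kind the abstract mentions can be sketched at a single scale as below. This is a hedged illustration, not the paper's metric: central differences (`np.gradient`) stand in for the Prewitt/Sobel convolutions a full implementation would use, mean pooling stands in for the perceptual pooling, and the stabilizing constant is illustrative.

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude via central differences, a simple stand-in
    for the convolution-based gradient operators a full metric uses."""
    gy, gx = np.gradient(np.asarray(img, float))
    return np.hypot(gx, gy)

def gradient_similarity(ref, dst, c=0.0026):
    """Pointwise gradient-magnitude similarity map between a reference
    and a distorted image, pooled by averaging. The constant c keeps
    the ratio stable where gradients vanish."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dst)
    sim = (2 * g1 * g2 + c) / (g1 ** 2 + g2 ** 2 + c)
    return float(sim.mean())
```

A multiscale version would compute this map on a Gaussian pyramid and combine the per-scale scores.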
In this paper, we propose a new no-reference (NR)/blind sharpness metric in the autoregressive (AR) parameter space. Our model is established via analysis of AR model parameters: we first calculate the energy and contrast differences in the locally estimated AR coefficients in a pointwise way, and then quantify image sharpness with percentile pooling to predict the overall score. Beyond the luminance domain, we further consider the inevitable effect of color information on the visual perception of sharpness and thereby extend the model to the widely used YIQ color space. Our technique is validated on the blur-distorted subsets of four large-scale image databases (LIVE, TID2008, CSIQ, and TID2013). Experimental results confirm the superiority and efficiency of our method over existing NR algorithms, state-of-the-art blind sharpness/blurriness estimators, and classical full-reference quality evaluators. Furthermore, the proposed metric can also be extended to stereoscopic images based on binocular rivalry, attaining remarkably high performance on the LIVE3D-I and LIVE3D-II databases.
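Two building blocks of such a pipeline, local AR coefficient estimation and percentile pooling, can be sketched as follows. This is our own simplified reading, not the paper's algorithm: a single least-squares fit over a patch's 8-neighbourhoods stands in for the pointwise AR estimation, and the pooling percentile is illustrative.

```python
import numpy as np

def ar_coefficients(patch):
    """Least-squares AR coefficients predicting each interior pixel of
    a patch from its 8 neighbours (a simplified stand-in for pointwise
    AR estimation)."""
    p = np.asarray(patch, float)
    centre = p[1:-1, 1:-1].ravel()
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]
    # each column holds the neighbour at one offset, aligned with centre
    X = np.stack([np.roll(np.roll(p, -dy, 0), -dx, 1)[1:-1, 1:-1].ravel()
                  for dy, dx in shifts], axis=1)
    coef, *_ = np.linalg.lstsq(X, centre, rcond=None)
    return coef

def percentile_pool(values, q=95):
    """Average of the values at or above the q-th percentile, so the
    sharpest responses dominate the final score."""
    v = np.sort(np.ravel(values))
    k = max(1, int(len(v) * (100 - q) / 100))
    return float(v[-k:].mean())
```

A full metric would derive a per-pixel sharpness map from differences between such coefficients and then pool it with `percentile_pool`.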
Objective Quality Evaluation of Dehazed Images
Min, Xiongkuo; Zhai, Guangtao; Gu, Ke ...
IEEE Transactions on Intelligent Transportation Systems, August 2019, Volume 20, Issue 8
Journal Article
Peer reviewed
Vision-based intelligent systems such as automatic driving or driving assistance can be improved by enhancing the visibility of scenes captured in bad weather. In particular, many image dehazing algorithms (DHAs) have been proposed to facilitate such applications in hazy weather. In contrast to the substantial progress in DHA development, the quality evaluation of DHAs lags behind. Generally, DHAs can be evaluated qualitatively by human subjects or quantitatively by objective quality measures. Compared with subjective evaluation, which is time-consuming and difficult to apply, objective measures with quantitative results are better suited to practical systems. In the literature, however, very few measures are widely utilized, and even fewer correlate well with overall dehazing quality (DHQ). In this paper, we systematically study DHQ evaluation using real hazy images. We first construct a DHQ database, the largest of its kind so far, comprising 1750 dehazed images generated from 250 real hazy images of various haze densities using seven representative DHAs. A subjective quality evaluation study is subsequently conducted on the DHQ database. We then propose an objective DHQ index (DHQI) by extracting and fusing three groups of features: 1) haze-removing features; 2) structure-preserving features; and 3) over-enhancement features, which together capture the key aspects of dehazing. DHQI can be utilized to evaluate DHAs or to optimize practical dehazing systems. Validation on the constructed DHQ database and three other databases with synthetic haze verifies the effectiveness of DHQI. Finally, we give an overview of current DHA quality evaluation strategies, discuss their merits and demerits, and offer suggestions for systematic DHA quality evaluation. The DHQ database and the code of DHQI will be released to facilitate further research.
Recent years have witnessed a growing number of image- and video-centric applications on mobile, vehicular, and cloud platforms, involving a wide variety of digital screen content images. Unlike natural scene images captured with modern high-fidelity cameras, screen content images are typically composed of fewer colors, simpler shapes, and a higher frequency of thin lines. In this paper, we develop a novel blind/no-reference (NR) model for assessing the perceptual quality of screen content pictures with big-data learning. The new model extracts four types of features describing picture complexity, screen content statistics, global brightness quality, and sharpness of details. Comparative experiments verify the efficacy of the new model against existing relevant blind picture quality assessment algorithms on screen content image databases. A regression module is trained on a considerable number of training samples labeled with objective visual quality predictions delivered by a high-performance full-reference method designed for screen content image quality assessment (IQA). This yields an opinion-unaware NR blind screen content IQA algorithm. Our proposed model delivers computational efficiency and promising performance. The source code of the new model will be available at: https://sites.google.com/site/guke198701/publications.
Digital images in the real world are created by a variety of means and have diverse properties. A photographical natural scene image (NSI) may exhibit substantially different characteristics from a computer graphic image (CGI) or a screen content image (SCI). This casts major challenges to objective image quality assessment, for which existing approaches lack effective mechanisms to capture such content type variations, and thus are difficult to generalize from one type to another. To tackle this problem, we first construct a cross-content-type (CCT) database, which contains 1,320 distorted NSIs, CGIs, and SCIs, compressed using the high efficiency video coding (HEVC) intra coding method and the screen content compression (SCC) extension of HEVC. We then carry out a subjective experiment on the database in a well-controlled laboratory environment. Moreover, we propose a unified content-type adaptive (UCA) blind image quality assessment model that is applicable across content types. A key step in UCA is to incorporate the variations of human perceptual characteristics in viewing different content types through a multi-scale weighting framework. This leads to superior performance on the constructed CCT database. UCA is training-free, implying strong generalizability. To verify this, we test UCA on other databases containing JPEG, MPEG-2, H.264, and HEVC compressed images/videos, and observe that it consistently achieves competitive performance.
To address the problem that existing image super-resolution reconstruction methods treat the low-frequency and high-frequency components of feature maps equally, this paper proposes an image super-resolution reconstruction method that uses a feature-map attention mechanism to reconstruct multi-scale super-resolution images from original low-resolution images. The proposed model consists of a feature extraction block, information extraction blocks, and a reconstruction module. First, the feature extraction block extracts useful features from the low-resolution image, and multiple information extraction blocks combine the feature-map attention mechanism with information passed between feature channels. Second, the interdependence between channels is used to adaptively adjust channel features and restore more detail. Finally, the reconstruction module produces high-resolution images at different scales. Experimental results demonstrate that the proposed method improves not only the visual quality of the reconstructed images but also the quantitative results on the Set5, Set14, Urban100, and Manga109 datasets: both the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) are improved to a certain degree over comparable reconstruction methods, confirming the effectiveness of the feature-map attention mechanism for image super-resolution reconstruction.
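The channel-attention idea of adaptively reweighting feature channels by their interdependence can be sketched in a squeeze-and-excitation style. This is a generic illustration, not the paper's architecture: the weight matrices `w1` and `w2` are placeholders that a trained network would learn, and the bottleneck ratio is arbitrary.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention over a (C, H, W)
    feature map: global average pooling ("squeeze"), a bottleneck pair
    of fully connected layers with ReLU and sigmoid ("excitation"),
    then per-channel rescaling. w1 has shape (C//r, C), w2 (C, C//r)."""
    z = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)                # FC + ReLU: (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # FC + sigmoid: (C,)
    return feat * gate[:, None, None]          # channel-wise rescale
```

Because the gate lies in (0, 1) per channel, less informative channels are attenuated while informative ones pass through nearly unchanged.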
Air quality is currently attracting rapidly increasing attention from governments and the populace all over the world. In this paper, we propose a heuristic recurrent air quality predictor (RAQP) to infer air quality. The RAQP exploits key meteorology- and pollution-related variables to infer air pollutant concentrations (APCs), e.g., fine particulate matter (PM2.5). Naturally, the meteorological factors and APCs at the current time strongly influence the air quality at the next moment; that is, there exist high correlations between them. With this consideration, applying simple machine learners to the current meteorology- and pollution-related factors can reliably predict the air quality indices a short time later. However, owing to nonlinearity and chaos, these correlations decline as the time interval grows, so forecasting the air quality several hours ahead using only simple machine learners and the current measurements of meteorology- and pollution-related variables fails. To solve this problem, our RAQP method recurrently applies a 1-h prediction model, which learns to predict the air quality 1 h later from the current records of meteorology- and pollution-related factors, to estimate the air quality several hours ahead. Extensive experiments confirm that the RAQP predictor is superior to the relevant state-of-the-art techniques and to nonrecurrent methods when applied to air quality prediction.
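The recurrent strategy described above, chaining a short-horizon model rather than learning a long-horizon one, can be sketched in a few lines. The helper name and the toy step model are ours; in the paper the 1-h step would be a trained machine learner over meteorology- and pollution-related features.

```python
def recurrent_forecast(step_model, features, n_hours):
    """Apply a 1-hour-ahead predictor recurrently: each prediction is
    fed back as the input for the next hour, so an n-hour forecast is
    n chained applications of the same short-horizon model."""
    trajectory = [features]
    for _ in range(n_hours):
        features = step_model(features)   # predict one hour ahead
        trajectory.append(features)
    return trajectory
```

The design trades one hard learning problem (direct n-hour prediction, where correlations have decayed) for n easy ones, at the cost of accumulating the 1-h model's error over the horizon.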
The general purpose of viewing a picture is to attain as much information as possible. With this in mind, we devise a new no-reference/blind metric for image quality assessment (IQA) of contrast distortion. For local details, we first roughly remove the predicted regions in an image, since the unpredicted remainder carries much of the information. We then compute the entropy of particular unpredicted areas of maximum information via visual saliency. From a global perspective, we compare the image histogram with the uniformly distributed histogram of maximum information via the symmetric Kullback-Leibler divergence. The proposed blind IQA method generates an overall quality estimate of a contrast-distorted image by properly combining local and global considerations. Thorough experiments on five databases/subsets demonstrate the superiority of our training-free blind technique over state-of-the-art full- and no-reference IQA methods. Furthermore, the proposed model is also applied to improve the performance of general-purpose blind quality metrics by a sizable margin.
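The global term, a symmetric Kullback-Leibler divergence between the image histogram and the maximum-information uniform histogram, can be sketched as follows. This is a plain illustration of that one comparison, not the paper's full metric; the smoothing constant is ours.

```python
import numpy as np

def symmetric_kl_to_uniform(img, bins=256, eps=1e-12):
    """Symmetric KL divergence between an 8-bit image's grey-level
    histogram and the uniform histogram: near 0 for a perfectly
    uniform (maximum-information) histogram, large for a heavily
    concentrated one. eps avoids log(0) on empty bins."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist.astype(float) + eps
    p /= p.sum()
    u = np.full(bins, 1.0 / bins)
    return float(np.sum(p * np.log(p / u)) + np.sum(u * np.log(u / p)))
```

A contrast-distorted image compresses the histogram toward few grey levels, which this divergence penalizes directly.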
In this paper, we investigate the problem of image contrast enhancement. Most existing technologies suffer from excessive enhancement, thereby introducing noise/artifacts and changing visual attention regions. One frequently used solution is manual parameter tuning, which is impractical for most applications since it is labor-intensive and time-consuming. In this research, we find that saliency preservation can help produce appropriately enhanced images, i.e., improved contrast without annoying artifacts. We therefore design an automatic contrast enhancement technique with a complete histogram modification framework and an automatic parameter selector. The framework combines the original image, its histogram-equalized product, and a visually pleasing version created by a sigmoid transfer function developed in our recent work. A visual quality criterion based on saliency preservation then guides the automatic parameter selection, so that a properly enhanced image can be generated. We test the proposed scheme on the Kodak and Video Quality Experts Group databases and compare it with the classical histogram equalization technique and its variants, as well as state-of-the-art contrast enhancement approaches. The experimental results demonstrate that our technique has superior saliency-preservation ability and an outstanding enhancement effect.
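The three-way combination at the heart of such a framework can be sketched as below. This is an illustrative reading only: the sigmoid gain/midpoint and the fixed mixing weights are placeholders, whereas the paper selects its parameters automatically via the saliency-preservation criterion.

```python
import numpy as np

def equalize(img):
    """Classical histogram equalization of an 8-bit greyscale image."""
    img = np.asarray(img, dtype=np.uint8)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum() / img.size          # cumulative distribution
    return 255.0 * cdf[img]                 # map each level via the CDF

def sigmoid_transfer(img, gain=8.0, mid=0.5):
    """S-shaped intensity mapping rescaled back to [0, 255]; gain and
    midpoint are illustrative, not the paper's tuned values."""
    x = np.asarray(img, float) / 255.0
    y = 1.0 / (1.0 + np.exp(-gain * (x - mid)))
    lo = 1.0 / (1.0 + np.exp(gain * mid))
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - mid)))
    return 255.0 * (y - lo) / (hi - lo)

def enhance(img, w=(0.4, 0.3, 0.3)):
    """Convex combination of the original, its equalized version, and
    its sigmoid-mapped version, with fixed illustrative weights."""
    out = (w[0] * np.asarray(img, float)
           + w[1] * equalize(img)
           + w[2] * sigmoid_transfer(img))
    return np.clip(out, 0, 255)
```

In the automatic framework, the saliency-preservation score would be evaluated over candidate weight settings and the best-scoring combination kept.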