We propose the concept of quality-aware images, in which certain features extracted from the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image. To demonstrate the idea, we build a practical quality-aware image encoding, decoding, and quality analysis system, which employs: 1) a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and 2) a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.
Rate-distortion optimization (RDO) is widely applied in video coding, where it aims at minimizing the coding distortion at a target bitrate. Conventionally, RDO is performed independently on each individual frame to avoid high computational complexity. However, the extensive use of temporal/spatial prediction results in strong coding dependencies among neighboring frames, which render per-frame RDO suboptimal. To further improve video coding performance, it would be desirable to perform global RDO over a group of neighboring frames while maintaining approximately the same coding complexity. In this paper, the problem of global RDO is studied by jointly determining the quantization parameters (QPs) for a group of neighboring frames. Specifically, an adaptive frame-level QP selection algorithm is proposed for H.265/HEVC random access coding by taking into account the inter-frame dependency. To measure the inter-frame dependency, a model based on the energy of prediction residuals is first established. With the help of the model, the problem of global RDO is then analyzed for the hierarchical coding structure in H.265/HEVC. Finally, the QP and the corresponding Lagrangian multiplier for each coding frame are determined adaptively by considering the total impact of its coding distortion on that of future frames in the encoding order. Experimental results show that, in comparison with HM-16.0, the proposed algorithm reduces the BD-rate by 3.49% on average with a negligible increase in encoding time. In addition, the quality fluctuation of the video coded by the proposed algorithm is lower than that of HM-16.0.
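The frame-level QP and Lagrange multiplier selection described above can be illustrated with a toy per-frame RDO sweep. This is a minimal sketch, not the paper's adaptive algorithm: it assumes the HEVC-style relation λ = c·2^((QP−12)/3) and takes synthetic distortion/rate models as inputs (all function names and constants here are illustrative):

```python
def hevc_lambda(qp, c=0.85):
    # HEVC-style Lagrange multiplier model: lambda grows exponentially with QP
    return c * 2 ** ((qp - 12) / 3.0)

def rd_cost(distortion, rate, lam):
    # Classic Lagrangian RD cost J = D + lambda * R
    return distortion + lam * rate

def pick_qp(qps, dist_of, rate_of):
    # Sweep candidate QPs and keep the one minimizing J = D + lambda(QP) * R
    best_qp, best_j = None, float("inf")
    for qp in qps:
        j = rd_cost(dist_of(qp), rate_of(qp), hevc_lambda(qp))
        if j < best_j:
            best_qp, best_j = qp, j
    return best_qp
```

The paper's contribution is, in effect, to replace the fixed λ(QP) mapping above with values adapted to how a frame's distortion propagates to later frames in the encoding order.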
One of the most important challenges in cognitive radio is how to measure or sense the existence of a signal transmission in a specific channel, that is, how to conduct spectrum sensing. In this letter, we first formulate spectrum sensing as a goodness-of-fit testing problem, and then apply the Anderson-Darling test, one of the goodness-of-fit tests, to derive a sensing method called Anderson-Darling sensing. It is shown by both analysis and numerical results that, under the same sensing conditions and channel environments, Anderson-Darling sensing is much more sensitive in detecting an existing signal than energy-detector-based sensing, especially when the received signal has a low signal-to-noise ratio (SNR) and no prior knowledge of primary user signals is available.
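The idea can be sketched as follows: under the noise-only hypothesis the received samples follow a known distribution (standard normal here, for illustration), and the Anderson-Darling statistic measures how badly the empirical distribution fits it; a large statistic triggers a "signal present" decision. This is a simplified sketch, not the letter's exact detector, and the threshold below is illustrative:

```python
import math

def std_normal_cdf(x):
    # CDF of N(0, 1), the assumed noise-only distribution
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def anderson_darling_stat(samples):
    # A^2 = -n - (1/n) * sum_{i=1..n} (2i-1) [ln F(x_(i)) + ln(1 - F(x_(n+1-i)))]
    xs = sorted(samples)
    n = len(xs)
    s = 0.0
    for i, x in enumerate(xs, start=1):
        f = min(max(std_normal_cdf(x), 1e-12), 1 - 1e-12)          # clamp for log safety
        g = min(max(std_normal_cdf(xs[n - i]), 1e-12), 1 - 1e-12)  # x_(n+1-i), 0-indexed
        s += (2 * i - 1) * (math.log(f) + math.log(1.0 - g))
    return -n - s / n

def anderson_darling_sensing(samples, threshold=2.5):
    # Declare "signal present" when the fit to pure noise is rejected
    return anderson_darling_stat(samples) > threshold
```

Even a small mean shift caused by a primary user's signal inflates the statistic sharply, which is the source of the sensitivity advantage over energy detection at low SNR.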
The impact of JPEG compression on deep learning (DL) in image classification is revisited. Given an underlying deep neural network (DNN) pre-trained with pristine ImageNet images, it is demonstrated that if, for any original image, one can select, among its many JPEG compressed versions including its original version, a suitable version as an input to the underlying DNN, then the classification accuracy of the underlying DNN can be improved significantly while the size in bits of the selected input is, on average, reduced dramatically in comparison with the original image. This is in contrast to the conventional understanding that JPEG compression generally degrades the classification accuracy of DL. Specifically, for each original image, consider its 10 JPEG compressed versions with quality factor (QF) values from {100,90,80,70,60,50,40,30,20,10}. Under the assumption that the ground truth label of the original image is known at the time of selecting an input, but unknown to the underlying DNN, we present a selector called Highest Rank Selector (HRS). It is shown that HRS is optimal in the sense of achieving the highest Top k accuracy on any set of images for any k among all possible selectors. When the underlying DNN is Inception V3 or ResNet-50 V2, HRS improves, on average, the Top 1 and Top 5 classification accuracy on the whole ImageNet validation dataset by 5.6% and 1.9%, respectively, while reducing the input size in bits dramatically: the compression ratio (CR) between the size of the original images and the size of the inputs selected by HRS is 8 for the whole ImageNet validation dataset. When the ground truth label of the original image is unknown at the time of selection, we further propose a new convolutional neural network (CNN) topology which is based on the underlying DNN and takes the original image and its 10 JPEG compressed versions as 11 parallel inputs.
It is demonstrated that the proposed new CNN topology, even when partially trained, can consistently improve the Top 1 accuracy of Inception V3 and ResNet-50 V2 by approximately 0.4%, and the Top 5 accuracy of Inception V3 and ResNet-50 V2 by 0.32% and 0.2%, respectively. Other selectors without knowledge of the ground truth label of the original image are also presented. They maintain the Top 1 accuracy, the Top 5 accuracy, or both the Top 1 and Top 5 accuracy of the underlying DNN, while achieving CRs of 8.8, 3.3, and 3.1, respectively.
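The Highest Rank Selector can be sketched in a few lines: given one class-score vector per compressed version and the ground-truth label, pick the version in which that label ranks best. The function and variable names below are illustrative, not the paper's implementation:

```python
def label_rank(scores, true_label):
    # Rank of the true label within one score vector (1 = top-ranked)
    return 1 + sum(1 for s in scores if s > scores[true_label])

def highest_rank_selector(version_scores, true_label):
    # Pick the compressed version whose score vector ranks the true label highest.
    # Minimizing the true label's rank maximizes Top-k accuracy for every k at once,
    # which is the sense in which HRS is optimal among all selectors.
    return min(range(len(version_scores)),
               key=lambda v: label_rank(version_scores[v], true_label))
```

Ties are broken here by taking the first version with the best rank; a practical variant might instead break ties toward the smallest file size to maximize the compression ratio.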
In comparison with H.264/Advanced Video Coding, the newest video coding standard, High Efficiency Video Coding (HEVC), improves video coding rate-distortion (RD) performance, but at the price of a significant increase in encoding complexity, especially in intra-mode decision, due to the adoption of more complex block partitions and more candidate intra-prediction modes (IPMs). To reduce the mode decision complexity in HEVC intra-frame coding while maintaining its RD performance, in this paper we first formulate the mode decision problem in intra-frame coding as a Bayesian decision problem based on the newly proposed transparent composite model (TCM) for discrete cosine transform coefficients, and then present an outlier-based fast intra-mode decision (OIMD) algorithm. The proposed OIMD algorithm reduces the complexity by using outliers identified by TCM to make a fast coding unit split/nonsplit decision and to reduce the number of IPMs to be compared. To further take advantage of the outlier information furnished by TCM, we also refine entropy coding in HEVC by encoding the outlier information first, and then the actual mode decision conditionally given the outlier information. The proposed OIMD algorithm can work with or without the proposed entropy coding refinement. Experiments show that for the all-intra-main test configuration of HEVC: 1) when applied alone, the proposed OIMD algorithm reduces, on average, the encoding time (ET) by 50% with a 0.7% Bjontegaard distortion (BD)-rate increase and 2) when applied in conjunction with the proposed entropy coding refinement, it reduces, on average, both the ET by 50% and the BD-rate by 0.15%.
Both inter‐ and intraspecific trait variation are critical to species distribution along environmental gradients, but our understanding of these patterns predominantly relies upon species‐level trait means and variances. Trait integration, defined as how strongly multiple traits covary with one another, is a key indicator of the dimensionality of functional space for accommodating biodiversity in communities. As trait covariance can differ dramatically at the interspecific versus intraspecific level, how intraspecific trait variability alters the strength of trait integration and eventually modulates biodiversity along environmental gradients has rarely been tested. Here, we measured nine functional traits (leaf area, specific leaf area, leaf and stem dry‐matter content, leaf nitrogen and phosphorus contents, specific stem length, Huber value and maximum height) paired with site‐specific soil fertility for 70 woody communities in subtropical Chinese forests. All species‐by‐site combinations were sampled to ensure a sufficient representation of intraspecific trait variation across sites. Community‐level trait integration was quantified from the variance of the eigenvalues of the trait correlation matrix. The direct and/or indirect effects of soil fertility and trait integration on species richness and trait diversity were assessed through path analyses. Trait integration quantified from both inter‐ and intraspecific variances was on average 21.7% weaker than that from interspecific variance alone, indicating a crucial role of intraspecific trait variability in promoting niche dimensionality. Whether accounting for intraspecific variation or not, less fertile sites had stronger trait integration, which in turn depressed both taxonomic and functional diversity, supporting the assumption that higher environmental stress demands stronger tradeoffs among multiple functions in viable strategies.
Importantly, the negative association between trait integration and species richness became stronger when accounting for intraspecific variation, suggesting that species distribution and occurrence can be a consequence of intraspecific trait variability. This study highlights the importance of intraspecific trait variability in understanding functional tradeoffs underlying biodiversity patterns.
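The integration index used above, the variance of the eigenvalues of the trait correlation matrix, can be computed directly: when traits covary strongly, most variance loads on a few eigenvalues, inflating the index. A minimal sketch with NumPy (rows are individuals or species, columns are traits; names are illustrative):

```python
import numpy as np

def trait_integration(trait_matrix):
    # Integration index: variance of eigenvalues of the trait correlation matrix.
    # Strong trait covariation concentrates variance in few eigenvalues -> high index;
    # fully independent traits give eigenvalues near 1 -> index near 0.
    corr = np.corrcoef(trait_matrix, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    return float(np.var(eigvals))
```

For p traits the index ranges from 0 (no integration) up to p − 1 (all traits perfectly correlated), so it can also be normalized by p − 1 for comparison across trait sets of different size.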
This paper revisits the problem of rate distortion optimization (RDO) with a focus on inter-picture dependence. A joint RDO framework which incorporates the Lagrange multiplier as one of the parameters to be optimized is proposed. Simplification strategies are demonstrated for practical applications. To make the problem tractable, we consider an approach where the prediction residuals of pictures in a video sequence are assumed to be emitted from a finite set of sources. Consequently, the RDO problem is formulated as finding optimal coding parameters for a finite number of sources, regardless of the length of the video sequence. Specifically, in cases where a hierarchical prediction structure is used, the prediction residuals of pictures at the same prediction layer are assumed to be emitted from a common source. Following this approach, we propose an iterative algorithm to alternately optimize the selection of quantization parameters (QPs) and the corresponding Lagrange multipliers. Based on the results of the iterative algorithm, we further propose two practical algorithms to compute QPs and Lagrange multipliers for random access (RA) hierarchical video coding: the first uses a fixed formula to compute the QPs and Lagrange multipliers, and the second adaptively adjusts both. Experimental results show that these three algorithms, integrated into the HM 16.20 reference software of HEVC, achieve considerable RD improvements over the standard HM 16.20 encoder in the common RA test configuration.
Rate distortion (RD) optimization for H.264 interframe coding with complete baseline decoding compatibility is investigated on a frame basis. Using soft decision quantization (SDQ) rather than the standard hard decision quantization, we first establish a general framework in which motion estimation, quantization, and entropy coding (in H.264) for the current frame can be jointly designed to minimize a true RD cost given previously coded reference frames. We then propose three RD optimization algorithms: a graph-based algorithm for near-optimal SDQ in H.264 baseline encoding given motion estimation and quantization step sizes; an algorithm for near-optimal residual coding in H.264 baseline encoding given motion estimation; and an iterative overall algorithm to optimize H.264 baseline encoding for each individual frame given previously coded reference frames, with the three embedded in the indicated order. The graph-based algorithm for near-optimal SDQ is the core; given motion estimation and quantization step sizes, it is guaranteed to perform optimal SDQ if the weak adjacent-block dependency utilized in the context adaptive variable length coding of H.264 is ignored for optimization. The proposed algorithms have been implemented based on the reference encoder JM82 of H.264 with complete compatibility to the baseline profile. Experiments show that for a set of typical video test sequences, the graph-based algorithm for near-optimal SDQ, the algorithm for near-optimal residual coding, and the overall algorithm achieve, on average, 6%, 8%, and 12% rate reduction, respectively, at the same PSNR (ranging from 30 to 38 dB) when compared with the RD optimization method implemented in the H.264 reference software.
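Soft decision quantization can be illustrated with a toy scalar version: instead of hard rounding each coefficient, SDQ assigns it the quantization level minimizing a Lagrangian cost D + λR. This sketch uses a crude level-magnitude rate proxy, not the paper's graph/trellis formulation over CAVLC contexts; all names are illustrative:

```python
def hard_quantize(coeff, q):
    # Conventional hard decision: nearest reconstruction level
    return round(coeff / q)

def toy_rate(level):
    # Crude rate proxy: zeros are nearly free, magnitude costs bits
    return 1 + 2 * abs(level)

def soft_quantize(coeff, q, lam, rate_of_level=toy_rate):
    # Soft decision: search nearby levels for the minimum of D + lambda * R
    base = hard_quantize(coeff, q)
    candidates = [base - 1, base, base + 1, 0]
    def cost(level):
        distortion = (coeff - level * q) ** 2
        return distortion + lam * rate_of_level(level)
    return min(candidates, key=cost)
```

A coefficient just above q/2 is hard-quantized to level 1, but when λ is large the cheaper level 0 wins the Lagrangian comparison; the real SDQ of the paper makes this trade-off jointly across a block, since entropy coding couples neighboring decisions.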
•Co-gasification of sewage sludge and MSW through a fluidized bed was studied.
•Natural olivine was applied as the bed material for improving H2 yield.
•Effects of metal/olivine and operating variables on syngas composition were studied.
•Olivine modified with 1 wt% Fe showed superior H2 production efficiency.
•The H2 production ratio reached 26.99 mol% under the conditions of this study.
The abundant mineral olivine was applied as the second-stage gasifier bed material to increase the H2 content in the syngas produced during the co-gasification of sewage sludge and simulated municipal solid waste. Regardless of the second-stage bed material used, the hydrogen-to-syngas ratio in the syngas output from the second stage increased when the S/B ratio increased from 0.02 to 0.13. This demonstrates that adding steam enhances the hydrogen-to-syngas ratio in the second-stage syngas. Under the same S/B conditions, the H2 content in the syngas varied when the original and modified olivine were separately used as bed materials in the second-stage gasifier. The H2 content was 23.12 mol.% when the original olivine was used and increased to 25.27 and 26.99 mol.% when the olivine was impregnated with 1 wt% Ni and 1 wt% Fe, respectively. These results evidenced that Ni and Fe could efficiently catalyze the co-gasification of sludge and simulated municipal solid waste and improve the hydrogen production efficiency.
Distributed arithmetic coding (DAC) is a variant of arithmetic coding (AC) that can realize Slepian-Wolf coding in a nonlinear way. In our previous work, we defined the codebook cardinality spectrum (CCS) and the Hamming distance spectrum (HDS) for DAC. In this paper, we make use of the CCS and HDS to analyze tailed DAC, a form of DAC that, like traditional AC, maps the last few symbols of each source block onto non-overlapped intervals. First, we derive the exact HDS formula for tailless DAC, a form of DAC that maps all the symbols of each source block onto overlapped intervals, and show that the previously given HDS formula is in fact an approximation. Then, the HDS formula is extended to tailed DAC. Using the CCS, we also deduce the average codebook cardinality, which is closely related to decoding complexity, and the rate loss of tailed DAC. The effects of the tail length are extensively analyzed. It is revealed that by increasing the tail length to a value not close to the bitstream length, closely spaced codewords within the same codebook can be removed at the cost of a higher decoding complexity and a larger rate loss. Finally, the theoretical analyses are verified by experiments.