In this work, a Universal Reversible Data Embedding method applicable to any Encrypted Domain (urDEED) is proposed. urDEED operates entirely in the encrypted domain and requires no feature of the signal prior to the encryption process. In particular, urDEED exploits the coding redundancy of the encrypted signal by partitioning it into segments referred to as imaginary codewords (ICs). The ICs are then entropy encoded using Golomb–Rice codewords (GRCs). Finally, each GRC is modified to accommodate two bits of the augmented payload. urDEED preserves the file size of the original (encrypted) input signal by embedding the quotient part of the GRCs as side information. Moreover, urDEED is fully reversible and universally applicable to any digital signal encrypted by any encryption method. Experimental results show that urDEED achieves an average embedding capacity of ∼0.169 bits per bit of the encrypted (host) signal.
•Designing a data embedding method that is universally applicable to any encrypted signal.
•Realizing data embedding universally in encrypted audio, image, and text signals with consistent carrier capacity.
•Achieving consistent and perfectly reversible functionality.
•Preserving the file size of the output relative to that of the original encrypted (input) signal.
•Offering higher embedding capacity and outperforming the most relevant method in the literature.
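The Golomb–Rice coding that urDEED builds on can be sketched minimally as follows. This shows only the standard codeword construction (a unary quotient followed by a k-bit binary remainder), not the paper's imaginary-codeword partitioning or payload-embedding steps, which the abstract does not detail:

```python
def gr_encode(value, k):
    """Golomb-Rice codeword for a non-negative integer with parameter k:
    unary-coded quotient (q ones, then a zero) + k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def gr_decode(bits, k):
    """Decode one Golomb-Rice codeword; returns (value, bits_consumed)."""
    q = 0
    while bits[q] == "1":   # count the unary quotient
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, q + 1 + k
```

For example, `gr_encode(9, 2)` gives the quotient 2 and remainder 1, i.e. the codeword `"11001"`; modifying such codewords to carry payload bits is the idea the abstract describes.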
•Design of a low-complexity WCE compression scheme suitable for capsule endoscopy.
•Finding a suitable chrominance color space to convert RGB endoscopic images.
•New chroma subsampling patterns and encoders for the compression module of wireless capsule endoscopy.
•First of its kind to use images in the WEO Clinical Endoscopy Atlas for WCE image compression.
•Achieving better compression performance than state-of-the-art techniques for the WCE image compression problem.
The data generated in wireless capsule endoscopy during examination of the gastrointestinal tract is huge and demands large storage and long processing times. One of the essential components of the capsule is the compression module, which needs to have low power and low complexity. This paper aims to bring significant improvements to the pre-processing, chroma subsampling, and encoding stages of image compression. Among various chrominance-based color spaces, YEF is analyzed in terms of statistical measures and found to be the best alternative to RGB for endoscopic images. New chroma subsampling patterns are tested and shown to improve data reduction. In the encoder, the combination of the proposed predictive coder and a modified Golomb-Rice coder generates a better compressed bit stream. Cubic spline interpolation is used for image reconstruction during decompression. Endoscopic images from the Gastrolab and WEO Clinical Endoscopy Atlas databases are used in the experiments. The results show that the proposed near-lossless compression scheme outperforms competing techniques in terms of compression rate, peak signal-to-noise ratio, and structural similarity index.
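The abstract does not give the YEF conversion or the proposed subsampling patterns, so as orientation here is the conventional 4:2:0-style scheme that such patterns refine: each 2×2 block of a chroma plane is averaged into a single sample, quartering the chroma data:

```python
def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane into one sample
    (conventional 4:2:0-style subsampling; the paper's new patterns differ).
    `chroma` is a list of rows with even height and width."""
    h, w = len(chroma), len(chroma[0])
    return [[(chroma[y][x] + chroma[y][x + 1]
              + chroma[y + 1][x] + chroma[y + 1][x + 1]) // 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

A 2×4 plane thus shrinks to a 1×2 plane, one averaged value per 2×2 block.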
This article deals with compression of binary sequences with a given number of ones, which can also be considered as a list of indexes of a given length. The first part of the article shows that the entropy H of random n-element binary sequences with exactly k elements equal to one satisfies the inequalities k·log2(0.48·n/k) < H < k·log2(2.72·n/k). Based on this result, we propose a simple coding using fixed-length words. Its main application is the compression of random binary sequences with a large disproportion between the number of zeros and the number of ones. Importantly, the proposed solution allows much faster decompression than Golomb-Rice coding, with a relatively small decrease in compression efficiency. The proposed algorithm can be particularly useful for database applications in which the speed of decompression is much more important than the degree of index-list compression.
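The entropy in question is H = log2 C(n, k), the number of bits needed to identify one of the C(n, k) equally likely sequences. The stated bounds can be checked numerically; here is a quick verification for a sparse example (n = 1000, k = 10, chosen here for illustration):

```python
import math

def log2_binom(n, k):
    """log2 of the binomial coefficient C(n, k), computed via log-gamma."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

# Entropy of a uniformly random n-bit sequence with exactly k ones,
# checked against the article's bounds.
n, k = 1000, 10
H = log2_binom(n, k)                       # about 77.8 bits
assert k * math.log2(0.48 * n / k) < H < k * math.log2(2.72 * n / k)
```

The upper constant 2.72 ≈ e reflects the standard bound C(n, k) ≤ (e·n/k)^k.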
In this paper, we propose a lossless electrocardiogram (ECG) compression method using a prediction error-based adaptive linear prediction technique. This method combines adaptive linear prediction, which minimizes the prediction error in ECG signal prediction, with modified Golomb-Rice coding, which encodes the prediction error into the binary code that forms the compressed data. We used the PTB Diagnostic ECG database, the European ST-T database, and the MIT-BIH Arrhythmia database for evaluation and achieved average compression ratios for single-lead ECG signals of 3.16, 3.75, and 3.52, respectively, despite the different signal acquisition setups of the databases. As the prediction order is crucial for this particular problem, we also investigate the validity of the popular linear prediction coefficients generally used in ECG compression by determining the prediction coefficients from the three databases using the autocorrelation method. The findings agree with previous work in that second-order linear prediction is suitable for the ECG compression application.
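The fixed second-order predictor that the abstract finds suitable is commonly x̂[n] = 2·x[n−1] − x[n−2] (a linear extrapolation from the last two samples). A minimal sketch of residual generation and exact reconstruction, without the paper's adaptive coefficient updates:

```python
def second_order_residuals(x):
    """Residuals of the fixed second-order predictor
    x_hat[n] = 2*x[n-1] - x[n-2]; the first two samples are
    bootstrapped with zeroth- and first-order prediction."""
    res = [x[0], x[1] - x[0]]
    for n in range(2, len(x)):
        res.append(x[n] - (2 * x[n - 1] - x[n - 2]))
    return res

def reconstruct(res):
    """Invert the residuals exactly (the scheme is lossless)."""
    x = [res[0], res[0] + res[1]]
    for n in range(2, len(res)):
        x.append(res[n] + 2 * x[n - 1] - x[n - 2])
    return x
```

On a smooth ramp like [3, 5, 8, 12, 17] the residuals collapse to small values ([3, 2, 1, 1, 1]), which is what makes them cheap to Golomb-Rice encode.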
This brief presents a VLSI implementation of an efficient lossless compression scheme for electrocardiogram (ECG) data encoding to save storage space and reduce transmission time. To exploit these savings in hardware, a memory-less design operating at a high clock speed has been implemented in VLSI. The ECG compression algorithm comprises two parts: an adaptive linear prediction technique and a content-adaptive Golomb-Rice code. An efficient, low-power VLSI implementation of the compression algorithm is presented. To improve performance, the proposed VLSI design uses bit-shifting operations as a replacement for the various arithmetic operations. The VLSI implementation has been applied to the MIT-BIH arrhythmia database and achieves a lossless bit compression rate of 2.77. Moreover, the VLSI architecture contains a 3.1 K gate count, and the core of the chip consumes 27.2 nW while operating at a 1 kHz frequency. The core area is 0.05 mm² in a 90 nm CMOS process.
•In the hybrid coding scheme, DPCM–VLC is used for the low-pass band and 1D SPIHT is used for the high-pass band.
•DPCM–VLC is improved by additional sub-schemes.
•The target bit length of each block is determined by a block-based bit allocation scheme.
•Block complexity is estimated using the coding information of the upper block.
•The proposed 1D SPIHT algorithm is implemented in hardware.
In general, to achieve high compression efficiency, a 2D image or a 2D block is used as the compression unit. However, 2D compression requires a large memory size and long latency when input data are received in a raster scan order that is common in existing TV systems. To address this problem, a 1D compression algorithm that uses a 1D block as the compression unit is proposed. 1D set partitioning in hierarchical trees (SPIHT) is an effective compression algorithm that fits the encoded bit length to the target bit length precisely. However, the 1D SPIHT can have low compression efficiency because 1D discrete wavelet transform (DWT) cannot make use of the redundancy in the vertical direction. This paper proposes two schemes for improving compression efficiency in the 1D SPIHT. First, a hybrid coding scheme that uses different coding algorithms for the low and high frequency bands is proposed. For the low-pass band, a differential pulse code modulation–variable length coding (DPCM–VLC) is adopted, whereas a 1D SPIHT is used for the high-pass band. Second, a scheme that determines the target bit length of each block by using spatial correlation with a minimal increase in complexity is proposed. Experimental results show that the proposed algorithm improves the average peak signal to noise ratio (PSNR) by 2.97 dB compared with the conventional 1D SPIHT algorithm. With the hardware implementation, the throughputs of both encoder and decoder designs are 6.15 Gbps, and the gate counts of the encoder and decoder designs are 42.8 K and 57.7 K, respectively.
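The DPCM core of the low-pass-band coder can be sketched as follows; the VLC stage and the 1D SPIHT used for the high-pass band are omitted, so this shows only the differencing step that DPCM–VLC builds on:

```python
def dpcm_encode(samples):
    """DPCM: transmit the first sample, then successive differences.
    Smooth low-pass data yields small differences, cheap to VLC-encode."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def dpcm_decode(diffs):
    """Undo DPCM by cumulative summation (exactly invertible)."""
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out
```

For instance, [4, 7, 7, 2] encodes to [4, 3, 0, −5] and decodes back losslessly.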
This paper presents a method for wireless ECG compression and fully lossless decompression using a combination of three techniques, in order to save storage space while reducing transmission time. The first technique in the proposed algorithm is adaptive linear prediction, which achieves high sensitivity and positive prediction. The second is content-adaptive Golomb-Rice coding, used with a window size to encode the residual prediction error. The third is a suitable packing format, which enables a real-time decoding process. The proposed algorithm is evaluated and verified using over 48 recordings from the MIT-BIH arrhythmia database and is shown to achieve lossless bit compression rates of 2.83× in Lead V1 and 2.77× in Lead V2. The proposed algorithm shows better performance than previous real-time lossless ECG compression studies and can be used for transmitting biomedical signals over bounded-bandwidth e-health devices. The overall compression system is also built on an ARM M4 processor, which ensures high accuracy and consistent timing behavior.
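The abstract does not specify how the window drives the Golomb-Rice parameter, so the rule below is an assumed illustration of content-adaptive parameter selection: choose the smallest k such that 2^k covers the mean residual magnitude over a recent window:

```python
def adaptive_rice_k(residuals, window=16):
    """Illustrative content-adaptive Rice parameter: smallest k with
    2**k >= mean |residual| over the last `window` samples.
    The window size and this exact rule are assumptions, not the
    paper's published method."""
    recent = residuals[-window:]
    mean_mag = sum(abs(r) for r in recent) / len(recent)
    k = 0
    while (1 << k) < mean_mag:
        k += 1
    return k
```

Small recent residuals yield small k (short remainders), while bursts of large residuals raise k so the unary quotients stay short.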
This paper presents a novel hardware-oriented image compression algorithm and its very large-scale integration (VLSI) implementation for wireless sensor networks. The proposed image compression algorithm consists of fuzzy decision, block partition, digital halftoning, and block truncation coding (BTC) techniques. A novel variable-size block partition technique is used in the proposed algorithm to improve image quality and compression performance. In addition, eight different block types are encoded by Huffman coding according to their probability to further increase the compression ratio. To achieve low-cost and low-power characteristics, a novel iteration-based BTC training module was created to obtain the representative levels and meet the requirements of wireless sensor networks. Prediction and modified Golomb-Rice coding modules were designed to encode the information of the representative levels for higher compression performance. The proposed algorithm was realized with a VLSI technique in a UMC 0.18-μm CMOS process. The synthesized gate count and core area of this design were 6.4 k gates and 60,000 μm², respectively. The operating frequency and power consumption were 100 MHz and 3.11 mW, respectively. Compared with previous JPEG, JPEG-LS, and fixed-size BTC-based designs, this work reduces the gate count by 20.9%. Moreover, the proposed design requires only a one-line-buffer memory rather than the frame-buffer memory required by previous designs.
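Classic BTC, on which this design builds, replaces a block with a one-bit-per-pixel bitmap and two representative levels. A minimal sketch (the paper's iteration-based training for the levels is not reproduced; here each level is simply the mean of its side of the threshold):

```python
def btc_block(block):
    """Classic BTC for a flat list of pixel values: threshold at the
    block mean, keep a 1-bit map plus two representative levels
    (mean of each side). The paper trains the levels iteratively."""
    mean = sum(block) / len(block)
    hi = [v for v in block if v >= mean]
    lo = [v for v in block if v < mean]
    a = sum(lo) / len(lo) if lo else mean   # low representative level
    b = sum(hi) / len(hi)                   # high representative level
    bitmap = [1 if v >= mean else 0 for v in block]
    return bitmap, round(a), round(b)
```

A 4-pixel block [2, 2, 8, 8] reduces to the bitmap [0, 0, 1, 1] and levels 2 and 8; the prediction and Golomb-Rice modules then compress those levels further.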
Onboard image processing systems for hyperspectral sensors have been developed to maximize image data transmission efficiency for large-volume, high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast, small-footprint lossless image compression capability is essential for reducing the size and weight of the sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding performance, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. The method supports progressive decompression using resolution scaling while maintaining superior speed and low complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which allows onboard hardware to be reduced by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost.
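Two-dimensional prediction of this kind estimates each pixel from already-decoded neighbors; the sketch below is a simplified stand-in (averaging the left and upper neighbors), not the paper's exact interpolation predictor, which is not specified in the abstract:

```python
def interp_predict(img, y, x):
    """Predict pixel (y, x) from decoded neighbors: the average of the
    left and upper pixels, falling back to whichever exists at the
    image border. A simplified stand-in for 2D interpolation prediction."""
    if x == 0 and y == 0:
        return 0                      # nothing decoded yet
    if y == 0:
        return img[y][x - 1]          # top row: left neighbor only
    if x == 0:
        return img[y - 1][x]          # left column: upper neighbor only
    return (img[y][x - 1] + img[y - 1][x]) // 2
```

The residual (actual minus predicted value) is what the adaptive Golomb-Rice coder then entropy-encodes.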
This paper proposes a high-throughput lossless image-compression algorithm based on Golomb-Rice coding and its hardware architecture. The proposed solution increases compression ratios (CRs) while preserving the throughput by taking advantage of a novel parallel variable-length sign coding (PVSC) algorithm that reduces the sign bits to achieve a higher CR. In addition, the proposed solution adopts and modifies two existing compression algorithms to improve the overall compression performance. The experimental results show that the proposed solution yields an average CR of 3.12, which is higher than those achieved with the previous algorithms. The hardware implementation of the proposed solution for an 8×8 block unit achieves throughputs of 18 GBps and 24 GBps when encoding and decoding, respectively. This hardware performance is enough to handle 7680×4320 @ 240 Hz image processing.
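The PVSC algorithm itself is not detailed in the abstract; the standard baseline it improves on is the zigzag mapping that folds the sign of a residual into its magnitude before Golomb-Rice coding, so that no separate sign bit is stored per sample:

```python
def zigzag(v):
    """Fold sign into magnitude: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    Small residuals of either sign get small codes."""
    return 2 * v if v >= 0 else -2 * v - 1

def unzigzag(u):
    """Exact inverse of the zigzag mapping."""
    return u // 2 if u % 2 == 0 else -(u + 1) // 2
```

Sign-coding schemes such as PVSC aim to spend even fewer bits on signs than this one-bit-per-value interleaving.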