Video applications have become one of the major services in engineering, implemented by server–client systems connected via the Internet, broadcasting services for mobile devices such as smartphones, and surveillance cameras for security. Most current video encoding mechanisms reduce the data rate through lossy compression methods such as the MPEG formats. However, applications with special needs for high-speed communication, such as display applications and high-accuracy object detection from a video stream, require an encoding mechanism without any loss of pixel information, called visually lossless compression. This paper focuses on Adaptive Differential Pulse Code Modulation (ADPCM), which encodes a data stream at a constant bit length per data element. The conventional ADPCM, however, has no mechanism to control the encoding bit length dynamically. We propose a novel ADPCM, called ADPCM-VBL, whose encoding/decoding mechanism provides variable bit-length control. Furthermore, since the data encoded by ADPCM are expected to maintain low entropy, we expect a further reduction in data volume by applying lossless data compression. Combining ADPCM-VBL with lossless data compression, this paper proposes a video transfer system that autonomously controls throughput in the communication data path. Through evaluations focusing on encoding performance and image quality, we confirm that the proposed mechanisms work effectively for applications that need visually lossless compression, encoding the video stream with low latency.
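The core DPCM-with-adaptive-step idea behind ADPCM can be sketched in a few lines. This is a minimal illustrative encoder/decoder, not the paper's ADPCM-VBL: the bit length is fixed here, and the step-adaptation rule (widen on clipping, otherwise narrow) is a simplifying assumption.

```python
def adpcm_encode(samples, bits=4, step=8):
    """Encode each sample as a fixed-bit-length quantized prediction residual."""
    predicted, codes = 0, []
    max_code = (1 << (bits - 1)) - 1      # e.g. 4 bits -> codes in [-8, 7]
    for s in samples:
        diff = s - predicted
        code = max(-max_code - 1, min(max_code, round(diff / step)))
        codes.append(code)
        predicted += code * step          # track the decoder's reconstruction
        # Step adaptation (an assumption, not the paper's rule):
        # widen after saturation, narrow otherwise.
        step = step * 2 if abs(code) >= max_code else max(1, step // 2)
    return codes

def adpcm_decode(codes, bits=4, step=8):
    """Mirror the encoder's predictor and step adaptation exactly."""
    predicted, out = 0, []
    max_code = (1 << (bits - 1)) - 1
    for code in codes:
        predicted += code * step
        out.append(predicted)
        step = step * 2 if abs(code) >= max_code else max(1, step // 2)
    return out
```

Because the encoder predicts from its own reconstruction, encoder and decoder stay in lockstep; quantization error is bounded by the current step rather than accumulating.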
Modern daily life activities produce vast amounts of information, and storing it on digital devices or transmitting it over the Internet is challenging, leading to the necessity of data compression. Research on data compression has therefore become a topic of great interest. Since compressed data are generally smaller than the original, data compression saves storage and increases transmission speed. In this article, we propose a text compression technique using the GPT-2 language model and Huffman coding. In the proposed method, the Burrows–Wheeler transform and a list of keys are used to reduce the length of the original text file. Finally, we apply the GPT-2 language model and then Huffman coding for encoding. The proposed method is compared with state-of-the-art text compression techniques, and we show that it achieves a gain in compression ratio over those methods.
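The final Huffman stage of such a pipeline is standard and can be sketched directly; this is a generic textbook Huffman coder, not the paper's full BWT/GPT-2 pipeline.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-code table {symbol: bitstring} from symbol frequencies."""
    freqs = Counter(text)
    # Heap entries carry a unique tiebreaker so trees are never compared.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)    # merge the two rarest subtrees
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (a, b)))
        tiebreak += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol edge case
    walk(heap[0][2], "")
    return codes

def huffman_encode(text, codes):
    return "".join(codes[c] for c in text)
```

Frequent symbols get shorter codes, so the output bitstring approaches the entropy of the (post-transform) symbol distribution.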
This paper presents a novel electrocardiogram (ECG) processing technique for joint data compression and QRS detection in a wireless wearable sensor. The proposed algorithm aims at lowering the average complexity per task by sharing the computational load among multiple essential signal-processing tasks needed for wearable devices. The compression algorithm, which is based on an adaptive linear data prediction scheme, achieves a lossless bit compression ratio of 2.286x. The QRS detection algorithm achieves a sensitivity (Se) of 99.64% and positive predictivity (+P) of 99.81% when tested with the MIT/BIH Arrhythmia database. The lower overall complexity and good performance render the proposed technique suitable for wearable/ambulatory ECG devices.
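The lossless prediction-based compression idea can be illustrated with a fixed second-order linear predictor; the paper's scheme adapts its coefficients, which this sketch deliberately omits.

```python
def prediction_residuals(samples):
    """Replace each sample with its residual from the linear prediction
    x_hat[n] = 2*x[n-1] - x[n-2] (a fixed predictor, for illustration only).
    Smooth signals like ECG baselines yield small residuals that entropy-code
    into fewer bits than the raw samples."""
    res = list(samples[:2])               # first two samples sent verbatim
    for n in range(2, len(samples)):
        pred = 2 * samples[n - 1] - samples[n - 2]
        res.append(samples[n] - pred)
    return res

def reconstruct(res):
    """Invert the residual transform exactly (lossless round trip)."""
    out = list(res[:2])
    for n in range(2, len(res)):
        out.append(res[n] + 2 * out[n - 1] - out[n - 2])
    return out
```

Since prediction and reconstruction use identical integer arithmetic, the round trip is exact, which is what makes the scheme lossless.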
Despite the growing availability of high-capacity computational platforms, implementation complexity remains a great concern for the real-world deployment of neural networks. This concern is due not only to the huge costs of state-of-the-art network architectures, but also to the recent push towards edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been gaining interest for their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. The present paper is dedicated to the development of a novel compression scheme for neural networks. To this end, a new form of ℓ-norm-based regularization is first developed, which is capable of inducing strong sparseness in the network during training. Then, by targeting the smaller weights of the trained network with pruning techniques, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves the use of ℓ-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented to show the effectiveness of the proposed scheme and to compare it with competing approaches.
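The regularize-prune-fine-tune pipeline can be sketched on a toy linear model. Everything below is an illustrative assumption, not the paper's method: an ℓ1 penalty stands in for the paper's norm-based regularizer (whose exact norm is not specified above), and the threshold and learning rates are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 0.5]             # sparse ground truth: 3 of 10 active
y = X @ true_w

# Stage 1: training with a sparsity-inducing penalty (l1 subgradient here).
w = np.zeros(10)
lam, lr = 0.1, 0.01
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y) + lam * np.sign(w)
    w -= lr * grad

# Stage 2: magnitude pruning of the small weights.
mask = np.abs(w) > 0.1
w_pruned = w * mask

# Stage 3: fine-tune only the surviving weights (no penalty).
for _ in range(200):
    grad = X.T @ (X @ w_pruned - y) / len(y)
    w_pruned -= lr * grad * mask          # frozen weights receive no updates
```

The penalty drives irrelevant weights toward zero so pruning removes them cheaply, and fine-tuning recovers the accuracy lost to the penalty's shrinkage bias on the surviving weights.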
High-throughput sequencing technologies produce large collections of data, mainly DNA sequences with additional information, requiring the design of efficient and effective methodologies for both their compression and storage. In this context, we first provide a classification of the main techniques that have been proposed, according to three specific research directions that have emerged from the literature and, for each, we provide an overview of the current techniques. Finally, to make this review useful to researchers and technicians applying the existing software and tools, we include a synopsis of the main characteristics of the described approaches, including details on their implementation and availability. Performance of the various methods is also highlighted, although the state of the art does not lend itself to a consistent and coherent comparison among all the methods presented here.
This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), one of the state-of-the-art lossless compression technologies in the data compression literature. The main features of the proposed method include the use of 3-D predictors, 3-D block octree partitioning and classification, volume-based optimization, and support for 16-bit-depth images. Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as other proposals based on the MRP algorithm.
Currently, the applications of the Internet of Things (IoT) generate a large amount of sensor data at a very high pace, making it a challenge to collect and store the data. This scenario brings about the need for effective data compression algorithms to make the data manageable among tiny and battery-powered devices and, more importantly, shareable across the network. Additionally, considering that, very often, wireless communications (e.g., low-power wide-area networks) are adopted to connect field devices, user payload compression can also provide benefits derived from better spectrum usage, which in turn can result in advantages for high-density application scenarios. As a result of this increase in the number of connected devices, a new concept has emerged, called TinyML. It enables the use of machine learning on tiny, computationally restrained devices. This allows intelligent devices to analyze and interpret data locally and in real time. Therefore, this work presents a new data compression solution (algorithm) for the IoT that leverages the TinyML perspective. The new approach is called the Tiny Anomaly Compressor (TAC) and is based on data eccentricity. TAC does not require previously established mathematical models or any assumptions about the underlying data distribution. In order to test the effectiveness of the proposed solution and validate it, a comparative analysis was performed on two real-world datasets with two other algorithms from the literature (namely Swing Door Trending (SDT) and the Discrete Cosine Transform (DCT)). It was found that the TAC algorithm showed promising results, achieving a maximum compression rate of 98.33%. Additionally, it also surpassed the two other models regarding the compression error and peak signal-to-noise ratio in all cases.
To deal with the exponential increase of digital data, new compression technologies are needed for more efficient representation of information. We introduce a physics-based transform that enables image compression by increasing the spatial coherency. We also present the Stretched Modulation Distribution, a new density function that provides the recipe for the proposed image compression. Experimental results show that pre-compression using our method can improve the performance of the JPEG 2000 format.
The underlying theme of this paper is to explore the various facets of power systems data through the lens of graph signal processing (GSP), laying down the foundations of the Grid-GSP framework. Grid-GSP provides an interpretation of the spatio-temporal properties of voltage phasor measurements by showing how well-known power systems modeling supports a generative low-pass graph filter model for the state variables, namely the voltage phasors. Using this model, we formalize the empirical observation that voltage phasor measurement data lie in a low-dimensional subspace and tie their spatio-temporal structure to generator voltage dynamics. The Grid-GSP generative model is then successfully employed to investigate grid-related problems of data sampling and interpolation, network inference, anomaly detection, and data compression. Numerical results on a large synthetic grid that mimics the real grid of the state of Texas, ACTIVSg2000, and on real-world measurements from ISO-New England verify the efficacy of applying Grid-GSP methods to electric grid data.
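The low-pass graph-filter model implies a simple compression recipe: project the measurements onto the first few graph Fourier modes. A toy sketch, with a 5-node line graph standing in for a grid (all sizes and the signal construction are illustrative, not the Grid-GSP pipeline):

```python
import numpy as np

# Toy 5-node line-graph Laplacian (a stand-in for a grid's graph).
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: eigh returns eigenvalues in ascending order, so the
# first columns of U are the smoothest (lowest graph frequency) modes.
evals, U = np.linalg.eigh(L)

# A low-pass signal by construction: it lives in the span of the 2 smoothest modes.
rng = np.random.default_rng(1)
x = U[:, :2] @ rng.normal(size=2)

# Compression: store k graph-Fourier coefficients instead of 5 node values.
k = 2
x_hat = U[:, :k].T @ x                     # compressed representation
x_rec = U[:, :k] @ x_hat                   # reconstruction
```

For signals that are genuinely low-pass on the graph, the truncated projection is (near-)lossless while storing far fewer coefficients than node measurements, which is the mechanism behind the subspace-based compression described above.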