We propose a framework for iterative joint source-channel decoding for communication with a fidelity criterion. We consider a class of source models that are used in current state-of-the-art transform image coding schemes. We construct a global graphical model that includes both the channel coding redundancy and the source model, and we apply the sum-product algorithm to estimate the transmitted signal with minimum distortion. Our results show the promise of our framework for improving over existing techniques of digital communication.
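The core idea of the abstract, running the sum-product algorithm over a graph that combines a source model with channel evidence, can be sketched on a toy chain. Everything below (the three-bit chain, the 0.9 correlation, the 0.2 crossover probability) is an illustrative assumption, not the paper's actual source or channel model:

```python
# Toy sum-product (forward-backward) on a chain of 3 binary source bits.
# Source model: neighbouring bits agree with probability CORR (redundancy).
# Channel: each bit is flipped independently with probability FLIP.
CORR = 0.9   # P(x[i+1] == x[i]) -- pairwise source factor (assumed value)
FLIP = 0.2   # binary symmetric channel crossover (assumed value)

def pair(a, b):            # pairwise source-correlation factor
    return CORR if a == b else 1.0 - CORR

def chan(y, x):            # channel likelihood P(y | x)
    return 1.0 - FLIP if y == x else FLIP

def marginals(y):
    n = len(y)
    # forward messages along the chain
    fwd = [[chan(y[0], x) for x in (0, 1)]]
    for i in range(1, n):
        fwd.append([sum(fwd[-1][a] * pair(a, b) for a in (0, 1)) * chan(y[i], b)
                    for b in (0, 1)])
    # backward messages
    bwd = [[1.0, 1.0]]
    for i in range(n - 2, -1, -1):
        bwd.insert(0, [sum(pair(a, b) * chan(y[i + 1], b) * bwd[0][b]
                           for b in (0, 1)) for a in (0, 1)])
    # normalized posterior marginals per bit
    out = []
    for f, b in zip(fwd, bwd):
        z = f[0] * b[0] + f[1] * b[1]
        out.append((f[0] * b[0] / z, f[1] * b[1] / z))
    return out

received = [0, 1, 0]                         # noisy channel output
post = marginals(received)
decoded = [0 if p0 >= p1 else 1 for p0, p1 in post]
# the source redundancy overrides the isolated disagreeing middle bit
```

On this toy input the posterior decodes all three bits to 0: the source correlation factor outweighs the channel evidence for the lone disagreeing bit, which is the kind of joint source-channel gain the abstract describes.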
Distributed coding for wireless audio sensors Majumdar, A.; Ramchandran, K.; Kozintsev, I.
2003 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (IEEE Cat. No.03TH8684),
2003
Conference Proceeding
Future multimedia systems will use multiple audio and video input and output streams to enhance user experience. Those multiple input streams may be captured using a network of distributed sensors and transmitted to a central location for processing. We address the problem of efficient joint compression of audio sources that are noisy filtered versions of the same audio signal. In a bandwidth-constrained wireless deployment, communication between the sources, if any, is restricted to a bare minimum. By exploiting the correlations between the remote sources, we develop algorithms for distributed compression of these audio sources, attempting to achieve the gains predicted in theory. Our scheme shows a significant improvement in reconstructed signal quality for a given bandwidth as compared to an independent compression approach. The algorithms are based on the distributed source coding using syndromes (DISCUS) framework and incorporate the use of perceptual masks.
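The syndrome-based (DISCUS) idea behind this abstract can be shown with the standard textbook toy case, not the paper's audio codec: a 3-bit source word is compressed to a 2-bit syndrome, and the decoder recovers it using correlated side information it alone possesses.

```python
# Toy DISCUS-style syndrome coding (illustrative stand-in, not the paper's
# perceptual audio scheme). Source X is 3 bits; the decoder's side
# information Y is assumed to differ from X in at most one bit position.
# The encoder sends only the 2-bit syndrome (coset index) of X with respect
# to the (3,1) repetition code {000, 111}, saving one bit per word.
from itertools import product

def syndrome(x):
    # parity checks x0^x1 and x0^x2 of the (3,1) repetition code
    return (x[0] ^ x[1], x[0] ^ x[2])

def decode(syn, y):
    # pick the member of X's coset closest in Hamming distance to y;
    # coset members are distance 3 apart, so any y within distance 1
    # of X identifies X uniquely
    coset = [x for x in product((0, 1), repeat=3) if syndrome(x) == syn]
    return min(coset, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1)          # source word at the sensor
y = (1, 1, 1)          # correlated word known only at the decoder
recovered = decode(syndrome(x), y)   # exact recovery from 2 bits + y
```

The gain here is one bit per 3-bit word without any communication between the correlated sources, which is the effect the abstract exploits (with real codes and perceptual masks in place of this toy construction).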
The introduction of low power general purpose processors (like the Intel® Atom™ processor) expands the capability of handheld and mobile Internet devices (MIDs) to include compelling visual computing applications. One rapidly emerging visual computing usage model is known as mobile augmented reality (MAR). In the MAR usage model, the user is able to point the handheld camera at an object (like a wine bottle) or a set of objects (like an outdoor scene of buildings or monuments), and the device automatically recognizes and displays information regarding the object(s). Achieving this on the handheld requires significant compute processing, resulting in a response time on the order of several seconds. In this paper, we analyze a MAR workload and identify the primary hotspot functions that incur a large fraction of the overall response time. We also present a detailed architectural characterization of the hotspot functions in terms of CPI, MPI, etc. We then implement and analyze the benefits of several software optimizations: (a) vectorization, (b) multi-threading, (c) cache conflict avoidance and (d) miscellaneous code optimizations that reduce the number of computations. We show that a 3X performance improvement in execution time can be achieved by implementing these optimizations. Overall, we believe our analysis provides a detailed understanding of the processing for a new domain of visual computing workloads (i.e. MAR) running on low power handheld compute platforms.
We consider the problem of efficient image transmission over noisy time-varying channels subject to a low transmission energy constraint (and fixed bandwidth/delay constraints). We examine the limits of desirability of a highly compressed representation using a joint source-channel coding (JSCC) framework. Specifically, invoking as a platform a state-of-the-art wavelet image coder, we demonstrate how the resulting highly compressed digital stream, appropriately protected against channel noise, is not always the best solution. We show how a hybrid scheme based on simple partitioning of the wavelet image representation into "compressed" and "uncompressed" components can lead to significantly improved performance (of the order of 3 dB in PSNR for Rayleigh channels) over popular JSCC schemes which are based on compressed, entropy-coded, and appropriately unequal error-protected (UEP) source representations.
Most of the existing research work in the area of media adaptation is concentrated on content adaptation, transcoding and delivery mechanisms without addressing the actual input and output of multimedia data. However, it is the I/O stage of media processing that humans are concerned about. Up until now, most multimedia applications have relied on standalone I/O devices (microphone, headphones, monitor, camera) to capture or render multimedia data. This situation is about to change. Nowadays we are surrounded by a vast number of audio/video (AV) sensors and actuators. They are built into our cellular phones, PDAs, tablets, laptops, and surveillance systems. A natural idea that comes out of this fact is to combine multiple I/O devices into a distributed array of sensors and actuators. The paper shows the feasibility of this idea and shifts media adaptation research away from a single device/stream paradigm towards array multimedia processing. We demonstrate how to transform a network of off-the-shelf devices into a distributed I/O array by providing common time (with tens of microseconds precision) and 3D space coordinates (with a few centimetres precision). We also discuss the implications and potentials of self-calibrating distributed AV-sensor/actuator networks for improved media adaptation.
We consider the problem of robust multicast of video to clients having disparate bandwidths and channel loss profiles. We pose this problem as an optimization problem in a multiple description (MD) coding framework and offer a solution that jointly considers the rate-distortion characteristics of the multicast video stream and the receiver channel parameters. In this work, we confine ourselves to a single multicast stream, though extensions to more than one stream are straightforward. Our problem is particularly applicable to the wireless LAN environment where client mobility results in channels with diverse loss and bandwidth characteristics, with the sender unable to adapt dynamically to these changes due to feedback implosion in the multicast setting.
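The multiple description principle the abstract builds on, that each description alone yields a usable coarse reconstruction while all descriptions together yield the full signal, can be sketched with the simplest possible MD scheme: odd/even sample splitting. This is a crude stand-in for the paper's rate-distortion-optimized framework, shown only to make the MD trade-off concrete:

```python
# Toy multiple description (MD) coding by odd/even sample splitting.
# Two descriptions are sent over independent paths; a receiver that gets
# both reconstructs losslessly, one that loses a description interpolates.
def encode_md(signal):
    return signal[0::2], signal[1::2]        # description 0, description 1

def reconstruct(d0=None, d1=None, n=None):
    if d0 is not None and d1 is not None:    # both arrive: lossless merge
        out = [0] * (len(d0) + len(d1))
        out[0::2], out[1::2] = d0, d1
        return out
    d = d0 if d0 is not None else d1         # one description lost
    out = []
    for v in d:                              # nearest-neighbour fill-in
        out += [v, v]
    return out[:n]

sig = [1, 2, 3, 4, 5, 6]
d0, d1 = encode_md(sig)
full = reconstruct(d0, d1)                   # both descriptions received
coarse = reconstruct(d0=d0, n=len(sig))      # description 1 was lost
```

Receivers with good channels get both descriptions and full quality; receivers on lossy channels still decode a degraded but valid signal, which is why MD coding suits the heterogeneous multicast clients the abstract targets.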
Block cyclic redundancy check (CRC) codes represent a popular and powerful class of error detection techniques in modern data communication systems. Though efficient, CRCs can detect errors only after an entire block of data has been received and processed. We propose a new "continuous" error detection scheme using arithmetic coding that provides a novel tradeoff between the amount of added redundancy and the amount of time needed to detect an error once it occurs. We demonstrate how the new error detection framework improves the overall performance of transmission systems, and show how sizeable performance gains can be attained. We focus on two popular scenarios: (i) automatic repeat request (ARQ) based transmission; and (ii) forward error correction frameworks based on (serially) concatenated coding systems involving an inner error-correction code and an outer error-detection code.
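The block-versus-continuous distinction the abstract draws can be made concrete with a crude sketch. The continuous part below uses interleaved parity bits as a stand-in for the paper's arithmetic-coding scheme (the parity construction, the block size, and the chunk length k are all illustrative assumptions); the point is only that a mid-stream check can flag an error before the block ends, whereas a CRC verdict is available only after the whole block:

```python
# Block vs "continuous" error detection: a CRC checked at block end,
# compared against a toy per-chunk parity check that can fire mid-stream.
import zlib

def detect_block_crc(payload, crc):
    # CRC verdict is only available once the entire block is processed
    return zlib.crc32(payload) != crc

def detect_continuous(bits, k=4):
    # one even-parity bit after every k data bits; returns the index of
    # the first failing check, or None if no check fails
    for i in range(0, len(bits) - k, k + 1):
        chunk, parity = bits[i:i + k], bits[i + k]
        if sum(chunk) % 2 != parity:
            return i
    return None

data = [1, 0, 1, 1]
stream = data + [sum(data) % 2]          # 4 data bits + 1 parity bit
stream[1] ^= 1                           # inject a channel error early
hit = detect_continuous(stream)          # flagged at the very first check

block = bytes([7, 7, 7, 7])
crc = zlib.crc32(block)
corrupted = bytes([7, 0, 7, 7])          # same error, CRC-protected block
```

In an ARQ setting this early flag is what pays off: the receiver can abort and request retransmission after the first failing check instead of waiting for the full block, at the cost of the extra interleaved redundancy, which is exactly the redundancy/detection-delay tradeoff the abstract describes.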