Interference is usually viewed as an obstacle to communication in wireless networks. This paper proposes a new strategy, compute-and-forward, that exploits interference to obtain significantly higher rates between users in a network. The key idea is that relays should decode linear functions of transmitted messages according to their observed channel coefficients rather than ignoring the interference as noise. After decoding these linear equations, the relays simply send them towards the destinations, which, given enough equations, can recover their desired messages. The underlying codes are based on nested lattices whose algebraic structure ensures that integer combinations of codewords can be decoded reliably. Encoders map messages from a finite field to a lattice, and decoders recover equations of lattice points which are then mapped back to equations over the finite field. This scheme is applicable even if the transmitters lack channel state information.
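The finite-field side of this scheme can be sketched in a few lines. This is a toy illustration (field size, relay coefficients, and message values are all assumed for the example), showing only the algebraic core: relays forward integer-combination "equations" of the messages, and the destination solves the resulting linear system over the finite field. The lattice machinery that makes the equations decodable over a Gaussian channel is omitted.

```python
p = 257  # prime field size (illustrative choice)

m1, m2 = 42, 199  # the two transmitted messages, elements of F_p

# Each relay decodes one linear equation u = a1*m1 + a2*m2 (mod p), with
# integer coefficients chosen according to its observed channel gains.
equations = [(3, 5), (2, 7)]  # (a1, a2) per relay (assumed coefficients)
u = [(a1 * m1 + a2 * m2) % p for (a1, a2) in equations]

# Given enough (here: two independent) equations, the destination inverts
# the coefficient matrix over F_p to recover the messages.
(a, b), (c, d) = equations
det = (a * d - b * c) % p
det_inv = pow(det, p - 2, p)  # Fermat inverse, valid since p is prime
m1_hat = (det_inv * (d * u[0] - b * u[1])) % p
m2_hat = (det_inv * (a * u[1] - c * u[0])) % p

print(m1_hat, m2_hat)  # recovers (42, 199)
```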
A single memoryless Gaussian source is observed by many terminals, subject to independent Gaussian observation noises. The terminals are linked to a fusion center via a standard Gaussian multiple-access channel. The fusion center needs to recover the underlying Gaussian source with respect to mean-squared error. In this correspondence, a theorem of Witsenhausen is shown to imply that an optimal communication strategy is uncoded transmission, i.e., each terminal's channel input is merely a scaled version of its noisy observation.
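Under a symmetric setup (the variances and power constraint below are assumed for illustration), the distortion of this uncoded strategy with a linear MMSE estimator at the fusion center can be evaluated in closed form, making its behavior in the number of terminals M easy to inspect:

```python
import math

def uncoded_distortion(M, var_s=1.0, var_w=0.1, var_z=1.0, P=1.0):
    """Mean-squared error of the linear MMSE estimate of the source S at the
    fusion center when each of M terminals sends a scaled copy of its noisy
    observation S + W_i over a Gaussian MAC with noise variance var_z."""
    alpha = math.sqrt(P / (var_s + var_w))     # per-terminal power scaling
    cov_sy = alpha * M * var_s                 # Cov(S, Y)
    var_y = alpha**2 * (M**2 * var_s + M * var_w) + var_z   # Var(Y)
    return var_s - cov_sy**2 / var_y           # LMMSE distortion

for M in (1, 10, 100):
    print(M, uncoded_distortion(M))  # distortion decreases as M grows
```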
Capacity is often studied under constraints on the channel input signals. This paper investigates the behavior of capacity when constraints are placed on the channel output signal (as well as generalizations thereof). While such a change in perspective leaves the point-to-point problem (essentially) unchanged, the main conclusion is that in certain network scenarios, including multiple-access and relay situations, both the structure of the problem and the conclusions change. For example, capacity results are found for the many-user Gaussian multiple-access channel (MAC) with arbitrarily dependent sources, cooperation, or feedback, and for the nondegraded Gaussian relay network. The investigations are motivated by recent questions arising in spectrum sharing and dynamic spectrum allocation: Multiple independent networks share the same frequency band, but are spatially mostly disjoint. One approach to enabling coexistence is via spatial interference power restrictions, imposed at the network level rather than at the device level. The corresponding capacity question is posed and partially answered in this paper.
Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.
The problem of reliably reconstructing a function of sources over a multiple-access channel (MAC) is considered. It is shown that there is no source-channel separation theorem even when the individual sources are independent. Joint source-channel strategies are developed that are optimal when the structure of the channel probability transition matrix and the function are appropriately matched. Even when the channel and function are mismatched, these computation codes often outperform separation-based strategies. Achievable distortions are given for the distributed refinement of the sum of Gaussian sources over a Gaussian multiple-access channel with a joint source-channel lattice code. Finally, computation codes are used to determine the multicast capacity of finite-field multiple-access networks, thus linking them to network coding.
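A minimal instance of such channel-function matching is the mod-q adder MAC (an assumed toy model, with illustrative values below): when the channel's own operation coincides with the desired function, the receiver obtains the function value in a single channel use, without ever decoding the individual messages, which is exactly what a separation-based scheme would be forced to do.

```python
q = 7          # field size (illustrative)
m1, m2 = 3, 6  # the two sources' symbols in F_q

# A mod-q adder MAC: the channel output *is* the mod-q sum of the inputs,
# so the channel itself computes the desired function in one use.
y = (m1 + m2) % q

# Desired function at the receiver: f(m1, m2) = m1 + m2 (mod q).
# A separation-based scheme would instead have to convey both messages
# (2 log q bits) to the receiver before f could be evaluated.
f = (m1 + m2) % q
print(y == f)  # True
```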
Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models.
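One of the computationally simple estimators in this vein is plain correlation (matched-filter) thresholding: keep the k indices where the measurement matrix correlates most strongly with the observation. The sketch below uses assumed dimensions, unit-magnitude nonzeros, and a fixed random seed; it is an illustration of the estimator class, not of the paper's specific bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 50, 500, 5  # vector length, measurements, sparsity (assumed)
support = rng.choice(n, size=k, replace=False)
x = np.zeros(n)
x[support] = 1.0      # unit-magnitude nonzeros for simplicity

A = rng.standard_normal((m, n))
y = A @ x + 0.1 * rng.standard_normal(m)  # noisy linear measurements

# Matched-filter estimator: keep the k indices with the largest |A^T y|.
scores = np.abs(A.T @ y)
support_hat = set(np.argsort(scores)[-k:])

# Support errors = size of the symmetric difference with the true support.
errors = len(set(support) ^ support_hat)
print(errors)
```

At this many measurements per dimension the correlation estimator recovers the support with few or no errors; the interesting regime studied in the paper is when m/n and the SNR stay bounded.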
The capacity of a particular large Gaussian relay network is determined in the limit as the number of relays tends to infinity. Upper bounds are derived from cut-set arguments, and lower bounds follow from an argument involving uncoded transmission. It is shown that in cases of interest, upper and lower bounds coincide in the limit as the number of relays tends to infinity. Hence, this paper provides a new example where a simple cut-set upper bound is achievable, and one more example where uncoded transmission achieves optimal performance. The findings are illustrated by geometric interpretations. The techniques developed in this paper are then applied to a sensor network situation. This is a network joint source-channel coding problem, and it is well known that the source-channel separation theorem does not extend to this case. The present paper extends this insight by providing an example where separating source from channel coding not only leads to suboptimal performance, it incurs an exponential penalty in performance scaling behavior (as a function of the number of nodes). Finally, the techniques developed in this paper are extended to include certain models of ad hoc wireless networks, where a capacity scaling law can be established: When all nodes act purely as relays for a single source-destination pair, capacity grows with the logarithm of the number of nodes.
Ergodic Interference Alignment
Nazer, B.; Gastpar, M.; Jafar, S. A.
IEEE Transactions on Information Theory, 10/2012, Volume 58, Issue 10
Journal Article · Peer-reviewed · Open access
This paper develops a new communication strategy, ergodic interference alignment, for the K-user interference channel with time-varying fading. At any particular time, each receiver will see a superposition of the transmitted signals plus noise. The standard approach to such a scenario results in each transmitter-receiver pair achieving a rate proportional to 1/K times its interference-free ergodic capacity. However, given two well-chosen time indices, the channel coefficients from interfering users can be made to exactly cancel. By adding up these two observations, each receiver can obtain its desired signal without any interference. If the channel gains have independent, uniform phases, this technique allows each user to achieve at least 1/2 of its interference-free ergodic capacity at any signal-to-noise ratio. Prior interference alignment techniques were only able to attain this performance as the signal-to-noise ratio tended to infinity. Extensions are given for the case where each receiver wants a message from more than one transmitter as well as the "X channel" case (with two receivers) where each transmitter has an independent message for each receiver. Finally, it is shown how to generalize this strategy beyond Gaussian channel models. For a class of finite field interference channels, this approach yields the ergodic capacity region.
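The pairing idea can be made concrete with a noiseless two-user toy example (all gains and symbols below are assumed values): the same symbols are repeated at two time indices whose interference coefficients are exact negatives, so summing the two channel outputs cancels the interference while the desired coefficient adds up.

```python
x1, x2 = 1.5, -0.7   # symbols repeated at both time indices
h11, h12 = 0.9, 0.4  # receiver 1's direct and interfering gains at time t1

y_t1 = h11 * x1 + h12 * x2     # t1: desired signal plus interference
y_t2 = h11 * x1 + (-h12) * x2  # t2: complementary fading state, h12 negated

aligned = y_t1 + y_t2          # = 2*h11*x1, interference cancels exactly
print(aligned, 2 * h11 * x1)
```

In the actual scheme the receiver waits (over the ergodic fading process) for a time index whose gains complement those of t1; the factor-1/2 rate loss comes from using two channel uses per symbol.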
For a class of sensor networks, the task is to monitor an underlying physical phenomenon over space and time through an imperfect observation process. The sensors can communicate back to a central data collector over a noisy channel. The key parameters in such a setting are the fidelity (or distortion) at which the underlying physical phenomenon can be estimated by the data collector, and the cost of operating the sensor network. This is a network joint source-channel communication problem, involving both compression and communication. It is well known that these two tasks may not be addressed separately without sacrificing optimality, and the optimal performance is generally unknown. This paper presents a lower bound on the best achievable end-to-end distortion as a function of the number of sensors, their total transmit power, the number of degrees of freedom of the underlying source process, and the spatio-temporal communication bandwidth. Particular coding schemes are studied, and it is shown that in some cases, the lower bound is tight in a scaling-law sense. By contrast, it is shown that the standard practice of separating source from channel coding may incur an exponential penalty in terms of communication resources, as a function of the number of sensors. Hence, such code designs effectively prevent scalability. Finally, it is outlined how the results extend to cases involving missing synchronization and channel fading.
What makes a source-channel communication system optimal? It is shown that in order to achieve an optimal cost-distortion tradeoff, the source and the channel have to be matched in a probabilistic sense. The match (or lack of it) involves the source distribution, the distortion measure, the channel conditional distribution, and the channel input cost function. Closed-form necessary and sufficient expressions relating the above entities are given. This generalizes both the separation-based approach as well as the two well-known examples of optimal uncoded communication. The condition of probabilistic matching is extended to certain nonergodic and multiuser scenarios. This leads to a result on optimal single-source broadcast communication.
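One of the two classic examples of optimal uncoded communication referred to here can be checked directly in the standard Gaussian setting (one channel use per source sample; the notation below is the conventional one, not taken from the paper):

```latex
% Gaussian source S ~ N(0, \sigma_S^2) over an AWGN channel with power
% constraint P and noise variance N; uncoded map X = \sqrt{P/\sigma_S^2}\, S,
% receiver applies the MMSE estimator:
D_{\text{uncoded}} = \sigma_S^2 \, \frac{N}{P+N}.
% Shannon limit: D(R) = \sigma_S^2 \, 2^{-2R} evaluated at
% R = C = \tfrac{1}{2} \log_2\!\left(1 + \tfrac{P}{N}\right) gives
D(C) = \sigma_S^2 \left(1 + \frac{P}{N}\right)^{-1}
     = \sigma_S^2 \, \frac{N}{P+N} = D_{\text{uncoded}},
% so source and channel are probabilistically matched and the uncoded
% scheme attains the optimal cost-distortion tradeoff.
```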