Distributed tensor decomposition (DTD) is a fundamental data-analytics technique that extracts important latent properties from high-dimensional multi-attribute datasets distributed over edge devices. Conventionally, its wireless implementation follows a one-shot approach that first computes local results at devices using local data and then aggregates them at a server with communication-efficient techniques, such as over-the-air computation (AirComp), for global computation. Such an implementation is confronted with limited storage-and-computation capacities and link interruptions, which motivates us to propose a framework of on-the-fly communication-and-computing (FlyCom²) in this work. The proposed framework enables streaming computation with low complexity by leveraging a random sketching technique, and achieves progressive global aggregation through the integration of progressive uploading and multiple-input multiple-output (MIMO) AirComp. To develop FlyCom², an on-the-fly subspace estimator is designed that takes the real-time sketches accumulated at the server and generates online estimates of the decomposition. Its performance is evaluated by deriving both deterministic and probabilistic error bounds using perturbation theory and the concentration of measure. Both results reveal that the decomposition error is inversely proportional to the number of sketching observations received by the server. To further rein in the effect of noise on the error, we propose a threshold-based scheme that selects a subset of sufficiently reliable received sketches for DTD at the server. Experimental results validate the performance gain of the proposed selection algorithm and show that, compared with its one-shot counterparts, the proposed FlyCom² achieves comparable (and, for large eigen-gaps, even better) decomposition accuracy while dramatically reducing the devices' complexity costs.
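The random-sketching idea can be illustrated with a generic randomized range finder (a standard sketch-and-solve building block, not the exact FlyCom² estimator): each streamed data block contributes only a small multiply-accumulate to a fixed-size sketch, and the principal subspace is recovered from the sketch alone. The dimensions and the Gaussian test matrix below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 64, 5                                     # ambient dimension, target rank
omega = rng.standard_normal((d, k + 4))          # random test matrix (oversampled)

# Ground-truth low-rank data, streamed to the "server" in row blocks.
U = np.linalg.qr(rng.standard_normal((d, k)))[0]
data = rng.standard_normal((200, k)) @ U.T       # 200 samples in R^d

sketch = np.zeros((d, k + 4))
for block in np.array_split(data, 10):           # on-the-fly accumulation:
    sketch += block.T @ (block @ omega)          # only the sketch is stored

# An orthonormal basis of the sketch recovers the principal subspace.
Q = np.linalg.qr(sketch)[0][:, :k]
err = np.linalg.norm(U.T - (U.T @ Q) @ Q.T)      # residual of the true subspace
```

Because the data here are exactly rank-k, the recovered basis spans the true subspace up to numerical precision; with noisy data the error instead shrinks as more sketching observations accumulate, in line with the bounds described above.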
Fast Blind Recognition of Channel Codes
Moosavi, Reza; Larsson, Erik G.
IEEE Transactions on Communications, 05/2014, Volume 62, Issue 5
Journal Article, Peer reviewed, Open access
We present a fast algorithm that, for a given input sequence and a linear channel code, computes the syndrome posterior probability (SPP) of the code, i.e., the probability that all parity-check relations of the code are satisfied. With this algorithm, the SPP can be computed blindly, i.e., given the soft information on a received sequence, we can compute the SPP for the code without first decoding the bits. We show that the proposed scheme is efficient by investigating its computational complexity. We then consider two scenarios in which our proposed SPP algorithm can be used. The first is when we are interested in finding out whether a certain code was used to encode a data stream. We formulate a statistical hypothesis test, investigate its performance, and compare our scheme with an existing one. The second deals with using the algorithm to reduce the computational complexity of a blind decoding process. We propose a heuristic sequential statistical hypothesis test that exploits the fact that, in real applications, the data arrives sequentially, and we investigate its performance using system simulations.
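For a single parity check, the probability that the check is satisfied follows in closed form from independent per-bit error probabilities; multiplying over all checks (under an independence assumption, which the paper's fast algorithm refines) gives an SPP-style quantity. The (7,4) Hamming parity-check matrix below is one common convention, used purely for illustration.

```python
import numpy as np

def check_satisfied_prob(p_ones):
    """P(XOR of bits = 0) given independent probabilities that each bit is 1."""
    return 0.5 * (1.0 + np.prod(1.0 - 2.0 * np.asarray(p_ones)))

def syndrome_posterior_prob(H, p_ones):
    """Product over all parity checks, assuming the checks are independent
    (a simplification; the paper's algorithm is a faster, refined version)."""
    p_ones = np.asarray(p_ones)
    return float(np.prod([check_satisfied_prob(p_ones[row.astype(bool)])
                          for row in np.asarray(H)]))

# (7,4) Hamming code parity-check matrix (one common convention).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

# Reliable soft information (each bit is 1 with prob. 0.01) -> SPP near 1;
# uninformative soft information (prob. 0.5) -> each check is a coin flip.
spp_reliable = syndrome_posterior_prob(H, np.full(7, 0.01))
spp_blind = syndrome_posterior_prob(H, np.full(7, 0.5))
```

This is the quantity a hypothesis test can threshold: sequences actually encoded with the candidate code yield an SPP near 1, while uninformative or mismatched soft values drive it toward 2^(-m) for m checks.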
This paper considers the use of non-orthogonal multiple access (NOMA) in multiuser MIMO systems in practical scenarios where channel state information (CSI) is acquired through pilot signaling. A new NOMA scheme that uses shared pilots is proposed. An achievable rate analysis is carried out for different pilot signaling schemes, including both uplink and downlink pilots. The achievable rate performance of the proposed NOMA scheme with shared pilots within each group is compared with the traditional orthogonal access scheme with orthogonal pilots. Our proposed scheme is a generalization of the orthogonal scheme and reduces to it when appropriate power allocation parameters are chosen. Numerical results show that when downlink CSI is available at the users, our proposed NOMA scheme outperforms orthogonal schemes. However, with more groups of users present in the cell, it is preferable to use multiuser beamforming instead of NOMA.
In this letter, we consider the uplink of a cell-free Massive multiple-input multiple-output (MIMO) network where each user is decoded by a subset of access points (APs). An additional step is introduced into the cell-free Massive MIMO processing: each AP locally performs soft MIMO detection in the uplink and then shares the resulting bit log-likelihoods over the fronthaul link. The data are decoded at the central processing unit (CPU), which collects the information from the APs. The non-linear processing at the APs consists of approximately computing the posterior density for each received data bit, exploiting only local channel state information. The proposed method offers good frame-error-rate performance at considerably lower complexity than the optimal maximum-likelihood demodulator.
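As a toy version of the local soft processing, the per-bit log-likelihood ratio (LLR) for BPSK over a known fading coefficient can be computed at an AP using only local CSI. The constellation, SNR, and dimensions below are illustrative assumptions, not the paper's approximate MIMO posterior computation.

```python
import numpy as np

def bpsk_llr(y, h, noise_var):
    """LLR an AP could compute locally for BPSK (x = +1 for bit 0, -1 for
    bit 1) over AWGN with complex noise variance noise_var, using local CSI h."""
    return 4.0 * np.real(np.conj(h) * y) / noise_var

rng = np.random.default_rng(3)
n = 1000
bits = rng.integers(0, 2, n)
x = 1.0 - 2.0 * bits                                        # BPSK mapping
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.05)
y = h * x + noise                                           # received samples

llr = bpsk_llr(y, h, noise_var=0.1)
hard = (llr < 0).astype(int)            # the LLR sign gives the hard decision
ber = float(np.mean(hard != bits))      # small at this illustrative SNR
```

In the letter's scheme it is these soft LLRs, not hard bits, that each AP forwards over the fronthaul, so the CPU can combine evidence from several APs before decoding.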
We consider the problem of coordinating two competing multiple-antenna wireless systems (operators) that operate in the same spectral band. We formulate a rate region that is achievable by scalar coding followed by power allocation and beamforming. We show that all interesting points on the Pareto boundary correspond to transmit strategies where both systems use the maximum available power. We then argue that there is a fundamental need for base station cooperation when performing spectrum sharing with multiple transmit antennas. More precisely, we show that if the systems do not cooperate, there is a unique Nash equilibrium that is inefficient in the sense that the achievable rate is bounded by a constant, regardless of the available transmit power. An extension of this result to the case where the receivers use successive interference cancellation (SIC) is also provided. Next, we model the problem of agreeing on beamforming vectors as a non-transferable utility (NTU) cooperative game-theoretic problem, with the two operators as players. Specifically, we numerically compute the Nash bargaining solution, which is a likely resolution of the resource conflict assuming the players are rational. Numerical experiments indicate that selfish but cooperating operators may achieve performance close to the maximum-sum-rate bound.
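The Nash bargaining solution mentioned above maximizes the Nash product (R1 - d1)(R2 - d2) over the achievable rate region, where d is the disagreement point. A minimal numerical sketch on a toy symmetric rate region; the quarter-disc boundary and the disagreement point below are invented for illustration, not the paper's beamforming rate region.

```python
import numpy as np

# Toy concave rate-region boundary: R2 = sqrt(1 - R1^2) (unit quarter disc),
# with a symmetric disagreement point d = (0.2, 0.2). Illustrative numbers only.
d1 = d2 = 0.2
r1 = np.linspace(0.0, 1.0, 2001)
r2 = np.sqrt(np.clip(1.0 - r1**2, 0.0, None))

# Nash bargaining solution: maximize the Nash product along the boundary.
nash_product = (r1 - d1) * (r2 - d2)
i = int(np.argmax(nash_product))
nbs = (float(r1[i]), float(r2[i]))      # symmetric region -> symmetric split
```

By symmetry the maximizer sits at R1 = R2 = 1/sqrt(2) here; in the paper the same maximization runs over rates induced by the operators' beamforming vectors rather than a closed-form boundary.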
A fundamental algorithm for data analytics at the edge of wireless networks is distributed principal component analysis (DPCA), which finds the most important information embedded in a distributed high-dimensional dataset by distributed computation of a reduced-dimension data subspace, called principal components (PCs). In this paper, to support one-shot DPCA in wireless systems, we propose a framework of analog MIMO transmission featuring uncoded analog transmission of local PCs for estimating the global PCs. To cope with channel distortion and noise, two maximum-likelihood (global) PC estimators are presented, corresponding to the cases with and without receive channel state information (CSI). The first design, termed the coherent PC estimator, is derived by solving a Procrustes problem and takes the form of regularized channel inversion, where the regularization attempts to alleviate the effects of both receiver noise and data noise. The second, termed the blind PC estimator, is designed based on the subspace channel-rotation-invariance property and computes a centroid of the received local PCs on a Grassmann manifold. Using manifold perturbation theory, tight bounds on the mean square subspace distance (MSSD) of both estimators are derived for performance evaluation. The results reveal simple scaling laws of the MSSD with respect to device population, data and channel signal-to-noise ratios (SNRs), and array sizes. More importantly, both estimators are found to have identical scaling laws, suggesting that CSI is dispensable for accelerating DPCA. Simulation results validate the derived results and demonstrate the promising latency performance of the proposed analog MIMO transmission.
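The Procrustes problem underlying the coherent PC estimator has a classic SVD solution: the orthogonal matrix that best aligns one matrix with another is U V^T from the SVD of their cross-product. Below is a noise-free sketch of that generic building block (not the paper's regularized channel-inversion estimator); all matrices are synthetic.

```python
import numpy as np

def procrustes_rotation(A, B):
    """Orthogonal R minimizing ||A @ R - B||_F (orthogonal Procrustes):
    if A^T B = U S V^T is an SVD, the minimizer is R = U V^T."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
R_true = np.linalg.qr(rng.standard_normal((4, 4)))[0]  # random orthogonal matrix
B = A @ R_true                                         # B is a rotated copy of A

R_hat = procrustes_rotation(A, B)
err = float(np.linalg.norm(R_hat - R_true))            # exact recovery, no noise
```

With channel and data noise, the estimator in the paper replaces this exact alignment with a regularized inversion, but the SVD-based alignment above is the noise-free core of the derivation.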
Wind turbines are often plagued by premature component failures, with drivetrain bearings being particularly prone to them. To identify failing components, vibration condition monitoring has emerged and grown substantially. The fast Fourier transform (FFT) is the predominant signal processing method for vibrations. Recently, wavelet transforms have been used more frequently in bearing vibration research, one alternative being the discrete wavelet transform (DWT). Here, the low-frequency component of the signal is repeatedly decomposed into approximation and detail coefficients using a predefined mother wavelet. An extension of this is the wavelet packet transform (WPT), which decomposes the entire frequency domain and stores the wavelet coefficients in packets. How wavelet transforms and the FFT compare for fault detection in wind turbine drivetrain bearings has been largely overlooked in the literature when applied to field data, with non-ideal sensor placement and uncertain parameters influencing the measurements. This study presents a comprehensive comparison of the FFT, a three-level DWT, and the WPT applied to enveloped vibration measurements from two 2.5-MW wind turbine gearbox bearing failures. The frequency content is compared by calculating a robust condition indicator as the sum of the harmonics and shaft-speed sidebands of the bearing fault frequencies. Results show that the WPT performs better than the FFT as a field vibration analysis tool, detecting one bearing failure earlier and more clearly, which leads to a more stable alarm setting and allows costly false alarms to be avoided.
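The repeated low-pass decomposition described above can be sketched with the simplest (Haar) mother wavelet; a production analysis would use a library such as PyWavelets and a wavelet matched to the bearing signature. The toy signal and dimensions below are illustrative.

```python
import numpy as np

def haar_dwt_step(x):
    """One DWT level with the Haar wavelet: split a signal into
    approximation (low-pass) and detail (high-pass) coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_dwt(x, levels):
    """Repeatedly decompose only the low-frequency branch, as in the
    three-level DWT above (a wavelet packet transform would also keep
    decomposing every detail branch)."""
    coeffs, a = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_step(a)
        coeffs.append(d)
    coeffs.append(a)                          # final approximation last
    return coeffs

x = np.sin(2 * np.pi * np.arange(64) / 8.0)   # toy "vibration" signal
coeffs = haar_dwt(x, levels=3)                # [d1, d2, d3, a3]

# The Haar DWT is orthonormal, so total signal energy is preserved.
energy_in = float(np.sum(x**2))
energy_out = float(sum(np.sum(c**2) for c in coeffs))
```

The condition indicator described in the study would then be computed from the spectrum of the band (or packet) containing the bearing fault frequency, summing its harmonics and shaft-speed sidebands.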
This paper considers the sum spectral efficiency (SE) optimization problem in multi-cell Massive MIMO systems with a varying number of active users. This is formulated as a joint pilot and data power control problem. Since the problem is non-convex, we first derive a novel iterative algorithm that obtains a stationary point in polynomial time. To enable real-time implementation, we also develop a deep learning solution. The proposed neural network, PowerNet, uses only the large-scale fading information to predict both the pilot and data powers. The main novelty is that we exploit the problem structure to design a single neural network that can handle a dynamically varying number of active users; hence, PowerNet simultaneously approximates many different power control functions with varying numbers of inputs and outputs. This is not the case in prior works, which makes PowerNet an important step towards a practically useful solution. Numerical results demonstrate that PowerNet loses only 2% in sum SE, compared to the iterative algorithm, in a nine-cell system with up to 90 active users in each coherence interval, and the runtime was only 0.03 ms on a graphics processing unit (GPU). When good data labels are selected for the training phase, PowerNet can yield better sum SE than solving the optimization problem with one initial point.
This paper considers the jointly optimal pilot and data power allocation in single-cell uplink massive multiple-input multiple-output systems. Using the spectral efficiency (SE) as the performance metric and setting a total energy budget per coherence interval, the power control is formulated as optimization problems for two different objective functions: the weighted minimum SE among the users and the weighted sum SE. A closed-form solution for the optimal length of the pilot sequence is derived. The optimal power control policy for the former problem is found by solving a simple single-variable equation. Utilizing the special structure arising from imperfect channel estimation, a convex reformulation is found that solves the latter problem to global optimality in polynomial time. The gain of the optimal joint power control is theoretically justified and proved to be large in the low-SNR regime. Simulation results also show the advantage of optimizing the power control over both pilot and data powers, compared with using full power or optimizing only the data powers as done in previous work.
In this paper, we study the effect of channel aging on the uplink and downlink performance of an FDD massive MIMO system as the system dimension increases. Since the training duration scales linearly with the number of transmit dimensions, channel estimates become increasingly outdated during the communication phase, leading to performance degradation. To quantify this degradation, we first derive bounds on the mean squared channel estimation error. We use these bounds to derive deterministic equivalents of the receive SINRs, which yield lower bounds on the achievable uplink and downlink spectral efficiencies. For the uplink, we consider maximal ratio combining and MMSE detectors, while for the downlink, we consider matched filter and regularized zero-forcing precoders. We show that the effect of channel aging can be mitigated by optimally choosing the frame duration. It is found that using all the base station antennas can lead to negligibly small achievable rates in high user mobility scenarios. Finally, numerical results are presented to validate the accuracy of our expressions and illustrate the dependence of the performance on the system dimension and channel aging parameters.
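Channel aging is commonly modeled with a first-order Gauss-Markov process, under which the correlation between a channel estimate and the true channel decays geometrically over the frame; this is what makes long training phases costly at high mobility. A minimal simulation of that decay (the model and parameters below are standard assumptions, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(2)
rho, T, N = 0.9, 10, 200_000     # per-symbol correlation, frame length, trials

# N independent unit-variance Rayleigh channels; h0 plays the role of the
# (perfect) estimate obtained at the start of the frame.
h0 = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h = h0.copy()
for _ in range(T):               # first-order Gauss-Markov aging per symbol
    w = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    h = rho * h + np.sqrt(1.0 - rho**2) * w

# After T symbols the estimate correlates with the true channel only as
# rho**T, so the effective estimation error grows with the frame length.
emp_corr = float(np.real(np.mean(h * np.conj(h0))))
```

Under this model, emp_corr concentrates around rho**T (about 0.349 for these parameters), which is why the paper can trade off frame duration against estimate staleness.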