Link scheduling in device-to-device (D2D) networks is usually formulated as a non-convex combinatorial problem, which is generally NP-hard, making the optimal solution difficult to obtain. Traditional methods for solving this problem are mainly based on mathematical optimization techniques, which require accurate channel state information (CSI), usually obtained through channel estimation and feedback. To overcome the high computational complexity of the traditional methods and eliminate the costly channel estimation stage, machine learning (ML) has recently been introduced to address wireless link scheduling problems. In this article, we propose a novel graph-embedding-based method for link scheduling in D2D networks. We first construct a fully-connected directed graph for the D2D network, where each D2D pair is a node and the interference links among D2D pairs are the edges. We then compute a low-dimensional feature vector for each node in the graph. The graph embedding process is based on the distances of both communication and interference links, and therefore does not require accurate CSI. Using a multi-layer classifier, a scheduling strategy can be learned in a supervised manner from the graph embedding results for each node. We also propose an unsupervised training method for the graph-embedding-based approach to further improve scalability, and develop a K-nearest-neighbor graph representation method to reduce the computational complexity. Extensive simulations demonstrate that the proposed method achieves near-optimal performance compared with existing state-of-the-art methods while requiring only hundreds of training network layouts. It is also competitive in terms of scalability and generalizability to more complicated scenarios.
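The K-nearest-neighbor graph representation described above can be sketched as follows. This is a minimal illustration based only on link distances; the function name and the distance-based ranking of interferers are our assumptions, not the paper's exact design:

```python
import numpy as np

def knn_interference_graph(tx, rx, k=3):
    """Build a K-nearest-neighbor interference graph for N D2D pairs.

    tx, rx: (N, 2) arrays of transmitter/receiver coordinates.
    Returns an (N, N) boolean adjacency matrix where edge (i, j) means
    pair j is among the k closest interferers of pair i, ranked by the
    distance from transmitter j to receiver i -- a distance-based proxy
    for interference strength, so no CSI is required.
    """
    # d[i, j] = distance from transmitter j to receiver i
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # exclude each pair's own link
    nearest = np.argsort(d, axis=1)[:, :k]   # k strongest interferers per node
    adj = np.zeros(d.shape, dtype=bool)
    rows = np.repeat(np.arange(d.shape[0]), k)
    adj[rows, nearest.ravel()] = True
    return adj
```

Restricting each node's edges to its k nearest interferers keeps the per-node embedding cost constant as the network grows, which is the source of the complexity reduction.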
It has been a long-held belief that judicious resource allocation is critical to mitigating interference, improving network efficiency, and ultimately optimizing wireless communication performance. The traditional wisdom is to explicitly formulate resource allocation as an optimization problem and then exploit mathematical programming to solve the problem to a certain level of optimality. Nonetheless, as wireless networks become increasingly diverse and complex, for example, in high-mobility vehicular networks, the current design methodologies face significant challenges and thus call for a rethinking of the traditional design philosophy. Meanwhile, deep learning, with many success stories in various disciplines, represents a promising alternative due to its remarkable power to leverage data for problem solving. In this article, we discuss the key motivations and roadblocks of using deep learning for wireless resource allocation with application to vehicular networks. We review major recent studies that mobilize the deep-learning philosophy in wireless resource allocation and achieve impressive results. We first discuss deep-learning-assisted optimization for resource allocation. We then highlight the deep reinforcement learning approach to address resource allocation problems that are difficult to handle in the traditional optimization framework. We also identify some research directions that deserve further investigation.
Orthogonal frequency-division multiplexing (OFDM) effectively mitigates intersymbol interference (ISI) caused by the delay spread of wireless channels. Therefore, it has been used in many wireless systems and adopted by various standards. In this paper, we present a comprehensive survey on OFDM for wireless communications. We address basic OFDM and related modulations, as well as techniques to improve the performance of OFDM for wireless communications, including channel estimation and signal detection, time- and frequency-offset estimation and correction, peak-to-average power ratio reduction, and multiple-input-multiple-output (MIMO) techniques. We also describe the applications of OFDM in current systems and standards.
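The ISI-mitigation mechanism mentioned above rests on the cyclic prefix: when the prefix is at least as long as the channel memory, linear convolution with the channel becomes circular over the FFT window, so each subcarrier can be equalized with a single complex division. A minimal NumPy sketch (block sizes and the channel are illustrative):

```python
import numpy as np

def ofdm_tx(symbols, n_fft=64, cp_len=16):
    """Map frequency-domain symbols to one time-domain OFDM block with CP."""
    x = np.fft.ifft(symbols, n_fft)
    return np.concatenate([x[-cp_len:], x])      # prepend cyclic prefix

def ofdm_rx(block, h, n_fft=64, cp_len=16):
    """Strip the CP, FFT, and one-tap equalize with the known channel h."""
    y = np.fft.fft(block[cp_len:cp_len + n_fft], n_fft)
    H = np.fft.fft(h, n_fft)                     # channel frequency response
    return y / H                                 # per-subcarrier equalization
```

With a two-tap channel (memory shorter than the 16-sample CP), the transmitted symbols are recovered exactly up to numerical precision, which is the property that makes OFDM attractive over delay-spread channels.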
Recently, deep learning enabled end-to-end communication systems have been developed to merge all physical layer blocks of traditional communication systems, making joint transceiver optimization possible. Powered by deep learning, natural language processing has achieved great success in analyzing and understanding large amounts of text. Inspired by research results in both areas, we aim to provide a new view of communication systems from the semantic level. In particular, we propose a deep learning based semantic communication system, named DeepSC, for text transmission. Based on the Transformer, DeepSC aims to maximize the system capacity and minimize semantic errors by recovering the meaning of sentences, rather than minimizing bit or symbol errors as in traditional communications. Moreover, transfer learning is used to make DeepSC applicable to different communication environments and to accelerate the model training process. To assess the performance of semantic communications accurately, we also introduce a new metric, named sentence similarity. Compared with traditional communication systems that do not consider semantic information exchange, the proposed DeepSC is more robust to channel variation and achieves better performance, especially in the low signal-to-noise ratio (SNR) regime, as demonstrated by extensive simulation results.
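The sentence-similarity idea can be illustrated as cosine similarity between sentence embedding vectors. This is only a sketch: in practice the embeddings would come from a pretrained language model, and the function below is a generic illustration, not DeepSC's exact metric:

```python
import numpy as np

def sentence_similarity(emb_a, emb_b):
    """Cosine similarity between two sentence embeddings.

    emb_a, emb_b: 1-D feature vectors, e.g. pooled embeddings of the
    transmitted and the recovered sentence.  Returns a value in [-1, 1];
    1.0 means the two sentences carry (by this proxy) the same meaning.
    """
    num = float(np.dot(emb_a, emb_b))
    den = float(np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return num / den
```

Scoring recovered sentences by embedding similarity rather than exact token match is what lets a semantic metric tolerate paraphrases that a bit-error rate would penalize.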
This paper investigates beam training for extremely large-scale multiple-input multiple-output systems. By considering both the near field and far field, a triple-refined hybrid-field beam training scheme is proposed, where high-accuracy estimates of channel parameters are obtained through three steps of progressive beam refinement. First, the hybrid-field beam gain (HFBG)-based first refinement method is developed. Based on the analysis of the HFBG, the first-refinement codebook is designed and the beam training is performed accordingly to narrow down the potential region of the channel path. Then, the maximum likelihood (ML)-based and principle of stationary phase (PSP)-based second refinement methods are developed. By exploiting the measurements of the beam training, the ML is used to estimate the channel parameters. To avoid the high computational complexity of ML, closed-form estimates of the channel parameters are derived according to the PSP. Moreover, the Gaussian approximation (GA)-based third refinement method is developed. The hybrid-field neighboring search is first performed to identify the potential region of the main lobe of the channel steering vector. Afterwards, by applying the GA, a least-squares estimator is developed to obtain the high-accuracy channel parameter estimation. Simulation results verify the effectiveness of the proposed scheme.
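The near-field/far-field distinction underlying the hybrid-field scheme can be illustrated with the two standard array response models: a planar-wave steering vector for the far field and a spherical-wave vector for the near field, which converge as the source range grows. A sketch for a uniform linear array (function and variable names are ours, not the paper's):

```python
import numpy as np

def far_field_steering(n, theta, d=0.5):
    """Planar-wave (far-field) steering vector for an n-element ULA.
    d is the element spacing in wavelengths, theta the angle in radians.
    Positions are centered on the array midpoint."""
    p = (np.arange(n) - (n - 1) / 2) * d
    return np.exp(2j * np.pi * p * np.sin(theta)) / np.sqrt(n)

def near_field_steering(n, theta, r, d=0.5):
    """Spherical-wave (near-field) steering vector: exact distance from a
    source at range r (in wavelengths) and angle theta to each element,
    phase-referenced to the array center."""
    p = (np.arange(n) - (n - 1) / 2) * d
    dist = np.sqrt(r**2 + p**2 - 2 * r * p * np.sin(theta))
    return np.exp(-2j * np.pi * (dist - r)) / np.sqrt(n)
```

For small ranges the quadratic phase term across the aperture makes the two models differ substantially, which is why a purely far-field codebook loses accuracy in extremely large-scale arrays and a hybrid-field design is needed.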
Semantic communications focus on the transmission of semantic features. In this letter, we consider a task-oriented multi-user semantic communication system for multimodal data transmission. In particular, some users transmit images while the others transmit texts to inquire about the information in the images. To exploit the correlation among the multimodal data from multiple users, we propose a deep neural network enabled semantic communication system, named MU-DeepSC, to execute the visual question answering (VQA) task as an example. Specifically, the transceiver for MU-DeepSC is designed and optimized jointly to capture the features from the correlated multimodal data for task-oriented transmission. Simulation results demonstrate that the proposed MU-DeepSC is more robust to channel variations than traditional communication systems, especially in the low signal-to-noise ratio (SNR) regime.
In this paper, we develop a novel decentralized resource allocation mechanism for vehicle-to-vehicle (V2V) communications based on deep reinforcement learning, which can be applied to both unicast and broadcast scenarios. Under the decentralized resource allocation mechanism, an autonomous "agent," a V2V link or a vehicle, decides on the optimal sub-band and power level for transmission without requiring or having to wait for global information. Since the proposed method is decentralized, it incurs only limited transmission overhead. Simulation results show that each agent can effectively learn to satisfy the stringent latency constraints on V2V links while minimizing the interference to vehicle-to-infrastructure communications.
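The decentralized agent described above can be caricatured with tabular Q-learning over joint (power level, sub-band) actions. The paper uses deep Q-networks over richer observations, so the tabular version below is only a toy stand-in with a hypothetical state and reward model:

```python
import numpy as np

class QAgent:
    """Toy tabular Q-learning agent: a V2V 'agent' picks a joint
    (power level, sub-band) action from a discrete local state.
    Hyperparameters and the state encoding are illustrative."""

    def __init__(self, n_states, n_subbands, n_powers,
                 lr=0.1, gamma=0.9, eps=0.1, seed=0):
        self.q = np.zeros((n_states, n_subbands * n_powers))
        self.n_subbands = n_subbands
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.rng = np.random.default_rng(seed)

    def act(self, s):
        if self.rng.random() < self.eps:           # epsilon-greedy exploration
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))           # exploit current estimate

    def decode(self, a):
        return divmod(a, self.n_subbands)          # -> (power index, sub-band)

    def update(self, s, a, r, s_next):
        # standard one-step Q-learning target
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.lr * (target - self.q[s, a])
```

Because each agent updates only its own table from local observations and rewards, no global channel information has to be exchanged, which is the source of the low signaling overhead claimed above.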
The combination of large bandwidth at terahertz (THz) frequencies and the large number of antennas in massive MIMO results in a non-negligible spatial wideband effect in the time domain, or the corresponding beam squint issue in the frequency domain, which causes severe performance degradation if not properly treated. In particular, for a phased array based hybrid transceiver, there exists a contradiction between the requirement of mitigating the beam squint issue and the hardware implementation of the analog beamformer/combiner, which makes accurate beamforming an enormous challenge. In this paper, we propose two wideband hybrid beamforming approaches, based on virtual sub-arrays and true-time-delay (TTD) lines, respectively, to eliminate the impact of beam squint. The former divides the whole array into several virtual sub-arrays to generate a wider beam and provides an evenly distributed array gain across the whole operating frequency band. To further enhance the beamforming performance and thoroughly resolve the aforementioned contradiction, the latter introduces TTD lines and proposes a new hardware implementation of the analog beamformer/combiner. This TTD-aided hybrid implementation enables wideband beamforming and achieves near-optimal performance close to that of full-digital transceivers. Analytical and numerical results demonstrate the effectiveness of the two proposed wideband beamforming approaches.
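The beam squint effect and the TTD remedy can be reproduced numerically: a phase-shifter beamformer applies a frequency-flat phase matched to the carrier, so its gain collapses away from fc, while a TTD beamformer's phase scales with frequency and keeps the beam aligned. This is an idealized sketch that ignores the hardware constraints of the paper's hybrid architecture:

```python
import numpy as np

def array_gain(n, theta, fc, f, ttd=False, d_over_lam_c=0.5):
    """Normalized gain of an n-element ULA steered to angle theta
    (radians) at carrier fc, evaluated at frequency f.

    Phase shifters apply a fixed phase designed for fc (beam squint);
    TTD lines apply a true delay whose phase scales with f, so the
    beam stays pointed at theta across the band.
    """
    k = np.arange(n)
    # array response at frequency f: phase grows with f/fc
    a = np.exp(2j * np.pi * d_over_lam_c * (f / fc) * k * np.sin(theta))
    if ttd:
        # true delay: same delay, frequency-proportional phase
        w = np.exp(2j * np.pi * d_over_lam_c * (f / fc) * k * np.sin(theta))
    else:
        # phase shifter: phase frozen at the carrier design point
        w = np.exp(2j * np.pi * d_over_lam_c * k * np.sin(theta))
    return abs(np.vdot(w, a)) / n
```

The contradiction noted in the abstract is visible here: matching the response at every in-band frequency requires frequency-dependent phases, which ordinary phase shifters cannot provide but delay lines can.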
In this article, multiuser beam training based on a hierarchical codebook for millimeter wave massive multi-input multi-output is investigated, where the base station (BS) simultaneously performs beam training with multiple user equipments (UEs). For the UEs, an alternative minimization method with a closed-form expression (AMCF) is proposed to design the hierarchical codebook under the constant modulus constraint. To speed up the convergence of the AMCF, an initialization method based on the Zadoff-Chu sequence is proposed. For the BS, a simultaneous multiuser beam training scheme based on an adaptively designed hierarchical codebook is proposed, where the codewords in the current layer of the codebook are designed according to the beam training results of the previous layer. The codewords at the BS are designed with multiple mainlobes, each covering a spatial region for one or more UEs. Simulation results verify the effectiveness of the proposed hierarchical codebook design schemes and show that the proposed multiuser beam training scheme can approach the performance of beam sweeping but with significantly reduced beam training overhead.
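The Zadoff-Chu initialization exploits two standard properties of these sequences: constant modulus (every entry has unit magnitude, matching the constant modulus constraint on analog codewords) and ideal periodic autocorrelation. The sketch below verifies both numerically; the root and length choices are illustrative:

```python
import numpy as np

def zadoff_chu(u, n_zc):
    """Root-u Zadoff-Chu sequence of odd length n_zc.

    For gcd(u, n_zc) = 1 the sequence is constant-modulus and has zero
    periodic autocorrelation at every nonzero lag, which makes it a
    natural constant-modulus starting point for codebook optimization.
    """
    n = np.arange(n_zc)
    return np.exp(-1j * np.pi * u * n * (n + 1) / n_zc)
```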
Deep Learning in Physical Layer Communications
Qin, Zhijin; Ye, Hao; Li, Geoffrey Ye
IEEE Wireless Communications, April 2019, Volume 26, Issue 2
Journal Article, Peer-reviewed
DL has shown great potential to revolutionize communication systems. This article provides an overview of recent advancements in DL-based physical layer communications. DL can improve the performance of each individual block in a communication system or optimize the whole transmitter/receiver. Therefore, we categorize the applications of DL in physical layer communications into systems with and without block structures. For DL-based communication systems with block structures, we demonstrate the power of DL in signal compression and signal detection. We also discuss recent endeavors in developing DL-based end-to-end communication systems. Finally, potential research directions are identified to boost intelligent physical layer communications.
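The end-to-end (autoencoder) view mentioned above treats the transmitter, channel, and receiver as one differentiable chain trained jointly. A toy forward pass, with placeholder linear weights standing in for the learned encoder and decoder, looks like:

```python
import numpy as np

def awgn_autoencoder_forward(msg_onehot, W_enc, W_dec, snr_db, rng):
    """Forward pass of a toy end-to-end (autoencoder) link:
    one-hot message -> linear encoder -> power normalization ->
    AWGN channel -> linear decoder -> logits.

    W_enc, W_dec are placeholders for weights that would normally be
    learned end-to-end; only the data flow is illustrated here.
    """
    x = msg_onehot @ W_enc                     # encode to channel uses
    x = x / np.sqrt(np.mean(x**2))             # enforce unit average power
    noise_std = 10 ** (-snr_db / 20)
    y = x + noise_std * rng.standard_normal(x.shape)
    return y @ W_dec                           # decoder logits
```

Because every stage is differentiable, gradients can flow from a cross-entropy loss on the logits back through the (simulated) channel to the encoder, which is what makes joint transceiver optimization possible in the block-free systems surveyed above.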