Ensuring ultrareliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches that rely on average quantities (e.g., average throughput, average delay, and average response time); moving beyond such averages is no longer an option but a necessity. However, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is to take a first step toward filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.
The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centralized ML model training or inference. To overcome these challenges, distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges, thus reducing the communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges, including the uncertain wireless environment (e.g., dynamic channel and interference), limited wireless resources (e.g., transmit power and radio spectrum), and limited hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we present a detailed literature review on the use of communication techniques for its efficient deployment. We then introduce an illustrative example to show how to optimize wireless networks to improve its performance. Finally, we introduce future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks.
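As a concrete illustration of the federated learning paradigm mentioned above, the following minimal sketch runs a few rounds of federated averaging: each edge device takes gradient steps on its private data and only model parameters, never raw samples, are sent to the server for aggregation. The number of devices, the linear-regression model, and the learning rate are hypothetical placeholders, not choices from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K edge devices, each holding private (X, y) pairs for a
# linear model y ~ X @ w. The raw data never leaves the device.
K, n_local, d = 5, 20, 3
true_w = np.array([1.0, -2.0, 0.5])
local_data = []
for _ in range(K):
    X = rng.normal(size=(n_local, d))
    y = X @ true_w + 0.1 * rng.normal(size=n_local)
    local_data.append((X, y))

def local_update(w, X, y, lr=0.05, epochs=5):
    """Run a few gradient-descent epochs on one device's private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Federated averaging: only model parameters are exchanged with the server.
w_global = np.zeros(d)
for rnd in range(30):
    local_models = [local_update(w_global, X, y) for X, y in local_data]
    w_global = np.mean(local_models, axis=0)   # server-side aggregation

print("estimated weights:", np.round(w_global, 3))
```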
Nonorthogonal multiple access (NOMA) represents a paradigm shift from conventional orthogonal multiple-access (MA) concepts and has been recognized as one of the key enabling technologies for fifth-generation mobile networks. In this paper, the impact of user pairing on the performance of two NOMA systems, i.e., NOMA with fixed power allocation (F-NOMA) and cognitive-radio-inspired NOMA (CR-NOMA), is characterized. For F-NOMA, both analytical and numerical results are provided to demonstrate that F-NOMA can offer a larger sum rate than orthogonal MA, and the performance gain of F-NOMA over conventional MA can be further enlarged by selecting users whose channel conditions are more distinctive. For CR-NOMA, the quality of service (QoS) for users with poorer channel conditions can be guaranteed since the transmit power allocated to other users is constrained following the concept of cognitive radio networks. Because of this constraint, CR-NOMA exhibits a different behavior compared with F-NOMA. For example, for the user with the best channel condition, CR-NOMA prefers to pair it with the user with the second best channel condition, whereas the user with the worst channel condition is preferred by F-NOMA.
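To make the sum-rate comparison tangible, the toy calculation below evaluates two-user downlink power-domain NOMA (with successive interference cancellation at the strong user) against orthogonal MA with equal time sharing, once for similar and once for very distinctive channel gains. The channel gains, power split, and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def noma_sum_rate(g_strong, g_weak, p_total=1.0, a_weak=0.8, noise=1e-2):
    """Two-user downlink power-domain NOMA with SIC at the strong user."""
    p_weak, p_strong = a_weak * p_total, (1 - a_weak) * p_total
    # Weak user decodes its own signal, treating the strong user's as noise.
    r_weak = np.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    # Strong user first removes the weak user's signal via SIC.
    r_strong = np.log2(1 + p_strong * g_strong / noise)
    return r_weak + r_strong

def oma_sum_rate(g_strong, g_weak, p_total=1.0, noise=1e-2):
    """Orthogonal MA: each user gets half of the time/frequency resource."""
    return 0.5 * (np.log2(1 + p_total * g_strong / noise)
                  + np.log2(1 + p_total * g_weak / noise))

for g_s, g_w in [(1.0, 0.5), (1.0, 0.01)]:   # similar vs distinctive channels
    print(f"gains ({g_s}, {g_w}): "
          f"NOMA = {noma_sum_rate(g_s, g_w):.2f} bps/Hz, "
          f"OMA = {oma_sum_rate(g_s, g_w):.2f} bps/Hz")
```

Running this shows that NOMA's advantage over OMA grows as the two channel gains become more distinctive, which is the intuition behind pairing users with dissimilar channel conditions.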
This paper considers the application of multiple-input multiple-output (MIMO) techniques to nonorthogonal multiple access (NOMA) systems. A new design of precoding and detection matrices for MIMO-NOMA is proposed, and its performance is analyzed for the case with a fixed set of power allocation coefficients. To further enlarge the performance gap between MIMO-NOMA and conventional orthogonal multiple access schemes, user pairing is applied to NOMA and its impact on the system performance is characterized. More sophisticated choices of power allocation coefficients are also proposed to meet various quality-of-service requirements. Finally, computer simulation results are provided to facilitate the performance evaluation of MIMO-NOMA and to demonstrate the accuracy of the developed analytical results.
Non-orthogonal multiple access (NOMA) has received tremendous attention for the design of radio access techniques for fifth generation (5G) wireless networks and beyond. The basic concept behind NOMA is to serve more than one user in the same resource block, for example, a time slot, subcarrier, spreading code, or space. With this, NOMA promotes massive connectivity, lowers latency, improves user fairness and spectral efficiency, and increases reliability compared to orthogonal multiple access (OMA) techniques. While NOMA has gained significant attention from the communications community, it has also been subject to several widespread misunderstandings, such as "NOMA is based on allocating higher power to users with worse channel conditions. As such, cell-edge users receive more power in NOMA and, due to this biased power allocation toward cell-edge users, inter-cell interference is more severe in NOMA compared to OMA. NOMA also compromises security for spectral efficiency." The above statements are actually false, and this article aims at identifying such common myths about NOMA and clarifying why they are not true. We also pose critical questions that are important for the effective adoption of NOMA in 5G and beyond and identify promising research directions for NOMA, which will require intense investigation in the future.
The application of multiple-input multiple-output (MIMO) techniques to nonorthogonal multiple access (NOMA) systems is important to enhance the performance gains of NOMA. In this paper, a novel MIMO-NOMA framework for downlink and uplink transmission is proposed by applying the concept of signal alignment. By using stochastic geometry, closed-form analytical results are developed to facilitate the performance evaluation of the proposed framework for randomly deployed users and interferers. The impact of different power allocation strategies, namely fixed power allocation and cognitive radio inspired power allocation, on the performance of MIMO-NOMA is also investigated. Computer simulation results are provided to demonstrate the performance of the proposed framework and the accuracy of the developed analytical results.
To overcome devices' limitations in performing computation-intensive applications, mobile edge computing (MEC) enables users to offload tasks to proximal MEC servers for faster task computation. However, the current MEC system design is based on average-based metrics, which fails to account for the ultra-reliable low-latency requirements of mission-critical applications. To tackle this, this paper proposes a new system design in which probabilistic and statistical constraints are imposed on task queue lengths by applying extreme value theory. The aim is to minimize users' power consumption while trading off the resources allocated to local computation and task offloading. Due to wireless channel dynamics, users are re-associated to MEC servers so that they can offload tasks at higher rates or access closer servers. In this regard, a user-server association policy is proposed, taking into account the channel quality as well as the servers' computation capabilities and workloads. By marrying tools from Lyapunov optimization and matching theory, a two-timescale mechanism is proposed, where the user-server association is solved on the long timescale, while dynamic task offloading and resource allocation are executed on the short timescale. The simulation results corroborate the effectiveness of the proposed approach, which guarantees highly reliable task computation and lower delay compared to several baselines.
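The extreme value theory ingredient can be illustrated as follows: by the Pickands–Balkema–de Haan result, exceedances of the task queue length over a high threshold are approximately generalized Pareto distributed, which allows a constraint on the probability of rare, very long queues rather than on the average queue length. The queue-length samples, threshold choice, and target violation probability in this sketch are hypothetical and only indicate the general recipe, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Hypothetical queue-length samples (e.g., collected while the MEC system runs).
queue_lengths = rng.exponential(scale=20.0, size=50_000)

# Exceedances over a high threshold are approximately generalized Pareto (GPD).
threshold = np.quantile(queue_lengths, 0.95)
exceedances = queue_lengths[queue_lengths > threshold] - threshold
shape, _, scale = genpareto.fit(exceedances, floc=0.0)

# Probabilistic constraint: Pr(queue length > q_max) <= epsilon.
q_max, epsilon = 120.0, 1e-3
p_over_threshold = np.mean(queue_lengths > threshold)
tail_prob = p_over_threshold * genpareto.sf(q_max - threshold, shape,
                                            loc=0.0, scale=scale)

print(f"threshold = {threshold:.1f}, GPD shape = {shape:.3f}, scale = {scale:.2f}")
print(f"estimated Pr(queue > {q_max}) = {tail_prob:.2e} "
      f"({'meets' if tail_prob <= epsilon else 'violates'} epsilon = {epsilon})")
```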
This paper considers the co-existence of two important communication techniques, non-orthogonal multiple access (NOMA) and mobile edge computing (MEC). Both NOMA uplink and downlink transmissions are applied to MEC, and analytical results are developed to demonstrate that the use of NOMA can efficiently reduce the latency and energy consumption of MEC offloading. In addition, various asymptotic studies are carried out to reveal that the impact of the users' channel conditions and transmit powers on the application of NOMA to MEC is quite different from that in conventional NOMA scenarios. Computer simulation results are provided to facilitate the performance evaluation of NOMA-MEC and to verify the accuracy of the developed analytical results.
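One simplified way to see why NOMA can shorten MEC offloading delay is to compare the time needed for two users to offload tasks of fixed size over the same bandwidth: with uplink NOMA they transmit simultaneously and the server applies SIC, whereas with OMA (TDMA) they transmit one after the other. The task sizes, channel gains, and powers below are illustrative assumptions, and the completion-time model is a deliberately coarse sketch rather than the paper's analysis.

```python
import numpy as np

# Illustrative (not from the paper): two users each offload N bits to an MEC
# server over bandwidth B, with normalized noise power.
B, N, noise = 1e6, 1e6, 1.0          # Hz, bits, normalized noise
g1, g2 = 10.0, 2.0                   # channel gains (user 1 is the stronger one)
p1, p2 = 1.0, 1.0                    # transmit powers

# Uplink NOMA: both users transmit at once; the server decodes user 1 first
# (treating user 2 as interference), cancels it, then decodes user 2.
r1_noma = B * np.log2(1 + p1 * g1 / (p2 * g2 + noise))
r2_noma = B * np.log2(1 + p2 * g2 / noise)
t_noma = max(N / r1_noma, N / r2_noma)

# OMA (TDMA): the users offload one after the other, each interference-free.
t_oma = N / (B * np.log2(1 + p1 * g1 / noise)) \
      + N / (B * np.log2(1 + p2 * g2 / noise))

print(f"offloading completion time: NOMA = {t_noma:.2f} s, OMA = {t_oma:.2f} s")
```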
The key idea of non-orthogonal multiple access (NOMA) is to serve multiple users simultaneously at the same time and frequency, which can result in excessive multiple-access interference. As a crucial component of NOMA systems, successive interference cancellation (SIC) is key to combating this multiple-access interference and is the focus of this letter, where an overview of SIC decoding order selection schemes is provided. In particular, selecting the SIC decoding order based on the users' channel state information (CSI) and on the users' quality of service (QoS), respectively, is discussed. The limitations of these two approaches are illustrated, and then a recently proposed scheme, termed hybrid SIC, which dynamically adapts the SIC decoding order, is presented and shown to achieve a surprising performance improvement that cannot be realized by the conventional SIC decoding order selection schemes individually.
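A minimal numerical sketch of CSI-based SIC in a two-user downlink is given below: the stronger user first decodes and removes the weaker user's higher-power signal, then decodes its own signal interference-free, while the weaker user decodes directly, treating the stronger user's signal as noise. The channel gains, power split, and BPSK symbols are illustrative assumptions; QoS-based and hybrid ordering are not shown here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-user downlink: superposed BPSK symbols, CSI-based SIC order.
n = 10_000
a_weak, a_strong = 0.8, 0.2                 # power split (more power to weak user)
h_strong, h_weak = 1.0, 0.3                 # channel gains
noise_std = 0.05

s_weak = rng.choice([-1.0, 1.0], n)         # weak user's symbols
s_strong = rng.choice([-1.0, 1.0], n)       # strong user's symbols
x = np.sqrt(a_weak) * s_weak + np.sqrt(a_strong) * s_strong   # superposition

# Strong user's receiver: decode the weak user's higher-power signal first,
# cancel it, then decode its own signal.
y_strong = h_strong * x + noise_std * rng.normal(size=n)
s_weak_hat = np.sign(y_strong)                                 # step 1: decode
y_clean = y_strong - h_strong * np.sqrt(a_weak) * s_weak_hat   # step 2: cancel
s_strong_hat = np.sign(y_clean)                                # step 3: decode own

# Weak user's receiver: decode directly, treating the strong user's signal as noise.
y_weak = h_weak * x + noise_std * rng.normal(size=n)
s_weak_direct = np.sign(y_weak)

print("strong user BER after SIC:", np.mean(s_strong_hat != s_strong))
print("weak user BER (no SIC):   ", np.mean(s_weak_direct != s_weak))
```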
Benefiting from tens of GHz of bandwidth, terahertz (THz) communication has become a promising technology for future 6G networks. To deal with the severe propagation loss of THz signals, massive multiple-input multiple-output (MIMO) with hybrid precoding is utilized to generate directional beams with high array gains. However, the standard hybrid precoding architecture, based on frequency-independent phase shifters, cannot cope with the beam split effect in THz massive MIMO caused by the large bandwidth and the large number of antennas, where the beams split into different physical directions at different frequencies. The beam split effect results in a serious array gain loss across the entire bandwidth, which has not been well investigated in THz massive MIMO. In this paper, we first quantify the severity of the beam split effect in THz massive MIMO by analyzing the array gain loss it causes. Then, we propose a new precoding architecture called delay-phase precoding (DPP) to mitigate this effect. Specifically, the proposed DPP introduces a time delay network, composed of a small number of time delay elements, between the radio-frequency chains and the phase shifters of the standard hybrid precoding architecture. Unlike frequency-independent phase shifts, the time delay network introduced in the DPP can realize frequency-dependent phase shifts, which can be designed to generate frequency-dependent beams aligned with the target physical direction across the entire bandwidth. Thanks to the joint control of delay and phase, the proposed DPP can alleviate the array gain loss caused by the beam split effect. Furthermore, we propose a hardware structure based on true-time delayers to implement the frequency-dependent phase shifts required by DPP, together with a corresponding precoding algorithm. Theoretical analysis and simulations show that the proposed DPP can mitigate the beam split effect and achieve a near-optimal rate with higher energy efficiency.
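The beam split effect described above can be reproduced numerically: with frequency-independent phase shifters, a uniform linear array beam designed at the central carrier loses gain toward the target direction at the band edges, whereas true-time delays keep the beam aligned across the whole band. The array size, carrier frequency, bandwidth, and steering angle below are illustrative assumptions, and the sketch only reproduces the effect, not the paper's full DPP algorithm.

```python
import numpy as np

# Illustrative THz setup: N-element ULA with half-wavelength spacing at the carrier.
N = 256                       # antennas
fc = 300e9                    # carrier frequency (Hz)
bw = 30e9                     # bandwidth (Hz)
theta0 = np.deg2rad(45.0)     # target physical direction
n = np.arange(N)
freqs = np.linspace(fc - bw / 2, fc + bw / 2, 5)

def steering(f, theta):
    """ULA response at frequency f with antenna spacing d = c / (2 * fc)."""
    return np.exp(-1j * np.pi * n * (f / fc) * np.sin(theta))

# (a) Frequency-independent phase shifters, designed only for the carrier fc.
w_ps = np.conj(steering(fc, theta0)) / np.sqrt(N)

# (b) True-time delays: the phase grows linearly with f, tracking every subcarrier.
def w_ttd(f):
    return np.conj(steering(f, theta0)) / np.sqrt(N)

for f in freqs:
    a = steering(f, theta0)
    gain_ps = np.abs(w_ps @ a) / np.sqrt(N)        # normalized array gain in [0, 1]
    gain_ttd = np.abs(w_ttd(f) @ a) / np.sqrt(N)
    print(f"f = {f/1e9:6.1f} GHz: phase-shifter gain = {gain_ps:.3f}, "
          f"TTD gain = {gain_ttd:.3f}")
```

At the band edges the phase-shifter gain collapses while the true-time-delay gain stays at 1, which is the array gain loss that delay-phase precoding is designed to recover.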