In this paper, the problem of joint power and resource allocation (JPRA) for ultra-reliable low-latency communication (URLLC) in vehicular networks is studied. Therein, the network-wide power consumption of vehicular users (VUEs) is minimized subject to high reliability in terms of probabilistic queuing delays. Using extreme value theory (EVT), a new reliability measure is defined to characterize extreme events pertaining to vehicles' queue lengths exceeding a predefined threshold. To learn these extreme events, assuming they are independently and identically distributed over VUEs, a novel distributed approach based on federated learning (FL) is proposed to estimate the tail distribution of the queue lengths. Considering the communication delays incurred by FL over wireless links, Lyapunov optimization is used to derive the JPRA policies enabling URLLC for each VUE in a distributed manner. The proposed solution is then validated via extensive simulations using a Manhattan mobility model. Simulation results show that FL enables the proposed method to estimate the tail distribution of queues with an accuracy close to that of a centralized solution, while exchanging up to 79% less data. Furthermore, the proposed method yields up to a 60% reduction in the number of VUEs with large queue lengths and halves the average power consumption compared to an average queue-based baseline.
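By the Pickands-Balkema-de Haan theorem underlying EVT, excesses of a queue length over a high threshold are approximately Generalized Pareto (GPD) distributed, so each VUE only needs to learn the GPD's shape and scale parameters locally. The sketch below is a toy illustration, not the paper's algorithm: it uses a method-of-moments GPD fit per VUE and a FedAvg-style weighted parameter average at the aggregator; all data and function names are assumptions made for illustration.

```python
import random
import statistics

def fit_gpd_mom(excesses):
    """Method-of-moments fit of a Generalized Pareto Distribution
    (shape xi, scale sigma) to threshold excesses."""
    m = statistics.mean(excesses)
    s2 = statistics.variance(excesses)
    xi = 0.5 * (1.0 - m * m / s2)
    sigma = 0.5 * m * (m * m / s2 + 1.0)
    return xi, sigma

def federated_gpd(local_excesses):
    """FedAvg-style aggregation: each client fits locally, the server
    averages the parameters weighted by the local sample count."""
    fits = [(len(e),) + fit_gpd_mom(e) for e in local_excesses]
    total = sum(n for n, _, _ in fits)
    xi = sum(n * x for n, x, _ in fits) / total
    sigma = sum(n * s for n, _, s in fits) / total
    return xi, sigma

# Toy data: 4 VUEs, each observing exponential excesses
# (true shape xi = 0, true scale sigma = 2).
random.seed(0)
clients = [[random.expovariate(0.5) for _ in range(500)] for _ in range(4)]
xi, sigma = federated_gpd(clients)
```

Only the fitted parameter pairs cross the network, which is the source of the data-exchange savings reported in the abstract.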
Ensuring ultra-reliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches that rely on average quantities (e.g., average throughput, average delay, and average response time), which are no longer adequate. A principled and scalable framework that accounts for delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is to take a first step toward filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.
In this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (UAVs), used as aerial base stations to collect data from ground Internet of Things (IoT) devices, are investigated. In particular, to enable reliable uplink communications for the IoT devices with a minimum total transmit power, a novel framework is proposed for jointly optimizing the 3D placement and the mobility of the UAVs, device-UAV association, and uplink power control. First, given the locations of active IoT devices at each time instant, the optimal UAVs' locations and associations are determined. Next, to dynamically serve the IoT devices in a time-varying network, the optimal mobility patterns of the UAVs are analyzed. To this end, based on the activation process of the IoT devices, the time instances at which the UAVs must update their locations are derived. Moreover, the optimal 3D trajectory of each UAV is obtained in a way that the total energy used for the mobility of the UAVs is minimized while serving the IoT devices. Simulation results show that, using the proposed approach, the total transmit power of the IoT devices is reduced by 45% compared with a case in which stationary aerial base stations are deployed. In addition, the proposed approach can yield a maximum of 28% enhanced system reliability compared with the stationary case. The results also reveal an inherent tradeoff between the number of update times, the mobility of the UAVs, and the transmit power of the IoT devices. In essence, a higher number of updates can lead to lower transmit powers for the IoT devices at the cost of an increased mobility for the UAVs.
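For intuition on why placement drives transmit power: under a simple free-space model where channel gain scales as 1/d², the power a device needs to meet a fixed SNR target grows with the squared device-UAV distance, so the total required power is minimized when the UAV hovers above the devices' centroid. The sketch below illustrates this under those simplifying assumptions; it is not the paper's model, and `snr_target`, `noise`, and the device layout are hypothetical values.

```python
def required_power(uav, device, snr_target=1.0, noise=1e-3):
    """Minimum transmit power for a device to meet an SNR target toward a
    UAV, assuming a free-space channel with gain proportional to 1/d^2:
    p * (1/d^2) / noise >= snr_target  =>  p = snr_target * noise * d^2."""
    d2 = sum((a - b) ** 2 for a, b in zip(uav, device))
    return snr_target * noise * d2

devices = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
h = 2.0  # minimum allowed hover altitude

def total_power(xy):
    """Sum of the minimum uplink powers for a UAV hovering at (x, y, h)."""
    return sum(required_power((xy[0], xy[1], h), (dx, dy, 0.0))
               for dx, dy in devices)

# Under 1/d^2 loss, the sum of squared distances -- and hence the total
# required power -- is minimized at the devices' centroid.
centroid = (2.0, 2.0)
corner = (0.0, 0.0)
```

The same reasoning is why device activation patterns matter: as the set of active devices changes, the power-minimizing hover point moves, motivating the location-update analysis in the abstract.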
In this paper, the effective use of flight-time constrained unmanned aerial vehicles (UAVs) as flying base stations that provide wireless service to ground users is investigated. In particular, a novel framework for optimizing the performance of such UAV-based wireless systems in terms of the average number of bits (data service) transmitted to users as well as the UAVs' hover duration (i.e., flight time) is proposed. In the considered model, UAVs hover over a given geographical area to serve ground users that are distributed within the area based on an arbitrary spatial distribution function. In this case, two practical scenarios are considered. In the first scenario, based on the maximum possible hover times of UAVs, the average data service delivered to the users under a fair resource allocation scheme is maximized by finding the optimal cell partitions associated with the UAVs. Using the powerful mathematical framework of optimal transport theory, this cell partitioning problem is proved to be equivalent to a convex optimization problem. Subsequently, a gradient-based algorithm is proposed for optimally partitioning the geographical area based on the users' distribution, hover times, and locations of the UAVs. In the second scenario, given the load requirements of ground users, the minimum average hover time that the UAVs need for completely servicing their ground users is derived. To this end, first, an optimal bandwidth allocation scheme for serving the users is proposed. Then, given this optimal bandwidth allocation, optimal cell partitions associated with the UAVs are derived by exploiting optimal transport theory. Simulation results show that our proposed cell partitioning approach leads to a significantly higher fairness among the users compared with the classical weighted Voronoi diagram.
Furthermore, the results demonstrate that the average hover time of the UAVs can be reduced by 64% by adopting the proposed optimal bandwidth allocation scheme as well as the optimal cell partitioning approach. In addition, our results reveal an inherent tradeoff between the hover time of UAVs and bandwidth efficiency while serving the ground users.
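The balancing idea behind gradient-based cell partitioning can be sketched with an additively weighted (power-diagram) partition: each UAV gets a scalar weight, users join the cell minimizing squared distance minus that weight, and a gradient-style update grows the weights of underloaded cells until loads match their targets. This is a simplified stand-in for the paper's optimal-transport formulation; the user locations, site positions, step size, and load targets below are illustrative assumptions.

```python
import random

def assign(users, sites, w):
    """Additively weighted nearest-site rule: user u joins the cell of
    argmin_i ||u - x_i||^2 - w_i (a power-diagram partition)."""
    cells = [[] for _ in sites]
    for u in users:
        i = min(range(len(sites)),
                key=lambda j: (u[0] - sites[j][0]) ** 2
                              + (u[1] - sites[j][1]) ** 2 - w[j])
        cells[i].append(u)
    return cells

def balance(users, sites, target, iters=400, lr=0.002):
    """Gradient-style weight update: grow the weight of underloaded cells,
    shrink overloaded ones; keep the best partition seen so far."""
    w = [0.0] * len(sites)
    best, best_gap = list(w), float("inf")
    for _ in range(iters):
        loads = [len(c) for c in assign(users, sites, w)]
        gap = max(abs(l - t) for l, t in zip(loads, target))
        if gap < best_gap:
            best, best_gap = list(w), gap
        w = [wi + lr * (t - l) for wi, t, l in zip(w, target, loads)]
    return best

random.seed(1)
users = [(random.random(), random.random()) for _ in range(300)]
sites = [(0.2, 0.2), (0.8, 0.3), (0.5, 0.8)]
w = balance(users, sites, target=[100, 100, 100])
loads = [len(c) for c in assign(users, sites, w)]
```

Setting all weights to zero recovers the ordinary Voronoi diagram, which is why weighted partitions can trade geometric proximity for fairness among users, as the abstract's comparison suggests.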
Next-generation wireless networks must enable emerging technologies such as augmented reality and connected autonomous vehicles via a wide range of wireless services that span enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Existing wireless systems that solely rely on the scarce sub-6 GHz microwave (µW) frequency bands will be unable to meet such stringent and mixed service requirements for future wireless services due to spectrum scarcity. Meanwhile, operating at high-frequency mmWave bands is seen as an attractive solution, primarily due to the bandwidth availability and possibility of large-scale multi-antenna communication. However, even though leveraging the large bandwidth at mmWave frequencies can potentially boost the wireless capacity for eMBB services and reduce the transmission delay for low-latency applications, mmWave communication is inherently unreliable due to its susceptibility to blockage, high path loss, and channel uncertainty. Hence, to provide URLLC and high-speed wireless access, it is desirable to seamlessly integrate the reliability of µW networks with the high capacity of mmWave networks. To this end, in this article, the first comprehensive tutorial for integrated mmWave-µW communications is introduced. This envisioned integrated design will enable wireless networks to achieve URLLC along with eMBB by leveraging the best of two worlds: reliable, long-range communications at the µW bands and directional high-speed communications at the mmWave frequencies. To achieve this goal, key solution concepts are discussed that include new architectures for the radio interface, URLLC-aware frame structure and resource allocation methods along with mobility management, to realize the potential of integrated mmWave-µW communications. The opportunities and challenges of each proposed scheme are discussed and key results are presented to show the merits of the proposed schemes.
Recently, millimeter-wave (mmWave) bands have been postulated as a means to accommodate the foreseen extreme bandwidth demands in vehicular communications, which result from the dissemination of sensory data to nearby vehicles for enhanced environmental awareness and improved safety. However, the literature is particularly scarce with regard to principled resource allocation schemes that deal with the challenging radio conditions posed by the high mobility of vehicular scenarios. In this paper, we propose a novel framework that blends together matching theory and swarm intelligence to dynamically and efficiently pair vehicles and optimize both transmission and reception beamwidths. This is done by jointly considering channel state information and queue state information when establishing vehicle-to-vehicle (V2V) links. To validate the proposed framework, simulation results are presented and discussed, where the throughput performance as well as the latency/reliability tradeoffs of the proposed approach are assessed and compared with several baseline approaches recently proposed in the literature. The results obtained in this paper show performance gains of 25% in reliability and delay for ultra-dense vehicular scenarios with 50% more active V2V links than the baselines. These results shed light on the operational limits and practical feasibility of mmWave bands as a viable radio access solution for future high-rate V2V communications.
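The matching-theory component can be illustrated with deferred acceptance (Gale-Shapley): given link utilities that would, in the paper's setting, combine channel and queue state information, transmitters propose down their preference lists and receivers keep their best proposer so far, which yields a stable pairing. The sketch below is a generic illustration of that mechanism, not the paper's exact scheme, and the utility numbers are made up.

```python
def stable_match(utility):
    """Deferred-acceptance (Gale-Shapley) pairing. utility[t][r] is the
    value both sides assign to the link t-r (symmetric here for brevity).
    Returns a dict mapping each transmitter to its matched receiver."""
    n = len(utility)
    # Transmitter t's receivers, best first; receiver r ranks proposers
    # by utility (stored negated so that a lower rank is better).
    tx_pref = [sorted(range(n), key=lambda r: -utility[t][r]) for t in range(n)]
    rx_rank = [{t: -utility[t][r] for t in range(n)} for r in range(n)]
    nxt = [0] * n          # next receiver each transmitter will propose to
    match_rx = {}          # receiver -> currently held transmitter
    free = list(range(n))
    while free:
        t = free.pop()
        r = tx_pref[t][nxt[t]]
        nxt[t] += 1
        if r not in match_rx:
            match_rx[r] = t
        elif rx_rank[r][t] < rx_rank[r][match_rx[r]]:
            free.append(match_rx[r])   # r trades up; old partner is freed
            match_rx[r] = t
        else:
            free.append(t)             # r rejects t; t proposes again later
    return {t: r for r, t in match_rx.items()}

# Hypothetical CSI/QSI-weighted utilities for 3 transmitters x 3 receivers.
utility = [[5.0, 2.0, 1.0],
           [4.0, 3.0, 2.0],
           [1.0, 4.0, 6.0]]
pairing = stable_match(utility)
```

Stability here means no transmitter-receiver pair would both prefer each other over their assigned partners, which is the property that makes such pairings robust under the distributed operation the abstract targets.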
In this paper, the effective use of multiple quadrotor drones as an aerial antenna array that provides wireless service to ground users is investigated. In particular, under the goal of minimizing the airborne service time needed for communicating with ground users, a novel framework for deploying and operating a drone-based antenna array system whose elements are single-antenna drones is proposed. In the considered model, the service time is minimized by minimizing the wireless transmission time as well as the control time that is needed for movement and stabilization of the drones. To minimize the transmission time, first, the antenna array gain is maximized by optimizing the drone spacing within the array. In this case, using perturbation techniques, the drone spacing optimization problem is addressed by solving successive, perturbed convex optimization problems. Then, according to the location of each ground user, the optimal locations of the drones around the array's center are derived such that the transmission time for the user is minimized. Given the determined optimal locations of drones, the drones must spend a control time to adjust their positions dynamically so as to serve multiple users. To minimize this control time of the quadrotor drones, the speed of the rotors is optimally adjusted based on both the destinations of the drones and external forces (e.g., wind and gravity). In particular, using bang-bang control theory, the optimal rotor speeds as well as the minimum control time are derived in closed form. Simulation results show that the proposed approach can significantly reduce the service time to ground users compared with a fixed-array case in which the same number of drones form a fixed uniform antenna array. The results also show that, in comparison with the fixed-array case, the network's spectral efficiency can be improved by 32% while leveraging the drone antenna array system.
Finally, the results reveal an inherent tradeoff between the control time and transmission time while varying the number of drones in the array.
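The bang-bang structure of minimum-time control admits a simple closed form for a rest-to-rest maneuver of a double integrator: accelerate at the bound for half the distance, then brake, giving a switch time of sqrt(d/a_max) and a total time of 2·sqrt(d/a_max). The sketch below is a one-dimensional simplification of the quadrotor dynamics (ignoring the wind and gravity terms the paper accounts for) that verifies the closed form by forward simulation.

```python
import math

def bang_bang_time(d, a_max):
    """Minimum time to move a double integrator (rest-to-rest) a distance d
    under a symmetric acceleration bound a_max: accelerate to the midpoint,
    then decelerate. Returns (switch time, total time)."""
    t_switch = math.sqrt(d / a_max)
    return t_switch, 2.0 * t_switch

def simulate(d, a_max, dt=1e-4):
    """Semi-implicit Euler simulation of the bang-bang policy; returns the
    final position and velocity, which should be (d, 0)."""
    t_switch, t_total = bang_bang_time(d, a_max)
    x, v, t = 0.0, 0.0, 0.0
    while t < t_total:
        a = a_max if t < t_switch else -a_max
        v += a * dt
        x += v * dt
        t += dt
    return x, v
```

Because the total time scales as sqrt(d/a_max), splitting the workload over more drones shortens individual repositioning distances, which is one way to see the control-time/transmission-time tradeoff noted above.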
The next generation of wireless networks will enable many machine learning (ML) tools and applications to efficiently analyze various types of data collected by edge devices for inference, autonomy, and decision-making purposes. However, due to resource constraints, delay limitations, and privacy challenges, edge devices cannot offload their entire collected datasets to a cloud server for centrally training their ML models or inference purposes. To overcome these challenges, distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges, thus reducing the communication overhead and latency as well as improving data privacy. However, deploying distributed learning over wireless networks faces several challenges including the uncertain wireless environment (e.g., dynamic channel and interference), limited wireless resources (e.g., transmit power and radio spectrum), and hardware resources (e.g., computational power). This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks. We present a detailed overview of several emerging distributed learning paradigms, including federated learning, federated distillation, distributed inference, and multi-agent reinforcement learning. For each learning framework, we first introduce the motivation for deploying it over wireless networks. Then, we present a detailed literature review on the use of communication techniques for its efficient deployment. We then introduce an illustrative example to show how to optimize wireless networks to improve its performance. Finally, we introduce future research opportunities. In a nutshell, this paper provides a holistic set of guidelines on how to deploy a broad range of distributed learning frameworks over real-world wireless communication networks.
The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. On the other hand, UAVs can operate as flying mobile terminals within a cellular network. Such cellular-connected UAVs can enable several applications ranging from real-time video streaming to item delivery. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as 3D deployment, performance analysis, channel modeling, and energy efficiency are explored along with representative results. Then, open problems and potential research directions pertaining to UAV communications are introduced. Finally, various analytical frameworks and mathematical tools, such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems.
In this letter, a novel framework for delay-optimal cell association in unmanned aerial vehicle (UAV)-enabled wireless cellular networks is proposed. In particular, to minimize the average network ...delay under any arbitrary spatial distribution of the ground users, the optimal cell partitions of the UAVs and terrestrial base stations are determined. To this end, using the powerful mathematical tools of optimal transport theory, the existence of the solution to the optimal cell association problem is proved and the solution space is completely characterized. The analytical and simulation results show that the proposed approach yields substantial improvements in terms of the average network delay.