Network slicing is a promising technique for wireless service providers to support enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) services in a shared radio access network (RAN) infrastructure. In this paper, we apply numerology, mini-slot based transmission, and punctured scheduling techniques to support eMBB and URLLC network slices. For efficient allocation of radio resources (e.g., physical resource blocks, transmit power) to the users, we formulate the RAN slicing problem as a multi-timescale problem. To solve this problem and address the traffic dynamics, we propose a hierarchical deep learning framework. Specifically, in each long time slot, the service provider employs a deep reinforcement learning (DRL) algorithm to determine the slice configuration parameters. The eMBB and URLLC schedulers use their own attention-based deep neural network (DNN) algorithms to allocate radio resources to their corresponding users in each short and mini time slot, respectively. Simulation results show that the proposed framework achieves a higher aggregate throughput and a higher service level agreement (SLA) satisfaction ratio than several other RAN slicing approaches, including the resource proportional placement algorithm, the decomposition and relaxation based resource allocation algorithm, and the distributed bandwidth optimization algorithm.
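To make the two-timescale structure described in this abstract concrete, the following minimal Python sketch mimics the nesting of long, short, and mini time slots; the class names and round-robin placeholder policies are hypothetical stand-ins, not the paper's DRL agent or attention-based schedulers.

# Minimal sketch of a two-timescale RAN slicing loop (illustrative only).
import random

class SliceConfigAgent:
    """Stand-in for the long-timescale DRL agent choosing slice parameters."""
    def act(self, state):
        # e.g. fraction of PRBs reserved for the URLLC slice
        return {"urllc_prb_share": random.choice([0.1, 0.2, 0.3])}

class Scheduler:
    """Stand-in for an attention-based per-slice scheduler."""
    def allocate(self, prbs, users):
        # round-robin placeholder instead of the attention-based DNN
        return {u: prbs // max(len(users), 1) for u in users}

def run(long_slots=2, short_per_long=4, mini_per_short=7, total_prbs=100):
    agent, embb, urllc = SliceConfigAgent(), Scheduler(), Scheduler()
    for t in range(long_slots):                      # long timescale: slice configuration
        cfg = agent.act(state={"traffic": None})
        urllc_prbs = int(cfg["urllc_prb_share"] * total_prbs)
        for s in range(short_per_long):              # short timescale: eMBB scheduling
            embb.allocate(total_prbs - urllc_prbs, users=["e1", "e2"])
            for m in range(mini_per_short):          # mini-slot timescale: URLLC scheduling
                urllc.allocate(urllc_prbs, users=["u1"])

if __name__ == "__main__":
    run()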
To meet the wide range of 5G use cases in a cost-efficient way, network slicing has been advocated as a key enabler. Unlike core network slicing in a virtualized environment, radio access network (RAN) slicing is still in its infancy and its realization is challenging. In this paper, we investigate a realization approach for fog RAN slicing, in which two network slice instances for hotspot and vehicle-to-infrastructure scenarios are orchestrated. In particular, the RAN slicing framework is formulated as an optimization problem that jointly tackles content caching and mode selection, in which the time-varying channel and the unknown content popularity distribution are characterized. Due to the diverse users' demands and the limited resources, the complexity of the original optimization problem is significantly high, which makes traditional optimization approaches difficult to apply directly. To deal with this dilemma, a deep reinforcement learning algorithm is proposed, whose core idea is that the cloud server makes proper decisions on content caching and mode selection to maximize the reward under dynamic channel states and cache status. The simulation results demonstrate that the proposed scheme significantly improves performance in terms of hit ratio and sum transmit rate.
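As a rough illustration of the decision loop this abstract describes, the toy tabular Q-learning sketch below picks a joint (caching, mode) action and updates value estimates; the states, actions, and reward shaping are invented placeholders rather than the paper's actual MDP or DRL architecture.

# Toy Q-learning over joint caching / mode-selection actions (illustrative only).
import random
from collections import defaultdict

ACTIONS = [("cache", "fog"), ("cache", "cloud"), ("skip", "fog"), ("skip", "cloud")]
Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(action):
    cache_dec, mode = action
    # toy reward: caching at the fog node tends to raise the hit ratio
    return (1.0 if cache_dec == "cache" else 0.3) + (0.5 if mode == "fog" else 0.2)

state = "s0"
for step in range(1000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    r = reward(a)
    next_state = "s0"  # single-state toy example
    best_next = max(Q[(next_state, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = next_state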
We study transmission over a network in which users send information to a remote destination through relay nodes that are connected to the destination via finite-capacity error-free links, i.e., a cloud radio access network. The relays are constrained to operate without knowledge of the users' codebooks, i.e., they perform oblivious processing. The destination, or central processor, however, is informed about the users' codebooks. We establish a single-letter characterization of the capacity region of this model for a class of discrete memoryless channels in which the outputs at the relay nodes are independent given the users' inputs. We show that both relaying à la Cover-El Gamal, i.e., compress-and-forward with joint decompression and decoding, and "noisy network coding" are optimal. The proof of the converse part establishes, and utilizes, connections with the Chief Executive Officer (CEO) source coding problem under the logarithmic loss distortion measure. Extensions to general discrete memoryless channels are also investigated; in this case, we establish inner and outer bounds on the capacity region. For memoryless Gaussian channels within the studied class, we characterize the capacity region when the users are constrained to time-share among Gaussian codebooks. Furthermore, we discuss the suboptimality of separate decompression and decoding and the role of time sharing.
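For readers unfamiliar with the distortion measure invoked in the converse, logarithmic loss reproduces the source with a probability distribution rather than a point estimate; it is commonly written as

d_{\log}\bigl(x,\hat{x}\bigr) \;=\; \log\frac{1}{\hat{x}(x)},

where \hat{x} is a probability mass function on the source alphabet and \hat{x}(x) is the mass it assigns to the true source symbol x.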
Radio access network (RAN) slicing is a virtualization technology that partitions radio resources into multiple autonomous virtual networks. Since RAN slicing can be tailored to provide diverse performance requirements, it will be pivotal in achieving the high-throughput and low-latency communications that next-generation (5G) systems demand. To this end, effective RAN slicing algorithms must (i) partition radio resources so as to leverage coordination among multiple base stations and thus boost network throughput; and (ii) reduce interference across different slices to guarantee slice isolation and avoid performance degradation. The goal of this paper is to design RAN slicing algorithms that address the above two requirements. First, we show that the RAN slicing problem can be formulated as a 0-1 Quadratic Programming problem, and we prove its NP-hardness. Second, we propose an optimal solution for small-scale 5G network deployments, and we present three approximation algorithms that keep the optimization problem tractable as the network size increases. We first analyze the performance of our algorithms through simulations, and then demonstrate it through experiments on a standard-compliant LTE testbed with 2 base stations and 6 smartphones. Our results show that our algorithms not only partition RAN resources efficiently, but also improve network throughput by 27% and increase the signal-to-interference-plus-noise ratio by 2×.
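For context, a 0-1 Quadratic Program has the generic form below; the paper's specific objective and constraint matrices are not reproduced here, and the binary variables would typically encode, e.g., the assignment of resource blocks to slices:

\max_{\mathbf{x}\in\{0,1\}^{n}} \;\; \mathbf{x}^{\mathsf{T}}\mathbf{Q}\,\mathbf{x}
\quad\text{s.t.}\quad \mathbf{A}\mathbf{x}\le\mathbf{b}.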
A novel dynamic radio-cooperation strategy is proposed for a Cloud Radio Access Network (Cloud-RAN) consisting of multiple Remote Radio Heads connected to a central Virtual Base Station (VBS) pool. In particular, the key capabilities of Cloud-RAN in computing-resource sharing and real-time communication among the VBSs are leveraged to design a joint dynamic radio clustering and cooperative beamforming scheme that maximizes the downlink Weighted Sum-Rate System Utility (WSRSU). Due to the combinatorial nature of the radio clustering process and the non-convexity of the cooperative beamforming design, the underlying optimization problem is NP-hard and extremely difficult to solve for a large network. The proposed approach obtains a suboptimal solution by transforming the original problem into a Mixed-Integer Second-Order Cone Program (MI-SOCP) and applying Sequential Convex Approximation (SCA) to derive a novel iterative algorithm. Numerical simulation results show that our low-complexity algorithm provides near-optimal performance in terms of WSRSU while significantly outperforming conventional radio clustering and beamforming schemes. Additionally, the results demonstrate a significant improvement in the computing-resource utilization of Cloud-RAN over a traditional RAN with distributed computing resources.
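The SCA idea referenced here can be summarized by the following generic Python sketch, in which solve_convex_subproblem is a hypothetical stand-in for the convexified subproblem (here, the MI-SOCP) solved around the current iterate; it is not the paper's algorithm.

# Generic Sequential Convex Approximation (SCA) loop (illustrative only).
def solve_convex_subproblem(point):
    # placeholder: move a small step toward a fixed target to mimic convergence
    target = 1.0
    return point + 0.5 * (target - point)

def sca(x0=0.0, tol=1e-6, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = solve_convex_subproblem(x)   # convexify and solve around x
        if abs(x_new - x) < tol:             # stop when iterates stabilize
            break
        x = x_new
    return x

print(sca())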
Deep learning-based univariate time series classification can improve the user experience of Open Radio Access Network (O-RAN)-based Cellular Vehicle-to-Everything (CV2x). However, few institutes researching O-RAN-based CV2x can satisfy the enormous demand for labeled data; this issue is known as few-shot learning. We therefore explore the few-shot learning problem for O-RAN-based CV2x in depth. Meta-transfer learning is a promising approach to few-shot learning, but most such methods are still plagued by catastrophic forgetting. Numerous studies have demonstrated that deliberately applying gradient sparsity can significantly increase a meta-model's capacity for generalization. In this paper, we propose a pre-training framework named Distilling for Sparse-Meta-transfer Learning (DSML). It combines and enhances meta-transfer learning, multi-teacher knowledge distillation, and sparse Model-Agnostic Meta-Learning (sparse-MAML). It utilizes multi-teacher knowledge distillation to address catastrophic forgetting in the meta-learning phase. Simultaneously, it utilizes a sigmoid function to fundamentally address the gradient anomaly problem of sparse-MAML. We conduct ablation experiments on sparse-MAML and show that it can indeed increase the meta-model's generalization capacity. We also compare DSML with state-of-the-art algorithms in the univariate time series classification field, and the results demonstrate that DSML performs better. Finally, we present two case studies of applying DSML to O-RAN-based CV2x.
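A toy NumPy sketch of the sigmoid-gated sparse-gradient idea mentioned above is given below; the parameters, scores, and quadratic loss are illustrative assumptions, not DSML's actual architecture.

# Sigmoid-gated sparse gradient step (illustrative only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(4)                          # meta-model parameters
scores = np.array([2.0, -2.0, 0.5, -0.5])    # learned per-parameter sparsity scores
lr = 0.1

def grad_loss(params):
    # gradient of a toy quadratic loss ||params - 1||^2 / 2
    return params - 1.0

mask = sigmoid(scores)                       # soft gate in (0, 1) avoids hard-threshold gradient issues
theta_adapted = theta - lr * mask * grad_loss(theta)
print(theta_adapted)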
This paper considers a downlink cloud radio access network (C-RAN) in which all the base-stations (BSs) are connected to a central computing cloud via digital backhaul links with finite capacities. Each user is associated with a user-centric cluster of BSs; the central processor shares the user's data with the BSs in the cluster, which then cooperatively serve the user through joint beamforming. Under this setup, this paper investigates the user scheduling, BS clustering, and beamforming design problem from a network utility maximization perspective. Differing from previous works, this paper explicitly considers the per-BS backhaul capacity constraints. We formulate the network utility maximization problem for the downlink C-RAN under two different models depending on whether the BS clustering for each user is dynamic or static over different user scheduling time slots. In the former case, the user-centric BS cluster is dynamically optimized for each scheduled user along with the beamforming vector in each time-frequency slot, whereas in the latter case, the user-centric BS cluster is fixed for each user and we jointly optimize the user scheduling and the beamforming vector to account for the backhaul constraints. In both cases, the nonconvex per-BS backhaul constraints are approximated using the reweighted ℓ1-norm technique. This approximation allows us to reformulate the per-BS backhaul constraints into weighted per-BS power constraints and solve the weighted sum rate maximization problem through a generalized weighted minimum mean square error approach. This paper shows that the proposed dynamic clustering algorithm can achieve significant performance gain over existing naive clustering schemes. This paper also proposes two heuristic static clustering schemes that can already achieve a substantial portion of the gain.
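For reference, a generic reweighted ℓ1 iteration of the kind alluded to replaces an ℓ0-type (cluster-size) term with a weighted ℓ1 term whose weights are refreshed from the previous iterate; the paper's exact weight update may differ:

w_i^{(k+1)} \;=\; \frac{1}{\bigl|x_i^{(k)}\bigr| + \epsilon},
\qquad
\min_{\mathbf{x}} \;\sum_i w_i^{(k+1)}\,|x_i|,

where \epsilon > 0 is a small regularization constant and k is the iteration index.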
The increasing wireless data traffic demands have driven the need to explore suitable spectrum regions for meeting the projected requirements. In light of this, millimeter wave (mmWave) communication has received considerable attention from the research community. Typically, in fifth generation (5G) wireless networks, mmWave massive multiple-input multiple-output (MIMO) communication is realized by hybrid transceivers that combine high-dimensional analog phase shifters and power amplifiers with lower-dimensional digital signal processing units. This hybrid beamforming design reduces cost and power consumption, which is aligned with the energy-efficient design vision of 5G. In this paper, we track the progress in hybrid beamforming for massive MIMO communications in the context of the system models of hybrid transceiver structures, the digital and analog beamforming matrices with the possible antenna configuration scenarios, and hybrid beamforming in heterogeneous wireless networks. We extend the scope of the discussion by including resource management issues in hybrid beamforming. We explore the suitability of hybrid beamforming methods, both existing and those proposed up to the first quarter of 2017, and identify exciting future challenges in this domain.
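The hybrid architecture discussed in this survey is usually modeled by cascading a low-dimensional digital baseband precoder with a phase-shifter-based analog precoder whose entries have constant modulus:

\mathbf{x} \;=\; \mathbf{F}_{\mathrm{RF}}\,\mathbf{F}_{\mathrm{BB}}\,\mathbf{s},
\qquad
\bigl|[\mathbf{F}_{\mathrm{RF}}]_{m,n}\bigr| = \text{const.},

where \mathbf{s} is the symbol vector, \mathbf{F}_{\mathrm{BB}} the digital precoder, and \mathbf{F}_{\mathrm{RF}} the analog RF precoder implemented with phase shifters.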
Traffic over mobile cellular networks has significantly increased over the past decade, and with the introduction of 5G there is a growing focus on throughput capacity, reliability, and low latency to meet the demands of new and innovative applications. Multi-access Edge Computing (MEC) is being developed to address the challenges posed by the introduction of new applications and services that require ultra-low latency and high bandwidth. This article is a comprehensive survey of recent advances in MEC and provides a description of the MEC concept, framework, and capabilities. We also summarize a set of MEC technology enablers, including Software Defined Networking, Network Function Virtualization, Information-Centric Networking, Service Function Chaining, Cloud Radio Access Networks, Fog-computing-based Radio Access Networks, and Network Slicing. The MEC use cases and open research challenges are also presented.
The open radio access network (O-RAN) architecture provides enhanced opportunities for integrating machine learning in 5G/6G resource management by decomposing RAN functionalities. Yet, generic learning mechanisms either do not fully exploit the disaggregated non-real-time and near-real-time RAN controllers or ignore the potential elasticity of application demands, another degree of freedom in managing RAN resources. We introduce a two-timescale framework aimed at optimizing users' long-term total QoS. Rather than reactively allocating resources, our approach proactively modifies multi-resource user demands using congestion indicators, prior to enforcing any allocation rules. To address the lack of user feedback on individual resource utilities, we employ a bandit-feedback version of the combinatorial multi-armed bandit framework to deduce resource-specific signals. Moreover, to compensate for insufficient and infrequent feedback, we develop an algorithm that gleans side information from live network traffic to refine predictions of user resource sensitivities. This accelerates the algorithm's convergence to the optimum and leverages the two-tier O-RAN controller structure. We validate our algorithms' efficacy through analysis and 5G usage experiments, showing that the proposed method improves application utility by 13-60% and throughput by 8-19%, and reduces latency by 10-18%.
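As a generic illustration of combinatorial bandit selection with per-resource (semi-bandit) feedback, the Python sketch below uses a standard combinatorial UCB rule; it is an assumption-laden stand-in, not the algorithm proposed in the paper.

# Combinatorial UCB over toy "resources" (illustrative only).
import math, random

n_arms, k, horizon = 5, 2, 200
counts = [0] * n_arms
means = [0.0] * n_arms
true_means = [0.2, 0.4, 0.6, 0.8, 0.5]   # toy ground truth

for t in range(1, horizon + 1):
    ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i]) if counts[i] else float("inf")
           for i in range(n_arms)]
    chosen = sorted(range(n_arms), key=lambda i: ucb[i], reverse=True)[:k]
    for i in chosen:                      # semi-bandit feedback per chosen resource
        reward = 1.0 if random.random() < true_means[i] else 0.0
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]

print(sorted(range(n_arms), key=lambda i: means[i], reverse=True)[:k])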