Millimeter wave (mmWave) communications has been regarded as a key enabling technology for 5G networks, as it offers orders of magnitude greater spectrum than current cellular bands. In contrast to conventional multiple-input-multiple-output (MIMO) systems, precoding in mmWave MIMO cannot be performed entirely at baseband using digital precoders, as only a limited number of signal mixers and analog-to-digital converters can be supported considering their cost and power consumption. As a cost-effective alternative, a hybrid precoding transceiver architecture, combining a digital precoder and an analog precoder, has recently received considerable attention. However, the optimal design of such hybrid precoders has not been fully understood. In this paper, treating the hybrid precoder design as a matrix factorization problem, effective alternating minimization (AltMin) algorithms will be proposed for two different hybrid precoding structures, i.e., the fully-connected and partially-connected structures. In particular, for the fully-connected structure, an AltMin algorithm based on manifold optimization is proposed to approach the performance of the fully digital precoder, which, however, has a high complexity. Thus, a low-complexity AltMin algorithm is then proposed, by enforcing an orthogonal constraint on the digital precoder. Furthermore, for the partially-connected structure, an AltMin algorithm is also developed with the help of semidefinite relaxation. For practical implementation, the proposed AltMin algorithms are further extended to the broadband setting with orthogonal frequency division multiplexing modulation. Simulation results will demonstrate significant performance gains of the proposed AltMin algorithms over existing hybrid precoding algorithms. Moreover, based on the proposed algorithms, simulation comparisons between the two hybrid precoding structures will provide valuable design insights.
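The matrix factorization view can be pictured with a minimal NumPy sketch of one AltMin variant: a least-squares update for the digital precoder alternated with an entrywise phase projection for the unit-modulus analog precoder. This is a simplified heuristic for illustration, not the paper's manifold-optimization or semidefinite-relaxation algorithms, and all names and dimensions are illustrative.

```python
import numpy as np

def altmin_hybrid_precoder(F_opt, n_rf, n_iter=50, seed=0):
    """Factor the fully digital precoder F_opt (n_t x n_s) as F_RF @ F_BB,
    where the analog precoder F_RF (n_t x n_rf) has unit-modulus entries,
    by alternating minimization of the Frobenius-norm residual."""
    rng = np.random.default_rng(seed)
    n_t, n_s = F_opt.shape
    # random unit-modulus initialization of the analog precoder
    F_RF = np.exp(1j * rng.uniform(0.0, 2 * np.pi, size=(n_t, n_rf)))
    for _ in range(n_iter):
        # digital precoder: unconstrained least-squares fit
        F_BB = np.linalg.pinv(F_RF) @ F_opt
        # analog precoder: entrywise projection onto the unit-modulus set
        F_RF = np.exp(1j * np.angle(F_opt @ F_BB.conj().T))
    # final least-squares digital precoder for the last analog precoder
    F_BB = np.linalg.pinv(F_RF) @ F_opt
    return F_RF, F_BB
```

With the final least-squares step, `F_RF @ F_BB` is the orthogonal projection of `F_opt` onto the range of `F_RF`, so the residual never exceeds the norm of `F_opt` itself.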
Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized mobile cloud computing toward mobile edge computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing the 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also discuss a set of issues, challenges, and future research directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
Effective resource management plays a pivotal role in wireless networks, which, unfortunately, typically results in challenging mixed-integer nonlinear programming (MINLP) problems. Machine learning-based methods have recently emerged as a disruptive way to obtain near-optimal performance for MINLPs with affordable computational complexity. There have been some attempts at applying such methods to resource management in wireless networks, but these attempts require huge numbers of training samples and lack the capability to handle constrained problems. Furthermore, they suffer from severe performance deterioration when the network parameters change, which commonly happens and is referred to as the task mismatch problem. In this paper, to reduce the sample complexity and address the feasibility issue, we propose a framework of Learning to Optimize for Resource Management (LORM). In contrast to the end-to-end learning approach adopted in previous studies, LORM learns the optimal pruning policy in the branch-and-bound algorithm for MINLPs via a sample-efficient method, namely, imitation learning. To further address the task mismatch problem, we develop a transfer learning method via self-imitation in LORM, named LORM-TL, which can quickly adapt a pre-trained machine learning model to the new task with only a few additional unlabeled training samples. Numerical simulations demonstrate that LORM outperforms specialized state-of-the-art algorithms and achieves near-optimal performance, while providing significant speedup compared with the branch-and-bound algorithm. Moreover, LORM-TL, by relying on a few unlabeled samples, achieves comparable performance to a model trained from scratch with sufficient labeled samples.
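The role of a pruning policy inside branch-and-bound can be pictured with a toy solver in which the prune decision is a pluggable callable. The 0/1 knapsack instance and the `prune` hook below are illustrative assumptions, not the paper's resource management formulation; in LORM's spirit, a learned classifier would replace the hand-crafted fractional-bound rule.

```python
def knapsack_bnb(values, weights, cap, prune=None):
    """0/1 knapsack by depth-first branch-and-bound. `prune(k, val, room)`
    decides whether to discard a node; the default uses the fractional LP
    bound. A learned policy would replace this hand-crafted rule."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def lp_bound(k, val, room):
        # greedy fractional relaxation over the remaining items
        for i in range(k, n):
            if w[i] <= room:
                room -= w[i]
                val += v[i]
            else:
                return val + v[i] * room / w[i]
        return val

    best = 0
    if prune is None:
        prune = lambda k, val, room: lp_bound(k, val, room) <= best
    stack = [(0, 0, cap)]  # (next item index, value so far, capacity left)
    while stack:
        k, val, room = stack.pop()
        best = max(best, val)
        if k == n or prune(k, val, room):
            continue
        stack.append((k + 1, val, room))                     # exclude item k
        if w[k] <= room:
            stack.append((k + 1, val + v[k], room - w[k]))   # include item k
    return best
```

Any sound pruning rule leaves the optimum intact; a learned policy trades a (small) risk of suboptimality for visiting far fewer nodes.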
Over-the-air computation (AirComp) is a disruptive technique for fast wireless data aggregation in Internet of Things (IoT) networks via exploiting the waveform superposition property of multiple-access channels. However, the performance of AirComp is bottlenecked by the worst channel condition among all links between the IoT devices and the access point. In this paper, a reconfigurable intelligent surface (RIS) assisted AirComp system is proposed to boost the received signal power and thus mitigate the performance bottleneck by reconfiguring the propagation channels. With an objective to minimize the AirComp distortion, we propose a joint design of AirComp transceivers and RIS phase-shifts, which, however, turns out to be a highly intractable non-convex programming problem. To this end, we develop a novel alternating minimization framework in conjunction with the successive convex approximation technique, which is proved to converge monotonically. To reduce the computational complexity, we transform the subproblem in each alternation into a smooth convex-concave saddle point problem, which is then tackled by proposing a Mirror-Prox method that only involves a sequence of closed-form updates. Simulations show that the computation time of the proposed algorithm can be two orders of magnitude smaller than that of the state-of-the-art algorithms, while achieving a similar distortion performance.
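The worst-link bottleneck can be made concrete with a scalar channel-inverting AirComp sketch, a standard baseline rather than the paper's RIS-assisted design: each device inverts its channel, so the common scaling factor, and hence the distortion, is dictated by the weakest channel.

```python
import numpy as np

def aircomp_mse(h, P, noise_var):
    """MSE of estimating sum_k s_k (unit-variance symbols) over a scalar
    multiple-access channel y = sum_k h_k b_k s_k + n, with channel-inverting
    precoders b_k = sqrt(eta) / h_k under per-device power |b_k|^2 <= P."""
    h = np.asarray(h, dtype=complex)
    eta = P * np.min(np.abs(h)) ** 2   # largest eta meeting every power limit
    # receiver computes y / sqrt(eta) = sum_k s_k + n / sqrt(eta)
    return noise_var / eta
```

Because `eta` scales with the minimum channel gain, improving only the weakest link (e.g., via an RIS) directly lowers the MSE, while improving any other link leaves it unchanged.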
Over-the-air computation (AirComp) based federated learning (FL) is capable of achieving fast model aggregation by exploiting the waveform superposition property of multiple-access channels. However, the model aggregation performance is severely limited by the unfavorable wireless propagation channels. In this paper, we propose to leverage intelligent reflecting surface (IRS) to achieve fast yet reliable model aggregation for AirComp-based FL. To optimize the learning performance, we present the convergence analysis of our proposed IRS-assisted AirComp-based FL system, based on which we propose to maximize the number of scheduled devices in each communication round under certain mean-squared error (MSE) requirements. To tackle the formulated highly intractable problem, we propose a two-step optimization framework. Specifically, we induce the sparsity of device selection in the first step, followed by solving a series of MSE minimization problems to find the maximum feasible device set in the second step. We then propose an alternating optimization framework, supported by the difference-of-convex programming for low-rank optimization, to efficiently design the aggregation beamformers at the base station (BS) and phase shifts at the IRS. Simulation results demonstrate that our proposed algorithm and the deployment of an IRS can achieve a higher FL prediction accuracy than the baseline schemes.
Federated learning (FL) has recently emerged as an important and promising learning scheme in IoT, enabling devices to jointly learn a model without sharing their raw data sets. As FL does not collect and store the data centrally, it requires frequent model exchange through the wireless network. However, since model aggregation in FL may involve only a subset of devices at synchronized rounds, its communication pattern differs from that of conventional networks. In particular, limited bandwidth and packet loss restrict the interactions in training, so the network scheduling can largely affect the FL convergence. To characterize these effects, we analyze the convergence rate of FL under the joint impact of communication and training. Combining this analysis with the network model, we formulate the optimal scheduling problem for FL implementation. The theoretical results can guide the hyper-parameter design in the network and explain how wireless communication influences the FL training process.
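The partial-participation pattern described above can be sketched with a toy FedAvg loop on quadratic local losses, where each round only a scheduled fraction of devices trains and the server averages their models. All parameters here are illustrative assumptions, not the paper's system model.

```python
import numpy as np

def fedavg_partial(centers, rounds=200, frac=0.5, local_steps=5, lr=0.1, seed=0):
    """Toy FedAvg: device k holds the quadratic loss 0.5*||w - c_k||^2,
    so the global optimum is the mean of the centers. Only a random
    fraction of devices participates in each synchronized round."""
    rng = np.random.default_rng(seed)
    K, d = centers.shape
    w = np.zeros(d)
    m = max(1, int(frac * K))
    for _ in range(rounds):
        scheduled = rng.choice(K, size=m, replace=False)  # scheduling decision
        local = []
        for k in scheduled:
            wk = w.copy()
            for _ in range(local_steps):
                wk -= lr * (wk - centers[k])   # local gradient step
            local.append(wk)
        w = np.mean(local, axis=0)             # server-side aggregation
    return w
```

Varying `frac` (how many devices the network can schedule per round) in this sketch mimics the bandwidth constraint and shows how scheduling affects convergence.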
Aberrant metabolism is the root cause of several serious health issues, imposing a huge burden on health and leading to diminished life expectancy. A dysregulated metabolism induces the secretion of several molecules which in turn trigger the inflammatory pathway. Inflammation is the natural reaction of the immune system to a variety of stimuli, such as pathogens, damaged cells, and harmful substances. Metabolically triggered inflammation, also called metaflammation or low-grade chronic inflammation, is the consequence of a synergistic interaction between the host and the exposome-a combination of environmental drivers, including diet, lifestyle, pollutants and other factors throughout the life span of an individual. Various levels of chronic inflammation are associated with several lifestyle-related diseases such as diabetes, obesity, metabolic-associated fatty liver disease (MAFLD), cancers, cardiovascular disorders (CVDs), autoimmune diseases, and chronic lung diseases. Chronic diseases are a growing concern worldwide, placing a heavy burden on individuals, families, governments, and health-care systems. New strategies are needed to empower communities worldwide to prevent and treat these diseases. Precision medicine provides a model for the next generation of lifestyle modification. This will capitalize on the dynamic interaction between an individual's biology, lifestyle, behavior, and environment. The aim of precision medicine is to design and improve diagnosis, therapeutics and prognostication through the use of large complex datasets that incorporate individual gene, function, and environmental variations. The implementation of high-performance computing (HPC) and artificial intelligence (AI) can predict risks with greater accuracy based on available multidimensional clinical and biological datasets. AI-powered precision medicine provides clinicians with an opportunity to specifically tailor early interventions to each individual.
In this article, we discuss the strengths and limitations of existing and emerging data-driven technologies, such as AI, in preventing, treating and reversing lifestyle-related diseases.
Small cell networks have recently been proposed as an important evolution path for the next-generation cellular networks. However, with more and more irregularly deployed base stations (BSs), it is becoming increasingly difficult to quantify the achievable network throughput or energy efficiency. In this paper, we develop an analytical framework for downlink performance evaluation of small cell networks, based on a random spatial network model, where BSs and users are modeled as two independent spatial Poisson point processes. A new simple expression of the outage probability is derived, which is analytically tractable and is especially useful with multi-antenna transmissions. This new result is then applied to evaluate the network throughput and energy efficiency. It is analytically shown that deploying more BSs can always increase the network throughput, but the throughput will scale with the BS density first linearly, then logarithmically, and finally converge to a constant. On the other hand, increasing the number of BS antennas can decrease the outage probability exponentially, thus can always increase the network throughput. However, increasing the BS density or the number of transmit antennas will first increase and then decrease the energy efficiency if the non-transmission power or the circuit power consumption is less than certain thresholds, and the optimal BS density and the optimal number of BS antennas can be found. Otherwise, the energy efficiency will always decrease. Simulation results will demonstrate that our conclusions based on the random network model are general and also hold in a regular grid-based model.
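The random spatial model can be sketched with a short Monte Carlo simulation: BSs drawn as a Poisson point process in a finite disc, nearest-BS association, Rayleigh fading, and power-law path loss. This finite-window simulation is an illustrative stand-in for the paper's analytical outage expression; all parameter values are assumptions.

```python
import numpy as np

def sir_outage(lam, theta, alpha=4.0, radius=20.0, trials=2000, seed=0):
    """Monte Carlo estimate of P(SIR < theta) for a typical user at the
    origin: BSs form a Poisson point process of density lam in a disc of
    the given radius, with Rayleigh fading and path loss r**(-alpha)."""
    rng = np.random.default_rng(seed)
    outages = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            outages += 1                              # no BS in range at all
            continue
        r = radius * np.sqrt(rng.uniform(size=n))     # uniform in the disc
        p = rng.exponential(size=n) * r ** (-alpha)   # fading times path loss
        i = np.argmin(r)                              # serve from nearest BS
        interference = p.sum() - p[i]
        if interference > 0 and p[i] < theta * interference:
            outages += 1
    return outages / trials
```

A useful sanity check on such a simulation is monotonicity: raising the SIR threshold can only increase the outage probability.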
The emerging deterministic networking (DetNet) has stimulated an increasing enthusiasm for the investigation of supporting deterministic, ultra-reliable, and low-latency services. However, the time-varying channel characteristics and bursty data traffic bring uncertainties for transmission, thereby making the assurance of deterministic quality-of-service (QoS) a challenging issue in practice. In this paper, we are interested in supporting the deterministic QoS demand in a massive access scenario by reducing the bit dropping rate incurred by delay violation. To minimize the bit dropping rate, a cross-layer scheduling scheme with joint channel and buffer awareness is highly desired to efficiently adjust the resource allocation among users, whose complexity increases exponentially with the number of users. Fortunately, the complexity issue can be relieved by adopting the mean-field approximation approach, which can substantially simplify the design and analysis of the cross-layer scheduling scheme with massive users. Two threshold-based scheduling policies are proposed, which have low computational complexity. We also derive the deadline-constrained capacity for massive access, which is substantially superior to that of single-user transmission. Numerical results will demonstrate the effectiveness of the mean-field approximation based cross-layer scheduling scheme.
Mobility-Aware Caching in D2D Networks. Rui Wang; Jun Zhang; S. H. Song. IEEE Transactions on Wireless Communications, vol. 16, no. 8, August 2017. Journal article; peer-reviewed; open access.
Caching at mobile devices can facilitate device-to-device (D2D) communications, which may significantly improve spectrum efficiency and alleviate the heavy burden on backhaul links. However, most previous works ignored user mobility, thus having limited practical applications. In this paper, we characterize the user mobility pattern by the inter-contact times between different users, and propose a mobility-aware caching placement strategy to maximize the data offloading ratio, which is defined as the percentage of the requested data that can be delivered via D2D links rather than through base stations. Given the NP-hard caching placement problem, we first propose an optimal dynamic programming algorithm to obtain a performance benchmark with much lower complexity than exhaustive search. We then prove that the problem falls in the category of monotone submodular maximization over a matroid constraint, and propose a time-efficient greedy algorithm, which achieves an approximation ratio of 1/2. Simulation results with real-life data sets will validate the effectiveness of our proposed mobility-aware caching placement strategy. We observe that users moving at either a very low or very high speed should cache the most popular files, while users moving at a medium speed should cache less popular files to avoid duplication.
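The greedy approach for monotone submodular maximization over a matroid can be sketched generically: per-device cache capacities form a partition matroid, and the algorithm repeatedly adds the feasible element with the largest marginal gain. The toy objective below, the popularity mass of distinct cached files, is an illustrative stand-in for the paper's data offloading ratio.

```python
def greedy_matroid(ground, capacity, owner, f):
    """Greedy for a monotone submodular set function f under a partition
    matroid: each owner (here, a caching device) holds at most `capacity`
    elements. Guarantees at least 1/2 of the optimal value."""
    chosen, load = set(), {}
    while True:
        best, best_gain = None, 0.0
        for e in ground:
            if e in chosen or load.get(owner(e), 0) >= capacity:
                continue
            gain = f(chosen | {e}) - f(chosen)   # marginal gain of adding e
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            return chosen
        chosen.add(best)
        load[owner(best)] = load.get(owner(best), 0) + 1

# toy instance: 2 users, 3 files with popularities 0.5 / 0.3 / 0.2,
# one cache slot per user; f = popularity mass of distinct cached files
pop = [0.5, 0.3, 0.2]
ground = [(u, c) for u in range(2) for c in range(3)]
f = lambda S: sum(pop[c] for c in {c for _, c in S})
cache = greedy_matroid(ground, capacity=1, owner=lambda e: e[0], f=f)
```

On this small instance the greedy choice spreads the two most popular files across the two caches, avoiding the duplication that the abstract's mobility insight also warns against.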