The Digital Twin (DT) is an emerging technology surrounded by many promises and the potential to reshape the future of industries and society as a whole. A DT is a system-of-systems that goes far beyond traditional computer-based simulation and analysis. It is a replication of all the elements, processes, dynamics, and firmware of a physical system in a digital counterpart. The two systems (physical and digital) exist side by side, sharing all inputs and operations via real-time data communication and information transfer. With the incorporation of the Internet of Things (IoT), Artificial Intelligence (AI), 3D models, next-generation mobile communications (5G/6G), Augmented Reality (AR), Virtual Reality (VR), distributed computing, Transfer Learning (TL), and electronic sensors, the digital/virtual counterpart of the real-world system is able to provide seamless monitoring, analysis, evaluation, and prediction. The DT offers a platform for testing and analysing complex systems in ways that would be impossible with traditional simulations and modular evaluations. However, the development of this technology faces many challenges, including the complexity of effective communication and data accumulation, the unavailability of data to train Machine Learning (ML) models, the lack of processing power to support high-fidelity twins, the strong need for interdisciplinary collaboration, and the absence of standardized development methodologies and validation measures. Being in the early stages of development, DTs lack sufficient documentation. In this context, this survey paper aims to cover the important aspects in the realization of the technology. The key enabling technologies, challenges, and prospects of DTs are highlighted. The paper provides a deep insight into the technology, lists design goals and objectives, highlights design challenges and limitations across industries, discusses research and commercial developments, presents applications and use cases, offers case studies in industry, infrastructure, and healthcare, lists the main service providers and stakeholders, and covers developments to date as well as viable research directions for future work on DTs.
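The real-time mirroring between a physical asset and its digital counterpart described above can be illustrated with a minimal sketch. The Python toy below assumes a hypothetical DigitalTwin class fed by a temperature/RPM sensor stream; the trend-based prediction is a crude stand-in for the ML- and simulation-driven analytics a real twin would use, not an implementation taken from the surveyed literature.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class DigitalTwin:
        """Toy digital counterpart that mirrors a physical asset's sensor stream."""
        state: Dict[str, float] = field(default_factory=dict)
        history: List[Dict[str, float]] = field(default_factory=list)

        def ingest(self, reading: Dict[str, float]) -> None:
            # Real-time data transfer from the physical system updates the twin's state.
            self.state.update(reading)
            self.history.append(dict(self.state))

        def predict_temperature(self, steps: int = 1) -> float:
            # Naive linear extrapolation as a stand-in for an ML/simulation model.
            temps = [h["temp"] for h in self.history if "temp" in h]
            if len(temps) < 2:
                return temps[-1] if temps else float("nan")
            return temps[-1] + steps * (temps[-1] - temps[-2])

    # Example: mirror three sensor readings and forecast the next temperature.
    twin = DigitalTwin()
    for reading in [{"temp": 70.0, "rpm": 1200}, {"temp": 71.5, "rpm": 1210}, {"temp": 73.1, "rpm": 1215}]:
        twin.ingest(reading)
    print(twin.predict_temperature())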
Ubiquitous cell-free Massive MIMO communications. Interdonato, Giovanni; Björnson, Emil; Quoc Ngo, Hien ...
EURASIP Journal on Wireless Communications and Networking, 08/2019, Volume 2019, Issue 1.
Journal Article. Peer reviewed. Open access.
Since the first cellular networks were trialled in the 1970s, we have witnessed an incredible wireless revolution. From 1G to 4G, the massive traffic growth has been managed by a combination of wider bandwidths, refined radio interfaces, and network densification, namely increasing the number of antennas per site. Due to its cost-efficiency, the latter has contributed the most. Massive MIMO (multiple-input multiple-output) is a key 5G technology that uses massive antenna arrays to provide very high beamforming gain and spatial multiplexing of users, and hence increases the spectral and energy efficiency (see references herein). It constitutes a centralized solution to densify a network, and its performance is limited by the inter-cell interference inherent in its cell-centric design. Conversely, ubiquitous cell-free Massive MIMO refers to a distributed Massive MIMO system implementing coherent user-centric transmission to overcome the inter-cell interference limitation of cellular networks and to provide additional macro-diversity. These features, combined with the system scalability inherent in the Massive MIMO design, distinguish ubiquitous cell-free Massive MIMO from prior coordinated distributed wireless systems. In this article, we investigate the enormous potential of this promising technology while addressing practical deployment issues to deal with the increased backhaul/fronthaul overhead deriving from the signal co-processing.
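The coherent, user-centric transmission mentioned above can be made concrete with a small numerical sketch. The Python/NumPy toy model below (the AP/user counts, i.i.d. Rayleigh channels, perfect CSI, and absence of large-scale fading are all illustrative assumptions, not the article's system model) computes per-user spectral efficiency when distributed single-antenna APs jointly serve all users with conjugate (matched-filter) precoding.

    import numpy as np

    rng = np.random.default_rng(0)
    M, K = 64, 8          # distributed single-antenna APs, single-antenna users
    P_ap = 1.0            # per-AP power budget (arbitrary units)

    # i.i.d. Rayleigh fading channels between every AP and every user (toy model).
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

    # Conjugate (matched-filter) precoding: every AP coherently serves all users,
    # with a per-AP power normalization.
    W = np.conj(H)
    W *= np.sqrt(P_ap / np.sum(np.abs(W) ** 2, axis=1, keepdims=True))

    # G[k, j] = combined gain from all APs at user k for the stream intended for user j.
    G = H.T @ W
    noise_power = 1.0
    sig = np.abs(np.diag(G)) ** 2
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    sinr = sig / (interf + noise_power)
    print("per-user spectral efficiency [bit/s/Hz]:", np.log2(1 + sinr))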
Recently, machine learning has been used in every possible field to leverage its remarkable power. For a long time, networking and distributed computing systems have been the key infrastructure providing efficient computational resources for machine learning. Networking itself can also benefit from this promising technology. This article focuses on the application of machine learning to networking (MLN), which can not only help solve intractable old networking questions but also stimulate new network applications. In this article, we summarize the basic workflow to explain how to apply machine learning technology in the networking domain. We then provide a selective survey of the latest representative advances, with explanations of their design principles and benefits. These advances are grouped by network design objective, and detailed information on how they perform in each step of the MLN workflow is presented. Finally, we shed light on new opportunities in networking design and on community building in this new interdiscipline. Our goal is to provide a broad research guideline on networking with machine learning to help motivate researchers to develop innovative algorithms, standards, and frameworks.
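As a concrete, hypothetical instance of the basic MLN workflow the abstract refers to, the sketch below walks through data collection, offline training, and deployment for a toy traffic-classification task. The synthetic per-flow features, the two traffic classes, and the choice of a random-forest model are illustrative assumptions rather than anything prescribed by the article.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Step 1: data collection -- synthetic per-flow features
    # (mean packet size [B], flow duration [s], packets per second) for two classes.
    n = 2000
    video = np.column_stack([rng.normal(1200, 100, n), rng.normal(30, 5, n), rng.normal(80, 10, n)])
    web   = np.column_stack([rng.normal(600, 150, n), rng.normal(3, 1, n), rng.normal(20, 8, n)])
    X = np.vstack([video, web])
    y = np.array([0] * n + [1] * n)   # 0 = video, 1 = web

    # Step 2: offline model training.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

    # Step 3: deployment -- classify new flows so the network can, e.g., prioritize video.
    print("hold-out accuracy:", clf.score(X_te, y_te))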
This technical note addresses the distributed specified-time consensus protocol design problem for multiagent systems with general linear dynamics over directed graphs. By using motion planning approaches, a novel class of distributed consensus protocols is developed. With a prespecified settling time, the proposed protocols solve the consensus problem for linear multiagent systems over directed graphs containing a directed spanning tree. In particular, the settling time can be prespecified offline according to task requirements. Compared with existing results for multiagent systems, to the best of our knowledge, this is the first time specified-time consensus problems have been raised and solved for general linear multiagent systems over directed graphs. Extensions to specified-time formation flying are further studied for multiple satellites described by Hill equations.
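To make the problem setting concrete, the toy simulation below runs a standard (asymptotic) single-integrator consensus iteration over a directed graph containing a spanning tree. The graph, the Euler step, and the integrator dynamics are illustrative assumptions; this is not the note's motion-planning-based protocol, whose defining property is convergence by a prespecified settling time rather than asymptotically.

    import numpy as np

    # Directed graph on 4 agents containing a spanning tree rooted at agent 0;
    # A[i, j] = 1 means agent i receives agent j's state.
    A = np.array([[0, 0, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [0, 1, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

    x = np.array([4.0, -2.0, 1.0, 7.0])     # initial single-integrator states
    dt = 0.01
    for _ in range(5000):                   # Euler integration of xdot = -L x
        x = x - dt * (L @ x)
    print(x)  # states converge toward the root agent's value, only asymptotically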
This paper considers the problem of communication over a discrete memoryless channel (DMC) or an additive white Gaussian noise (AWGN) channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low. In particular, the relative entropy between the output distributions when a codeword is transmitted and when no input is provided to the channel must be sufficiently small. For a DMC whose output distribution induced by the "off" input symbol is not a mixture of the output distributions induced by other input symbols, it is shown that the maximum amount of information that can be transmitted under this criterion scales like the square root of the blocklength. The same is true for the AWGN channel. Exact expressions for the scaling constant are also derived.
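In symbols, the covertness constraint and the resulting square-root scaling stated in the abstract can be written as below; the notation (the induced output distribution, the "off" output distribution, and the maximal message count) is chosen here for illustration and is not taken from the paper.

    % Assumed notation: \hat{Q}^{n} is the n-letter channel-output distribution induced
    % by the codebook, Q_0^{\otimes n} the output distribution when the "off" symbol is
    % sent throughout, and M^{*}(n,\delta) the largest achievable number of messages.
    \[
      D\!\left(\hat{Q}^{n} \,\middle\|\, Q_0^{\otimes n}\right) \le \delta
      \quad\Longrightarrow\quad
      \log M^{*}(n,\delta) = \Theta\!\big(\sqrt{n}\big),
    \]
    % i.e., the number of covertly transmissible bits grows only like the square root of
    % the blocklength, for the DMCs characterized above and for the AWGN channel.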
This article studies massive access in cell-free massive multi-input multi-output (MIMO)-based Internet of Things and solves the challenging active user detection (AUD) and channel estimation (CE) problems. For the uplink transmission, we propose an advanced frame structure design to reduce the access latency. Moreover, by considering the cooperation of all access points (APs), we investigate two processing paradigms at the receiver for massive access: cloud computing and edge computing. For cloud computing, all APs are connected to a centralized processing unit (CPU), and the signals received at all APs are centrally processed at the CPU. For edge computing, the central processing is offloaded to a subset of APs equipped with distributed processing units, so that AUD and CE can be performed in a distributed processing strategy. Furthermore, by leveraging the structured sparsity of the channel matrix, we develop a structured sparsity-based generalized approximate message passing (SS-GAMP) algorithm for reliable joint AUD and CE, where the quantization accuracy of the processed signals is taken into account. Based on the SS-GAMP algorithm, a successive interference cancellation-based AUD and CE scheme is further developed under both paradigms for reduced access latency. Simulation results validate the superiority of the proposed approach over state-of-the-art baseline schemes. Moreover, the results reveal that edge computing can achieve massive access performance similar to that of cloud computing, while alleviating the burden on the CPU, enabling a faster access response, and supporting more flexible AP cooperation.
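The sparse-activity uplink model that motivates the joint AUD/CE problem can be sketched numerically. In the Python/NumPy toy below, the pilot length, user counts, noise level, and the correlation-plus-least-squares receiver are all illustrative assumptions; a real receiver would replace this crude detector with SS-GAMP or another compressed-sensing algorithm, as proposed in the article.

    import numpy as np

    rng = np.random.default_rng(2)
    N, K, M, K_active = 64, 200, 16, 10   # pilot length, users, receive antennas, active users

    # Sparse-activity uplink: only K_active of K users transmit their non-orthogonal pilots.
    S = rng.standard_normal((N, K)) / np.sqrt(N)        # pilot matrix, one column per user
    active = rng.choice(K, K_active, replace=False)
    X = np.zeros((K, M), dtype=complex)                 # row-sparse channel matrix
    X[active] = (rng.standard_normal((K_active, M)) + 1j * rng.standard_normal((K_active, M))) / np.sqrt(2)
    Y = S @ X + 0.05 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))

    # Toy AUD: rank users by matched-filter output energy, exploiting the row sparsity of X.
    energy = np.linalg.norm(S.T @ Y, axis=1)
    detected = np.argsort(energy)[-K_active:]
    print("detection hit rate:", len(set(detected) & set(active)) / K_active)

    # Toy CE: least-squares channel estimates restricted to the detected support.
    H_hat = np.linalg.lstsq(S[:, detected], Y, rcond=None)[0]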
This paper studies the inference problem in quantile regression (QR) for a large sample size n under a limited memory constraint, where the memory can only store a small batch of data of size m. A natural method is the naive divide-and-conquer approach, which splits the data into batches of size m, computes the local QR estimator for each batch, and then aggregates the estimators via averaging. However, this method only works when n = o(m²) and is computationally expensive. This paper proposes a computationally efficient method which only requires an initial QR estimator on a small batch of data and then successively refines the estimator via multiple rounds of aggregation. Theoretically, as long as n grows polynomially in m, we establish the asymptotic normality of the obtained estimator and show that our estimator, with only a few rounds of aggregation, achieves the same efficiency as the QR estimator computed on all the data. Moreover, our result allows the dimensionality p to go to infinity. The proposed method can also be applied to the QR problem in a distributed computing environment (e.g., a large-scale sensor network) or to real-time streaming data.
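The naive divide-and-conquer baseline described in the abstract is easy to sketch. The Python code below fits a local median (tau = 0.5) regression on each memory-sized batch using a crude subgradient solver for the check (pinball) loss and then averages the batch estimators; the solver, step size, and synthetic data are illustrative assumptions, and the paper's multi-round refinement scheme is not implemented here.

    import numpy as np

    def quantile_regression(X, y, tau=0.5, lr=0.05, iters=2000):
        # Toy local QR solver: subgradient descent on the check (pinball) loss.
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            r = y - X @ beta
            grad = -X.T @ np.where(r > 0, tau, tau - 1.0) / len(y)
            beta -= lr * grad
        return beta

    rng = np.random.default_rng(3)
    n, m, p, tau = 20_000, 2_000, 3, 0.5
    X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
    beta_true = np.array([1.0, 2.0, -1.0])
    y = X @ beta_true + rng.standard_normal(n)   # symmetric noise, so median regression targets beta_true

    # Naive divide-and-conquer: local QR estimator per batch of size m, then average.
    batches = np.array_split(np.arange(n), n // m)
    beta_dc = np.mean([quantile_regression(X[idx], y[idx], tau) for idx in batches], axis=0)
    print("averaged estimator:", beta_dc)        # should be close to beta_true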