The burgeoning field of fog computing introduces a transformative computing paradigm with applications across diverse sectors. At the heart of this paradigm lie edge servers, which are entrusted with critical computing and storage functions. Optimizing these servers' storage capacity is therefore a crucial factor in improving the efficiency of fog computing infrastructures. This paper presents a novel storage optimization algorithm, dubbed LIRU (Low Interference Recently Used), which combines the strengths of the LIRS (Low Inter-reference Recency Set) and LRU (Least Recently Used) replacement algorithms. Against the backdrop of constrained storage resources, this research aims to formulate an algorithm that optimizes storage space utilization, improves data access efficiency, and reduces access latency. The investigation begins with a comprehensive analysis of the storage resources available on edge servers, pinpointing the essential considerations for optimization: storage resource utilization and data access frequency. The study then constructs an optimization model that balances data access frequency against cache capacity, employing optimization theory to find the solution that maximizes storage utilization. Experimental validation of the LIRU algorithm underscores its superiority over conventional replacement algorithms, showing significant improvements in storage utilization, data access efficiency, and access delay. Notably, LIRU registers a 5% increase in one-hop hit ratio relative to the LFU algorithm, a 66% improvement over the LRU algorithm, and a 14% increase in system hit ratio against the LRU algorithm. Moreover, it reduces the average system response time by 2.4% and 16.5% compared with the LRU and LFU algorithms, respectively, particularly for large cache sizes.
This research not only sheds light on the intricacies of edge server storage optimization but also advances the performance and efficiency of the broader fog computing ecosystem. Through these insights, the study contributes a valuable framework for enhancing data management strategies within fog computing architectures, marking a noteworthy advancement in the field.
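The abstract does not reproduce LIRU's pseudocode. As a point of reference for the replacement policies it builds on, a minimal LRU cache can be sketched in Python; the class name and interface here are illustrative, not taken from the paper, and LIRU additionally weighs access frequency on top of this recency-only baseline:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement policy: evicts the least recently used key
    when capacity is exceeded. Shown only as the recency-based baseline;
    LIRU further accounts for access frequency (details in the paper)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                      # cache miss
        self.store.move_to_end(key)          # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")            # "a" becomes most recently used
cache.put("c", 3)         # evicts "b", the least recently used key
print(cache.get("b"))     # None (miss)
print(cache.get("a"))     # 1 (hit)
```

A pure-recency policy like this is exactly what the paper argues is insufficient for edge servers, since a frequently requested object can be evicted by a burst of one-off accesses.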
Digital Twin (DT) is an emerging technology with great promise and the potential to reshape the future of industries and society as a whole. A DT is a system-of-systems that goes far beyond traditional computer-based simulation and analysis. It replicates all the elements, processes, dynamics, and firmware of a physical system in a digital counterpart. The two systems (physical and digital) exist side by side, sharing all inputs and operations through real-time data communication and information transfer. With the incorporation of the Internet of Things (IoT), Artificial Intelligence (AI), 3D models, next-generation mobile communications (5G/6G), Augmented Reality (AR), Virtual Reality (VR), distributed computing, Transfer Learning (TL), and electronic sensors, the digital/virtual counterpart of the real-world system can provide seamless monitoring, analysis, evaluation, and prediction. The DT offers a platform for testing and analysing complex systems in ways that would be impossible with traditional simulations and modular evaluations. However, the development of this technology faces many challenges, including the complexity of effective communication and data accumulation, the unavailability of data to train Machine Learning (ML) models, the lack of processing power to support high-fidelity twins, the strong need for interdisciplinary collaboration, and the absence of standardized development methodologies and validation measures. Being in the early stages of development, DTs lack sufficient documentation. In this context, this survey paper aims to cover the important aspects of realizing the technology. The key enabling technologies, challenges, and prospects of DTs are highlighted.
The paper provides deep insight into the technology: it lists design goals and objectives, highlights design challenges and limitations across industries, discusses research and commercial developments, presents applications and use cases, offers case studies in industry, infrastructure, and healthcare, lists the main service providers and stakeholders, and covers developments to date as well as viable research directions for future work on DTs.
Privacy Preserving Average Consensus. Mo, Yilin; Murray, Richard M.
IEEE Transactions on Automatic Control, February 2017, Volume 62, Issue 2
Journal Article
Peer-reviewed
Open access
Average consensus is a widely used algorithm for distributed computing and control, in which all the agents in the network repeatedly communicate and update their states in order to reach an agreement. This approach can result in an undesirable disclosure of an agent's initial state to the other agents. In this paper, we propose a privacy-preserving average consensus algorithm that guarantees both the privacy of the initial state and asymptotic consensus on the exact average of the initial values, by adding and subtracting random noise during the consensus process. We characterize the mean-square convergence rate of our consensus algorithm and derive the covariance matrix of the maximum likelihood estimate of the initial state. Moreover, we prove that our proposed algorithm is optimal in the sense that it does not disclose any more information than is necessary to achieve average consensus. A numerical example is provided to illustrate the effectiveness of the proposed design.
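The "adding and subtracting random noise" idea in this abstract can be illustrated with a toy simulation: each agent injects noise of the telescoping form φ^k w(k) − φ^(k−1) w(k−1) with φ ∈ (0, 1), so the cumulative injected noise per agent vanishes and the network still converges to the exact average. The ring topology, weight matrix, and decay rate below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.uniform(0, 10, n)           # private initial states
avg = x.mean()

# Doubly stochastic weight matrix for a ring graph (illustrative choice)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

phi = 0.9                           # decay rate of the injected noise
w_prev = np.zeros(n)                # phi^(k-1) * w(k-1); zero at k = 0
state = x.copy()
for k in range(2000):
    w_curr = phi ** k * rng.normal(size=n)
    noise = w_curr - w_prev         # telescoping: total injected noise -> 0
    state = W @ state + noise       # standard consensus step plus noise
    w_prev = w_curr

# Every agent ends up at the exact average of the (masked) initial states
print(np.allclose(state, avg, atol=1e-3))
```

Because W is doubly stochastic, each noise term perturbs the running sum, but the telescoping design cancels those perturbations in the limit, which is why exact (not approximate) average consensus is recovered while early iterates reveal only noisy versions of the initial states.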
The advent of wireless sensor technology and ad-hoc networks has made DSC a major field of interest. Edited and written by the leading players in the field, this book presents the latest theory, algorithms, and applications, making it the definitive reference on DSC for systems designers and implementers, researchers, and graduate students. The book gives a clear understanding of the performance limits of distributed source coders for specific classes of sources and presents the design and application of practical algorithms for realistic scenarios. Material covered includes the application of standard channel codes, such as LDPC and Turbo codes, to DSC, and a discussion of the suitability of compressed sensing for distributed compression of sparse signals. Extensive applications are presented, including distributed video coding, microphone arrays, and securing biometric data.
* Clear explanation of the principles of distributed source coding (DSC), a technology with applications in sensor networks, ad-hoc networks, and distributed wireless video systems for surveillance
* Edited and written by the leading players in the field, providing a complete and authoritative reference
* Contains all the latest theory, practical algorithms for DSC design, and the most recently developed applications
Ubiquitous cell-free Massive MIMO communications. Interdonato, Giovanni; Björnson, Emil; Quoc Ngo, Hien; et al.
EURASIP Journal on Wireless Communications and Networking, August 2019, Volume 2019, Issue 1
Journal Article
Peer-reviewed
Open access
Since the first cellular networks were trialled in the 1970s, we have witnessed an incredible wireless revolution. From 1G to 4G, the massive traffic growth has been managed by a combination of wider bandwidths, refined radio interfaces, and network densification, namely increasing the number of antennas per site. Thanks to its cost-efficiency, the latter has contributed the most. Massive MIMO (multiple-input multiple-output) is a key 5G technology that uses massive antenna arrays to provide very high beamforming gain and spatial multiplexing of users, thereby increasing spectral and energy efficiency (see references herein). It constitutes a centralized solution for densifying a network, and its performance is limited by the inter-cell interference inherent in its cell-centric design. Conversely, ubiquitous cell-free Massive MIMO refers to a distributed Massive MIMO system that implements coherent user-centric transmission to overcome the inter-cell interference limitation of cellular networks and provide additional macro-diversity. These features, combined with the system scalability inherent in the Massive MIMO design, distinguish ubiquitous cell-free Massive MIMO from prior coordinated distributed wireless systems. In this article, we investigate the enormous potential of this promising technology while addressing practical deployment issues arising from the increased back/front-hauling overhead of signal co-processing.
Offers valuable insight into the complex world of distributed computing systems.

Distributed computing allows multiple autonomous computers to work together to solve complex computational problems. The increased processing power comes at the cost of increased electrical power usage. Greener distributed computing systems would allow users to exploit the power of these systems while avoiding adverse environmental effects and exorbitant energy costs.

One of the first books of its kind, this timely reference illustrates the need for, and the state of, increasingly energy-efficient distributed computing systems. Featuring the latest research findings on emerging topics by well-known scientists, it explains how constraints on energy consumption create a suite of complex engineering problems that need to be resolved in order to lead to "greener" distributed computing systems.

Energy-Efficient Distributed Computing Systems:
* Summarizes the latest research achievements in the field of energy-efficient computing
* Strikes a balance between theoretical and practical coverage of innovative problem-solving techniques for a range of distributed platforms
* Provides a wealth of paradigms, technologies, and applications that target the different facets of energy consumption in computing systems
* Allows researchers to explore different energy-consumption issues and their impact on the design of new computing systems
* Includes carefully arranged, timely information dealing with vital factors affecting performance in a variety of important high-performance systems
* Offers research that greatly feeds into other technologies and application domains

An ideal text for senior undergraduates and postgraduate students who study computer science and engineering, the book will also appeal to researchers, engineers, and IT professionals who work in the fields of energy-efficient computing.
A guide to the essential techniques for designing and building dependable distributed systems. Instead of covering a broad range of research works for each dependability strategy, it focuses on only a selected few, explaining each in depth, usually with a comprehensive set of examples.
In this paper, we investigate a cell-free massive MIMO system in which both access points (APs) and user equipments (UEs) are equipped with multiple antennas, over jointly correlated Rayleigh fading channels. We study four uplink implementations, ranging from fully centralized to fully distributed processing, and derive their achievable spectral efficiency (SE) expressions with minimum mean-squared error successive interference cancellation (MMSE-SIC) detectors and arbitrary combining schemes. Furthermore, the global and local MMSE combining schemes are derived based on the full and local channel state information (CSI) obtained under pilot contamination; these maximize the achievable SE for the fully centralized and fully distributed implementations, respectively. We also study a two-layer decoding implementation with an arbitrary combining scheme in the first layer and optimal large-scale fading decoding (LSFD) in the second layer, and compute novel closed-form SE expressions for this implementation with maximum ratio (MR) combining. In the numerical results, we compare the SE performance of different implementation levels, combining schemes, and channel models. Notably, increasing the number of antennas per UE may degrade the SE performance.
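As a rough illustration of the MR-versus-MMSE comparison discussed in this abstract, the sketch below computes per-user uplink SE for both combiners in a single-snapshot toy setup. It assumes perfect CSI, single-antenna UEs, i.i.d. Rayleigh fading, and no SIC or pilot contamination, so it is a simplification of the paper's model, not a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 16, 4          # total receive antennas (all APs stacked), single-antenna users
sigma2 = 1.0          # noise power
# i.i.d. Rayleigh fading channel matrix (illustrative; the paper uses correlated fading)
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)

def sinr(v, H, k):
    """Post-combining SINR for user k with combining vector v."""
    sig = np.abs(v.conj() @ H[:, k]) ** 2
    interf = sum(np.abs(v.conj() @ H[:, j]) ** 2 for j in range(K) if j != k)
    return sig / (interf + sigma2 * np.linalg.norm(v) ** 2)

se_mr, se_mmse = [], []
Gram = H @ H.conj().T + sigma2 * np.eye(M)
for k in range(K):
    v_mr = H[:, k]                            # maximum ratio (MR) combining
    v_mmse = np.linalg.solve(Gram, H[:, k])   # centralized MMSE combining
    se_mr.append(np.log2(1 + sinr(v_mr, H, k)))
    se_mmse.append(np.log2(1 + sinr(v_mmse, H, k)))

# MMSE combining maximizes per-user SINR, so its sum SE is never lower than MR's
print(sum(se_mmse) >= sum(se_mr))
```

Since the MMSE combiner is (up to a scale factor that does not affect SINR) the SINR-maximizing receive vector, the inequality printed above holds deterministically in this setting, mirroring the qualitative ordering the paper reports for centralized processing.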