Empowered by optical orthogonal frequency-division multiplexing (O-OFDM) technology, flexible online service provisioning can be realized with dynamic routing, modulation, and spectrum assignment (RMSA). In this paper, we propose several online service provisioning algorithms that incorporate dynamic RMSA with a hybrid single-/multi-path routing (HSMR) scheme. We investigate two types of HSMR schemes, namely HSMR using online path computation (HSMR-OPC) and HSMR using fixed path sets (HSMR-FPS). Moreover, for HSMR-FPS, we analyze several path-selection policies to optimize the design. We evaluate the proposed algorithms with numerical simulations using a Poisson traffic model and two mesh network topologies. The simulation results demonstrate that the proposed HSMR schemes can effectively reduce the bandwidth blocking probability (BBP) of dynamic RMSA compared with two benchmark algorithms that use single-path routing and split spectrum. Our simulation results suggest that HSMR-OPC achieves the lowest BBP among all HSMR schemes, because it optimizes routing paths for each request on the fly, considering both the bandwidth utilization and the lengths of links. Our simulation results also indicate that the HSMR-FPS scheme that uses the largest slots-over-square-of-hops-first path-selection policy obtains the lowest BBP among all HSMR-FPS schemes. We then investigate the proposed algorithms' impacts on other network performance metrics, including network throughput and network bandwidth fragmentation ratio. To the best of our knowledge, this is the first attempt to consider dynamic RMSA based on both online and offline path computation with various path-selection policies for multipath provisioning in O-OFDM networks.
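The largest slots-over-square-of-hops-first policy named above can be sketched in a few lines; the candidate-path data structure and node names below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of the "largest slots-over-square-of-hops first"
# path-selection policy: among candidate paths, prefer the one with the
# largest ratio of requested spectrum slots to the square of its hop count.

def select_path(candidates):
    """Pick the candidate maximizing requested_slots / hops**2.

    Each candidate is a (path, requested_slots) pair, where path is a
    list of nodes, so the hop count is len(path) - 1.
    """
    return max(candidates, key=lambda c: c[1] / (len(c[0]) - 1) ** 2)

# Example: 4 slots over a 2-hop path scores 4/4 = 1.0, beating
# 6 slots over a 4-hop path, which scores 6/16 = 0.375.
paths = [(["A", "B", "C"], 4), (["A", "D", "E", "F", "G"], 6)]
best = select_path(paths)
```

The square in the denominator penalizes long routes quadratically, which biases provisioning toward short paths that consume fewer total slot-link resources.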
While wireless sensor networks (WSNs) have been regarded as application-specific for over a decade, we argue that this assumption can lead to resource underutilization and counter-productivity. We also identify two other main problems with WSNs: rigidity toward policy changes and difficulty of management. In this paper, we take a radical, yet backward- and peer-compatible, approach to tackle these problems inherent in WSNs. We propose a software-defined WSN architecture and address key technical challenges for its core component, Sensor OpenFlow. This work represents the first effort to synergize software-defined networking and WSNs.
Capacity Limits of Optical Fiber Networks
Essiambre, René-Jean; Kramer, Gerhard; Winzer, Peter J.
Journal of Lightwave Technology, 02/2010, Volume 28, Issue 4
Journal Article, Peer-reviewed, Open Access
We describe a method to estimate the capacity limit of fiber-optic communication systems (or "fiber channels") based on information theory. This paper is divided into two parts. Part 1 reviews fundamental concepts of digital communications and information theory. We treat digitization and modulation, followed by information theory for channels both without and with memory. We provide explicit relationships between the commonly used signal-to-noise ratio and the optical signal-to-noise ratio. We further evaluate the performance of modulation constellations such as quadrature-amplitude modulation, combinations of amplitude-shift keying and phase-shift keying, exotic constellations, and concentric rings for an additive white Gaussian noise channel using coherent detection. Part 2 is devoted specifically to the "fiber channel." We review the physical phenomena present in transmission over optical fiber networks, including sources of noise, the need for optical filtering in optically-routed networks, and, most critically, the presence of fiber Kerr nonlinearity. We describe various transmission scenarios and impairment mitigation techniques, and define a fiber channel deemed the most relevant for communication over optically-routed networks. We proceed to evaluate a capacity limit estimate for this fiber channel using ring constellations. Several scenarios are considered, including uniform and optimized ring constellations, different fiber dispersion maps, and varying transmission distances. We further present evidence that points to the physical origin of the fiber capacity limitations and provide a comparison of recent record experiments with our capacity limit estimate.
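As a quick illustration of the information-theoretic baseline that Part 1 builds on, the Shannon capacity of the complex AWGN channel can be computed directly. This is a minimal sketch of the textbook bound only, not the paper's fiber-channel estimate:

```python
# Shannon capacity of the complex AWGN channel: C = log2(1 + SNR) bits
# per complex symbol. Finite constellations (QAM, ASK/PSK, rings) can
# only approach this limit from below.
import math

def awgn_capacity(snr_db):
    """Capacity in bit/symbol for a given SNR in dB."""
    snr = 10 ** (snr_db / 10)  # convert dB to linear SNR
    return math.log2(1 + snr)

# At 10 dB SNR the limit is log2(11) ~ 3.46 bit/symbol, so e.g. 16-QAM
# (4 bit/symbol) cannot be transmitted error-free at that SNR.
limit_10db = awgn_capacity(10)
```

The paper's contribution is to carry this style of analysis over to the nonlinear fiber channel, where Kerr nonlinearity prevents capacity from growing indefinitely with launch power.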
In underwater wireless optical communication (UWOC) links, multiple scattering may cause temporal spreading of the beam pulse, characterized by the impulse response, which results in inter-symbol interference (ISI) and degrades system error performance. The impulse response of UWOC links has been investigated both theoretically and experimentally but, to the best of our knowledge, has not been derived in a simple closed form. In this paper, we analyze the optical characteristics of seawater and present a closed-form expression based on double Gamma functions to model the channel impulse response. The double-Gamma-function model fits Monte Carlo simulation results well in turbid seawater such as coastal and harbor water. The bit-error rate (BER) and channel bandwidth are further evaluated based on this model for various link ranges. Numerical results suggest that temporal pulse spreading strongly degrades the BER performance of high-data-rate UWOC systems with on-off keying (OOK) modulation and limits the channel bandwidth in turbid underwater environments. Zero-forcing (ZF) equalization designed based on our channel model is adopted to overcome ISI and improve system performance. This impulse response model is thus convenient for the performance analysis and system design of UWOC systems.
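A double-Gamma-function impulse response of the kind described above can be sketched as the sum of two Gamma-shaped terms; the functional form below and all parameter values are illustrative assumptions, not fitted coefficients from the paper:

```python
# Sketch of a double-Gamma-function channel impulse response:
# h(t) = c1*dt*exp(-c2*dt) + c3*dt*exp(-c4*dt), with dt = t - t0,
# where t0 is the propagation delay of the earliest arriving light.
import math

def double_gamma_impulse(t, c1, c2, c3, c4, t0=0.0):
    """Evaluate the two-term Gamma model at time t (zero before t0)."""
    dt = t - t0
    if dt < 0:
        return 0.0
    return c1 * dt * math.exp(-c2 * dt) + c3 * dt * math.exp(-c4 * dt)

# With c3 = 0 the response reduces to a single Gamma pulse whose peak
# sits at dt = 1/c2, a quick sanity check on the shape.
peak_value = double_gamma_impulse(1.0, 2.0, 1.0, 0.0, 1.0)
```

In practice the four coefficients would be fitted to Monte Carlo photon-tracing results for a given water type and link range, as the abstract describes.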
This open access book offers comprehensive, self-contained knowledge on the Digital Twin (DT), a very promising technology for achieving digital intelligence in next-generation wireless communications and computing networks. DT is a key technology for connecting physical systems and digital spaces in the Metaverse. The objectives of this book are to provide the basic concepts of DT, to explore the promising applications of DT integrated with emerging technologies, and to give insights into possible future directions of DT. For easy understanding, the book also presents several use cases of DT models and applications in different scenarios. The book starts with the basic concepts, models, and network architectures of DT. Then, we present the new opportunities that arise when DT meets edge computing, Blockchain, Artificial Intelligence, and distributed machine learning (e.g., federated learning and multi-agent deep reinforcement learning). We also present a wide range of DT applications as an enabling technology for 6G networks, aerial-ground networks, and unmanned aerial vehicles (UAVs). The book allows easy cross-referencing owing to its broad coverage of both the principles and applications of DT. It is written for people interested in communications and computer networks at all levels. The primary audience includes senior undergraduates, postgraduates, educators, scientists, researchers, developers, engineers, innovators, and research strategists.
Building Broadband
Kim, Yongsoo; Kelly, Tim; Raja, Siddhartha
Building Broadband: Strategies and Policies for the Developing World, 2010
eBook, Book, Open Access
This book suggests an ecosystem approach to broadband policy that could help in the design of strategies, policies, and programs that support network expansion, have the potential to transform economies, improve the quality and range of services, enable application development, and broaden adoption among users. To identify emerging best practices for nurturing this ecosystem, this volume analyzes the Republic of Korea and other leading broadband markets. It identifies three building blocks that support the growth of the broadband ecosystem: defining visionary but flexible strategies, using competition to promote market growth, and facilitating demand. An important but often neglected building block is demand facilitation, which includes raising awareness of the benefits of broadband and improving affordability and accessibility for the largest number of users. Successful countries have often focused on creating a suite of useful applications that increase the relevance of broadband to the widest base of users. Programs to mainstream information and communication technology (ICT) use in education, health, or government have been common.
Based on the concept of infrastructure as a service, optical network virtualization can facilitate the sharing of physical infrastructure among different users and applications. In this paper, we design algorithms for both transparent and opaque virtual optical network embedding (VONE) over flexible-grid elastic optical networks. For transparent VONE, we first formulate an integer linear programming (ILP) model that leverages the all-or-nothing multi-commodity flow in graphs. Then, to account for the continuity and consecutiveness of substrate fiber links' (SFLs') optical spectra, we propose a layered-auxiliary-graph (LAG) approach that decomposes the physical infrastructure into several layered graphs according to the bandwidth requirement of a virtual optical network request. With LAG, we design two heuristic algorithms: one applies LAG to achieve integrated routing and spectrum assignment in link mapping (i.e., local resource capacity-layered shortest-path routing, LRC-LaSP), while the other realizes coordinated node and link mapping using LAG (i.e., layered local resource capacity (LaLRC)-LaSP). Simulation results from three different substrate topologies demonstrate that LaLRC-LaSP achieves better blocking performance than LRC-LaSP and an existing benchmark algorithm. For opaque VONE, an ILP model is also formulated. We then design an LRC metric that considers the spectrum consecutiveness of SFLs. With this metric, a novel heuristic for opaque VONE, consecutiveness-aware LRC K-shortest-path first-fit (CaLRC-KSP-FF), is proposed. Simulation results show that, compared with existing algorithms, CaLRC-KSP-FF can reduce the request blocking probability significantly.
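The spectrum continuity and consecutiveness constraints central to this abstract can be illustrated with a first-fit (FF) assignment step of the kind used in KSP-FF-style heuristics; the data structures below are illustrative assumptions, not the paper's algorithm:

```python
# Hedged sketch of first-fit spectrum assignment on a fixed path:
# find the lowest-indexed block of `demand` consecutive slots that is
# free on EVERY fiber link of the path (continuity across links,
# consecutiveness within each link).

def first_fit(link_slot_usage, path_links, demand):
    """Return the start slot of the first feasible block, or None.

    link_slot_usage: dict mapping link id -> list of bools
                     (True = slot occupied).
    """
    num_slots = len(next(iter(link_slot_usage.values())))
    for start in range(num_slots - demand + 1):
        if all(not link_slot_usage[link][s]
               for link in path_links
               for s in range(start, start + demand)):
            return start
    return None

# Example: slots 0-1 are busy on link "B", so a 2-slot demand over the
# path A-B must start at slot 2, even though link "A" is entirely free.
usage = {"A": [False] * 6, "B": [True, True, False, False, False, False]}
start = first_fit(usage, ["A", "B"], 2)
```

This per-path check is the inner step; the LAG approach in the abstract instead pre-builds one layered graph per candidate slot block, so routing and spectrum assignment are solved jointly.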
A multiplicity of autonomous terminals simultaneously transmits data streams to a compact array of antennas. The array uses imperfect channel-state information derived from transmitted pilots to extract the individual data streams. The power radiated by the terminals can be made inversely proportional to the square root of the number of base-station antennas with no reduction in performance. In contrast, if perfect channel-state information were available, the power could be made inversely proportional to the number of antennas. Lower capacity bounds for maximum-ratio combining (MRC), zero-forcing (ZF), and minimum mean-square error (MMSE) detection are derived. An MRC receiver normally performs worse than ZF and MMSE. However, as power levels are reduced, the cross-talk introduced by the inferior maximum-ratio receiver eventually falls below the noise level, and this simple receiver becomes a viable option. The tradeoff between energy efficiency (as measured in bits/J) and spectral efficiency (as measured in bits/channel use/terminal) is quantified for a channel model that includes small-scale but not large-scale fading. It is shown that the use of moderately large antenna arrays can improve the spectral and energy efficiency by orders of magnitude compared with a single-antenna system.
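The two power-scaling laws stated above can be made concrete with a toy calculation under strong simplifying assumptions (single user, no interference, and a stylized pilot-estimate quality factor p/(1+p)); the formulas below are an illustration of the scaling behavior, not the paper's capacity bounds:

```python
# Toy sketch of massive-MIMO power scaling with M antennas and
# per-terminal power p (all quantities in arbitrary linear units).

def effective_snr_perfect_csi(m, p):
    """MRC array gain with perfect CSI: post-combining SNR ~ M * p."""
    return m * p

def effective_snr_imperfect_csi(m, p):
    """With pilot-based estimates whose quality shrinks like p/(1+p)
    at low power, the effective SNR behaves like M * p**2 for small p."""
    return m * p * (p / (1 + p))

# Perfect CSI: cutting power as 1/M leaves the SNR unchanged.
snr_m100 = effective_snr_perfect_csi(100, 1 / 100)
snr_m400 = effective_snr_perfect_csi(400, 1 / 400)

# Imperfect CSI: power must only be cut as 1/sqrt(M), so that
# M * (1/sqrt(M))**2 stays constant as M grows.
snr_est = effective_snr_imperfect_csi(10_000, 1 / 100)
```

The square on p in the imperfect-CSI case is the key: halving the power also halves the estimate quality, which is why only a 1/sqrt(M) reduction is sustainable.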
Landslides, among the most critical natural hazards, are caused by specific compositional slope movements. In the past decades, owing to the expansion of urbanized areas and climate change, landslide prevalence (also termed mass/slope movement or mass wasting) has increased markedly, causing extensive damage around the world. The principal cause is a reduction in the internal resistance of soil and rocks; landslides are classified as slides, topples, falls, and flows, and slopes can be differentiated based on the earth material and the nature of its movement. The downward flow of landslides occurs due to excessive rainfall, snowmelt, earthquakes, volcanic eruptions, and so on. This review article revisits the conventional approaches for identifying landslides and predicting future risk associated with slope failures, and then emphasizes the advantages of modern geospatial techniques, such as aerial photogrammetry, satellite remote sensing images (i.e., panchromatic, multispectral, and radar images), terrestrial laser scanning, and high-resolution digital elevation models (HR-DEMs), in updating landslide inventory maps. Machine learning techniques such as support vector machines, artificial neural networks, and deep learning have been used extensively with geographical data, producing effective results for the assessment of natural hazards/resources and environmental research. Recent studies show that deep learning is a reliable tool for addressing remote sensing challenges, such as imaging-system trade-offs that yield poor-quality data, and for expediting subsequent tasks such as image recognition, object detection, and classification. Conventional methods, such as pixel- and object-based machine learning, have been broadly explored. Advanced deep learning techniques such as convolutional neural networks (CNNs) have been highly successful at extracting information from images and have surpassed traditional approaches.
Over the past few years, only minor attempts have been made at landslide susceptibility mapping using CNNs. In addition, small training sample sizes remain a major drawback when using deep learning techniques. Assessment of model performance with training/testing proportions other than the commonly used 70/30 ratio also needs further exploration. This review briefly highlights remote sensing methods for landslide detection using machine learning and deep learning.
This article presents various landslide detection techniques. Remote sensing methods for landslide detection based on machine learning and deep learning classification are discussed.
This paper studies the newly emerging wireless powered communication network, in which one hybrid access point (H-AP) with a constant power supply coordinates wireless energy/information transmissions to/from a set of distributed users that have no other energy sources. A "harvest-then-transmit" protocol is proposed in which all users first harvest the wireless energy broadcast by the H-AP in the downlink (DL) and then send their independent information to the H-AP in the uplink (UL) by time-division multiple access (TDMA). First, we study the sum-throughput maximization of all users by jointly optimizing the time allocation for DL wireless power transfer versus the users' UL information transmissions, given a total time constraint, based on the users' DL and UL channels as well as their average harvested energy values. By applying convex optimization techniques, we obtain closed-form expressions for the optimal time allocations that maximize the sum-throughput. Our solution reveals an interesting "doubly near-far" phenomenon due to both DL and UL distance-dependent signal attenuation, whereby a user far from the H-AP, which harvests less wireless energy than a nearer user in the DL, must transmit with more power in the UL for reliable information transmission. As a result, the maximum sum-throughput is shown to be achieved by allocating substantially more time to the near users than to the far users, resulting in unfair rate allocation among users. To overcome this problem, we further propose a new performance metric, termed common-throughput, with the additional constraint that all users be allocated an equal rate regardless of their distances to the H-AP. We present an efficient algorithm to solve the common-throughput maximization problem. Simulation results demonstrate the effectiveness of the common-throughput approach for solving the doubly near-far problem in wireless powered communication networks.
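The "doubly near-far" effect described above can be illustrated with a simple distance-based path-loss model; the path-loss exponent and all numbers below are illustrative assumptions, not values from the paper:

```python
# Toy sketch of the doubly near-far phenomenon: path loss ~ 1/d**alpha
# is suffered TWICE, once when harvesting DL energy and once when the
# harvested energy is spent on the UL transmission back to the H-AP.

def harvested_energy(p_dl, d, alpha=2.0):
    """DL energy harvested by a user at distance d from the H-AP."""
    return p_dl / d ** alpha

def ul_received_power(e_harvested, d, alpha=2.0):
    """Power received back at the H-AP if the user transmits with all
    of its harvested energy; the same path loss applies again."""
    return e_harvested / d ** alpha

# A user twice as far away returns 2**(2*alpha) = 16x less power to the
# H-AP, not just 4x, hence the "doubly" near-far effect.
near = ul_received_power(harvested_energy(1.0, 1.0), 1.0)
far = ul_received_power(harvested_energy(1.0, 2.0), 2.0)
```

This compounded attenuation is what forces the sum-throughput-optimal time allocation to favor near users, motivating the common-throughput metric.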