Recent developments in the aerospace industry have led to a dramatic reduction in the manufacturing and launch costs of low Earth orbit satellites. This trend enables a paradigm shift toward satellite-terrestrial integrated networks with global coverage. In particular, the integration of 5G communication systems and satellites has the potential to restructure next-generation mobile networks. By leveraging network function virtualization and network slicing, satellite 5G core networks will facilitate the coordination and management of network functions in satellite-terrestrial integrated networks. We are the first to deploy a 5G core network on a real-world satellite to investigate its feasibility. We conducted experiments to validate the satellite 5G core network functions, covering the registration and session-setup procedures. The results show that the satellite 5G core network can function normally and generate correct signaling.
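To make the validated control-plane flow concrete, here is a minimal sketch of a registration exchange followed by a session-setup exchange. The `AMF` class, the message dictionaries, and the reduced two-message flow are hypothetical illustrations of the kind of signaling being validated, not the actual on-orbit deployment, which would involve full 3GPP NAS authentication and security-mode procedures.

```python
# Minimal sketch of a simplified 5G registration and session-setup exchange.
# Message names follow 3GPP NAS terminology, but the classes and the flow are
# hypothetical and greatly reduced from a real core-network implementation.

class AMF:
    """Toy Access and Mobility Management Function: tracks registered UEs."""
    def __init__(self):
        self.registered = {}

    def handle(self, msg):
        if msg["type"] == "RegistrationRequest":
            # A real AMF would run authentication and security-mode control here.
            self.registered[msg["ue_id"]] = "REGISTERED"
            return {"type": "RegistrationAccept", "ue_id": msg["ue_id"]}
        if msg["type"] == "PDUSessionEstablishmentRequest":
            if self.registered.get(msg["ue_id"]) != "REGISTERED":
                return {"type": "Reject", "cause": "not registered"}
            return {"type": "PDUSessionEstablishmentAccept", "ue_id": msg["ue_id"]}
        return {"type": "Reject", "cause": "unknown message"}

amf = AMF()
print(amf.handle({"type": "RegistrationRequest", "ue_id": "ue-1"}))
print(amf.handle({"type": "PDUSessionEstablishmentRequest", "ue_id": "ue-1"}))
```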
Remote sensing is one of the main applications of satellite communication. With limited communication time between low Earth orbit (LEO) sensing satellites and the ground, geostationary orbit (GEO) satellite relaying has become an effective solution. In practical applications, one GEO communication satellite usually needs to provide data forwarding services for multiple satellites or even multiple satellite constellations. Therefore, communication resource management of GEO satellites has become crucial. Existing methods focus only on resource allocation without taking data forwarding task scheduling into consideration, which limits their applicability in real-time applications. In this paper, we propose a real-time resource management approach for GEO relaying that aims to maximize network throughput and reduce transmission delays. In this approach, resource allocation is modeled as a Stackelberg game, while task scheduling is modeled as a real-time queuing problem. Furthermore, we propose two real-time algorithms: an adaptive gradient descent method that finds the optimal price for data forwarding services with low computational complexity, and a queue-jumping algorithm that schedules data forwarding to minimize transmission delays. In comparison with existing methods, simulations verify the effectiveness of the proposed method with respect to convergence speed, network throughput, and transmission delays.
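As an illustration of the queue-jumping idea, the sketch below simulates a non-preemptive relay link on which the most deadline-urgent queued task is always served next, so an urgent arrival jumps ahead of earlier, less urgent ones. The deadline-as-priority policy, the task tuples, and the single-link model are assumptions made for illustration, not the paper's exact algorithm.

```python
import heapq

def schedule(tasks, link_rate):
    """tasks: list of (arrival_s, size_bits, deadline_s); returns finish times."""
    pending = sorted(tasks)                 # by arrival time
    queue, finish, t, i = [], {}, 0.0, 0
    while i < len(pending) or queue:
        # admit every task that has arrived by time t (or jump ahead if idle)
        while i < len(pending) and (pending[i][0] <= t or not queue):
            arrival, size, deadline = pending[i]
            t = max(t, arrival)
            heapq.heappush(queue, (deadline, arrival, size))
            i += 1
        # serve the most urgent queued task: late urgent arrivals jump the queue
        deadline, arrival, size = heapq.heappop(queue)
        t += size / link_rate               # transmission time on the relay link
        finish[(arrival, size, deadline)] = t
    return finish

demo = [(0.0, 8e6, 9.0), (0.0, 8e6, 2.0), (0.0, 8e6, 5.0)]
for task, t_done in sorted(schedule(demo, 10e6).items(), key=lambda kv: kv[1]):
    print("deadline", task[2], "s -> finished at", round(t_done, 2), "s")
```

Because the priority key is the deadline, the three simultaneous arrivals are served in urgency order rather than submission order, which is the queue-jumping behavior the abstract describes.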
As more and more services are deployed in cloud datacenters, network traffic is growing exponentially. Virtual machines (VMs) of a virtual cluster (VC) must be allocated on physical machines (PMs) in a datacenter with a certain topology, and each VM needs resources to run its services. Intuitively, allocating the VMs as compactly as possible reduces traffic and avoids bandwidth-related bottlenecks. However, a loose allocation scheme reduces the expected loss of VMs caused by failures of PMs and switches, thereby increasing the availability of the VC. To enhance availability while limiting network bandwidth usage, it is important to determine how the VMs of a VC are allocated. In this paper, we first introduce four typical datacenter architectures with their network topologies and corresponding cost matrices, and generalize from them. We then propose a joint optimization function that measures the risk of the VC and the core bandwidth usage under a global availability constraint. Subsequently, an evolutionary algorithm is proposed to minimize the value of the constrained optimization function. Finally, evaluation results show the effectiveness of the proposed approach and its performance improvement over existing approaches.
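A toy version of such an evolutionary search is sketched below. The fitness function (a weighted sum of inter-PM traffic cost and a co-location risk term), the two-level cost matrix, and the uniform traffic matrix are simplified assumptions standing in for the paper's joint optimization function and availability constraint.

```python
import random

# Toy genetic algorithm for VM-to-PM allocation. A chromosome is a list
# mapping each VM index to a PM index.
random.seed(0)
N_VMS, N_PMS, POP, GENS = 8, 4, 30, 100
# COST[i][j]: network cost between PM i and PM j (1 within a rack pair, 2 across)
COST = [[0 if i == j else 1 + (i // 2 != j // 2) for j in range(N_PMS)]
        for i in range(N_PMS)]
TRAFFIC = [[1] * N_VMS for _ in range(N_VMS)]   # uniform VM-to-VM traffic

def fitness(alloc, w_risk=2.0):
    bw = sum(TRAFFIC[a][b] * COST[alloc[a]][alloc[b]]
             for a in range(N_VMS) for b in range(a + 1, N_VMS))
    # crude risk proxy: squared count of VMs lost if any one PM fails
    risk = sum(alloc.count(p) ** 2 for p in range(N_PMS))
    return bw + w_risk * risk

def evolve():
    pop = [[random.randrange(N_PMS) for _ in range(N_VMS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        elite = pop[:POP // 2]                    # keep the best half
        children = []
        for _ in range(POP - len(elite)):
            p, q = random.sample(elite, 2)
            cut = random.randrange(1, N_VMS)
            child = p[:cut] + q[cut:]             # one-point crossover
            if random.random() < 0.2:             # mutation: move one VM
                child[random.randrange(N_VMS)] = random.randrange(N_PMS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("allocation:", best, "fitness:", fitness(best))
```

The weight `w_risk` plays the role of the trade-off between compact (bandwidth-friendly) and spread-out (availability-friendly) placements that the abstract describes.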
A critical research issue is to lower the energy consumption of a virtualized data center through virtual machine placement optimization while satisfying the resource requirements of the cloud services. In this paper, we review existing schemes and focus on the energy-aware virtual machine placement optimization problem of a heterogeneous virtualized data center. In exploring a better approach to minimizing energy consumption, we observe that particle swarm optimization (PSO) has considerable potential. However, the PSO must be improved to solve this optimization problem. The improvements include redefining the parameters and operators of the PSO, adopting an energy-aware local-fitness-first strategy, and designing a novel coding scheme. Using the improved PSO, an optimal virtual machine placement scheme with the lowest energy consumption can be found. Experimental results indicate that our approach significantly outperforms other approaches and can reduce energy consumption by 13%-23% in the context of this paper.
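The sketch below illustrates one common way such a discrete PSO can be coded: a particle's position is a VM-to-PM index vector, and the continuous velocity update is replaced by probabilistic pulls toward the personal and global bests. The pull probabilities, the linear power model, and this coding are illustrative assumptions, not the authors' exact redefined operators.

```python
import random

# Toy discrete PSO for energy-aware VM placement.
random.seed(1)
N_VMS, N_PMS, SWARM, ITERS = 10, 5, 20, 200
CPU_DEMAND = [random.uniform(0.1, 0.4) for _ in range(N_VMS)]
P_IDLE, P_PEAK, CPU_CAP = 100.0, 200.0, 1.0   # assumed linear power model

def energy(pos):
    load = [0.0] * N_PMS
    for vm, pm in enumerate(pos):
        load[pm] += CPU_DEMAND[vm]
    if any(u > CPU_CAP for u in load):
        return float("inf")                    # infeasible placement
    # active PMs draw idle power plus a load-proportional term
    return sum(P_IDLE + (P_PEAK - P_IDLE) * u for u in load if u > 0)

def pso():
    swarm = [[random.randrange(N_PMS) for _ in range(N_VMS)] for _ in range(SWARM)]
    pbest = [list(p) for p in swarm]
    gbest = list(min(swarm, key=energy))
    for _ in range(ITERS):
        for i, pos in enumerate(swarm):
            for d in range(N_VMS):
                r = random.random()            # probabilistic "velocity" update
                if r < 0.4:
                    pos[d] = pbest[i][d]       # pull toward personal best
                elif r < 0.8:
                    pos[d] = gbest[d]          # pull toward global best
                elif r < 0.9:
                    pos[d] = random.randrange(N_PMS)   # random exploration
            if energy(pos) < energy(pbest[i]):
                pbest[i] = list(pos)
        gbest = list(min(pbest, key=energy))
    return gbest

best = pso()
print("placement:", best, "energy:", round(energy(best), 1), "W")
```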
An increasing number of companies are beginning to deploy services and applications in the cloud computing environment, so enhancing the reliability of cloud services has become a critical and challenging research problem. In the cloud computing environment, all resources are commercialized; therefore, a reliability enhancement approach should not consume too many resources. However, existing approaches fall short of the optimal effect because they neglect checkpoint image sharing and because checkpoint images become inaccessible when nodes crash. To address this problem, we propose a cloud service reliability enhancement approach that minimizes network and storage resource usage in a cloud data center. In our approach, the identical parts of all virtual machines that provide the same service are checkpointed once as a shared service checkpoint image, which reduces storage resource consumption; the remaining checkpoint images save only the modified pages. To persistently store the checkpoint images, the checkpoint image storage problem is modeled as an optimization problem, and we present an efficient heuristic algorithm to solve it. The algorithm exploits the characteristics of the data center network architecture and a node failure predictor to minimize network resource usage. To verify the effectiveness of the proposed approach, we extend the well-known cloud simulator CloudSim and conduct experiments on it. Experimental results based on the extended CloudSim show that the proposed approach not only guarantees cloud service reliability, but also consumes fewer network and storage resources than other approaches.
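The checkpoint-sharing idea can be illustrated in a few lines: pages identical across all VMs of a service form a shared base image, and each VM's checkpoint keeps only its modified pages. The in-memory data structures below are illustrative, not the paper's on-disk format or its storage-placement algorithm.

```python
# Sketch of shared-base checkpointing: one base image per service plus
# per-VM deltas of modified pages.

def make_checkpoints(vm_memories):
    """vm_memories: {vm_id: {page_no: bytes}} for VMs running one service."""
    all_pages = [set(mem.items()) for mem in vm_memories.values()]
    base = dict(set.intersection(*all_pages))   # pages identical in every VM
    deltas = {vm: {p: v for p, v in mem.items() if base.get(p) != v}
              for vm, mem in vm_memories.items()}
    return base, deltas

def restore(base, delta):
    """Rebuild a VM's memory from the shared base plus its private delta."""
    mem = dict(base)
    mem.update(delta)
    return mem

vms = {"vm1": {0: b"code", 1: b"heapA"},
       "vm2": {0: b"code", 1: b"heapB"}}
base, deltas = make_checkpoints(vms)
print("shared pages:", sorted(base), "| delta sizes:",
      {vm: len(d) for vm, d in deltas.items()})
assert restore(base, deltas["vm1"]) == vms["vm1"]
```

Only the common page (`code`) is stored once; each VM's delta holds just its private heap page, which is the storage saving the abstract claims.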
The number of mobile applications (APPs) has increased dramatically with the development of the mobile Internet, making it challenging for users to identify the APPs they are really interested in. Existing mobile APP recommendation methods focus on learning users' preferences and recommending high-visibility APPs. However, some low-visibility APPs may satisfy users and even surprise them. If those low-visibility APPs get the opportunity to be shown to users, they will not only improve user satisfaction, but also provide a fair competitive market for APP providers and improve the vitality of the APP market. To this end, we present a fairness-aware APP recommendation method named FARM, which emphasizes fairness during the recommendation process. In this method, APP candidates are divided into high-visibility and low-visibility APPs, and a recommendation algorithm is applied to each group. For low-visibility APPs, we set a fairness factor for each APP and adjust it dynamically using the user's latest feedback; based on the fairness factor, recommendations are made by roulette-wheel selection. For high-visibility APPs, we employ the fuzzy analytic hierarchy process. Evaluation results show that FARM outperforms the baselines in terms of recommendation fairness.
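A minimal sketch of the roulette-wheel step for low-visibility APPs follows. The proportional selection is standard; the feedback-driven update rule (raise the factor on positive feedback, decay it otherwise, with a floor) is a plausible reading of the abstract rather than FARM's exact adjustment.

```python
import random

random.seed(2)

def roulette_pick(fairness):
    """fairness: {app: positive weight}; pick one app with probability
    proportional to its fairness factor."""
    total = sum(fairness.values())
    r, acc = random.uniform(0, total), 0.0
    for app, w in fairness.items():
        acc += w
        if r <= acc:
            return app
    return app                                  # guard against float round-off

def update(fairness, app, liked, step=0.2, floor=0.05):
    """Assumed rule: raise the factor on positive feedback, decay otherwise."""
    fairness[app] = max(floor, fairness[app] + (step if liked else -step))

fairness = {"app_a": 1.0, "app_b": 1.0, "app_c": 1.0}
for _ in range(5):
    shown = roulette_pick(fairness)
    update(fairness, shown, liked=(shown == "app_b"))   # toy feedback signal
print({a: round(w, 2) for a, w in fairness.items()})
```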
Geosynchronous Earth Orbit (GEO) satellites, which can relay image data for Low Earth Orbit (LEO) satellites, play an important role in remote sensing. With the development of satellite technologies, the significantly improved computation capabilities of GEO satellites have enabled space service computing, through which GEO satellites can provide data processing services before forwarding to reduce the quantity of transmitted data. In the presence of multiple LEO satellites, making effective use of the limited communication and computation resources of GEO satellites has become crucial. At present, research on satellite resource management typically focuses on either communication or computation resources, and existing resource management algorithms usually converge slowly, which limits their applicability in real-time remote sensing scenarios. Therefore, we propose an aggregated resource management method for remote sensing applications. We first propose models for the transmission and processing tasks of remote sensing images. We then formulate aggregated resource management for satellite edge computing as a hybrid Stackelberg game and simplify the problem to accelerate convergence. Finally, we propose a distributed resource management algorithm to determine the optimal strategies. Simulation results show that the proposed method quickly obtains the optimal resource allocation strategy and outperforms typical dynamic iterative algorithms in terms of service quantity and throughput.
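The flavor of such a pricing game can be sketched with one aggregated resource: the GEO leader adjusts a price until follower demand matches capacity, and each LEO follower best-responds to the posted price. The logarithmic follower utility, the capacity value, and the price-update rule below are illustrative assumptions, not the paper's hybrid game formulation.

```python
# Toy Stackelberg pricing iteration: a GEO leader sells one aggregated
# resource (bandwidth + compute) to LEO followers.

CAPACITY = 10.0
A = [4.0, 6.0, 8.0]          # assumed follower valuation parameters

def follower_demand(a, price):
    # best response of U(x) = a*log(1+x) - price*x  =>  x = a/price - 1
    return max(0.0, a / price - 1.0)

def find_price(eta=0.05, iters=500):
    price = 1.0
    for _ in range(iters):
        demand = sum(follower_demand(a, price) for a in A)
        # raise the price when demand exceeds capacity, lower it otherwise
        price = max(1e-3, price + eta * (demand - CAPACITY))
    return price

p = find_price()
demands = [round(follower_demand(a, p), 2) for a in A]
print("price:", round(p, 3), "demands:", demands, "total:", round(sum(demands), 2))
```

At the fixed point, total follower demand equals the GEO capacity, which is the market-clearing outcome a leader-follower pricing game converges to.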
Content delivery networks (CDNs) have gained increasing popularity in recent years for facilitating content delivery. Most existing CDN-based works first upload the content generated by mobile users to the cloud data center; the cloud data center then delivers the content to a proxy server, from which mobile users request the content they need. However, uploading all collected content to the cloud data center increases the pressure on the core network and wastes considerable bandwidth, because most of the content does not have to be uploaded. To make up for these shortcomings, this article proposes an edge content delivery and update (ECDU) framework based on the mobile edge computing architecture. In the ECDU framework, we deploy a number of content servers to store raw content collected from mobile users, and cache pools at the edge of the network to store frequently requested content. Thus, it is not necessary to upload all content collected by mobile users to the cloud data center, which alleviates the pressure on the core network. Based on content popularity and cache pool ranking, we also propose edge content delivery (ECD) and edge content update (ECU) schemes: the ECD scheme delivers content from the cloud data center to the cache pools, and the ECU scheme migrates content to appropriate cache pools according to its request frequency and the cache pool ranking. Finally, a representative case study is provided and several open research issues are discussed.
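The cache-pool behavior can be sketched as a fixed-size pool that admits and evicts content by request frequency, falling back to the origin (content server or cloud) on a miss. The frequency-based admission rule below is an assumption standing in for the ECD/ECU popularity and ranking rules.

```python
from collections import Counter

class CachePool:
    """Toy edge cache pool: keeps the most frequently requested items."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}                 # content_id -> payload
        self.hits = Counter()           # request frequency per content id

    def request(self, cid, fetch_from_origin):
        self.hits[cid] += 1
        if cid in self.store:
            return self.store[cid], "edge hit"
        payload = fetch_from_origin(cid)        # miss: go to content server/cloud
        if len(self.store) >= self.capacity:
            # evict the cached item with the lowest request frequency,
            # but only if the new item is now more popular than it
            coldest = min(self.store, key=lambda c: self.hits[c])
            if self.hits[coldest] < self.hits[cid]:
                del self.store[coldest]
                self.store[cid] = payload
        else:
            self.store[cid] = payload
        return payload, "origin fetch"

def fetch(cid):
    return f"<content {cid}>"

pool = CachePool(capacity=2)
for cid in ["a", "b", "a", "c", "a", "c", "c", "b"]:
    print(cid, "->", pool.request(cid, fetch)[1])
```

Popular items migrate into the pool and cold items are pushed out, so repeated requests are served at the edge instead of traversing the core network.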
Dynamic adaptive video streaming over HTTP (DASH) is widely studied and has been adopted in modern video players to ensure user quality of experience (QoE). In DASH, adaptive bitrate control is a key component whose ultimate goal is to maximize video bitrate while minimizing rebuffering, and throughput prediction plays an important role in dynamically selecting the proper video bitrate. In this paper, we study the influence of throughput prediction on adaptive video streaming. Because real-world networks are dynamic, different methods would ideally be tested in large-scale deployments and analyzed statistically, which is difficult in academic research. Therefore, we establish a reproducible trace-based emulation environment that enables us to compare different methods quantitatively under identical conditions with a limited number of experiments. The throughput prediction methods are implemented in DASH to evaluate their effect on QoE. The results indicate that prediction using long short-term memory (LSTM) performs better than the other methods; however, throughput prediction alone is not enough to ensure high QoE. To further improve QoE, we propose the decision map method (DMM), which also incorporates buffer occupancy into the bitrate selection. With the decision map, the bitrate choice is smarter than when only prediction information is used: total QoE is further improved by 32.1% on the ferry trace, which shows the effectiveness of DMM in strengthening throughput-prediction-based adaptive bitrate control.
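A hand-written stand-in for such a decision map is sketched below: the discounted throughput prediction picks a candidate rung on the bitrate ladder, and buffer occupancy shifts the choice down (thin buffer) or up (fat buffer). The ladder, thresholds, and safety factor are hypothetical, and this fixed rule only illustrates combining prediction with buffer occupancy; it is not the paper's DMM.

```python
# Sketch of a decision-map style bitrate selector.

BITRATES = [300, 750, 1200, 2850, 4300]   # assumed bitrate ladder, kbps

def select_bitrate(predicted_kbps, buffer_s, safety=0.9,
                   low_buf=5.0, high_buf=20.0):
    # highest bitrate sustainable under the discounted throughput prediction
    candidates = [b for b in BITRATES if b <= safety * predicted_kbps]
    idx = BITRATES.index(candidates[-1]) if candidates else 0
    if buffer_s < low_buf:
        idx = max(0, idx - 1)                       # thin buffer: be conservative
    elif buffer_s > high_buf:
        idx = min(len(BITRATES) - 1, idx + 1)       # fat buffer: probe upward
    return BITRATES[idx]

for pred, buf in [(1000, 3.0), (1000, 12.0), (1000, 25.0)]:
    print(f"pred={pred} kbps, buffer={buf}s -> {select_bitrate(pred, buf)} kbps")
```

The same 1000 kbps prediction yields three different bitrates depending on buffer occupancy, which is the extra dimension the DMM adds over prediction-only control.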