Our work in this paper stems from the insight that recent research on open vehicle routing (OVR) problems, an active area in operations research, rests on assumptions and constraints similar to those of sensor networks. It may therefore be feasible to adapt these techniques so that they provide valuable solutions to certain challenging problems in the wireless sensor network (WSN) domain. To demonstrate that this approach is feasible, we develop a data collection protocol called EDAL, which stands for Energy-efficient Delay-aware Lifetime-balancing data collection. The algorithm design of EDAL leverages a result from OVR to prove that its problem formulation is inherently NP-hard. Therefore, we propose both a centralized heuristic to reduce the computational overhead and a distributed heuristic to make the algorithm scalable for large-scale network operations. We also develop EDAL to be closely integrated with compressive sensing, an emerging technique that promises a considerable reduction in the total traffic cost of collecting sensor readings under loose delay bounds. Finally, we systematically evaluate EDAL against related protocols in both simulations and a hardware testbed.
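The compressive-sensing idea mentioned above can be illustrated with a minimal, self-contained sketch (the signal sizes, measurement matrix, and recovery routine below are illustrative assumptions, not EDAL's actual encoding): a sparse vector of sensor readings is recovered from far fewer random linear measurements than raw samples, here via orthogonal matching pursuit (OMP).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 60, 5            # signal length, measurements, sparsity

# Sparse "sensor field": only k of n readings are nonzero.
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(0, 1, k)

# Random Gaussian measurement matrix: each row is one aggregated packet.
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = Phi @ x                     # m compressed measurements instead of n samples

# OMP: greedily pick the column most correlated with the current residual.
residual, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
    residual = y - Phi[:, idx] @ coef

x_hat = np.zeros(n)
x_hat[idx] = coef
print(np.linalg.norm(x_hat - x))   # reconstruction error is tiny
```

The traffic saving is the ratio m/n: here 60 measurements stand in for 100 readings, and sparser signals permit even fewer.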
Infrastructure-as-a-Service (IaaS) cloud computing offers customers (tenants) a scalable and economical way to provision virtual machines (VMs) on demand while charging them only for the leased computing resources by time. However, due to VM contention for shared computing resources in datacenters, this new computing paradigm inevitably brings noticeable performance overhead (i.e., unpredictable performance) of VMs to tenants, which has become one of the primary issues of the IaaS cloud. Consequently, increasing efforts have recently been devoted to guaranteeing VM performance for tenants. In this survey, we review the state-of-the-art research on managing the performance overhead of VMs and summarize it under diverse scenarios of the IaaS cloud, ranging from single-server virtualization and a single mega datacenter to multiple geo-distributed datacenters. Specifically, we unveil the causes of VM performance overhead by illustrating representative scenarios, discuss performance modeling methods with a particular focus on their accuracy and cost, and compare overhead mitigation techniques by identifying their effectiveness and implementation complexity. With the obtained insights into the pros and cons of each existing solution, we further bring forth future research challenges pertinent to the modeling methods and mitigation techniques for VM performance overhead in the IaaS cloud.
An accurately estimated state is of great importance for maintaining stable operation of power systems. To preserve the accuracy of the estimated state, power systems employ bad data detection (BDD) to filter out erroneous measurements caused by meter failures or outside attacks. However, false data injection (FDI) attacks, as recently revealed, can circumvent BDD and insert arbitrary bias into the estimated state. Work on constructing such attacks, and on protecting power systems against them, has continued in recent years. This survey comprehensively overviews three major aspects: constructing FDI attacks, their impacts on the electricity market, and defending against them. Specifically, we first explore the problem of constructing FDI attacks and, from the adversary's point of view, show their associated impacts on electricity market operations. Then, from the perspective of the system operator, we present countermeasures against FDI attacks. Based on this overview, we also outline future research directions and potential challenges in the context of FDI attacks, impacts, and defense.
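The core reason FDI attacks evade BDD can be shown in a few lines. Residual-based BDD thresholds the norm of the measurement residual; an attack vector of the form a = Hc (H being the measurement Jacobian, c an arbitrary state bias) leaves that residual exactly unchanged, so the detector is blind to it. The sizes and numbers below are illustrative, using a linear DC measurement model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bus, n_meas = 4, 10                     # illustrative system sizes

H = rng.normal(size=(n_meas, n_bus))      # measurement Jacobian (DC model)
x = rng.normal(size=n_bus)                # true state (e.g., bus angles)
z = H @ x + rng.normal(scale=0.01, size=n_meas)   # noisy measurements

def residual_norm(z):
    # Least-squares state estimate and the residual that BDD thresholds.
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.5, -0.2, 0.0, 0.3])       # arbitrary bias the attacker injects
a = H @ c                                 # structured false-data vector

r_clean = residual_norm(z)
r_attack = residual_norm(z + a)
print(abs(r_clean - r_attack))            # ~0: residual unchanged, BDD is blind
```

Algebraically, z + a = H(x + c) + e, so the estimator returns the biased state x̂ + c while the residual (I − HH⁺)(z + Hc) = (I − HH⁺)z is identical to the clean case.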
The wide proliferation of wireless communication systems and wireless devices has led to the arrival of the big data era in large-scale wireless networks. Big data in large-scale wireless networks has the key features of wide variety, high volume, real-time velocity, and huge value, leading to unique research challenges that differ from those of existing computing systems. In this article, we present a survey of state-of-the-art big data analytics (BDA) approaches for large-scale wireless networks. In particular, we categorize the life cycle of BDA into four consecutive stages: Data Acquisition, Data Preprocessing, Data Storage, and Data Analytics. We then present a detailed survey of the technical solutions to the challenges of BDA for large-scale wireless networks, organized by each stage of this life cycle. Moreover, we discuss open research issues and outline future directions in this promising area.
The Internet of Things (IoT) aims to connect billions of smart objects to the Internet, which can bring a promising future to smart cities. These objects are expected to generate large amounts of data and send it to the cloud for further processing, especially for knowledge discovery, so that appropriate actions can be taken. In reality, however, sensing all possible data items captured by a smart object and then sending the complete captured data to the cloud is of limited use and wastes resources (e.g., network bandwidth and storage). The Fog (Edge) computing paradigm has been proposed to counter this weakness by pushing knowledge discovery via data analytics to the edge. However, edge devices have limited computational capabilities. Owing to their inherent strengths and weaknesses, neither the Cloud nor the Fog computing paradigm alone addresses these challenges; both need to work together to build a sustainable IoT infrastructure for smart cities. In this article, we review existing approaches that have been proposed to tackle the challenges in the Fog computing domain. Specifically, we describe several inspiring use case scenarios of Fog computing, identify ten key characteristics and common features of Fog computing, and compare more than 30 existing research efforts in this domain. Based on our review, we further identify several major functionalities that ideal Fog computing platforms should support, along with a number of open challenges toward implementing them, to shed light on future research directions for realizing Fog computing for building sustainable smart cities.
This paper studies the problem of energy charging using a robust Stackelberg game approach in a power system composed of an aggregator and multiple electric vehicles (EVs) in the presence of demand uncertainty, where the aggregator is the leader and the EVs are the followers. We propose two robust approaches under demand uncertainty: a noncooperative optimization and a cooperative design. In the robust noncooperative approach, we formulate the energy charging problem as a competitive game among self-interested EVs, where each EV selfishly chooses its own demand strategy to maximize its own benefit. In the robust cooperative model, we present an optimal distributed energy scheduling algorithm that maximizes the sum benefit of the connected EVs. We theoretically prove the existence and uniqueness of the robust Stackelberg equilibrium for both approaches and develop distributed algorithms that converge to the globally optimal solution and are robust against demand uncertainty. Moreover, we extend the two robust models to a time-varying power system to handle slowly varying environments. Simulation results show the effectiveness of the robust solutions in uncertain environments.
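The leader-follower structure of a Stackelberg charging game can be sketched with a deliberately simplified toy model (the logarithmic EV utilities, valuations, supply cap, and price grid below are hypothetical illustrations, not the paper's formulation): each EV best-responds to the aggregator's posted price, and the aggregator, anticipating those responses, picks the revenue-maximizing price that keeps total demand within supply.

```python
import numpy as np

b = np.array([2.0, 3.0, 4.0])       # hypothetical EV valuations
supply = 5.0                        # aggregator's available energy

def follower_demand(p):
    # Each EV maximizes b*log(1+d) - p*d, giving d* = max(b/p - 1, 0).
    return np.maximum(b / p - 1.0, 0.0)

# Leader step: search prices, anticipating followers' best responses,
# and keep only prices whose induced demand fits the supply cap.
prices = np.linspace(0.5, 4.0, 351)
best_p, best_rev = None, -1.0
for p in prices:
    d = follower_demand(p)
    if d.sum() <= supply:
        rev = p * d.sum()
        if rev > best_rev:
            best_p, best_rev = p, rev
print(best_p, best_rev)
```

With these numbers, revenue 9 − 3p is decreasing in p on the feasible region, so the leader settles on the lowest price at which aggregate demand just fits the supply cap.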
Resource allocation in Internet of Vehicles (IoV) edge computing is currently a research hotspot. Existing studies focus on social welfare or revenue maximization; however, there is little research on lowest revenue guarantees, a problem of great concern to resource providers. This paper presents the innovative concept of a lowest revenue limit, which enables service providers to preset the revenue B and, through mechanism design, determine whether the preset revenue can be achieved under the current supply and demand of resources. This approach is very friendly to service providers and can prevent low revenue and wasted resources. Specifically, we improve the ascending-price auction mechanism so that it can be used for multi-resource allocation: the unit prices of different resources are calculated according to the intensity of competition among users, and the winning users and their payments are determined by eliminating users with low cost performance. Our mechanism is not sensitive to resource capacity, works well under the deployment constraints of edge computing, and satisfies economic properties such as individual rationality and truthfulness. Compared with existing algorithms, our approach is shown to provide the service provider with higher revenue at lower resource utilization.
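The ascending-price idea for multiple resources can be sketched as a simple clock auction (the two resource types, user demands, budgets, capacities, elimination rule, and revenue floor below are hypothetical simplifications of the mechanism described above): unit prices rise only on over-demanded resources, priced-out users drop away, and the provider can check the resulting revenue against its preset limit.

```python
# Simplified ascending-price (clock) auction for two resource types.
users = {                       # user -> (cpu units, mem units, total budget)
    "u1": (4, 2, 20.0),
    "u2": (3, 3, 10.0),
    "u3": (2, 4, 7.0),
}
capacity = {"cpu": 6, "mem": 6}
price = {"cpu": 0.5, "mem": 0.5}
step = 0.25                     # exactly representable in binary floating point
revenue_floor = 8.0             # provider's preset lowest-revenue limit

def active(price):
    # A user stays in the auction while its bundle cost fits its budget.
    return {u: (c, m) for u, (c, m, bud) in users.items()
            if c * price["cpu"] + m * price["mem"] <= bud}

while True:
    act = active(price)
    over = [r for i, r in enumerate(("cpu", "mem"))
            if sum(d[i] for d in act.values()) > capacity[r]]
    if not over:
        break
    for r in over:              # raise price only on over-demanded resources
        price[r] += step

revenue = sum(c * price["cpu"] + m * price["mem"] for c, m in act.values())
print(sorted(act), revenue, revenue >= revenue_floor)
```

Because each resource's price rises only while it is over-demanded, the final unit prices reflect per-resource competition intensity, and the closing check against `revenue_floor` is exactly the preset-revenue test the abstract describes.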
Insights from biological processes could help design new optimization techniques for long-standing computational problems. This paper exploits a cellular computing model of the slime mold Physarum polycephalum to solve the Steiner tree problem, an important NP-hard problem in many applications, especially network design. Inspired by the path-finding and network formation capability of Physarum, we develop a new optimization algorithm, named Physarum optimization, with low complexity and high parallelism. To validate and evaluate the proposed models and algorithm, we further apply Physarum optimization to the minimal exposure problem, a fundamental problem corresponding to worst-case coverage in wireless sensor networks. Complexity analysis and simulation results show that the proposed algorithm achieves good performance with low complexity. Moreover, the core mechanism of Physarum optimization may also provide a useful starting point for developing practical distributed algorithms for network design.
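The path-finding mechanism underlying Physarum-style optimization can be sketched on a tiny graph (the four-node network and update constants below are illustrative; this is the classic flux-feedback dynamic, not the paper's Steiner-tree algorithm): a unit flow is driven from source to sink, tubes carrying more flux thicken, and underused tubes decay, so conductivity concentrates on the shortest route.

```python
import numpy as np

# Edges: (u, v, length). Route 0-1-3 has length 2; route 0-2-3 has length 4.
edges = [(0, 1, 1.0), (1, 3, 1.0), (0, 2, 1.0), (2, 3, 3.0)]
n, s, t = 4, 0, 3
D = np.ones(len(edges))                 # tube conductivities, equal at start

for _ in range(100):
    # Kirchhoff's law: solve for node pressures under unit s->t flow.
    A = np.zeros((n, n))
    for (u, v, L), d in zip(edges, D):
        g = d / L
        A[u, u] += g; A[v, v] += g
        A[u, v] -= g; A[v, u] -= g
    b = np.zeros(n); b[s] = 1.0
    A[t] = 0.0; A[t, t] = 1.0           # ground the sink node: p[t] = 0
    p = np.linalg.solve(A, b)

    # Physarum adaptation: tubes with more flux thicken, the rest decay.
    Q = np.array([d / L * (p[u] - p[v]) for (u, v, L), d in zip(edges, D)])
    D = 0.5 * (D + np.abs(Q))

print(np.round(D, 3))   # conductivity survives only on the shortest path
```

After a few dozen iterations the conductivities of the long branch collapse toward zero while the short route carries the full unit flow, which is the behavior the algorithm generalizes to Steiner trees.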
Deep Learning (DL) algorithms based on artificial neural networks have achieved remarkable success and are being extensively applied in a variety of application domains, ranging from image classification, automatic driving, and natural language processing to medical diagnosis, credit risk assessment, and intrusion detection. However, privacy and security issues of DL have been revealed: a DL model can be stolen or reverse engineered, sensitive training data can be inferred, and even a recognizable face image of a victim can be recovered. Moreover, recent work has found that DL models are vulnerable to adversarial examples perturbed by imperceptible noise, which can lead a model to predict wrongly with high confidence. In this paper, we first briefly introduce the four types of attacks and the privacy-preserving techniques in DL. We then review and summarize the attack and defense methods associated with DL privacy and security in recent years. To demonstrate that these security threats exist in the real world, we also review adversarial attacks under physical conditions. Finally, we discuss current challenges and open problems regarding privacy and security issues in DL.
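The adversarial-example phenomenon can be demonstrated in miniature with a gradient-sign perturbation in the spirit of the fast gradient sign method (the logistic "model", weights, and epsilon below are toy assumptions standing in for a deep network): a tiny, bounded step in the direction that increases the loss flips the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a DL model: a logistic classifier with fixed weights.
w, b = np.array([1.0, 1.0]), 0.0
x = np.array([0.1, 0.1])          # clean input, true label y = 1
y = 1.0

pred_clean = sigmoid(w @ x + b)   # > 0.5: correctly classified as class 1

# Gradient-sign perturbation: step the input so as to increase the loss.
# For logistic loss, d(loss)/dx = (sigmoid(w@x + b) - y) * w.
grad = (sigmoid(w @ x + b) - y) * w
eps = 0.3                         # small perturbation budget
x_adv = x + eps * np.sign(grad)

pred_adv = sigmoid(w @ x_adv + b)
print(pred_clean > 0.5, pred_adv > 0.5)   # prediction flips under the attack
```

Even though each input coordinate moved by only 0.3, the model's confident prediction reverses, which is the essence of the imperceptible-noise vulnerability surveyed above.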
Although Big Data is often hyped, it brings many technical challenges that confront both academic research communities and commercial IT deployments, and its root sources are data streams and the curse of dimensionality. It is generally known that data sourced from data streams accumulate continuously, making traditional batch-based model induction algorithms infeasible for real-time data mining. Feature selection has been popularly used to lighten the processing load of inducing a data mining model. However, when mining high-dimensional data, the search space from which an optimal feature subset is derived grows exponentially in size, leading to an intractable computational demand. To tackle this problem, rooted in the high dimensionality and streaming format of data feeds in Big Data, a novel lightweight feature selection method is proposed. It is designed particularly for mining streaming data on the fly, using an accelerated particle swarm optimization (APSO) type of swarm search that achieves enhanced analytical accuracy within reasonable processing time. In this paper, a collection of Big Data sets with exceptionally high dimensionality are used to evaluate the performance of our new feature selection algorithm.
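A minimal sketch of APSO-style swarm search for feature selection follows (the stand-in fitness function, particle counts, and update constants are hypothetical; in real use the fitness would be a lightweight classifier's accuracy on the streaming window). APSO simplifies standard PSO by dropping velocities and personal bests: each particle moves toward the global best plus a shrinking random exploration term, and positions are binarized into feature masks.

```python
import numpy as np

rng = np.random.default_rng(42)
n_features, n_particles, n_iters = 12, 20, 60

# Toy stand-in fitness: features 0 and 1 are "informative", the rest are
# noise; a small cost per selected feature rewards compact subsets.
def fitness(mask):
    if mask.sum() == 0:
        return -1.0
    gain = 1.0 * mask[0] + 1.0 * mask[1]
    return gain - 0.05 * mask.sum()

# Accelerated PSO: no velocities or personal bests, only the global best
# plus a decaying random exploration term.
X = rng.random((n_particles, n_features))       # continuous positions in [0,1]
beta, alpha = 0.5, 0.25
best_mask, best_fit = None, -np.inf

for t in range(n_iters):
    masks = (X > 0.5).astype(int)               # binarize positions to masks
    for m in masks:
        f = fitness(m)
        if f > best_fit:
            best_fit, best_mask = f, m.copy()
    g = best_mask.astype(float)
    alpha *= 0.97                               # shrink exploration over time
    X = (1 - beta) * X + beta * g + alpha * rng.normal(size=X.shape)
    X = np.clip(X, 0.0, 1.0)

print(best_mask.tolist(), round(best_fit, 3))
```

Because the update keeps only one attractor and one noise term, each iteration is cheap, which is what makes this style of swarm search plausible for on-the-fly mining of streams.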