Spatial crowdsensing is a special kind of crowdsourcing that allocates tasks to workers at specific locations where they can sense data. Due to the lack of prior information about the quality of workers and the ground truth, selecting the most suitable workers, who can guarantee the quality of the sensing tasks, remains a great challenge. In this paper, we propose a novel framework that chooses the most reliable workers among those available under a constrained budget. We model worker quality through two factors, bias and variance, which capture the continuous nature of sensing tasks. Our framework first allocates calibration tasks to calibrate the bias and then iteratively estimates the workers' variance with increasing accuracy. Choosing more reliable workers poses an exploration-exploitation dilemma. We therefore design a novel Multi-Armed Bandit (MAB) algorithm based on the Upper Confidence Bound (UCB) scheme and combine it with a weighted data aggregation scheme to estimate the ground truth of a sensing task more accurately. Furthermore, a dynamic budget allocation algorithm is designed to achieve global optimization. We then prove that the expected sensing error can be bounded via the regret bound of the MAB. In simulation experiments on a real-world data set, we compare our algorithm with several baselines and show its effectiveness in inferring the ground truth with a limited budget.
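As a hedged illustration of the selection-and-aggregation loop described above (not the paper's exact algorithm), the Python sketch below selects workers by a UCB-style index over estimated variances and fuses their readings with inverse-variance weights; the simulated worker model and all parameter names are assumptions.

```python
# Minimal sketch: UCB-style worker selection plus weighted aggregation.
# The noise model, bonus term, and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_rounds, k = 10, 200, 3
true_sigma = rng.uniform(0.2, 2.0, n_workers)   # unknown per-worker noise
ground_truth = 5.0

est_var = np.ones(n_workers)                    # running variance estimates
counts = np.zeros(n_workers)
residual_sq = np.zeros(n_workers)

for t in range(1, n_rounds + 1):
    # UCB index: prefer low estimated variance, plus an exploration bonus
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts, 1))
    index = -est_var + bonus
    chosen = np.argsort(index)[-k:]

    readings = ground_truth + rng.normal(0, true_sigma[chosen])
    # inverse-variance weighted aggregation of the selected workers' readings
    w = 1.0 / est_var[chosen]
    estimate = np.dot(w, readings) / w.sum()

    # update per-worker variance estimates from residuals against the aggregate
    residual_sq[chosen] += (readings - estimate) ** 2
    counts[chosen] += 1
    est_var[chosen] = residual_sq[chosen] / counts[chosen] + 1e-6

print("final aggregate estimate:", round(estimate, 3))
```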
Mobile crowdsensing has become a novel and promising paradigm for collecting, analyzing, and exploiting massive amounts of data. However, the issue of data quality has not been carefully addressed. Low-quality data contributions undermine the effectiveness and prospects of crowdsensing, motivating approaches that guarantee the high quality of contributed data. In this paper, we integrate quality estimation and monetary incentives, and propose a quality-based truth estimation and surplus sharing method for crowdsensing. Specifically, we design an unsupervised learning approach to quantify users' data qualities and long-term reputations, and exploit an outlier detection technique to filter out anomalous data items. Furthermore, we model the surplus-sharing process as a cooperative game and propose a Shapley value-based method to determine each user's payment. We have conducted a real crowdsensing experiment and a large-scale simulation to evaluate our method. The evaluation results show that our approach achieves good performance in terms of both quality estimation and surplus sharing.
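To make the Shapley value-based payment idea concrete, here is a minimal sketch that enumerates permutations to compute exact Shapley shares; the surplus function of coalition quality is an illustrative assumption, not the paper's model.

```python
# Sketch: split a surplus among users by exact Shapley values.
# v(S) below is an assumed diminishing-returns function of coalition quality.
from itertools import permutations

qualities = {"u1": 0.9, "u2": 0.6, "u3": 0.3}

def surplus(coalition):
    q = sum(qualities[u] for u in coalition)
    return 10.0 * (1 - 0.5 ** q)

users = list(qualities)
shapley = dict.fromkeys(users, 0.0)
perms = list(permutations(users))
for order in perms:
    seen = []
    for u in order:
        # marginal contribution of u to the users already in the coalition
        shapley[u] += (surplus(seen + [u]) - surplus(seen)) / len(perms)
        seen.append(u)

print({u: round(v, 3) for u, v in shapley.items()})
```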
With the proliferation of mobile devices, Mobile Crowdsensing (MCS) has become an efficient way to ubiquitously sense and collect environmental data. Compared to traditional sensor networks, MCS has the vital advantage that workers play an active role in collecting and sensing data. However, due to the openness of MCS, workers and sensors are of varying quality. Low-quality sensors and workers may yield noisy or even inaccurate data, which makes it important to infer the quality of workers and sensors and to seek a valid task assignment with sufficient total quality for MCS. To solve this problem, we adopt truth inference methods to iteratively infer the truth and the qualities. Based on the quality inference, this paper proposes a task assignment problem called quality-bounded task assignment with redundancy constraint (QTAR). Different from traditional task assignment problems, a redundancy constraint is added to satisfy the preliminaries of truth inference, which requires that each task be assigned to at least a certain number of workers. We prove that QTAR is NP-complete and propose a (2 + ε)-approximation algorithm for QTAR, called QTA. Finally, experiments are conducted on both synthetic data and a real dataset. The results demonstrate the efficiency and effectiveness of our algorithms.
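The iterative truth inference step can be sketched as alternating between a quality-weighted truth estimate and a residual-based quality update; the matrix shapes, weighting rule, and iteration count below are assumptions for illustration.

```python
# Sketch of iterative truth inference for numeric tasks: alternate between
# (1) truth estimation weighted by worker quality and (2) quality updates
# from each worker's squared error against the current truth estimate.
import numpy as np

rng = np.random.default_rng(1)
truths = np.array([3.0, 7.0, 1.0, 5.0])           # hidden ground truths
noise = np.array([0.1, 0.5, 1.5])                  # unknown worker noise levels
answers = truths[None, :] + rng.normal(0, noise[:, None], (3, 4))

quality = np.ones(3)                               # start with equal weights
for _ in range(20):
    est = (quality[:, None] * answers).sum(0) / quality.sum()   # truth step
    err = ((answers - est[None, :]) ** 2).mean(1)               # quality step
    quality = 1.0 / (err + 1e-6)

print("estimated truths:", np.round(est, 2))
```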
Deep learning (DL) is becoming increasingly popular in many domains, including computer vision, speech recognition, self-driving automobiles, etc. GPUs can train DL models efficiently but are expensive, which motivates users to share GPU resources to reduce monetary costs in practice. To ensure efficient sharing among multiple users, it is necessary to develop efficient GPU resource management and scheduling solutions. However, existing ones have several shortcomings. First, they require users to specify the job resource requirement, which is usually quite inaccurate and leads to cluster resource underutilization. Second, when scheduling DL jobs, they rarely take the cluster network characteristics into consideration, resulting in low job execution performance. To overcome these issues, we propose Liquid, an efficient GPU resource management platform for DL jobs with intelligent resource requirement estimation and scheduling. First, we propose a regression model-based method for job resource requirement estimation to avoid users over-allocating computing resources. Second, we propose intelligent, cluster network-efficient scheduling methods in both immediate and batch modes based on the above resource requirement estimation techniques. Third, we further propose three system-level optimizations, including pre-scheduling data transmission, fine-grained GPU sharing, and event-driven communication. Experimental results show that Liquid can accelerate job execution speed by 18% on average and shorten the average job completion time (JCT) by 21% compared with cutting-edge solutions. Moreover, the proposed optimization methods are effective in various scenarios.
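A rough sketch of the regression-based resource requirement estimation idea follows, under the assumption of a simple linear model from job features to GPU memory; Liquid's actual feature set and model may differ, and all numbers are made up for illustration.

```python
# Sketch: fit a linear model mapping job features (assumed here to be
# parameter count and batch size) to observed GPU memory use of past jobs,
# then predict the requirement of a new job instead of asking the user.
import numpy as np

# columns: [million parameters, batch size, 1 (bias)]
X = np.array([[ 25, 32, 1],
              [ 60, 64, 1],
              [110, 32, 1],
              [340, 16, 1]], dtype=float)
gpu_mem_gb = np.array([3.1, 7.8, 9.5, 21.0])       # observed usage of past jobs

coef, *_ = np.linalg.lstsq(X, gpu_mem_gb, rcond=None)
new_job = np.array([150, 48, 1], dtype=float)
print("predicted GPU memory (GB):", round(float(new_job @ coef), 2))
```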
In the problem of routing in multi-hop wireless networks, to achieve high end-to-end throughput, it is crucial to find the "best" path from the source node to the destination node. Although a large number of routing protocols have been proposed to find the path with the minimum total transmission count/time for delivering a single packet, such transmission count/time minimizing protocols cannot be guaranteed to achieve maximum end-to-end throughput. In this paper, we argue that by carefully considering the spatial reusability of the wireless communication medium, we can tremendously improve the end-to-end throughput in multi-hop wireless networks. To support our argument, we propose spatial reusability-aware single-path routing (SASR) and anypath routing (SAAR) protocols, and compare them with existing single-path and anypath routing protocols, respectively. Our evaluation results show that our protocols significantly improve the end-to-end throughput compared with existing protocols. Specifically, for single-path routing, the median throughput gain is up to 60 percent, and for each source-destination pair, the throughput gain is as high as 5.3x; for anypath routing, the maximum per-flow throughput gain is 71.6 percent, while the median gain is up to 13.2 percent.
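The following toy sketch contrasts a plain sum-of-transmission-times path cost with a spatial-reuse-aware cost in which only mutually interfering links (assumed here to be any three consecutive hops) must share the medium; this interference model is a simplification for illustration, not SASR's exact formulation.

```python
# Sketch: with spatial reuse, hops far apart can transmit concurrently, so the
# effective path cost is bounded by the costliest window of interfering hops
# rather than the sum over all hops.
def sum_cost(link_times):
    return sum(link_times)

def reuse_aware_cost(link_times, interference_range=3):
    # assumed: only links within `interference_range` consecutive hops interfere
    windows = [sum(link_times[i:i + interference_range])
               for i in range(max(1, len(link_times) - interference_range + 1))]
    return max(windows)

path = [1.0, 0.8, 1.2, 0.9, 1.1, 0.7]   # per-link transmission times (ms)
print("sum cost:", sum_cost(path), "reuse-aware cost:", reuse_aware_cost(path))
```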
Energy consumption is a fundamental and critical issue in wireless sensor networks. Mobile sensors consume much more energy during movement than during communication or sensing. Thus, how to schedule mobile sensors to minimize their moving distance while satisfying the coverage requirement is of great significance. In this paper, we study the target coverage problem in mobile sensor networks, where all targets need to be covered by sensors continuously. Our goal is to minimize the moving distance of sensors to cover all targets in the surveillance region, which lies in Euclidean space. Initially, all sensors are located at k base stations, so we define this problem as k-Sink Minimum Movement Target Coverage. To solve this problem, we propose a polynomial-time approximation scheme named the Energy Effective Movement Algorithm (EEMA). We prove that the approximation ratio of EEMA is 1 + ε and that its time complexity is n^{O(1/ε²)}. We also propose a distributed version of EEMA (D-EEMA) for large-scale networks where EEMA is not efficient enough in practice. Finally, we conduct experiments to validate the efficiency and effectiveness of EEMA and D-EEMA.
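The sketch below is not the EEMA scheme itself; it is a simple greedy baseline, under assumed coordinates and sensing radius, that only makes the movement-minimization objective concrete.

```python
# Greedy baseline (NOT the EEMA PTAS): serve each target by moving the closest
# still-unused sensor to within sensing range r; all coordinates are made up.
import math

bases = [(0.0, 0.0), (10.0, 10.0)]       # k = 2 base stations, 2 sensors each
sensors = [b for b in bases for _ in range(2)]
targets = [(2.0, 1.0), (9.0, 8.0), (5.0, 5.0)]
r = 1.0                                   # sensing radius

total_move = 0.0
for tx, ty in targets:
    # pick the unused sensor requiring the smallest movement to reach the target
    i, (sx, sy) = min(enumerate(sensors),
                      key=lambda p: math.dist(p[1], (tx, ty)))
    total_move += max(0.0, math.dist((sx, sy), (tx, ty)) - r)
    sensors.pop(i)                        # each sensor covers one target here

print("total movement:", round(total_move, 2))
```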
As a significant business paradigm, data trading has attracted increasing attention. However, the study of data acquisition in data markets is still in its infancy. Mobile crowdsensing has been recognized as an efficient and scalable way to acquire large-scale data. Designing a practical data acquisition scheme for crowd-sensed data markets has to consider three major challenges: crowd-sensed data trading format determination, profit maximization with polynomial computational complexity, and payment minimization in strategic environments. In this paper, we jointly consider these design challenges and propose VENUS, the first profit-driVEN data acqUiSition framework for crowd-sensed data markets. Specifically, VENUS consists of two complementary mechanisms: VENUS-PRO for profit maximization and VENUS-PAY for payment minimization. Given the expected payment for each data acquisition point, VENUS-PRO greedily selects the most "cost-efficient" data acquisition points to achieve a sub-optimal profit. To determine the minimum payment for each data acquisition point, we further design VENUS-PAY, a data procurement auction in a Bayesian setting. Our theoretical analysis shows that VENUS-PAY achieves both strategy-proofness and optimal expected payment. We evaluate VENUS on a public sensory data set collected by the Intel Research Berkeley Laboratory. Our evaluation results show that VENUS-PRO approaches the optimal profit, and VENUS-PAY outperforms the canonical second-price reverse auction in terms of total payment.
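A hedged sketch of the greedy cost-efficient selection idea: repeatedly pick the acquisition point with the best marginal-value-to-payment ratio while it still improves profit. The coverage-style value function and all numbers are illustrative assumptions, not VENUS-PRO's exact model.

```python
# Sketch: greedy selection of "cost-efficient" data acquisition points.
points = {
    "p1": {"pay": 3.0, "covers": {"a", "b"}},
    "p2": {"pay": 2.0, "covers": {"b", "c"}},
    "p3": {"pay": 4.0, "covers": {"c", "d", "e"}},
}
unit_value = 2.5                          # assumed revenue per covered data item

chosen, covered, profit = [], set(), 0.0
while True:
    best, best_ratio = None, 0.0
    for name, p in points.items():
        if name in chosen:
            continue
        gain = unit_value * len(p["covers"] - covered)   # marginal value
        if gain > p["pay"] and gain / p["pay"] > best_ratio:
            best, best_ratio = name, gain / p["pay"]
    if best is None:                      # no point still increases profit
        break
    gain = unit_value * len(points[best]["covers"] - covered)
    profit += gain - points[best]["pay"]
    covered |= points[best]["covers"]
    chosen.append(best)

print("selected:", chosen, "profit:", round(profit, 2))
```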
This paper studies the problem of Radiation cOnstrained scheduling of wireless Charging tasKs (ROCK): given wireless charging tasks with required charging energy and charging deadlines for rechargeable devices, schedule the power of wireless chargers to maximize the overall effective charging energy for all rechargeable devices, and further to minimize the total charging time, while guaranteeing electromagnetic radiation (EMR) safety, i.e., no point in the considered 2-D area has EMR intensity exceeding a given threshold. To address ROCK, we first present a centralized algorithm. We transform ROCK from a nonlinear problem into a linear one by applying two approaches, area discretization and solution regularization, and then propose a linear programming-based greedy test algorithm to solve it. We also propose a distributed algorithm that is scalable with network size by presenting an area partition scheme and two approaches called area-scaling and EMR-scaling, and prove that it achieves effective charging energy no less than (1 − ε) of that of the optimal solution, and charging time no more than that of the optimal solution. We conduct both simulation and field experiments to validate our theoretical findings. The results show that our algorithm achieves 94.9% of the optimal effective charging energy and requires 47.1% less charging time than the optimal solution when ε ≥ 0.2, and outperforms the other algorithms by at least 350.1% in terms of charging time with even more effective charging energy.
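After area discretization, the per-round scheduling core reduces to a linear program; the sketch below, using SciPy's linprog with made-up gain matrices and caps, shows one plausible form of that LP rather than the paper's exact formulation.

```python
# Sketch of the LP core after discretization: choose charger power levels p to
# maximize delivered energy subject to EMR caps at sampled points.
import numpy as np
from scipy.optimize import linprog

charge_gain = np.array([0.8, 0.5, 0.9])           # energy per unit power, per charger
emr_gain = np.array([[0.4, 0.1, 0.3],             # EMR at sampled point i from charger j
                     [0.2, 0.5, 0.2],
                     [0.1, 0.2, 0.6]])
emr_cap, p_max = 1.0, 2.0

res = linprog(c=-charge_gain,                      # maximize => minimize the negative
              A_ub=emr_gain, b_ub=np.full(3, emr_cap),
              bounds=[(0, p_max)] * 3, method="highs")
print("power levels:", np.round(res.x, 3), "energy:", round(-res.fun, 3))
```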
Wireless power transfer technology has witnessed huge development because of its convenience and reliability. This paper concerns the fundamental issue of wireless charger PLacement with Optimized charging uTility (PLOT): given a fixed number of chargers and a set of points where rechargeable devices can be placed with orientations uniformly distributed in the range [0, 2π), determine the positions and orientations of the chargers such that the overall expected charging utility over all points is maximized. To address PLOT, we propose a (1 − 1/e − ε)-approximation algorithm. First, we present techniques to approximate the nonlinear charging power and the expected charging utility to make the problem almost linear. Second, we develop a dominating coverage set extraction method to reduce the continuous search space of PLOT to a limited and discrete one without performance loss. Third, we prove that the reformulated problem is essentially maximizing a monotone submodular function subject to a matroid constraint, and propose a greedy algorithm to address it. We conduct both simulation and field experiments to validate our theoretical results, and the results show that our algorithm outperforms comparison algorithms by at least 32.9%.
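Once the problem is reduced to maximizing a monotone submodular utility over a discrete candidate set, the greedy step can be sketched as below; the candidate placements and the coverage-style utility are illustrative assumptions rather than the paper's exact utility function.

```python
# Sketch: greedily pick charger placements with the largest marginal gain in a
# coverage-style (monotone submodular) utility, up to a budget of k chargers.
candidates = {
    "c1": {"pt_a", "pt_b"},              # points a candidate placement charges well
    "c2": {"pt_b", "pt_c", "pt_d"},
    "c3": {"pt_d", "pt_e"},
    "c4": {"pt_a", "pt_e"},
}
k = 2                                     # number of chargers to place

placed, charged = [], set()
for _ in range(k):
    best = max((c for c in candidates if c not in placed),
               key=lambda c: len(candidates[c] - charged))   # marginal gain
    placed.append(best)
    charged |= candidates[best]

print("placements:", placed, "covered points:", sorted(charged))
```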
Many online information platforms have emerged to satisfy society's needs for person-specific data, where a service provider collects raw data from data contributors and then offers value-added data services to data consumers. However, in the data trading layer, data consumers face a pressing problem: how to verify whether the service provider has truthfully collected and processed the data? Furthermore, data contributors are usually unwilling to reveal their sensitive personal data and real identities to data consumers. In this paper, we propose TPDM, which efficiently integrates Truthfulness and Privacy preservation in Data Markets. TPDM is structured internally in an Encrypt-then-Sign fashion, using partially homomorphic encryption and identity-based signatures. It simultaneously facilitates batch verification, data processing, and outcome verification, while maintaining identity preservation and data confidentiality. We also instantiate TPDM with a profile matching service and a data distribution service, and extensively evaluate their performance on the Yahoo! Music ratings dataset and the 2009 RECS dataset, respectively. Our analysis and evaluation results reveal that TPDM achieves several desirable properties while incurring low computation and communication overheads when supporting large-scale data markets.
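As a toy illustration of the partially homomorphic part of an Encrypt-then-Sign pipeline, the sketch below uses the third-party phe (python-paillier) package, which is an assumption here, and omits TPDM's identity-based signature layer entirely.

```python
# Sketch: additively homomorphic aggregation with Paillier, so the service
# provider can process contributions without seeing plaintexts.
from phe import paillier   # assumed dependency: pip install phe

consumer_pub, consumer_priv = paillier.generate_paillier_keypair(n_length=1024)

# contributors encrypt their readings under the data consumer's public key
readings = [12, 30, 7]
ciphertexts = [consumer_pub.encrypt(r) for r in readings]

# the service provider aggregates ciphertexts without decrypting anything
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

# only the key-holding consumer learns the aggregate
print("aggregate:", consumer_priv.decrypt(encrypted_sum))
```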