For the purpose of propagating information and ideas through a social network, a seeding strategy aims to find a small set of seed users able to maximize the spread of influence, which is termed the influence maximization problem. Although a large number of works have studied this problem, existing seeding strategies are limited to models that cannot fully capture the characteristics of real-world social networks. In fact, due to high-speed data transmission and the large population of participants, the diffusion processes in real-world social networks have many aspects of uncertainty. As shown in our experiments, when such uncertainty is taken into account, the state-of-the-art seeding strategies are pessimistic, as they fail to trace the influence diffusion. In this paper, we study strategies that select seed users in an adaptive manner. We first formally model the dynamic Independent Cascade model and introduce the concept of an adaptive seeding strategy. Then, based on the proposed model, we show that a simple greedy adaptive seeding strategy finds an effective solution with a provable performance guarantee. Besides the greedy algorithm, an efficient heuristic algorithm is provided for better scalability. Extensive experiments have been performed on both real-world networks and synthetic power-law networks. The results demonstrate the superiority of the adaptive seeding strategies over other baseline methods.
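The greedy adaptive idea described above can be illustrated with a short sketch: pick one seed, observe the realized cascade, then choose the next seed conditioned on what is already active. This is not the paper's exact algorithm; the toy graph, uniform propagation probability `p`, and Monte Carlo gain estimation are illustrative assumptions.

```python
import random


def simulate_ic(graph, seeds, active, p, rng):
    """One independent-cascade run: returns the resulting set of active nodes.

    `graph` maps a node to its out-neighbors; each newly activated node gets a
    single chance to activate each inactive neighbor with probability p.
    """
    activated = set(active)
    frontier = [s for s in seeds if s not in activated]
    activated.update(frontier)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in activated and rng.random() < p:
                    activated.add(v)
                    nxt.append(v)
        frontier = nxt
    return activated


def greedy_adaptive_seeding(graph, k, p=0.1, sims=200, seed=0):
    """Adaptive greedy sketch: seed the node with the largest estimated
    marginal spread, observe the realized diffusion, then repeat."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    active, chosen = set(), []
    for _ in range(k):
        best, best_gain = None, -1.0
        for u in nodes - active:
            # Estimate expected marginal spread of u given the observed state.
            gain = sum(
                len(simulate_ic(graph, [u], active, p, rng)) - len(active)
                for _ in range(sims)
            ) / sims
            if gain > best_gain:
                best, best_gain = u, gain
        chosen.append(best)
        # Observe the actual realized diffusion triggered by the new seed.
        active = simulate_ic(graph, [best], active, p, rng)
    return chosen, active
```

On a star graph the hub is chosen first, since its estimated marginal spread dominates that of any leaf.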
Mobile crowdsensing has become a novel and promising paradigm for collecting, analyzing, and exploiting massive amounts of data. However, the issue of data quality has not been carefully addressed. Low-quality data contributions undermine the effectiveness and prospects of crowdsensing, and thus motivate the need for approaches that guarantee the high quality of contributed data. In this paper, we integrate quality estimation and monetary incentives, and propose a quality-based truth estimation and surplus sharing method for crowdsensing. Specifically, we design an unsupervised learning approach to quantify users' data qualities and long-term reputations, and exploit an outlier detection technique to filter out anomalous data items. Furthermore, we model the process of surplus sharing as a cooperative game, and propose a Shapley value-based method to determine each user's payment. We have conducted a real crowdsensing experiment and a large-scale simulation to evaluate our method. The evaluation results show that our approach achieves good performance in terms of both quality estimation and surplus sharing.
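The Shapley value-based payment rule can be sketched directly from its definition: each user's payment is their marginal contribution to the surplus, averaged over all join orders. The exact-enumeration version below (feasible only for small user sets; the paper's setting and any approximation scheme are not reproduced here) illustrates the idea.

```python
from itertools import permutations


def shapley_payments(users, surplus):
    """Exact Shapley value of a cooperative surplus-sharing game.

    `surplus` maps a frozenset of users (including frozenset()) to the surplus
    that coalition creates. Each user's payment is their marginal contribution
    averaged over all orderings of the users.
    """
    pay = {u: 0.0 for u in users}
    perms = list(permutations(users))
    for order in perms:
        coalition = frozenset()
        for u in order:
            with_u = coalition | {u}
            pay[u] += surplus[with_u] - surplus[coalition]
            coalition = with_u
    return {u: v / len(perms) for u, v in pay.items()}
```

By construction the payments are efficient: they always sum to the surplus of the grand coalition.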
As a significant business paradigm, data trading has attracted increasing attention. However, the study of data acquisition in data markets is still in its infancy. Mobile crowdsensing has been recognized as an efficient and scalable way to acquire large-scale data. Designing a practical data acquisition scheme for crowd-sensed data markets has to consider three major challenges: crowd-sensed data trading format determination, profit maximization with polynomial computational complexity, and payment minimization in strategic environments. In this paper, we jointly consider these design challenges and propose VENUS, the first profit-driVEN data acqUiSition framework for crowd-sensed data markets. Specifically, VENUS consists of two complementary mechanisms: VENUS-PRO for profit maximization and VENUS-PAY for payment minimization. Given the expected payment for each of the data acquisition points, VENUS-PRO greedily selects the most "cost-efficient" data acquisition points to achieve a sub-optimal profit. To determine the minimum payment for each data acquisition point, we further design VENUS-PAY, a data procurement auction in a Bayesian setting. Our theoretical analysis shows that VENUS-PAY achieves both strategy-proofness and optimal expected payment. We evaluate VENUS on a public sensory data set collected by the Intel Berkeley Research Laboratory. Our evaluation results show that VENUS-PRO approaches the optimal profit, and VENUS-PAY outperforms the canonical second-price reverse auction in terms of total payment.
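The "cost-efficient" greedy selection of VENUS-PRO can be sketched as follows: repeatedly add the acquisition point with the largest marginal-value-to-payment ratio, stopping once no candidate increases profit. The value function, payment table, and stopping rule here are illustrative assumptions, not the paper's exact mechanism.

```python
def greedy_profit(points, value, payment):
    """Greedy profit-maximization sketch: pick the most cost-efficient
    data acquisition point (largest marginal-value / payment ratio) while
    it still yields a positive profit increment.

    `value(S)` is the market value of a set of acquisition points, assumed
    submodular; `payment[p]` is the expected payment for point p.
    """
    selected, remaining = [], list(points)
    while remaining:
        base = value(selected)
        best, best_ratio = None, 1.0  # ratio must exceed 1 to add profit
        for pt in remaining:
            ratio = (value(selected + [pt]) - base) / payment[pt]
            if ratio > best_ratio:
                best, best_ratio = pt, ratio
        if best is None:
            break
        selected.append(best)
        remaining.remove(best)
    return selected
```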
Mobile crowdsensing has shown impressive capability in data collection and has given rise to numerous applications. In terms of coverage quality, few works have considered an efficient (low-cost) and effective (high-coverage) design for mobile crowdsensing networks. We investigate optimal quality-aware coverage in mobile crowdsensing networks. The difference between our problem and the conventional coverage problem is that we select only a subset of mobile users, so that coverage quality is maximized under a constrained budget. To address this new problem, which we prove to be NP-hard, we first prove that the set function of coverage quality is nondecreasing and submodular. By leveraging this favorable property of submodular optimization, we then propose a (1 - 1/e)-approximation algorithm with O(n^(k+2)) time complexity, where k is an integer greater than or equal to 3. Finally, we conduct extensive simulations of the proposed scheme, and the results demonstrate that ours outperforms the random selection scheme and one of the state-of-the-art schemes in terms of total coverage quality by at most 2.4× and 1.5×, and by 1.4× and 1.3× on average, respectively. Additionally, ours achieves a near-optimal solution compared with brute-force search results.
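For a nondecreasing submodular coverage-quality function, the classic greedy rule underlying such (1 - 1/e) guarantees simply adds the user with the largest marginal quality gain at each step. A minimal cardinality-constrained sketch (the paper's budgeted variant and enumeration step are not reproduced):

```python
def greedy_coverage(users, quality, k):
    """Greedy user selection for a nondecreasing submodular quality
    function: at each of k steps, add the user with the largest marginal
    gain. For such functions this achieves a (1 - 1/e) approximation.
    """
    chosen = []
    candidates = sorted(users)  # sorted for deterministic tie-breaking
    for _ in range(k):
        if not candidates:
            break
        best = max(candidates, key=lambda u: quality(chosen + [u]) - quality(chosen))
        if quality(chosen + [best]) - quality(chosen) <= 0:
            break  # no user improves coverage quality any further
        chosen.append(best)
        candidates.remove(best)
    return chosen
```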
Opportunistic routing has been shown to improve network throughput by allowing nodes that overhear the transmission and are closer to the destination to participate in forwarding packets, i.e., to join the forwarder list. The nodes in the forwarder list are prioritized, and a lower-priority forwarder discards the packet if it has already been forwarded by a higher-priority forwarder. One challenging problem is to select and prioritize the forwarder list such that a certain network performance metric is optimized. In this paper, we focus on selecting and prioritizing the forwarder list to minimize the energy consumed by all nodes. We study both the case where the transmission power of each node is fixed and the case where it is dynamically adjustable. We present an energy-efficient opportunistic routing strategy, denoted EEOR. Our extensive simulations in TOSSIM show that EEOR outperforms the well-known ExOR protocol (when adapted to sensor networks) in terms of energy consumption, packet loss ratio, and average delivery delay.
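The priority rule above suggests a simple expected-cost model for one hop: forwarder i takes over only when it receives the packet and no higher-priority forwarder did, and the sender retransmits until some forwarder receives it. The formula below is an illustrative model of this kind, not EEOR's exact cost function; the per-forwarder delivery probabilities and onward costs are assumed inputs.

```python
def expected_cost(tx_energy, p, remaining):
    """Expected energy to advance a packet using a prioritized forwarder list.

    p[i]        : delivery probability from the sender to forwarder i
                  (index 0 = highest priority).
    remaining[i]: expected onward energy cost if forwarder i takes over.
    The sender retransmits (paying tx_energy each time) until at least one
    forwarder receives the packet.
    """
    onward = 0.0
    none_higher = 1.0  # probability no higher-priority forwarder received it
    for pi, ri in zip(p, remaining):
        onward += none_higher * pi * ri  # i is the highest-priority receiver
        none_higher *= 1.0 - pi
    p_any = 1.0 - none_higher
    # E = tx + onward + (1 - p_any) * E  =>  E = (tx + onward) / p_any
    return (tx_energy + onward) / p_any
```

Under this model, putting low-onward-cost forwarders at high priority reduces the expected total, which is exactly the prioritization question the paper optimizes.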
Data aggregation is a key functionality in wireless sensor networks (WSNs). This paper focuses on the data aggregation scheduling problem to minimize the delay (or latency). We propose an efficient distributed algorithm that produces a collision-free schedule for data aggregation in WSNs. We theoretically prove that the delay of the aggregation schedule generated by our algorithm is at most 16R + Δ - 14 time slots. Here, R is the network radius and Δ is the maximum node degree in the communication graph of the original network. Our algorithm significantly improves on the previously best known data aggregation algorithm, whose upper bound on delay is 24D + 6Δ + 16 time slots, where D is the network diameter (note that D can be as large as 2R). We conduct extensive simulations to study the practical performance of our proposed data aggregation algorithm. Our simulation results corroborate our theoretical results and show that our algorithm performs better in practice. We prove that the overall lower bound on delay for data aggregation under any interference model is max{log n, R}, where n is the network size. We provide an example to show that this lower bound is (approximately) tight under the protocol interference model when r_I = r, where r_I is the interference range and r is the transmission range. We also derive the lower bound on delay under the protocol interference model when r < r_I < 3r and when r_I ≥ 3r.
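The two bounds are easy to evaluate for a concrete communication graph: compute R as the maximum hop distance from the sink, Δ as the maximum degree, and plug them into 16R + Δ - 14 and max{log n, R}. A small sketch (taking log as log base 2, an assumption; the paper does not pin the base down in this abstract):

```python
import math
from collections import deque


def bfs_dist(adj, src):
    """Hop distances from src in an undirected graph given as adjacency dict."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist


def aggregation_delay_bounds(adj, sink):
    """Return (upper, lower) = (16R + Delta - 14, max{ceil(log2 n), R})
    for a connected communication graph, where R is the maximum hop
    distance from the sink and Delta is the maximum node degree."""
    n = len(adj)
    R = max(bfs_dist(adj, sink).values())
    delta = max(len(vs) for vs in adj.values())
    return 16 * R + delta - 14, max(math.ceil(math.log2(n)), R)
```

For a 5-node path with the sink at one end, R = 4 and Δ = 2, giving an upper bound of 52 slots against a lower bound of 4.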
Human-carried or vehicle-mounted sensors can be exploited to collect data ubiquitously for building various sensing maps. Most existing mobile sensing applications assume users report and access sensing data through the Internet. However, this approach cannot be applied in scenarios with poor network coverage or expensive network access. Existing data forwarding schemes for mobile opportunistic networks are not sufficient for sensing applications, as the spatial-temporal correlation among sensory data has not been explored. In order to build sensing maps satisfying a specific sensing quality with low delay and energy consumption, we design COUPON, a novel cooperative sensing and data forwarding framework. We first observe that a cooperative sensing scheme can eliminate sampling redundancy and hence save energy. Then we design two cooperative forwarding schemes that leverage data fusion: Epidemic Routing with Fusion (ERF) and Binary Spray-and-Wait with Fusion (BSWF). Different from previous work assuming that all packets propagate independently, we consider packets that are spatial-temporally correlated in the forwarding process, and derive the dissemination law of correlated packets. Both the theoretical analysis and simulation results show that our cooperative forwarding schemes achieve a better tradeoff between delivery delay and transmission overhead. We also evaluate our proposed framework and schemes with real mobile traces. Extensive simulations demonstrate that the cooperative sensing scheme can reduce the number of samplings by 93 percent compared with the non-cooperative scheme; ERF can reduce the transmission overhead by 78 percent compared with Epidemic Routing (ER); and BSWF can increase the delivery ratio by 16 percent and reduce the delivery delay and transmission overhead by 5 percent and 32 percent, respectively, compared with Binary Spray-and-Wait (BSW).
• We present the first linear-time approximation algorithm for non-monotone adaptive submodular maximization.
• Our results do not rely on pointwise submodularity.
• We present a faster algorithm for the monotone case.
• We propose an enhanced algorithm for the monotone case subject to a partition matroid constraint.
• We develop a linear-time algorithm for maximizing fully adaptive submodular functions subject to a partition matroid constraint.
In this paper, we study the non-monotone adaptive submodular maximization problem subject to a cardinality constraint. We first revisit the adaptive random greedy algorithm proposed in [13], where it is shown that this algorithm achieves a 1/e approximation ratio if the objective function is both adaptive submodular and pointwise submodular. It was not clear whether the same guarantee holds under adaptive submodularity alone (without resorting to pointwise submodularity). Our first contribution is to show that the adaptive random greedy algorithm achieves a 1/e approximation ratio under adaptive submodularity. One limitation of the adaptive random greedy algorithm is that it requires O(n × k) value oracle queries, where n is the size of the ground set and k is the cardinality constraint. Our second contribution is to develop the first linear-time algorithm for the non-monotone adaptive submodular maximization problem. Our algorithm achieves a (1/e − ϵ) approximation ratio (this bound improves to (1 − 1/e − ϵ) in the monotone case), using only O(nϵ^(-2) log ϵ^(-1)) value oracle queries. Notably, O(nϵ^(-2) log ϵ^(-1)) is independent of the cardinality constraint. For the monotone case, we propose a faster algorithm that achieves a (1 − 1/e − ϵ) approximation ratio in expectation with O(n log(1/ϵ)) value oracle queries. We also generalize our study by considering a partition matroid constraint, and develop a linear-time algorithm for monotone and fully adaptive submodular functions.
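The random greedy rule that the adaptive algorithm builds on can be sketched in its non-adaptive form: at each step, compute marginal gains, and pick uniformly at random among the k elements with the largest gains, adding the pick only if its gain is positive. This is a simplified illustration (it omits the dummy-element padding and the adaptive conditioning on observations):

```python
import random


def random_greedy(ground, f, k, rng=None):
    """Non-adaptive random greedy sketch for a submodular set function f:
    at each of k steps, choose uniformly among the (at most) k candidates
    with the largest marginal gains; keep the pick only if its gain > 0.
    """
    rng = rng or random.Random(0)
    S, candidates = [], set(ground)
    for _ in range(k):
        if not candidates:
            break
        # Rank remaining elements by marginal gain (descending).
        ranked = sorted(candidates, key=lambda e: f(S + [e]) - f(S), reverse=True)
        e = rng.choice(ranked[:k])
        if f(S + [e]) - f(S) > 0:
            S.append(e)
        candidates.remove(e)
    return S
```

Randomizing over the top-k candidates (rather than always taking the single best) is what yields the 1/e guarantee in the non-monotone case.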
Crowd counting, which counts or accurately estimates the number of people within a region, is critical in many applications, such as guided tours, crowd control, and market research and analysis. A crowd counting solution should be scalable and minimally intrusive (i.e., device-free) to users. Image-based solutions are device-free, but cannot work well in dim or dark environments. Non-image-based solutions usually require every person to carry a device, and are inaccurate and unreliable in practice. In this paper, we present FCC, a device-Free Crowd Counting approach based on Channel State Information (CSI). Our design is motivated by our observation that CSI is highly sensitive to environment variation, like a frog's eye. We theoretically discuss the relationship between the number of moving people and the variation of the wireless channel state. A major challenge in the design of FCC is to find a stable monotonic function characterizing the relationship between the crowd number and various features of CSI. To this end, we propose a metric, the Percentage of nonzero Elements (PEM), in the dilated CSI matrix. The monotonic relationship can be explicitly formulated by the Grey Verhulst Model, which enables crowd counting without a labor-intensive site survey. We implement FCC using off-the-shelf IEEE 802.11n devices and evaluate its performance via extensive experiments in typical real-world scenarios. Our results demonstrate that FCC outperforms the state-of-the-art approaches with much better accuracy, scalability, and reliability.
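The PEM metric itself is straightforward: after preprocessing, count the fraction of matrix entries whose variation exceeds a threshold. The sketch below treats thresholding as the "nonzero" test and omits the dilation step, so it is a simplified stand-in for FCC's actual pipeline; the threshold is an assumed input.

```python
def pem(csi_matrix, threshold):
    """Percentage of nonzero Elements (PEM), simplified: the fraction of
    entries in a CSI variation matrix whose magnitude exceeds `threshold`.
    (FCC additionally dilates the binarized matrix before counting.)
    """
    total = sum(len(row) for row in csi_matrix)
    nonzero = sum(1 for row in csi_matrix for x in row if abs(x) > threshold)
    return nonzero / total
```

More moving people perturb more subcarriers over more time slots, so PEM grows with crowd size; FCC fits that growth with the Grey Verhulst Model to map PEM back to a count.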
As tenants request networked virtual machines (VMs), effective placement of VMs is needed to reduce the network cost in cloud data centers. Cost is one of the major concerns for cloud providers. In addition to the cost caused by network traffic (N-cost), the cost caused by the utilization of physical machines (PM-cost) is also non-negligible. In this paper, we focus on the optimized placement of VMs to minimize the total cost, the combination of N-cost and PM-cost. We define N-cost by various functions, according to different communication models. We formulate the placement problem and prove it to be NP-hard. We investigate the problem from two aspects. First, we put special emphasis on minimizing the N-cost with fixed PM-cost. For the case where tenants request the same number of VMs, we present optimal algorithms under various definitions of N-cost. For the case where tenants require different numbers of VMs, we propose an approximation algorithm. A greedy algorithm is also implemented as a baseline to evaluate performance. Second, we study the general case of the VM placement problem, in which both N-cost and PM-cost are taken into account. We present an effective binary-search-based algorithm to determine how many PMs should be used, which makes a tradeoff between PM-cost and N-cost. For all of the algorithms, we conduct theoretical analysis and extensive simulations to evaluate their performance and efficiency.
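The binary-search idea for choosing the number of PMs can be sketched under one simplifying assumption: PM-cost grows with the machine count m while N-cost shrinks, so the total cost is unimodal in m and the search can shrink toward the cheaper side. The concrete cost function below is illustrative, not the paper's model.

```python
def best_pm_count(total_cost, lo, hi):
    """Binary search for the number of PMs minimizing total_cost(m),
    assuming total_cost = PM-cost(m) + N-cost(m) is unimodal in m
    (PM-cost increasing, N-cost decreasing). Searches m in [lo, hi].
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if total_cost(mid) <= total_cost(mid + 1):
            hi = mid       # minimum lies at mid or to its left
        else:
            lo = mid + 1   # cost still decreasing; move right
    return lo
```

Each probe would, in the full algorithm, invoke the fixed-PM-count placement routine to evaluate the N-cost, so the search needs only O(log m) placements instead of one per candidate count.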