Traditional single image super-resolution (SISR) methods that focus on solving a single, uniform degradation (i.e., bicubic down-sampling) typically suffer from poor performance when applied to real-world low-resolution (LR) images due to their complicated realistic degradations. The key to solving this more challenging real image super-resolution (RealSR) problem lies in learning feature representations that are both informative and content-aware. In this paper, we propose an Omni-frequency Region-adaptive Network (OR-Net) to address both challenges; here we call features covering all of the low, middle, and high frequencies omni-frequency features. Specifically, we start from the frequency perspective and design a Frequency Decomposition (FD) module to separate the different frequency components and comprehensively compensate for the information lost in the real LR image. Then, considering that different regions of a real LR image lose different frequency information, we further design a Region-adaptive Frequency Aggregation (RFA) module that leverages dynamic convolution and spatial attention to adaptively restore the frequency components of different regions. Extensive experiments demonstrate the effectiveness and scenario-agnostic nature of our OR-Net for RealSR.
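To make the frequency-decomposition idea concrete, consider a classical Gaussian-filter analogue of splitting an image into low, middle, and high bands. This is only an illustrative sketch: the paper's FD module is learned, and the function name, sigma values, and band-splitting scheme below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_decompose(img, sigma_low=2.0, sigma_mid=0.5):
    """Split an image into low-, middle-, and high-frequency bands.

    A simple Gaussian band-splitting scheme; the three bands sum back
    to the original image, so no information is discarded.
    """
    low = gaussian_filter(img, sigma=sigma_low)        # coarse structure
    mid = gaussian_filter(img, sigma=sigma_mid) - low  # intermediate detail
    high = img - low - mid                             # fine textures and edges
    return low, mid, high

# Example: decompose a random grayscale "LR image" and verify reconstruction
lr = np.random.rand(64, 64)
low, mid, high = frequency_decompose(lr)
assert np.allclose(low + mid + high, lr)
```

Because the split is exactly invertible, each band can be processed and re-aggregated independently, which is the property a region-adaptive aggregation stage relies on.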
Single image super-resolution (SISR) aims to recover the high-resolution (HR) image from its low-resolution (LR) input image. With the development of deep learning, SISR has achieved great progress. However, it is still challenging to restore real-world LR images with complicated authentic degradations. Therefore, we propose FAN, a frequency aggregation network, to address the real-world image super-resolution problem. Specifically, we extract different frequencies of the LR image and pass each of them individually to a channel attention-grouped residual dense network (CA-GRDB) to output corresponding feature maps. These residual dense feature maps are then aggregated adaptively to recover the HR image with enhanced details and textures. We conduct extensive quantitative and qualitative experiments to verify that our FAN performs well on the real image super-resolution task of the AIM 2020 challenge. According to the released final results, our team SR-IM achieved fourth place on the X4 track with a PSNR of 31.1735 and an SSIM of 0.8728.
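The abstract does not spell out the channel-attention component of CA-GRDB; a minimal squeeze-and-excitation style channel-attention gate, of the kind commonly used in SR networks, might look like the following sketch (the layer sizes and reduction ratio are assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention gate."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average per channel
        self.fc = nn.Sequential(             # excitation: per-channel weights in (0, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))     # rescale feature maps channel-wise

# Example: gate a batch of 64-channel feature maps
feats = torch.randn(2, 64, 32, 32)
out = ChannelAttention(64)(feats)
```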
The continuous publication of aggregate statistics over crowd-sourced data to the public has enabled many data mining applications (e.g., real-time traffic analysis). Existing systems usually rely on a trusted server to aggregate the spatio-temporal crowd-sourced data and then apply a differential privacy mechanism to perturb the aggregate statistics before publishing, in order to provide a strong privacy guarantee. However, the privacy of users will be exposed once the server is hacked or cannot be trusted. In this paper, we study the problem of real-time crowd-sourced statistical data publishing with strong privacy protection under an untrusted server. We propose a novel distributed agent-based privacy-preserving framework, called DADP, that introduces a new level of multiple agents between the users and the untrusted server. Instead of directly uploading the check-in information to the untrusted server, a user can randomly select one agent and upload the check-in information to it via anonymous connection technology. Each agent aggregates the received crowd-sourced data and perturbs the aggregated statistics locally with the Laplace mechanism. The perturbed statistics from all the agents are further combined to form the entire perturbed statistics for publication. In particular, we propose a distributed budget allocation mechanism and an agent-based dynamic grouping mechanism to realize global w-event ε-differential privacy in a distributed way. We prove that DADP can provide w-event ε-differential privacy for real-time crowd-sourced statistical data publishing under the untrusted server. Extensive experiments on real-world datasets demonstrate the effectiveness of DADP.
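The core perturbation step each agent performs is the standard Laplace mechanism. A minimal sketch follows; the histogram values and epsilon are illustrative assumptions, and DADP's distributed budget allocation and dynamic grouping mechanisms are omitted here:

```python
import numpy as np

def laplace_perturb(counts, epsilon, sensitivity=1.0):
    """Perturb an agent's local aggregate counts with the Laplace mechanism."""
    scale = sensitivity / epsilon
    return counts + np.random.laplace(0.0, scale, size=counts.shape)

# Each agent perturbs its local check-in histogram; the server only ever
# sees the sum of perturbed statistics, never any agent's raw aggregates.
epsilon = 1.0
agents = [np.array([12., 3., 7.]), np.array([5., 9., 1.]), np.array([0., 4., 6.])]
published = sum(laplace_perturb(a, epsilon) for a in agents)
```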
Incentive mechanisms are essential for stimulating adequate worker participation to achieve good truth discovery performance in mobile crowdsensing (MCS) systems. However, most existing incentive mechanisms only consider compensating workers' sensing cost, while the cost incurred by potential privacy leakage has been largely neglected. Moreover, none of the existing privacy-preserving incentive mechanisms incorporates workers' different privacy preferences to provide personalized payments. In this paper, we propose a contract-based personalized privacy-preserving incentive mechanism for truth discovery in MCS systems, named Paris-TD, which provides personalized payments for workers as compensation for privacy cost while achieving accurate truth discovery. The basic idea is that the platform offers a set of different contracts to workers with different privacy preferences, and each worker chooses to sign a contract that specifies a privacy-preserving degree (PPD) and the corresponding payment the worker will receive if she submits perturbed data with that PPD. Specifically, we analytically design sets of optimal contracts under both the full and incomplete information models, which maximize the truth discovery accuracy under a given budget while satisfying the individual rationality and incentive compatibility properties. The feasibility and effectiveness of Paris-TD are validated through experiments on both synthetic and real-world datasets.
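To make the contract idea concrete, here is a hypothetical sketch of a worker choosing from a contract menu. The menu values, the linear privacy-cost model, and all names are illustrative assumptions, not the paper's derived optimal contracts:

```python
from dataclasses import dataclass

@dataclass
class Contract:
    ppd: float      # privacy-preserving degree (higher = more noise added)
    payment: float  # payment the worker receives for data at this PPD

def choose_contract(contracts, privacy_cost):
    """A worker picks the contract maximizing utility = payment - cost.

    Assumed cost model: cost grows with the data's usefulness, i.e. with
    (1 - ppd). Incentive compatibility means each worker type's utility
    is maximized by the contract designed for that type.
    """
    return max(contracts, key=lambda c: c.payment - privacy_cost * (1.0 - c.ppd))

# Hypothetical menu: lower PPD (less noise, more useful data) pays more.
menu = [Contract(ppd=0.2, payment=5.0), Contract(ppd=0.5, payment=3.0),
        Contract(ppd=0.8, payment=1.5)]
print(choose_contract(menu, privacy_cost=2.0))
```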
Differential privacy (DP) has gained popularity in truth discovery recently due to its strong privacy guarantee. However, existing DP mechanisms for streaming data publication are not suitable for truth discovery because they fail to consider the different reliabilities of individuals, while the DP-based approaches for truth discovery are not suitable for streaming data because they ignore the correlations between truths over time. Directly applying these existing methods to streaming crowdsourced data would lead to low accuracy of the discovered truth. To solve this problem, in this paper we propose an edge computing based privacy-preserving truth discovery mechanism, named PrivSTD, for streaming crowdsourced data, which realizes high accuracy of the discovered truth while protecting the privacy of workers. Specifically, edge servers are introduced between the untrusted cloud server and workers to securely calculate the local truths and workers' reliabilities. A truth-dependent budget recycle mechanism is proposed for each edge server to adaptively determine the perturbed timestamps and allocate the privacy budget according to the changing pattern of local truths. Besides, a reliability-based perturbation mechanism is proposed to reduce the perturbation magnitude based on each worker's reliability. We theoretically analyze the data utility and computation cost of PrivSTD, and prove that PrivSTD satisfies w-event (ε, δ)-differential privacy. Extensive experimental results on synthetic and real-world datasets demonstrate that PrivSTD achieves better utility than the state-of-the-art approaches.
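The basic pattern behind w-event budgeting over a stream is to spend a share of epsilon only at timestamps where the value has changed enough, so that any window of w consecutive timestamps spends at most epsilon in total. The sketch below shows this publish-or-approximate pattern with a uniform per-step share; PrivSTD's truth-dependent budget recycling is adaptive and more involved, and the threshold and sensitivity here are assumptions:

```python
import numpy as np

def publish_stream(values, epsilon, w, change_thresh=0.05):
    """Sketch of w-event budgeting: perturb and publish only on change.

    Each publication spends epsilon / w (sensitivity assumed to be 1),
    so any w consecutive timestamps spend at most epsilon. When the new
    value is close to the last release, the previous release is repeated
    and no budget is spent.
    """
    per_step = epsilon / w
    released, last = [], None
    for v in values:
        if last is None or abs(v - last) > change_thresh * max(abs(last), 1.0):
            last = v + np.random.laplace(0.0, 1.0 / per_step)  # spend budget
        released.append(last)  # else: approximate with the previous release
    return released

stream = [10.0, 10.2, 10.1, 14.0, 14.3, 9.0]
print(publish_stream(stream, epsilon=1.0, w=3))
```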
In mobile edge computing (MEC), users can offload tasks to nearby MEC servers to reduce computation cost. Considering that the size of offloaded tasks could disclose user location information, several location privacy-preserving task offloading mechanisms have been proposed under the single-server scenario. However, to the best of our knowledge, none of them can provide a strict privacy protection guarantee or be applied to the multi-server scenario, where the user's location can be inferred more accurately if servers collude with each other. In this paper, we propose a novel location privacy-aware task offloading framework (LPA-Offload) for both single-server and multi-server scenarios, which provides strict and provable location privacy protection while achieving efficient task offloading. Specifically, we propose a location perturbation mechanism that allows each user to perturb its real location within a rational perturbation region and provides a differential privacy guarantee. To generate a satisfactory offloading strategy, we propose a perturbation region determination mechanism and an offloading strategy generation mechanism that adaptively select a proper perturbation region according to the customized privacy factor, and then generate an optimal offloading strategy based on the perturbed location within the decided region. The determination of the perturbation region achieves personalized privacy requirements while reducing computation cost. LPA-Offload is proved to satisfy (ε, δ)-differential privacy, and the experiments demonstrate the effectiveness of our framework.
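A common way to perturb a 2-D location under differential privacy is planar Laplace noise, as used in geo-indistinguishability; constraining the noise to a bounded region is one reason a (ε, δ) guarantee arises rather than pure ε. The sketch below is an illustration under these assumptions, not the paper's exact mechanism:

```python
import numpy as np

def perturb_location(x, y, epsilon, region_radius):
    """Perturb a 2-D location with planar Laplace noise, capped to a region.

    Planar Laplace noise draws an angle uniformly and a radius from a
    Gamma(2, 1/epsilon) distribution; capping the radius keeps the
    perturbed location inside the chosen perturbation region.
    """
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    r = min(np.random.gamma(shape=2.0, scale=1.0 / epsilon), region_radius)
    return x + r * np.cos(theta), y + r * np.sin(theta)

# Example: report a perturbed location instead of the real one
print(perturb_location(40.0, -74.0, epsilon=0.5, region_radius=2.0))
```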
Federated Learning (FL) is susceptible to the gradient leakage attack (GLA), which can recover local private training data from the shared gradients or model updates. To ensure privacy, differential privacy is applied in FL by clipping and adding noise to local gradients (i.e., Local Differential Privacy (LDP)) or to the global model update (i.e., Central Differential Privacy (CDP)). However, the effectiveness of DP in defending against GLAs needs to be thoroughly investigated, since some works briefly verify that DP can guard FL against GLAs while others question its defense capability. In this paper, we empirically evaluate CDP and LDP in terms of their resistance to GLAs, paying close attention to the trade-offs between privacy and utility in FL. Our findings reveal that 1) existing GLAs can be defended against by CDP using a per-layer clipping strategy and by LDP with a reasonable privacy guarantee; 2) both CDP and LDP ensure the trade-off between privacy and utility when training shallow models, but cannot guarantee this trade-off when training deeper models (e.g., ResNets). Motivated by the crucial role of the clipping operation in DP, we propose an improved attack that incorporates the clipping operation into existing GLAs without requiring additional information. The experimental results show that our attack can break the protection of CDP and weaken the effectiveness of LDP. Overall, our work both validates the effectiveness and reveals the vulnerability of DP under GLAs. We hope this work can provide guidance on utilizing DP to defend against GLAs in FL and inspire the design of future privacy-preserving FL.
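The clip-and-noise step described here is the standard DP sanitization used in FL; a minimal sketch follows (the clip norm and noise multiplier are illustrative assumptions, and a per-layer clipping strategy would apply this function to each layer's gradient separately):

```python
import numpy as np

def dp_sanitize_update(grad, clip_norm, noise_multiplier):
    """Clip a gradient vector to a fixed L2 norm, then add Gaussian noise.

    In LDP each client applies this to its local gradient before sharing;
    in CDP the server applies it to the aggregated global model update.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, grad.shape)
    return clipped + noise

grad = np.random.randn(1000)
safe = dp_sanitize_update(grad, clip_norm=1.0, noise_multiplier=1.1)
```

Clipping bounds each contribution's sensitivity, which is exactly why the improved attack described above targets the clipping operation itself.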
Mobile crowdsensing (MCS) has now become an effective paradigm for collecting massive data for various sensing applications. However, the interactions between mobile users and the platform, and the data release to third parties, pose severe challenges of privacy leakage for MCS systems, such as the leakage of users' identities and locations. Although several works on MCS have explored the privacy issues in task allocation, incentive, and data reporting, there is still a lack of a comprehensive privacy-preserving framework for MCS that protects the privacy of users throughout their involvement in crowdsensing tasks. In this article, we divide the life cycle of each crowdsensing task into four phases: task allocation, incentive, data collection, and data publishing, and design a privacy-preserving framework that protects users' privacy across this whole life cycle.
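As a purely structural sketch of such a life-cycle framework, each phase can be modeled as a stage with its own privacy-preserving hook; the phase names follow the article, while the skeleton and hook interface below are hypothetical:

```python
from enum import Enum, auto

class Phase(Enum):
    TASK_ALLOCATION = auto()
    INCENTIVE = auto()
    DATA_COLLECTION = auto()
    DATA_PUBLISHING = auto()

def run_task_lifecycle(task, privacy_hooks):
    """Run a crowdsensing task through all four phases, applying the
    privacy-preserving mechanism registered for each phase in order."""
    for phase in Phase:
        task = privacy_hooks[phase](task)  # e.g., anonymized allocation,
    return task                            # perturbed reports, DP publishing

# Example with identity placeholder hooks
hooks = {p: (lambda t: t) for p in Phase}
run_task_lifecycle({"task_id": 1}, hooks)
```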