In a decentralized Internet of Things (IoT) network, a fusion center receives information from multiple sensors to infer a public hypothesis of interest. To prevent the fusion center from abusing the sensor information, each sensor sanitizes its local observation using a local privacy mapping, which is designed to achieve both inference privacy of a private hypothesis and data privacy of the sensors' raw observations. Various inference and data privacy metrics have been proposed in the literature. We introduce the concept of privacy implication (with vanishing budget) to study the relationships between these privacy metrics. We propose an optimization framework in which both local differential privacy (data privacy) and information privacy (inference privacy) metrics are incorporated. In the parametric case where the sensor observations' distributions are known a priori, we propose a two-stage local privacy mapping at each sensor, and show that such an architecture is able to achieve information privacy and local differential privacy to within the predefined budgets. For the nonparametric case where the sensor distributions are unknown, we adopt an empirical optimization approach. Simulation and experiment results demonstrate that our proposed approaches allow the fusion center to accurately infer the public hypothesis while protecting both inference and data privacy.
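The abstract does not specify the form of the local privacy mapping; as a generic illustration of the local differential privacy component, a randomized-response mapping on a binary sensor observation (a standard ε-LDP mechanism, not necessarily the two-stage mapping proposed in the paper) can be sketched as:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps), else flip it.

    This satisfies epsilon-local differential privacy for a single binary
    observation (illustrative mechanism, not the paper's two-stage mapping).
    """
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit
```

Each sensor would apply such a mapping locally before forwarding its report to the fusion center; a smaller ε flips the bit more often, hiding the raw observation at the cost of a noisier report.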
Machine learning (ML) models have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that ML models are vulnerable to membership inference attacks (MIAs), which aim to infer whether a data record was used to train a target model. MIAs on ML models can directly lead to a privacy breach. For example, by identifying that a clinical record was used to train a model associated with a certain disease, an attacker can infer with high confidence that the record's owner has the disease. In recent years, MIAs have been shown to be effective on various ML models, e.g., classification models and generative models. Meanwhile, many defense methods have been proposed to mitigate MIAs. Although MIAs on ML models form a newly emerging and rapidly growing research area, there has been no systematic survey on this topic yet. In this article, we conduct the first comprehensive survey of membership inference attacks and defenses. We provide taxonomies for both attacks and defenses based on their characterizations, and discuss their pros and cons. Based on the limitations and gaps identified in this survey, we point out several promising future research directions to inspire researchers who wish to follow this area. This survey not only serves as a reference for the research community but also provides a clear description for researchers outside this domain. To further help researchers, we have created an online resource repository, which we will keep updated with future relevant work. Interested readers can find the repository at https://github.com/HongshengHu/membership-inference-machine-learning-literature.
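One of the simplest attack families covered by work in this area is a loss-threshold MIA: predict "member" when the target model's loss on a record falls below a threshold, since overfit models tend to assign lower loss to their training data. A minimal sketch (the function names and threshold are illustrative, not a specific method from this survey):

```python
import math

def cross_entropy(prob_true_label: float) -> float:
    """Negative log-likelihood the target model assigns to the true label."""
    return -math.log(max(prob_true_label, 1e-12))

def loss_threshold_mia(prob_true_label: float, threshold: float) -> bool:
    """Predict 'member' when the record's loss falls below the threshold.

    Illustrative loss-threshold attack: training records of an overfit
    model typically receive higher confidence, hence lower loss.
    """
    return cross_entropy(prob_true_label) < threshold
```

A confidently classified record (e.g., probability 0.99 on its true label) would be flagged as a likely training member, while a low-confidence record would not.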
Taking English culture as its representative sample, The Secret History of Domesticity asks how the modern notion of the public-private relation emerged in the seventeenth and eighteenth centuries. Treating that relation as a crucial instance of the modern division of knowledge, Michael McKeon narrates its pre-history along with that of its essential component, domesticity.
This narrative draws upon the entire spectrum of English people's experience. At the most public extreme are political developments like the formation of civil society over against the state, the rise of contractual thinking, and the devolution of absolutism from monarch to individual subject. The middle range of experience takes in the influence of Protestant and scientific thought, the printed publication of the private, the conceptualization of virtual publics—society, public opinion, the market—and the capitalization of production, the decline of the domestic economy, and the increase in the sexual division of labor. The most private pole of experience involves the privatization of marriage, the family, and the household, and the complex entanglement of femininity, interiority, subjectivity, and sexuality.
McKeon accounts for how the relationship between public and private experience first became intelligible as a variable interaction of distinct modes of being—not a static dichotomy, but a tool to think with. Richly illustrated with nearly 100 images, including paintings, engravings, woodcuts, and a representative selection of architectural floor plans for domestic interiors, this volume reads graphic forms to emphasize how susceptible the public-private relation was to concrete and spatial representation. McKeon is similarly attentive to how literary forms evoked a tangible sense of public-private relations—among them figurative imagery, allegorical narration, parody, the author-character-reader dialectic, aesthetic distance, and free indirect discourse. He also finds a structural analogue for the emergence of the modern public-private relation in the conjunction of what contemporaries called the secret history and the domestic novel.
A capacious and synthetic historical investigation, The Secret History of Domesticity exemplifies how the methods of literary interpretation and historical analysis can inform and enrich one another.
Information privacy refers to the desire of individuals to control or have some influence over data about themselves. Advances in information technology have raised concerns about information privacy and its impacts, and have motivated Information Systems researchers to explore information privacy issues, including technical solutions to address these concerns. In this paper, we inform researchers about the current state of information privacy research in IS through a critical analysis of the IS literature that considers information privacy as a key construct. The review of the literature reveals that information privacy is a multilevel concept, but it is rarely studied as such. We also find that information privacy research has relied heavily on student-based and USA-centric samples, which results in findings of limited generalizability. Information privacy research focuses on explaining and predicting theoretical contributions, with few studies in journal articles focusing on design and action contributions. We recommend that future research consider different levels of analysis as well as multilevel effects of information privacy. We illustrate this with a multilevel framework for information privacy concerns. We call for research on information privacy to use a broader diversity of sampling populations, and for more design and action information privacy research to be published in journal articles, which can result in IT artifacts for the protection or control of information privacy.
•Experimental research into the 'privacy paradox' is still comparatively rare.
•This study focuses on observing actual behavior.
•Neither technical knowledge nor money prevents paradoxical behavior.
•Privacy is not rated as overly important in the evaluation of an app's desirability.
•Functionality, design, and perceived cost-to-benefit ratio outweigh privacy concerns.
Research shows that people's use of computers and mobile phones is often characterized by a privacy paradox: their self-reported concerns about their online privacy appear to contradict their often careless online behaviors. Earlier research into the privacy paradox has a number of caveats: most studies focus on intentions rather than behavior, and the influence of technical knowledge, privacy awareness, and financial resources is not systematically ruled out. This study therefore tests the privacy paradox under extreme circumstances, focusing on actual behavior and eliminating the effects of a lack of technical knowledge, privacy awareness, and financial resources. We designed an experiment on the downloading and usage of a mobile phone app among technically savvy students, giving them sufficient money to buy a paid-for app. Results suggest that neither technical knowledge, privacy awareness, nor financial considerations affect the paradoxical behavior observed in users in general. Technically skilled and financially independent users risked potential privacy intrusions despite their awareness of potential risks. In their considerations for selecting and downloading an app, privacy aspects did not play a significant role; functionality, app design, and costs appeared to outweigh privacy concerns.
Pufferfish privacy (PP) is a generalization of differential privacy (DP) that offers flexibility in specifying sensitive information and integrates domain knowledge into the privacy definition. Inspired by the illuminating formulation of DP in terms of mutual information due to Cuff and Yu, this work explores PP through the lens of information theory. We provide an information-theoretic formulation of PP, termed mutual information PP (MI PP), in terms of the conditional mutual information between the mechanism and the secret, given the public information. We show that MI PP is implied by regular PP and characterize conditions under which the reverse implication also holds, recovering the relationship between DP and its information-theoretic variant as a special case. We establish convexity, composability, and post-processing properties for MI PP mechanisms and derive noise levels for the Gaussian and Laplace mechanisms. The obtained mechanisms are applicable under relaxed assumptions and provide improved noise levels in some regimes. Lastly, applications to auditing privacy frameworks, statistical inference tasks, and algorithm stability are explored.
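For reference, the standard ε-DP Laplace mechanism mentioned above can be sketched as follows; this is the textbook mechanism, not the refined MI PP noise levels derived in the paper:

```python
import math
import random

def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
    """Release value + Laplace(scale = sensitivity / epsilon) noise.

    Standard epsilon-DP Laplace mechanism, sampled via the inverse CDF;
    the MI PP noise levels in the paper refine this under Pufferfish
    assumptions.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5                  # uniform on [-0.5, 0.5)
    sign = -1.0 if u < 0 else 1.0
    # Clamp the log argument to guard the measure-zero u = -0.5 endpoint.
    return value - scale * sign * math.log(max(1.0 - 2.0 * abs(u), 1e-300))
```

The noise scale grows with the query's sensitivity and shrinks as the privacy budget ε loosens, which is the tradeoff the paper's improved noise levels tighten in some regimes.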
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
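The minimax game can be illustrated with a toy grid search inspired by the binary data model: a privatizer flips a uniform private bit with probability p (its distortion), while the optimal adversary guesses the bit with accuracy max(p, 1 − p). The setup and function below are a simplified illustration under these assumptions, not the paper's GAN-based training:

```python
def optimal_flip_prob(distortion_budget: float) -> float:
    """Grid-search the privatizer's bit-flip probability in a toy binary game.

    The privatizer releases Y = X flipped with probability p (distortion p);
    the MAP adversary then recovers the uniform private bit X with accuracy
    max(p, 1 - p).  Minimize that accuracy subject to p <= distortion_budget.
    (Toy illustration of the minimax formulation, not GAP's GAN training.)
    """
    best_p, best_acc = 0.0, 1.0
    for i in range(501):              # p sweeps [0, 0.5] in steps of 1e-3
        p = i / 1000.0
        if p > distortion_budget:
            break
        acc = max(p, 1.0 - p)         # best achievable inference accuracy
        if acc < best_acc:
            best_p, best_acc = p, acc
    return best_p
```

With a loose budget the privatizer flips half the time and the adversary does no better than chance; a tight budget caps the achievable privacy, mirroring the privacy-utility tradeoff that GAP formalizes as a constrained minimax game.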
This study builds on the privacy calculus model to revisit the privacy paradox on social media. A two-wave panel data set from Hong Kong and a cross-sectional data set from the United States are used. This study extends the model by incorporating privacy self-efficacy as another privacy-related factor in addition to privacy concerns (i.e., costs) and examines how these factors interact with social capital (i.e., the expected benefit) in influencing different privacy management strategies, including limiting profile visibility, self-disclosure, and friending. This study proposed and found a two-step privacy management strategy in which privacy concerns and privacy self-efficacy prompt users to limit their profile visibility, which in turn enhances their self-disclosing and friending behaviors in both Hong Kong and the United States. Results from the moderated mediation analyses further demonstrate that social capital strengthens the positive direct effect of privacy self-efficacy on self-disclosure in both places, and it can mitigate the direct effect of privacy concerns on restricting self-disclosure in Hong Kong (the conditional direct effects). Social capital also enhances the indirect effect of privacy self-efficacy on both self-disclosure and friending through limiting profile visibility in Hong Kong (the conditional indirect effects). Implications of the findings are discussed.
Do people really care about their privacy? Surveys show that privacy is a primary concern for citizens in the digital age. On the other hand, individuals reveal personal information for relatively small rewards, often just for drawing the attention of peers in an online social network. This inconsistency between privacy attitudes and privacy behaviour is often referred to as the "privacy paradox". In this paper, we present the results of a review of research literature on the privacy paradox. We analyse studies that provide evidence of a paradoxical dichotomy between attitudes and behaviour and studies that challenge the existence of such a phenomenon. The diverse research results are explained by the diversity in research methods, the different contexts and the different conceptualisations of the privacy paradox. We also present several interpretations of the privacy paradox, stemming from social theory, psychology, behavioural economics and, in one case, from quantum theory. We conclude that current research has improved our understanding of the privacy paradox phenomenon. It is, however, a complex phenomenon that requires extensive further research. Thus, we call for synthetic studies to be based on comprehensive theoretical models that take into account the diversity of personal information and the diversity of privacy concerns. We suggest that future studies should use evidence of actual behaviour rather than self-reported behaviour.