Taking English culture as its representative sample, The Secret History of Domesticity asks how the modern notion of the public-private relation emerged in the seventeenth and eighteenth centuries. Treating that relation as a crucial instance of the modern division of knowledge, Michael McKeon narrates its pre-history along with that of its essential component, domesticity.
This narrative draws upon the entire spectrum of English people's experience. At the most public extreme are political developments like the formation of civil society over against the state, the rise of contractual thinking, and the devolution of absolutism from monarch to individual subject. The middle range of experience takes in the influence of Protestant and scientific thought, the printed publication of the private, the conceptualization of virtual publics—society, public opinion, the market—and the capitalization of production, the decline of the domestic economy, and the increase in the sexual division of labor. The most private pole of experience involves the privatization of marriage, the family, and the household, and the complex entanglement of femininity, interiority, subjectivity, and sexuality.
McKeon accounts for how the relationship between public and private experience first became intelligible as a variable interaction of distinct modes of being—not a static dichotomy, but a tool to think with. Richly illustrated with nearly 100 images, including paintings, engravings, woodcuts, and a representative selection of architectural floor plans for domestic interiors, this volume reads graphic forms to emphasize how susceptible the public-private relation was to concrete and spatial representation. McKeon is similarly attentive to how literary forms evoked a tangible sense of public-private relations—among them figurative imagery, allegorical narration, parody, the author-character-reader dialectic, aesthetic distance, and free indirect discourse. He also finds a structural analogue for the emergence of the modern public-private relation in the conjunction of what contemporaries called the secret history and the domestic novel.
A capacious and synthetic historical investigation, The Secret History of Domesticity exemplifies how the methods of literary interpretation and historical analysis can inform and enrich one another.
Information privacy refers to the desire of individuals to control or have some influence over data about themselves. Advances in information technology have raised concerns about information privacy and its impacts, and have motivated Information Systems researchers to explore information privacy issues, including technical solutions to address these concerns. In this paper, we inform researchers about the current state of information privacy research in IS through a critical analysis of the IS literature that considers information privacy as a key construct. The review of the literature reveals that information privacy is a multilevel concept, but it is rarely studied as such. We also find that information privacy research has been heavily reliant on student-based and USA-centric samples, which results in findings of limited generalizability. Information privacy research has focused on theoretical contributions that explain and predict, with few journal articles offering design and action contributions. We recommend that future research consider different levels of analysis as well as multilevel effects of information privacy, and we illustrate this with a multilevel framework for information privacy concerns. We call for research on information privacy to use a broader diversity of sampling populations, and for more design and action information privacy research to be published in journal articles, which can result in IT artifacts for the protection or control of information privacy.
Pufferfish privacy (PP) is a generalization of differential privacy (DP) that offers flexibility in specifying sensitive information and integrates domain knowledge into the privacy definition. Inspired by the illuminating formulation of DP in terms of mutual information due to Cuff and Yu, this work explores PP through the lens of information theory. We provide an information-theoretic formulation of PP, termed mutual information PP (MI PP), in terms of the conditional mutual information between the mechanism and the secret, given the public information. We show that MI PP is implied by the regular PP and characterize conditions under which the reverse implication is also true, recovering the relationship between DP and its information-theoretic variant as a special case. We establish convexity, composability, and post-processing properties for MI PP mechanisms and derive noise levels for the Gaussian and Laplace mechanisms. The obtained mechanisms are applicable under relaxed assumptions and provide improved noise levels in some regimes. Lastly, applications to auditing privacy frameworks, statistical inference tasks, and algorithm stability are explored.
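The noise calibration mentioned above can be illustrated with the classical DP Laplace mechanism, a standard construction rather than the paper's PP-specific derivation; the function name and parameters below are illustrative, and the sketch does not reproduce the improved noise levels the paper derives.

```python
import math
import random

def laplace_mechanism(value, sensitivity, epsilon, rnd=random):
    """Release value with Lap(sensitivity / epsilon) noise added.

    This is the classical epsilon-DP Laplace mechanism; the paper derives
    analogous (and in some regimes smaller) noise levels for Pufferfish-style
    guarantees, which this sketch does not attempt to reproduce.
    """
    b = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, b) from a uniform on (-0.5, 0.5).
    u = rnd.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise

# Example: privatize a count query (sensitivity 1) at epsilon = 0.5.
random.seed(0)
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

The noise scale grows as epsilon shrinks, which is the utility cost that context-aware definitions such as PP aim to reduce.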
Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals’ private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP’s performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model; and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
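The constrained minimax game described above can be sketched in symbols (the notation here is assumed for illustration, not taken from the paper): with privatizer $g$, adversary $h$, data $X$, private variables $S$, adversary loss $\ell$, distortion measure $d$, and distortion budget $D$,

```latex
\[
  \max_{g\,:\;\mathbb{E}\left[d\bigl(g(X),\,X\bigr)\right] \le D}
  \;\;\min_{h}\;\;
  \mathbb{E}\!\left[\ell\bigl(h(g(X)),\,S\bigr)\right]
\]
```

That is, the privatizer releases $g(X)$ so as to make even the best adversary's inference of $S$ as lossy as possible, while keeping the distortion of the released data within the budget $D$.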
This study builds on the privacy calculus model to revisit the privacy paradox on social media. A two-wave panel data set from Hong Kong and a cross-sectional data set from the United States are used. This study extends the model by incorporating privacy self-efficacy as another privacy-related factor in addition to privacy concerns (i.e., costs) and examines how these factors interact with social capital (i.e., the expected benefit) in influencing different privacy management strategies, including limiting profile visibility, self-disclosure, and friending. This study proposes and finds a two-step privacy management strategy in which privacy concerns and privacy self-efficacy prompt users to limit their profile visibility, which in turn enhances their self-disclosing and friending behaviors in both Hong Kong and the United States. Results from the moderated mediation analyses further demonstrate that social capital strengthens the positive direct effect of privacy self-efficacy on self-disclosure in both places, and it can mitigate the direct effect of privacy concerns on restricting self-disclosure in Hong Kong (the conditional direct effects). Social capital also enhances the indirect effect of privacy self-efficacy on both self-disclosure and friending through limiting profile visibility in Hong Kong (the conditional indirect effects). Implications of the findings are discussed.
•Experimental research into the ‘privacy paradox’ is still comparatively rare.
•This study focuses on observing actual behavior.
•Neither technical knowledge nor money prevents paradoxical behavior.
•Privacy is not rated overly important in the evaluation of an app’s desirability.
•Functionality, design, and perceived cost-to-benefit outweigh privacy concerns.
Research shows that people’s use of computers and mobile phones is often characterized by a privacy paradox: their self-reported concerns about their online privacy appear to contradict their often careless online behaviors. Earlier research into the privacy paradox has a number of caveats: most studies focus on intentions rather than behavior, and the influence of technical knowledge, privacy awareness, and financial resources is not systematically ruled out. This study therefore tests the privacy paradox under extreme circumstances, focusing on actual behavior and eliminating the effects of a lack of technical knowledge, privacy awareness, and financial resources. We designed an experiment on the downloading and usage of a mobile phone app among technically savvy students, giving them sufficient money to buy a paid-for app. Results suggest that neither technical knowledge, privacy awareness, nor financial considerations affect the paradoxical behavior observed in users in general. Technically skilled and financially independent users risked potential privacy intrusions despite their awareness of potential risks. In their considerations for selecting and downloading an app, privacy aspects did not play a significant role; functionality, app design, and costs appeared to outweigh privacy concerns.
Do people really care about their privacy? Surveys show that privacy is a primary concern for citizens in the digital age. On the other hand, individuals reveal personal information for relatively small rewards, often just for drawing the attention of peers in an online social network. This inconsistency of privacy attitudes and privacy behaviour is often referred to as the “privacy paradox”. In this paper, we present the results of a review of research literature on the privacy paradox. We analyse studies that provide evidence of a paradoxical dichotomy between attitudes and behaviour and studies that challenge the existence of such a phenomenon. The diverse research results are explained by the diversity in research methods, the different contexts and the different conceptualisations of the privacy paradox. We also present several interpretations of the privacy paradox, stemming from social theory, psychology, behavioural economics and, in one case, from quantum theory. We conclude that current research has improved our understanding of the privacy paradox phenomenon. It is, however, a complex phenomenon that requires extensive further research. Thus, we call for synthetic studies to be based on comprehensive theoretical models that take into account the diversity of personal information and the diversity of privacy concerns. We suggest that future studies should use evidence of actual behaviour rather than self-reported behaviour.
With the recent pandemic emergency, many people have shifted to remote working and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has increased dramatically, and we can observe a significant rise in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is pursued by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.
Accessing and integrating human genomic data with phenotypes are important for biomedical research. Making genomic data accessible for research purposes, however, must be handled carefully to avoid leakage of sensitive individual information to unauthorized parties and improper use of data. In this article, we focus on data sharing within the scope of data accessibility for research. Current common practices to gain biomedical data access are strictly rule based, without a clear and quantitative measurement of the risk of privacy breaches. In addition, several types of studies require privacy‐preserving linkage of genotype and phenotype information across different locations (e.g., genotypes stored in a sequencing facility and phenotypes stored in an electronic health record) to accelerate discoveries. The computer science community has developed a spectrum of techniques for data privacy and confidentiality protection, many of which have yet to be tested on real‐world problems. In this article, we discuss clinical, technical, and ethical aspects of genome data privacy and confidentiality in the United States, as well as potential solutions for privacy‐preserving genotype–phenotype linkage in biomedical research.