Abstract
According to the theory of contextual integrity (CI), privacy norms prescribe information flows with reference to five parameters: sender, recipient, subject, information type, and transmission principle. Because privacy is grasped contextually (e.g., health, education, civic life), the values of these parameters range over contextually meaningful ontologies of information types (or topics) and actors (subjects, senders, and recipients) acting in contextually defined capacities. As an alternative to predominant approaches to privacy, which proved ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and to provide grounds for accepting or rejecting them. Mounting challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-ravenous machine learning systems, similar in form though magnified in scope, call for renewed attention to theory. This Article introduces the metaphor of a data (food) chain to capture the nature of these challenges. With motion up the chain, where higher-order data is inferred from lower-order data, the crucial question is whether the privacy norms governing lower-order data are sufficient for the inferred higher-order data. While CI has a response to this question, a greater challenge comes from data primitives, such as the digital impulses of mouse clicks, motion detectors, and bare GPS coordinates, because they appear to have no meaning. Absent a semantics, they escape CI's privacy norms entirely.
Abstract
The digital age brings with it novel forms of data flow. As a result, individuals are constantly being monitored while consuming products, services, and content. These abilities have given rise to a variety of concerns, most often framed in terms of "privacy" and "data protection" paradigms. An important, oft-noted, yet undertheorized concern is that these dynamics might facilitate the manipulation of subjects: a process in which firms strive to motivate and influence individuals to take specific steps and make particular decisions in a manner considered socially unacceptable. The claim that it is imperative to battle manipulation carries strong intuitive appeal. Intuition, however, does not always indicate the existence of a sound justification or policy option. For that, a deeper analytic and academic discussion is called for.
This Article begins by emphasizing the importance of addressing the manipulation-based argument, which derives from several crucial problems and flaws in the legal and policy framework currently striving to meet the challenges of the digital age. Next, the Article examines whether the manipulation-based concerns are sustainable or merely a visceral response to changing technologies that cannot be given substantial analytical backing. Here the Article details the reasons for striving to block manipulative conduct and, on the other hand, the reasons why legal intervention should, at best, be limited. The Article concludes with some general implications of this discussion for the broader themes and future directions of privacy law, while trying to ascertain whether the rise of the manipulation-based discourse will lead to information privacy's expansion or perhaps its demise.
Privacy visualizations help users understand the privacy implications of using an online service. Privacy by Design guidelines provide generally accepted privacy standards for developers of online services. To obtain a comprehensive understanding of online privacy, we review established approaches, distill a unified list of 15 privacy attributes, and rank them based on perceived importance to users and privacy experts. We then discuss similarities, explain notable differences, and examine trends in terms of the attributes covered. Finally, we show how our results provide a foundation for user-centric privacy visualizations, inspire best practices for developers, and give structure to privacy policies.
This meta-analysis investigates privacy concerns and privacy literacy as predictors of the use of online services and social network sites (SNSs), information sharing, and the adoption of privacy protective measures. A total of 166 studies from 34 countries (n = 75,269) were included in the analysis. In line with the premise of the privacy paradox, privacy concerns did not predict SNS use. However, users concerned about privacy were less likely to use online services and share information and were more likely to utilize privacy protective measures. Except for information sharing, the relationships were comparable for intentions and behavior. Analyses also confirm the role that privacy literacy plays in enhancing the use of privacy protective measures. The findings can be generalized across gender, cultural orientation, and national legal systems.
The art of data privacy
Bowen, Claire McKay
Significance (Oxford, England), February 2022, Volume 19, Issue 1
Journal Article, Open Access
Statistics are vital for understanding society, but they can pose a risk to the privacy of individuals who contribute their data. Claire McKay Bowen illustrates some of the methods used to minimise that risk — with the aid of a famous artwork.
In the digital era, it is increasingly important to understand how privacy decisions are made because information is frequently perceived as a commodity that is mismanaged. The preponderance of privacy literature focuses on individual-level information privacy concern and personal self-disclosure decisions. We propose that a more versatile multilevel description is required to enable exploration of complex privacy decisions that involve co-owned (i.e., group) information in increasingly sophisticated digital environments. We define the concepts of group and individual information privacy, "we-privacy" and "I-privacy" respectively, as the ability of an individual or group to construct, regulate, and apply the rules for managing their information and interaction with others. We develop the theory of multilevel information privacy (TMIP), which uses the theory of communication privacy management and the developmental theory of privacy as foundations for a social rule-based (i.e., normative) process of making privacy decisions that evolve over time with experience. The TMIP contributes to the privacy literature by drawing from prominent social psychology theories of group behavior (i.e., social identity and self-categorization theories) to explain how privacy decisions can be made by individuals or groups (i.e., social units) or by social units acting as members of a particular group. We contend that technology complicates the privacy decision-making process by adding unique environmental characteristics that can influence the social identity assumed for a particular privacy decision, the estimation of the cost-benefit components of the privacy calculus, and the application and evolution of the norms that define the rules for information and interaction management. We discuss the implications of the TMIP for information systems research and provide a research agenda.