With the rapidly evolving permeation of digital technologies into everyday human life, we are witnessing an era of personal data digitalization. Personal data digitalization refers to the sociotechnical encounters associated with the digitization of personal data for use in digital technologies. Personal data digitalization is being applied to central attributes of human life (health, cognition, and emotion) with the purported aim of helping individuals live longer, healthier lives endowed with the requisite cognition and emotion for responding to life situations and other people in a manner that enables human flourishing. A concern taking hold in manifold fields, ranging from IT, bioethics, and law to philosophy and religion, is that as personal data digitalization permeates ever more areas of human existence, humans risk becoming artifacts of technology production. This concern brings to center stage the very notion of what it means to be human, a notion encapsulated in the term human dignity, which broadly refers to the recognition that human beings possess intrinsic value and, as such, are endowed with certain rights and should be treated with respect. In this paper, we identify, describe, and transform what we know about personal data digitalization into a higher-order theoretical structure around the concept of human dignity. The result of our analysis is the CARE (claims, affronts, response, equilibrium) theory of dignity amid personal data digitalization, a theory that explains the relationship of personal data digitalization to human dignity. Building upon the CARE theory as a foundation, researchers in a variety of IS research streams could develop mid-range theories for empirical testing or could use the CARE theory as an overarching lens for interpreting emerging IS phenomena.
Practitioners and government agencies can also use the CARE theory to understand the opportunities and risks of personal data digitalization and to develop policies and systems that respect the dignity of employees and citizens.
One of the biggest concerns with big data is privacy. However, research on big data privacy is still at a very early stage. We believe that forthcoming solutions and theories of big data privacy will be rooted in the existing research output of the privacy discipline. Motivated by these factors, we extensively survey the existing research outputs and achievements of the privacy field from both applied and theoretical angles, aiming to provide a solid starting point for interested readers to address the challenges of the big data setting. We first present an overview of the battleground by defining the roles and operations of privacy systems. Second, we review the milestones of the two current major research categories of privacy: data clustering and privacy frameworks. Third, we discuss privacy research from the perspectives of different disciplines. Fourth, we present the mathematical description, measurement, and modeling of privacy. We conclude by summarizing the challenges and opportunities of this promising topic, hoping to shed light on this exciting and largely uncharted land.
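As an illustration of the kind of mathematical modeling of privacy that such surveys cover, the standard definition of ε-differential privacy (a general formulation, not this survey's specific notation) can be stated as follows. A randomized mechanism \(\mathcal{M}\) satisfies ε-differential privacy if, for all pairs of neighboring datasets \(D, D'\) (differing in one individual's record) and all measurable output sets \(S\):

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S]
```

Smaller values of \(\varepsilon\) bound the mechanism's output distributions more tightly together, limiting what any observer can infer about a single individual's presence in the data.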
Many data analysis operations can be expressed as a GROUP BY query on an unbounded set of partitions, followed by a per-partition aggregation. To make such a query differentially private, adding noise to each aggregation is not enough: we must also ensure that the set of released partitions is itself differentially private.
This problem is not new, and it was recently formally introduced in [14]. In this work, we continue this line of study and focus on the common setting where each user is associated with a single partition. In this setting, we propose a simple, differentially private mechanism that maximizes the number of released partitions. We discuss implementation considerations, as well as a possible extension of this approach to the setting where each user contributes to a fixed, small number of partitions.
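The single-partition-per-user setting described above admits a well-known noise-and-threshold construction; the sketch below is a generic illustration of that idea, not necessarily the paper's exact mechanism (the function name and threshold formula are assumptions):

```python
import math
import random


def select_partitions(user_partition, epsilon, delta):
    """Differentially private partition selection, assuming each user
    belongs to exactly one partition.

    Generic sketch: count users per partition, add Laplace(1/epsilon)
    noise to each count, and release only partitions whose noisy count
    clears a threshold chosen so that a partition backed by a single
    user is released with probability at most delta.
    """
    counts = {}
    for _user, part in user_partition.items():
        counts[part] = counts.get(part, 0) + 1

    # Standard threshold for (epsilon, delta)-DP thresholding.
    threshold = 1 + math.log(1 / (2 * delta)) / epsilon

    released = []
    for part, count in counts.items():
        # Difference of two Exp(epsilon) draws is Laplace with scale 1/epsilon.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        if count + noise > threshold:
            released.append(part)
    return released
```

Because noisy counts of well-populated partitions comfortably exceed the threshold while singleton partitions almost never do, the mechanism releases as many partitions as the privacy budget allows without revealing the presence of any individual user.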
Sharing data for public usage requires sanitization to prevent sensitive information from leaking. Previous studies have presented methods for creating privacy-preserving visualizations. However, few of them provide sufficient feedback to users on how much utility is reduced (or preserved) during such a process. To address this, we design a visual interface along with a data manipulation pipeline that allows users to gauge utility loss while interactively and iteratively handling privacy issues in their data. Widely known and discussed types of privacy models, i.e., syntactic anonymity and differential privacy, are integrated and compared under different use case scenarios. Case study results on a variety of examples demonstrate the effectiveness of our approach.
The privacy-preserving query is critical for modern blockchain systems, especially when supporting crucial applications such as finance and healthcare. Recent advances in blockchain query schemes mainly focus on enhancing the efficiency of integrity authentication and traceability. Despite these efforts, we argue that the exposure of retrieval information may result in privacy leakage, which poses an important yet unresolved challenge. In this paper, we introduce Cloak, a novel privacy-preserving blockchain query scheme with two notable features. First, it utilizes a two-phase distributed query request technique, i.e., division and aggregation, to hide retrieval information based on the naturally independent characteristic of blockchain. Second, we add noise to the sub-request set to avoid malicious attacks during transmission and adopt smart contract-based asymmetric encryption to guarantee the correctness of query results. Experimental results demonstrate that Cloak improves query performance by up to 4× and reduces storage overhead by 50% compared with the state-of-the-art Spiral.
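The division-and-aggregation idea can be illustrated with a minimal sketch. Everything below is a hypothetical simplification for intuition only: real keys are mixed with dummy (noise) keys and scattered across independent nodes, so that no single node observes the full retrieval intent, and the client filters the aggregated results back down to its real query.

```python
import random


def divide_query(real_keys, num_nodes, num_dummies, dummy_pool):
    """Hypothetical sketch of the division phase: blend real query keys
    with randomly chosen dummy keys, shuffle, and scatter the combined
    sub-requests across independent nodes."""
    sub_requests = list(real_keys) + random.sample(dummy_pool, num_dummies)
    random.shuffle(sub_requests)
    buckets = [[] for _ in range(num_nodes)]
    for i, key in enumerate(sub_requests):
        buckets[i % num_nodes].append(key)
    return buckets


def aggregate(results, real_keys):
    """Client-side aggregation phase: discard results for dummy keys,
    keeping only the answers to the real query."""
    return {k: v for k, v in results.items() if k in real_keys}
```

Each node sees only a shuffled mix of real and dummy keys, while the client recovers exactly its intended results during aggregation.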
Privacy as a fundamental right faces considerable challenges as people’s activities have moved into cyberspace. The development of technology has affected various areas related to personal privacy. This article discusses changes to the concept of privacy in the digital age, presents approaches to privacy issues in the law of the European Union (EU) and United States (US) today, and reveals aspects of privacy protection in criminal law based on the relevant Lithuanian case law and Ukrainian law. This analysis showed that legal regulation and practice must be adapted to the changed situation. The use of technology has created new ways of committing serious privacy violations; therefore, criminal law must be ready to respond properly to the changing nature of crimes against personal privacy in the digital age.
Log data from mobile devices generally contain a series of events with temporal information, including time intervals consisting of start and finish times. However, the problem of releasing differentially private time interval datasets has not yet been tackled. A time interval dataset can be represented by a two-dimensional (2D) histogram. Most methods for publishing 2D histograms partition the data into rectangular spaces to reduce the aggregated noise error for range queries. However, the existing algorithms for publishing 2D histograms suffer from structural error when applied to time interval datasets. To reduce the aggregated noise errors and suppress the increase in structural error, we propose the TIDY (publishing Time Intervals via Differential privacY) algorithm. We use frequency vectors as a compact representation of the time interval dataset. After applying the Laplace mechanism to the frequency vectors, we improve their utility based on a maximum likelihood estimation. We also develop a new partitioning method adapted to the frequency vectors to balance the trade-off between noise and structural errors. Our empirical study on real-life and synthetic datasets confirms that TIDY outperforms the existing algorithms for 2D histograms.
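The first step of the pipeline above, applying the Laplace mechanism to a 2D histogram of (start, finish) cells, can be sketched as follows. This is a generic illustration under the assumption that each event contributes to exactly one cell (sensitivity 1); the MLE-based utility improvement and the adaptive partitioning are not shown:

```python
import random


def laplace_noise(scale):
    # Difference of two independent Exp(1/scale) draws is Laplace(0, scale).
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)


def noisy_histogram(hist, epsilon):
    """Apply the Laplace mechanism to a 2D interval histogram.

    Each cell count receives independent Laplace(1/epsilon) noise,
    which yields epsilon-differential privacy when every event falls
    into a single (start, finish) cell.
    """
    return [[count + laplace_noise(1 / epsilon) for count in row]
            for row in hist]
```

Post-processing the noisy counts (e.g., the maximum-likelihood step the abstract mentions, or simply clamping negative counts to zero) costs no additional privacy budget, since differential privacy is closed under post-processing.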