The meaning of privacy has been much disputed throughout its history in response to wave after wave of new technological capabilities and social configurations. The current round of disputes over privacy fuelled by data science has been a cause of despair for many commentators and a death knell for privacy itself for others. We argue that privacy's disputes are neither an accidental feature of the concept nor a lamentable condition of its applicability. Privacy is essentially contested. Because it is, privacy is transformable according to changing technological and social conditions. To make productive use of privacy's essential contestability, we argue for a new approach to privacy research and practical design, focused on the development of conceptual analytics that facilitate dissecting privacy's multiple uses across multiple contexts.
This article is part of the themed issue ‘The ethical impact of data science’.
Saving Governance-By-Design. Mulligan, Deirdre K.; Bamberger, Kenneth A. California Law Review, 06/2018, Volume 106, Issue 3. Journal article, peer reviewed.
Governing through technology has proven irresistibly seductive. Everything from the Internet backbone to consumer devices employs technological design to regulate behavior purposefully by promoting values such as privacy, security, intellectual property protection, innovation, and freedom of expression. Legal and policy scholarship has discussed individual skirmishes over the political impact of technical choices—from whether intelligence and police agencies can gain access to privately encrypted data to debates over digital rights management. But it has failed to come to terms with the reality that “governance-by-design”—the purposeful effort to use technology to embed values—is becoming a central mode of policymaking, and that our existing regulatory system is fundamentally ill-equipped to prevent that phenomenon from subverting public governance.
Far from being a panacea, governance-by-design has undermined important governance norms and chipped away at our voting, speech, privacy, and equality rights. In administrative agencies, courts, Congress, and international policy bodies, public discussions about embedding values in design arise in a one-off, haphazard way, if at all. Constrained by their structural limitations, these traditional venues rarely explore the full range of other values that design might affect, and often advance a single value or occasionally pit one value against another. They seldom permit a meta-discussion about when and whether it is appropriate to enlist technology in the service of values at all. And their policy discussions almost never include designers, engineers, and those who study the impact of socio-technical systems on values.
When technology is designed to regulate without such discussions—as it often is—the effects can be even more insidious. The resulting technology often hides government and corporate aims and the fundamental political decisions that have been made. In this way, governance-by-design obscures policy choices altogether. Such choices recede from the political as they become what “is” rather than what politics has determined ought to be.
This Article proposes a detailed framework for saving governance-by-design.
Through four case studies, the Article examines a range of recent battles over the values embedded in technology design and makes the case that we are entering an era of policymaking by “design war.” These four battles, in turn, highlight four recurring dysfunctions of governance-by-design:
First, governance-by-design overreaches by using overbroad technological fixes that lack the flexibility to balance equities and adapt to changing circumstances. Errors and unintended consequences result.
Second, governance-by-design often privileges one or a few values while excluding other important ones, particularly broad human rights.
Third, regulators lack the proper tools for governance-by-design. Administrative agencies, legislatures, and courts often lack technical expertise and have traditional structures and accountability mechanisms that poorly fit the job of regulating technology.
Fourth, governance-by-design decisions that broadly affect the public are often made in private venues or in processes that make technological choices appear inevitable and apolitical.
If we fail to develop new rules of engagement for governance-by-design, substantial and consequential policy choices will be made without effective public participation, purposeful debate, and relevant expertise. Important values will be sacrificed—sometimes inadvertently, because of bad decisions, and sometimes willfully, because decisions will be captured by powerful stakeholders.
To address these critical issues, this Article proposes four rules of engagement. It constructs a framework to help decision makers protect values and democratic processes as they consider regulating by technology. Informed by the examination of skirmishes across the battlefields, as well as relevant Science and Technology Studies (STS), legal, design, and engineering literatures, this framework embraces four overarching imperatives:
1. Design with Modesty and Restraint to Preserve Flexibility
2. Privilege Human and Public Rights
3. Ensure Regulators Possess the Right Tools: Broad Authority and Competence, and Technical Expertise
4. Maintain the Publicness of Policymaking
These rules of engagement offer a way toward surfacing and resolving value disputes in technological design, while preserving rather than subverting public governance and public values.
Differential privacy is at a turning point. Implementations have been successfully leveraged in private industry, the public sector, and academia in a wide variety of applications, allowing scientists, engineers, and researchers to learn about populations of interest without learning about specific individuals. Because differential privacy allows us to quantify cumulative privacy loss, these differentially private systems will, for the first time, allow us to measure and compare the total privacy loss due to these personal data-intensive activities. Appropriately leveraged, this could be a watershed moment for privacy.
Like other technologies and techniques that allow for a range of instantiations, implementation details matter. When meaningfully implemented, differential privacy supports deep data-driven insights with minimal worst-case privacy loss. When not meaningfully implemented, differential privacy delivers privacy mostly in name. Using differential privacy to maximize learning while providing a meaningful degree of privacy requires judicious choices with respect to the privacy parameter epsilon, among other factors. However, there is little understanding of what the optimal value of epsilon is for a given system, or for classes of systems, purposes, or data, or how to go about determining it.
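To make the role of epsilon concrete, here is a minimal sketch (our own illustration, not drawn from any implementation discussed in the article) of the standard Laplace mechanism for a counting query; the function names and data are hypothetical. It shows how smaller epsilons buy stronger privacy at the cost of noisier answers, and how basic sequential composition lets cumulative privacy loss be tallied:

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Epsilon-differentially-private count via the Laplace mechanism.

    A counting query has L1 sensitivity 1 (adding or removing one
    person changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for row in data if predicate(row))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical data: ages of survey respondents (illustrative only).
ages = [23, 35, 41, 29, 52, 67, 18, 44]

# Smaller epsilon -> more noise -> stronger privacy, noisier answers.
for eps in (0.01, 0.1, 1.0):
    released = laplace_count(ages, lambda a: a >= 40, epsilon=eps)
    print(f"epsilon={eps}: noisy count of respondents aged 40+ = {released:.1f}")

# Basic sequential composition: k epsilon-DP releases on the same data
# consume at most k * epsilon of the privacy budget. This additivity is
# what makes cumulative privacy loss measurable and comparable.
k, eps = 10, 0.1
print(f"worst-case privacy loss after {k} releases at epsilon={eps}: {k * eps:.1f}")
```

Nothing in the sketch answers the harder question the article raises, namely which epsilon is appropriate; it only shows why the choice matters.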
To understand current differential privacy implementations and how organizations make these key choices in practice, we conducted interviews with practitioners to learn from their experiences of implementing differential privacy. We found no clear consensus on how to choose epsilon, nor is there agreement on how to approach this and other key implementation decisions. Given the importance of these implementation details, there is a need for shared learning amongst the differential privacy community. To serve these purposes, we propose the creation of the Epsilon Registry—a publicly available communal body of knowledge about differential privacy implementations that can be used by various stakeholders to drive the identification and adoption of judicious differentially private implementations.
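The article proposes the registry as a communal body of knowledge but does not prescribe a schema; the following dataclass is a purely hypothetical sketch of the kinds of fields such an entry might record, with every name our own invention:

```python
from dataclasses import dataclass

@dataclass
class EpsilonRegistryEntry:
    """Hypothetical sketch of a single Epsilon Registry record.

    All fields are illustrative assumptions; the article does not
    specify a registry format.
    """
    organization: str         # who deployed differential privacy
    system: str               # the product, query system, or data release
    epsilon: float            # privacy parameter per release
    delta: float              # for (epsilon, delta)-DP variants; 0.0 for pure DP
    unit_of_privacy: str      # e.g. "one user's records for one day"
    budget_refresh: str       # how and when the budget resets, if ever
    epsilon_rationale: str    # how the value was chosen

example = EpsilonRegistryEntry(
    organization="ExampleCorp",  # hypothetical organization
    system="telemetry aggregation",
    epsilon=1.0,
    delta=1e-9,
    unit_of_privacy="one user's records for one day",
    budget_refresh="daily",
    epsilon_rationale="matched to utility targets in offline experiments",
)
print(example)
```

Fields like the unit of privacy and the budget refresh period matter because two deployments with the same epsilon can offer very different protection depending on what, exactly, the budget is accounted against.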
This Thing Called Fairness. Mulligan, Deirdre K.; Kroll, Joshua A.; Kohli, Nitin; et al. Proceedings of the ACM on Human-Computer Interaction, 11/2019, Volume 3, Issue CSCW. Journal article, peer reviewed.
The explosion in the use of software in important sociotechnical systems has renewed focus on the study of the way technical constructs reflect policies, norms, and human values. This effort requires the engagement of scholars and practitioners from many disciplines. And yet, these disciplines often conceptualize the operative values very differently while referring to them using the same vocabulary. The resulting conflation of ideas confuses discussions about values in technology at disciplinary boundaries. In the service of improving this situation, this paper examines the value of shared vocabularies, analytics, and other tools that facilitate conversations about values in light of these discipline-specific conceptualizations and the role such tools play in furthering research and practice; outlines different conceptions of "fairness" deployed in discussions about computer systems; and provides an analytic tool for interdisciplinary discussions and collaborations around the concept of fairness. We use a case study of risk assessments in criminal justice applications both to motivate our effort, describing how conflation of different concepts under the banner of "fairness" led to unproductive confusion, and to illustrate the value of the fairness analytic by demonstrating how the rigorous analysis it enables can assist in identifying key areas of theoretical, political, and practical misunderstanding or disagreement and, where desired, support alignment or collaboration in the absence of consensus.
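As one concrete illustration of how formal conceptions of fairness can conflict, here is a toy example of our own (not the paper's case study) computing two common metrics from the machine learning literature, demographic parity and equal opportunity, on the same hypothetical risk-tool predictions; the tool satisfies the first while violating the second:

```python
import numpy as np

# Hypothetical predictions and outcomes for two groups (toy data).
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # group membership
y_true = np.array([1, 0, 0, 0, 1, 1, 0, 0])  # actual outcome
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # tool's risk flag

def positive_rate(pred, mask):
    """P(flagged) within a group: what demographic parity compares."""
    return pred[mask].mean()

def true_positive_rate(pred, true, mask):
    """P(flagged | positive outcome): what equal opportunity compares."""
    positives = mask & (true == 1)
    return pred[positives].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: positive rate = {positive_rate(y_pred, mask):.2f}, "
          f"true positive rate = {true_positive_rate(y_pred, y_true, mask):.2f}")

# Both groups are flagged at rate 0.50 (demographic parity holds), yet
# true positive rates differ (1.00 vs 0.50): the same tool is "fair"
# under one formal conception and "unfair" under another.
```

This is exactly the kind of disagreement that stays invisible when different disciplines use the single word "fairness" for different underlying constructs.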
U.S. privacy law is under attack. Scholars and advocates criticize it as weak, incomplete, and confusing, and argue that it fails to empower individuals to control the use of their personal information. These critiques present a largely accurate description of the law "on the books." But the debate has strangely ignored privacy "on the ground"—since 1994, no one has conducted a sustained inquiry into how corporations actually manage privacy, and what motivates them. This Article presents findings from the first study of corporate privacy management in fifteen years, involving qualitative interviews with chief privacy officers identified by their peers as industry leaders. Spurred by these findings, we present a descriptive account of privacy "on the ground" that upends the terms of the prevailing policy debate. This alternative account identifies elements neglected by the traditional story—the emergence of the Federal Trade Commission as a privacy regulator, the increasing influence of privacy advocates, market and media pressures for privacy protection, and the rise of privacy professionals—and traces the ways in which these players supplemented a privacy debate largely focused on processes (such as notice and consent mechanisms) with a growing emphasis on substance: preventing violations of consumers' expectations of privacy. This "grounded" account should inform privacy reforms. While widespread efforts to expand consent mechanisms to empower individuals to control their personal information may offer some promise, those efforts should not proceed in a way that eclipses robust substantive definitions of privacy and the processes and protections they are beginning to produce, or that constrains the regulatory flexibility that permits their evolution. This would destroy important tools for limiting corporate overreaching, curbing consumer manipulation, and protecting shared expectations about the personal sphere on the Internet and in the marketplace.
Although "privacy by design" (PBD)?embedding privacy protections into products during design, rather than retroactively?uses the term "design" to recognize how technical design choices implement and ...settle policy, design approaches and methodologies are largely absent from PBD conversations. Critical, speculative, and value-centered design approaches can be used to elicit reflections on relevant social values early in product development, and are a natural fit for PBD and necessary to achieve PBD's goal. Bringing these together, we present a case study using a design workbook of speculative design fictions as a values elicitation tool. Originally used as a reflective tool among a research group, we transformed the workbook into artifacts to share as values elicitation tools in interviews with graduate students training as future technology professionals. We discuss how these design artifacts surface contextual, socially-oriented understandings of privacy, and their potential utility in relationship to other values levers.
Data, privacy, and the greater good. Horvitz, Eric; Mulligan, Deirdre. Science (American Association for the Advancement of Science), 07/2015, Volume 349, Issue 6245. Journal article, peer reviewed.
Large-scale aggregate analyses of anonymized data can yield valuable results and insights that address public health challenges and provide new avenues for scientific discovery. These methods can extend our knowledge and provide new tools for enhancing health and wellbeing. However, they raise questions about how best to address potential threats to privacy while reaping benefits for individuals and for society as a whole. The use of machine learning to make leaps across informational and social contexts, inferring health conditions and risks from nonmedical data, provides representative scenarios for reflection on directions for balancing innovation and regulation.
Doctrine for Cybersecurity. Mulligan, Deirdre K.; Schneider, Fred B. Daedalus (Cambridge, Mass.), 09/2011, Volume 140, Issue 4. Journal article, peer reviewed, open access.
A succession of doctrines for enhancing cybersecurity has been advocated in the past, including prevention, risk management, and deterrence through accountability. None has proved effective. Proposals that are now being made view cybersecurity as a public good and adopt mechanisms inspired by those used for public health. This essay discusses the failings of previous doctrines and surveys the landscape of cybersecurity through the lens that a new doctrine, public cybersecurity, provides.
While the turn from traditional regulation to more collaborative, experimentalist, and flexible forms of governance has garnered significant academic focus, far less attention has been paid to the effects of such “new governance” approaches on regulated firms' understanding of the laws' demands, and on the structures employed within business organizations to meet them. This article targets this analytic gap by examining internal corporate practices regarding consumer privacy, an arena in which the Federal Trade Commission and the states have adopted new governance models. Using data from qualitative interviews with leading corporate Chief Privacy Officers, as well as internal corporate documentation, it examines the way privacy practices have been catalyzed in the shadow of new privacy governance approaches and the combination of regulatory, market, and stakeholder forces they seek to harness. Specifically, it suggests the convergence of a set of practices adopted by privacy officers identified as “leaders,” regarding both high‐level corporate privacy management and the integration of privacy into entity‐wide risk management goals through technology, decision‐making processes, and the empowerment of distributed expertise networks throughout the firm.
Administrative agencies increasingly rely on technology to promote the substantive goals they are charged to pursue. The digitization of administration, then, raises the question of how to ensure that decisions about the use of technology in public management reflect political and social commitments to privacy. Having suggested limits to traditional means of external oversight in the privacy context, the authors explore what factors might, by contrast, promote the consideration of privacy. To that end, they examine the implementation of the privacy impact assessment (PIA) requirement by two different federal agencies considering the adoption of a single technology: radio frequency identification (RFID). The RFID case studies suggest three areas of significant variance between the agencies that the authors identify as potentially contributing to the disparate levels of compliance with the PIA mandate: (1) the status and independence of a privacy expert embedded within the agency; (2) the decentralized distribution, disciplinary diversity, prior experience, and expertise of the privacy staff; and (3) the creation of an alternative external oversight structure.