The hidden costs of artificial intelligence, from natural resources and labor to privacy and freedom. What happens when artificial intelligence saturates political life and depletes the planet? How is AI shaping our understanding of ourselves and our societies? In this book, Kate Crawford reveals how this planetary network is fueling a shift toward undemocratic governance and increased inequality. Drawing on more than a decade of research, Crawford shows that AI is a technology of extraction: from the energy and minerals needed to build and sustain its infrastructure, to the exploited workers behind "automated" services, to the data AI collects from us. Rather than taking a narrow focus on code and algorithms, Crawford offers a political and material perspective on what it takes to make artificial intelligence and where it goes wrong. While technical systems present a veneer of objectivity, they are always systems of power. This is an urgent account of what is at stake as technology companies use artificial intelligence to reshape the world.
This paper develops the concept of listening as a metaphor for paying attention online. Pejorative terms such as 'lurking' have failed to capture much detail about the experience of presence online. Instead, much online media research has focused on 'having a voice', be it in blogs, wikis, social media, or discussion lists. The metaphor of listening can offer a productive way to analyse the forms of online engagement that have previously been overlooked, while also allowing a deeper consideration of the emerging disciplines of online attention. Social media are the focus of this paper, and in particular, how these platforms are changing the configurations of the ideal listening subject. Three modes of online listening are discussed: background listening, reciprocal listening, and delegated listening; Twitter provides a case study for how these modes are experienced and performed by individuals, politicians and corporations.
There are growing discontinuities between the research practices of data science and established tools of research ethics regulation. Some of the core commitments of existing research ethics regulations, such as the distinction between research and practice, cannot be cleanly exported from biomedical research to data science research. Such discontinuities have led some data science practitioners and researchers to move toward rejecting ethics regulations outright. These shifts occur at the same time as a proposal for major revisions to the Common Rule—the primary regulation governing human-subjects research in the USA—is under consideration for the first time in decades. We contextualize these revisions in long-running complaints about regulation of social science research and argue data science should be understood as continuous with social sciences in this regard. The proposed regulations are more flexible and scalable to the methods of non-biomedical research, yet problematically largely exclude data science methods from human-subjects regulation, particularly uses of public datasets. The ethical frameworks for Big Data research are highly contested and in flux, and the potential harms of data science research are unpredictable. We examine several contentious cases of research harms in data science, including the 2014 Facebook emotional contagion study and the 2016 use of geographical data techniques to identify the pseudonymous artist Banksy. To address disputes about application of human-subjects research ethics in data science, critical data studies should offer a historically nuanced theory of “data subjectivity” responsive to the epistemic methods, harms and benefits of data science and commerce.
Deep learning techniques are growing in popularity within the field of artificial intelligence (AI). These approaches identify patterns in large scale datasets, and make classifications and predictions, which have been celebrated as more accurate than those of humans. But for a number of reasons, including the nonlinear path from inputs to outputs, there is a dearth of theory that can explain why deep learning techniques work so well at pattern detection and prediction. Claims about “superhuman” accuracy and insight, paired with the inability to fully explain how these results are produced, form a discourse about AI that we call enchanted determinism. To analyze enchanted determinism, we situate it within a broader epistemological diagnosis of modernity: Max Weber’s theory of disenchantment. Deep learning occupies an ambiguous position in this framework. On one hand, it represents a complex form of technological calculation and prediction, phenomena Weber associated with disenchantment. On the other hand, both deep learning experts and observers deploy enchanted, magical discourses to describe these systems’ uninterpretable mechanisms and counter-intuitive behavior. The combination of predictive accuracy and mysterious or unexplainable properties results in myth-making about deep learning’s transcendent, superhuman capacities, especially when it is applied in social settings. We analyze how discourses of magical deep learning produce techno-optimism, drawing on case studies from game-playing, adversarial examples, and attempts to infer sexual orientation from facial images. Enchantment shields the creators of these systems from accountability while its deterministic, calculative power intensifies social processes of classification and control.
Limitless Worker Surveillance Ajunwa, Ifeoma; Crawford, Kate; Schultz, Jason
California Law Review, 06/2017, Volume 105, Issue 3
Journal Article, Peer reviewed
From the Pinkerton private detectives of the 1850s, to the closed-circuit cameras and email monitoring of the 1990s, to new apps that quantify the productivity of workers, and to the collection of health data as part of workplace wellness programs, American employers have increasingly sought to track the activities of their employees. Starting with Taylorism and Fordism, American workers have become accustomed to heightened levels of monitoring that have only been mitigated by the legal counterweight of organized unions and labor laws. Thus, along with economic and technological limits, the law has always been presumed to be a constraint on these surveillance activities. Recently, technological advancements in several fields—big data analytics, communications capture, mobile device design, DNA testing, and biometrics—have dramatically expanded capacities for worker surveillance both on and off the job. While the cost of many forms of surveillance has dropped significantly, new technologies make the surveillance of workers even more convenient and accessible, and labor unions have become much less powerful in advocating for workers. The American worker must now contend with an all-seeing Argus Panoptes built from technology that allows for the trawling of employee data from the Internet and the employer collection of productivity data and health data, with the ostensible consent of the worker. This raises the question of whether the law still remains a meaningful avenue to delineate boundaries for worker surveillance. In this Article, we start from the normative viewpoint that the right to privacy is not an economic good that may be exchanged for the opportunity for employment. We then examine the effectiveness of the law as a check on intrusive worker surveillance, given recent technological innovations.
In particular, we focus on two popular trends in worker tracking—productivity apps and worker wellness programs—to argue that current legal constraints are insufficient and may leave American workers at the mercy of 24/7 employer monitoring. We consider three possible approaches to remedying this deficiency of the law: (1) a comprehensive omnibus federal information privacy law, similar to approaches taken in the European Union, which would protect all individual privacy to various degrees regardless of whether one is at work or elsewhere and without regard to the sensitivity of the data at issue; (2) a narrower, sector-specific Employee Privacy Protection Act (EPPA), which would focus on prohibiting specific workplace surveillance practices that extend outside of work-related locations or activities; and (3) an even narrower sector- and sensitivity-specific Employee Health Information Privacy Act (EHIPA), which would protect the most sensitive types of employee data, especially those that could arguably fall outside of the Health Insurance Portability and Accountability Act's (HIPAA) jurisdiction, such as wellness and other data related to health and one's personhood.
AI Systems as State Actors Crawford, Kate; Schultz, Jason
Columbia Law Review, 11/2019, Volume 119, Issue 7
Journal Article, Peer reviewed
Many legal scholars have explored how courts can apply legal doctrines, such as procedural due process and equal protection, directly to government actors when those actors deploy artificial intelligence (AI) systems. But very little attention has been given to how courts should hold private vendors of these technologies accountable when the government uses their AI tools in ways that violate the law. This is a concerning gap, given that governments are turning to third-party vendors with increasing frequency to provide the algorithmic architectures for public services, including welfare benefits and criminal risk assessments. As such, when challenged, many state governments have disclaimed any knowledge or ability to understand, explain, or remedy problems created by AI systems that they have procured from third parties. The general position has been “we cannot be responsible for something we don’t understand.” This means that algorithmic systems are contributing to the process of government decisionmaking without any mechanisms of accountability or liability. They fall within an accountability gap.
In response, we argue that courts should adopt a version of the state action doctrine to apply to vendors who supply AI systems for government decisionmaking. Analyzing the state action doctrine’s public function, compulsion, and joint participation tests, we argue that—much like other private actors who perform traditional core government functions at the behest of the state—developers of AI systems that directly influence government decisions should be found to be state actors for purposes of constitutional liability. This is a necessary step, we suggest, to bridge the current AI accountability gap.
This piece examines emoji as conduits for affective labor in the social networks of informational capitalism. Emoji, ubiquitous digital images that can appear in text messages, emails, and social media chat platforms, are rich in social, cultural, and economic significance. This article examines emoji as historical, social, and cultural objects, and as examples of skeuomorphism and of technical standardization. Now superseded as explicitly monetized objects by other graphics designed for affective interactions, emoji nonetheless represent emotional data of enormous interest to businesses in the digital economy, and continue to act symbolically as signifiers of affective meaning. We argue that emoji characters both embody and represent the tension between affect as human potential, and as a productive force that capital continually seeks to harness through the management of everyday biopolitics. Emoji are instances of a contest between the creative power of affective labor and its limits within a digital realm in thrall to market logic.
This article contrasts the Megan's Story campaign, a recent Australian media and policy response to sexting (the act of taking and transmitting naked or semi-naked pictures via mobile phones), with interview responses drawn from an Australian study that asked young people about mobiles and sexting. It considers local and international responses to sexting as 'child pornography,' raising questions about the adequacy and appropriateness of criminalizing young people's sexual self-representation and communication. Based on young people's responses to sexting, the authors argue that young people are developing an emerging ethics around the issue of consent. However, considerations of consent cannot be accounted for by the laws as they are presently framed, as under-18-year-olds currently are not allowed to consent to any form of sexting. This disconnection between the law and uses of technology by consenting teenagers generates problems for policy, education, and legal systems. This paper suggests a response that would recognize the seriousness of incidents of bullying, harassment or abuse, and would also take into account the meaning that sexting has for young people in specific contexts and cultures.
About the Authors:
Matthew Zook (E-mail: zook@uky.edu): Department of Geography, University of Kentucky, Lexington, Kentucky, United States of America
Solon Barocas: Microsoft Research, New York, New York, United States of America
danah boyd: Microsoft Research, New York, New York, United States of America; Data & Society, New York, New York, United States of America
Kate Crawford: Microsoft Research, New York, New York, United States of America; Information Law Institute, New York University, New York, New York, United States of America
Emily Keller: Data & Society, New York, New York, United States of America; ORCID: http://orcid.org/0000-0001-9189-0421
Seeta Peña Gangadharan: Department of Media and Communications, London School of Economics, London, United Kingdom; ORCID: http://orcid.org/0000-0002-1955-3874
Alyssa Goodman: Harvard-Smithsonian Center for Astrophysics, Harvard University, Cambridge, Massachusetts, United States of America
Rachelle Hollander: Center for Engineering Ethics and Society, National Academy of Engineering, Washington, DC, United States of America
Barbara A. Koenig: Institute for Health Aging, University of California-San Francisco, San Francisco, California, United States of America
Jacob Metcalf: Ethical Resolve, Santa Cruz, California, United States of America; ORCID: http://orcid.org/0000-0002-2803-6625
Arvind Narayanan: Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America
Alondra Nelson: Department of Sociology, Columbia University, New York, New York, United States of America
Frank Pasquale: Carey School of Law, University of Maryland, Baltimore, Maryland, United States of America
Citation: Zook M, Barocas S, boyd d, Crawford K, Keller E, Gangadharan SP, et al. PLoS Comput...
Exogenous DNA can be a template to precisely edit a cell's genome. However, the delivery of in vitro-produced DNA to target cells can be inefficient, and low abundance of template DNA may underlie the low rate of precise editing. One potential tool to produce template DNA inside cells is a retron, a bacterial retroelement involved in phage defense. However, little effort has been directed at optimizing retrons to produce designed sequences. Here, we identify modifications to the retron non-coding RNA (ncRNA) that result in more abundant reverse-transcribed DNA (RT-DNA). By testing architectures of the retron operon that enable efficient reverse transcription, we find that gains in DNA production are portable from prokaryotic to eukaryotic cells and result in more efficient genome editing. Finally, we show that retron RT-DNA can be used to precisely edit cultured human cells. These experiments provide a general framework to produce DNA using retrons for genome modification.