The politics of scaling
Pfotenhauer, Sebastian; Laurent, Brice; Papageorgiou, Kyriaki ...
Social studies of science, 02/2022, Volume 52, Issue 1
Journal Article
Peer reviewed
Open access
A fixation on ‘scaling up’ has captured current innovation discourses and, with it, political and economic life at large. Perhaps most visible in the rise of platform technologies, big data and concerns about a new era of monopolies, scalability thinking has also permeated public policy in the search for solutions to ‘grand societal challenges’, ‘mission-oriented innovation’ or transformations through experimental ‘living labs’. In this paper, we explore this scalability zeitgeist as a key ordering logic of current initiatives in innovation and public policy. We are interested in how the explicit preoccupation with scalability reconfigures political and economic power by invading problem diagnoses and normative understandings of how society and social change function. The paper explores three empirical sites – platform technologies, living labs and experimental development economics – to analyze how scalability thinking is rationalized and operationalized. We suggest that social analysis of science and technology needs to come to terms with the ‘politics of scaling’ as a powerful corollary of the ‘politics of technology’, lest we accept the permanent absence from key sites where decisions about the future are made. We focus in on three constitutive elements of the politics of scaling: solutionism, experimentalism and future-oriented valuation. Our analysis seeks to expand our vocabulary for understanding and questioning current modes of innovation that increasingly value scaling as an end in itself, and to open up new spaces for alternative trajectories of social transformation.
Self-driving cars, a quintessentially ‘smart’ technology, are not born smart. The algorithms that control their movements are learning as the technology emerges. Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’ Focusing on the successes and failures of social learning around the much-publicized crash of a Tesla Model S in 2016, I argue that trajectories and rhetorics of machine learning in transport pose a substantial governance challenge. ‘Self-driving’ or ‘autonomous’ cars are misnamed. As with other technologies, they are shaped by assumptions about social needs, solvable problems, and economic opportunities. Governing these technologies in the public interest means improving social learning by constructively engaging with the contingencies of machine learning.
In this paper we argue that recent policy treatments of solar radiation management (SRM) have insufficiently addressed its potential implications for contemporary political systems. Exploring the emerging ‘social constitution’ of SRM, we outline four reasons why this is likely to pose immense challenges to liberal democratic politics: That the unequal distribution of and uncertainties about SRM impacts will cause conflicts within existing institutions; that SRM will act at the planetary level and necessitate autocratic governance; that the motivations for SRM will always be plural and unstable; and that SRM will become conditioned by economic forces.
When it comes to making decisions about artificial intelligence (AI), Eric Schmidt is very clear. In 2023, the former Google CEO told NBC’s Meet the Press, “there’s no way a nonindustry person can understand what is possible. It’s just too new, too hard, there’s not the expertise.” But if, as Schmidt believes, AI will be the next industrial revolution, then the technology is too important to be left to technology companies. AI poses huge challenges for democratic societies, and the decisions on it are currently being made by a very small group of people. Realizing the opportunities of AI, understanding its risks, and steering it toward the public interest will require a large dose of public participation.
There's a scene in the movie Oppenheimer in which the protagonist is trying to explain to General Groves, his military overseer, the hazards of their endeavor. Groves asks Oppenheimer, "Are you saying there's a chance that when we push that button, we destroy the world?" The physicist says, "The chances are near zero." When Groves, understandably alarmed, asks for clarification, Oppenheimer responds, "What do you want from theory alone?"
What does it mean to trust a technology?
Stilgoe, Jack
Science (American Association for the Advancement of Science), 12/2023, Volume 382, Issue 6676
Journal Article
Peer reviewed
A survey published in October 2023 revealed what seemed to be a paradox. Over the past decade, self-driving vehicles have improved immeasurably, but public trust in the technology is low and falling. Only 37% of Americans said they would be comfortable riding in a self-driving vehicle, down from 39% in 2022 and 41% in 2021. Those that have used the technology express more enthusiasm, but the rest have seemingly had their confidence shaken by the failure of the technology to live up to its hype.
We need a Weizenbaum test for AI
Stilgoe, Jack
Science (American Association for the Advancement of Science), 2023-08-11, Volume 381, Issue 6658
Journal Article
Peer reviewed
Alan Turing introduced his 1950 paper on Computing Machinery and Intelligence with the question "Can machines think?" But rather than engaging in what he regarded as never-ending subjective debate about definitions of intelligence, he instead proposed a thought experiment. His "imitation game" offered a test in which an evaluator held conversations with a human and a computer. If the evaluator failed to tell them apart, the computer could be said to have exhibited artificial intelligence (AI). In the decades since Turing's paper, AI has gone from being a fountain of scientific hype to an academic backwater to a gold rush. Throughout, the Turing test has given computer scientists a sense of direction: a quest for what Turing called a "universal machine." Although the debate continues about whether the Turing test is a reasonable measure of artificial intelligence, the real problem is that it asks the wrong question. AI is no longer an academic debate. It is a technological reality. For society to make good decisions about AI, we should instead look to another great late 20th-century computer scientist, Joseph Weizenbaum. In a 1972 paper, "On the impact of the computer on society," Weizenbaum argued that his fellow computer scientists should try to view their activities from the standpoint of a member of the public. Whereas computer scientists wonder how to get their technology to work and use "electronic wizardry" to make it safe, Weizenbaum argued that ordinary people would ask "is it good?" and "do we need these things?" As excitement builds about the possibilities of generative AI, rather than asking whether these machines are intelligent, we should instead ask whether they are useful.
The politics of autonomous vehicles
Stilgoe, Jack; Mladenović, Miloš
Humanities & social sciences communications, 12/2022, Volume 9, Issue 1
Journal Article
Peer reviewed
Open access
Self-driving, ‘autonomous’ vehicles (AVs) promise to change the world in profound ways. The suggested benefits include safety, efficiency and accessibility. However, researchers and others have been quick to raise questions about wider implications for mobility and urban environments and responsible development of the technology. In a discussion that has been dominated by science, engineering and narrow questions of ethics, there is a need to draw attention to the old questions of politics: Who wins? Who loses? Who decides? Who pays? AVs will not be defined by their supposed autonomy; they will be defined by a set of social relationships. The special collection that this paper accompanies brings together research from a range of disciplines to explore the politics of autonomous vehicles and provide a foundation for ongoing investigation.