Attention in the AI safety community has increasingly turned to strategic considerations of coordination between relevant actors in the fields of AI and AI safety, in addition to the steadily growing work on the technical considerations of building safe AI systems. This shift has several drivers: multiplier effects, pragmatism, and urgency. Given the benefits of coordination between those working towards safe superintelligence, this book surveys promising research in the emerging field of AI safety coordination. On a meta-level, the hope is that this book can serve as a map to inform those working on AI coordination about other promising efforts. While this book focuses on AI safety coordination, coordination matters for most other known existential risks (e.g., biotechnology risks) and for future, human-made existential risks. Thus, while most coordination strategies in this book are specific to superintelligence, we hope that some insights yield “collateral benefits” for the reduction of other existential risks by creating an overall civilizational framework that increases robustness, resiliency, and antifragility.
Smart cities and artificial intelligence (AI) are among the most popular discourses in urban policy circles. Most attempts at using AI to improve efficiency in cities have nevertheless either struggled or failed to accomplish the smart city transformation. This is mainly due to short-sighted, technologically determinist and reductionist AI approaches being applied to complex urbanization problems. Moreover, as smart cities are underpinned by our ability to engage with our environments, analyze them, and make efficient, sustainable and equitable decisions, the need for a green AI approach is intensified. This perspective paper, reflecting the authors’ opinions and interpretations, concentrates on the “green AI” concept as an enabler of the smart city transformation, as it offers the opportunity to move away from purely technocentric efficiency solutions towards efficient, sustainable and equitable solutions capable of realizing the desired urban futures. The aim of this perspective paper is two-fold: first, to highlight the fundamental shortfalls in mainstream AI system conceptualization and practice, and second, to advocate the need for a consolidated AI approach—i.e., green AI—to further support the smart city transformation. The methodological approach includes a thorough appraisal of the current AI and smart city literatures, practices, developments, trends and applications. The paper informs authorities and planners on the importance of the adoption and deployment of AI systems that address efficiency, sustainability and equity issues in cities.
"Exposes the vast gap between the actual science
underlying AI and the dramatic claims being made for it." -John
Horgan "If you want to know about AI, read this book…It
shows how a supposedly ...futuristic reverence for Artificial
Intelligence retards progress when it denigrates our most
irreplaceable resource for any future progress: our own human
intelligence." -Peter Thiel Ever since Alan Turing, AI enthusiasts
have equated artificial intelligence with human intelligence. A
computer scientist working at the forefront of natural language
processing, Erik Larson takes us on a tour of the landscape of AI
to reveal why this is a profound mistake. AI works on inductive
reasoning, crunching data sets to predict outcomes. But humans
don't correlate data sets. We make conjectures, informed by context
and experience. And we haven't a clue how to program that kind of
intuitive reasoning, which lies at the heart of common sense.
Futurists insist AI will soon eclipse the capacities of the most
gifted mind, but Larson shows how far we are from
superintelligence-and what it would take to get there. "Larson
worries that we're making two mistakes at once, defining human
intelligence down while overestimating what AI is likely to
achieve…Another concern is learned passivity: our tendency to
assume that AI will solve problems and our failure, as a result, to
cultivate human ingenuity." -David A. Shaywitz, Wall Street
Journal "A convincing case that artificial general
intelligence-machine-based intelligence that matches our own-is
beyond the capacity of algorithmic machine learning because there
is a mismatch between how humans and machines know what they know."
-Sue Halpern, New York Review of Books
Human-AI Collaboration in Data Science. Wang, Dakuo; Weisz, Justin D.; Muller, Michael ...
Proceedings of the ACM on Human-Computer Interaction, 11/2019, Volume 3, Issue CSCW
Journal Article, Peer Reviewed
The rapid advancement of artificial intelligence (AI) is changing our lives in many ways. One application domain is data science. New techniques in automating the creation of AI, known as AutoAI or AutoML, aim to automate the work practices of data scientists. AutoAI systems are capable of autonomously ingesting and pre-processing data, engineering new features, and creating and scoring models based on target objectives (e.g., accuracy or run-time efficiency). Though not yet widely adopted, we are interested in understanding how AutoAI will impact the practice of data science. We conducted interviews with 20 data scientists who work at a large, multinational technology company and practice data science in various business settings. Our goal is to understand their current work practices and how these practices might change with AutoAI. Reactions were mixed: while informants expressed concerns about the trend of automating their jobs, they also strongly felt it was inevitable. Despite these concerns, they remained optimistic about their future job security due to a view that the future of data science work will be a collaboration between humans and AI systems, in which both automation and human expertise are indispensable.
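The AutoAI workflow described in the abstract (ingest data, preprocess, engineer features, then create and score candidate models against a target objective) can be sketched in miniature. This is a hypothetical illustration using scikit-learn, not the system studied in the paper; the toy dataset, the two-model candidate set, and accuracy as the objective are all assumptions made for the example.

```python
# Minimal sketch of an AutoAI-style search loop (illustrative only):
# ingest data, preprocess it, and score candidate models against a
# target objective (here, cross-validated accuracy).
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# "Ingest" a toy dataset standing in for the user's data.
X, y = load_iris(return_X_y=True)

# Candidate pipelines an automated search would iterate over;
# preprocessing (scaling) is folded into the pipeline itself.
candidates = {
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=200)),
    "tree": DecisionTreeClassifier(max_depth=3, random_state=0),
}

# Score every candidate on the target objective and keep the best.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

A real AutoAI system would additionally search over feature transformations and hyperparameters, but the select-by-objective loop is the same shape.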
Prior research highlights the critical role of AI in enhancing second language (L2) learning. However, the factors that practically affect L2 learners' engagement with AI resources remain underexplored. Given the widespread availability of digital devices among college students, they are particularly poised to benefit from AI-assisted L2 learning. As such, this study, grounded in an extended Technology Acceptance Model (TAM), investigates the predictors of college L2 learners' actual use of AI tools, focusing on AI self-efficacy, AI-related anxiety, and their overall attitude toward AI. Data were gathered from 429 L2 learners at Chinese universities via an online questionnaire, utilizing four established scales. Through structural equation modeling (SEM) via AMOS 24, the results indicate that AI self-efficacy negatively affects AI anxiety and positively influences both learners' attitude toward AI and their actual use of AI tools. In addition, AI anxiety negatively predicted the actual use of AI. Moreover, AI self-efficacy was a positive predictor of AI use through reducing AI anxiety, enhancing attitude toward AI, or a combination of both. This study also discusses the theoretical and pedagogical implications and suggests directions for future research.