This article uses a socio-legal perspective to analyze the use of ethics guidelines as a governance tool in the development and use of artificial intelligence (AI). This has become a central policy area in several large jurisdictions, including China and Japan, as well as the EU, which is the focus here. Particular emphasis is placed on the Ethics Guidelines for Trustworthy AI published by the EU Commission’s High-Level Expert Group on Artificial Intelligence in April 2019, as well as the White Paper on AI, published by the EU Commission in February 2020. The guidelines are considered in relation to partially overlapping, already-existing legislation as well as the ephemeral conceptual construct surrounding AI as such. The article concludes by pointing to (1) the challenges of a temporal discrepancy between technological and legal change, (2) the need to move from principle to process in the governance of AI, and (3) the multidisciplinary needs in the study of contemporary applications of data-dependent AI.
This conceptual paper addresses issues of transparency as linked to artificial intelligence (AI) from socio-legal and computer-science perspectives. Firstly, we discuss the conceptual distinction between transparency in AI and algorithmic transparency, and argue for the wider notion of transparency ‘in AI’ as a partly contested but useful concept. Secondly, we show that transparency as a general concept is multifaceted and has seen widespread theoretical use in multiple disciplines over time, particularly since the 1990s; it has nevertheless had a resurgence in contemporary notions of AI governance, such as in the multitude of recently published ethics guidelines on AI. Thirdly, we discuss and show the relevance of the fact that transparency expresses a conceptual metaphor of more general significance, linked to knowing, bringing positive connotations that may have normative effects on regulatory debates. Finally, we draw a possible categorisation of aspects related to transparency in AI, or what we interchangeably call AI transparency, and argue for the need to develop a multidisciplinary understanding in order to contribute to the governance of AI as applied in markets and in society.
The present article argues that the fact that personal data holds great value, in combination with a lack of transparency in its commercial use, leads to a need for consumer policy that strengthens consumer protection. The widespread practice of user agreements and consent-based regulation of personal data collection is not satisfactory for balancing these information-asymmetric markets. The lack of transparency deriving from the complex and massive datafication of consumers – where consumers are profiled, data is brokered, and algorithmically automated decision-making is opaque – speaks to the need for improved supervision at a more structural level, above and beyond the individual consumer's choices, preferably by more active consumer protection authorities.
Despite numerous studies of geographic variation in healthcare cost and utilization at the local, regional, and state levels across the U.S., a comprehensive characterization of geographic variation in outcomes has not been published. Our objective was to quantify variation in US health outcomes in an all-payer population before and after risk-adjustment.
We used information from 16 independent data sources, including 22 million all-payer inpatient admissions from the Healthcare Cost and Utilization Project (which covers regions where 50% of the U.S. population lives), to analyze 24 inpatient mortality, inpatient safety, and prevention outcomes. We compared outcome variation at the state, hospital referral region, hospital service area, county, and hospital levels. Risk-adjusted outcomes were calculated after adjusting for population factors, co-morbidities, and health system factors. Even after risk-adjustment, large geographical variation in outcomes remains. The variation in healthcare outcomes exceeds the well-publicized variation in US healthcare costs. On average, we observed a 2.1-fold difference in risk-adjusted mortality outcomes between top- and bottom-decile hospitals; for example, a 2.3-fold difference for risk-adjusted acute myocardial infarction inpatient mortality. On average, a 10.2-fold difference in risk-adjusted patient safety outcomes exists between top- and bottom-decile hospitals, including an 18.3-fold difference for risk-adjusted Central Venous Catheter Bloodstream Infection rates. A 3.0-fold difference in prevention outcomes exists between top- and bottom-decile counties on average, including a 2.2-fold difference for risk-adjusted congestive heart failure admission rates. Population, co-morbidity, and health system factors accounted for 18–64% of the variability (R²) in mortality outcomes, 3–39% of the variability in patient safety outcomes, and 22–70% of the variability in prevention outcomes.
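As a rough illustration of the decile comparison reported above, the following sketch computes a top-/bottom-decile fold-difference from a set of hospital-level risk-adjusted rates. The data, distribution, and function name are hypothetical and are not taken from the study.

```python
# Minimal sketch (not the study's code): computing a top-/bottom-decile
# fold-difference in risk-adjusted mortality rates across hospitals.
# The input data and distribution parameters are hypothetical.
import numpy as np
import pandas as pd

def decile_fold_difference(rates: pd.Series) -> float:
    """Ratio of the mean rate in the worst decile to the mean rate in the best decile."""
    lo, hi = rates.quantile([0.1, 0.9])
    best = rates[rates <= lo].mean()    # top-decile (lowest-mortality) hospitals
    worst = rates[rates >= hi].mean()   # bottom-decile (highest-mortality) hospitals
    return worst / best

# Example with simulated risk-adjusted inpatient mortality rates per hospital
rng = np.random.default_rng(0)
rates = pd.Series(rng.lognormal(mean=-3.0, sigma=0.4, size=500))
print(f"Fold difference: {decile_fold_difference(rates):.1f}")
```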
The amount of variability in health outcomes in the U.S. is large even after accounting for differences in population, co-morbidities, and health system factors. These findings suggest that: 1) additional examination of regional and local variation in risk-adjusted outcomes should be a priority; 2) assumptions of uniform hospital quality that underpin the rationale for policy choices (such as narrow insurance networks or antitrust enforcement) should be challenged; and 3) there exists substantial opportunity for outcomes improvement in the US healthcare system.
Soil type mapping and the spatial variation of soil classes are essential concerns in both geotechnical and geoenvironmental engineering. Because conventional soil mapping systems are time-consuming and costly, alternative methods that are quick and cheap yet accurate need to be developed. In this paper, a new optimized multi-output generalized feed-forward neural network (GFNN) structure using 58 piezocone penetration test (CPTu) points is developed for producing a digital soil type map in the southwest of Sweden. The introduced GFNN architecture is supported by a generalized shunting neuron (GSN) computing unit to increase its capability to form nonlinear boundaries between classified patterns. The comparison conducted between known soil type classification charts, CPTu interpretation procedures, and the outcomes of the GFNN model indicates acceptable accuracy in estimating complex soil types. The results show that the predictive capability of the GFNN system offers a valuable tool for soil type pattern classification and for providing soil profiles.
In the context of geo-infrastructures, and specifically tunneling projects, analyzing large-scale sensor-based measurement-while-drilling (MWD) data plays a pivotal role in assessing rock engineering conditions. However, handling big MWD data, due to its multiform stacking, is a time-consuming and challenging task. Extracting valuable insights and improving the accuracy of geoengineering interpretations from MWD data necessitates a combination of domain expertise and data science skills in an iterative process. To address these challenges and efficiently normalize and filter out noisy data, an automated processing approach integrating the stepwise technique, mode, and percentile gate bands for both single and peer-group-based holes was developed. Subsequently, the mathematical concept of a novel normalizing index for classifying such big datasets was also presented. The visualized results from different geo-infrastructure datasets in Sweden indicated that outliers and noisy data can be eliminated more efficiently using single hole-based normalizing. Additionally, a unified relational PostgreSQL database was created to store and automatically transfer the processed and raw MWD as well as real-time grouting data, offering a cost-effective and efficient data extraction tool. The generated database is expected to facilitate in-depth investigations and enable the application of artificial intelligence (AI) techniques to predict rock quality conditions and design appropriate support systems based on MWD data.
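A minimal sketch of what single-hole percentile gate-band filtering could look like is given below. The 5th/95th percentile limits, column names, and table name are assumptions for illustration and do not reproduce the project's actual pipeline.

```python
# Minimal sketch (not the project's pipeline): filtering noisy MWD samples
# within each single hole using percentile gate bands. Band limits and
# column names are assumptions for illustration.
import pandas as pd

def filter_hole(df: pd.DataFrame, cols=("penetration_rate", "feed_pressure"),
                low=0.05, high=0.95) -> pd.DataFrame:
    """Keep rows whose values fall inside the per-hole percentile band for every column."""
    keep = pd.Series(True, index=df.index)
    for col in cols:
        lo, hi = df[col].quantile([low, high])
        keep &= df[col].between(lo, hi)
    return df[keep]

# Applied hole by hole (single hole-based normalizing, as favoured in the abstract):
# mwd = pd.read_sql("SELECT * FROM mwd_raw", connection)   # hypothetical table
# cleaned = mwd.groupby("hole_id", group_keys=False).apply(filter_hole)
```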
Just an ordinary Jew. Larsson, Stefan. Nordisk judaistik, 11/2018, Volume 29, Issue 2. Journal article, peer-reviewed, open access.
The apostle Paul, author of many letters in the New Testament, is often considered to be the father of Christian antisemitism and a staunch opponent of keeping the Torah. This perspective was shared by both Jews and Christians throughout the centuries, until the late twentieth century. For the last forty years or so, a new paradigm on Paul has taken shape, one in which Jewish scholarship and research on ancient Judaism is making a significant difference. The picture of a Second Temple-period Pharisee is emerging, possibly with connections to early forms of Merkabah mysticism. There are no longer any reasons other than ‘tradition’ why Paul should not be part of Jewish studies, and this article gives some of the arguments for this timely re-appropriation of one of the best-known Jews in history.
Social and economic change in the built environment is increasingly driven by processes of datafication. These often find expression through smartphone apps and private platforms that seek to upset the status quo by mediating consumer and producer interactions, and by monetising the data these produce. This paper uses the practice-oriented concept of ‘disruptive data’ to draw attention away from specific technologies and towards the broader political-economic logics that underlie them. In so doing, disruption is reframed as a capitalist strategy for creating and capitalising on uncertainty. The rapid change to Dublin’s taxi industry over the past decade illustrates these dynamics. By following how ride-hailing apps, most notably Hailo, were introduced into and affected the city, the importance of regulatory context, but also of wider flows of data and capital, is stressed. Data disruptions occur not at the level of the app or platform, but at the level of the economic relations in which they are embedded. By paying attention to the historical details of data disruption, the specificities of change processes are revealed without losing track of their broader economic function.
Due to associated uncertainties, modelling the spatial distribution of depth to bedrock (DTB) is an important and challenging concern in many geo-engineering applications. The association between DTB and the safety and economy of designed structures implies that generating more precise predictive models is of vital interest. In the present study, the challenge of applying an optimally predictive three-dimensional (3D) spatial DTB model for an area in Stockholm, Sweden was addressed using an automated intelligent computing design procedure. The process was developed and programmed in both C++ and Python to track their performance in specified tasks and also to cover a wide variety of different internal characteristics and libraries. In comparison to the ordinary kriging (OK) geostatistical tool, the superiority of the developed automated intelligence system was demonstrated through the analysis of confusion matrices and the ranked accuracies of different statistical errors. The results showed that, in the absence of measured data, the intelligent models, as a flexible and efficient alternative approach, can account for associated uncertainties, thus creating more accurate spatial 3D models and providing an appropriate prediction at any point in the subsurface of the study area.
• Two automated deep learning intelligence models were developed in C++ and Python.
• A 3D pattern of subsurface geospatial DTB distributions for Stockholm, Sweden was presented.
• The visualised predicted model using C++ showed more accurate results than Python and OK.
• Flaws of OK in modelling the limited and heterogeneously distributed data were demonstrated.
• The capacity of intelligence models for subsurface DTB characterization was improved.
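As a generic illustration of the kind of data-driven spatial DTB prediction discussed in the abstract above, the sketch below fits an off-the-shelf regressor to simulated borehole coordinates and reports a simple error statistic. It stands in for neither the study's C++/Python system nor its ordinary kriging baseline; all data are synthetic placeholders.

```python
# Minimal sketch (not the study's system): predicting depth to bedrock (DTB)
# from borehole coordinates with a generic regressor, scored with a simple
# error statistic. Coordinates and depths are simulated placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1000, size=(300, 2))                 # easting/northing (m), simulated
dtb = 5 + 0.01 * xy[:, 0] + rng.normal(0, 1.0, 300)      # synthetic DTB surface (m)

X_train, X_test, y_train, y_test = train_test_split(xy, dtb, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"MAE: {mean_absolute_error(y_test, model.predict(X_test)):.2f} m")
```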