Artificial Intelligence (AI)-based systems are widely employed nowadays to make decisions that have far-reaching impacts on individuals and society. Their decisions might affect everyone, everywhere, and anytime, entailing concerns about potential human rights issues. Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training, and deployment to ensure social good while still benefiting from the huge potential of AI technology. The goal of this survey is to provide a broad multidisciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions, and to suggest new research directions towards approaches well-grounded in a legal frame. In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful machine learning algorithms. If not otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features such as race, sex, and so forth.
This article is categorized under:
Commercial, Legal, and Ethical Issues > Fairness in Data Mining
Commercial, Legal, and Ethical Issues > Ethical Considerations
Commercial, Legal, and Ethical Issues > Legal Issues
Figure: Overview of topics related to bias in data-driven AI systems discussed in this survey.
The proliferation of artificial intelligence systems and their reliance on massive datasets have led to a renewed demand for data privacy. Both the need for large-scale data processing and the associated data privacy demand have led to the development of techniques such as Federated Learning, a distributed machine learning technique with privacy preservation built in. Within Federated Learning, as with other machine learning based techniques, the challenge of ensuring that the decisions being made are fair and equitable to all users is paramount. This paper presents an up-to-date review of the motivations, concepts, characteristics, challenges, and techniques/methods related to fairness in Federated Learning from the literature. It also highlights open challenges and future research directions in evaluating and enforcing fairness in Federated Learning systems.
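The abstract above centers on Federated Learning, in which clients train locally and share only model parameters for the server to aggregate. As an illustrative sketch (not code from the paper; the toy parameter vectors and the function name are invented for this example), the core Federated Averaging step weights each client's parameters by its local dataset size:

```python
# Minimal sketch of the Federated Averaging (FedAvg) aggregation step:
# the server combines client parameter vectors, weighted by how much
# local data each client trained on. All values here are toy examples.

def fed_avg(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three clients with different amounts of local data.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
global_model = fed_avg(weights, sizes)
print(global_model)  # [4.0, 5.0]
```

Note how the weighting itself raises the fairness questions the paper surveys: clients with little data (here, the first client) contribute almost nothing to the global model.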
Algorithmic fairness in social context. Huang, Yunyou; Liu, Wenjing; Gao, Wanling; et al.
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, September 2023, Volume 3, Issue 3
Journal Article
Peer-reviewed
Open access
Algorithmic fairness research is currently receiving significant attention, aiming to ensure that algorithms do not discriminate between different groups or individuals with similar characteristics. However, with the popularization of algorithms in all aspects of society, algorithms have changed from mere instruments to social infrastructure. For instance, facial recognition algorithms are widely used to provide user verification services and have become an indispensable part of many social infrastructures like transportation, health care, etc. As an instrument, an algorithm needs to pay attention to the fairness of its behavior. However, as a social infrastructure, it needs to pay even more attention to its impact on social fairness; otherwise, it may exacerbate existing inequities or create new ones. For example, if an algorithm treats all passengers equally and eliminates special seats for pregnant women in the interest of fairness, it will increase the risk to pregnant women taking public transport and indirectly damage their right to fair travel. Therefore, algorithms have a responsibility to ensure social fairness, not just fairness within their own operations. It is now time to expand the concept of algorithmic fairness beyond mere behavioral equity, assess algorithms in a broader societal context, and examine whether they uphold and promote social fairness. This article analyzes the current status and challenges of algorithmic fairness from three key perspectives: fairness definitions, fairness datasets, and fairness algorithms. Furthermore, potential directions and strategies to promote algorithmic fairness are proposed.
• We summarize the current state of algorithmic fairness and research directions.
• We point out problems with existing research on algorithmic fairness.
• We suggest researching algorithmic fairness as a societal infrastructure.
A survey on datasets for fairness-aware machine learning. Le Quy, Tai; Roy, Arjun; Iosifidis, Vasileios; et al.
Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, May/June 2022, Volume 12, Issue 3
Journal Article
Peer-reviewed
Open access
As decision-making increasingly relies on machine learning (ML) and (big) data, the issue of fairness in data-driven artificial intelligence systems is receiving increasing attention from both research and industry. A large variety of fairness-aware ML solutions have been proposed, involving fairness-related interventions in the data, learning algorithms, and/or model outputs. However, a vital part of proposing new approaches is evaluating them empirically on benchmark datasets that represent realistic and diverse settings. Therefore, in this paper, we overview real-world datasets used for fairness-aware ML. We focus on tabular data as the most common data representation for fairness-aware ML. We start our analysis by identifying relationships between the different attributes, particularly with respect to the protected attributes and the class attribute, using a Bayesian network. For a deeper understanding of bias in the datasets, we investigate interesting relationships using exploratory analysis.
This article is categorized under:
Commercial, Legal, and Ethical Issues > Fairness in Data Mining
Fundamental Concepts of Data and Knowledge > Data Concepts
Technologies > Data Preprocessing
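The dataset survey above relies on exploratory analysis of how a protected attribute relates to the class attribute. A minimal sketch of one such check (the records, attribute names, and helper function are invented for illustration, not taken from the survey) is to compare positive-class rates across groups:

```python
# Exploratory bias check on a tabular dataset: compare the fraction of
# positive labels across the values of a protected attribute.
# The toy records below are hypothetical.

def positive_rate_by_group(records, protected, label):
    """Return {group_value: fraction of records with label == 1}."""
    counts, positives = {}, {}
    for r in records:
        g = r[protected]
        counts[g] = counts.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if r[label] == 1 else 0)
    return {g: positives[g] / counts[g] for g in counts}

# Toy dataset: 'sex' is the protected attribute, 'approved' the class label.
data = [
    {"sex": "F", "approved": 1}, {"sex": "F", "approved": 0},
    {"sex": "F", "approved": 0}, {"sex": "F", "approved": 0},
    {"sex": "M", "approved": 1}, {"sex": "M", "approved": 1},
    {"sex": "M", "approved": 1}, {"sex": "M", "approved": 0},
]
print(positive_rate_by_group(data, "sex", "approved"))  # {'F': 0.25, 'M': 0.75}
```

A large gap between groups, as in this toy data, is the kind of relationship the survey probes more rigorously with Bayesian networks.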
An increasing number of decisions regarding the daily lives of human beings are being controlled by artificial intelligence and machine learning (ML) algorithms, in spheres ranging from healthcare, transportation, and education to college admissions, recruitment, the provision of loans, and many more realms. Since they now touch on many aspects of our lives, it is crucial to develop ML algorithms that are not only accurate but also objective and fair. Recent studies have shown that algorithmic decision making may be inherently prone to unfairness, even when there is no intention for it. This article presents an overview of the main concepts of identifying, measuring, and improving algorithmic fairness when using ML algorithms, focusing primarily on classification tasks. The article begins by discussing the causes of algorithmic bias and unfairness and the common definitions and measures of fairness. Fairness-enhancing mechanisms are then reviewed and divided into pre-process, in-process, and post-process mechanisms. A comprehensive comparison of the mechanisms is then conducted, toward a better understanding of which mechanisms should be used in different scenarios. The article ends by reviewing several emerging research sub-fields of algorithmic fairness beyond classification.
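The overview above discusses measures of fairness for classification. One of the simplest such measures is statistical parity difference: the gap in positive-prediction rates between two demographic groups. This sketch (with hypothetical predictions and group labels, not data from the article) shows how it is computed from model outputs:

```python
# Statistical parity difference for binary classification:
# P(pred = 1 | group_a) - P(pred = 1 | group_b).
# A value of 0 means both groups receive positive predictions at the same rate.
# Predictions and group labels below are hypothetical.

def statistical_parity_difference(preds, groups, group_a, group_b):
    """Gap in positive-prediction rates between group_a and group_b."""
    def rate(g):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(selected) / len(selected)
    return rate(group_a) - rate(group_b)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(statistical_parity_difference(preds, groups, "a", "b"))  # 0.5
```

Pre-process, in-process, and post-process mechanisms reviewed in the article differ in where they intervene to shrink such a gap: in the training data, in the learning objective, or in the model's outputs.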
Members of a supply chain often make profit comparisons. A retailer exhibits peer-induced fairness concerns when his own profit is behind that of a peer retailer interacting with the same supplier. In addition, a retailer exhibits distributional fairness concerns when his supplier's share of total profit is larger than his own. While existing research focuses exclusively on distributional fairness concerns, this study investigates how both types of fairness might interact and influence economic outcomes in a supply chain. We consider a one-supplier, two-retailer supply chain setting, and we show that (i) in the presence of distributional fairness alone, the wholesale price offer is lower than the standard wholesale price offer; (ii) in the presence of both types of fairness, the second wholesale price is higher than the first wholesale price; and (iii) in the presence of both types of fairness, the second retailer makes a lower profit and has a lower share of the total supply chain profit than the first retailer. We run controlled experiments with subjects motivated by substantial monetary incentives and show that subject behaviors are consistent with the model predictions. Structural estimation on the data suggests that peer-induced fairness is more salient than distributional fairness.
Machines increasingly decide over the allocation of resources or tasks among people, resulting in what we call Machine Allocation Behavior. People respond strongly to how other people or machines allocate resources. However, the implications for human relationships of algorithmic allocations of, for example, tasks among crowd workers, annual bonuses among employees, or a robot's gaze among members of a group entering a store remain unclear. We leverage a novel research paradigm to study the impact of machine allocation behavior on fairness perceptions, interpersonal perceptions, and individual performance. In a 2 × 3 between-subject design that manipulates how the allocation agent is presented (human vs. artificial intelligence (AI) system) and the allocation type (receiving less vs. equal vs. more resources), we find that group members who receive more resources perceive their counterpart as less dominant when the allocation originates from an AI as opposed to a human. Our findings have implications for our understanding of the impact of machine allocation behavior on interpersonal dynamics and for the way in which we understand human responses towards this type of machine behavior.
• Receiving more resources from an AI shapes how dominant group members are perceived.
• Collaborative Tetris is an effective platform for exploring fairness in groups.
• Fairness is better understood as a dynamic phenomenon that develops over time.
• A machine's allocation behavior is crucial to understanding its impact on groups.