Full text
Peer reviewed
  • Think Your Artificial Intelligence Software Is Fair? Think Again
    Bellamy, Rachel K.E.; Dey, Kuntal; Hind, Michael; Hoffman, Samuel C.; Houde, Stephanie; Kannan, Kalapriya; Lohia, Pranay; Mehta, Sameep; Mojsilovic, Aleksandra; Nagar, Seema; Ramamurthy, Karthikeyan Natesan; Richards, John; Saha, Diptikalyan; Sattigeri, Prasanna; Singh, Moninder; Varshney, Kush R.; Zhang, Yunfeng

    IEEE Software, 07/2019, Volume: 36, Issue: 4
    Journal Article

    Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in ways similar to humans and often in different ways. While fair model-assisted decision making involves more than the application of unbiased models (consideration of the application context, the specifics of the decisions being made, the resolution of conflicting stakeholder viewpoints, and so forth), mitigating bias in machine-learning software is important and possible, but difficult and too often ignored.
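    The bias the abstract refers to is usually quantified with group-fairness metrics computed over a model's decisions. The sketch below is a minimal illustration, assuming a made-up set of binary decisions and a binary protected attribute; it shows two common measures (statistical parity difference and the disparate impact ratio) and is not tied to the authors' own tooling.

    ```python
    import numpy as np

    # Hypothetical toy data: group membership (1 = privileged, 0 = unprivileged)
    # and the model's favorable decision (1 = approved) for eight individuals.
    group = np.array([1, 1, 1, 1, 0, 0, 0, 0])
    approved = np.array([1, 1, 1, 0, 1, 0, 0, 0])

    # Rate of favorable outcomes within each group.
    rate_priv = approved[group == 1].mean()
    rate_unpriv = approved[group == 0].mean()

    # Statistical parity difference (unbiased: 0) and disparate impact ratio (unbiased: 1).
    statistical_parity_difference = rate_unpriv - rate_priv
    disparate_impact = rate_unpriv / rate_priv

    print(f"Statistical parity difference: {statistical_parity_difference:+.2f}")
    print(f"Disparate impact ratio:        {disparate_impact:.2f}")
    ```

    On this toy data the favorable-outcome rate is 0.75 for the privileged group and 0.25 for the unprivileged group, giving a parity difference of -0.50 and a disparate impact ratio of 0.33, i.e. a model that would be flagged as biased under either measure.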