AI and bias

The strengths and weaknesses of AI depend on data and processing power, but also on the quality of the algorithms employed. These are human constructions, designed under specific premises and for specific purposes. A challenge in applying such technology in social contexts is to understand how the implicit decisions behind the construction of an algorithm influence its results. This is an urgent challenge, because AI is increasingly widespread while the transparency of its algorithms is often lacking.

Massachusetts Institute of Technology (MIT) computer scientist Joy Buolamwini started digging into racial and gender-based bias embedded in technology after realising that the facial recognition software in her office could not detect her until she donned a white mask. Simply put, the software had been trained predominantly on pictures of white men, so it was unable to recognize Buolamwini’s features because it did not know they existed. Such a lack of information has serious implications: when this technology is employed by law enforcement agencies in public spaces to identify wanted criminals, it can lead to the arrest of innocent people mistaken for suspects.

In 2016, Buolamwini founded the Algorithmic Justice League, which released two landmark studies:

  • Gender Shades, in 2018, uncovered that commercial facial analysis software released by IBM, Microsoft and Face++ was less accurate when analyzing dark-skinned and feminine faces than light-skinned and masculine faces (Buolamwini & Gebru, 2018); a minimal illustration of this kind of subgroup audit is sketched after this list.
  • Voicing Erasure, in 2020, addressed racial bias in speech recognition algorithms (Koenecke et al., 2020).
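
The Gender Shades finding is essentially a measurement of accuracy gaps between intersectional subgroups. As a rough illustration only (not the method or data of the original study), the following Python sketch shows how such a subgroup audit can be expressed: predictions are grouped by hypothetical skin-type and gender attributes, and accuracy is reported per group. All records and attribute names are invented.

  # Illustrative subgroup accuracy audit; records, attribute names and
  # values are invented for this example, not taken from Gender Shades.
  from collections import defaultdict

  # Each record: (skin_type, gender, true_label, predicted_label)
  records = [
      ("lighter", "male",   "male",   "male"),
      ("lighter", "male",   "male",   "male"),
      ("lighter", "female", "female", "female"),
      ("darker",  "male",   "male",   "male"),
      ("darker",  "female", "female", "female"),
      ("darker",  "female", "female", "male"),   # misclassified
      ("darker",  "female", "female", "male"),   # misclassified
  ]

  correct = defaultdict(int)
  total = defaultdict(int)
  for skin, gender, truth, predicted in records:
      group = (skin, gender)
      total[group] += 1
      correct[group] += int(truth == predicted)

  for group in sorted(total):
      print(f"{group[0]} {group[1]}: accuracy {correct[group] / total[group]:.0%}")

With these toy numbers the audit reports 100% accuracy for every group except dark-skinned and feminine faces (33%), mirroring at a much smaller scale the kind of gap the study describes.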

Buolamwini is also known as the “poet of code”. In 2018 she delivered a powerful spoken-word piece, “AI, Ain’t I a Woman?”, highlighting the ways in which artificial intelligence can misinterpret the images of iconic black women: Oprah Winfrey, Serena Williams, Michelle Obama, Sojourner Truth, Ida B. Wells, and Shirley Chisholm.

Other examples can be found in learning analytics systems. The project Learning Analytics und Diskriminierung (LADi) examines bias on the basis of real systems. Its findings suggest that discrimination and a lack of fairness in such systems require constant, bias-sensitive human monitoring and adjustment (Riazy & Simbeck, 2019), which often exceeds the capabilities of the users, for instance in recruiting or learning contexts. A team from the project also found that a risk of unfair treatment arises when educators’ choices are too strongly influenced by a system’s recommendations – and they tend to trust a system even without enough information about how it works (Mai, Köchling, Wehner, 2021).

Data Bias

Distortion or simplification resulting from data production and data processing. P. Lopez distinguishes three forms of data bias.

Technical Bias

Discrepancy between what should be represented in the data and what was actually measured. For example, wrong postal codes or technically incorrect conclusions lead to disadvantages or discrimination...

Socio-technical Bias

Discrepancy between the dataset and the analysed phenomenon, for example caused by the structural over- or under-representation of certain groups in the data...

Social Bias

On the basis of a formally correct dataset, the analysis reproduces existing social discrimination. For example, automated hiring systems prefer men over women because of the existing gender composition of the staff (see the sketch below)...

Source: Lopez, P. (2021). Discrimination through Data Bias
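
To make the third case more concrete, here is a deliberately simplified Python sketch (all records and values are invented) of how a naive model that learns from past hiring decisions reproduces the historical gender skew, and how comparing selection rates per group can expose it.

  # Toy illustration of social bias: the dataset is formally correct, but it
  # records past discriminatory decisions, so a model that copies historical
  # patterns reproduces them. All values are invented for illustration.

  # Historical decisions: (gender, qualified, hired)
  history = [
      ("male",   True,  True), ("male",   True,  True),
      ("male",   False, True), ("female", True,  False),
      ("female", True,  True), ("female", True,  False),
  ]

  def hire_rate(records, gender):
      decisions = [hired for g, _, hired in records if g == gender]
      return sum(decisions) / len(decisions)

  # A system that simply mirrors these rates will keep recommending men
  # more often, even when candidates are equally qualified.
  for gender in ("male", "female"):
      print(f"historical hire rate ({gender}): {hire_rate(history, gender):.0%}")

Comparing such selection rates between groups (sometimes called a disparate-impact check) is one simple way to notice when an analysis is reproducing discrimination rather than measuring merit.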


References

  • Buolamwini, J. & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81: 77-91.
  • Lopez, P. (2021). Diskriminierung durch Data Bias. Künstliche Intelligenz kann soziale Ungleichheiten verstärken [Discrimination through data bias: artificial intelligence can reinforce social inequalities]. In: WZB Mitteilungen 171: Von Computern und Menschen. Die digitalisierte Gesellschaft, March 2021. WZB Berlin Social Science Center.
  • Koenecke, A.; Nam, A.; Lake, E.; Nudell, J.; Quartey, M.; Mengesha, Z.; Toups, C.; Rickford, J. R.; Jurafsky, D. (2020). Racial disparities in automated speech recognition. PNAS 117 (14): 7684-7689. https://doi.org/10.1073/pnas.1915768117
  • Mai, L.; Köchling, A. & Wehner, M. (2021). ‘This Student Needs to Stay Back’: To What Degree Would Instructors Rely on the Recommendation of Learning Analytics? In: Proceedings of the 13th International Conference on Computer Supported Education – Volume 1: CSEDU, pages 189-197. ISBN 978-989-758-502-9; ISSN 2184-5026. https://doi.org/10.5220/0010449401890197
  • Riazy, S. & Simbeck, K. (2019). Predictive Algorithms in Learning Analytics and their Fairness. In: Pinkwart, N. & Konert, J. (Eds.), DELFI 2019, pp. 223-228. Bonn: Gesellschaft für Informatik e.V. https://doi.org/10.18420/delfi2019_305




Valentina Vivona

Researcher at the Osservatorio Balcani Caucaso Transeuropa (OBCT), a think tank based in Trento (Italy) that focuses on South-East Europe, Turkey and the Caucasus.


Handbook for Facilitators: Learning the Digital


This text was published in: M. Oberosler (ed.), E. Rapetti (ed.), N. Zimmermann (ed.), G. Pirker, I. Carvalho, G. Briz, V. Vivona (2021/22). Learning the Digital. Competendo Handbook for Facilitators.

Created in the frame of the project DIGIT-AL - Digital Transformation Adult Learning for Active Citizenship.

Logos: DARE Network, Erasmus+.