Abstract: Schneider, Diana & Siebert, Scarlet (2019, May 6–7): Session Proposal: Applying artificial intelligence on vulnerable target groups: chances and challenges, 18th Annual STS Conference Graz 2019, Graz. [Tandem 5], [Tandem 4]

Digitalization in general and artificial intelligence (AI) in particular, e.g. applications of big data analytics and robotics, are radically changing society. This applies not only to the world of industry and politics, but increasingly also to social services such as education and healthcare, where vulnerable groups like children, the elderly, or people with disabilities are targeted. In this context, societal challenges such as demographic change serve as powerful narratives for a technology push that is supposed to foster the self-determination, participation, and equality of these groups. For instance, smart home applications are meant to allow the elderly to stay in their familiar environment longer (Wessling, 2013), while social robots are supposed to foster the participation of children with special needs in educational settings (Dautenhahn et al., 2009; Kim et al., 2013). Based on big data analytics, unemployed people are to receive adequate offers matching their job opportunities (Fanta, 2018), and refugees are to receive sufficient health treatment (Baeck, 2017). Furthermore, risks to the welfare of children are to be identified at an early stage (e.g. Gillingham & Graham, 2016). At the same time, the question arises whether technology might transfer social disparities into the digital world. For instance, algorithms for predictive policing appear to replicate inequality because they are trained on biased data, which leads to ethnic and religious minorities being accused more often than the white majority (e.g. Tayebi & Glässer, 2018; Datta et al., 2015). Living in a socially deprived neighbourhood in the analogue world translates into a poor digital score, which may in turn lead to punishments executed in the analogue world. Although AI is already being used in highly sensitive areas such as kindergartens, welfare state institutions, and public authorities, its effects on these areas have hardly been researched, if at all, and the assessment of its advantages and disadvantages is still in its infancy. This session therefore seeks to discuss the challenges and chances of applying AI to vulnerable target groups, which can serve as a "burning glass" for the current state and future trends of possibilities to experience self-determination, participation, and equality in a digital society. These groups include, e.g., children, the elderly, people with disabilities, unemployed people, as well as refugees.
By taking into account different disciplines, the session follows the concept of integrated research (Stubbe, 2018), which might enable a broader view on the impact of technology on individuals (micro level) and institutions (macro level) and help answer the following questions systematically (Manzeschke et al., 2013): In which ways is the application of artificially intelligent technologies ethically questionable with respect to a certain target group? Which ethical challenges emerge from the application of these technologies? How can these challenges be mitigated or even resolved? To answer these questions, we would like to focus on conceptual and theoretical work. However, empirical findings that report on challenges or solutions concerning the application of artificially intelligent technologies to vulnerable target groups are welcome as well.


References:


Baeck, J.-P. (2017, May 29). Überwachungssoftware für Geflüchtete: Der gläserne Flüchtling. Die Tageszeitung: taz. Retrieved from https://www.taz.de/!5409816/

Datta, A., et al. (2015). Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination. Proceedings on Privacy Enhancing Technologies, (1), 92–112.

Dautenhahn, K., Nehaniv, C. L., Walters, M. L., Robins, B., Kose-Bagci, H., Mirza, N. A., & Blow, M. (2009). KASPAR – a minimally expressive humanoid robot for human-robot interaction research. https://doi.org/10.1080/11762320903123567

Fanta, A. (2018, October 13). Österreichs Jobcenter richten künftig mit Hilfe von Software über Arbeitslose. Retrieved 23 October 2018, from https://netzpolitik.org/2018/oesterreichs-jobcenter-richten-kuenftig-mit-hilfe-von-software-ueber-arbeitslose/

Gillingham, P., & Graham, T. (2016). "Big Data" in social work: The development of a critical perspective on social work's latest "electronic turn". Australian Social Work, March 2016.

Kim, E. S., Berkovits, L. D., Bernier, E. P., Leyzberg, D., Shic, F., Paul, R., & Scassellati, B. (2013). Social robots as embedded reinforcers of social behavior in children with autism. Journal of Autism and Developmental Disorders, 43(5), 1038–1049. https://doi.org/10.1007/s10803-012-1645-2

Manzeschke, A., Weber, K., Rother, E., & Fangerau, H. (2013). Ergebnisse der Studie „Ethische Fragen im Bereich Altersgerechter Assistenzsysteme" (new ed.). Berlin: VDI.

Stubbe, J. (2018). Innovationsimpuls „Integrierte Forschung". Diskussionspapier des BMBF-Forschungsprogramms „Technik zum Menschen bringen". Berlin: VDI/VDE Innovation + Technik GmbH. Retrieved from https://www.technik-zum-menschen-bringen.de/dateien/service/veranstaltungen/diskussionspapier-integrierte-forschung-2018-05-25.pdf

Tayebi, M. A., & Glässer, U. (2018). Social Network Analysis in Predictive Policing: Concepts, Models and Methods (Softcover reprint of the original 1st ed. 2016). Springer.

Wessling, C. (2013, December 17). Smart Home für Senioren: Zwischen Unterstützung und Überwachung. Retrieved from https://www.handelsblatt.com/technik/das-technologie-update/healthcare/smart-home-fuer-senioren-zwischen-unterstuetzung-und-ueberwachung/9223758.html