

Talks and Poster Presentations (without Proceedings-Entry):

D. Allhutter, F. Cech, F. Fischer, G. Grill, A. Mager:
"Algorithmic profiling of job seekers in Austria: how to make austerity politics effective";
Talk: 4th European Technology Assessment Conference, Bratislava; 2019-11-04 - 2019-11-06.



English abstract:
In October 2018, the Public Employment Service Austria (Arbeitsmarktservice, AMS) announced plans to roll out an algorithmic profiling system. Based on a statistical model of employment seekers' prospects on the job market in previous years, the system is designed to classify present and future job seekers into three categories: the first consists of those with high chances of finding a job within half a year, the second includes those with mediocre prospects on the job market, and the third consists of those clients with a poor outlook for employment within the next two years. The declared goal is to focus support on job seekers in the second category, because for them "active labor market programs" (ALMPs) are expected to have the greatest impact on improving their chances of finding employment. In contrast, job seekers in the first category are expected to find employment without much support, while job seekers in the third category are deemed basically unemployable, regardless of whether they participate in ALMPs. The algorithmic system should thus enable an efficient allocation of ALMPs by investing in those job seekers for whom ALMPs are expected to be most cost-effective for the AMS. In turn, several representatives have referred to the third category as one for the "hopeless". This classification practice, with its choice of attributes for the statistical model, prompted a public debate on algorithmic bias, stigmatization, and discrimination against people predicted to have a high risk of long-term unemployment (Johnson et al. 2018). Women, for example, are assessed as less employable than men in one published model. Similarly, job seekers from outside the European Union are listed as having lower chances on the labour market than citizens of EU member states. This raises crucial questions about the implicit politics of the profiling system: What are the implications of using retrospective data to calculate individuals' future chances on the job market? Which variables and categories are considered in the statistical modeling, and which are neglected? How does the system influence the job market and clients' prospects, and what further implications can be expected for the practices of the AMS?
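To make the triage logic described above concrete, here is a minimal Python sketch of a score-plus-thresholds classifier of this general kind. All attribute names, weights, and cut-off values are invented for illustration; they are assumptions, not the AMS's actual model, which the abstract also suggests uses separate short-term and long-term predictions rather than a single score.

import math

# Purely illustrative sketch: every coefficient, attribute, and cut-off below
# is invented and does NOT reproduce the actual AMS model. The abstract
# describes two prediction horizons (six months and two years); this sketch
# collapses them into a single score for brevity.

# Hypothetical logistic-regression weights over a few client attributes.
WEIGHTS = {
    "intercept": 0.1,
    "prior_employment_days": 0.004,  # more past employment -> higher score
    "is_female": -0.15,              # negative weight, as criticized publicly
    "is_non_eu_citizen": -0.25,      # likewise for non-EU citizenship
}

def predicted_probability(client: dict) -> float:
    """Logistic model: estimated probability of finding a job."""
    z = WEIGHTS["intercept"]
    z += WEIGHTS["prior_employment_days"] * client["prior_employment_days"]
    z += WEIGHTS["is_female"] * client["is_female"]
    z += WEIGHTS["is_non_eu_citizen"] * client["is_non_eu_citizen"]
    return 1.0 / (1.0 + math.exp(-z))

def classify(client: dict) -> str:
    """Three-way triage on the predicted probability (cut-offs invented)."""
    p = predicted_probability(client)
    if p >= 0.66:
        return "high"    # expected to find work without much support
    if p >= 0.25:
        return "medium"  # ALMP support is concentrated here
    return "low"         # deemed unlikely to find work within two years

# Example with a hypothetical client record.
client = {"prior_employment_days": 200, "is_female": 1, "is_non_eu_citizen": 0}
print(classify(client))

The point the sketch makes is structural: once group membership (such as gender or citizenship) enters the model with a negative weight, the resulting score and category assignment carry that judgement forward, which is exactly what the public debate cited above took issue with.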
Grounded in critical data studies (Iliadis and Russo 2016, Rieder and Simon 2016, Striphas 2015, Van Dijck 2014), surveillance studies (Gandy 2016), and research on fairness, accountability and transparency in machine learning (Sandvig et al. 2014, Veale et al. 2018), this talk will discuss the inherent politics of the AMS profiling system. An in-depth analysis of the available technical documentation and public statements will focus on three main areas of concern:

- First, the AMS claims that the system merely captures the "harsh reality" of the labour market and predicts job seekers' chances by taking existing inequalities into account. Our analysis problematizes the claim that the system is neutral and objective: we argue that it approximates the labour market's current state on the basis of chosen data sources, attributes, and methods that reflect value-laden judgements of the system designers. The AMS's claim also denies the system's own role in shaping the job market and clients' prospects.
- Second, we will discuss the distribution of agency and accountability within the socio-technical assemblage, particularly between the algorithmic profiling system and the case workers, and its implications for accountability and for the capacity to evaluate clients in a situated way.
- Third, the concealed technical functionality of the system and its social consequences raise questions of transparency. Our talk will advance discussions on transparency and accountability requirements for deploying algorithmic systems in public agencies.

Using the example of the Austrian profiling system, our analysis will contribute to broader discussions on fairness, accountability, and transparency in algorithmic systems and machine learning, both in the public sector and in other sensitive areas.
