Aída Ponce del Castillo is a Senior Researcher in the Foresight Unit of the European Trade Union Institute (ETUI). She conducts research on emerging technologies and their impact on the world of work. We spoke about her research, the role of the AI Act, the Platform Work Directive, and collective bargaining in governing AI in the workplace.
What is the European Trade Union Institute, and can you introduce your work on AI governance?
Aída: The European Trade Union Institute provides research and training courses to the European trade union movement. Our research is divided across various areas, including economic governance, industrial relations, occupational health and safety, working conditions, and foresight. Our Foresight Unit focuses on emerging technologies, working with trade unions on internal foresight exercises that anticipate changes to work in the medium to long term. We also conduct anticipatory research so that unions are equipped with the tools and approaches needed to prepare for societal and technological change.
You have recently published a book on AI, labour, and society. Could you tell us about this?
Aída: The publication is a collection of articles written by academics, experts, and contributors working in the field. The objective was to reflect on AI governance with a broad multidisciplinary approach, integrating a variety of geographical and cultural perspectives. We need this diversity of perspectives to engage with the complexity and pervasive nature of AI technologies and their impacts across different segments of society: on people as citizens, workers, creators, the governed, and governors. The book also aims to equip workers and their representatives with the knowledge to contribute to this discussion and make informed decisions.
The book is structured thematically. It begins by examining the relationships between society and the lifecycle of AI, probing how the technology is created, the conditions in which it is developed and deployed, and the current and ideal relationship between humans and the technology. The second section examines the socio-technical and economic forces underpinning AI technologies, exploring the actors and conditions driving development, the ethical implications, and whether ethical frameworks adequately account for the impacts on natural resources and the environment.
The next section dives into the legal implications for the world of work – although there are lessons for other disciplines such as consumer protection. We suggest there is a need to draw a line between the concept of worker monitoring and worker surveillance, arguing that the latter creates a situation of structural asymmetry between employers and workers in terms of information and control, while also leading to abuses of workers' rights. We also examine issues of liability; the division of responsibility between the developer, the employer, and the worker; and the boundaries of acceptable AI use. Overall, workplace AI systems influence agency and introduce risks in a multitude of ways: at the infrastructure level (e.g. cybersecurity risks), at the decision-making level, at the level of work organisation, in the allocation of tasks, during recruitment, and in the working conditions of workers both collectively and individually.
Overall, this research concludes that the working conditions of all workers behind, in front of, and exposed to AI systems need to be improved – from workers extracting the natural resources needed for the compute infrastructure, to data labellers, to workers subjected to or using AI systems in their jobs.
Considering legal regulation, how will the EU’s AI Act impact the use of AI in the workplace?
Aída: The AI Act (AIA) is a product safety regulation which classifies AI systems according to their level of risk. It aims to bring AI products safely onto the EU market. Whilst the AIA's focus is product safety, it does have several obligations that impact the workplace and the employment context.
The AIA bans certain unacceptable systems. In relation to work, Article 5(1)(f) bans the use of AI systems to identify or infer the emotions or intentions of natural persons in the workplace. The prohibition has an exception for emotion recognition systems used for therapeutic, medical or safety reasons, e.g. detecting fatigue to prevent accidents. It is unclear whether this prohibition also applies to the recruitment process, since this arguably falls outside the context of employment.
The majority of obligations and product safety requirements are focused on high-risk AI systems and on the providers of such systems. The list of high-risk systems includes the use of AI systems for employment, workers' management and access to self-employment. Providers and/or deployers who are employers will have to comply with the AIA. In particular, providers will have to conduct a self-assessed conformity assessment. Before putting a high-risk AI system into use in the workplace, deployers who are employers must inform and consult workers' representatives and the affected workers who will be subject to the system.
Unfortunately, the boundaries indicating when a system falls into the high-risk category are not clear-cut, and providers can self-declare that their system is not high-risk. Providers might assess and classify the risks of their use cases so as to better fit the minimal-risk category. In addition, the AIA's list of high-risk use cases can be updated, modified, and pruned by the European Commission through delegated acts, without the involvement of workers and their representatives.
The legal requirements for high-risk AI systems will be supplemented with technical standards. These are being developed by the European standardisation organisations CEN and CENELEC. Article 40 AIA states that relevant stakeholders – for example trade unions, consumer organisations and environmental organisations – should be able to effectively participate in the standardisation process. In practice, the European Trade Union Confederation (ETUC) has been contributing. Additionally, it is argued that many provisions of the AI Act which involve qualitative and ethical considerations – such as the Fundamental Rights Impact Assessment, sustainability, and codes of conduct – should not be operationalised through technical standardisation.
The national implementation of the AIA is going to be crucial. Providers and deployers will need to figure out what method is most suitable for them to fulfil the legal requirements. At national level, it is likely that we will see disparity amongst Member States in the resources allocated for national authorities carrying out activities related to the AIA and disparities in their investment in the AI sector. This could lead to inconsistent enforcement, perhaps mirroring the issues seen with the GDPR.
Looking forward, the AIA offers several mechanisms that workers, trade unions, and social partners can use to support the good governance of workplace AI. The AIA encourages workers and their representatives to be involved in the elaboration of codes of conduct (Article 60.3), to participate in the standardisation process (Article 40), and to be informed about AI systems (Article 29). The AIA serves as a minimum harmonisation instrument for work (Article 2.5.c), allowing for the negotiation of additional, stronger AI-specific protections via collective agreements and their enactment in labour law (Article 2.11). As a result, workers, trade unions, and social partners can, through coordinated action, contribute to national laws and collective agreements that will raise the level of AI literacy and produce meaningful agency.
The AIA is not the only relevant legislative achievement. The recently approved directive on improving working conditions in platform work incorporates a chapter that imposes obligations on labour platforms and establishes rights for workers concerning automated monitoring and decision-making systems. The directive is a pioneering and innovative instrument, blending labour law with new rules on occupational safety and health, such as the management of psychosocial risk factors, as well as privacy and data protection.
On the topic of social dialogue and collective bargaining, how has and will this promote the good governance of AI in the workplace?
Aída: Collective bargaining and social dialogue have long addressed challenges posed by emerging technologies. However, as AI systems increasingly permeate different sectors, it will be important for workers to learn from collective actions and agreements established in other sectors. For example, following a 148-day strike, Hollywood writers, through their trade union, reached a collective agreement with the Alliance of Motion Picture and Television Producers (AMPTP). The catalyst for the dispute was studios' potential use of generative AI for script creation, which would side-line human writers. The agreement establishes safeguards to ensure that AI remains under the control of writers rather than studios and prohibits studios from using AI to write or edit scripts or treatments originally created by human writers. I hope that such agreements will be replicated in Europe, negotiated by workers to meet the needs and specificities of the sectors in which they work, and will empower workers to safeguard their roles in an increasingly automated working environment.