Algorithm Governance Roundup #14
DSA infringement proceedings | Community Spotlight: Aída Ponce del Castillo, European Trade Union Institute (ETUI)
Welcome to April's Algorithm Governance Roundup. This month I spoke to Aída Ponce del Castillo, Senior Researcher in the Foresight Unit of the European Trade Union Institute (ETUI), about her research on emerging technologies and the role of the AI Act, the Platform Work Directive, and collective bargaining in governing AI in the workplace.

As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content please reply or send a new email to algorithm-newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you!

Many thanks and happy reading!

Esme Harrington
Subscribe
This Month's Roundup
In Europe, the European Commission (EC) has opened formal proceedings to assess whether Meta has breached the DSA by failing to tackle deceptive advertising and political content on Facebook and Instagram. The EC will also investigate infringements related to the lack of effective third-party real-time civic discourse and election-monitoring tools following Meta’s deprecation of CrowdTangle. In addition, the EC suspects that Meta’s mechanisms for flagging illegal content, user redress, and internal complaints are not compliant.

The EC has also formally designated the online fashion retailer Shein as a Very Large Online Platform (VLOP) under the DSA. Shein has four months to comply with the DSA’s most stringent obligations, including to analyse the systemic risks associated with the dissemination of illegal content and products and the design or functioning of its services.

The EC has also opened infringement procedures against Member States Estonia, Poland and Slovakia for failing to designate national DSA regulators, known as the Digital Services Coordinators. The EC has also sent formal letters to Cyprus, Czechia, and Portugal for failing to empower their regulators with the necessary powers and competences to carry out their tasks.

The privacy non-profit Noyb has filed a complaint against OpenAI for failing to comply with several obligations under the GDPR. The complaint alleges that ChatGPT’s “hallucinations” about individual people infringe the requirement that personal data be accurate. It requests that the Austrian data protection authority investigate OpenAI’s measures to ensure accuracy and force OpenAI to comply with the complainant’s data subject access request.

In Canada, the government has held the first reading of the Online Harms Act (Bill C-63). The proposed Act regulates seven categories of harm on social media services: non-consensual intimate images, child sexual abuse material, hate speech, incitement of violence, incitement of violent extremism or terrorism, content inducing children to self-harm, and content used to bully children. The Act would introduce obligations related to risk assessment and mitigation, user empowerment tools, and mandated data access for researchers.

In Germany, the Federal Office for Information Security has published a report on the opportunities and risks of Generative AI. It is intended to educate companies and authorities on basic cybersecurity and safety implications and serve as a basis for internal systematic risk analysis.

In the UK, the Scottish government has introduced mandatory registration of public sector uses of AI in the publicly accessible Scottish AI register.

The UK government has introduced a criminal offence for people who create sexually explicit deepfakes without consent through an amendment to the Criminal Justice Bill. Offenders face a criminal record and an unlimited fine. If the image is shared more widely, offenders could be sent to jail.

The Competition and Markets Authority (CMA) has outlined three key risks to effective competition in AI Foundation Models: firms controlling critical inputs could restrict access, powerful incumbents could exploit their positions, and partnerships involving key players could exacerbate existing positions of market power. The CMA is also inviting comments on AI partnerships between Microsoft and Mistral AI, Amazon and Anthropic, and Microsoft’s hiring of former employees of Inflection AI.

The Information Commissioner’s Office (ICO) has published its strategic approach to AI regulation. The paper sets out the ICO's existing and future work on AI, including forthcoming policy and guidance on Generative AI, regulatory sandbox and assurance projects, and regulatory action related to biometric technologies.

Ofcom has published a consultation to inform the UK Online Safety Act's codes of practice on the categorisation of online services, along with its advice to the Secretary of State on the thresholds for categorisation.

The Digital Regulation Cooperation Forum (DRCF) has launched the AI and Digital Hub pilot to provide informal advice to innovators with complex regulatory questions that cross DRCF members' regulatory remits.

The Trades Union Congress, AI Law Consultancy at Cloisters Chambers, and Cambridge University Minderoo Centre for Technology and Democracy have published the Artificial Intelligence (Regulation and Employment Rights) Bill alongside an explanatory report.

In the US, the National Institute of Standards and Technology (NIST) has released four draft publications on AI, including a Generative AI Profile under the NIST Risk Management Framework, a report on reducing risks of synthetic content, and a plan for global engagement on AI standards.

The Department of Homeland Security has established an AI Safety and Security Board to provide advice on the safe and secure development and deployment of AI in critical infrastructure. Board members include representatives from AI industry, a range of critical infrastructure sectors, public officials, the civil rights community, and academia.

The New York State Bar Association (NYSBA) Task Force on AI has published a report on the legal, social and ethical impacts of AI and generative AI on the legal profession. It recommends that the NYSBA adopt draft AI/Generative AI guidelines; educate judges, lawyers, and regulators on the technology; and identify risks not addressed by existing laws to inform new legislation or regulation.

The national security agencies in the US, Australia, Canada, New Zealand and United Kingdom have jointly published best practices for deploying AI systems securely within organisations.
Research
The AI Index Report 2024, Stanford University Institute for Human-Centred Artificial Intelligence

AI Intersections Database, Mozilla Foundation

AI Nationalism(s): Global Industrial Policy Approaches to AI, AI Now

An Elemental Ethics for Artificial Intelligence: Water as Resistance Within AI’s Value Chain, Sebastián Lehuedé

Artificial intelligence, labour and society, Edited by Aída Ponce del Castillo

Collaboratively Adding Context to Social Media Posts Reduces the Sharing of False News, Thomas Renault, David Restrepo-Amariles, Aurore Troussel-Clément

Columbia Convening on Openness and AI, Mozilla and Columbia Institute of Global Politics
  • Policy Readout
  • Technical Readout
Data Authenticity, Consent, and Provenance for AI Are All Broken: What Will It Take to Fix Them?, Shayne Longpre, Robert Mahari, Naana Obeng-Marnu, William Brannon, Tobin South, Jad Kabbara, and Sandy Pentland

Governance and Interdependence in Data-driven Supply Chains, Jennifer Cobbe

Ideologies of AI and the Consolidation of Power, Edited by Jenna Burrell and Jacob Metcalf

No Embargo in Sight: Meta Lets pro-Russia Propaganda Ads Flood the EU, Paul Bouchaud, Marc Faddoul and Raziye Buse Çetin, AI Forensics

Responsibly Buying Artificial Intelligence: A ‘Regulatory Hallucination’, Albert Sanchez-Graells

Use of Entity Resolution in India: Shining a light on how new forms of automation can deny people access to welfare, Amnesty Tech

When Is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis, Francesca Palmiotto
Opportunities
The Mozilla Foundation is recruiting 10 Mozilla Senior Fellows for its Senior Fellowship Program to support work on advancing trustworthy AI and an ethical, responsible, and inclusive internet environment. The program is open to experienced practitioners, technologists, researchers, policy experts, and activists from around the world.
Application deadline is 06 May.

The Ada Lovelace Institute is hiring a Researcher (EU Public Policy) to design, deliver, and support research into the impacts of AI and data on people and society, including work on EU and international governance and regulation proposals. The role will be based in Brussels, with requests to work from London considered.
Application deadline is 07 May.

UC Berkeley is seeking Tech Policy Fellows to join a non-residential program conducting research, sharing expertise, and developing technical or policy interventions that support responsible technology development and use.
Application deadline is 31 May 2024.
Community Spotlight: Aída Ponce del Castillo, European Trade Union Institute
Aída Ponce del Castillo is a Senior Researcher in the Foresight Unit of the European Trade Union Institute (ETUI). She conducts research on emerging technologies and their impact on the world of work. We spoke about her research, the role of the AI Act, the Platform Work Directive, and collective bargaining in governing AI in the workplace.

What is the European Trade Union Institute, and can you introduce your work on AI governance?
Aída:
The European Trade Union Institute provides research and training courses to the European trade union movement. Our research is divided across various areas including economic governance, industrial relations, occupational health and safety, working conditions, and foresight. Our Foresight Unit focuses on emerging technologies, working with trade unions on internal foresight exercises that anticipate changes to work in the medium to long term. We also conduct anticipatory research to ensure unions are equipped with the tools and approaches to anticipate societal and technological change.

You have recently published a book on AI, labour, and society. Could you tell us about this?
Aída:
The publication is a collection of articles written by academics, experts, and contributors working in the field. The objective was to reflect on AI governance with a broad multidisciplinary approach, integrating a variety of geographical and cultural perspectives. We need this diversity of perspectives to engage with the complexity and pervasive nature of AI technologies and their impacts across different segments of society: on people as citizens, workers, creators, governees, and governors. The book also aims to equip workers and their representatives with the knowledge to contribute to this discussion and make informed decisions.

The book is structured thematically. It begins by examining the relationships between society and the lifecycle of AI, probing how the technology is created, the conditions in which it is developed and deployed, and the current and ideal relationship between humans and the technology. The second section examines the socio-technical and economic forces underpinning AI technologies, exploring the actors and conditions driving development, the ethical implications, and whether ethical frameworks adequately account for impacts on natural resources and the environment.

The next section dives into the legal implications for the world of work – although there are lessons for other disciplines such as consumer protection. We suggest there is a need to draw a line between the concept of worker monitoring and worker surveillance, arguing that the latter creates a situation of structural asymmetry between employers and workers in terms of information and control, while also leading to abuses of workers' rights. We also examine issues of liability; the division of responsibility between the developer, employer, and the worker; and the boundaries of acceptable AI use. Overall, workplace AI systems influence agency and introduce risks in a multitude of ways: at the infrastructure level (e.g. cybersecurity risks), at the decision-making level, in the organisation of work and the allocation of tasks, during recruitment, and in the working conditions of workers both collectively and individually.

Ultimately, this research concludes that the working conditions of all workers who are behind, in front of, and exposed to AI systems need to be improved: from workers extracting the natural resources needed for compute infrastructure, to data labellers, to workers subjected to or using AI systems in their jobs.

Considering legal regulation, how will the EU’s AI Act impact the use of AI in the workplace?
Aída:
The AI Act (AIA) is a product safety regulation which classifies AI systems according to their level of risk. It aims to bring AI products safely onto the EU market. Whilst the AIA’s focus is product safety, it does contain several obligations that impact the workplace and the employment context.

The AIA bans certain unacceptable systems. In relation to work, Article 26(c) bans emotion recognition systems used to identify or infer the emotions or intentions of natural persons in the workplace. The prohibition has an exception for emotion recognition systems used for therapeutic, medical or safety reasons, e.g. detecting fatigue to prevent accidents. It is unclear whether this prohibition also applies to the recruitment process since this is outside the context of employment.

The majority of obligations and product safety requirements focus on high-risk AI systems and on the providers of such systems. The list of high-risk systems includes the use of AI systems for employment, workers' management and access to self-employment. Providers and/or deployers who are employers will have to comply with the AIA; in particular, they will have to conduct a self-assessed conformity assessment. Before putting a high-risk AI system into use at the workplace, they must inform and consult workers’ representatives and the affected workers who will be subject to the system.

Unfortunately, the boundaries to indicate when a system will fall into the high-risk category are not clear-cut and providers can self-declare that their system is not high-risk. Providers might assess and classify the risks of their use cases to better fit the minimal risk category. In addition, the AIA’s list of high-risk use-cases can be updated, modified, and removed by the EU Commission through delegated acts, without the involvement of workers and their representatives.

The legal requirements for high-risk AI systems will be supplemented with technical standards, which are being developed by the European standardisation organisations CEN and CENELEC. Article 40 AIA states that relevant stakeholders, for example trade unions, consumer organisations and environmental organisations, should be able to participate effectively in the standardisation process. In practice, the European Trade Union Confederation (ETUC) has been contributing. Additionally, it is argued that many provisions of the AI Act which involve qualitative and ethical considerations, such as the Fundamental Rights Impact Assessment, sustainability, and codes of conduct, should not be operationalised through technical standardisation.

The national implementation of the AIA is going to be crucial. Providers and deployers will need to figure out which methods are most suitable for them to fulfil the legal requirements. At national level, it is likely that we will see disparities amongst Member States in the resources allocated to national authorities carrying out AIA-related activities and in their investment in the AI sector. This could lead to inconsistent enforcement, perhaps mirroring the issues seen with the GDPR.

Looking forward, the AIA offers several mechanisms that workers, trade unions, and social partners can use to support good governance of workplace AI. The AIA encourages the involvement of workers and their representatives in the elaboration of codes of conduct (Article 60.3), in the standardisation process (Article 40), and in being informed about AI systems (Article 29). The AIA serves as a minimum harmonisation instrument for work (Article 2.5.c), allowing additional and stronger AI-specific protections to be negotiated via collective agreements or passed into labour law (Article 2.11). As a result, workers, trade unions, and social partners can, through coordinated action, contribute to national laws and collective agreements that raise the level of AI literacy and produce meaningful agency.

The AIA is not the only relevant legislative achievement. The recently approved directive on improving working conditions in platform work incorporates a chapter that contains obligations for labour platforms and establishes rights for workers concerning automated monitoring or decision-making systems. The directive is a pioneering and innovative instrument, blending labour law with new rules on occupational safety and health, such as the management of psychosocial risk factors, and privacy and data protection.

On the topic of social dialogue and collective bargaining, how has this promoted, and how will it continue to promote, the good governance of AI in the workplace?
Aída:
Collective bargaining and social dialogue have long addressed challenges posed by emerging technologies. However, as AI systems increasingly permeate different sectors, it will be important for workers to learn from collective actions and agreements established in other sectors. For example, following a 148-day strike, Hollywood writers, with their trade union, reached a collective agreement with the Alliance of Motion Picture and Television Producers. The catalyst for the dispute was the studios' potential use of generative AI for script creation, which would side-line human writers. The agreement establishes safeguards to ensure that AI remains under the control of writers rather than studios and prohibits studios from using AI to write or edit scripts or treatments originally created by human writers. I hope that such agreements will be replicated in Europe, negotiated by workers to meet the needs and specificities of the sectors in which they work, and that they will empower workers to safeguard their roles in an increasingly automated working environment.
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
Subscribe
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.


AWO
Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom
Powered by EmailOctopus