In Europe, Finland’s Henna Virkkunen has been announced as the European Commission’s
executive vice-president for tech sovereignty, security and democracy. Her portfolio will cover digital policies, including enforcement of platform regulation. Commissioners will be
evaluated for potential conflicts of interest by the Parliament’s Committee on Legal Affairs before the relevant policy committees start their more wide-ranging hearings.
The European Commission has hosted the first
meeting of the AI Board to discuss the development and uptake of AI in the EU and the AI Act. The Board comprises high-level representatives from the Commission and Member States. It plans to produce an AI governance framework to facilitate the participation of Member States in the implementation of the AI Act.
The Commission has announced that over a hundred companies have
signed the EU AI Pact, including multinationals and SMEs across the IT, telecoms, healthcare, banking and automotive sectors. The Pact supports industry’s voluntary commitments to begin applying the principles of the AI Act ahead of its application. The ‘core’ commitments involve adopting an AI governance strategy, mapping high-risk AI systems, and promoting AI literacy amongst staff. It is reported that the future of the Pact is uncertain following the resignation of Commissioner Thierry Breton.
The Commission has announced the
Chairs and Vice-Chairs of the four Working Groups of the Code of Practice for general-purpose AI under the AI Act. The Working Groups focus on transparency and copyright, risk identification and assessment, technical risk mitigation, and internal risk management and governance of providers.
The European Parliament has set up a
cross-committee working group to monitor the implementation of the AI Act. The Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees set up the working group following concerns about the transparency of the AI Office’s staffing and the role of civil society in the implementation process.
The European Parliament Think Tank has published an
impact assessment on the proposed directive on adapting non-contractual civil liability rules to AI (the AI Liability Directive, AILD). The study identifies issues with the European Commission’s impact assessment and proposes that the AILD should extend to general-purpose and other high-impact AI systems, and that it should be broadened into a software liability regulation.
The European Commission has sent
requests for information to YouTube, TikTok and Snapchat under the Digital Services Act. The platforms must provide information about the design and functioning of their algorithmic systems, and about those systems’ role in amplifying systemic risks, including risks relating to users’ mental health, the dissemination of harmful content, the protection of minors, and the potential for recommender systems to contribute to addictive behaviour.
In France, for the first time, the Commission Nationale de l’Informatique et des Libertés (CNIL), the Data Protection Authority, has
consulted the Autorité de la Concurrence, the Competition Authority, on a draft recommendation on mobile applications. The Autorité has suggested that some of the CNIL’s recommended privacy measures may go beyond what is strictly required by the GDPR and may hinder or distort free competition. In particular, the Autorité raises concerns that an accumulation of data can entrench the market power of certain actors and notes that the level of data protection offered can be a competitive parameter.
In Germany, the Advisory Board of the German Digital Services Coordinator (DSC)
met for the first time. The Board is an independent body of experts from academia, industry and civil society that supports the DSC. It aims to ensure effective implementation of the Digital Services Act and to raise academic questions, in particular relating to the handling of data.
In Ireland, An Coimisiún um Chosaint Sonraí, the Data Protection Commission (DPC), has formally requested an
opinion on AI training from the European Data Protection Board (EDPB). The request invites the EDPB to consider the extent to which personal data is processed at various stages of the training and operation of an AI model, including first- and third-party data, and the considerations to take into account when assessing the legal basis relied upon by the data controller.
Ireland’s DPC has also opened a
statutory inquiry into whether Google conducted a data protection impact assessment (DPIA) prior to processing personal data for the development of its foundation AI model, PaLM 2.
In the Netherlands, the Autoriteit Persoonsgegevens, the Data Protection Authority, has published its
report on AI and algorithmic risks. The report explores how AI may threaten the provision of information in a democracy, challenges in the democratic control of AI, and the risks of AI systems used for profiling and selection.
In the UK, the Information Commissioner’s Office, the Data Protection Authority, has published a
final call for evidence on allocating controllership across the generative AI supply chain. It also shares the ICO’s analysis of generative AI and the allocation of responsibility for data processing.
The Department for Science, Innovation and Technology’s Responsible Technology Adoption Unit has updated its
portfolio of AI Assurance Techniques with new case studies across a range of sectors, including communications, manufacturing, agriculture and public administration.
In the US, California Governor Gavin Newsom has
vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have placed safety requirements on developers of large frontier AI models. Governor Newsom said the legislation was "well intentioned" but would have applied "stringent standards to even the most basic functions". He plans to consult with external experts and the state legislature to develop alternative guardrails.
The Attorney General of New Mexico has filed a
lawsuit against Snap alleging that its policies and design, including ephemeral content and recommendation algorithms, facilitate child sexual exploitation. It also alleges that Snap has designed its platform to be addictive, which has led to poor mental health outcomes amongst young users.
The Office of the New York Attorney General has adopted a
guide on protecting citizens from AI-generated election misinformation.
The United Nations hosted the
Summit of the Future where 193 Member States negotiated a
Global Digital Compact. The Compact aims to strengthen international data governance and govern AI, including by establishing an international scientific panel on AI.
The United Nations’ AI Advisory Body has published its
final report on Governing AI for Humanity. Check out our April 2024
interview with the United Nations Office of the Secretary-General’s Envoy on Technology, which supported this work.