In Europe, the European Commission has
published a case study that applies the DSA risk management framework to Russian disinformation campaigns. The framework offers a methodology for risk assessment and mitigation, and establishes baseline benchmarks for future audits.
The Council of Europe has
published the latest working draft of its Framework Convention on AI, Human Rights, Democracy and the Rule of Law. The draft's principles include transparency and oversight, accountability, equality and non-discrimination, safety and security, privacy, and remedies.
The European Court of Human Rights has
ruled on the use of facial recognition technology (FRT) by police in Moscow. The court found that FRT was a particularly intrusive interference with the right to private life under Article 8 and therefore required the highest level of justification.
In France, CNIL has
opened submissions on how the GDPR applies to AI. In particular, the CNIL is interested in purpose limitation; methods of selecting, cleaning and minimising data; data protection by default; and the conditions under which actors can rely on legitimate interest as a legal basis for processing the personal data contained in training datasets. Submissions should be sent to
ia@cnil.fr.
In Spain, the Spanish Council of Ministers has
approved the creation of the Spanish Agency for the Supervision of AI (AESIA). This is the first European agency created specifically for AI oversight and will be the Spanish national supervisory authority under the AI Act.
In the UK, the Government has
announced it will host a global AI Safety Summit in November. The event will convene international governments, industry and researchers to develop a shared approach on the safe development and use of frontier AI technology. Civil society organisations have critiqued the Summit’s focus on a narrow definition of AI safety and existential risk. Groups including the Ada Lovelace Institute have
recommended a wider definition of harms and risks that centres communities and affected people.
The Digital Regulators Cooperation Forum (CMA, Ofcom, ICO and FCA) has
announced a new pilot advisory service to assist businesses launching AI and digital innovations. The service will give businesses tailored advice on how to meet regulatory requirements.
The Parliamentary Science, Innovation and Technology Committee has
heard evidence from the new Chief Scientific Adviser, Dame Angela McLean, on her work priorities, including AI governance.
The Centre for Data Ethics and Innovation and techUK have
developed a portfolio of AI assurance techniques. The portfolio provides guidance and real-world examples to support the development of trustworthy AI. The portfolio is open for submissions, which should be sent to
ai.assurance@cdei.gov.uk.
In the US, the White House has
secured voluntary AI-related commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The eight commitments include independent security testing; information sharing with government, civil society and academia; processes to enable third-party discovery and reporting of vulnerabilities; and watermarking for audio and video. This is the first agreement of its kind; however, the commitments have been
criticised as broad and unspecific, including by Professor Emily Bender, who
pointed to the focus on spurious risks, carve-outs for synthetic text and a lack of commitment around data and model documentation.
In New York, the anti-bias law for hiring algorithms has begun to be
enforced, following some delay. Employers using hiring algorithms must disclose their use of such tools, submit the algorithm for an independent audit, and subsequently publish the results of that audit.
In Montana, a new law restricting the use of facial recognition technology (FRT) has
taken effect. The law prohibits state and local agencies, including law enforcement, from using real-time FRT. Retrospective FRT can be used under certain circumstances, subject to human review and audit procedures. Third-party FRT vendors must inform individuals in writing that their biometric data is being collected or stored; state the specific purpose and the length of time the data will be collected, stored, or used for; and obtain written consent before capturing or otherwise obtaining individuals' data.
Common Sense Media has
developed an independent ratings and reviews system for AI products. It will assess products on their responsible AI practices, handling of private data, and suitability for children, amongst other factors. The project will have a particular focus on AI products used by children and educators.
The OECD has
published a catalogue of tools and metrics for trustworthy AI. These include technical tools to mitigate bias, measure performance, and audit systems; and non-technical tools that focus on procedural processes and certification.
Foundation models and Generative AI
In the UK, the House of Lords Communications and Digital Committee has
launched an inquiry into Large Language Models to “examine what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks”. The inquiry will evaluate the work of Government and regulators, examine how well the AI White Paper addresses current and future technological capabilities, and review the implications of approaches taken elsewhere in the world.
The Prime Minister and Technology Secretary have
announced £100 million in initial funding for a Foundation Model Taskforce. The Taskforce will invest in foundation model infrastructure and procurement in the public sector, with pilots launching in the next six months. It will aim to encourage broad adoption of the technology and to act as a global standard bearer for AI safety.
The Digital Regulators Cooperation Forum has
held a workshop on the implications, risks and benefits of Generative AI for each regulator's work and remit. The DRCF are keen to receive input from external stakeholders on these issues, which should be sent to
drcf@ofcom.org.uk.
In China, the Cyberspace Administration is
revising the Draft Measures for the Management of Generative AI Services. The provisions focus on the manipulation of public opinion through false information, alongside data leaks, fraud and privacy breaches. The Measures will also impose safeguards for personal information and user input.
In Japan, the G7 has
published the Hiroshima Process on Generative AI. The report sets out trends and incidents related to Generative AI, and outlines the underlying principles for a forthcoming voluntary code of conduct. Previously, the G7 Data Protection Authorities also
published a statement on Generative AI and
discussed their approach to regulation at the first Japan Privacy Symposium. The regulators' main concerns are the legal basis for processing; the need for security safeguards; and the need for mitigation and monitoring measures to ensure personal data is accurate and complete, and is not discriminatory or unlawful. G7 members
Italy,
Canada,
Japan and the
United States have all investigated or engaged with OpenAI on data protection compliance.
The OECD has
published a report on Generative AI to inform policy considerations and support decision makers. The report explores the societal and policy challenges of the technology, including its impact on labour markets, copyright, societal biases, disinformation, and manipulated content.