In China, the World AI Conference and High-Level Meeting on Global AI Governance issued the
Shanghai Declaration on global AI governance. It covers AI development and AI safety, with an emphasis on data protection, cybersecurity and AI-specific rules, and advocates for a global AI governance mechanism with a central role for the United Nations.
In the EU, the European Commission has shared its
preliminary view that X is breaching the DSA. There are three areas of non-compliance: 1) dark patterns, related to the design and operation of its verified accounts feature; 2) advertising transparency, related to its failure to provide a searchable and reliable advertisement repository; and 3) data access, related to failures to provide access to public data, the prohibition of independent data collection, and disproportionately high fees for access to its API.
The European Data Protection Board has published a
statement recommending that Member States designate their national Data Protection Authorities (DPAs) as market surveillance authorities under the AI Act, given their expertise in regulating AI. It also argues that data protection law and the AI Act will often be complementary instruments, and that a single point of contact will benefit regulated actors.
The European Data Protection Board has also published
guidelines on generative AI for EU institutions, bodies, offices and agencies. These include guidance to distinguish whether the use of generative AI involves the processing of personal data and when to conduct a data protection impact assessment.
The European Innovation Council has published
guidelines on generative AI and copyright. They include an overview of the current legislative landscape, an examination of the risks related to ownership and commercial exploitation of AI-generated content, and a compliance checklist to assist in evaluating generative AI service providers.
In France, the French Government and the People's Republic of China published a
joint declaration on AI committing to further international cooperation, including at France's Summit on Artificial Intelligence in 2025.
In Italy, the Government has
proposed an AI Law to the Senato della Repubblica, the upper house of the Italian Parliament. The Bill sets out principles for the development and use of AI, including sector-specific obligations for healthcare; designates the Agency for National Cybersecurity as the national competent authority under the AI Act; and clarifies copyright rules related to human authorship. The Bill has been assigned to, and discussed by, two
parliamentary commissions and debated at a
plenary meeting.
Senato della Repubblica's Impact Assessment Office has published
research on AI governance and regulation. The research summarises the Italian strategy for AI which has a central role for the Council presidency, a focus on national security interests, and the creation of an AI public agency. The document also highlights ongoing legislative challenges concerning data protection, predictive justice and copyright.
In Switzerland, the Federal Council has published
research on dark patterns (available in French). The research examines sixteen of the most common dark patterns and analyses their legality under constitutional law, competition law, data protection law and contract law.
In the Netherlands, the Autoriteit Persoonsgegevens, the Data Protection Authority, has published
guidance on data scraping by individuals and private organisations, with a particular focus on scraping to train AI systems. The guidance states that scraping will almost always violate the GDPR due to the lack of a legal basis, noting that legitimate interest cannot be relied upon where the only interest is commercial.
In Spain, the Agencia Española de Protección de Datos, the Data Protection Authority, has published
guidance on addictive patterns. The guidance provides a fourfold taxonomy of addictive design: 1) forced action (e.g. gamification and attention capture), 2) social engineering (e.g. urgency and personalisation), 3) interface interference, and 4) persistence, and explores the implications for data protection.
In the UK, Ofcom has published two consultations on draft guidance for
transparency reporting and
information gathering under the Online Safety Act.
Ofcom has also published two pieces of research concerning generative AI and online safety. The first
report on deepfakes presents survey results on their impact on online safety and analyses potential mitigation measures. The second
report on red teaming examines its potential as a safety measure and includes a methodology and ten best practices. Ofcom is open to feedback via email at technologypolicy@ofcom.org.uk.
In the US, the
Senate has passed the Kids Online Safety Act, which places a duty of care on online platforms. It requires them to minimise deceptive and addictive design features, enable children to opt out of algorithmic recommendations, conduct annual independent audits, and provide researchers with access to data. The Bill has been criticised by a range of actors, including the
American Civil Liberties Union and the
Electronic Frontier Foundation, over concerns about censorship and the privacy implications of age verification systems. The Bill will also need to pass in the House of Representatives.
The Department of Commerce’s National Telecommunications and Information Administration has published
policy recommendations in support of making key components of AI models more open. It also calls for the US government to actively monitor the risks of open-weight models via an evidence-gathering program.
The EU, UK and US competition authorities have published a
joint statement on competition in the generative AI and foundation model market. The statement highlights the concentration of key inputs and the entrenchment of market power as key risks, and proposes fair dealing, interoperability and choice as key principles to enable competition.
The OECD has updated its
Principles on AI to cover general-purpose and generative AI, and more directly address challenges involving privacy, intellectual property, safety, information integrity and environmental sustainability. It also explicitly calls for actors across the AI value chain to adopt
Responsible Business Conduct.