In Australia, the eSafety Commissioner has published a summary of X's moderation of online hate, provided in response to a notice issued under the Online Safety Act. The summary includes information on trust and safety staffing before and after the acquisition of Twitter, and on the company's enforcement of its online hate policies.
In Denmark, Datatilsynet, the Data Protection Authority, published an opinion on contract as the legal basis for using datasets in the development of AI language models.
In Europe, the AI Act has been endorsed in the Internal Market and Civil Liberties committees by an overwhelming majority. This follows member states reaching agreement on the text at the start of the month. The pre-final text is available here. The plenary vote in the European Parliament is scheduled for 10-11 April, followed by ministerial approval.
To support the implementation and enforcement of the AI Act, the European Commission (EC) has established a European AI Office, which is responsible for monitoring, investigating, and enforcing the obligations on General-Purpose AI models and systems. To achieve this, it will develop methodologies and benchmarks for evaluation. It will also coordinate enforcement in relation to AI systems covered by other EU legislation, such as social media recommender systems under the DSA.
The EC also published a communication outlining its internal preparations for the implementation of the AI Act. These include building institutional and operational capacity, working with EU public administrations on their adoption and use of AI, and working with start-ups and GovTech companies.
The EC has opened a call for contributions on generative AI and competition. The deadline for submissions is 11 March. In addition, it announced that it is looking into agreements between large digital market players and generative AI developers and providers, in particular whether Microsoft's investment in OpenAI is reviewable under the EU Merger Regulation.
The EC opened formal proceedings against X under the DSA for suspected violations related to deceptive design, access to data for research, and risk management of illegal content. The Commission will now collect evidence to build a potential infringement case.
The EC, ECAT, and PEReN hosted a hackathon to develop open-source tools to support the operationalisation of the DSA's access to data obligations. These include tools for collecting, parsing, and centralising platform transparency documentation, and an LLM-based system to help design research projects.
The Council of Europe has adopted new guidelines on the responsible implementation of AI systems in journalism. They cover the use of AI systems at different stages of journalistic production, from the decision to use AI systems and the acquisition of AI tools to their incorporation into professional practice and the external dimension of using AI in newsrooms.
In Italy, Garante, the Data Protection Authority, notified OpenAI that it has found evidence of GDPR breaches. OpenAI has 30 days to submit its defence briefs. This follows the temporary ban on processing imposed last March.
Garante has also imposed a fine on the Municipality of Trento for failing to comply with GDPR in a street surveillance project that involved the development of an AI system to predict risks to public safety. The DPA ordered the municipality to delete all the data gathered during the project.
In Indonesia, Kominfo, the Ministry of Communication and Informatics, opened a consultation on its national AI Ethics Guidelines. These are based upon UNESCO's Recommendation on the Ethics of AI.
In Ireland, Coimisiún na Meán, the Irish regulator for broadcasting, video, online safety, and media, published a consultation on its Online Safety Code. It is intended to complement the DSA and requires video-sharing platform services to take appropriate measures to protect children from harmful content and the general public from certain illegal content. The consultation proposes a range of measures, including a requirement that recommender systems based on profiling data be turned off by default.
The Irish Minister of State with responsibility for Digital has established an AI Advisory Council, which will provide the government with independent expert advice focused on building public trust and promoting the development of trustworthy, person-centred AI.
In the Netherlands, the Autoriteit Persoonsgegevens, the Data Protection Authority, and the Department for the Coordination of Algorithmic Oversight (DCA) have published the second Algorithmic Risks Report. It covers overarching developments, generative AI and foundation models, AI in the workplace and education, and policy and regulations.
In the UK, the government has published its response to the consultation on the AI White Paper, affirming its principles-based approach. It is accompanied by guidance to support regulators in implementing these principles.
The AI Safety Institute has published a blog explaining its approach to evaluations, including the techniques and criteria it will use, with a focus on the most advanced AI systems.
The Central Digital and Data Office has published a framework on the use of generative AI within government. It offers guidance on procurement and on developing internal products. It also covers using generative AI safely and responsibly, including ethics, data protection and privacy, security, and governance approaches.
The Competition and Markets Authority (CMA) has published an overview of how it intends to operate the new digital markets competition regime proposed by the Digital Markets, Competition and Consumers Bill. The regime will apply to firms designated by the CMA as having Strategic Market Status (SMS) in relation to one or more digital activities. Firms with SMS may have conduct requirements imposed on them relating to self-preferencing, interoperability, fair terms, and transparency in relation to algorithms.
The CMA is investigating the partnership between Microsoft and OpenAI in a merger inquiry. This is the first part of the CMA's information-gathering process and precedes the launch of any formal investigation. A coalition of civil society organisations has filed a submission on the inquiry.
The House of Lords Communications and Digital Committee published a report on LLMs and generative AI. It makes recommendations on the UK's AI White Paper. It also argues that it is not fair for generative AI developers to use copyright holders' data for commercial purposes without permission or compensation.
In the US, the Department of Commerce announced the creation of the AI Safety Institute Consortium, with 200 participants spanning government, academia, civil society, and industry, including OpenAI, Google, Microsoft, Meta, Amazon, Apple, and NVIDIA. The consortium is an outcome of the White House's executive order on AI and will develop guidelines for red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.
The Federal Trade Commission launched an inquiry into investments and partnerships involving generative AI companies and cloud service providers. It issued compulsory orders to Alphabet, Amazon, Anthropic, Microsoft, and OpenAI requiring them to provide information, including on the implications for product releases, governance, and competition for AI inputs and resources.
The FTC hosted the FTC Tech Summit on AI, convening academia, civil society, government, regulators, and industry to discuss computational infrastructure, data and models, and consumer applications.
The FTC published a blog on the privacy requirements for AI model-as-a-service companies, which warns that companies must respect user data, including confidential business data, or risk breaching privacy commitments and undermining fair competition.
EPIC has filed a complaint with the FTC against Thomson Reuters for the development and operation of a faulty fraud detection system used by public agencies across 42 states.
IBM and Meta have launched the AI Alliance, an international consortium across industry, academia, research, and government to promote the development of open-source AI. The initiative will develop and deploy benchmarks, evaluation standards, and tools.
LAION has taken down its dataset LAION-5B after a Stanford study found it contained 3,226 suspected instances of child sexual abuse material, 1,008 of which were externally validated. LAION-5B has been used to train Stable Diffusion and other AI products.
The New York Times has filed a lawsuit against OpenAI and Microsoft. The complaint alleges that millions of articles from The New York Times were used to train chatbots that now compete with it.
The UN Office of the Tech Envoy has published its interim report on governing AI for humanity, supported by the UN's AI Advisory Body. The final report will be published in the summer following stakeholder input. The deadline for submitting feedback on the report online is 31 March.
UNESCO has released the Global AI Ethics & Governance Observatory in partnership with the ITU, the Alan Turing Institute, and the Patrick J. McGovern Foundation. The Observatory provides policy resources, best practices, and assessment methodologies.