In the EU, the European Parliament and the Council have
reached a provisional agreement on the AI Act after 36 hours of negotiations. The final text will be published in the coming weeks and must be formally adopted by both the Parliament and Council.
Whilst the final text has not been published yet, reports
detail the list of prohibited applications; obligations for high-risk systems, including fundamental rights impact assessments; transparency requirements for foundation models; and further obligations for high-impact foundation models. The agreed text also creates an AI Office within the European Commission, which will contribute to enforcement and oversight.
The European Commission has
opened a consultation concerning its implementing act on transparency reports for platform content moderation, as required by the Digital Services Act. The draft implementing act includes mandatory templates that specify the type and form of information to be reported. The deadline for consultation responses is 24 January.
A group of AI experts has
launched the International Algorithmic Auditors Association (IAAA), the first non-profit professional organisation for AI auditing. The IAAA aims to promote collaboration and shared knowledge on standards, methodologies, and tools.
In Germany, the Bundeskartellamt competition authority has
published a case summary explaining why Microsoft’s involvement in OpenAI did not trigger a notification obligation under German merger control rules.
In the UK, the government held the first global
AI Safety Summit. The Summit brought together countries, leading AI companies, and researchers to discuss the risks of frontier AI models, defined as highly capable general-purpose AI models.
In preparation for the Summit, the UK’s Frontier AI Taskforce published its
Second Progress Report, announcing an expansion of its research team and new partnerships with AI organisations. It also shared details of the establishment of the UK’s AI Research Resource (‘Isambard-AI’) at the University of Bristol, which will be the UK’s largest compute facility.
Attendees of the AI Summit signed the
Bletchley Declaration, in which signatories recognise the risks and opportunities of AI systems and commit to further research on frontier AI models to support risk-based policymaking. Signatories have
committed to support a State of the Science Report to understand the capabilities and risks of frontier AI. Country signatories have also
committed to work together to research AI safety testing.
At the Summit, the UK government also announced the creation of the UK’s
AI Safety Institute, which will conduct research on frontier AI safety by examining, evaluating, and testing emerging AI systems. The research will inform UK and international policymaking and provide technical tools for governance and regulation. The Institute will have access to the AI Research Resource (‘Isambard-AI’) to conduct this research. It will build on the work of the Frontier AI Taskforce and the AI Safety Summit.
Following attendance at the UK’s AI Safety Summit, the US White House
announced the creation of the US’s AI Safety Institute. The Institute will operationalise NIST’s risk management framework by creating guidelines, tools, benchmarks, and best practices for evaluation and mitigation. The White House also announced several other initiatives, including a virtual hackathon to combat AI-driven fraudulent phone calls and a call for cooperation on international standards to identify and trace digital and AI-generated content.
Ofcom has
published its implementation approach for the Online Safety Act, including an indicative timeline of public consultations. To begin this process, Ofcom has
opened a consultation on draft codes of practice and guidance related to the illegal harms duties. The deadline for response is 23 February. Ofcom has also
opened a consultation on draft guidance on age assurance for online pornography services. The deadline for response is 5 March.
The National Cyber Security Centre has
published guidelines on how to design, develop, and deploy AI systems securely. The guidelines have been
endorsed by 17 other countries, including all G7 members.
In Australia, the government has
announced its intention to amend the Basic Online Safety Expectations and conduct a statutory review of the Online Safety Act to ensure social media platform recommendation systems do not amplify harmful or extreme content, including racism, sexism, and homophobia.
In Canada, the Minister of Innovation, Science and Industry has
announced a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. The code provides companies with common standards to demonstrate that they are developing and using generative AI systems responsibly, and it will apply until more formal AI regulation takes effect. (Canada’s Artificial Intelligence and Data Act is currently being
debated at the committee stage.)
In South Korea, the Personal Information Protection Committee (PIPC)
opened a public consultation on establishing an AI privacy team (original announcement). The consultation is open until 14 January. The head of the PIPC has
stated South Korea’s commitment to becoming a leader in setting AI guidelines globally.
In the US, the White House has
announced an Executive Order on Safe, Secure, and Trustworthy AI. The Executive Order requires developers of the most powerful AI systems to share safety test results with the government. It also requires government agencies, including the National Institute of Standards and Technology (NIST), to develop standards, tools, and tests to ensure trustworthiness, and the Department of Commerce to develop guidance for watermarking and authentication of AI-generated content. Other elements focus on cybersecurity, privacy, civil rights, consumers, worker protections, and innovation.
The Executive Office of the President’s Office of Management and Budget has
published a draft policy for federal agency and government development, procurement, and use of AI. The draft guidance requires agencies to set up governance structures, develop an AI strategy, and implement safeguards including impact assessments and independent evaluations.
The US District Court for the Northern District of California has
largely granted the motions to dismiss brought by Stability AI, DeviantArt, and Midjourney in a copyright case, allowing only the direct copyright infringement claim against Stability to proceed. The claimants may amend and refile the complaint.