In Australia, the Department of Industry, Science and Resources has published
the feedback received on its proposal to introduce mandatory guardrails for AI in high-risk settings. The
proposal suggests that developers and deployers should establish a range of mitigation measures, including processes for internal accountability and risk management, data governance measures, and monitoring of model performance once deployed.
In Belgium, the Data Protection Authority’s General Secretariat has published a report on the
interplay of the GDPR and the Artificial Intelligence Act during system development, providing practical guidance for developers to comply with both regulations.
In Europe, the European Commission (EC) has published the
draft delegated act on researcher access to platform data under Article 40 of the Digital Services Act (DSA). It proposes technical conditions and procedures that researchers must follow for data access and creates a public mechanism to view applications via a Data Access Portal. My colleague Mathias Vermeulen has published an initial analysis
here. The deadline to comment on the EC’s draft is 26 November.
The EC has sent
requests for information to Pornhub, Stripchat and XVideos under the DSA. The EC has asked for additional information on content moderation, including the human resources dedicated to the task, moderators’ qualifications and linguistic expertise, and indicators of the accuracy of automated moderation systems. In addition, the EC has requested information relating to ad repositories, following concerns that they are not easily searchable and do not allow multi-criteria queries or API tools.
The EC has published the
minutes of the first official AI Board meeting, attended by a range of Member State public agencies. During the meeting, the Board discussed the development of the Code of Practice for general-purpose AI, the scientific panel, and guidelines for prohibited practices under the AI Act.
The EU AI Office hosted the
first plenary on the Code of Practice for general-purpose AI under the AI Act. The Code of Practice process will involve
four expert working groups meeting three times to discuss drafts, with the final version expected in April 2025. In the meeting, the AI Office presented the preliminary results of its initial public consultation.
The EC has also opened a consultation on the
establishment of the scientific panel under the AI Act. The panel will be composed of independent experts who will advise and assist the AI Office and market surveillance authorities in implementing and enforcing the Act. The deadline to respond to the consultation is 15 November.
The German Federal Office for Information Security and the French Cybersecurity Agency have published a report on the
secure use of AI coding assistants in the software development process. It recommends that developers conduct a systematic risk analysis before introducing AI tools, that they check and reproduce generated source code, and that productivity gains within development teams be offset by scaling up quality assurance teams.
In Ireland, the Coimisiún na Meán published the
final version of the Online Safety Code. It imposes measures to protect minors on video-sharing platforms established in Ireland, including Facebook, Instagram, YouTube, Udemy, TikTok, LinkedIn, X, Pinterest, and Tumblr. The code prohibits content that is dangerous for minors and requires platforms to implement age verification and parental control tools. Following consultation with the EC, the Code
no longer applies to recommendation systems because they are covered by the DSA.
The Coimisiún na Meán has also certified the
first Out-of-Court Dispute Settlement Body under the DSA. Appeals Centre Europe is now certified to handle DSA disputes related to content moderation decisions until September 2029. It will handle disputes relating to Facebook, YouTube, and TikTok’s application and enforcement of their terms of service. The body has
received a $15 million (€13.7 million) “one-time, non-returnable, non-renewable” grant from Meta's Oversight Board Trust.
In the UK, the government has introduced the
Data (Use and Access) Bill which will alter the UK’s existing data protection regime. This includes enabling some forms of automated decision-making provided there are safeguards to protect data subjects, including the ability to request human review and challenge the decision. The bill also introduces new measures for researcher access to online safety data.
The Department for Science, Innovation and Technology has launched the
Regulatory Innovation Office to support the growth of four areas of technology: engineering biology, space, AI and digital in healthcare, and connected and autonomous technology.
In the US, the White House has published a one-year update on the
implementation of its Executive Order on AI. Federal agencies have completed over a hundred actions, including requiring foundation model developers to
report safety and security testing to the Department of Commerce on a quarterly basis, and securing agreements for the US AI Safety Institute to
conduct pre-deployment safety testing of models developed by Anthropic and OpenAI.
The Executive Office of the President has published a
memorandum on the responsible acquisition of AI in government agencies, building on an
earlier memorandum. It requires agencies to establish policies that promote collaboration during procurement and encourages interagency knowledge sharing on issues and best practices.
A Texan lawmaker has published the
draft Texas Responsible AI Governance Act, which will be formally introduced in the 2025 legislative session. The bill imposes a reasonable standard of care on developers, distributors and deployers of high-risk AI systems (HRAIS) to prevent known or foreseeable algorithmic discrimination. It includes requirements to conduct semi-annual HRAIS impact assessments, maintain a risk management policy, keep records and report issues, and provide transparency to consumers.