Algorithm Governance Roundup #19
Community spotlight: Office of the Australian Information Commissioner | Code of Practice for GPAI
Welcome to the last Algorithm Governance Roundup of 2024 – and it's a cracker! This has been a busy year for AI governance folk – from the passage and ongoing implementation of the AI Act, to a spate of investigations into large platforms under the Digital Services Act, to a concretisation of benchmarks, tools, and methodologies across audits and impact assessments – and I'm certainly looking forward to catching up on my reading list!

Our final community spotlight of the year is the Office of the Australian Information Commissioner (OAIC). We spoke about their recent privacy guidance for training and deploying generative AI systems and their collaboration with other regulators under the Australian Digital Platform Regulators Forum.

Last month, AWO published Generative AI’s open source challenge, written by my colleagues Nick Botton and Mathias Vermeulen, and commissioned by the Digital Infrastructure Insights Fund (D//F). They argue that policymakers should work to balance the risks and benefits of openness of Generative AI models through policy that improves independent researchers' involvement in model development and maintenance. Check it out and let us know your thoughts!

Many thanks and happy reading!

Esme
Subscribe
This Month's Roundup
In Australia, the Government has passed the Online Safety Amendment (Social Media Minimum Age) Bill which requires social media platforms to prevent users under the age of 16 from accessing their services. The regulated social media platforms will need to take reasonable steps, such as age assurance processes, and must ensure the privacy of personal information collected for such processes.

In Canada, the Ontario Human Rights Commission and the Law Commission of Ontario have published a Human Rights AI Impact Assessment. The methodology aims to assist developers and administrators of AI systems to identify, assess, minimise or avoid discrimination and ensure human rights obligations throughout the AI lifecycle.

In Europe, the European Commission hosted an online workshop on the evaluation of general-purpose AI models with systemic risks. Key risk topics included CBRN (chemical, biological, radiological, and nuclear threats), cyber, loss of control, discrimination, privacy, disinformation and other systemic risks to public health, safety, democratic processes or fundamental rights.

The European AI Office published the first draft of the General-Purpose AI Code of Practice. The draft was prepared by independent experts who were appointed as the Chairs and Vice-Chairs of the four Code of Practice working groups, with contributions from a multi-stakeholder consultation and workshop. It includes details on transparency and enforcement of copyright rules, a taxonomy of systemic risks, risk assessment methodologies and mitigation measures for GPAI with systemic risks.

The providers of the 19 Very Large Online Platforms (VLOPs) and Very Large Search Engines (VLOSEs) have published the first risk assessment and audit reports under the Digital Services Act. The reports include assessments to identify and analyse the risks stemming from their services, such as illegal content or disinformation, and the mitigation measures put in place. This includes TikTok, X, Instagram, Facebook and Google Search, and all the publications have been collected by Alex Hohlfield here.

The Commission has opened formal proceedings against TikTok on election risks under the Digital Services Act. This concerns TikTok’s obligation to properly assess and mitigate systemic risks linked to election integrity, in the context of the Romanian presidential elections. The proceedings will focus on TikTok’s recommender systems and risks linked to coordinated inauthentic manipulation and automated exploitation, and political advertisements and paid-for political content.

The Council of Europe has adopted the HUDERIA methodology to assess the impact of AI systems on human rights. It aims to help public and private actors to identify and address risks and impacts to human rights, democracy and the rule of law through the creation of a risk mitigation plan and regular assessments. In 2025, it will be complemented by a HUDERIA Model which will provide supporting materials and resources, tools and scalable recommendations.

In France, in advance of the AI Action Summit, the French government has launched the AI Convergence challenge to showcase initiatives working across public interest AI, the future of work, innovation and culture, trust in AI and global AI governance.

In Singapore, the Monetary Authority has published a report on AI Model Risk Management. The body conducted a thematic review of banks’ AI model risk management approaches and set out best practices across governance, oversight and risk management systems.

In the UK, the Department for Science, Innovation and Technology (DSIT) has opened a consultation on the AI Management Essentials (AIME) Tool. This is a self-assessment tool to help organisations assess and implement responsible AI management systems and processes. The deadline to respond to the consultation is 29 January.

DSIT and the Department for Business and Trade have also published fourteen records under the Algorithmic Transparency Recording Standard. This includes a record on the Ministry of Justice using an open-source tool to assist researchers, and another on its use of an AI writing assistant to improve job adverts.

Parliament’s Science, Innovation and Technology Select Committee has opened an inquiry into Social Media, Misinformation and Harmful Algorithms. This inquiry is investigating the links between recommender algorithms, generative AI and the spread of harmful or false content online, in the wake of the anti-immigration riots that occurred across the UK. The deadline to submit evidence to the inquiry is 18 December.

The Online Safety Act has come into force, and the illegal content codes of practice and categorisation approach have been laid in Parliament. The illegal content codes will provide compliance guidance to regulated services, including how to carry out risk assessments for illegal content. The categorisation approach sets out how Ofcom will designate the riskiest platforms which must comply with additional obligations (‘Category 1 services’).

Ofcom has published an open letter to online service providers regarding the application of the Online Safety Act to generative AI and chatbots. The letter explains that these AI systems will fall within scope if they are made available on a website that allows users to share the generated text, images or videos with others, or to create or upload their own chatbots so that they are available to others.

Ofcom is also calling for evidence to inform its report on researcher access to social media data. The deadline for submissions is 17 January 2025.

The Information Commissioner's Office has disclosed, in response to a Freedom of Information request, a Data Protection Impact Assessment for its deployment of Copilot for Microsoft 365.

In the US, the Department of Commerce and Department of State co-hosted the inaugural International Network of AI Safety Institutes meeting. It aims to promote global cooperation on AI safety and the initial members include Australia, Canada, France, Japan, Kenya, Republic of Korea, Singapore, UK, and the EU AI Office. The meeting covered three topics: 1) mitigations for risks associated with synthetic AI generated content, particularly transparency techniques and system safeguards; 2) evaluation and testing exercises for foundation models; and 3) risk assessments, with an endorsement of a Joint Statement on Risk Assessment of Advanced AI Systems.

To support the first topic, the US AI Safety Institute has published guidance on reducing the risks posed by synthetic content. It identifies a series of voluntary approaches to address risks from AI-generated content such as CSAM, impersonation and fraud. To support the second topic, the US AISI, UK AISI and Singapore AISI completed their first joint testing exercise on Meta’s Llama 3.1 405B across general academic knowledge, ‘closed domain’ hallucinations and multi-lingual capabilities.

The US government also announced the establishment of a Testing Risks of AI for National Security (TRAINS) Taskforce to bring together experts from the Departments of Commerce, Defense, Energy, Homeland Security, the National Security Agency (NSA) and National Institutes of Health (NIH) to address national security concerns.

The Christchurch Call Initiative on Algorithmic Outcomes has published a report on privacy preserving algorithm audit methods, in collaboration with partners OpenMined, Microsoft, LinkedIn and DailyMotion. For the project, four independent researchers audited the recommender systems of LinkedIn and DailyMotion using the open-source library PySyft to facilitate remote data access and differential privacy.
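For readers curious about the differential privacy element of such audits, the core mechanism can be sketched independently of PySyft's own API. The example below is illustrative only – the `dp_count` helper and its parameters are assumptions, not drawn from the report or from PySyft: a researcher's counting query over the platform's data is answered with Laplace noise calibrated to the query's sensitivity, so no individual user's presence can be inferred from the released figure.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0, rng=None):
    """Release a differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the true count by at most 1), so noise drawn from
    Laplace(scale = 1/epsilon) gives epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: count records below a threshold without exposing individuals.
noisy = dp_count(range(100), lambda v: v < 50, epsilon=1.0)
```

Smaller values of epsilon mean more noise and stronger privacy; remote-access frameworks such as PySyft wrap mechanisms of this kind so that the researcher queries the data without ever seeing the raw records.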
Research
Access to Data for Research: Lessons for the National Data Library, Minderoo Centre for Technology and Democracy

AI Hallucinations and Data Subject Rights under the GDPR: Regulatory Perspectives and Industry Responses, Theodore Christakis

AIR-Bench 2024: A Safety Benchmark Based on Risk Categories from Regulations and Policies, Yi Zeng, Yu Yang, Andy Zhou, Jeffrey Ziwei Tan, Yuheng Tu, Yifan Mai, Kevin Klyman, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li

A New Benchmark for the Risks of AI, Will Knight

A Taxonomy of Systemic Risks from General-Purpose AI, Risto Uuk, Carlos Ignacio Gutierrez, Lode Lauwaert, Carina Prunkl, Lucia Velasco

Clio: Privacy-Preserving Insights into Real-World AI Use, Alex Tamkin, Miles McCain, Kunal Handa, Esin Durmus, Liane Lovitt, Ankur Rathi, Saffron Huang, Alfred Mountfield, Jerry Hong, Stuart Ritchie, Michael Stern, Brian Clarke, Landon Goldberg, Theodore R. Sumers, Jared Mueller, William McEachen, Wes Mitchell, Shan Carter, Jack Clark, Jared Kaplan, Deep Ganguli, Anthropic

Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State, Amnesty International

Copyright, the AI Act and extraterritoriality, João Pedro Quintais, Institute for Information Law

EU AI Act - Governance Architecture and Implementation Framework, Future of Privacy Forum

Generative AI’s open source challenge: policy options to balance the risks and benefits of openness in AI regulation, Nick Botton, Mathias Vermeulen, AWO

Global Trends in AI Governance: Evolving Country Approaches, Sharmista Appaya, Jeremy Ng, World Bank Group

How the EU AI Act Can Increase Transparency Around AI Training Data, Zuzanna Warso, Maximilian Gahntz

Misguided: AI regulation needs a shift in focus, Agathe Balayn, Seda Gürses

Non-Public Data Access for Researchers: Challenges in the Draft Delegated Act of the Digital Services Act, Lukas Seiling, Ulrike Klinger, Jakob Ohme

Now you are speaking my language: why minoritised LLMs matter, Hannah Claus, Ada Lovelace Institute

One year later, how has the White House AI Executive Order delivered on its promises?, Aaron Klein, Cameron F. Kerry, Courtney C. Radsch, Mark MacCarthy, Sorelle Friedler, and Nicol Turner Lee

OpenAI’s new defense contract completes its military pivot, James O’Donnell, MIT Technology Review

Researcher access to platform data: Experts weigh in on the Delegated Act, John Albert, DSA Observatory

Technologist Roundtable: Key Issues in AI and Data Protection Post-Event Summary and Takeaways, Rob van Eijk, Stacey Gray, Marlene Smith, Future of Privacy Forum

The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template, Alessandro Mantelero

The Reality of AI and Biorisk, Aidan Peppin, Anka Reuel, Stephen Casper, Elliot Jones, Andrew Strait, Usman Anwar, Anurag Agrawal, Sayash Kapoor, Sanmi Koyejo, Marie Pellat, Rishi Bommasani, Nick Frosst, Sara Hooker

Spending wisely: Redesigning the landscape for the procurement of AI in local government, Mavis Machirori, Ada Lovelace Institute

What Makes a Good AI Benchmark?, Anka Reuel, Amelia Hardy, Chandler Smith, Max Lamparth, Malcolm Hardy, Mykel J. Kochenderfer, Stanford University Human-Centered Artificial Intelligence

Why ‘open’ AI Systems are actually closed, and why this matters, David Gray Widder, Meredith Whittaker, Sarah Myers West, Nature
Opportunities
Trinity College Dublin’s AI Accountability Lab is seeking three PhD students to research across a range of interdisciplinary AI accountability projects.

Human Rights Watch is hiring a Senior Researcher in Technology and Human Rights.
Deadline for submissions is 5 January.

The EU AI Office is hiring Policy Officers to work on implementation of the EU AI Act and Legal Officers to work on supervision and enforcement. 
Deadline for application is 15 January.

Full Fact is hiring a Policy Manager to lead their work on combatting misinformation. The role is hybrid and based in London.
Deadline for application is 15 January.

The United Nations Human Rights Office of the High Commissioner is calling for submissions to the Special Rapporteur on Freedom of Opinion and Expression’s report on ‘Freedom of Expression and Elections in the Digital Age’.
Deadline for submissions is 16 January.
Community Spotlight: Office of the Australian Information Commissioner
What is the Office of the Australian Information Commissioner (OAIC), and what work are you doing related to AI?
OAIC: The Office of the Australian Information Commissioner (OAIC) is an independent regulator responsible for promoting and upholding privacy and information access rights in Australia. Our regulatory activities include conducting investigations, handling complaints, reviewing decisions made under the Freedom of Information (FOI) Act and providing guidance and advice to the public, organisations and Australian Government agencies.

A current priority area for the OAIC is advancing online privacy protections for Australians. As part of this, we are working to respond to the privacy risks arising from AI, including the effects of powerful generative AI capabilities being increasingly accessible across the economy. The release of these technologies publicly and their distribution at no cost to the user amplifies the scale of potential privacy impacts. The OAIC is working to build awareness of privacy risks of these technologies and the obligations of the entities that we regulate. 

The OAIC recently published guidance on privacy and AI, covering the use of commercially available AI products and the development of generative AI – can you tell us about this?
OAIC: The OAIC’s recent privacy guidance on AI has been developed in recognition of the increasing concern from Australians about the privacy risks of AI systems. The OAIC’s 2023 Australian Community Attitudes to Privacy Survey (ACAPS) identified significant community concern with the use of personal information in AI systems, with 43% of Australians considering AI using their personal information to be one of the biggest privacy risks they face today. 

Our guidance articulates how the Australian Privacy Act 1988 applies to the development of generative AI models and the use of commercially available AI products, and sets the boundaries of what is and is not an appropriate use of personal information. Our ultimate goal is to make privacy compliance easier when it comes to AI, recognising that this benefits both consumers as well as organisations.
 
In publishing this guidance, we join a number of data protection authorities who have provided comment on the application of their respective data protection frameworks to either AI generally or generative AI specifically. We commenced this task with an appreciation that generative AI models are often developed for use across borders, meaning organisations developing these models need to engage with differing frameworks. To reduce the burden on businesses that operate globally, where possible we have sought to align our work with international guidance on privacy obligations in the context of AI. However, ultimately our guidance interprets the Australian Privacy Act as it currently stands, and there are a few features of our law that are important to note:
  • The Privacy Act does not have a legitimate interests basis for data processing or a business improvement exception.
  • Sensitive information can only be collected in very limited circumstances, and generally consent is required.
These features of Australian law have implications for when personal information, especially sensitive information, can be used in relation to AI systems, particularly generative AI models.

Key takeaways from our guidance for developing and training generative AI are that:
  • Developers should carefully consider whether their AI model will involve the collection, storage, use or disclosure of personal information, either by design or through an overly broad collection of data for training. Doing this early in the process will help to mitigate any privacy risks.
  • Personal information is a broad category, and the risk of data re-identification needs to be considered.
Our guidance for the use of commercially available AI products highlights that:
  • When looking to adopt a commercially available product, organisations should conduct due diligence to ensure the product is suitable for its intended uses.
  • Privacy obligations will apply to any personal information input into an AI system, as well as the output data generated by AI (where it contains personal information).
  • As a matter of best practice, organisations should not enter personal information, and particularly sensitive information, into publicly available generative AI tools, due to the significant and complex privacy risks involved. 
The National AI Centre developed a voluntary AI Safety Standard, what is it and how does it interact with the privacy regime?
OAIC: The Voluntary AI Safety Standard has been developed by the Australian Government’s National AI Centre to help organisations develop and deploy AI systems in Australia safely and reliably. The Standard consists of 10 voluntary guardrails that apply to all organisations across the AI supply chain, including transparency and accountability requirements. It does not seek to create new legal obligations, but rather helps organisations deploy and use AI systems in accordance with existing Australian laws. 
 
The AI Safety Standard does not replace obligations under Australia’s privacy laws. However, there is significant intersection between the Standard and the obligations under the Privacy Act and Australian Privacy Principles (APPs), including in relation to data quality and security, as well as transparency and governance obligations.

What is the Australian Digital Platform Regulators Forum, and what work has DP-REG done on AI?
OAIC: The Digital Platform Regulators Forum (DP-REG) is an avenue for Australian regulators to share information about, and collaborate on, cross-cutting issues and activities relating to the regulation of digital platforms. The heads of the Australian Competition and Consumer Commission (ACCC), Australian Communications and Media Authority (ACMA), OAIC and the Office of the eSafety Commissioner (eSafety) constitute DP-REG.

DP-REG was established in March 2022 to support a streamlined and cohesive approach to the regulation of digital platforms in Australia. There are a wide range of interventions underway across the Australian Government, including in response to the ACCC’s 2019 Digital Platforms Inquiry Final Report and as part of recent legislative reforms to strengthen Australia’s online safety regime. Cooperation is invaluable as regulators working across these initiatives have shared goals and face many of the same challenges – addressing emerging consumer harms, encouraging innovation while balancing protections, and countering the market power of these large, complex and diverse multinational entities.

The OAIC works together with other members to achieve DP-REG’s goals to build regulatory capacity, promote regulatory coherence and respond to emerging risks and opportunities. A critical and overarching focus for the OAIC is considering how privacy intersects with competition, consumer protection, online safety and data in issues that the various regulators consider.

This year, DP-REG has been focussed on understanding, assessing and responding to the benefits, risks and harms of technology, including AI models, and this work will continue as a strategic priority. This builds on past DP-REG work including publication of working papers on algorithms and the large language models used in generative artificial intelligence. DP-REG continues to monitor local and international developments in AI to contribute to the Australian Government’s response to AI.

What other AI governance work is the OAIC doing?
OAIC: In October 2024, the OAIC and 16 of our international data protection and privacy counterparts released a concluding joint statement on data scraping, which follows on from an initial joint statement released in 2023, and sets out further expectations about how social media companies can better protect personal information on their platforms. The concluding statement highlights that organisations should comply with privacy and data protection laws when using personal information, including from their own platforms, when developing large language models.
 
In relation to emerging technologies more broadly, Privacy Commissioner Carly Kind has recently released a determination that found that hardware retail chain Bunnings Group Limited breached Australians’ privacy by collecting their personal and sensitive information through a facial recognition technology (FRT) system in operation across 63 stores between November 2018 and November 2021. The OAIC has also published new privacy guidance for businesses considering using FRT in a commercial or retail setting.
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm.newsletter@awo.agency. We would love to hear from you!
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.


AWO
Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom