Algorithm Governance Roundup #15
Community Spotlight: Edward Santow, Human Technology Institute at the University of Technology Sydney | Code of Practice for GPAI
Welcome to July’s Algorithm Governance Roundup. I'm excited to bring you this bumper edition after my summer break!

This month I spoke to Edward Santow, one of the Directors of the Human Technology Institute at the University of Technology Sydney in Australia. We spoke about building strategic AI expertise amongst decision-makers, the impact of AI on corporate governance obligations, and the mandatory NSW AI Assessment Framework for public agency use of AI.

Join our team! AWO are hiring a solicitor to work on leading litigation and advisory matters concerning technology, data rights and information law. Applications close on 04 August. To learn more about our legal team, check out this recent interview between David Carroll and our Legal Director Ravi Naik about data rights in the age of AI!

As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content please reply or send a new email to algorithm.newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance.

I would love to hear from you!

Many thanks and happy reading!

Esme Harrington
Subscribe
The Roundup
In China, the World AI Conference and High-Level Meeting on Global AI Governance issued the Shanghai Declaration on global AI governance. It covers AI development and AI safety, with an emphasis on data protection, cybersecurity and AI-specific rules, and advocates for a global AI governance mechanism with a central role for the United Nations.

In the EU, the European Commission has shared its preliminary view that X is breaching the DSA. There are three areas of non-compliance: 1) dark patterns, related to the design and operation of its verified accounts feature; 2) advertising transparency, related to its failure to provide a searchable and reliable advertisement repository; and 3) data access, related to failures to provide access to public data, prohibiting independent data collection, and charging disproportionately high fees for access to its API.

The European Data Protection Board has published a statement recommending that Member States designate their national Data Protection Authorities (DPAs) as market surveillance authorities under the AI Act, given their expertise in regulating AI. It also argues that data protection law and the AI Act will often be complementary instruments, and that regulated actors will benefit from a single point of contact.

The European Data Protection Board has also published guidelines on generative AI for EU institutions, bodies, offices and agencies. These include guidance on determining whether a use of generative AI involves the processing of personal data and on when to conduct a data protection impact assessment.

The European Innovation Council has published guidelines on generative AI and copyright. They include an overview of the current legislative landscape, an examination of the risks related to ownership and commercial exploitation of AI-generated content, and a compliance checklist to assist in evaluating generative AI service providers.

In France, the French Government and the People’s Republic of China published a joint declaration on AI, committing to further international cooperation, including at France’s Summit on Artificial Intelligence in 2025.

In Italy, the Government has proposed an AI Law to the Senato della Repubblica, the upper house of the Italian Parliament. The Bill sets out principles for the development and use of AI, including sector-specific obligations for healthcare; designates the Agency for National Cybersecurity as the national competent authority under the AI Act; and clarifies copyright rules related to human authorship. The Bill has been assigned to two parliamentary commissions and discussed both there and at a plenary meeting.

Senato della Repubblica's Impact Assessment Office has published research on AI governance and regulation. The research summarises the Italian strategy for AI, which assigns a central role to the Presidency of the Council of Ministers, focuses on national security interests, and provides for the creation of an AI public agency. The document also highlights ongoing legislative challenges concerning data protection, predictive justice and copyright.

In Switzerland, the Federal Council has published research on dark patterns (available in French). The research examines sixteen of the most common dark patterns and analyses their legality under constitutional law, competition law, data protection law and contract law.

In the Netherlands, the Autoriteit Persoonsgegevens, the Data Protection Authority, has published guidance on data scraping by individuals and private organisations, with a particular focus on scraping to train AI systems. The guidance states that scraping will almost always violate the GDPR due to a lack of legal basis, noting that legitimate interest should not be relied upon if the only interest is commercial.

In Spain, the Agencia Española de Protección de Datos, the Data Protection Authority, has published guidance on addictive patterns. The guidance provides a fourfold taxonomy of addictive design: 1) forced action (e.g. gamification and attention capture), 2) social engineering (e.g. urgency and personalisation), 3) interface interference, and 4) persistence, and explores the implications for data protection.

In the UK, Ofcom has published two consultations on draft guidance for transparency reporting and information gathering under the Online Safety Act.

Ofcom has also published two pieces of research concerning generative AI and online safety. The first report, on deepfakes, presents survey results on their impact on online safety and analyses potential mitigation measures. The second report, on red teaming, examines its potential as a safety measure and includes a methodology and ten best practices. Ofcom is open to feedback via email at technologypolicy@ofcom.org.uk.

In the US, the Senate has passed the Kids Online Safety Act, which places a duty of care on online platforms. It requires them to minimise deceptive and addictive design features, enable children to opt out of algorithmic recommendations, conduct annual independent audits, and provide researchers with access to data. The Bill has been criticised by a range of actors, including the American Civil Liberties Union and the Electronic Frontier Foundation, over concerns about censorship and the privacy implications of age verification systems. The Bill will also need to pass in the House of Representatives.

The Department of Commerce’s National Telecommunications and Information Administration has published policy recommendations in support of making key components of AI models more open. It also calls for the US government to actively monitor the risks of open-weight models via an evidence-gathering program.

The EU, UK and US competition authorities have published a joint statement on competition in the generative AI and foundation model market. The statement highlights the concentration of key inputs and the entrenchment of market power as key risks, and proposes fair dealing, interoperability and choice as principles to enable competition.

The OECD has updated its Principles on AI to cover general-purpose and generative AI, and more directly address challenges involving privacy, intellectual property, safety, information integrity and environmental sustainability. It also explicitly calls for actors across the AI value chain to adopt Responsible Business Conduct.
Research
Recipients of the Best Paper Award at this year's ACM Conference on Fairness, Accountability and Transparency:
  • Akal Badi ya Bias: An Exploratory Study of Gender Bias in Hindi Language Technology, Rishav Hada, Safiya Husain, Varun Gumma, Harshita Diddee, Aditya Yadavalli, Agrima Seth, Nidhi Kulkarni, Ujwal Gadiraju, Aditya Vashistha, Vivek Seshadri and Kalika Bali
  • Algorithmic Pluralism: A Structural Approach to Equal Opportunity, Shomik Jain, Vinith Suriyakumar, Kathleen Creel and Ashia Wilson
  • Auditing Work: Exploring the New York City Algorithmic Bias Audit Regime, Lara Groves, Jacob Metcalf, Alayna Kennedy, Briana Vecchione and Andrew Strait
  • Learning about Responsible AI On-The-Job: Learning Pathways, Orientations, and Aspirations, Michael Madaio, Shivani Kapania, Rida Qadri, Ding Wang, Andrew Zaldivar, Remi Denton and Lauren Wilcox
  • Real Risks of Fake Data: Synthetic Data, Diversity-Washing and Consent Circumvention, Cedric Deslandes Whitney and Justin Norman
  • Recommend Me? Designing Fairness Metrics with Providers, Jessie J. Smith, Aishwarya Satwani, Robin Burke and Casey Fiesler
The AI Act Roller Coaster: How Fundamental Rights Protection Evolved in the EU Legislative Process, Francesca Palmiotto

AI governance tracker of each country per region, Alix Ramillon, Effective Altruism

Analyzing TikTok’s “Other Searched For” Feature, AI Forensics and Interface

Artificial Intelligence and Human Rights: Using AI as a weapon of repression and its impact on human rights, H. Akin Ünver, European Parliament Think Tank

Code & conduct: How to create third-party auditing regimes for AI systems, Lara Groves, Ada Lovelace Institute

Data Rights in the Age of AI, David Carroll, Justin Hendrix with Ravi Naik

International Scientific Report on the Safety of Advanced AI, UK Department for Science, Innovation and Technology and the AI Safety Institute

Lessons from the FDA for AI, Sarah Myers West and Amba Kak, AI Now

Light on Safety: TikTok Lite Sacrifices User Protections in the Global Majority, Odanga Madung, Claudio Agosti and Salvatore Romano, AI Forensics and Mozilla

Open Problems in Technical AI Governance, Anka Reuel, Ben Bucknall and others

Report on Safer Social Media and Online Platform Use for Youth, The Kids Online Health and Safety Task Force, Substance Abuse and Mental Health Services Administration

Under the radar? Examining the evaluation of foundation models, Elliot Jones, Mahi Hardalupas, William Agnew, Ada Lovelace Institute

When content moderation is not about content: How Chinese social media platforms moderate content and why it matters, Luzhou Li and Kui Zhou
Opportunities
AWO is hiring a Solicitor to join our team. You will work on leading litigation and advisory matters concerning technology, data rights and information law. The role is remote and open to UK and EU qualified lawyers.
Application deadline is 04 August.

The EU AI Office has opened a call for expressions of interest to participate in drawing up the Code of Practice for general purpose AI (GPAI) under the AI Act. Eligible stakeholders across industry and civil society will be invited to join the Code of Practice Plenary. This is structured across four Working Groups that will convene virtually for drafting sessions between September 2024 and April 2025.
Application deadline is 25 August.

The EU AI Office has also published a consultation on trustworthy GPAI. The consultation covers transparency and copyright; risk taxonomies, assessments and mitigations for GPAI with systemic risk; and reviewing and monitoring the Codes of Practice.
Consultation response deadline is 10 September.

The French Ministry for Europe and Foreign Affairs’ Laboratory for Women’s Rights Online is calling for projects to identify, prevent and curb online and technology-facilitated gender-based violence. The selected projects will receive support from the Laboratory’s members and funding from the Ministry.
Application deadline is 23 September.
Community Spotlight: Edward Santow, Human Technology Institute at the University of Technology Sydney
Edward Santow is one of the directors of the Human Technology Institute at the University of Technology Sydney in Australia. He is also an author of the New South Wales (NSW) AI Assessment Framework and a member of the AI Review Committee. Edward has a background in human rights law, having previously served as Australia’s Human Rights Commissioner. We spoke about building strategic AI expertise amongst decision-makers, the impact of AI on corporate governance obligations, and the mandatory NSW AI Assessment Framework for public agency use of AI.

What is the Human Technology Institute at the University of Technology Sydney, and what work are you doing on AI governance?
Edward: The Human Technology Institute at University of Technology Sydney (UTS) has three labs focused on artificial intelligence. The first lab is focused on building strategic expertise: the ability of individuals in decision-making roles to have the basic understanding necessary to make good strategic decisions about when and how to use AI systems. We have noticed that a common cause of AI incidents is a lack of strategic expertise rather than a failure of technical skills. Therefore, we aim to improve strategic expertise amongst senior decision-makers in government, industry, and civil society.

Our second lab, led by Professor Sally Cripps, is focused on researching and applying tools (particularly causal Bayesian systems) to social and policy problems.

The third lab addresses two challenges facing AI law and policy. The first concerns the application and enforcement of existing laws to the development and use of AI. This is not an unregulated space – most of our law is technology neutral – so we work with key stakeholders, such as government, industry, and civil society, to apply and comply with legislation. The second challenge concerns the gaps in existing legislation, which we identify and address by developing new policies.

One of our recent projects focuses on AI and corporate governance. The obligations required by corporate governance legislation are generally informed by the way companies operate. For example, the core duties in corporate governance legislation have not changed to explicitly require company directors to consider cybersecurity. However, organisational and technological changes mean companies now manage sensitive information electronically, so company directors consider cybersecurity as part of their corporate governance obligations. The same applies to companies incorporating AI systems into their operations, for example where a bank uses an AI system to make loan decisions or to optimise important functions. Where this is the case, company directors’ responsibilities will change accordingly, and they will need to ask searching questions of leadership and senior executives about a range of AI-related issues, such as data provenance, data handling, and personal data.

What is the New South Wales AI Assessment Framework?
Edward: In 2021, the New South Wales (NSW) Government decided to develop the NSW AI Assurance Framework to help government agencies understand their obligations during the development and deployment of AI systems. I was invited to be one of the authors of the framework. It was developed in a highly iterative process, with over seventy drafts before the framework was submitted to the Government for approval.

The NSW AI Assurance Framework became mandatory in March 2022. In 2023, my colleague at the Human Technology Institute, Professor Nick Davis, was invited to lead a review of the AI Assurance Framework. Based on this review, the NSW Government published a new iteration called the NSW AI Assessment Framework. It provides a methodology for departments and agencies to conduct a rigorous self-assessment of risks alongside guidance on the laws and policy applicable throughout the lifecycle of AI, including procurement. This includes questions relating to community benefit, fairness, privacy and security, transparency, and accountability, aimed at identifying risk level and necessary mitigations. This risk level will determine whether the body needs to submit the assessment to the AI Review Committee, proceed without changes, make changes, or halt the project altogether.

I’m a member of the AI Review Committee, which is responsible for reviewing and providing independent advice on certain assessments conducted under the Framework. The Committee consists of government representatives, academics, civil society, and industry. There are three circumstances that trigger an independent review: 1) the project is worth more than AUD 5 million, 2) the project is funded through the NSW Government’s Digital Restart Fund, or 3) the department or agency assesses the project as high risk. When a project is referred to the Committee, we engage in a dialogue with the department or agency to investigate what controls have been put in place, what changes need to be made, and whether the project should be paused or halted in its current form.

In principle, the discussions facilitated by the Review Committee are confidential. This is to encourage public departments and agencies to openly share their concerns without fear of criticism. Once concluded, the responsible Minister has the discretion to decide whether to make the deliberations of the review public or not. Either way, both the assessment and review are a useful transparency mechanism providing the necessary precondition for public bodies to be accountable to Government.

How has the Framework been deployed? What impact have you seen and expect going forwards?
Edward: It is mandatory for NSW departments and agencies to use the Framework when designing, developing, deploying, procuring, or using systems containing AI components. We have found that certain agencies, including Transport for NSW, are particularly heavy users of AI and the Framework. The department has a range of use cases, from low-risk logistics, where AI can offer efficiencies in timetabling, to some higher-risk cases. We have seen real value in the department repeatedly undertaking the assessment process, building institutional strength and competency.

The success of the Framework requires getting both the content and the process right. In terms of content, it requires distilling masses of legislation, best practice principles, and technical standards to ensure we identify the right requirements across the AI lifecycle. This will continue to be a work in progress, and we view the Framework as a living document. On the other hand, process requires some form of culture change within departments and agencies, encouraging them to take a conscientious approach and incorporate the Framework throughout AI projects rather than as a last-minute exercise.

This model requires public bodies to have the necessary strategic skills to conduct self-assessments, which is something that is being improved at both the State and Federal level. This is why the Human Technology Institute are focused on strategic upskilling: ensuring government and other actors have the “minimum viable understanding” of AI (a phrase coined by my colleague Professor Nick Davis) to make good decisions.
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm.newsletter@awo.agency. We would love to hear from you!
Subscribe
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.


Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom
Powered by EmailOctopus