Algorithm Governance Roundup #10
|
Community Spotlight: Benedict Dellot, Ofcom | DSA Transparency Database
|
Welcome to October’s Algorithm Governance Roundup. This month we spoke to Benedict Dellot, a Technology Policy Principal at Ofcom, the UK’s communications regulator. We discussed the AI White Paper, Ofcom’s preparations for the Online Safety Bill, and the Digital Regulation Cooperation Forum’s algorithmic processing workstream, which includes work on generative AI.

Our team at AWO is accepting public interest applications for our algorithm governance services on a rolling basis throughout the year. We’re offering free audits, assessments, or strategic advisory up to the value of €5,000 each to organisations undertaking public interest work. We’re excited about finding great partners, so please get in touch at enquiries@awo.agency with a brief description of who you are, your algorithm governance challenge, and your timeline.

As a reminder, we take submissions. We are a small team who select content from public sources. If you would like to share content, please reply or send a new email to algorithm-newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you!

Many thanks and happy reading!
Esme Harrington, AWO
|
In the EU, the European Commission has launched the DSA Transparency Database. The Database catalogues the statements of reasons that online platforms must provide to users for content moderation decisions, as required by Article 17 of the Digital Services Act (a minimal sketch of querying the Database appears at the end of this roundup). The Spanish presidency of the EU Council has shared three discussion papers on the AI Act with the European Parliament and European Commission in preparation for the trilogue negotiations. The papers cover fundamental rights, sustainability, and workplace use. The presidency also circulated a document that proposes a tiered approach to foundation models alongside horizontal transparency obligations. In the first trilogue, lawmakers agreed on provisions concerning the classification of high-risk AI and general guidance for foundation models, including supervision.

In France, the CNIL has published its guidance on developing and using AI in compliance with data protection law. The guidance explains how to apply GDPR principles such as data minimisation and purpose limitation, how to complete data protection impact assessments, and how to comply with privacy by design during the development of AI.

In the Netherlands, the Netherlands Institute of Human Rights ruled that Vrije Universiteit Amsterdam did not discriminate against a student on the basis of race by using anti-cheating software. The applicant alleged that the software repeatedly failed to recognise her face because of her skin colour, requiring her to take exams with a bright light in her face. The Board’s judgment did not accept this allegation on consideration of the facts. The Racism and Technology Center argues that the ruling demonstrates how difficult it is to legally prove algorithmic discrimination.

In the UK, the Information Commissioner’s Office (ICO) has issued a preliminary enforcement notice against Snapchat for failing to properly assess the privacy risks of its generative AI chatbot, including to children. The government has launched the Fairness Innovation Challenge: UK companies can apply for up to £400,000 in investment to develop socio-technical approaches that address bias and discrimination in AI systems. The First-tier Tribunal has allowed Clearview AI’s appeal against the ICO. The Tribunal found that the ICO did not have jurisdiction to issue an enforcement notice and monetary fine against Clearview AI because the processing it undertook was beyond the material scope of the GDPR. The government’s Frontier AI Taskforce is establishing an AI safety research team. The Taskforce has also published its first progress report, announcing an advisory panel and partnerships with technical organisations. In the lead-up to this week's AI Safety Summit, the Department for Science, Innovation and Technology has published a discussion paper on Frontier AI and emerging processes for Frontier AI safety. The government has also published nine AI Safety Policies received from Google DeepMind, Anthropic, OpenAI, Microsoft, Amazon and Meta.

In the US, New York City has published its Artificial Intelligence Action Plan. The plan will develop a framework for city agencies to evaluate AI tools and risks, build skills, and support responsible implementation.

The United Nations has announced the High-Level Advisory Body on Artificial Intelligence. The Body will provide analysis and recommendations on the international governance of AI. It consists of 38 experts from industry, government and the public sector, academia, and civil society.
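For readers who want to explore the DSA Transparency Database mentioned above, the sketch below shows how one might pull a small sample of statements of reasons programmatically and tally the grounds cited. It is a minimal illustration only: the endpoint path, query parameters, and the decision_ground field are assumptions rather than the documented API, so consult the Database’s own developer documentation at https://transparency.dsa.ec.europa.eu before relying on them.

```python
# Minimal sketch: fetch a sample of statements of reasons from the DSA
# Transparency Database and count the grounds cited. The endpoint path,
# parameters, and field names below are illustrative assumptions, not the
# documented API.
from collections import Counter

import requests

API_URL = "https://transparency.dsa.ec.europa.eu/api/v1/statements"  # assumed path


def fetch_statements(platform_name: str, api_token: str, limit: int = 50) -> list[dict]:
    """Fetch recent statements of reasons submitted by one platform (illustrative)."""
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {api_token}"},
        params={"platform_name": platform_name, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])


if __name__ == "__main__":
    statements = fetch_statements("ExamplePlatform", api_token="YOUR_TOKEN")
    # Tally the (assumed) decision ground recorded in each statement of reasons.
    grounds = Counter(s.get("decision_ground", "unknown") for s in statements)
    print(grounds.most_common())
```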
|
Access Now and the European Center for Not-for-Profit Law have published Towards Meaningful Fundamental Rights Impact Assessments Under The DSA. The report covers minimum requirements for identifying and assessing fundamental rights impacts, stakeholder involvement, and benchmarks to assess the negative effects of automated content moderation on freedom of expression.

The Ada Lovelace Institute has published Foundation Models in the Public Sector. The report identifies potential use cases and attendant risks, explores the practices needed for safe and ethical deployment, and reviews UK legislation and guidance to identify where models are directly addressed or mentioned.

The AI Now Institute has published Computational Power and AI, which explores how industry concentration acts as a shaping force in how computational power is manufactured and accessed by tech developers.

Researchers at the Allen Institute for AI, Mozilla Foundation, Stanford University, and the University of Washington have published The Surveillance AI Pipeline. The article analyses three decades of computer vision research papers and patents to identify a focus on extracting data about human bodies, and argues that this research has powered an ongoing expansion of surveillance.

The Data Provenance Initiative has published The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing & Attribution in AI. The paper audits and traces more than 1,800 text-to-text fine-tuning datasets to improve transparency. The accompanying Data Provenance Explorer catalogues each dataset's sources, licenses, creators, and metadata.

Data & Society has published AI Red-Teaming Is Not a One-Stop Solution to AI Harms. The policy brief explores best practices and red-teaming's limitations in mitigating real-world harms and holistically assessing safety.

The Department for Science, Innovation and Technology and Oxford Information Labs have published AI Standards Hub: Pilot Evaluation. The report reviews the progress of the UK AI Standards Hub, which aims to enable active stakeholder participation in international AI standardisation efforts.

A researcher at the Hertie School has published When is a Decision Automated? A Taxonomy for a Fundamental Rights Analysis. The article explores case studies in migration, asylum, and mobility, and proposes a taxonomy of automated decision-making (ADM) in the public sector. It also identifies the fundamental rights at stake for individuals and the sector-specific legislation applicable to ADM.

Researchers at MIT, Princeton, and Stanford have published the Foundation Model Transparency Index. The index uses 100 indicators to assess the transparency of developers' practices around developing and deploying foundation models. Indicators cover social aspects such as labour and environment, and technical aspects such as data, compute, and model training processes. In response, EleutherAI has published How the Foundation Model Transparency Index Distorts Transparency, which highlights limitations of the Index, including that it encourages gamification.

The Minderoo Centre for Technology and Democracy has published Policy Brief: Generative AI. The brief maps out an ethical framework for the governance of generative AI, through the creation of a UK AI Bill.

The Mozilla Foundation has published IRL Podcast: With AIs Wide Open. The episode covers the benefits and risks of open source large language models and interviews researchers working on responsible open source datasets and models.
Researchers at Stiftung Neue Verantwortung have published Auditing TikTok: The Research API Falls Woefully Short. The paper compares the number of data variables that can be collected through TikTok’s research API with those that can be scraped directly from the platform. It finds that the API is inadequate: it offers too few variables and is subject to restrictive access.
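As a rough illustration of the kind of coverage audit the report describes, the sketch below compares two field inventories, one observed directly on a platform and one returned by a research API, and reports what the API omits. The field names are hypothetical placeholders, not TikTok’s actual schema or API.

```python
# Illustrative coverage check: which variables visible on the platform are
# missing from a research API's responses? Field names are hypothetical
# placeholders, not TikTok's actual schema.

# Variables a researcher can observe directly on the platform (assumed).
scraped_fields = {
    "video_id", "description", "like_count", "comment_count", "share_count",
    "view_count", "hashtags", "music_id", "effect_ids", "is_ad",
}

# Variables returned by the research API (assumed).
api_fields = {
    "video_id", "description", "like_count", "comment_count", "share_count",
    "view_count", "hashtags",
}

missing = sorted(scraped_fields - api_fields)
coverage = len(api_fields & scraped_fields) / len(scraped_fields)

print(f"API coverage of platform-visible variables: {coverage:.0%}")
print("Variables unavailable via the API:", ", ".join(missing))
```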
|
AlgorithmWatch is calling for applications to the second algorithmic accountability reporting fellowship. Deadline for applications is 12 November.
The Federal Trade Commission is calling for research presentations for PrivacyCon 2024. Topics could include automated systems, deep fakes, and worker surveillance. Deadline for applications is 06 December.
The Social Science Research Council is calling for applications to the third cohort of the Just Tech Fellowship. Fellows receive two-year awards of $100,000 annually. Previous fellows have included AI researchers. Deadline for applications is 31 January at 11:59 EST.
The UK government's Frontier AI Taskforce is seeking expressions of interest. In particular, it is seeking individuals with experience building safety infrastructure and developing risk assessments to inform policymakers on LLMs. Expressions of interest are accepted on an ongoing basis.
|
AI Fringe Hybrid: 30 October - 03 November, London
Alongside the UK Government’s AI Safety Summit, there will be a series of events hosted by academia, civil society and industry. The Fringe will include a series of panels and keynotes that consider AI's specific domains and use cases, cross-cutting challenges, and how to create a responsible AI ecosystem.
ODI Summit 2023 Virtual: 07 November, 10:00 - 17:00 GMT
The ODI Summit brings together civil society, the public and private sectors, and academia to discuss data and AI.
The Turing Lectures: The future of generative AI Hybrid: 06 December, 19:00 - 20:30 GMT
The event will explore the potential futures of generative AI, and what they could mean for AI applications as the technology progresses. Tickets range from £0 to £20.
|
Community Spotlight: Benedict Dellot, Ofcom
|
Benedict Dellot is a Technology Policy Principal at Ofcom, the UK’s communications regulator. He works on research and policy development across a range of topics including generative AI, recommender systems, and AI regulation. He also contributes to the Digital Regulation Cooperation Forum’s Algorithmic Processing Project.
Q: Could you introduce us to Ofcom and provide an overview of Ofcom’s preparatory work to become the regulator for the Online Safety Bill?
Benedict: Ofcom is the UK’s communications regulator. That means we regulate everything from telecoms to spectrum, through to news media plurality and broadcast content on TV and radio. We are also responsible for regulating online safety now that the Online Safety Bill has received Royal Assent.
The Bill creates new safety duties for online services that operate in the UK. This includes a range of services that allow a user to share content with others online, such as social media platforms, gaming sites, dating apps and chat forums. It also includes search services.
The safety duties require in-scope services to take measures to protect their users. This includes taking steps to prevent them from encountering illegal content, like terror and fraud content, as well as to prevent child users from encountering particular types of legal but harmful content, like self-harm and eating disorder material. The Bill also requires porn sites to age-gate their services, preventing children from accessing adult content.
Ofcom has been busy readying itself to oversee the new regime. One of our most important jobs has been to build our team of policy and technology experts, with close to 300 people now working on online safety in Ofcom. This includes a dedicated team of technical experts, including data scientists and machine learning specialists.
We’ve also been gathering the evidence we need to produce our guidance for regulated services, which will include several Codes of Practice that inform services of the steps they can take to meet their new duties.
Q: Under the OSB, Ofcom has powers to conduct investigations and algorithmic assessments. Could you tell us about these powers and Ofcom’s preparations?
Benedict: The Online Safety Bill will require in-scope services to use appropriate measures to protect their users from illegal and harmful content – and we’d expect many of these measures to involve the use of algorithmic systems, like age estimation tools and content moderation systems.
To supervise and enforce the regime, Ofcom may therefore need to conduct assessments of services’ algorithmic systems, where this is proportionate and relevant to Ofcom’s functions. This will help us understand if the algorithmic systems are well deployed and effective. For example, we may wish to assess whether a service’s content moderation system is capable of detecting and removing terrorist content. Or we may wish to assess whether a service’s age estimation technology is effective at detecting underage users.
In those cases, we could choose to conduct an empirical test to understand the performance of the technology. We could also choose to examine the wider context in which the technology is deployed, such as whether staff have been appropriately trained to use the technology and interpret its output, and whether the service has done the correct due diligence on suppliers if the system was procured externally.
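As a purely illustrative example of the kind of empirical test described here, the sketch below scores a content moderation system’s decisions against a labelled sample and reports how much violating content it catches and how much benign content it wrongly flags. It is a minimal sketch with invented data, not a description of Ofcom’s actual assessment methodology.

```python
# Minimal sketch of an empirical performance test for a content moderation
# system: compare the system's decisions against a labelled sample and report
# detection and over-removal rates. The data is invented for illustration.

samples = [
    # (ground_truth_is_violating, system_flagged)
    (True, True), (True, False), (True, True), (False, False),
    (False, True), (False, False), (True, True), (False, False),
]

true_positives = sum(1 for truth, flagged in samples if truth and flagged)
false_negatives = sum(1 for truth, flagged in samples if truth and not flagged)
false_positives = sum(1 for truth, flagged in samples if not truth and flagged)
true_negatives = sum(1 for truth, flagged in samples if not truth and not flagged)

# Share of genuinely violating content the system detected.
detection_rate = true_positives / (true_positives + false_negatives)
# Share of benign content the system wrongly flagged.
over_removal_rate = false_positives / (false_positives + true_negatives)

print(f"Detection rate (recall): {detection_rate:.0%}")
print(f"Over-removal rate (false positive rate): {over_removal_rate:.0%}")
```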
We’ll continue to build our understanding of algorithmic assessment methods over the coming year, drawing on lessons learned from other regulators, in particular those involved in the Digital Regulation Cooperation Forum (DRCF).
Q: Could you tell us more about the Digital Regulation Cooperation Forum and its work on AI governance?
Benedict: The Digital Regulation Cooperation Forum brings together four of the largest digital regulators: Ofcom, the Financial Conduct Authority (FCA), the Information Commissioner’s Office (ICO), and the Competition and Markets Authority (CMA). The DRCF was created to enable us to share best practice and insights, as well as to improve coherence in our regulatory approaches.
We have a range of projects and activities underway. This includes a joint horizon scanning programme, where we’re collectively examining the impact of emerging technologies, such as quantum computing and the metaverse. We’ve also just completed an exercise to understand industry’s appetite for a multi-agency advisory service. This would enable the four regulators to provide streamlined guidance to firms at the forefront of AI and digital innovation. The Government has just announced funding to take this idea forward, supporting the creation of a DRCF-hosted ‘innovation hub’ to be piloted over the next two years.
On top of this, the DRCF has hosted an Algorithmic Processing Project, which among other things has been a space to share best practice on algorithmic assessments. We’ve also been looking at what we can do to support the responsible development of the third-party algorithmic audit ecosystem, which encompasses academics, independent researchers and private firms.
A particular focus of the DRCF this year has been building our collective understanding of generative AI and its implications for our sectors, whether that’s finance, telecommunications or social media. Later this year we’re planning to hold a roundtable with industry adopters of generative AI to hear more about their experiences, the challenges they face using these tools responsibly, and what we can do as regulators to help them to innovate with more confidence.
Q: Could you tell us about the UK’s AI White Paper and Ofcom’s approach to the framework?
Benedict: The AI White Paper is a non-statutory framework to guide AI regulation in the UK. It consists of a number of principles, including safety, security, transparency, and fairness. The White Paper asks regulators to promote adherence to these principles using the existing powers at their disposal, placing them centre stage in the new regime. It’s different to the EU’s model, which is statutory and where central authorities will be defining in law what counts as a high-risk system. Ofcom is supportive of the UK government’s approach, which we believe provides regulators with the necessary flexibility to respond to new risks as they arise.
The White Paper also establishes a number of central functions within government to better facilitate AI regulation, including horizon scanning and risk monitoring capabilities. These will enable government and regulators to collectively spot emerging AI technologies, identify regulatory gaps or confusion, and intervene where required.
At Ofcom, we’re upskilling colleagues on the AI principles so that they can apply them to their work. We’re also exploring the opportunity to establish a standardised procedure for monitoring AI risks and opportunities that applies across our regimes and sectors.
|
:// Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.
A W O
Wessex House Teign Road Newton Abbot TQ12 4AA United Kingdom