Algorithm Governance Roundup #9
Community Spotlight: Safe AI Companion Collective | OECD catalogue of tools and metrics for trustworthy AI
Welcome to September’s Algorithm Governance Roundup. We hope you had a great summer. This month our community spotlight is on the Safe AI Companion Collective (SAICC), a collective of legal and technical experts working on issues related to AI companion chatbots. We spoke to the team about the SAICC’s complaints against Chai, an AI companion chatbot provider, which they filed with the Belgian data protection and consumer protection authorities.

As a reminder, our team at AWO is accepting public interest applications for our algorithm governance services on a rolling basis throughout the year. We’re offering free audits, assessments, or strategic advisory, up to the value of €5,000 each to organisations undertaking public interest work. We’re excited about finding great partners, so please get in touch at enquiries@awo.agency with a brief description of who you are, your algorithm governance challenge, and your timeline.

We take submissions. We are a small team that selects content from public sources, so if you would like to share content, please reply or send a new email to algorithm-newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you!


Many thanks and happy reading!


AWO team
This Month's Roundup
In Europe, the European Commission has published a case study that applies the DSA risk management framework to Russian disinformation campaigns. The framework offers a methodology for risk assessment and mitigation, and establishes baseline benchmarks for future audits.

The Council of Europe has published the latest working draft of its Framework Convention on AI, Human Rights, Democracy and the Rule of Law. The draft sets out principles including transparency and oversight, accountability, equality and non-discrimination, safety and security, privacy, and remedies.

The European Court of Human Rights has ruled on the use of facial recognition technology (FRT) by police in Moscow. The Court found that the use of FRT was a particularly intrusive interference with the right to private life under Article 8 and therefore required the highest level of justification.

In France, the CNIL has opened a call for submissions on how the GDPR applies to AI. In particular, it is interested in purpose limitation; methods of selecting, cleaning and minimising data; data protection by default; and the conditions under which actors can rely on legitimate interest as a legal basis for processing the personal data contained in training datasets. Submissions should be sent to ia@cnil.fr.

In Spain, the Spanish Council of Ministers has approved the creation of the Spanish Agency for the Supervision of AI (AESIA). This is the first European agency created specifically for AI oversight and will be the Spanish national supervisory authority under the AI Act.

In the UK, the Government has announced it will host a global AI Safety Summit in November. The event will convene international governments, industry and researchers to develop a shared approach to the safe development and use of frontier AI technology. Civil society organisations have critiqued the Summit’s focus on a narrow definition of AI safety and existential risk. Groups including the Ada Lovelace Institute have recommended a wider definition of harms and risks that centres communities and affected people.

The Digital Regulation Cooperation Forum (CMA, Ofcom, ICO and FCA) has announced a new pilot advisory service to assist businesses launching AI and digital innovations. The service will give businesses tailored advice on how to meet regulatory requirements.

The Parliamentary Science, Innovation and Technology Committee has heard evidence from the new Chief Scientific Adviser, Dame Angela McLean, on her work priorities, including AI governance.

The Centre for Data Ethics and Innovation and techUK have developed a portfolio of AI assurance techniques. The portfolio provides guidance and real-world examples to support the development of trustworthy AI. The portfolio is open for submissions, which should be sent to ai.assurance@cdei.gov.uk.

In the US, the White House has secured voluntary AI-related commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The eight commitments include independent security testing; information sharing with government, civil society and academia; processes to enable third-party discovery and reporting of vulnerabilities; and watermarking for audio and video. The initiative is the first of its kind; however, the commitments have been criticised as broad and unspecific, including by Professor Emily Bender, who pointed to their focus on spurious risks, the carve-outs for synthetic text, and the lack of commitments around data and model documentation.

In New York, enforcement of the anti-bias law for hiring algorithms has begun, following some delay. Employers using hiring algorithms must disclose their use, submit the algorithms for an independent audit, and subsequently publish the results of the audit.

In Montana, a new law restricting the use of facial recognition technology (FRT) has taken effect. The law prohibits state and local agencies, including law enforcement, from using real-time FRT. Retrospective FRT can be used under certain circumstances, subject to human review and audit procedures. Third-party FRT vendors must inform individuals in writing that their biometric data is being collected or stored; state the specific purpose and the length of time for which data is being collected, stored, or used; and obtain written consent before capturing or otherwise obtaining individuals’ data.

Common Sense Media has developed an independent AI ratings and reviews system for AI products. It will assess AI products on their responsible AI practices, handling of private data, and suitability for children, amongst other factors. The project will have a particular focus on AI products used by children and educators.

The OECD has published a catalogue of tools and metrics for trustworthy AI. These include technical tools to mitigate bias, measure performance, and audit systems; and non-technical tools that focus on procedural processes and certification.

Foundation models and Generative AI

In the UK, the House of Lords Communications and Digital Committee has launched an inquiry into Large Language Models to “examine what needs to happen over the next 1–3 years to ensure the UK can respond to their opportunities and risks”. The inquiry will evaluate the work of Government and regulators, examine how well the AI White Paper addresses current and future technological capabilities, and review the implications of approaches taken elsewhere in the world.

The Prime Minister and Technology Secretary have announced £100 million in initial funding for a Foundation Model Taskforce. The Taskforce will invest in foundation model infrastructure and procurement in the public sector, with pilots launching in the next six months. It will aim to encourage broad adoption of the technology and to act as a global standard bearer for AI safety.

The Digital Regulation Cooperation Forum has held a workshop on the implications, risks and benefits of Generative AI for each regulator’s work and remit. The DRCF is keen to receive input from external stakeholders on these issues, which should be sent to drcf@ofcom.org.uk.

In China, the Cyberspace Administration is revising the Draft Measures for the Management of Generative AI Services. The provisions focus on the manipulation of public opinion through false information alongside data leaks, fraud and privacy breaches. The Measures will also impose safeguards for personal information and user input.

In Japan, the G7 has published the Hiroshima Process on Generative AI. The report sets out trends and incidents related to Generative AI, and outlines the underlying principles for a forthcoming voluntary code of conduct. Previously, the G7 Data Protection Authorities also published a statement on Generative AI and discussed their approach to regulation at the first Japan Privacy Symposium. The regulators' main concerns are legal basis; the need for security safeguards; and the need for mitigation and monitoring measures to ensure personal data is accurate and complete, and is not discriminatory or unlawful. G7 members Italy, Canada, Japan and the United States have all investigated or engaged with OpenAI on data protection compliance.

The OECD has published a report on Generative AI to inform policy considerations and support decision makers. The report explores the societal and policy challenges of the technology, including its impact on labour markets, copyright, societal biases, disinformation, and manipulated content.
Research and Articles

Accountable Tech, AI Now and EPIC have published the Zero Trust AI Governance Framework. The report provides three principles to guide policymakers as they address the societal risks posed by AI: 1) time is of the essence - start by vigorously enforcing existing laws now; 2) bold, easily administrable, bright-line rules are necessary; and 3) at each phase of the AI system lifecycle, the burden should be on companies to prove their systems are not harmful.

AlgorithmWatch has published the following reports:
  • How to define platforms' systemic risks to democracy, which outlines a methodology that will serve as a benchmark for the AlgorithmWatch team as they evaluate Digital Services Act risk assessments.
  • The AI Act and General Purpose AI, which proposes recommendations to inform the EU’s AI Act negotiations on General Purpose AI. These include clarifying definitions; addressing complexity, scale and power asymmetries; and avoiding accountability gaps.

AWO has published Effective protection against AI harms. The paper - commissioned by the Ada Lovelace Institute - examines the UK government's claims that the UK’s existing legal frameworks provide sufficient protection for individuals against AI harms. It does this using three realistic AI harm scenarios – in financial services, employment, and the public sector – which show that there is a range of gaps in effective protection for ordinary people, especially the lack of meaningful and in-context transparency for automated decision-making. The analysis raises questions about the UK's current approach to AI regulation, set out in the Government's AI White Paper.

Researchers at Cornell University, Harvard University and Microsoft Research have published How Different Groups Prioritise Ethical Values for Responsible AI, which finds that AI practitioners’ priorities for responsible AI are not representative of the wider US population.

The Hertie School Centre for Digital Governance has published Climate Breakdown as Systemic Risk in the Digital Services Act. The research analyses the links between climate policy and platform regulation, and how this could be factored into DSA enforcement.

Ofcom has published Evaluating recommender systems in relation to illegal and harmful content. The findings will inform the development of Ofcom’s policy guidance for the new Online Safety regime.

The Mozilla Foundation has published This is not a System Card: Scrutinising Meta’s Transparency Announcements which analyses Meta's transparency updates for Facebook and Instagram. The report concludes that the tradeoffs involved in algorithmic ranking decisions remain unclear, and the information shared in the system cards holds little utility for end users, auditors or civil society.

The Mozilla Foundation has also published Openness & AI: Fostering Innovation & Accountability in the EU’s AI Act. The article argues that the AI Act should allow for proportional obligations in the case of open source projects, while creating strong guardrails to ensure they are not exploited to hide from legitimate regulatory scrutiny.

Meta has published Human rights report: Insights and actions 2022. Chapter 10 describes the company's responsible AI measures, which include AI system cards, open-source datasets to mitigate bias in NLP, and fairness indicators for computer vision models.

TikTok has published An update on fulfilling our commitments under the Digital Services Act, which explains its new measures to comply with the DSA, including on recommender system transparency.

Foundation models and Generative AI

Researchers at the Australian National University have published Right to be Forgotten in the Era of Large Language Models: Implications, Challenges, and Solutions. The article explores the challenges of implementing the GDPR’s right to be forgotten in relation to large language models, and potential technical solutions such as differential privacy, model unlearning, model editing, and prompt engineering.

The Competition and Markets Authority has published AI Foundation Models: Initial Report. The report identifies key principles for the foundation model market: accountability, access, diversity, choice, flexibility, fair-dealing, and transparency. The CMA plans to engage with stakeholders on these principles, including foundation model developers and deployers, consumer groups, academics, and fellow regulators.

The Centre for Data Ethics and Innovation has published Public perceptions towards the use of foundation models in the public sector. The research found that respondents were open to the use of foundation models within the public sector, provided models were reliable and accompanied by appropriate governance measures.

The Mozilla Foundation has published The human decisions that shape generative AI: Who is accountable for what? The article identifies the key moments of human decision-making in the pre-training and fine-tuning phases of developing generative AI products.

Mozilla Foundation Senior Fellow Dr Abeba Birhane, associate professor Vishnu Naresh Boddeti, and Aksha.ai researchers Vinay Prabhu and Sang Han have published On Hate Scaling Laws for Data-Swamps. This research investigates the effect of scaling datasets on hateful content through a comparative audit of LAION-400M and LAION-2B-en, which contain 400 million and 2 billion samples respectively. The authors evaluate the downstream impact of scaling on the racial bias of models trained on these datasets. Find the Mozilla summary here.

UNESCO has published Guidance for generative AI in education and research, which aims to support countries implementing immediate action and planning long-term policies, and ensure a human-centred approach for the technology.
Opportunities
BRAID is calling for researchers to apply to BRAID Fellowships. The fellowships are part of the overall engagement strand of BRAID, a national research programme led by the University of Edinburgh in collaboration with the Ada Lovelace Institute and the BBC. BRAID is dedicated to integrating the whole range of arts and humanities research more fully into the responsible AI ecosystem.
Applications open on 29 September

The UK Parliament Public Accounts Committee is calling for evidence for its inquiry on Ofcom’s ‘Preparedness for online safety regulation’.
Deadline for submission is 13 October

OpenAI is calling for experts to join the OpenAI Red Teaming Network. Members will inform risk assessment and mitigation efforts at various stages of the model and product development lifecycle.
Upcoming Events
Designing Institutional Frameworks for the Ethical Governance of AI in the Netherlands
In Person: 05 October 09:30 - 12:30 CET, The Hague
UNESCO’s Social and Human Sciences Sector, the European Commission and the Dutch Authority for Digital Infrastructure are hosting a launch event for their joint project. The research will examine complex questions on AI governance, develop a series of case studies and produce best practice guidance.

Turing Lectures: Addressing the risks of generative AI
Hybrid: 17 October 19:00 - 20:30, London
The Alan Turing Institute is hosting a lecture that will examine what Generative AI means for online and offline safety, and the ways society might be able to mitigate risks.

Launch of the Fairness Innovation Challenge
In Person: 19 October 10:00 - 13:00 BST, The Royal Society, London
The Centre for Data Ethics and Innovation is hosting a launch event for the Fairness Innovation Challenge, featuring a briefing, Q&A and panel discussion. The Challenge aims to encourage the development of socio-technical approaches that address bias and discrimination in AI systems. The challenge opens on 16 October.

AI Fringe
In Person: 30 October - 03 November, London
Alongside the UK Government’s AI Safety Summit, there will be a series of events hosted by academia, civil society and industry. The Fringe will include panels and keynotes that consider specific AI domains and use cases, cross-cutting challenges, and how to create a responsible AI ecosystem.
Community Spotlight: Safe AI Companion Collective
The Safe AI Companion Collective (SAICC) is a collective of legal and technical experts working on issues related to AI companion chatbots. We spoke to the team about their recent complaints against Chai, a companion chatbot provider, which were filed with the Belgian data protection and consumer protection authorities.

Q: What is the SAICC and what was the motivation behind setting it up?
SAICC: The Safe AI Companion Collective is a collective of four experts working to address complex issues related to AI companion chatbots. Nathalie Smuha is a legal scholar and philosopher at KU Leuven focusing on AI and law; she also coordinated the work of the EU’s high-level expert group on AI. Mieke De Ketelaere has a strong technical background and is an Adjunct Professor in Sustainable, Ethical and Trustworthy AI at Vlerick Business School. Thomas Ghys specialises in privacy and runs Webclew, a privacy audit platform. Pierre Dewitte is a PhD candidate at the KU Leuven Centre for IT and IP Law researching data protection by design and is involved in multiple GDPR enforcement actions.

The motivation behind SAICC is threefold:

First, we want to raise awareness about the risks associated with companion chatbots. These are chatbots designed to provide companionship, such as virtual friends, romantic partners, or life coaches. These chatbots are often released onto the market without proper control and can end up in the hands of children or vulnerable individuals, which poses significant risks of harm.

Second, we want to demonstrate that existing legislation can be enforced against chatbots. Data protection and consumer protection law provide useful enforcement tools to address risks without having to wait for new legislation, such as the AI Act.

Finally, our goal is to inspire others to file similar complaints. We hope that more complaints will increase the likelihood that the issue will be escalated to the European Data Protection Board. We have published templates and documents to assist others in taking this action.

Q: Could you tell us about your recent investigation and complaint concerning chatbots? What breaches of data protection law did you uncover?
SAICC: We chose to investigate Chai’s companion chatbot because of its community-based element, which blurs the developer/end-user divide. Our investigation revealed a complex supply chain, starting with the aggregation of data and resulting in user-personalised chatbot models that generate explicit content. This supply chain involves:
  1. Data Aggregation: The initial training dataset used was the ‘Pile’, which is an aggregation of data from various sources including PubMed Central, ArXiv, GitHub, the FreeLaw Project, Stack Exchange, the US Patent and Trademark Office, PubMed, Ubuntu IRC, HackerNews, YouTube, PhilPapers, and NIH ExPorter.
  2. Model Training: Eleuther AI used the ‘Pile’ to train a general-purpose Large Language Model called GPT-J-6B.
  3. Fine-Tuning: Chai took the GPT-J-6B model off the shelf and fine-tuned it using a dataset called ‘Lit-6B’, which mostly contains explicit content. This fine-tuning tailored the model to generate explicit language (see the illustrative sketch after this list).
  4. User Fine-Tuning: Chai allows users to further fine-tune the model using their own prompts and interactions and share them with the broader community. We found the community had shared many personalised chatbot models that generate very explicit content.
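For readers curious what the fine-tuning stage of this supply chain looks like in practice, below is a minimal, illustrative sketch using the Hugging Face Transformers library. The corpus file name, batch size and training settings are assumptions made for illustration only; this is not Chai's actual code or configuration.

```python
# Minimal sketch of the fine-tuning step described above (step 3), using the
# Hugging Face Transformers library. The corpus file, batch size and training
# settings are illustrative assumptions, not Chai's actual pipeline.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "EleutherAI/gpt-j-6B"  # general-purpose LLM trained on the Pile

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-J has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical plain-text corpus standing in for the domain-specific data
# ('Lit-6B' in the supply chain above).
raw = load_dataset("text", data_files={"train": "finetune_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt-j-6b-finetuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=train_set,
    # Causal-LM collator copies inputs to labels (no masked-LM objective).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
# Step 4 (user fine-tuning) amounts to repeating this on user conversations.
trainer.train()
```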
We filed two complaints against Chai, one with the Belgian data protection authority and the other with the Belgian FPS Economy's contact point for consumer protection law. In the data protection complaint, we specifically targeted the processing of personal data for service purposes, which includes the processing of account data, usage data, and conversation histories. We also examined processing in relation to targeted advertising. There were four key breaches in our service purpose complaint:
  1. Breach of Lawfulness: Chai’s privacy policy is unclear about which legal basis for processing personal data is relied upon. If consent is relied upon, its implementation does not meet the requirements of Article 4(11) of the GDPR because it is not freely given, specific, informed, and unambiguous.
  2. Breach of Transparency: Chai's privacy policy also lacked critical information required by Articles 13 and 14 of the GDPR, which are essential for transparency. Given that Chai is aware it is used by minors and vulnerable individuals, there is a higher threshold for making the policy understandable to users.
  3. Lack of mechanism to obtain valid consent from children: Chai is aware that minors use its service. However, it lacks a mechanism to obtain valid consent from them by verifying that consent is given or authorised by a person with parental responsibility, as required by Article 8(2) of the GDPR. Crucially, Chai doesn't have a mechanism to verify whether users are minors. This is particularly concerning considering that Chai is used for erotic role-play by adults, raising child protection concerns.
  4. Lack of Risk Identification and Mitigation: Chai was placed on the market without a risk identification and mitigation process. Instead, Chai has released minor fixes in reaction to public outcry. For example, the chatbot was implicated in a suicide in Belgium, and Chai responded by implementing a detection filter for suicidal thoughts. However, the filter is so easily circumvented that it serves little purpose in practice. It is not a legal requirement to publish a full DPIA; however, it is an important element of due process, and companies should be transparent about whether a DPIA or risk assessment was completed and about any mitigations that were introduced. We believe Chai’s lack of such a process is a clear violation of the GDPR's risk-based approach, which is rooted in Articles 5(2), 24(1), 25(1) and 35 of the GDPR.
Companion chatbots pose many harms to users, such as discrimination by perpetuating biases found in training datasets. We have also seen instances of users becoming emotionally dependent on chatbots and experiencing separation anxiety when the chatbot is changed or deleted. These harms must be taken into account by data controllers; data protection by design, a risk-based approach, and DPIAs are crucial elements of that process.

Q: A lot of the regulatory discussion around chatbots focuses on the AI Act. Could you discuss the importance of regulators enforcing existing legislation?
SAICC: We decided to use existing legislation rather than waiting for the AI Act for several reasons. Firstly, the harms associated with companion chatbots, such as emotional dependency, discrimination, and privacy violations, are already materialising. It is urgent that the harms are addressed right away rather than waiting for the AI Act to be passed. It will also take time for the enforcement structure of the AI Act to mature – the GDPR has only recently reached this stage. By leveraging existing legislation like the GDPR, we can take immediate action to address these concerns.

Secondly, the AI Act does not currently categorise companion chatbots as high-risk AI systems in Annex III. This means that the bulk of the AI Act's provisions may not be directly relevant to companion chatbots. Whilst there is a provision in Article 7 that allows the European Commission to extend the Annex III list in the future, this process is uncertain.

Finally, provisions on foundation models have been introduced in Article 28b in the European Parliament’s version of the Act. However, the definition is too vague and likely would not cover companion chatbots. Considering the supply chain of Chai’s companion chatbot, Eleuther AI’s GPT-J-6B might fall into the definition of a ‘foundation model’ but the Chai model fine-tuned with Lit-6B would not. This provides a simple way for providers like Chai to escape the scope of obligations.

Q: Could you tell us about the resources you are sharing?
SAICC: We are committed to sharing our knowledge and resources to empower others to take action. Our website https://saicc.info serves as a living and evolving platform which we will continue to update with news on the complaints. We have provided the complete data protection and consumer protection complaints filed against Chai. These can serve as templates for those wishing to file similar complaints.

We also offer additional documents, including slide decks, reports, long-form articles, and blog posts from the literature. These materials cover a wide range of topics including an upcoming piece on the application of data protection by design to companion chatbots.

Q: What are the next steps for this work?
SAICC: The Belgian Data Protection Authority has deemed the complaint admissible, so we are preparing our case. The DPA is coordinating with other supervisory authorities, and we hope that they will engage with the task force on Generative AI which was set up by the European Data Protection Board. We are also actively engaging with the consumer protection authorities on the consumer protection complaint.

We would love for anyone to reach out to us at contact@saicc.info. In particular, we are interested to gather feedback from anyone working in industry.
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
Subscribe
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.


AWO
Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom
Powered by EmailOctopus