Algorithm Governance Roundup #22
|
Community Spotlight: Aarushi Sahore, Brick Court Chambers | DSA Investigations and Enforcement
|
Welcome to AWO’s Algorithm Governance Roundup. This month I spoke to Aarushi Sahore, barrister at Brick Court Chambers, about RTM v Bonne Terre Ltd and Hestview Ltd, a landmark case on data protection in the context of online gambling. AWO represented the claimant RTM. As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content please reply or send a new email to algorithm.newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you! Many thanks and happy reading! Esme Harrington
|
In the EU, the AI Office published the third draft of the Code of Practice for General Purpose AI (GPAI). The draft outlines four key areas: 1) commitments, 2) transparency, 3) copyright, and 4) safety and security, which addresses obligations on risk assessment and mitigations, such as model evaluations, incident reporting and cybersecurity. This draft removed discrimination from the list of mandatory risks to assess, raising concerns amongst MEPs and civil society.
The AI Office also published the implementing act to establish a scientific panel of independent experts under the AI Act. The panel is responsible for issuing alerts when a qualified majority believes a GPAI model poses concrete risks or meets the threshold for classification as a GPAI model with systemic risk. The panel will be composed of up to 60 experts, and each Member State must nominate one candidate. Experts must not have an employment or contractual relationship with an AI provider.
The AI Board convened its third meeting to share insights on Member States’ implementation strategies for the AI Act. The AI Office also updated the Board on its progress implementing the regulation.
Two EU countries launched investigations into six potential violations of the AI Act’s prohibitions, which took effect in February. However, the Act’s penalty regime will not enter into force until August 2025, and enforcement powers will not be granted until August 2026. This could result in legal challenges if fines are issued before the enforcement provisions take effect.
Thirty Members of the European Parliament sent the European Commission (EC) a letter on ‘open-washing’ and the definition of ‘open source’ under the AI Act. They argue Meta’s classification of Llama as open source – removing it from many of the AI Act’s obligations – is misleading because Meta still restricts use, access to certain parts of the code and information about training data. This is based on the definition of open source developed by the Open Source Initiative, which has previously stated that Llama is not open source.
The EC has published its AI Continent Action Plan to promote AI innovation. It focuses on five pillars: 1) building large-scale data and computing infrastructure, 2) increasing access to large and high-quality data, 3) developing AI and fostering adoption in strategic sectors, 4) strengthening AI skills and talents, and 5) regulatory simplification. The last pillar includes launching the AI Act Service Desk to support compliance with the AI Act.
On the Digital Services Act (DSA), the EC published an age verification app on GitHub to support the implementation of Article 28 on the protection of minors. The Head of Unit for implementing the DSA announced testing of the age verification app will begin in May, with the involvement of platforms and civil society. The system will be imposed on pornographic platforms under the DSA Guidelines on the Protection of Minors and may be used by other platforms to demonstrate compliance. This is intended to bridge the gap until the EU Digital Identity (EUDI) Wallets become available by the end of 2026.
According to the EC, TikTok has made improvements to protect forthcoming elections in Romania from foreign interference, following a formal investigation under the DSA. TikTok has improved its processes to detect and label political content and accounts, recruited additional Romanian-language experts, and added 120 further experts to work on covert influence campaigns and advertising integrity. The first round of Romanian elections was invalidated following allegations of Russian interference on social media.
The Court of Justice of the EU denied XVideos’ appeal for temporary relief of its advertising transparency obligations under the DSA. In addition, the General Court is hearing a case on Zalando’s designation as a Very Large Online Platform (VLOP).
Seven out-of-court dispute settlement bodies formed an ODS network to strengthen the enforcement of users’ rights under the DSA. The network received 4,500 complaints in early 2025 but warned that most users remain unaware of their right to challenge platforms’ content decisions before an independent body.
In France, Arcom, the Digital Services Coordinator (DSC), has appointed four new trusted flaggers under the DSA. This includes anti-piracy organisation ALPA, the International Fund for Animal Welfare, the consumer agency INDECOSA-CGT and the cyberviolence organisation Point de Contact. Trusted flaggers are responsible for submitting reports of illegal content to online platforms, which must process these reports with priority.
Several authors’ and publishers’ unions filed legal action against Meta over alleged unauthorised use of copyrighted works to train AI. The Société des Gens de Lettres (SGDL), the Syndicat national des auteurs et des compositeurs (SNAC) and the Syndicat national de l'édition (SNE) demand Meta remove any training datasets created without authorisation. This follows the publication of an international charter on ‘Culture and Innovation’ by 38 international organisations.
In Ireland, An Coimisiún um Chosaint Sonraí, the data protection regulator, has opened an inquiry into X for using EU users’ publicly-accessible posts to train its Grok large language models. The inquiry will examine compliance with a range of key provisions of the GDPR, including lawfulness and transparency.
An Coimisiún um Chosaint Sonraí is also reportedly preparing to fine TikTok over €500 million for exporting EU users’ data to China, in violation of the GDPR.
Coimisiún na Meán, the Digital Services Coordinator, stated it is reviewing 400 DSA-related complaints received over the past year. The Coimisiún is currently staffed by 45 people – with plans to expand – and is dividing enforcement tasks with the European Commission.
Several NGOs have filed a complaint against Meta with the Coimisiún na Meán. The claim alleges its news feed uses deceptive designs to make it difficult to select a feed that is not based on profiling, in violation of the DSA. The NGOs include Gesellschaft für Freiheitsrechte, Bits of Freedom, EDRi and Convocation Design + Research.
In Korea, the Communications Commission’s guidelines for the protection of users of generative AI services have entered into force. They require AI developers to obtain user consent for training on personal data, and service providers to inform users about data practices, obtain consent, and ensure data is used safely and legitimately through internal monitoring.
In Luxembourg, the parliament adopted Bill 8309 to implement the EU’s Digital Services Act. The law designates the Autorité de la Concurrence, the competition authority, as the Digital Services Coordinator. The authority also signed a cooperation agreement with seven other online enforcement authorities to ensure consistent application.
In the Netherlands, X lost summary proceedings against an individual after refusing to grant access to his personal data. A previous court ruled that X must provide him with access to his personal data under the GDPR, but X did not comply.
In the U.K., the AI Security Institute announced the recipients of its Systemic Safety Grants. The twenty projects will carry out independent research to safeguard societal systems and the critical infrastructure where AI is deployed. This includes projects focused on workers, misinformation and disinformation, individual privacy and AI agent interactions.
The Competition and Markets Authority has concluded that Microsoft’s partnership with OpenAI does not qualify for investigation under the merger provisions of UK competition law.
The Information Commissioner’s Office, the data protection regulator, has announced investigations into TikTok, Reddit and Imgur’s privacy protections for children. This includes an investigation into TikTok’s use of 3- to 17-year-olds’ personal information to recommend content, and the use of age assurance measures by Reddit and Imgur.
Ofcom has fined OnlyFans £1.05 million for failing to provide accurate information about its age assurance measures. This follows an investigation into whether the platform failed to block under-18s from viewing restricted material, in breach of rules for video-sharing platforms which predate the Online Safety Act (OSA).
In the U.S., the White House Office of Management and Budget published a memo requiring federal agencies to develop AI strategies. This includes appointing chief AI officers, implementing minimum-risk management practices for high-impact uses, and developing a generative AI policy.
NIST published its final report on adversarial machine learning. This sets out a comprehensive taxonomy of AI/ML cybersecurity threats and countermeasures across ML methods, life cycle stages of attack, and attacker goals, objectives, capabilities and knowledge.
The Governor of Utah signed several bills to govern the use of generative AI. SB 332 and SB 226 update Utah’s AI Policy Act to require entities in high-risk regulated professions to disclose when consumers are interacting with a generative AI system. The bills define “high-risk” as instances where the system collects sensitive personal information or involves significant decision-making, such as in financial, legal, medical and mental health contexts. HB 452 introduces new rules for mental health chatbots, including a prohibition on the use of opaque advertising during user interactions, and the sale or sharing of identifiable user health data.
The Association of Southeast Asian Nations (ASEAN) has adopted the Roadmap on Responsible AI 2025-2030. It includes actionable and tailored steps for policymakers to enhance public-sector AI capacity, strengthen regional AI partnerships, develop secure data-sharing platforms, and foster a multi-stakeholder dialogue on AI governance.
The Frontier Model Forum announced an information-sharing agreement between member firms. The agreement covers threats, vulnerabilities and capability developments related to national security and public safety, such as chemical, biological, radiological and nuclear (CBRN) and advanced cyber threats.
|
Effective Moderation of Social Media to Curb Genocidal Content, Nuredin Ali Abdelkadir, Tianling Yang, Shivani Kapania, Meron Estefanos, Fasica Berhane Gebrekidan, Zecharias Zelalem, Messai Ali, Rishan Berhe, Dylan Baker, Zeerak Talat, Milagros Miceli, Alex Hanna and Timnit Gebru
|
The Institute for AI Policy and Strategy is seeking applicants for its IAPS AI Policy Fellowship 2025. This is a funded three-month programme running from 1 September to 21 November in Washington D.C. or remotely. The deadline for applications is 7 May.
|
A briefing from Equinet representatives at the CEN-CENELEC Joint Technical Committee 21, who are responsible for developing the harmonised standards for the AI Act, with a case study on discrimination.
Virtual: 6 - 7 May The Bennett Institute for Public Policy and OECD are hosting a workshop to bring together policymakers, researchers, and experts to identify key areas for policy-oriented research along the AI value chain.
In Person: 21 – 23 May, Brussels, Belgium CPDP, a multidisciplinary conference, offers the cutting edge in legal, regulatory, academic and technological development in privacy and data protection.
At CPDP, I’ll be speaking on Mozilla Foundation’s panel ‘Towards a Safe Harbour for Public Interest AI Research’ at 08:45 – 10:00 on 22 May, and my colleague Nick will be speaking at AWO's panel on 'The EU's Next Move on Online Advertising' at 16:00 on 23 May in the Baixu room.
|
Community Spotlight: Aarushi Sahore
|
Aarushi Sahore is a barrister at Brick Court Chambers with a broad commercial and public law practice, including cutting-edge cases in privacy and data protection. We spoke about RTM v Bonne Terre Ltd and Hestview Ltd, a landmark case on data protection in the context of online gambling. AWO represented the claimant RTM.
Can you summarise the key facts of RTM v Bonne Terre Ltd and Hestview Ltd? Aarushi: RTM v Bonne Terre Ltd and Hestview Ltd concerned complex data processing in the online gambling context. The judgment was published in January 2025 following a five-day trial that took place in November 2024. It is one of the few cases that has reached trial on key legal questions in data protection law and misuse of private information.
The claimant, RTM, is an anonymised individual who used online gambling platforms between 2009 and 2019. Over this period, RTM became addicted to gambling, spending large sums of money he couldn’t afford and often gambling late at night. He described his addiction as something that cost him a decade of his life. Eventually, RTM overcame his gambling addiction. As part of his recovery journey, he investigated how his personal data was used by the online gambling platforms during this period.
RTM primarily used the online gambling platform Sky Betting and Gaming (SBG), which is operated by Bonne Terre Ltd and Hestview Ltd. Initially, RTM bet on sports matches before beginning to gamble on casino products such as Sky Vegas and Sky Casino. RTM claimed that the defendants exacerbated his gambling behaviour through the processing of his personal data.
It was common ground that SBG collects significant amounts of data on its users. While much of this data collection is essential for business operations – including transaction data, betting patterns, and payment details – the litigation revealed that SBG also processed and analysed this data for additional purposes. This included user profiling to identify problem gamblers using SBG’s safer gambling methodologies, and profiling for marketing purposes.
To conduct marketing, SBG used a feature store that enabled highly personalised profiling of user characteristics and behaviour – including the devices used, the times when bets were placed (including late at night), and favoured bet types. The defendant also used algorithmic propensity modelling to predict which other products a user might be interested in, allowing them to target users with relevant advertising. In addition, SBG used cookies to deliver targeted advertising.
During the trial, the court concentrated on two categories of evidence: the user profiling conducted by SBG and the actual direct marketing communications sent to RTM.
What are the main legal issues in the case? Aarushi: The case examined SBG’s legal basis for processing and compliance with the data protection principles under the Data Protection Act 1998 (DPA 1998), the UK GDPR and the Privacy and Electronic Communications Regulations (PECR).
In terms of lawful basis, SBG primarily relied on consent, which is required for direct marketing emails and cookies under PECR. RTM contended that he did not consent (in the sense that consent is defined under the legislation) to the data processing or direct marketing at various points in time. In addition, RTM contended that SBG’s privacy notices were insufficient for users to fully understand how their data was being used, preventing him from providing meaningful consent.
The second legal basis SBG relied upon was legitimate interests, claiming a legitimate commercial interest in processing user data for safer gambling modelling and marketing analysis. However, RTM maintained that these interests were outweighed by his fundamental right to privacy.
In terms of data protection principles, the claimant argued that SBG’s processing was neither fair nor transparent because users were not adequately informed about the full range of purposes for which their data was used. He also contended that the defendants gathered extensive data and did not restrict or limit its use, in violation of the principles of purpose limitation, data minimisation and retention.
The key factual issues at trial included (1) the data processing and collection by SBG, including through cookies and the data flows to third parties; and (2) the mode and method of consent at various times. The trial explored the evidence related to SBG’s processes including users’ experience online, the presentation of SBG’s privacy and cookie notices, and how the consent processes, such as tick boxes, were displayed.
How did the court interpret the relevant data protection laws, and what impact will this have? Aarushi: Justice Collins-Rice delivered what I consider to be a thorough and well-reasoned judgment focusing primarily on consent, particularly in the context of processing for direct marketing, cookies, and profiling. The court examined the complex environment of data processing, and the judgment sets out two competing narratives: one from the claimant about his user experience and his gambling habits, and the other from SBG explaining their use of data for safer gambling and marketing purposes. Ultimately, the court found that RTM had not consented to the direct marketing, cookies, or profiling for targeted marketing. This was significant because SBG had to rely on consent for direct marketing and cookies under PECR and relied on it as a general defence.
The judgment is very illuminating when it comes to how the Court understood consent. Data protection case law and commentary often include laudable statements about the high standard expected for consent because it is rooted in autonomy and the fundamental right to privacy. The court needed to balance this high standard with the reality that users regularly click through online consent boxes whilst using online services and platforms. To bring together the case law and the specific legal issues in the case, taking into account the specific context involving a potentially vulnerable user who may be at risk from certain processing activities, the judge set out a nuanced three-strand analysis for consent. The first strand examines the subjective state of mind of the user, considering what they knew, felt and understood when giving consent. However, the court acknowledged that establishing a user’s subjective understanding can be difficult and cannot be the sole measure of consent.
The second strand focuses on the autonomy associated with the process for expressing consent. This acknowledges that businesses need a practical way to obtain consent from users. However, it emphasises that businesses must maximise the chances that when a user ticks an online consent box, they do so with an understanding that they are binding their autonomy. This is particularly significant in the context of online platforms, where users may tick consent boxes without fully understanding the implications. It underscores the importance of designing online consent processes to give users a clear opportunity to understand and make informed decisions.
The third strand addresses the evidentiary burden placed on the defendant. Under data protection law, the burden lies with the defendant to demonstrate that consent was actually obtained. As a result, SBG needed to maintain accurate records of how consent was obtained. The court noted that while RTM’s memory was not entirely clear, the internal records kept by SBG also failed to provide conclusive evidence that RTM had consented to the data processing. This created a difficult factual situation for the court.
The judgment also highlighted the need for context-specific consent. First, the court acknowledged that in higher-risk environments like online gambling, businesses need to be especially careful about obtaining clear, informed consent, and this may have an impact on the design of the online consent process.
Second, the court recognised that certain users may be vulnerable in certain situations, or there may be information held by or available to the service about the vulnerability of its users that the service needs to acknowledge. In such situations, services like SBG may need to be more cautious and ensure they have robust and appropriate processes and systems for obtaining and proving consent, or accept the risk of turning a blind eye and nonetheless sending marketing.
What broader implications does this case have for the online gambling industry, and more broadly complex data processing environments? Aarushi: The judge was clear to acknowledge that the online gambling industry is already a highly regulated sector, subject to various regulations such as the Gambling Commission's licensing codes, which outline requirements for advertising and other practices. Despite this regulation, and the fact that operators owe no general duty of care to prevent harm to gamblers, the judge emphasised that compliance with the GDPR is still crucial and must be applied in a context-sensitive manner.
Concerning broader implications, the judge explicitly stated that the ruling was limited to the specific facts of this case — focused on a particular individual, a particular platform, and a particular time period. However, the case could have implications for other regulated industries where vulnerable individuals may be exposed to data processing that they don't fully understand, such as medical contexts, or where user vulnerabilities are known or may be reasonably anticipated.
More broadly, the judgment offers valuable insights that could extend to other complex data processing environments. In many online sectors, companies gather vast amounts of data on users, create detailed profiles, use this information for a wide variety of purposes, and share it with third parties that users may not be aware of. The judgment signals that consent in these environments must be properly informed and that businesses need to ensure users fully understand what they are consenting to. This includes designing consent processes to allow users to make genuinely informed decisions about their data.
|
:// Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency. A W O Wessex House Teign Road Newton Abbot TQ12 4AA United Kingdom