Algorithm Governance Roundup #20
|
Community spotlight: Finn Myrstad, Norwegian Consumer Council | AI Act prohibitions officially applicable
|
Welcome to the first Algorithm Governance Roundup of 2025. It’s another bumper edition as we head to the AI Action Summit next week! This month, I spoke to Finn Myrstad, the Director of Digital Policy at the Norwegian Consumer Council (NCC). We spoke about his research, investigations and complaints into harmful AI and technology practices, the intersection of consumer protection and data protection, and updating the law to better protect consumers in the age of AI. As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content please reply or send a new email to algorithm.newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you! Many thanks and happy reading! Esme Harrington, AWO
|
ASEAN has published an updated Guide on Generative AI Governance and Ethics. It provides recommendations for ASEAN on proportionate and regionally interoperable measures to ensure safety, including incident reporting, testing and assurance, content provenance, and safety and alignment research.
In the EU, the first tranche of AI Act obligations is now applicable. This means organisations will need to comply with the definition of AI systems in Article 3, the AI literacy requirement in Article 4, and Article 5, which prohibits certain AI practices that pose unacceptable risks.
To support this, the European Commission (EC) has approved its Guidelines on prohibited AI practices, which include legal explanations and practical examples to help stakeholders understand and comply with the requirements of Article 5. To support compliance with Article 4, the EC has published examples of AI literacy practices implemented by AI system providers and deployers. The EC is also due to publish guidelines on the definition of AI systems under Article 3.
The EC’s AI Office has published the second iteration of the Code of Practice for General Purpose AI (GPAI). It includes new expectations in relation to copyright, asking GPAI providers to ‘make reasonable and proportionate efforts to obtain assurances from third parties’ that supply training data. Following this, it hosted a plenary summarising the work undertaken by the Working Groups. The next iteration of the Code of Practice is expected to be published in the week of 17 February.
The AI Office also shared an initial template for GPAI providers to disclose a sufficiently detailed summary of training data. The template requires providers to list the data sources used, how much data was used per modality (e.g. text/images), and whether they processed content in accordance with the EU Copyright Directive’s Text and Data Mining Exemption.
In relation to the Digital Services Act (DSA), the EC has sent X a request for further information as part of its ongoing proceedings. In particular, the request seeks internal documentation on X’s recommender system. The EC has also requested access to X’s commercial APIs and technical interfaces to conduct a direct fact-finding investigation into content moderation and the virality of accounts.
The EC has integrated a revised Code of Conduct on countering illegal hate speech online into the DSA. Following integration, online platforms can adhere to the code of conduct to demonstrate compliance with the DSA obligation to mitigate systemic risks stemming from the dissemination of illegal content on their services.
The European Affairs ministers from twelve Member States sent a letter requesting that the EC use powers under the DSA to mitigate potential election integrity risks ahead of Germany’s election. The EC published its Guidelines on election integrity under the DSA last May.
Garante, the Italian DPA, issued an emergency order to block DeepSeek’s processing of personal data and initiated an investigation. Garante was the first DPA to request information from DeepSeek after Euroconsumers and Altroconsumo submitted a complaint raising several concerns, including a lack of transparency and the transfer of personal data to a third country (China) without proper safeguards.
In addition, the Irish and Croatian DPAs have sent DeepSeek requests for information, whilst the Belgian DPA has opened a formal investigation following a complaint filed by TestAchat. In Greece, the digital rights organisation Homo Digitalis also submitted a complaint to the Hellenic DPA. In Luxembourg, the DPA published a letter of concerns and recommendations to the public, but has not yet received a complaint. Meanwhile, authorities in France, Cyprus, the Netherlands and Germany are exploring possible actions.
The European Data Protection Board has published a report on Complex Algorithms and effective Data Protection Supervision. The report answers questions from Data Protection Authorities on how to assess bias and implement data subjects’ rights in the AI context, particularly by clarifying the methods and tools that can be used.
In France, the government is preparing to host the AI Action Summit on 10 and 11 February. The Summit has five work streams: 1) public interest AI, 2) future of work, 3) innovation and culture, 4) trust in AI and 5) global AI governance. Under the public interest track, France has announced it will create an AI Foundation to promote an open source approach to AI.
In Germany, the Bundesnetzagentur (the German Digital Services Coordinator) and the EC completed a stress test to assess the readiness of Very Large Online Platforms to identify and minimise election integrity risks under the DSA. Representatives of Google (YouTube), LinkedIn, Microsoft, Meta (Facebook, Instagram), Snapchat, TikTok and X, national authorities and civil society organisations took part in the stress test.
The G7 has published a report on the Hiroshima Code of Conduct Reporting Framework Pilot. The reporting framework collects information on organisations developing advanced AI systems. The pilot garnered responses from 20 organisations across 10 countries. Several respondents suggested the framework would be improved through alignment with other voluntary reporting mechanisms such as the Frontier AI Safety Commitments.
In Italy, Garante fined OpenAI for processing users’ personal data during the training of ChatGPT without an adequate legal basis, and for failing to meet transparency obligations or provide adequate protections for minors, in violation of the GDPR.
In South Korea, the National Assembly passed the Development of AI and Establishment of Trust Act. This is the second comprehensive AI legislation to be passed globally, after the EU’s AI Act. The Act introduces transparency requirements, ethical guidelines for the use and development of AI, and a classification framework to identify high-impact systems. It also establishes a National AI Committee which will work jointly with South Korea’s AI Safety Institute on AI safety.
In the UK, the Department for Science, Innovation and Technology (DSIT) published its AI Opportunities Action Plan. The strategy sets out plans to support the growth of AI, including by building additional AI infrastructure, creating a National Data Library of public and private data sets, and reforming the UK’s intellectual property regime. It also sets out recommendations to promote safety, such as expanding the AI Safety Institute’s research on model evaluations, safety and societal resilience; providing additional support to regulators, including for regulatory sandboxes; and investing in new assurance tools.
The UK’s AI Safety Institute has published the International AI Safety Report 2025, a comprehensive synthesis of the current literature on the capabilities, risks and mitigations associated with advanced AI systems. The report received input from an international Expert Advisory Panel spanning 30 countries, as well as the United Nations, the European Union and the OECD, and will be presented at the AI Action Summit in Paris.
The Intellectual Property Office (IPO), DSIT and Department for Culture, Media and Sport (DCMS) launched a consultation on copyright and AI. It aims to ensure AI developers clarify how they use rightsholders' material, enhance rightsholders’ control over the use of their works for training AI models, and ensure access to high-quality training data. The submission deadline is 25 February.
The High Court of England and Wales ruled in favour of AWO's client who challenged Sky Betting and Gaming’s use of his personal data for profiling and targeted marketing without valid consent. This decision is important for anyone working in digital marketing and data, as it highlights key issues in the online advertising and AdTech sectors, as discussed in our blog and the press.
In the US, President Trump has signed an Executive Order which revoked President Biden’s Executive Order on Safe, Secure and Trustworthy Development and Use of AI. The Order also requires the White House to develop an AI Action Plan within the next six months.
The California Attorney General has issued a legal advisory to businesses and healthcare entities that develop, sell or use AI, setting out their obligations under California law. The advisory covers several new laws regulating AI systems in relation to disclosure, unauthorised use of likeness, use in elections, and the prohibition and reporting of exploitative uses of AI.
The U.S. Copyright Office has published a report on Copyright and AI, which provides guidance on the copyrightability of outputs of generative AI. The report concludes that prompts alone do not provide sufficient human control over the expressive elements of an output to make it copyrightable.
|
10 – 11 February, Paris, France: The AI Action Summit is the third global summit on AI. It consists of ‘Summit’ conferences and roundtables and ‘Fringe’ events. Events correspond to five work streams: 1) public interest AI, 2) future of work, 3) innovation and culture, 4) trust in AI and 5) global AI governance.
11 – 12 February, London, UK: Alongside the AI Action Summit, a range of panels and presentations have been organised by civil society, academia, government and industry in London.
|
Community Spotlight: Finn Myrstad, Norwegian Consumer Council
|
Finn Myrstad is the Director of Digital Policy at the Norwegian Consumer Council (NCC). We spoke about his research, investigations and complaints into harmful AI and technology practices, the intersection of consumer protection and data protection, and updating the law to better protect consumers in the age of AI.
What is the Norwegian Consumer Council, and what work are you doing on AI governance? Finn: The Norwegian Consumer Council (NCC) is a publicly funded interest group working to empower and protect consumers. We have three main workstreams. First, we offer a helpline for consumers to contact us with concrete questions, for example about consumer rights in relation to a broken device. Second, we offer services to promote transparency and enable consumer choice, including price comparison tools in sectors such as energy and finance. Finally, we have a political workstream which aims to improve markets through legal and policy advocacy. I lead the digital department across this workstream. As part of our advocacy, we work closely with international partners such as the European Consumer Organisation (BEUC) and the Transatlantic Consumer Dialogue (TACD).
We’ve been working on consumer protection and the digital sector for the past decade and increasingly focus on consumer harms related to AI systems. Last year, we published the culmination of some of this work in Ghost in the machine – Addressing the consumer harms of generative AI, outlining the harms, legal frameworks, and possible ways forward. In this report, we tried to address both the structural and concrete harms and challenges posed by generative AI. Structural challenges include 1) technological solutionism, 2) opaque systems and actor chains which create a lack of accountability and 3) the concentration of power in the hands of Big Tech companies.
Concrete harms arise in relation to 1) manipulation, such as mistakes and inaccurate output, the personification of AI models, deepfakes and disinformation; 2) bias, discrimination and content moderation; 3) privacy and data protection; 4) security vulnerabilities and fraud; 5) replacing humans in consumer-facing applications; 6) environmental impacts and 7) labour impacts. We are continuing to support this work with research in the field. We currently monitor how AI systems are deployed in the market and aim to do further research or in-depth investigations.
We are also advocating for better public sector AI governance in Norway. Since many of the AI Act’s obligations will not be in place until August 2026 or beyond, we are advocating for a precautionary approach to fill the gap and reach similar policy goals as soon as possible. This includes pushing for concrete guidelines for procurement and risk assessment and mitigation processes for public sector uses of AI.
Consumer protection bodies must increasingly consider data protection and vertical technology regulation, such as the DSA and AI Act, in their work. What investigations have you done that cross these topics? Finn: We regularly submit complaints to the Norwegian Data Protection Authority concerning harmful Big Tech practices. We recently submitted a joint complaint with noyb, concerning Meta’s scraping of user data to train AI models. In this complaint, we argued that Meta did not have a lawful basis for processing because they relied on legitimate interest, which we argued was not valid, and they attempted to elicit consent for undefined broad technical means (AI) without properly specifying the purposes. In addition, Meta used an opt-out rather than opt-in mechanism for consent, which involved dark patterns to deter data subjects from exercising their right to choose.
This was the third time we have filed a complaint concerning Meta’s data protection practices in the past twelve months. The other two concerned Meta’s “pay or OK” model for using personal data for advertising. The complaints were based on both data protection and consumer law, with the second a joint complaint with seven consumer groups and BEUC (drafted by AWO’s litigation team).
Our first complaint under the General Data Protection Regulation was submitted against Google in November 2018, for using dark patterns to obtain consent to location tracking. However, six years later we still don’t have a first decision from the Irish Data Protection Commission. Since then, we have filed numerous complaints, including against the data broker industry and against Grindr for sharing location data and users’ sexual orientation. The latter resulted in a fine from the DPA and is currently being litigated in the courts. However, the complaints against data brokers are making extremely slow progress. Overall, there is a real problem with the slow pace of enforcement, particularly concerning cross-border disputes.
Regarding consumer law complaints, we are part of the Consumer Protection Cooperation Network, a network of consumer protection authorities that conduct joint investigations with the European Commission on cross-border issues. For example, in 2021, we were part of a coordinated investigation into Amazon for manipulating consumers by using dark patterns to lock subscribers into Amazon Prime. This mechanism does not have sanction powers or offer remedies for consumers; instead, we were limited to requesting that Amazon change the practice.
We share our research with other groups prior to publication to encourage coordinated actions in the US and Europe. This is generally facilitated by BEUC and the TACD. For example, in relation to the Amazon investigation, we coordinated complaints with a US-based consumer group which led to a parallel investigation by the Federal Trade Commission.
The Norwegian Data Protection Authority recently piloted some public sector uses of AI in its regulatory sandbox. Can you tell us how this initiative can promote consumer protection? Finn: The sandbox offers an opportunity for the data protection authority to understand the technology better, and for the regulated actor to understand the law better. We have not done a proper assessment, but in the past critics of the concept have raised concerns that sandboxes could facilitate ethics-washing or legitimise practices that would otherwise not be allowed. There is also the argument that limited resources risk the DPA prioritising the regulatory sandbox over enforcement. This is really a political discussion, and sufficient resources should be provided so that regulatory sandboxes can complement enforcement action.
We have seen some positive outcomes from this regulatory sandbox, and projects in the sandbox have even been halted after participants found them to be incompatible with data protection legislation. For example, a welfare authority was developing an AI system to predict the duration of individuals’ sick leave. The development of the system was halted because the sandbox revealed it lacked a sufficiently clear legal basis. More recently, a sandbox assessment of Microsoft Copilot within a university context found the tool should only be used with very strict preconditions.
The EU is considering updates to consumer protection law following the conclusion of the Digital Fairness Fitness Check. What do you think are some of the most important areas to update? Finn: One of the key areas of concern is dark patterns. We have done a lot of research, investigations and complaints on this topic. Existing regulation, particularly the GDPR, should protect us from these practices through its rules on consent, data minimisation and purpose limitation; however, it is currently failing to stop them. The Digital Services Act also regulates dark patterns on Very Large Online Platforms and Search Engines, but this means smaller platforms go under the radar. This really requires better enforcement of existing regulation, but it would be beneficial to have horizontal protection on this topic. Ideally, a Digital Fairness Act should include a clear statement that dark patterns are illegal, complemented by an extensive list of banned practices. We also need a simpler and faster enforcement mechanism that can quickly identify a practice and decide whether it qualifies as a dark pattern.
In addition, addictive design is a concern. This has a lot of overlap with dark patterns, but some distinct features could additionally be dealt with in a Digital Fairness Act. This could take the form of a right not to be disturbed, as proposed by Kim van Sparrentak.
In relation to consumer investigations, it would also be helpful to bring in a reverse burden of proof. This should place the burden on technology companies to explain why their service or product is not harmful, in light of the increasing complexity and opacity of digital practices.
Generally, we need to improve enforcement mechanisms, particularly in international investigations. Enforcement of cross-border GDPR investigations is much too slow – waiting six years for a decision on our complaint against Google’s dark patterns is unacceptable. In addition, joint coordinated investigations by the European Commission and the Consumer Protection Cooperation Network could be empowered to issue dissuasive fines and remedies for affected consumers. We should also consider the use of algorithmic disgorgement as a sanction, as this is a strong incentive for compliance. This may be possible under data protection law by requiring organisations to stop processing data that was collected illegally.
|
:// Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency. A W O Wessex House Teign Road Newton Abbot TQ12 4AA United Kingdom