Algorithm Governance Roundup #21
|
Community Spotlight: Sabeehah Mahomed, Alan Turing Institute’s Children and AI | Reflections from the AI Action Summit
|
Welcome to February’s Algorithm Governance Roundup. This month, we gathered in Paris for the AI Action Summit and caught up with past interviewees for their key Summit takeaways. I feature this alongside our community spotlight on Sabeehah Mahomed, a Researcher in the Alan Turing Institute’s Children and AI team. I spoke to Sabeehah about the inaugural Children’s AI Summit which brought the perspectives of children and young people to Paris.
This month, AWO contributed foundational research to the NSPCC's report on the risks that children face from generative AI. The research identified seven risks and 27 different solutions, including statements from consultations with young people.
As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content, please reply or send a new email to algorithm.newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. I would love to hear from you!
Many thanks and happy reading!
Esme Harrington
|
In Canada, the Office of the Privacy Commissioner has opened an investigation into whether X's use of Canadians' personal data to train AI models broke federal privacy rules.
In El Salvador, the National Bitcoin Office has announced that the government has passed an AI Law. The legislation will introduce legal safeguards for the development of both proprietary and open-source AI systems. This will include sandbox protections and safeguards against third-party misuse for developers that register with the National AI Registry.
In the EU, the European Commission (EC) has published a DSA Elections Toolkit for Digital Services Coordinators. The Toolkit summarises best practices that DSCs have used over the last year to mitigate risks on Very Large Online Platforms and Search Engines (VLOPs and VLOSEs) during elections. It recommends practices in four key areas: 1) stakeholder management, 2) communication and media literacy, 3) incident response, and 4) monitoring and analysis of election-related risks.
WhatsApp and Waze have reached the threshold to be designated as VLOPs under the DSA, exceeding 45 million monthly users across the EU. On the other hand, Europe’s four largest porn platforms have recorded major drops in monthly users, potentially disqualifying them as VLOPs.
The EU Ombudsman has published a preliminary finding that the EC breached transparency rules when it applied a general presumption of confidentiality to X’s DSA risk assessment. The Ombudsman found that the EC’s failure to consider providing public access amounted to maladministration. This follows a complaint submitted by Follow the Money journalist Alexander Fanta.
The Court of Justice of the EU ruled that under GDPR data subjects are entitled to "meaningful information about the logic involved" when a decision has been made by an automated system using their personal data. The information provided must enable the data subject to understand which of their personal data has been used and how. If a controller believes the information to be disclosed contains protected data of third parties or trade secrets, it must submit this information to the competent supervisory authority or court. It is for the authority or court to balance the rights and interests at issue and determine the extent of the data subject’s right of access to that information.
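To make the ruling concrete, here is a minimal illustrative sketch (our own example, not drawn from the judgment) of what "meaningful information about the logic involved" could look like for a hypothetical linear credit-scoring model: the controller lists which items of personal data fed the decision and how each one pushed the outcome up or down.

```python
# Illustrative sketch only: a hypothetical linear scoring model, used to show
# how a controller might explain which personal data was used and how.
from dataclasses import dataclass


@dataclass
class Factor:
    name: str      # the category of personal data used
    value: float   # the data subject's (normalised) value
    weight: float  # the model's weight for this factor

    @property
    def contribution(self) -> float:
        # How strongly this data point pushed the decision up or down.
        return self.value * self.weight


def explain_decision(factors: list[Factor], threshold: float) -> str:
    """Plain-language account of an automated decision: which personal
    data was used, and how each item affected the outcome."""
    score = sum(f.contribution for f in factors)
    outcome = "approved" if score >= threshold else "refused"
    lines = [f"Decision: {outcome} (score {score:.2f} vs threshold {threshold:.2f})"]
    for f in sorted(factors, key=lambda f: abs(f.contribution), reverse=True):
        direction = "raised" if f.contribution >= 0 else "lowered"
        lines.append(f"- {f.name}: {direction} the score by {abs(f.contribution):.2f}")
    return "\n".join(lines)


# Hypothetical data for one data subject.
print(explain_decision(
    [Factor("payment history", 0.9, 2.0),
     Factor("income", 0.5, 1.5),
     Factor("existing debt", 0.7, -1.8)],
    threshold=1.0,
))
```

Real systems are rarely this simple, but the thrust of the ruling is the same: the explanation must connect the data subject's own data to the outcome, not merely describe the model in the abstract.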
In France, the government hosted the AI Action Summit. This was the third global summit on AI, following previous events hosted at Bletchley Park, UK, in 2023 and Seoul, Korea, in 2024. The Summit had five work streams: 1) public interest AI, 2) future of work, 3) innovation and culture, 4) trust in AI, and 5) global AI governance. In addition, the AI Fringe hosted events in Paris and London which are now available online. Several AI governance initiatives were announced at the Summit, including:
... The French government announced Current AI to foster open, transparent and accountable AI innovation. The foundation received an initial $400 million investment from the French government, philanthropists, and industry partners. It will focus on 1) expanding access to high-value datasets to fuel solutions in health, education, and media in privacy-preserving and safe ways; 2) supporting open, adaptable AI models that address real-world challenges; and 3) strengthening accountability through transparency, oversight, and auditing. Eleven countries have backed the project, including France, Germany, Slovakia, Finland and Switzerland.
... The data protection authorities of Korea, France, Ireland, Australia and the United Kingdom signed a joint declaration on implementing data governance that promotes innovative and privacy-protecting AI.
... The non-profit ROOST (Robust Open Online Safety Tools) launched to develop, open source and maintain modular tools for AI safety. This includes child safety tools, foundation model-powered content safeguards and core safety infrastructure. It will also offer organisations support from technical experts. The founding partners include Discord, OpenAI, Roblox, Google, AI Collaborative and several other foundations.
Also in France, the public prosecutor has opened an investigation into X over alleged bias in its recommender algorithm. The probe also responds to a separate complaint alleging that X shares a large volume of hateful, racist and anti-LGBTQ+ political content.
The Commission Nationale de l’Informatique et des Libertés (CNIL), the data protection authority, has published new recommendations on AI and GDPR focused on individuals’ data rights. First, on the right to be informed if personal data is used to train an AI model and may be memorised, CNIL states that general information or broad disclosure of the categories of sources or key sources may be sufficient. Secondly, on the rights to access, rectify, object and delete their data, CNIL encourages AI developers to incorporate privacy by design by anonymising models where possible and developing technical measures to prevent the disclosure of personal data.
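As one illustration of the kind of technical measure CNIL points to (the sketch below is our own, not CNIL's, and the patterns are deliberately simplified), a developer might screen model outputs for common personal-data formats before returning them to users:

```python
# Minimal sketch of an output filter that redacts common personal-data
# patterns before a model response reaches the user. A real deployment
# would combine pattern matching with NER models, allow-lists and logging.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}


def redact_personal_data(text: str) -> str:
    """Replace matches of known personal-data patterns with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


print(redact_personal_data("Contact Jane at jane.doe@example.com or +33 1 23 45 67 89."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```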
The NGO HateAid filed a complaint against TikTok with the German Federal Network Agency. The complaint alleges TikTok failed to respond to user reports of potentially illegal content, in violation of Article 16 of the DSA. HateAid reported several comments involving insults or incitement to hatred, and TikTok failed to respond to over two-thirds of the reports for at least three months. Article 16(5) DSA requires platforms to process all illegal content reports and notify users of moderation decisions “without undue delay”.
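For a sense of what honouring that obligation might involve operationally (a hypothetical sketch; the DSA sets no numeric deadline, so the seven-day window below is an assumed internal target, not a legal standard), a platform could log each report and surface those still awaiting a decision:

```python
# Hypothetical notice-and-action bookkeeping under Article 16 DSA: log every
# report and escalate those without a moderation decision past an internal
# target window. The 7-day target is an assumption for illustration only.
from dataclasses import dataclass
from datetime import datetime, timedelta

TARGET_RESPONSE = timedelta(days=7)  # assumed internal target, not a legal figure


@dataclass
class Report:
    report_id: str
    received: datetime
    decided: datetime | None = None  # set once the user is notified of a decision

    def overdue(self, now: datetime) -> bool:
        return self.decided is None and now - self.received > TARGET_RESPONSE


def escalation_queue(reports: list[Report], now: datetime) -> list[str]:
    """IDs of reports still awaiting a moderation decision past the target."""
    return [r.report_id for r in reports if r.overdue(now)]


now = datetime(2025, 2, 1)
reports = [
    Report("r-1", received=datetime(2024, 10, 15)),                       # unanswered for months
    Report("r-2", received=datetime(2025, 1, 28)),                        # still within target
    Report("r-3", datetime(2024, 11, 2), decided=datetime(2024, 11, 5)),  # handled
]
print(escalation_queue(reports, now))  # -> ['r-1']
```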
In Ireland, An Coimisiún um Chosaint Sonraí, the Data Protection Commission, submitted a draft decision on TikTok's data transfers to other supervisory authorities. The decision assesses TikTok's transfers of EU users' personal data to China, and whether TikTok is complying with transparency obligations in relation to these transfers.
The Coimisiún na Meán, the Digital Services Coordinator (DSC), is conducting research on vetted researcher data access under Article 40 DSA. It has published a survey to understand more about researchers' needs, readiness and barriers to vetted data access, which will be used by DSCs across the EU to implement the mechanism. The deadline to respond is 26 March.
Media organisation Mediahuis Ireland has brought a lawsuit against X for scam advertisements. It alleges that the scam ads were designed to appear as genuine news articles, infringing the intellectual property of publishers within Mediahuis’ portfolio and creating reputational risks. The lawsuit alleges that the scam ads were promoted on X and shared by verified users.
In the Netherlands, the College voor de Rechten van de Mens, the Institute for Human Rights, has found that Meta engaged in indirect gender discrimination when displaying job advertisements, following an investigation by Global Witness. The Institute found that Meta failed to monitor the operation of its advertising algorithm, investigate the extent to which it may be biased, and take measures to prevent discrimination.
In Norway, the Datatilsynet, the data protection authority, has published its report on sandboxing the Norwegian University of Science and Technology's use of Microsoft Copilot. The report includes several recommendations for organisations planning to use the tool: reassess information management, identify and limit what Copilot will be used for, conduct a data protection impact assessment, and roll out Copilot in small, controlled steps alongside a structured plan for post-implementation monitoring.
In the U.K., the AI Safety Institute is now the AI Security Institute (AISI). This reflects a change in remit with AISI re-focusing evaluations on serious security risks rather than bias and discrimination. This includes evaluations related to chemical and biological weapons and cyber security in collaboration with the National Cyber Security Centre and the Ministry of Defence’s Defence Science and Technology Laboratory. AISI has also launched a new criminal misuse team which will work with the Home Office to focus on crime and security risks, including fraud and child sexual abuse material.
Lord Holmes of Richmond MBE has published a report advocating for AI regulation in the UK, following the introduction of his private member's bill, the AI (Regulation) Bill (Nov 2023). The report presents eight case studies in which individuals are currently affected by AI harms – including discrimination, disinformation from synthetic imagery, scams, copyright theft and unethical chatbot responses – and shows how the Bill would address these harms.
The House of Commons Science Committee and Culture, Media and Sport Committee held a joint session on AI and copyright law. It heard from witnesses in the technology and creative industries and covered a range of topics including transparency, copyright protections, and labelling for generative AI.
Ofcom, the UK’s online safety regulator, has published draft guidance on tackling violence against women and girls online. The guidance recommends measures for online services to tackle misogyny, pile-ons and online harassment, online domestic abuse and intimate image abuse. It proposes nine high-level actions, including governance, risk assessments, abusability and product testing, and user control and reporting. For each action, Ofcom sets out foundational steps that services must take to comply with the Online Safety Act alongside additional industry good practices.
In the U.S., the White House published a memorandum threatening tariffs against countries that subject American technology companies to "extortionate and unfair fines" and against policies that "undermine freedom of speech and political engagement or otherwise moderate content", particularly targeting the EU's DSA and DMA. This follows remarks by Vice President JD Vance condemning EU technology regulation at several events, including the Paris AI Action Summit and the Munich Security Conference.
|
Community Reflections on AI Action Summit
|
Following the Paris AI Action Summit, we reached out to past interviewees Andrew Strait at Ada Lovelace Institute, Nabiha Syed at Mozilla Foundation and Abby Gilbert at the Institute for the Future of Work to gather their thoughts and reflections.
Andrew Strait, Ada Lovelace Institute: The Paris Summit presented a crossroads moment for the future of AI and society. One of the paths presented at the event leads towards national ‘domination’ and accelerationism. It frames the future as an ‘AI arms race’ with winners and losers, but no clear end point or objective. It’s a future that centres AI as a geostrategic asset for military uses rather than a consumer-facing technology. It’s a path that puts corporate interests, speed, and efficiency ahead of the public interest. And it’s a path that treats everyday people as test subjects in a grand experiment premised on the mistaken belief that unregulated and ‘unleashed’ AI will be an economic and geostrategic silver bullet.
The other path presented at the French Summit was of nations working together to build a world where AI works for people and society. The Current AI foundation embodies this – it aims to fund more research, projects, and initiatives aimed at building public interest AI, though with many questions remaining about what this term means in practice. This path leads towards a future in which national governments co-create shared infrastructure, norms, and governance to provide clear guardrails and directions for the technology. It’s a path that helps drive publicly beneficial innovation by answering the core basic questions about AI technologies – do they work? Are they fit for purpose? Are they safe and reliable? This is a future of the technology that is guided through regulation and governance to ensure developers building this tech serve the interests of people, not just creating more scale and efficiency.
Unfortunately, we heard far more of the former than the latter from world leaders and tech luminaries at the summit. I left Paris feeling the ground shift beneath my feet. While in many ways we are entering a new era of multipolar geopolitics and shifting alliances, we are also repeating the exact same mistakes around technology governance as the last 30 years with social media, mobile phones, web3, and others. As a wise scholar once said, ‘time is a flat circle’. We must remember that disruption without direction isn't progress—it's just running faster toward the same cliff.
Nabiha Syed, Mozilla Foundation: Unlike previous Summits, the AI Action Summit centred conversations about the public interest. Whether it was concerns about disruptions to the labour market or intellectual property theft from creative communities, there was a boldness in surfacing a broader definition of potential harm, beyond national security concerns. Open source was no longer framed only as a threat, but as a pathway to more innovation. Every safety issue that open source presents, open source can also defuse. DeepSeek’s entry into the market a couple of days before the Summit was a strong affirmation of the power of open-source incentives.
It is always useful to see what could be a lofty narrative translated into action: a strong focus on tools and initiatives that empower those who advocate for the public good was much appreciated. Launches like ROOST, the Collective Intelligence Project’s Global Dialogues, or gatherings to talk about positive benchmarking and Public AI were all excellent, practical initiatives. If we want AI to serve the public good, our funding must match our level of optimism and energy for conferences. We need funding strategies that support bold and unconventional ideas that challenge power structures and are centred on people. What risk could be more worth taking than betting on people to control their technological futures?
Mozilla stands for defiant optimism that our technology future can actually be good – assuming communities lead the way. So many gathered at the Summit shared that optimism that people deserve better alternatives, that innovation is not a zero-sum profit game, and that we need to define what the good future looks like so that we can work towards it. We must set aside the misguided perception that regulation stifles innovation: Regulation, done well, should fuel innovation. It creates the conditions for responsible, ethical, and long-term innovation. When creativity and ethical design are at the core of technological development, we all benefit – and there is a market for that kind of alternative as well. It’s wonderful to celebrate this momentum.
Abby Gilbert, Institute for the Future of Work: Work was a focus of both the AI Action Summit in Paris and the first Nordic AI Summit in Oslo. Across the events we attended, one question remained: is more regulation the answer?
For us, the key to unpacking this is better evaluation of the remit, fit and scope of current and potential law, via sandbox environments that bring together different industries, disciplines and regulators to examine real-world applications and produce credible evidence on the best routes to ensuring human-centred automation.
The EU has mandated that every member state establish at least one AI regulatory sandbox by 2 August 2026. These should offer a controlled environment for testing and validating innovative AI systems before they are placed on the market. This is where the UK can truly lead: embracing the well-stated socio-technical focus of our AI Security Institute by driving forward sandbox environments that consider ex-ante and ex-post contexts and span regulatory regimes through real-world use cases, such as the workplace.
|
Hybrid: 17 - 18 March, London This Summit will explore the critical role of standards in AI governance, examining recent developments, key challenges, and emerging needs to foster global inclusiveness and collaboration in AI standardisation.
Hybrid: 17 - 18 March, London The UK’s national showcase of how data science and AI can be applied to society’s biggest challenges, organised by the Alan Turing Institute. The event focuses on the Institute’s grand challenges: defence and security, environment and sustainability, and transformation of healthcare. In-person tickets £240 for both days, virtual pass £20.
In-person: 20 March 10:00 - 15:30 GMT, London This unconference organised by Connected by Data is for people, particularly inside the public sector, who are interested in ensuring that public, community and worker voices are heard in decisions about data, digital and AI.
|
Community Spotlight: Sabeehah Mahomed, Alan Turing Institute’s Children and AI
|
Sabeehah Mahomed is a Researcher for Ethics & Responsible Innovation in the Turing’s Public Policy Programme. She currently works on the Children and AI project, exploring the ethical and responsible design of AI technologies in the context of children’s rights. We spoke about the inaugural Children’s AI Summit, participatory research with children on generative AI, and comparative research on children’s rights impact assessments.
What is the Children’s AI Summit, why is it important, and what are the key outcomes you want to share with policymakers? Sabeehah: The Children’s AI Summit was inspired by the first AI Safety Summit, which took place at Bletchley Park, UK, in 2023 but failed to consider children’s experiences with AI. To amplify the importance of the topic, we decided to host a dedicated Children’s AI Summit one week before the AI Action Summit in Paris.
AI systems disproportionately impact children. While many of us are familiar with AI systems, children are the only group within our society that have interacted with and been impacted by AI since birth. Despite this, key stakeholders – such as AI developers and policymakers – have not adequately considered or engaged with children on this topic. This has resulted in a lack of awareness of children’s experiences with AI and a lack of action. The Alan Turing Institute’s Children and AI programme, created three years ago, aims to address these gaps by centring and amplifying children’s voices.
Our Children’s AI Summit hosted nearly 150 children aged 8 to 18 from across the UK, including Scotland, Leicester and London. Putting children’s voices and experiences centre stage, the Summit explored how AI impacts children today and how children want to shape its future. It was a child-led event with entertainment and activities for different age groups between panel sessions and performances, including from the National Youth Theatre, a session by the NSPCC’s Voice of Online Youth, and interactive stands by Children’s Parliament, the LEGO Group, and others – including robotics! We also invited a few partners from civil society, such as 5Rights Foundation, industry, and the Department for Education. The Summit was hosted by our team in collaboration with Queen Mary University of London, with support from the LEGO Group, Elevate Great and EY.
Each session was chaired by a young person, and the speakers and performers were all children. Prior to the Summit, we ran competitions across three categories: 1) arts, 2) pitching, and 3) messages to world leaders. Children across the UK and internationally were able to submit entries in these categories and share their thoughts, opinions and concerns about AI. We used these submissions to shape the Summit’s agenda and invited the competition winners to join us. Following the Summit, we produced the Children’s Manifesto for the Future of AI. This, too, was child-led, based solely on ideas shared by children and young people in the run-up to and during the Summit.
A youth representative presented the Children’s Manifesto at the Paris AI Action Summit. The key message is that policymakers must listen to children and take their perspectives on AI seriously. Children emphasised three main areas in the Manifesto: 1) education, 2) health, safety and well-being, and 3) the environment. First, children want AI to be used to support education, including through personalised learning and improving access to education. However, they have real concerns about unequal access, digital exclusion and bias as AI is integrated into education globally. Second, children want AI to be used to keep them safe online and offline, but they are concerned about mental health, security, fake content, privacy and exploitation. Finally, the protection of the environment and AI’s impacts on the environment are significant concerns, and children want stakeholders to actively consider regulation and to increase media coverage and public awareness of this topic.
The Children and AI team is also conducting research on generative AI’s impacts on children. Can you tell us about this project, including your research approach and initial findings? Sabeehah: For this research, supported by the LEGO Group, we are exploring the perspectives of children, parents, carers and teachers on generative AI technologies. Our research is guided by the framework for children’s online wellbeing established by the UNICEF project Responsible Innovation in Technology for Children (an initiative funded by the LEGO Foundation) and seeks to examine the potential impacts of generative AI on children’s wellbeing.
The project consists of two workstreams. The first comprises a nationwide survey of the opinions of children, their carers and teachers, led by the Turing’s AI for Public Services team. The second is a series of school-based workshops exploring children’s thoughts and perspectives on generative AI, with a focus on how the use of multi-modal tools such as ChatGPT and DALL-E influences creativity and play. To conduct the workshops, we collaborated with the Children’s Parliament to host six in-person workshops with Year 6 and Year 7 classes at two schools in Scotland. During the workshops, we first introduced the topic of generative AI tools and explored how they work, then facilitated access to certain tools that could generate text and images. After the children had interacted with the tools, we let them choose between using traditional art materials, generative AI tools, or both. We carefully designed safeguards, including adult supervision, since these tools are not designed for children.
Through these workstreams, we will provide recommendations about future approaches for the safe and responsible design, development, and deployment of generative AI technologies that support the promotion of children’s wellbeing. Our final report on this topic is forthcoming, but a key finding from the workshops is children’s strong concerns about the environmental impacts of generative AI tools.
Our projects centre the participation of children in research and external-facing activities. Our research partnerships with the Scottish AI Alliance and the Children’s Parliament, experts in children’s rights and workshop facilitation with children, have enabled us to conduct these workshops and meet safeguarding considerations. We also host a range of external events and workshops to engage with children on this topic. For example, we recently led a masterclass on AI ethics with teenagers from across the world.
Finally, children’s rights impact assessments are emerging as a key part of governance frameworks across the globe. Can you tell us about your research on this topic? Sabeehah: Over the past few years, we’ve seen an increase in organisations and countries considering formally adopting children’s rights impact assessments (CRIAs). In 2023, we conducted a review of existing frameworks relating to children’s rights and AI in the UK and internationally. This in-depth review examined the ways that children’s rights are being advanced and protected in relation to AI, and aims to inform best practice in child-centred AI. In November 2023, we published a report on AI, Children’s Rights, and Wellbeing: Transnational Frameworks, which included an in-depth analysis of 13 frameworks on data-intensive technologies across three major themes: children’s rights, children’s wellbeing, and child-centred policies.
Most recently, we worked with the Council of Europe’s Steering Committee for the Rights of the Child (CDENF) to conduct a Mapping Study on the extent to which children’s rights were considered across existing frameworks in member states. This work found that the majority of surveyed member states did not have plans to develop their own legal frameworks on AI in the context of children’s rights. It really emphasised the critical need for the development of this topic, which will be one of the focuses of CDENF’s work going forward.
CRIAs must involve an in-depth assessment of the benefits and limitations of an AI system across the whole breadth of children’s rights and principles, including non-discrimination and fairness. This requires the individual responsible for conducting the assessment to be adequately trained in children’s rights. In addition, the assessor must be required to evidence how the AI system adheres to children’s rights, to prevent the assessment being reduced to a tick-box exercise. The assessment also needs to be repeated at regular intervals to capture changes made to the system. CRIAs are only one tool for achieving child-centred AI. In addition, experts recommend that organisations also conduct a children’s rights impact evaluation, a retrospective tool used to evaluate the actual impacts of the implemented system.
|
:// Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm.newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency. AWO, Wessex House, Teign Road, Newton Abbot TQ12 4AA, United Kingdom