Algorithm Governance Roundup #13
|
Platform Work Directive | Community spotlight: Filippo Pierozzi, United Nations Office of the Secretary-General’s Envoy on Technology
|
Welcome to March’s Algorithm Governance Roundup. This month, I spoke to Filippo Pierozzi at the United Nations Office of the Secretary-General’s Envoy on Technology about the Office’s work with the UN’s AI Advisory Body and its report ‘Governing AI for Humanity’. As the UK's Data Protection and Digital Information Bill moves through the Committee stage in the House of Lords, our legal colleagues Ravi Naik and Alex Lawrence-Archer have published Data Protection and Digital Information Bill: A threat to fair markets and open public services for the Social Market Foundation, along with an accompanying blog. On the first day of debate, Lord Clement-Jones mentioned AWO as a leading voice on the Bill.
Many thanks and happy reading! Esme Harrington, AWO
|
In the EU, a provisional agreement has been reached on the Platform Work Directive. The legal act aims to regulate the use of algorithms by digital labour platforms and to ensure minimum standards of protection for the more than 28 million people working on such platforms across the EU. The agreement covers employment status, including a legal presumption of employment; transparency requirements; human oversight and evaluation; and a ban on the use of automated monitoring and decision-making systems to process certain types of personal data.

In France, the Autorité de la Concurrence, the competition authority, fined Google €250 million for breaching its commitments to news publishers. The Autorité determined that Google’s AI system Bard used content from press agencies and publishers to train its foundation model. Google did not notify the agencies and publishers or the Autorité, and failed to propose a technical solution allowing press agencies and publishers to opt out.

In Slovakia, the Council for Media Services, the media regulatory authority, published a brief analysis of the transparency reports provided by very large online platforms and search engines as required by the EU Digital Services Act. The regulator emphasised the lack of standardisation in the reports’ structures and formats, and the absence of a common understanding of the required metrics.

In the UK, the proposed Artificial Intelligence (Regulation) Bill received a second reading in the House of Lords. The private members’ bill proposes an AI authority, regulatory principles, sandboxes to support testing, and designated responsible AI officers.

The National Audit Office published a report on the Use of artificial intelligence in government. It considers how effectively the UK government is positioned to maximise the opportunities and mitigate the risks of AI in providing public services. It focuses on machine learning (including language processing, predictive analytics and image or voice recognition), and includes a survey of government bodies about their planned and deployed use cases.

The Information Commissioner’s Office (ICO), the UK's data protection regulator, published new guidance on the use of biometric recognition systems, as well as on content moderation processes under the Online Safety Act (OSA). The latter is aimed at organisations that carry out content moderation to meet their OSA obligations, and forms part of the ICO’s ongoing collaboration with Ofcom, the communications regulator, on data protection and online safety technologies.

The UK and Australia announced a new partnership on online safety and security. The Memorandum of Understanding, which is not legally binding, proposes joint action in areas including regulation and enforcement, tech industry accountability, countering misinformation and disinformation, countering foreign interference, and promoting the safety technology sector.

The UK and US announced a new partnership on the science of AI safety. The Memorandum of Understanding will see collaborative development of tests for the most advanced AI models. The UK and US AI Safety Institutes have also laid out plans to build a common approach to AI safety testing. They intend to perform at least one joint testing exercise on a publicly accessible model, and to explore personnel exchanges.

In the US, the National Telecommunications and Information Administration (NTIA) within the Department of Commerce published its AI Accountability Policy Report. The report, which offers recommendations for federal government action, focuses on how accountability is created by information flows (documentation, disclosures, and access) and independent evaluations (including red-teaming and audits). These, in turn, feed into consequences, including liability and regulation.

The White House Office of Management and Budget (OMB) released new requirements and guidance for federal agencies’ use of AI. The document covers the roles and responsibilities of agency Chief AI Officers, the publication of agency AI strategies within a year, proactive sharing of AI code and models, and risk management practices.

Utah passed the Artificial Intelligence Policy Act. The state-level AI bill takes effect on 1 May and addresses private sector AI deployments. It establishes liability for the use of AI, especially generative AI, that violates consumer protection laws if its use is not properly disclosed.

California published guidelines on public sector procurement, use and training of generative AI. The guidelines specify that each state entity is responsible for the ethical, transparent and trustworthy deployment of generative AI in its organisation, and must assess the impact on the state workforce. State purchasing officials will have access to procurement training on generative AI beginning March 2024.

IEEE-USA published a call for legislation establishing robust auditing protocols for automated decision-making systems (ADS). This includes an auditor network of government and government-certified independent AI auditors, central registration of ADS, a licensing regime for sensitive ADS, and domain-specific audit criteria across the AI lifecycle.

At the Munich Security Conference, twenty global technology companies signed a voluntary pact to combat deceptive use of AI in 2024 elections. The signatories agreed to eight specific commitments, and pledged to work collaboratively on tools to detect and address the online distribution of such AI content, drive educational campaigns, and provide transparency.

The United Nations General Assembly passed its first global resolution on AI. The resolution, which was adopted unanimously, encourages the promotion of safe, secure, and trustworthy AI systems that will benefit sustainable development for all. The Assembly also called on all Member States and stakeholders to refrain from or cease the use of AI systems that pose undue risks to the enjoyment of human rights.
|
Accelerating Progress Towards Trustworthy AI, Mark Surman, Ayah Bdeir, Lindsey Dodson, Alexis-Brianna Felix and Nik Marda, Mozilla Foundation
AI is Taking Water from the Desert, Karen Hao, The Atlantic
AI Safety is not a model property, Arvind Narayanan and Sayash Kapoor, AI Snake Oil
AI Threats to Climate Change, Climate Action Against Disinformation, Check My Ads, Friends of the Earth, Global Action Plan, Greenpeace, and Kairos
A Typology of Artificial Intelligence Data Work, James Muldoon, Callum Cant, Boxi Wu and Mark Graham
A Safe Harbor for AI Evaluation and Red Teaming, Shayne Longpre, Sayash Kapoor, Kevin Klyman et al.
Assurance of Third-Party AI Systems for UK National Security, Rosamund Powell and Marion Oswald, Centre for Emerging Technology and Security
Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models, UNESCO, International Research Centre on AI
Collective Bargaining Agreements on AI at the Workplace, UNI Europa and FES Competence Centre on the Future of Work
Does technology use impact UK workers’ quality of life?, Dr Magdalena Soffia, Professor Jolene Skordis, Rolando Leiva-Granados and Xingzuo Zhou, Institute for the Future of Work
EU AI sovereignty: for whom, to what end, and to whose benefit?, Daniel Mügge
Fake Image Factories: How AI image generators threaten election integrity and democracy, Center for Countering Digital Hate
Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content, Kate Knibbs, Wired
In Transparency We Trust? Evaluating the Effectiveness of Watermarking and Labelling AI-Generated Content, Ramak Molavi Vasse’i and Gabriel Udoh, Mozilla Foundation
Invisible No More: The Impact of Facial Recognition on People with Disabilities, Eticas
Involving the public in AI policymaking - Experiences from the People’s Panel on AI, Connected by Data
Models All The Way Down, Christo Buschek and Jer Thorp
Security and Privacy Challenges of Large Language Models: A Survey, Badhan Chandra Das, M. Hadi Amini and Yanzhao Wu
The Poverty of Ethical AI: Impact Sourcing and AI Supply Chains, James Muldoon, Callum Cant, Mark Graham and Funda Ustek-Spilda
“This is not a data problem”: Algorithms and Power in Public Higher Education in Canada, Kelly McConvey and Shion Guha
Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling, Victor Ojewale, Abeba Birhane, Ryan Steed, Inioluwa Deborah Raji and Briana Vecchione
Tracing the Roots of China’s AI Regulations, Matt Sheehan, Carnegie Endowment for International Peace
|
The 2024 Conference on Fairness, Accountability and Transparency (FAccT) is inviting applications for the FAccT Doctoral Colloquium, open to PhD students and recent graduates. The Conference is 03-05 June in Rio de Janeiro. Application deadline is 08 April.
Lighthouse Reports is hiring fellows to join its open source intelligence, data and reporting teams on a part-time basis for six months. The new Fellowship Program provides the opportunity to work on active investigations, mentoring and internal training. Application deadline is 08 April.
The DSA Observatory and Universiteit van Amsterdam’s Instituut voor Informatierecht are hosting a summer course on European Platform Regulation in Amsterdam on 01-05 July. Application deadline is 19 April.
The Ada Lovelace Institute is hiring a Public Participation & Research Practice Lead. Past projects have included the Citizens’ Biometrics Council and a major survey of UK public attitudes towards AI. Application deadline is 22 April.
The Coalition for Independent Technology Research is seeking contributions to its DSA Data Access Audit from researchers who work with social media data.
|
eLaw Conference
In Person: 20-21 June, Leiden
Leiden University's Center for Law and Digital Technologies, Brussels Privacy Hub, Computer Law & Security Review (CLSR) and the Data Protection Scholars Network (DPSN) are organising this conference on the theme of “Law and/versus Technology: trends for the new decade”.

Regulatory Models for Algorithmic Assessment: Robust Delegation or Kicking the Can?
In Person: 18:00-19:30 GMT, 25 April, University College London
University College London is hosting Margot Kaminski, Michael Veale, and Jennifer Cobbe to compare and contrast regulatory regimes concerning AI, with a focus on how actors within them can understand the systems around them.
|
Community Spotlight: Filippo Pierozzi, United Nations Office of the Secretary-General’s Envoy on Technology
|
The United Nations Office of the Secretary-General’s Envoy on Technology is responsible for the implementation of the UN’s roadmap on digital cooperation. We spoke to associate expert Filippo Pierozzi about the Office’s work with the UN’s AI Advisory Body and its report ‘Governing AI for Humanity’.

Q: What is the Office of the UN Tech Envoy and what have you been working on?

Filippo: The Office of the Secretary-General’s Tech Envoy was created in 2021. In the summer of 2022, Mr Amandeep Gill was appointed as the Special Envoy. The Office was set up to implement the UN’s roadmap for digital cooperation (2020) and its vision for an open, safe and secure internet. Essentially, we are the technology public policy team of the UN. We ensure that the UN maintains a coherent approach to digital policy, both internally, by coordinating with all the relevant agencies, and externally, by acting as an entry point for the private sector, civil society, and the technical community.

Member states Zambia and Sweden are leading on an intergovernmental agreement on digital cooperation, called the Global Digital Compact. This covers a broad range of digital issues, including connectivity, digital skills, trust and safety, and open AI. Our Office has been running the consultation on the Compact for the past year, and member states are now working on negotiations.

In October 2023, our Office set up the UN’s AI Advisory Body at the request of Secretary-General António Guterres. This had been in the works for several years. In 2019, the UN’s report on digital cooperation proposed the creation of an internal multistakeholder AI advisory body to advise on the governance of AI. Since then, UNESCO has published its recommendation on the ethics of AI and the UN Security Council has met to discuss AI for the first time. Private companies have also been interested in such a body, and the rise of ChatGPT and other LLMs has made it even more timely.

Q: What is the UN’s AI Advisory Body and what has it been working on?

Filippo: The AI Advisory Body consists of 39 expert members, including the Tech Envoy himself. The other 38 experts represent 33 different nationalities, and include government officials, academics, civil society activists, and private sector representatives. All these experts serve in a personal capacity, rather than representing their institutions. During the appointment process, we created an open platform and received over 1,800 nominations, which were all of excellent quality. We coordinated with several UN agencies to shortlist and select the experts, who were chosen based on expertise whilst ensuring geographical and gender diversity.

The AI Advisory Body has a Secretariat, which is hosted within our team at the UN Office of the Tech Envoy. The Secretariat supports the body, including by conducting novel research. For example, our team has pulled together existing literature and initiatives focused on AI risks.

The AI Advisory Body delivered its interim report Governing AI for Humanity in December – within two months of formation. In this period, the body convened as a plenary three times and held over 40 meetings. The interim report is the output of these discussions and is intended to set the tone of the conversation rather than offer definitive answers at this point. The report is structured around two pillars. The first proposes five principles that should shape the international governance of AI:
- AI should be governed inclusively, by and for the benefit of all.
- AI must be governed in the public interest.
- AI governance should be built in step with data governance in relation to data sharing and data protection.
- AI governance must be universal, networked and rooted in adaptive multi-stakeholder collaboration.
- AI governance requires compliance with and adherence to international law, including international human rights law and other UN commitments such as the Sustainable Development Goals.
The second pillar is concerned with an international institution for the governance of AI. It was too early for the AI Advisory Body to propose a specific structure for this institution. Instead, it emphasises that form must follow function. This requires identifying the functions that an agency-to-be, a network of agencies, or a network of existing institutions could play in the future. The report identified a pyramid of seven functions, layered from most achievable to least.

The first function, which has the greatest consensus and is most feasible, is horizon scanning. This involves the creation of an IPCC-style institution for AI which can inform policymakers on trends, risks, and issues to look at in the short term. Unlike the IPCC, we suggest that the body publishes reports every six months.

In the medium term, AI governance arrangements must be interoperable across jurisdictions. As a result, we are concerned with understanding what different actors are doing, and are closely following the G7’s Hiroshima process, the EU’s AI Act, and the UK’s AI Safety Summit (and upcoming summits in Korea and France). In the longer term, there will be discussions around AI liability schemes, with the potential for a UN treaty on this topic.

Q: How are you coordinating with other UN bodies that have expressed an interest in AI?

Filippo: We are working with several UN agencies and departments. For example, we are coordinating closely, at both the leadership and working level, with UNESCO on AI ethics. Some of the AI Advisory Body experts also contributed in a separate capacity to UNESCO’s recommendations on the ethics of AI. Recently, our team attended UNESCO’s AI conference in Slovenia, where we hosted consultations on the interim report. We also work with the ITU, the UN agency for standards, UNCTAD on the digital economy, the ILO on the implications of AI for the workforce, and the Office of the High Commissioner for Human Rights. The members of the AI Advisory Body have met with all of these agencies in person.

Q: What are the next steps for this work?

Filippo: Between February and April we are running our stakeholder engagement process on the interim report. We are interested in feedback on the interim report itself, and in gathering expertise on several horizontal and vertical areas of AI that we were not able to cover in the interim report. For example, it does not discuss AI applications in agriculture and health, nor considerations on gender, the rule of law, national security, or open source. We will draw on all these inputs for the final report. During this period the AI Advisory Body will meet as a plenary every month, including in-person stakeholder sessions in Europe and Asia.

The final report will be released this summer, in advance of the UN’s Summit of the Future on 23 and 24 September. At this Summit, heads of government will discuss and hopefully approve a political agreement called the Pact for the Future. Our final report on AI will be an important input. Both the interim report and the final report will also contribute to the Global Digital Compact, which is being negotiated between member states. In particular, we hope some of the final report’s recommendations will be included in the parts of the Global Digital Compact related to AI.
|
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.
AWO
Wessex House Teign Road Newton Abbot TQ12 4AA United Kingdom
|