Algorithm Governance Roundup #7
|
Community Spotlight: Institute for the Future of Work | EU Parliament committees reach compromise position on the AI Act
|
Welcome to May's Roundup. We hope you enjoy this month's community spotlight on the Institute for the Future of Work (IFOW). We talk about IFOW's recent publication, the Good Work Algorithmic Impact Assessment, a framework that centres the participation of workers.

An update: at AWO, we are officially announcing our algorithm governance services. Algorithm governance isn't new for us; since our launch, we have supported companies, international organisations, charities, and government bodies. As with all our work, we undertake projects that further our mission: to support and advance data rights.

To celebrate, we're offering free audits, assessments, or strategic advisory up to the value of €5,000 to three organisations undertaking public interest work. If you'd like to be considered, please email enquiries@awo.agency by 30th June with a brief description of who you are, your algorithm governance challenge, and your timeline. Please take a moment to think about whether you know an organisation that would benefit, and share this with them and your networks!

As a reminder, we take submissions. We are a small team who select content from public sources. If you would like to share content, please reply or send a new email to algorithm-newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you!

Many thanks and happy reading!

AWO team
|
In Europe, the Internal Market Committee and the Civil Liberties Committee of the European Parliament have passed their compromise position on the Artificial Intelligence Act. The full report includes bans on facial recognition in public and on predictive policing, obligations to conduct fundamental rights impact assessments, transparency obligations for foundation models, and rights and redress for affected persons. See EDRi's tweet thread.

The European Commission has submitted a standardisation request to CEN and CENELEC to support the implementation of the AI Act. Standards will be developed on: risk management systems; governance and quality of datasets; record keeping through logging; transparency and information for users; human oversight; accuracy; robustness; cybersecurity; quality management systems, including post-market monitoring; and conformity assessments.

The European Data Protection Board has adopted guidelines on facial recognition technology (FRT) for law enforcement. These emphasise that FRT must be used in strict compliance with the Law Enforcement Directive and only where necessary and proportionate under the Charter of Fundamental Rights. The Board also reiterated its call for a ban on certain FRT uses in the Artificial Intelligence Act.

In France, the CNIL has published the action plan for its new Artificial Intelligence Service, with a specific focus on generative AI and AI video surveillance. The plan has four objectives: 1) understand the functioning of AI systems and their impacts; 2) guide the development of data protection compliant systems; 3) support innovation through a sandbox and 'enhanced support' programme; and 4) develop audit tools and conduct investigations.

In the UK, the Digital Regulation Cooperation Forum (the ICO, FCA, CMA, and Ofcom) has published its workplan for 2023-2024. One workstream focuses on 'supporting effective governance of algorithmic systems' and includes building common understandings and definitions, examining emerging risks and opportunities, and engaging with third-party auditors.

In the US, the White House has published the National Strategy for AI. This includes a roadmap for federal investment in AI R&D, a public call on critical AI issues, a report on the risks and opportunities of AI in education, and a listening session with workers impacted by automated technologies. The Federal Trade Commission, Department of Justice, Consumer Financial Protection Bureau, and Equal Employment Opportunity Commission have published a joint statement on automated systems, raising concerns about unlawful discrimination and committing to enforce their respective legislation. The Senate Committee on the Judiciary's Subcommittee on Privacy, Technology, and the Law hosted a hearing, Oversight of AI: Rules for Artificial Intelligence, with testimony from OpenAI, IBM, and NYU.

Foundation Models and Generative AI

In the UK, the CMA has launched a review into the competition and consumer protection considerations of the development and use of foundation AI models. The CMA is seeking evidence from stakeholders; the deadline for submissions is 02 June.

In Canada, the Office of the Privacy Commissioner has launched an investigation into OpenAI regarding its collection, use, and disclosure of personal information without consent for ChatGPT.

In the US, DEF CON AI Village is hosting a red-teaming exercise for generative AI models. Thousands of individuals will participate to find bugs and biases in large language models built by Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability.
|
The Alan Turing Institute has published Exploring Children's Rights and AI. This report explores children's engagement with and understanding of AI, and its impact on children's rights. It identified four main themes: AI and Education; Fairness and Bias; Safety and Security; and The Future of AI. The research was conducted with Scotland's Children's Parliament and the Scottish AI Alliance.

The Brookings Institution has published The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment. The article explores the diverging approaches to AI regulation in the two jurisdictions and possible steps to establish global AI standards.

Eticas has published Auditing Social Media: (In)visibility of Political Content on Migration. The report shares findings of an adversarial sock-puppet audit of TikTok's recommender system regarding political content on migration.

The European Center for Not-for-Profit Law (ECNL) has published Framework for Enabling Meaningful Engagement of External Stakeholders in AI. The framework is developed around three elements of meaningful engagement: shared purpose, trustworthy process, and visible impact. It prioritises engaging the persons most impacted by AI systems. It is being piloted with the City of Amsterdam and feedback is welcome.

Researchers at Harvard University have published Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten. The article discusses the conflict between the right to explanation and the right to be forgotten, and proposes an algorithmic framework that generates explanations that remain robust even when data deletion requests trigger model updates.

Lawfare has published Challenges of Implementing AI With "Democratic Values": Lessons from Algorithmic Transparency. The article considers two aspects of transparency, the audience and the objective, and includes analysis of the EU's AI Liability Directive.

Lighthouse Reports has published two investigations of algorithmic systems as part of its Suspicious Machines series. Ethnic Profiling, co-published with NRC, reveals that the Dutch Ministry of Foreign Affairs has profiled millions of visa applicants with an algorithm using variables like nationality, gender, and age. Social Security uses a secret AI to track sick leave and hunt down fraud, co-published with El Confidencial, reveals that the Spanish Social Security Agency has scored workers on their health status and ability to return from leaves of absence.

Foundation Models and Generative AI

Access Now has published What you need to know about generative AI and human rights. The article explains the limitations and real risks of generative AI systems. It suggests avenues to improve safety, including mandating transparency about systems' exploitative labour practices, environmental impact, and compliance with human rights standards.

The Alan Turing Institute hosted How to regulate foundation models: can we do better than the EU AI Act? The event featured a presentation by Lilian Edwards, a professor at the University of Newcastle, on the governance of foundation models and the EU's AI Act, followed by a panel discussion. The slides are available here.

The Electronic Privacy Information Center (EPIC) has published Generating Harms: Generative AI's Impact and Paths Forward. This report categorises the harms posed by generative AI and the policy, legal, and technical tools to mitigate them.
The Stanford Institute for Human-Centered Artificial Intelligence has published AI-Detectors Biased Against Non-Native English Writers. Researchers assessed seven AI detectors that purport to identify content created by an AI system. They found the detectors misidentified 61% of TOEFL (Test of English as a Foreign Language) essays written by non-native English speakers as AI-generated, and 97% of these TOEFL essays were flagged by at least one of the detectors (see the illustrative sketch below).

Researchers at the University of Washington have published Data Statements: From Technical Concept to Community Practice. This article documents the co-evolution process behind A Guide for Writing Data Statements for Natural Language Processing, a documentation toolkit that facilitates transparency, mitigation of bias, and inclusion.

Wired has published Generative AI Systems Aren't Just Open or Closed Source. The article discusses a leaked memo, allegedly from Google, claiming that open-source models will outcompete closed models developed by Google and OpenAI. It challenges the open/closed dichotomy and advocates a gradient framework for responsible AI releases.
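The two headline figures in the Stanford item combine per-detector error rates with an "any detector flagged it" statistic. Purely as an illustration of how those two aggregates relate (hypothetical, simulated detector outputs; not the study's data, code, or detector names), a minimal Python sketch:

# Illustrative only: hypothetical simulated outputs, not the Stanford study's data or code.
# Computes (1) the mean per-detector false-positive rate on human-written essays and
# (2) the share of essays flagged as "AI-generated" by at least one of seven detectors.
import random

random.seed(0)

NUM_ESSAYS = 100       # hypothetical sample size
NUM_DETECTORS = 7

# flags[i][j] is True if detector j labels human-written essay i as AI-generated.
flags = [[random.random() < 0.61 for _ in range(NUM_DETECTORS)]
         for _ in range(NUM_ESSAYS)]

per_detector_fpr = [
    sum(flags[i][j] for i in range(NUM_ESSAYS)) / NUM_ESSAYS
    for j in range(NUM_DETECTORS)
]
flagged_by_any = sum(any(row) for row in flags) / NUM_ESSAYS

print(f"Mean per-detector false-positive rate: {sum(per_detector_fpr) / NUM_DETECTORS:.0%}")
print(f"Share flagged by at least one detector: {flagged_by_any:.0%}")

Because the union of seven imperfect detectors flags an essay whenever any single one errs, the "at least one detector" figure is necessarily at least as high as any individual detector's false-positive rate.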
|
The European Commission, DG CONNECT, is consulting on the draft delegated act for independent audit under the Digital Services Act. The delegated act provides a framework for the preparation, commissioning, and reporting of independent audits. The consultation deadline is 02 June.
The University of Oxford Institute for Ethics in AI is proposing to elect a Visiting Fellow for the academic year 2023-2024. The application deadline is 17 June.

The UK Department for Science, Innovation and Technology is consulting on its AI regulation white paper, including on its cross-sectoral principles, the allocation of legal responsibility along the AI value chain, and foundation models. The consultation deadline is 21 June.

OpenAI has launched a program to award ten $100,000 grants to fund experiments in setting up a democratic process for deciding what rules AI systems should follow. The program is presented as separate from OpenAI's business interests, but this has been criticised; see TechCrunch. The application deadline is 24 June.

Data & Society has launched its Algorithmic Impact Methods Lab (AIMLab) to develop and pilot public interest impact assessment methods for automated decision-making systems. The Lab is seeking partners for engagement and collaboration.
|
How do people feel about AI?
6 June, 14:00-15:00 GMT, Ada Lovelace Institute
The Ada Lovelace Institute and The Alan Turing Institute share the findings of a survey on the British public's experience of and attitudes to AI, and their expectations of how it should be regulated.

DSA Stakeholder Event
27 June, Brussels
The European Commission, DG CONNECT, is hosting this event to offer stakeholders the opportunity to provide input ahead of the enforcement of the Digital Services Act.
|
Community Spotlight: The Institute for the Future of Work
|
The Institute for the Future of Work (IFOW) is a research and development institute exploring how new technologies are transforming work and working lives. We spoke to the team behind the Good Work Algorithmic Impact Assessment, which provides concrete guidelines for conducting a participatory impact assessment of algorithmic systems that affect work.

Q: Can you tell us about the Institute for the Future of Work and the impetus for the Good Work Algorithmic Impact Assessment (GWAIA)?

IFOW: Our mission at the Institute for the Future of Work is to build a fairer future of better work. We focus on the impacts of data-driven technologies on work and workplace rights. We were founded following a parliamentary commission into the future of work. The commission ran from 2016 to 2018 and brought together expertise in law and employment, economics, philosophy, and technology to think about the implications of AI and machine learning (ML) for work. The inquiry looked at the structure of labour markets, job opportunities, job equality, and worker rights and well-being.

Impact assessment became a focus for us in response to our first major research project. In 2020, IFOW brought together an Equality Task Force. The group examined three case studies of ML deployment within the work lifecycle, in recruitment, hiring, and management. It evaluated the extent to which our current legal framework, focusing on the Equality Act and information rights under the UK GDPR, was able to deliver meaningful accountability for risks to equality. We also reviewed 'technical bias auditing' tools on the market at that time that claimed to provide 'fairness solutions'. We found these systems were importing mathematical formulas used in American law; they had little relevance to the UK context and poor transparency around the definitions of fairness being used and the inevitable trade-offs made in these processes.

The project identified a number of gaps in the existing regulatory regime, particularly: a) the lack of any requirement for ex-ante examination of the potential risks of algorithmic systems, and b) the absence of any requirement for disclosure of these potential or realised impacts. Under tort-based models of law, an individual has to claim wrongdoing by knowing it has taken place, so necessarily after the fact. A lack of transparency regarding the approach taken to fairness limits the use of this mechanism, and the lack of any duty to consider risks ex ante limits the management of potential harms. The report concluded that corporate duties for pre-emptive evaluation, with disclosure requirements, were necessary. We described this as an Accountability for Algorithms Act. However, the focus at that time was solely on equality harms. As we did more research on the impacts of algorithmic systems in practice, we found that impacts were arising across all key dimensions of Good Work.

Q: How was this guidance developed and why is it important?

IFOW: Knowing the limits of current legal and market-based solutions, we sought the financial support of the Information Commissioner's Grant Programme to develop guidance that harnessed the potential of audit and impact assessment to manage 'fairness' across a wider set of dimensions. Within this, we wanted to explore the role of data protection as a gateway to the preservation of a wider set of fundamental rights as they relate to work.
We first undertook a review of algorithmic impact assessment methodologies and of the literature on participatory AI, participatory ML, human-computer interaction, and related fields. This included impact assessments across different sectors and deployments, as well as the specific approaches to algorithmic accountability that were emerging in America and Canada. Our Good Work AIA is framed through the lens of the Good Work Charter principles: access, fair pay, fair conditions, equality, dignity, autonomy, well-being, support, participation, and learning. The Charter synthesises fundamental rights, principles, and values and operates as a 'checklist' of AI principles as they apply to the workplace. This is a novel approach because we have a strong focus on involving workers as key stakeholders.

Giving Voice to Workers: Internationally, there is growing consensus on widening engagement in the algorithmic assessment process. In line with this, our concern was to make sure that data subjects' views were incorporated, for both practical and regulatory compliance reasons. We believe that people who have or will have their data represented in these systems (present or future 'data subjects'), particularly in the field of work, are well placed to identify potential risks of algorithmic systems that others might miss prior to the system's design, development, and deployment. In the context of work, it is relatively simple to identify the total population of impacted persons and consult them to make the widest possible assessment.

From disclosure to negotiation: Disclosure of information is important. Our AIA emphasises the importance of recording decisions taken at each stage of the design, development, and deployment process for accountability. We would like companies to be legally obliged to disclose decisions and have developed template legislation. However, we want to move beyond disclosure as a sufficient means of reducing information asymmetries. Instead, our guidance is concerned with what disclosure looks like and how it can be used as a basis for wider stakeholder involvement and for co-designing the deployment of an algorithmic system at work.

Social mitigations: Our concept of fairness moves beyond equality, pay, and access to work and considers the balancing of interests across wider social Good Work Principles, such as conditions, autonomy, dignity, well-being, and learning. Our socio-technical approach considers redesign of the technical system as well as mitigations that look at how the tool is integrated within the business and shapes labour relationships.

Encompassing technical audits: The UK's AI White Paper proposes high-level principles for AI regulation, including appropriate transparency and explainability, and places weight on technical audit. Our AIA methodology encompasses such approaches, including audits for fairness, bias, and robustness, and situates them within a process that provides the opportunity to explore what explainability and transparency of both systems and audits involve vis-à-vis workers.

Understanding AI at Work Toolkit: We released a Toolkit that supports the AIA. The Toolkit is an educational resource of definitions and explainer videos. It explains how key processes in the design of algorithmic systems involve decisions that shape the systems' impacts and outcomes. Our previous research found that employers were unaware of these processes.
We argue that employers can only deploy algorithmic systems in an accountable and legally compliant manner when they are cognisant of how those systems work. The Toolkit is also useful for workers and for trade union training processes.

Q: Your approach recommends an initial Risk Assessment followed by a four-stage Impact Assessment. Why did you structure your assessment this way?

IFOW: The initial stage is conducted by a cohort of what we identify as 'accountable agents' within an organisation, to ensure good process is followed for the initial considerations regarding the adoption of a system. The process involves identifying and safeguarding against any clear breaches of the law, and promoting good work in line with organisational values. These conditions should be considered from the outset. We propose that union representatives and frontline workers are involved where possible. One of the main outputs is a Key Design Choices Report mapping the choices made at the design, development, and deployment stages. This Report is a useful diagnostic and a source of information for the full AIA.

Through this process, the cohort of accountable agents should be able to identify whether an algorithmic system will make or inform decisions about access to work or its terms and conditions, such as pay, work allocation, evaluation of performance, or discipline of workers. Wherever this is the case, the organisation is advised to conduct the full AIA.

The subsequent full Good Work AIA process consists of four stages: 1) identifying relevant stakeholders, including workers; 2) undertaking an ex-ante risk and impact analysis; 3) designing and deploying mitigations; and 4) monitoring the impacts and mitigations within the organisation (for example, this could be done by a trade union if it has the ability to collect and process the relevant information). This approach looks to widen participation to encompass workers as data subjects, allowing the most comprehensive assessment of potential impacts.
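As an editorial aside, and purely as an illustration (hypothetical names and structure, not IFOW's published tooling), the screening rule and four-stage structure described above could be sketched in Python roughly as follows:

# Illustrative sketch only, not IFOW's tooling: one way an organisation might encode
# the Good Work AIA screening rule and track the four-stage process described above.
from dataclasses import dataclass, field

# Decision areas that, per the guidance described above, trigger a full AIA
# when an algorithmic system makes or informs decisions about them.
TRIGGER_AREAS = {
    "access to work", "terms and conditions", "pay",
    "work allocation", "performance evaluation", "discipline",
}

FULL_AIA_STAGES = [
    "1. Identify relevant stakeholders, including workers",
    "2. Undertake ex-ante risk and impact analysis",
    "3. Design and deploy mitigations",
    "4. Monitor impacts and mitigations within the organisation",
]

@dataclass
class InitialRiskAssessment:
    system_name: str
    decision_areas: set                                      # areas the system makes or informs decisions about
    key_design_choices: list = field(default_factory=list)   # feeds the Key Design Choices Report

    def full_aia_required(self) -> bool:
        """A full Good Work AIA is advised wherever the system touches a trigger area."""
        return bool(self.decision_areas & TRIGGER_AREAS)

# Hypothetical example of the screening step.
assessment = InitialRiskAssessment(
    system_name="shift-scheduling tool",
    decision_areas={"work allocation", "performance evaluation"},
)
if assessment.full_aia_required():
    print("Full Good Work AIA advised; stages:")
    for stage in FULL_AIA_STAGES:
        print(" ", stage)

The sketch only captures the trigger logic described above: whether the full AIA follows depends on what decisions the system makes or informs about work.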
|
Q: Your approach builds on top of Data Protection Impact Assessments. Why have you used this as a starting point?

IFOW: The use of AI systems in work creates novel risks that are not fully captured by our existing data protection framework. However, that framework provides a strong jumping-off point, because Article 35 of the GDPR generally requires a Data Protection Impact Assessment when an algorithmic system is deployed that is novel or has significant elements of automated decision-making. This would include decisions concerning work allocation, payment, access to benefits, and potential disciplinary or contractual matters. Importantly, businesses are well versed in data protection compliance and often have accountable actors within the firm. Also, the emerging ecosystem of firms working on responsible AI deployment originated in the data protection space.

The Data Protection and Digital Information Bill would erode our current data protection framework, and we are concerned that the proposed changes to Article 22 UK GDPR on profiling and Article 35 UK GDPR on DPIAs could impact the Good Work AIA. However, we intend our framework to build on and go beyond data protection compliance to promote a gold standard of responsible innovation for good work. Beyond the requirements of Article 35, we propose that the AIA is conducted whenever the use of an algorithmic system impacts access to, conditions of, or quality of work as defined in our Good Work Charter.

Q: What are the next steps for this project and what other algorithm governance work is IFOW working on?

IFOW: We are building a community of trailblazer organisations who want to pioneer good work while innovating, through The Lab. If you are a business interested in piloting our approach, please contact us at abby@ifow.org. There will be a lot to learn in the pilot process. We are particularly interested in the differences between organisations that are creating and deploying systems in-house and those that are procuring systems. We want to understand how the Software-as-a-Service procurement process impacts transparency and negotiation of system design. We also need to think about how mitigations, including system redesign and monitoring, are affected by procurement. We are developing versions of this GWAIA guidance for use by trade unions, and for the specific context of hiring.
|
:// Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!
If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
|
You are receiving Algorithm Governance Roundup as you have signed up for AWO's newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO's Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.

A W O
Wessex House Teign Road Newton Abbot TQ12 4AA United Kingdom