Algorithm Governance Roundup #1
Algorithmic Transparency Standard | Community spotlight: Liz Adams, CDEI
Hello — welcome to AWO’s first Algorithm Governance Roundup.

You are receiving this email because, at some point in the past, you signed up to receive newsletters from AWO.


Algorithm Governance Roundup is a new monthly newsletter to support and connect with organisations working on algorithm governance, with a special focus on the UK and EU. Every month, we will share technical and regulatory developments, job listings, publications, and events. We will also feature an organisation or initiative through our ‘Community Spotlight’ section. This month, we interview Liz Adams from the UK’s Centre for Data Ethics and Innovation on the Algorithmic Transparency Standard.


We take submissions: We are a small team who select content from public sources. If you would like to share content, please reply or send a new email to algorithm-newsletter@awo.agency. Our only criterion for submissions is that the update relates to algorithm governance, with emphasis on the second word: governance. The purpose of this newsletter is to examine the field of assessing and auditing algorithms, mitigating algorithmic harms, and championing accountability for everyone who interacts with algorithmic systems.


If you wish to unsubscribe from Algorithm Governance Roundup, please click here. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.

Many thanks and happy reading!


AWO team

This Month's Roundup
In the UK, the Algorithmic Transparency Standard has received an update. The Standard, developed by the Central Digital and Data Office (CDDO) and the Centre for Data Ethics and Innovation (CDEI), provides a framework for public sector bodies to share information on their use of algorithms. The template is available on GitHub, and the Standard and guidance are open for comment.
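
To give a flavour of what a completed report might contain, here is a minimal sketch of a transparency record as structured data. This is purely illustrative: the field names are our own hypothetical assumptions, not the Standard’s actual schema, and it supposes a split between a short public-facing summary and fuller detail. The authoritative template is the one on GitHub.

# A minimal, hypothetical sketch of a transparency report record (Python).
# Field names are illustrative assumptions, not the official schema;
# the authoritative template is published on GitHub.
transparency_report = {
    "summary": {  # short, plain-English description for the general public
        "tool_name": "Example risk-triage tool",
        "purpose": "Helps caseworkers prioritise applications for review",
        "role_in_decision": "Produces a score; a human makes the final call",
    },
    "detail": {  # fuller information for specialist readers
        "owner": "Hypothetical Department for Examples",
        "model_type": "Gradient-boosted decision trees",
        "data_sources": ["Application forms", "Historical case outcomes"],
        "human_oversight": "All scores are reviewed by a trained caseworker",
        "risks_and_mitigations": "Quarterly bias audit; results published",
    },
}

# A publishing pipeline might check the public summary is fully filled in:
assert all(transparency_report["summary"].values())
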
In this month’s Community Spotlight, we interview Liz Adams from the CDEI about the Standard.

In the UK Parliament, the House of Commons Science and Technology Committee has launched an inquiry into the governance of AI. The inquiry seeks evidence on current governance approaches, transparency and explainability measures, AI regulation, the UK’s current legal framework, and international comparisons and lessons to be learned. The deadline for submissions is 25 November.


In Spain, a voluntary algorithmic transparency certificate for industry is being developed by Adigital, the Spanish Association of the Digital Economy, in partnership with Eticas Foundation. To receive certification, companies will complete forms and mappings aimed at transparency and explainability. The final certificate will take the form of an ‘algorithmic leaflet’, similar to those included with medicines, and certified companies will be able to display a logo to indicate compliance.

Last week, Eticas Foundation also hosted the inaugural Algorithmic Auditing International Conference with academics, civil society, and industry leaders. A conference panel discussion, ‘Algorithmic Audits: From Theory to Practical Solutions’, is available to watch online.

In Europe, the European AI Fund has announced 14 new grantees. This includes Eticas Foundation, to support dissemination of their algorithm audit work. Other grantees are Algorithm Audit, a Dutch non-profit with a case-based approach to forming independent audit commissions and testing technical tools for bias; the Ada Lovelace Institute, which will shape standards development mechanisms and develop audit and impact assessment methodologies; and Fundación Ciudadana Civio, which assesses algorithmic systems deployed by Spanish public bodies.

In the US, a new volunteer-led non-profit will run bounty competitions targeting algorithmic bias. biasbounty.ai expands on Twitter’s 2021 bias bounty work. Participants compete to identify algorithmic bias, modelled on the cybersecurity practice of inviting the public to identify and report vulnerabilities and security flaws. biasbounty.ai’s first bounty is live and closes on 30 November.


Further reading: bug bounties are gaining traction within algorithm governance.
Research
The European Center for Not-for-Profit Law (ECNL) published poll results from a survey of 12 EU countries, which asked members of the public for their opinion on the use of AI. Respondents were seriously concerned about the application of AI for national security, and overwhelmingly supported AI rules that fully protect human rights in all circumstances, including national security. The methodology and responses per country are available here.

The ECNL has also published a new legal opinion on the dangers of excluding AI used for military and national security from binding European instruments. The opinion looks specifically at the AI Act and the Council of Europe’s ‘zero draft’ of a Convention on AI, Human Rights, Democracy and the Rule of Law.

The Minderoo Centre for Technology and Democracy, based at the University of Cambridge, published A Sociotechnical Audit: Assessing Police use of Facial Recognition. The research develops a sociotechnical audit scorecard to assess police deployment of facial recognition technology and tests it on three police deployments in England and Wales – all failed.

The Cybersecure Policy Exchange, based at Toronto Metropolitan University, the Centre for Media, Technology and Democracy, based at McGill University, and the Center for Information Technology Policy, based at Princeton University, have published AI Oversight, Accountability and Protecting Human Rights: Comments on Canada’s Proposed Artificial Intelligence and Data Act. The researchers offer five recommendations to improve the proposed Canadian legislation, including formal public consultation, empowering an independent commissioner and tribunal, and broadening the scope of the framework to include government institutions.

Stanford researchers published a new paper on end-user audits, introducing an audit tool and methodology for non-technical end users. From Michelle Lam, the lead author: “We had 17 non-technical users lead end-user audits of Perspective API. With just 30 mins to conduct their audits, participants independently replicated issues that expert audits had identified and raised previously underreported issues like over-flagging of reclaimed slurs.”

Opportunities
The Ada Lovelace Institute is hiring a Visiting Senior Researcher on algorithmic auditing. The contract will run for 24 months, and is based out of their London office. The application deadline is 1 December.

AlgorithmWatch is offering five paid fellowships in algorithmic accountability reporting. The fellowships will run for six months, beginning in January 2023. Applicants must reside in the European Union or Switzerland. The application deadline is 4 December.
Upcoming Events
The Distributed Artificial Intelligence Research Institute (DAIR) 1-year anniversary event
Virtual: 2 & 3 December

Register here

The DAIR team is hosting two and a half hours of interactive talks and conversations with team members located across North America, Europe and Africa. This anniversary celebration is designed for anyone interested in DAIR’s work focusing on mitigating the harms of current AI systems and imagining a different technological future.


Data for Policy Conference 2022
Hybrid physical/virtual: 5 December – Hong Kong; 8/9 December – Seattle; 13 December – Brussels.
Register here until 30 November

Data for Policy is hosting their annual conference on data science innovation in governance and the public sector. Standard Track 5 of the conference will focus on ‘Algorithmic Governance’.
Community Spotlight: Liz Adams, CDEI
Liz is the Head of the Algorithmic Transparency Programme at the Centre for Data Ethics and Innovation (CDEI). She leads the CDEI’s work to develop and implement the UK’s Algorithmic Transparency Standard, alongside the Central Digital and Data Office, and has previously led CDEI projects on online harms and the use of data-driven technology in recruitment.

Q: Can you give us an overview of the Algorithmic Transparency Standard?
Liz: The Algorithmic Transparency Standard aims to establish a standardised way for government departments and public bodies to inform the wider public about how they use algorithmic tools in their decision-making processes. Co-developed with the Central Digital and Data Office (CDDO), the Standard promotes public transparency as well as encouraging good behaviours in the development and deployment of algorithmic tools.


The Standard provides meaningful transparency into algorithm-assisted decision processes by presenting details about how the algorithmic tool works and the context within which it operates. The Standard is aimed at any public authority that uses algorithmic tools in their decision-making process.


The CDEI’s public attitudes research consistently highlights transparency as a key driver of public trust. In polling about government data sharing, 78% of respondents said it was important to have the option to see a detailed description of how their personal information is shared, compared to just 7% who said it didn’t matter.


Completed transparency reports will be reviewed by the joint CDEI/CDDO team for completeness and accessibility, but won’t undergo any formal assessment.


Likewise, we welcome teams who want to link to algorithmic impact assessments they have completed, but this is not a requirement, and we don’t have any specific guidance on which ones should be used. We anticipate that enabling transparency will facilitate other opportunities for evaluation and assurance.

Q: How was the Standard developed?
Liz: Commitments to explore transparency mechanisms for automation in public sector decision-making have previously been made in the National Data Strategy and AI Strategy, and the concept has been strongly supported by other stakeholders, including the Alan Turing Institute, Ada Lovelace Institute, AI Now Institute and Oxford Internet Institute.


In early 2021, the CDDO and CDEI team ran a deliberative research project with Britain Thinks to consider how the public sector can be meaningfully transparent about algorithmic decision-making. The deliberative approach was designed to gradually build up participants’ understanding of the use of algorithms in the public sector, and culminated in a co-design session where participants worked collaboratively to create prototypes of an approach to algorithmic transparency that reflected their needs and expectations. More information on the findings of this research, which informed the development of the Standard, can be found here.

Following the release of the first iteration of the Standard in November 2021, we gathered feedback by piloting the Standard with teams across the public sector, as well as through an open call for feedback. Six of the pilot reports are now available on our GOV.UK page (reports from three police forces, the ICO, the Food Standards Agency, DHSC/NHS Digital and GDS), with more to be uploaded in the near future.

Using the feedback from these pilots, we updated the Standard and wrote accompanying guidance, which is available in draft form on GitHub and is open to feedback.


The piloting process allowed us to make changes to the guidance based on teams’ feedback: for example, we added and amended fields to allow teams to include information about the tool’s performance and to provide general context about their tools. We were also able to clarify fields that were unclear and work through concerns that transparency might pose security or “gaming” risks. Generally we found that the system-level transparency required by the Standard, as opposed to decision-level transparency, minimises these risks.


As we move into this next phase, we expect to see more teams from across the public sector adopting the Standard independently within their own organisations.


We remain available to support teams completing algorithmic transparency reports and will continue to gather feedback. Please do feel free to get in touch with us at algorithmic.transparency@cdei.gov.uk with any comments or questions.


Beyond the Standard itself, we are starting to investigate options around how we support publication and access to completed transparency reports. Over the next few months, we will be carrying out a discovery phase to assess user needs and consider options for how to approach this.


Q: How does the Standard fit into the wider UK regulatory framework?
Liz: The Standard aims to encourage and standardise approaches to transparency, which should lead to increased public trust in the public sector, as well as giving the public sector the confidence to innovate responsibly. It isn’t designed to be an exhaustive assurance tool, although we hope that it will encourage teams that use it to think through some central questions when designing and deploying tools.


As our draft guidance on GitHub makes clear, we only expect organisations to publish a transparency report when their tool is piloted or being deployed. However, we strongly recommend that teams begin to complete it whilst a tool is in development.


Our guidance provides the following advice on which tools are in scope (these criteria are sketched in code after the list):

To assess whether your tool is in scope, we would encourage you to reflect on the questions in the scoping criteria included below.
The Algorithmic Transparency Standard is most relevant for algorithmic tools that either:
  • Have a significant influence on a decision-making process with significant public effect, or
  • Directly interact with the general public.
Significant public effect can be interpreted broadly, similarly to the notion of decisions with “legal or similarly significant effect” in UK GDPR, e.g. you might want to consider whether usage of the tool could:
  • materially affect individuals, organisations, groups or populations?
  • have a legal, economic, or similar impact on individuals, organisations, groups or populations?
  • affect procedural or substantive rights?
  • impact eligibility for, receipt of, or denial of a programme?
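
To make the checklist concrete, one way to read these questions is as a simple scope check. The sketch below is illustrative only: the type and function names are made up for this newsletter and are not part of the official guidance.

from dataclasses import dataclass

# Hypothetical encoding of the scoping questions above. Names are invented
# for illustration; they are not part of the official guidance.
@dataclass
class ScopingAnswers:
    significant_influence_on_decisions: bool  # first scoping criterion
    interacts_with_general_public: bool       # second scoping criterion
    # Prompts for interpreting 'significant public effect':
    materially_affects: bool
    legal_or_economic_impact: bool
    affects_rights: bool
    affects_programme_eligibility: bool

def has_significant_public_effect(a: ScopingAnswers) -> bool:
    # Read broadly, echoing the UK GDPR notion of decisions with
    # 'legal or similarly significant effect'.
    return (a.materially_affects or a.legal_or_economic_impact
            or a.affects_rights or a.affects_programme_eligibility)

def standard_is_most_relevant(a: ScopingAnswers) -> bool:
    # Most relevant if the tool significantly influences a decision-making
    # process with significant public effect, or directly interacts with
    # the general public.
    return ((a.significant_influence_on_decisions
             and has_significant_public_effect(a))
            or a.interacts_with_general_public)
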
The Standard is part of a cohesive approach to AI and data governance across government, building on the vision set out in the National AI Strategy. This will include the publication of a White Paper on AI Regulation which will ensure that AI deployed across the economy is appropriately transparent and explainable, as set out in the government’s July policy statement.

Q: Is there anything else you’d like to share about the Standard or other CDEI algorithm governance initiatives?
Liz: The Centre for Data Ethics and Innovation (CDEI) leads the government’s work to enable trustworthy innovation using data and artificial intelligence (AI), supporting:
  • the National Data Strategy’s objective to unlock the power of data to drive economic growth and competitive advantage, benefitting citizens across the UK; and
  • the AI Strategy’s ambition to ensure that the governance of AI technologies encourages innovation, investment, and protects the public and our fundamental values.
Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
Subscribe
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.


AWO
Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom