Algorithm Governance Roundup #4
Community Spotlight: Matthias Spielkamp, AlgorithmWatch | Deadline for VLOP designation
Welcome to February’s Roundup. We hope you enjoy this month’s community spotlight with Matthias Spielkamp, executive director and co-founder of AlgorithmWatch, about their work on automated decision-making systems and a new project on auditing for systemic risks under the Digital Services Act.

As a reminder, we take submissions for the newsletter: we are a small team who select content from public sources. If you would like to share content, please reply or send a new email to algorithm-newsletter@awo.agency. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you!


Many thanks and happy reading!


AWO team

This Month's Roundup
In Europe, the first transparency reports for the Code of Practice on Disinformation have been published.

The European Commission has published Guidance on the requirement to publish user numbers under the DSA. Online platforms with over 45 million average monthly active users in the European Union will be designated as very large online platforms (VLOPs) under the DSA. Intermediaries had to declare their user numbers by 17 February. Ultimately, the European Commission is responsible for designation, regardless of the figures intermediaries declare.
Politico has created a bar chart of the biggest platforms' declared monthly active users. Martin Husovec has crowdsourced a detailed table of intermediaries' claims.

In Germany, AlgorithmWatch has launched a new project on auditing algorithms for systemic risk as required by the Digital Services Act. The project intends to develop definitions, procedures and best practices.
In this month's Community Spotlight, we interview Matthias Spielkamp, executive director and co-founder of AlgorithmWatch, about this new project, their work on automated decision-making systems in the public sector and their reporting fellowship.


In Spain, the Ministerio de Asuntos Económicos y Transformación Digital (Ministry of Economic Affairs and Digital Transformation) is leading the development of the AI regulatory and ethical framework. The framework includes a regulatory sandbox, created last year to support the implementation of the AI Act. The framework will also establish a National Agency for Artificial Intelligence Oversight, an Observatory on the social and ethical impact of algorithms, and a Trustworthy AI Seal. AlgorithmWatch has analysed the current plans for the oversight agency.


In the United Kingdom, Parliament's Science and Technology Committee held an oral evidence session for its inquiry into the Governance of Artificial Intelligence. Stakeholders from academia and industry presented evidence.


In Brazil, the Senate Committee has published a draft AI Law and Report (full report in Portuguese). The draft Law takes a risk-based approach and prohibits public bodies from deploying social scoring and biometric identification systems. It places requirements on providers and users of AI systems, including obligations to conduct risk assessments and establish governance structures. It also gives individuals a number of rights, such as the rights to information, to explanation, and to challenge automated decisions. Bruna Santos provides further detail on Twitter.


In the US, New York City's Department of Consumer and Worker Protection (DCWP) has held a hearing on legislation introduced to mandate bias audits of automated employment decision tools (AEDTs). HolisticAI has analysed the impact ratio metrics for bias audits prescribed by the DCWP. Meanwhile, lawmakers in New Jersey have also introduced a Bill to require bias audits of hiring software.
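As background for readers less familiar with these audits, an impact ratio of this kind typically compares the rate at which a tool selects (or favourably scores) candidates in one demographic category against the most favoured category. The sketch below is purely illustrative and is not drawn from the DCWP rules or the HolisticAI analysis; the data and function names are our own assumptions.

```python
# Illustrative sketch of a selection-rate impact ratio, a metric commonly used
# in bias audits of hiring tools. Hypothetical data and function names; consult
# the DCWP rules for the legally prescribed calculation.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a category who were selected by the tool."""
    return selected / applicants

def impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each category's selection rate divided by the highest selection rate."""
    rates = {cat: selection_rate(sel, total) for cat, (sel, total) in counts.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: (selected, total applicants) per category.
print(impact_ratios({"group_a": (48, 120), "group_b": (30, 110)}))
# -> group_a: 1.0, group_b: ~0.68 (group_b is selected about 0.68x as often)
```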

Research
Article 19 has published the Internet Standards Almanac. It includes a guide to the Standards Development Organisations working on Artificial Intelligence, with details of publicly available standards, reports and working groups.

The Guardian and the Pulitzer Center's AI Accountability Network have published 'There is no standard': investigation finds AI algorithms objectify women's bodies. The investigation finds that AI tools rate photos of women as more sexually suggestive than comparable photos of men, especially when they show nipples, pregnant bellies or exercise.


The Royal Society has published From privacy to partnership: the role of Privacy Enhancing Technologies (PETs) in data governance and collaborative analysis (PDF), which was undertaken in close collaboration with the Alan Turing Institute and contains six use cases illustrating PETs in practice. It also makes eight recommendations (many directed towards the UK Government) for the safe development and adoption of PETs, building on the Society's 2019 publication Protecting privacy in practice.


Stiftung Neue Verantwortung has published Auditing Recommender Systems: Putting the DSA into practice with a risk-scenario-based approach. This paper proposes a risk scenario-based audit process to operationalise the recommender systems audit required by the DSA.


A researcher at the Technology and International Affairs Program at the Carnegie Endowment for International Peace has published The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist. This article questions the technical feasibility of creating robust and specific standards for AI, as relied upon in the EU’s Artificial Intelligence Act.


Researchers at the University of Montréal have published Assessing Impacts of AI on Human Rights: It’s not solely about Privacy and Non-Discrimination. This article considers how algorithmic systems impact a range of human rights and the opportunity for Human Rights Impact Assessments to be implemented in the regulation of algorithmic systems.

Opportunities
At AWO, we are expanding our team and hiring a Senior Associate for our Strategic Research and Insight team. This is a generalist role but may include work on algorithm governance. Please read the posting or forward it to anyone who you think will be interested!
The application deadline is 05 March.

OpenAI is offering subsidised access to their API for research on fairness and representation, alignment, and sociotechnical research, amongst other areas. Researchers can apply through their Researcher Access Program Application.
Upcoming Events
Regulating AI in Europe: Between Standardization and Fundamental Rights
Hybrid in-person and online: IE Tower, Madrid, 06 March, 14:00 CET
Hosted by the IE University Law School’s Lawtomation Jean Monnet Centre. This talk considers how to best regulate artificial intelligence for citizens, businesses and innovation.

Community Spotlight: Matthias Spielkamp, AlgorithmWatch
Matthias Spielkamp is the co-founder and executive director of AlgorithmWatch, a non-profit organisation based in Berlin that conducts research and advocacy on automated decision-making systems.

Q: Can you tell us about AlgorithmWatch’s history?
Matthias: AlgorithmWatch was officially launched at the German digital issues conference Re:publica in the summer of 2016. We wanted to build a watchdog for automated decision-making systems. We felt such an initiative was necessary following an (unsuccessful) grant proposal we wrote, which aimed to investigate predictive policing systems being rolled out across Europe.


Our work is focused on four issue pillars.

1. Algorithmic systems in the public sector.

2. Algorithmic systems in the public sphere, for example, platform regulation.

3. Algorithmic systems in the workplace.
We are focused on “people analytics” and performance monitoring systems. We have not yet had capacity to engage in the platform workers discussion but hope to do so given our work on monitoring systems.

4. Algorithmic systems and sustainability.
We are working on the sustainability implications of producing algorithmic systems, including energy consumption and labour conditions. Consider, for example, the recent revelations about workers in Kenya who were exposed to horrific content whilst labelling data for ChatGPT; we believe this aspect has been under-reported and under-regulated. The EU's Artificial Intelligence Act is completely silent on these issues.

Q: Can you tell us about AlgorithmWatch’s work on automated decision-making systems and the public sector?
Matthias: Often, private companies are more in the public eye than the public sector, except for policing and security. Despite this, the public sector is highly relevant for people: you simply can't escape it. Generally, public administration is characterised by strong accountability because bodies have legal responsibilities. For example, they must follow well-defined procedures and withstand the test of law. However, complex algorithmic systems can lead to a lack of accountability: they are black boxes, too complex and resource-intensive to understand easily.


We have seen examples where mistakes by automated decision-making systems in the public sector have led to dire consequences. For example, in Australia a software system erroneously sent debt notices demanding repayment from thousands of people. Similarly, a Dutch system used to detect childcare benefit fraud wrongly claimed that debts were owed, with truly awful consequences for the people affected.


German public administration suffers from a low level of digitisation, which means there are few large-scale systems in which things can go wrong. This is a result of our federated structure and is the subject of intense political debate. It can be quite amusing; the extent of our digitisation is, for example, our employment agency deploying machine learning to scan and OCR paper documents sent in for child support payments. Whilst we want to see increased digitisation, we see the current situation as an opportunity to learn from other countries and build in accountability from the beginning.


To ensure accountability, we propose mandatory impact assessments and a public register for algorithmic systems in the public sector. We have published a full impact assessment methodology. It is a two-stage process.


The first step is the automated decision-making triage. All systems must be assessed for whether they send out risk signals at all. Some complex systems may not be risky and will not require the full impact assessment. This step should not be limited to complex systems because we want to avoid claims that a system is not AI; simple systems can pose a risk to rights.


If a system does send out risk signals, it is necessary to complete a detailed impact assessment. The output of this assessment is the transparency report which must be included in the public register. This way, we use transparency as a tool rather than an end in itself.


Q: The EU's Artificial Intelligence Act adopts a risk-based approach. Can you tell us about your work on this legislation?
Matthias: We are dissatisfied with the Artificial Intelligence Act. The conceptualisation is problematic: the Act categorises a system's risk by its intended use. Whilst it may make sense to define broad categories of risk, such as systems in the workplace, we believe it is necessary to look at a complex system's application and implementation to identify its risk profile. This is the approach described in our impact assessment methodology.


We are also very concerned about the self-assessment approach. The Act asks companies to self-assess whether they conform with certain standards and, once they decide they do, the system becomes difficult to challenge. These conformity criteria are not yet defined and will be based mainly on technical standards, which is not a good approach to protecting fundamental rights.


We are working with a coalition of other CSOs, including EDRi, Access Now and ECNL, to advocate for a better approach in the European Parliament and at the Member State level. We are holding onto our broader perspective whilst advocating for specific amendments. For example, I am talking to German parliamentarians about our demand to outlaw biometric identification systems. The current government’s coalition treaty says they will demand a prohibition, but this has not yet materialised. We are also pushing to introduce provisions concerning labour relations and energy consumption of these systems.


Q: Can you tell us about your new project on auditing algorithms for systemic risks?
Matthias: This project has two elements: research and advocacy. Firstly, our research is concerned with how to define systemic risk, a key concept that is the subject of platforms' risk assessments and audits under the EU's Digital Services Act. The overall aims of the project are:
  • To develop an idea of what systemic risks are.
  • To identify how platforms conceptualise systemic risks.
  • To identify which systemic risks platforms ask auditors to audit.
  • To review completed systemic risk audits.
We are right at the beginning of this process, as the first audit reports are not due until Autumn 2024 at the earliest. We are currently working on our first discussion paper. We hope our research will start a conversation in civil society about how to conceptualise systemic risks and develop criteria, indicators, and auditing rules.

Our advocacy in this area aims to create a formal role for civil society and academia when the DSA is implemented into national law in Germany. We are involved in an international initiative, led by the Frank Bold Society, which is mapping the implementation of the DSA. Coalition members will then be able to leverage best practices from Germany across Europe and vice versa.


Q: You recently announced your first fellowship in algorithmic reporting. Can you tell us a bit more about this?
Matthias: From AlgorithmWatch's creation, we hoped to be a newsroom reporting on automated decision-making across Europe. This is a difficult task because of the variety of languages and contexts across Europe, which, unlike in the United States, prevents the formation of a true shared public sphere. On top of this, it is difficult to receive funding for journalism in Germany because it is not categorised as a tax-exempt, non-profit activity. Despite this, we have always hoped to increase our reporting capacity and are excited to have received funding to create the fellowship.


Our journalism department is led by my colleague Nicolas Kayser-Bril. The fellowship was open to anyone with a keen interest in reporting on algorithmic systems, not just journalists. The fellows will use the fellowship money to do journalistic research, and we will provide professional guidance and editorial support. This will be useful for the public, highlighting where justice, equity and fairness are impacted by algorithmic systems. We also hope it will push the concept of algorithmic accountability reporting within the journalistic community. We announced our first fellows in January and will begin to publish their work in the coming months.
Thank you for reading. If you found this useful, forward it on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply or email us at algorithm-newsletter@awo.agency. We would love to hear from you!
Subscribe
You are receiving Algorithm Governance Roundup as you have signed up for AWO's newsletter mailing list. Your email and personal information are processed based on your consent and in accordance with AWO's Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email privacy@awo.agency.


AWO
Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom
Powered by EmailOctopus