Algorithm Governance Roundup #2
Community spotlight: Dr Gemma Galdon-Clavell from Eticas | Minimising discrimination in hiring practices with FINDHR
Welcome to December’s Roundup. As the festive season kicks off, please enjoy reading our interview with Dr. Gemma Galdon-Clavell from Eticas about algorithm audits, accountability tools, and the inaugural Algorithmic Accountability Conference.

As a reminder, we take submissions: we are a small team who select content from public sources. If you would like to share content, please reply to this email. Our only criterion for submission is that the update relates to algorithm governance, with emphasis on the second word: governance. We would love to hear from you!

Many thanks and happy reading!

AWO team

This Month's Roundup
In Europe, the Commission has launched the European Centre for Algorithmic Transparency. The Centre will contribute scientific and technical expertise to assist the Commission’s enforcement of obligations on Very Large Online Platforms and Search Engines under the Digital Services Act. These obligations include algorithmic accountability and audit. The Centre is hiring – see opportunities for more.

A new Horizon Europe project, FINDHR, will address software-related discriminatory effects in recruitment processes by developing open anti-discrimination methods, tools, and training. This interdisciplinary project is led by Universitat Pompeu Fabra and involves AlgorithmWatch and Eticas; academics at Radboud University, the Max Planck Institute for Security and Privacy, Universiteit van Amsterdam, Università di Pisa, and Erasmus Universiteit Rotterdam; non-profits Women in Development Europe+ and the Praksis Association; the European Trade Union Confederation; and industry members Randstad Nederland and Adevinta Spain.

In the UK, the CDEI is collaborating with techUK to develop case studies of AI assurance good practice to address the need for guidance. These case studies will demonstrate how organisations are using different AI assurance techniques, such as impact assessments and performance testing, in practice. Submissions are still open, and industry is invited to submit case studies for inclusion.

The Arts and Humanities Research Council, part of UK Research and Innovation (UKRI), has launched ‘Enabling a Responsible AI Ecosystem’, a project in collaboration with the Ada Lovelace Institute. The multidisciplinary project will draw on experts from the arts, humanities, and social sciences to create an AI environment that is responsible, ethical, and accountable by default. The Programme Directors will be University of Edinburgh professors Shannon Vallor and Ewa Luger.

In Chile, GobLab UAI has developed a binding regulation on algorithmic transparency for public agencies with the Chilean Transparency Council. A pilot scheme was run on seven automated systems across four public agencies, alongside public consultation and workshops with civil society organisations and public institutions. GobLab UAI has also developed standard terms for the procurement of automated systems for the Chilean Procurement Agency, and created a public sector algorithm registry.

In the US, a group of researchers has launched the Coalition for Independent Technology Research. It works to advance, defend, and sustain the right to study the impact of technology on society. The Coalition will promote funding for research, create standards and oversight, conduct advocacy, and convene communities of practice. It aims to build a coalition of academics, journalists, non-profit researchers, and community scientists. Become a member here.

The Ada Lovelace Institute published Inform, educate, entertain… and recommend? The report explores the ethics of recommendation systems used in public service media organisations, and how these organisations address the challenge of designing and implementing recommendation systems within the parameters of their mission.

The CDEI published Industry temperature check: barriers and enablers to AI assurance. The report summarises the key findings from its stakeholder engagement around an effective AI assurance ecosystem.

The European Union Fundamental Rights Agency published Bias in algorithms - Artificial intelligence and discrimination. The report investigates bias in the use of artificial intelligence in predictive policing and offensive speech detection, and calls for systems to be tested for biases that could lead to discrimination.

Towards Transparent and Explainable AI: Workshop on ISO/IEC Standards Development
In-person workshop at The Alan Turing Institute, London, 11 January 12:00-17:00.
Registration is open until 18 December.

The AI Standards Hub is hosting a workshop to shape two prominent international standards on AI explainability and transparency, which are currently being developed in ISO/IEC.

The ACM FAccT Conference is calling for papers for its 2023 conference, which will be held in Chicago from 12 to 15 June, with support for online participation.
The deadline for abstract submission is 30 January, and the deadline for paper submission is 6 February.

The Institute for Information Law and the DSA Observatory at the University of Amsterdam are calling for participation in a workshop on Researcher Access in the DSA, to be held on 15 March.
The deadline for application is 16 January.

The Ada Lovelace Institute is conducting research on the role of public participation in commercial AI labs. Lead researcher Lara Groves is looking to interview practitioners at AI labs involved in public engagement.
Interviews will be conducted until 13 January; reach out to express interest.

The European Centre for Algorithmic Transparency is hiring for roles in research, inspections, communications and community management, information security, and legal. The roles will be based in Seville, Ispra, or Brussels.
The deadline for applications is 9 January.

Community Spotlight: Dr Gemma Galdon-Clavell, Eticas
Dr. Gemma Galdon-Clavell is the founder and CEO of Eticas, an organisation based in Barcelona which works on the social, ethical, and legal impact of data-intensive technology. Eticas designs and implements practical solutions to data protection, ethics, explainability, and bias challenges in AI, including algorithmic audit.

AWO: Could you tell us about Eticas' algorithm audit work?

Gemma: Eticas has existed for 10 years. We initially worked on the social, legal, and ethical impact of technology and realised we wanted to start doing something about the issues we were researching. We started to explore algorithmic inspection through word of mouth, working in public-private partnerships to inspect systems. After three years of undertaking this work we published our first guide to algorithmic auditing. Whilst an audit is a dynamic process that must be adapted to the specific algorithm and context, we proposed five general stages:
  • preliminary study
  • mapping
  • analysis plan
  • analysis
  • audit report.
Our guiding principles were legal and ethical compliance, desirability, acceptability and proper data protection and management. This first guide was still quite high-level, as we were in the learning and testing stage.

Since then, our work has become more public. We are about to launch our second guide to algorithmic auditing which is a lot more technical, drawing on our audit experience. We have recently had interest from the European Data Protection Board and the European Commission to launch this second guide as a resource for policymakers, to provide them with a framework to audit systems.

We really want to foster and accelerate the implementation of regulation. This is why we have focused on the market, working with industry actors who need assistance in combatting algorithmic harms. Our advocacy and awareness-raising is a by-product of this work with industry.

AWO: Eticas is pioneering tools to promote algorithmic governance, such as algorithmic leaflets and certification. Could you tell us about these?

Gemma: At Eticas, we place a lot of emphasis on auditing because it is a particularly powerful way to look at a system from a socio-technical and end-to-end perspective. But we believe that audit is just one of the elements needed. Artificial Intelligence, like any other product or system, needs an ecosystem of regulation to ensure its safety. This is the same as any other innovation product or system: I often say that algorithmic audits are the seatbelts of cars, but we also need red lights and zebra crossings. Therefore, we are developing other tools that touch on different aspects of the algorithmic lifecycle.

We have developed algorithmic leaflets, premised on the idea that any algorithm with social impact should be published with a priori transparency information. This leaflet would be available to anyone, for example potential clients who want to buy the system or individuals who are impacted. We have recently teamed up with Adigital, The Spanish Association of the Digital Economy, to trial this process with four companies.

Another tool we have developed is the ‘Algo Score’ which takes inspiration from environmental scores for household appliances and nutritional scores on food packaging. These provide easily understandable information for an end-user or consumer to make a decision between different systems. At the moment, there are no incentives to reward actors who take the time to incorporate end-users and consider impacts in their development process, to market transparent and safer algorithmic systems. The ‘Algo Score’ rewards these actors, by encouraging end-users to choose safer systems.

A combination of these tools will ensure that AI is the same as any other innovation field — such as the food, automobile, or medical industry — where there are built-in precautions and guarantees. The ideology of Silicon Valley — the idea that regulation limits innovation — has been incredibly harmful. It has created a non-democratic state of affairs, since democracies are characterised by regulation, control and protection. As we all know, this lack of regulation has meant little protection for individuals.

AWO: What challenges have you faced doing this work? Do you have any advice for others who are working on similar initiatives in their respective countries?

Gemma: The main challenge faced by actors working in the algorithm governance space is getting someone to listen to them, but the tide has shifted. For example, the European Commission have realised the importance of algorithmic accountability and are proactively engaging with this issue. Industry actors are also trying their best and generally don’t want to see their best efforts result in harms, so they are willing to engage and improve.

Do not let the perfect be the enemy of the good. When we work on algorithmic accountability and opening up systems for inspection, we find that actors are often unaware of the bad practices going on. It is important to realise that some improvement is better than none.

Our community has been slow to build on each other’s work. Coming from academia, it can be harder to foster collaboration and conversations. This is also made difficult by the competitive process of applying for funding. However, our goal is to work with like-minded actors to change things for the better. We hope the algorithmic auditing space can develop a more uniform voice so that we can work easily with policymakers and industry. At Eticas we have a dynamic of collaboration: our work is conducted openly, and we ask for feedback and involvement from external organisations. We believe that unless we collaborate, we will not have the impact we want.

AWO: Speaking of collaboration, you recently hosted the first International Algorithmic Auditing Conference (available to watch here). Could you tell us about the conference and some key outcomes?

Gemma: We started working towards the Conference at the start of the Summer, as it became obvious that algorithmic auditing was gaining more attention. However, we noticed that people were saying similar things in slightly different ways and felt that this could confuse policy makers.

We had 15 speakers from across Europe and the US, and we also reached out to experts based in Latin America and Asia. Our speakers came from civil society, including Data & Society, the Ada Lovelace Institute, and Mozilla; academia; and consultancy organisations such as .AI and Babl. We wanted to invite a broad spectrum of people who knew the field well and spent time searching for new organisations and contacts. We went on to have policy meetings with MEPs Dragoș Tudorache and Ibán García del Blanco to discuss the Artificial Intelligence Act and share our work on audits.

The AIA requires each Member State to set up a national algorithmic agency. We are working with several public agencies to develop guidelines for these new bodies, building on the experience of GDPR enforcement, to ensure they can effectively enforce the AIA.

One unexpected and great outcome was how enthusiastic our speakers were to share space and debate algorithmic auditing issues with a community of fellow auditors and researchers. We are very eager to continue working together. For example, New York recently passed legislation mandating audits of hiring algorithms. The results from these audits are going to be published in January, and we will review them together and hopefully collaborate on a report. I have also been hired by several EU institutions to help them understand the auditing ecosystem, and I intend to circulate this work with the community for feedback and validation.

AWO: Is there anything else you’d like to share about the work Eticas has done or is coming up?

Gemma: Our specific impact continues to be in the EU. However, we have recently incorporated in the US, with the intention of sharing our work with policymakers, civil society, academics, and industry. We hope to bridge communications between the EU and US on algorithmic accountability, by sharing our European work with policymakers and continuing to build community with US organisations. We have also received funding to work in Latin America and Africa.

Thank you for reading. If you found it useful, forward this on to a colleague or friend. If this was forwarded to you, please subscribe!

If you have an event, interesting article, or even a call for collaboration that you want included in next month’s issue, please reply to this email. We would love to hear from you!
You are receiving Algorithm Governance Roundup as you have signed up for AWO’s newsletter mailing list. Your email and personal information is processed based on your consent and in accordance with AWO’s Privacy Policy. You can withdraw your consent at any time by clicking here to unsubscribe. If you wish to unsubscribe from all AWO newsletters, please email

AWO
Wessex House
Teign Road
Newton Abbot
TQ12 4AA
United Kingdom
Powered by EmailOctopus