In the EU, the European Parliament has
adopted its final version of the AI Act. It includes a prohibition on real-time remote biometric identification and a tiered approach for general purpose AI, with stricter obligations for the foundation layer, and labelling and disclosure obligations for the application layer. The Verge
examines these obligations in detail.
A Time investigation has
revealed that the CEO of OpenAI lobbied for general purpose AI systems such as ChatGPT not to be designated as ‘high risk’, despite publicly advocating for global AI regulation. It also revealed that OpenAI proposed several amendments that appear in the European Parliament version of the Act.
The European Commission’s Digital Europe Programme has
launched four testing and experimental facilities ("TEFs") to test the risks and impacts of AI applications before they are launched. The TEFs have been introduced to support the EU’s AI Act, which calls for regulatory sandboxes. Each TEF has a specific focus: machine learning software and robotics, health and patient data, food production, and power and mobility in cities.
The European Union and the US have
published the EU-US Terminology and Taxonomy for Artificial Intelligence. The document contains definitions of key terms relating to AI and governance.
In the UK, the Department for Science, Innovation and Technology has
launched a Portfolio of AI Assurance Techniques. The portfolio features a range of case studies of AI assurance techniques. Organisations can submit case studies to the portfolio at
ai.assurance@cdei.gov.uk.
The national funding agency UK Research and Innovation has
invested £31 million in a new consortium, Responsible AI UK. The consortium aims to create a UK and international research and innovation ecosystem for responsible and trustworthy AI, and is led by the University of Southampton.
Big Brother Watch and AWO’s complaint to the ICO about British supermarkets’ use of Facewatch facial recognition technology to combat shoplifting
resulted in changes to Facewatch’s policies. Changes include more signage, only sending alerts about repeat offenders, and only sharing information about serious and violent offenders with other stores.
In the US, the Senate majority leader Chuck Schumer has
introduced the SAFE Innovation Framework for AI. The framework aims to promote accountability by introducing explainability obligations, to protect employment, and to secure AI systems against cyber attacks. A
Time article critiques this approach and the technical feasibility of AI explainability.
Senators also plan to
introduce a bipartisan bill to clarify that generative AI will not benefit from legal immunity under Section 230 of the Communications Decency Act.
The Federal Trade Commission has
published guidance on the competition concerns of generative AI. The guidance explains the essential technical building blocks of generative AI and examines the competition concerns, which include incumbents leveraging control of inputs and of adjacent markets such as cloud computing.
The National Artificial Intelligence Advisory Committee (NAIAC)
published its first-year report. Several members of the Committee
published their perspective on the report, arguing it missed the opportunity to articulate a people-first, rights-respecting AI strategy. The NAIAC also
hosted expert briefing sessions with civil society groups.
Mozilla has
launched the Open Source Research and Investigations team to investigate platform algorithms using crowdsourced data. The team will initially investigate TikTok’s ‘For You’ page algorithm and platform integrity amid elections outside the US and in non-English-speaking countries.
OpenAI has been
sued in two class action lawsuits. The first
alleges OpenAI violated privacy rights when it scraped data from the internet to train ChatGPT. The second
alleges OpenAI infringed copyright law when it scraped data from online book excerpts without the authors’ consent.
In Chile, the Government will introduce the General Instruction on Algorithmic Transparency. This instruction will require public bodies to comply with the algorithmic transparency standard. The Observatory of Public Sector Innovation
explores the development of this standard.