What’s this all about?
The EU Artificial Intelligence Act – the so-called EU AI Act – comes into force on 1 August 2024. It will change the way in which AI is regulated across Europe, and it has extraterritorial effect too. But what will the new Act do, and how should organisations prepare?
What is the EU AI Act?
The EU AI Act was passed by the European Parliament on 13 March 2024 and formally adopted by the EU Council on 21 May 2024. It was published in the Official Journal on 12 July 2024, and comes into force 20 days following publication.
The EU AI Act takes the form of an EU Regulation, which means it applies directly in each Member State without the need for national implementing legislation. It aims to ensure that AI systems placed on the EU market and used in the EU are safe. The EU claims that the EU AI Act is the first comprehensive legal framework on AI worldwide.
What’s the EU approach?
The first thing to say is that, even before the passing of the EU AI Act, AI was not completely unregulated in the EU. There has already been enforcement activity against AI under GDPR, including:
- The Italian Data Protection Authority’s ban on the Replika AI chatbot;
- Google’s temporary suspension of its Bard AI tool rollout in the EU after intervention from the Irish data watchdog;
- Italian DPA fines for Deliveroo and a food delivery start-up over their use of AI algorithms;
- Fines for Clearview AI under GDPR, including from the Italian, French and Greek DPAs.
The EU’s regulatory approach in the EU AI Act is risk-based. According to the EU, the risk tiers are as follows:
- Minimal risk – Most AI systems present only minimal or no risk to citizens’ rights or safety. There are no mandatory requirements, but organisations may voluntarily commit to additional codes of conduct for these systems if they wish. Minimal risk AI systems generally perform simple automated tasks with no direct human interaction, such as an email spam filter.
- High-risk – AI systems identified as high-risk will be required to comply with strict requirements, including: (i) risk-mitigation systems; (ii) high-quality data sets; (iii) logging of activity; (iv) detailed documentation; (v) clear user information; (vi) human oversight; and (vii) a high level of robustness, accuracy and cybersecurity.
Providers and deployers will be subject to additional obligations regarding high-risk AI. Providers of high-risk AI systems (and of the GPAI models discussed below) established outside the EU will be required to appoint an authorised representative in the EU in writing. In many respects this is similar to the Data Protection Representative (DPR) provisions in GDPR. There is also a registration requirement for high-risk AI systems under Article 49.
Examples of high-risk AI systems include:
(a) some critical infrastructures, for example, for water, gas and electricity;
(b) medical devices;
(c) systems to determine access to educational institutions or for recruiting people; or
(d) some systems used in law enforcement, border control, administration of justice and democratic processes.
Biometric identification, categorisation and emotion recognition systems are also considered high-risk.
There are some exemptions for AI systems which would ordinarily be high-risk, but where these exemptions apply there is still a record-keeping requirement, a little like the DPIA process under GDPR. It will be important to have proper assessment tools in place to help record this assessment, as it must be produced to a regulator on demand.
- Unacceptable risk – AI systems considered a clear threat to the fundamental rights of people will be banned outright 6 months after the Act enters into force. This includes:
- AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance to encourage dangerous behaviour in minors, systems that allow so-called “social scoring” by governments or companies, and some applications of predictive policing;
- Some uses of biometric systems, for example emotion recognition systems used in the workplace, some systems for categorising people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, subject to some narrow exceptions.
- Specific transparency risk – Also called limited risk AI systems, these must comply with transparency requirements. When AI systems such as chatbots are used, users need to be aware that they are interacting with a machine. So-called “deep fakes” and other AI-generated content will have to be labelled as such, and users will have to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated.
What is a risk-based approach?
The higher the risk of harm to society, the stricter the rules. The European Commission’s materials accompanying the Act set this out in a diagram.
What about General Purpose AI?
The EU AI Act introduces dedicated rules for so-called “general purpose” AI (GPAI) models aimed at ensuring transparency. Generally speaking, a “general purpose AI” system is one intended by its provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others. For very powerful models that could pose systemic risks there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation, and adversarial testing – a bit like red teaming to test for information security issues. These obligations will be put into practice through codes of practice developed by a number of interested parties.
What is systemic risk?
Systemic risk:
- Is risk specific to the high-impact capabilities of general purpose AI models;
- Has a significant impact on the EU market due to reach or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole;
- Can be propagated at scale.
Broadly speaking, there are two categories of GPAI: conventional GPAI and systemic risk GPAI. There are specific requirements for providers of GPAI models, and additional, more rigorous, requirements for providers of GPAI models with systemic risk, for example extra assessment and reporting obligations. GPAI models can be integrated into a wide variety of systems and processes to conduct a wide variety of tasks. The additional requirements seek to address the concern that highly powerful models could cause negative effects, such as disruption of critical sectors, harm to public health, and the dissemination of illegal or false content.
What about enforcement?
National so-called “market surveillance authorities” (MSAs) will supervise the implementation of the EU AI Act at the national level. Member States are to designate at least one MSA and one notifying authority as their national competent authorities, and must appoint their MSAs before 2 August 2025. It is by no means guaranteed that each Member State will appoint its DPA as the in-country MSA, but the European Data Protection Board pushed for them to do so at its plenary session in July 2024.
In addition to in-country enforcement across the EU, a new European AI Office within the European Commission will coordinate matters at the EU level and will supervise the implementation and enforcement of the EU AI Act for general purpose AI models. For GPAI models, the European Commission, and not individual Member States, has sole authority to oversee and enforce the rules. The newly created AI Office will assist the Commission in carrying out these tasks.
In some respects, this system mirrors the current regime in competition law with in-country enforcement together with EU co-ordination. But this could still lead to differences in enforcement activity across the EU as we’ve seen with GDPR, especially if the same in-country enforcement bodies have responsibility for both GDPR and the EU AI Act.
Might I be subject to dawn raids?
Yes, in certain circumstances. The first is in relation to testing high-risk AI systems in real-world conditions. Under Article 60 of the Act, MSAs will be given powers to conduct unannounced inspections, both remote and on-site, to carry out checks on that type of testing.
The second is that competition authorities may perform dawn raids as a result of this Act. MSAs will report annually to national competition authorities any information identified in their market surveillance activities that may be of interest to them. Competition authorities have had the power to conduct dawn raids under antitrust laws for many years, and they might therefore conduct raids based on information or reports received under this Act.
What are the penalties for non-compliance?
When a national authority or MSA finds that an AI system is not compliant, it has the power to:
- Require corrective actions to make that system compliant;
- Withdraw, restrict, or recall the system from the market.
The Commission may request the same actions to enforce GPAI compliance.
Non-compliant organisations can be fined under the new rules. In each case the fine can be up to the higher of a fixed sum or a percentage of global annual turnover for the preceding financial year:
- €35 million (around US$38 million at today’s rate) or 7% of global annual turnover for violations of banned AI applications;
- €15 million (around US$16 million at today’s rate) or 3% for violations of other obligations, including rules on general purpose AI models;
- €7.5 million (around US$8 million at today’s rate) or 1.5% for supplying incorrect, incomplete, or misleading information in reply to a request.
By way of illustration, a company with €1 billion in global annual turnover could face a fine of up to €70 million (7% of turnover) for a banned AI application, as that exceeds €35 million.
Lower thresholds will apply for SMEs, and the higher thresholds for other companies.
What is the AI Office?
The European AI Office was established by the Commission earlier this year as a new EU-level regulator. It was set up with the aim of being the centre of AI expertise and the foundation for a single EU AI governance system, and will support and collaborate with Member States and experts. The European AI Office will also seek to facilitate uniform application of the AI Act across Member States. Despite its name, the remit of the Office is the EU, not the whole of Europe.
The AI Office will monitor, supervise, enforce, and evaluate compliance with the EU AI Act’s GPAI requirements across Member States. It is also the body that will produce the Codes of Practice for GPAI.
The Commission has granted the AI Office powers to conduct evaluations of GPAI models, investigate possible infringements of the GPAI rules, request information from model providers, and apply sanctions.
The AI Office will also act as Secretariat for the AI Board and convene meetings.
What is the EU AI Board?
The EU AI Board was established to support and facilitate the implementation of the EU AI Act and to assist the AI Office. The Board is composed of one representative per Member State and will be responsible for advisory tasks such as issuing opinions and recommendations and providing advice to the Commission and Member State authorities. In some respects, then, the EU AI Board mirrors the functions of the EDPB in GDPR enforcement.
What about data protection/privacy?
The relationship between AI regulation and data privacy regulation is important for a number of reasons, one of the most significant being that AI systems receive and use vast data inputs across their lifecycle, and a significant amount of that data may be personal data.
The EU AI Act will run alongside existing EU data protection rules including GDPR. While GDPR does not explicitly mention AI, the EU AI Act does consider the relationship between AI and data privacy, stating that the Act is without prejudice to existing EU law on data protection.
As well as the cases mentioned above, there is a significant volume of guidance from EU data protection authorities which will also need to be taken into account when designing or implementing an application featuring AI. An influential group of German data protection authorities, the Datenschutzkonferenz (or DSK), has already expressed concerns about issues like the allocation of responsibilities, and we may see conflicts between the new Act and GDPR.
Does the EU AI Act have extraterritorial reach?
Yes, its extraterritorial application is quite similar to that of the GDPR. The EU AI Act may affect organisations in the UK and elsewhere, including the US. Broadly, the EU AI Act will apply to organisations outside the EU if their AI systems or AI-generated output are on the EU market, or if their use affects people in the EU, directly or indirectly.
For example, if a US business’s website has a chatbot function which is available for people in the EU to use, that US business will likely be subject to the EU AI Act. Similarly, if a non-EU organisation does not provide AI systems to the EU market but does make AI-generated output available to people in the EU (such as media content), that organisation will be subject to the Act.
The UK, the US, China and other jurisdictions are addressing AI issues in their own particular ways.
What about the UK?
The UK government published its white paper on its approach to AI regulation in March 2023, setting out a proposed “pro-innovation” regulatory framework for AI, and subsequently held a public consultation on the proposals. The government response to the consultation was published in February 2024. Since then, however, the UK Government has changed, and we have seen its position on AI change too. The position of the new Labour Government was set out in the King’s Speech in July 2024, with the new Government saying it would “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”
The new Government will also set up a new Regulatory Innovation Office which will look at the challenges of AI and support existing regulators including the Information Commissioner’s Office and the Competition and Markets Authority in using their existing powers to regulate AI. We don’t yet know the shape of the new AI law (and no draft Bill was referred to in the speech) but this could be a simplified version of the EU AI Act.
What happens next?
The EU AI Act was published in the Official Journal on 12 July 2024, enters into force 20 days after publication (1 August 2024), and will become fully applicable two years after that, apart from some specific provisions. Prohibitions will apply after 6 months, and most of the rules on GPAI will apply after 12 months. The timetable looks like this:
- 1 August 2024 – the Act enters into force;
- 2 February 2025 – prohibitions on unacceptable risk AI systems apply;
- 2 August 2025 – most rules on GPAI models apply;
- 2 August 2026 – the Act becomes generally applicable.
What is the AI Pact?
Before the EU AI Act becomes generally applicable, the European Commission will launch a voluntary so-called “AI Pact” aimed at bringing together AI developers from Europe and around the world to commit, on a voluntary basis, to implement key obligations of the EU AI Act ahead of the legal deadlines. The European Commission has said that over 550 organisations responded to the first call for interest in the AI Pact, but whether that leads to widespread adoption remains to be seen. In July, the Commission shared draft details of the AI Pact with a select group, outlining a series of voluntary commitments. The Commission is currently aiming to launch the AI Pact in October 2024.
Summary
Legal issues concerning AI are not new, and we are already seeing issues come to the fore, including through litigation such as the well-publicised ChatGPT cases involving hallucinated case-law. Organisations should consider reviewing what they are doing about AI in the workplace and, at the very least, set out the dos and don’ts for their employees. It is also wise to develop a formal process to look at issues like fairness and transparency, both to meet existing legal obligations and to help comply with the new EU AI Act once it comes into force.
What can I do to prepare for AI regulation?
Organisations should start looking at the impact that this Act may have on their operations and governance.
The first step an organisation can take is to review the current position. Pertinent questions include: Are we currently using any AI systems? Are we planning to use any AI systems? Do we have any existing policies and procedures that are relevant?
Organisations can then conduct compliance gap analyses to identify the key issues to address and identify the key business areas or activities that will be affected.
Resources
You can read the EU AI Act (Regulation (EU) 2024/1689) on EUR-Lex: https://eur-lex.europa.eu/eli/reg/2024/1689/oj.
Our summary of the King’s Speech and likely changes to the UK Government’s stance on AI is here: https://bit.ly/pskingsspeech.
The EDPB statement on DPAs as MSAs is here: https://bit.ly/4bTcLnZ.
There is more information on AI risk generally in the New York State Bar Association’s AI Task Force report here: Legal Profession Impact - Ethics (nysba.org). There’s a summary of that report in a Fordham University podcast with Jonathan Armstrong here: https://bit.ly/4d5VHMq.
For more information please contact Jonathan Armstrong at Punter Southall Law.
Jonathan Armstrong
Partner
jonathan.armstrong@puntersouthall.law
Tel: +44 20 3327 5300