The EU AI Act is a proposed regulation from the European Union (EU) that seeks to regulate the use of artificial intelligence (AI) through a risk-based approach. The European Commission (EC) proposed the first-of-its-kind draft legislation in 2021, and with the rest of the policy world turning its attention to AI and how to regulate it, EU lawmakers are hoping to speed up the process, conclude trilogue negotiations, and vote on the bill by the end of the year. Below are the highlights of the draft legislation and what to expect moving forward.
Risk categories
Members of the European Parliament (MEPs) outlined four categories for classifying AI systems based on the level of risk a given tool poses (a short illustrative sketch follows the list below). AI systems with limited or minimal risk, like spam filters or video games, may be used with few requirements beyond transparency obligations. Systems deemed to pose an unacceptable risk, like government social scoring or real-time biometric identification systems in public spaces, are prohibited with few exceptions.
1. Unacceptable Risk: AI systems considered a clear threat to people's safety, livelihoods, or rights are banned.
2. High Risk: AI used in areas like transportation, education, justice, and product safety. Before these AI technologies hit the market, they need to meet strict conditions such as a proper risk assessment, high data quality, and adequate documentation for oversight and user clarity. Remote biometric identification systems fall into this category and face similarly stringent requirements. It's important to note: if an AI system is a safety component of a product, or is itself a product, covered by the EU legislation listed in Annex II, and that product requires a third-party conformity assessment under that legislation, it is considered high risk. Such systems need an assessment not just before sale but regularly throughout their use.
3. Limited Risk: AI systems that must be transparent about their operation. For example, when chatting with an AI like ChatGPT, users should know they're speaking to a machine, not a human. The AI should also disclose when content is AI-generated, avoid producing illegal content, and publish summaries of the copyrighted data used to train it.
4. Minimal Risk: Mostly harmless AI applications, like video games or image filters. Most AI systems will fit into this relaxed category.
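To make the tiering concrete, here is a minimal illustrative sketch of how a compliance team might triage systems into these four tiers. The triage mapping and use-case labels are hypothetical examples drawn from the draft's cited cases, not a mechanism defined in the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment before and during use"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical triage of example use cases onto the Act's four tiers,
# based on the examples mentioned in the draft text.
EXAMPLE_TRIAGE = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time public biometric ID": RiskTier.UNACCEPTABLE,
    "exam-scoring system (education)": RiskTier.HIGH,
    "safety component of a vehicle": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "video game AI": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_TRIAGE.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```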
Conflicting definitions
Defining AI proved a tough task even before the start of trilogues (the negotiating meetings where representatives of the EC, Parliament, and the Council of the EU work toward common ground on legislative proposals). While AI's loose definition has driven the app economy's continued investment and rapid growth, it has been a headache for lawmakers who need clarity as they draft, pass, and eventually implement this regulation. The European Commission took its own approach to defining AI, placing the definition in an annex, but the Council of the EU and Parliament moved it front and center. Parliament aligned the 'AI system' definition with the one used by the Organisation for Economic Co-operation and Development (OECD) and introduced terms like 'deployers' for AI users, along with definitions for 'affected persons,' 'foundation model,' and 'general purpose AI system.'
One of the biggest challenges during trilogues will be balancing the various definitions of AI with interpretations of high-risk systems. If the definition of AI is too broad, it could sweep in software as simple as basic calculators and stifle the growth of AI; if it's too narrow, it could undermine the law's effectiveness. As for high-risk systems, if the law focuses too heavily on AI systems deemed 'high risk,' the app economy may miss out on the potential benefits those systems could bring.
For example, if basic AI models are labeled high risk without regard to how they're used, their positive impacts might be overlooked; meanwhile, an approach that is too general could leave consumers less protected from specific AI applications. With the technology changing so quickly, the law needs to be future-proof, and precise language around AI must be settled well ahead of enforcement.
SME considerations
The European Commission defines SMEs as entities engaged in economic activity, including the self-employed and family businesses, with fewer than 250 employees, a turnover below €50 million, and/or a balance sheet total below €43 million. The category further breaks down into 'small' and 'micro' enterprises based on employee headcount and financial thresholds.
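To make those thresholds concrete, here is a minimal illustrative sketch of the classification in code. The 'small' and 'micro' cut-offs (fewer than 50 employees / €10 million, and fewer than 10 employees / €2 million, respectively) come from the Commission's SME Recommendation 2003/361 rather than the AI Act text itself:

```python
def classify_enterprise(staff: int, turnover_m: float, balance_sheet_m: float) -> str:
    """Classify an enterprise per the EC SME definition (Recommendation 2003/361).

    Financial figures are in millions of euros. An entity fits a size band
    if it meets the staff ceiling and at least one financial ceiling.
    """
    def fits(staff_cap: int, financial_cap: float) -> bool:
        return staff < staff_cap and (turnover_m <= financial_cap
                                      or balance_sheet_m <= financial_cap)

    if fits(10, 2):    # micro: < 10 staff, <= €2m turnover or balance sheet
        return "micro"
    if fits(50, 10):   # small: < 50 staff, <= €10m turnover or balance sheet
        return "small"
    # medium: < 250 staff, <= €50m turnover or <= €43m balance sheet
    if staff < 250 and (turnover_m <= 50 or balance_sheet_m <= 43):
        return "medium (SME)"
    return "large (outside SME definition)"

print(classify_enterprise(staff=40, turnover_m=8.0, balance_sheet_m=12.0))    # small
print(classify_enterprise(staff=240, turnover_m=60.0, balance_sheet_m=40.0))  # medium (SME)
```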
Beyond defining SMEs, the AI Act offers resources for compliance, including advice, financial aid, and representation. There is also an exemption for free and open-source AI components, unless they are used in high-risk systems, aimed specifically at aiding SMEs, startups, and academics. Another resource SMEs can benefit from is regulatory sandboxes: controlled environments where developers can test their products without being subject to fines or penalties for violations. While participation is voluntary, SMEs and startups get free priority access.
Although SMEs aren't excluded from the requirements around high-risk use cases, they do get some leniency. SMEs are exempt from certain consultation requirements during assessments, and while they must provide technical documentation proving compliance, the guidelines are more flexible and make compliance considerably easier for smaller enterprises.
Before any high-risk system can be used in the EU, it must undergo a conformity assessment. The AI Act mandates that SMEs' financial constraints be considered when setting third-party conformity assessment fees. Unfortunately, despite this stipulation, SMEs still face significant compliance costs, potentially adding an overhead of up to 17 per cent on AI spending in the EU.
While we’re thrilled to see SMEs receive recognition and support within the current draft, achieving compliance for small and large enterprises alike demands substantial resources, and we hope to see further SME accommodations in the final draft of the EU’s AI Act.
What’s next?
After the European Parliament made several changes to the AI Act in June, trilogue negotiations began between the European Commission, the Council, and the European Parliament. These trilogues center on issues including AI's role in copyright, the use of facial recognition, how personal data may be used for AI in the public good, required testing environments for new AI, and how certain high-risk AI might affect human rights. With these contentious issues on the table this fall, the European Commission and Spain, which currently holds the Council presidency, face a mounting task as they aim to pass the bill by the end of this year at the latest.
As the app economy saw with the General Data Protection Regulation (GDPR), legislation that is ‘the first of its kind’ often sets a precedent that can help or hinder global progress in the digital revolution. We hope to see policymakers at every level of government in the EU understand the responsibility that comes with issuing an early policy regulating AI and continue trilogues thoughtfully, with both consumers and SMEs in mind.