Artificial intelligence (AI) is at the centre of regulatory discussion and action in the European Union, with debate focused on its potential risks and benefits. As a result, we have seen three separate but related AI actions from policymakers in the EU: the EU AI Act, the AI Pact, and the European Commission’s Code of Practice for General-Purpose AI.

For businesses, including the small and medium-sized enterprise (SME) members of ACT | The App Association, navigating these new rules may seem overwhelming, but it’s crucial to understand how they could affect your operations and compliance obligations.

This blog will explore how these three regulatory tools interact and how they impact AI developers, deployers, and users.

What is the AI Act?

As we have previously outlined, the AI Act is the central regulatory framework aimed at addressing the risks posed by AI, particularly high-risk systems. These include AI technologies used in critical areas like healthcare, employment, and public services, where misuse could have significant consequences for individual rights and safety. The Act introduces a series of mandatory obligations for developers and deployers of these systems, including stringent transparency and risk management requirements. Although the AI Act has entered into force, companies will have until 2026 to fully comply.

What is the AI Pact?

The AI Pact is a voluntary initiative launched by the European Commission to encourage early compliance with the AI Act before its obligations take effect. It allows organisations to start implementing key principles that will eventually be mandatory under the AI Act. While it does not impose legal obligations, the pact focuses on areas like transparency, governance, and risk management, particularly for high-risk AI systems.

For those who sign on, the AI Pact involves three core commitments: 1) adopting an AI governance strategy, 2) mapping high-risk AI systems, and 3) promoting AI literacy among employees. Many of these commitments require significant resources and careful planning, making it a challenge for smaller enterprises that may not have the financial capacity for such extensive implementation.

The deadline to sign the pledge for organisations that wish to participate in the AI Pact’s launch campaign was 11 September 2024. The tentative date for a high-level signing ceremony in Brussels is 25 September 2024, with the possibility of hybrid participation.

What is the AI Code of Practice?

The European Commission’s voluntary Code of Practice for General-Purpose AI (GPAI) complements the AI Pact and the AI Act. The Code, which is still in development, is designed to provide more detailed guidance for companies to ensure they adhere to ethical standards, even for AI systems that aren’t considered high-risk under the AI Act. It will focus on practical implementation, addressing how to manage datasets, ensure traceability, and maintain transparency requirements. This is especially important for technologies such as AI-generated content and deepfakes, where clear labeling and disclosure are crucial to prevent misuse.

While following the Code is voluntary, it is an essential tool for organisations looking to navigate the complexities of AI regulation. Adhering to the Code of Practice will mean a company is considered fully compliant with the AI Act’s provisions for GPAI, giving businesses a straightforward path to meet the necessary legal requirements.

How do the Act, Pact, and Code work together?

Together, the Act, Pact, and Code offer a comprehensive framework to guide businesses, but neither the Pact nor the Code is required for compliance with the AI Act. Clear communication from the European Commission is essential to help businesses, particularly SMEs, understand these distinctions and make informed decisions without unnecessary pressure. While the Pact and Code encourage early adoption of best practices, they are not prerequisites for meeting the legal obligations of the AI Act. Companies that choose not to engage in these voluntary measures will still be able to comply fully with the AI Act when it becomes enforceable in 2026.

Outlining this distinction is crucial, especially for smaller businesses. The voluntary nature of the Pact and Code risks unintentionally creating an uneven playing field: companies with more resources may benefit from early adoption, while smaller enterprises that lack the capacity to participate fall behind. Companies, particularly SMEs, need to know that engaging with these voluntary initiatives is optional and does not affect their ability to meet the legal requirements of the AI Act.

A confusing and uncertain regulatory path forward

The combination of the EU AI Act, the AI Pact, and the forthcoming Code of Practice has created a complex regulatory environment. While these tools are designed to address the risks posed by AI and provide much-needed guidance, the uncertainty surrounding their timelines, obligations, and practical implementation leaves many businesses, particularly SMEs, facing significant challenges.

At the App Association, we recognise the effort to offer guidelines and support through initiatives like the AI Pact and Code of Practice. However, we must also be cautious of the burdens these frameworks may impose, especially on SMEs that often lack the resources to adapt to rapidly changing regulations. The voluntary nature of the Pact and Code offers flexibility, but the unclear relationship between these initiatives and the AI Act, along with uncertainty about future legal obligations, makes long-term planning difficult for many SMEs.

This evolving regulatory maze presents both opportunities and risks. Early voluntary compliance may seem appealing, but it’s important to carefully weigh the cost and resource demands before committing to these voluntary frameworks. Striking the right balance between fostering innovation and avoiding unnecessary burdens is crucial to ensuring that smaller businesses can thrive.