For developers working with artificial intelligence (AI), the recent finalisation of the European Union (EU) AI Act brings a mix of opportunities and challenges. As the EU prepares to implement the new AI Act, it’s crucial for app developers and policymakers alike to grasp the real-life implications of the law.

ACT | The App Association has been a key player in the ongoing discourse on AI for several years. Our advocacy for balanced, flexible, and forward-thinking policies that foster innovation while ensuring ethical standards has been unwavering. In this blog, we explore how elements of the EU AI Act align with our AI policy principles and its overall effect on the app economy, particularly for small and medium-sized enterprises (SMEs).

SME support

The AI Act recognises the critical role of SMEs and startups in driving innovation and economic growth. It also acknowledges the disproportionate effect of compliance costs on SMEs and provides solutions. By granting SMEs priority access to AI regulatory sandboxes, the Act gives smaller businesses the opportunity to test and refine their AI technologies in a controlled environment without the usual red tape. Additionally, the AI Act promises to offer guidelines, training sessions, and dedicated communication channels to support SMEs with compliance. It aims for a proportional reduction in fees for conformity assessments, easing the burden of compliance costs for SMEs. We view these measures as a step in the right direction and hope to see them implemented effectively.

Risk-based approach

As we have previously outlined, the AI Act introduces a risk-based approach that categorises AI systems into different levels of risk and sets varying requirements for each category: it bans ‘unacceptable risk’ AI (with limited exceptions), places most of the obligations on ‘high-risk’ AI, and outlines minimum requirements for all general-purpose AI (GPAI).

A risk-based approach aligns with our AI policy principles, leaving room for innovation while addressing potential harms effectively. Such an approach should ensure the appropriate distribution and mitigation of risk and liability; specifically, those in the value chain with the knowledge and ability to minimise risks should have appropriate incentives to do so. This approach is more likely to withstand the test of time and adapt to rapid technological advancements.

However, we remain cautious during the implementation process and stay committed to monitoring the regulation’s adoption. The definitions of high-risk AI categories must be clear and applied correctly, and it is essential that they not be used for regulatory overreach. Deployers of high-risk AI still face uncertainties around disclosure and reporting obligations, including disclosures related to training in-house models, which could reveal proprietary information.

Obligations for all GPAI

The AI Act mandates minimum requirements for all GPAI regarding transparency and safety standards. While we and our members have long advocated for transparency and safety via our AI policy principles, we are concerned about the broad scope of GPAI and the additional burdens related to the cost of compliance for SMEs and startups.

Copyright

We appreciate the EU AI Act’s acknowledgment of copyright issues related to data used for AI training. Our members need strong copyright protections to safeguard hard work and creativity. The AI Act aims to ensure that creators are informed when their copyrighted materials are used in training AI systems, helping maintain the integrity of their intellectual property. This transparency is crucial for developers who invest substantial time, energy, and innovation into their products.

Further guidance is needed on how these copyright provisions will be implemented in practice. More precise guidelines will help developers navigate the complexities of compliance, balancing the protection of creators’ rights against the risk of stifling competitive technological advancement.

Interplay of laws

The AI Act acknowledges its interplay with other significant EU laws, such as the General Data Protection Regulation (GDPR) and the Copyright Directive. The rapid introduction of numerous regulations by the EU has created a complex and sometimes confusing landscape, especially for SMEs. The overlaps between the AI Act and other digital regulations are not always clear, making it challenging for small businesses to navigate the regulatory environment effectively. While we appreciate the acknowledgment of this interplay, we hope the implementation process will prioritise providing detailed guidance on how these laws interact, helping SMEs understand their compliance obligations.

Moving forward

We welcome the AI Act’s goal to harmonise laws across the EU, reducing the confusion and high costs associated with a patchwork of regulations. We also support its goal of promoting the ethical use of AI. Our focus is on ensuring that the new law benefits SMEs without imposing excessive burdens, as added costs and complexities disrupt smaller businesses the most.

A thoughtful implementation of the AI Act holds promise for fostering innovation and trust in AI. We’re committed to actively engaging with policymakers to ensure that SME developers can stay ahead of the curve.