Introduction

As Europe continues to shape the future of artificial intelligence, the EU is expected to publish the final draft of the General-Purpose AI (GPAI) Code of Practice (Code) in May. While the EU AI Act introduces binding legal obligations for both high-risk and general-purpose AI systems, it also provides for the creation of a voluntary code of practice. The Code represents a key voluntary tool for aligning early with the AI Act’s goals.

At ACT | The App Association, we have been engaging with policymakers throughout the evolution of this Code. While the third draft shows signs of progress in aligning principles with operational commitments, we remain concerned that small and medium-sized enterprises (SMEs) are still being sidelined in practice.

The Third Draft: Progress and persisting questions

The third draft represents a meaningful advance toward harmonising ethical principles with practical actions. It provides a more structured mapping between measures and their related commitments, reinforcing the aim of helping GPAI developers anticipate and align with the AI Act’s objectives ahead of binding enforcement.

Importantly, the draft acknowledges that SMEs and startups often lack the financial and technical resources of large corporations, and in principle it adopts a “proportional” approach that tailors expectations to the size and capacity of the GPAI provider. On paper, this proportional approach is a welcome step; in practice, questions remain.

SMEs in the Code: A gap between intent and implementation?

While the preamble to the Code appropriately highlights the importance of not overburdening SMEs, this sentiment is not consistently reflected in the operational sections. Although there have been clarifications since the previous draft, the text still lacks concrete tools, templates, or streamlined requirements tailored to SMEs. As a result, smaller actors are effectively held to the same standards as major developers, with only general references to flexibility, leaving SMEs with an excessive compliance burden.

We are concerned about the approach to compliance flexibility embedded in Measure II.4.5 of the Safety and Security section, which addresses rigorous model evaluations. It requires evaluations to meet standards comparable to those used in leading scientific journals or major machine learning conferences, a formidable bar even for large tech players. SMEs are, in theory, allowed to deviate from these standards if they lack the expertise or financial resources, but only on the condition that they formally notify the AI Office and negotiate an alternative approach.

This presents several challenges:

  • Administrative complexity: Many SMEs may not have the legal or compliance resources to navigate these formal processes effectively.
  • Lack of clarity: The Code provides no specific benchmarks or examples for what might constitute an “acceptable alternative.”
  • Uncertainty and inconsistency: With decisions left to case-by-case negotiations, the process could become unpredictable or opaque, especially as the AI Office is not yet fully operational, and its future capacity remains uncertain.

This mechanism lacks the predictable structure needed for effective implementation. It risks inconsistent application and legal uncertainty, and could even discourage SMEs from engaging with the Code at all, contradicting the overarching goal of promoting inclusive and widespread adherence to responsible AI practices.

Recommendations for the upcoming draft

The forthcoming version of the Code represents a valuable opportunity to ensure it serves as an effective, inclusive tool for the entire AI ecosystem. To better support SME participation, we respectfully suggest the following enhancements:

  • Clearly defined and scalable compliance pathways: Introduce proportional obligations with straightforward, tiered guidance tailored to organisational size and capacity.
  • Practical and easy-to-use tools: Provide accessible compliance aids such as templates, checklists, and examples that reduce reliance on legal or technical expertise.
  • Time-bound, accountable procedures with the AI Office: Establish predictable timelines and clear documentation requirements to ensure SMEs can engage effectively without undue delay or uncertainty.

By incorporating these elements, the Code can better reflect the diversity of the European AI landscape and provide a more supportive environment for smaller innovators.

Without these additions, the GPAI Code risks losing the trust and participation of smaller innovators, undermining its own objectives of fostering safe, fair, and inclusive AI development in Europe.

Conclusion

We value the efforts of the European Commission, the AI Office, and the AI Board in fostering a culture of responsible AI development. As the Code of Practice continues to evolve, we hope it will more fully embody the principle of proportionality, translating abstract flexibility into concrete, accessible, and reliable pathways for SMEs. Doing so will strengthen trust, encourage broad participation, and ensure the benefits of GPAI development are equitably shared across the European innovation ecosystem.