With governments around the world in a race to regulate, policymakers in the European Union (EU) and United Kingdom (UK) are considering various approaches to rules around artificial intelligence (AI). Last year, in our Global AI Principles, we outlined a flexible, thoughtful approach to AI policy and governance that takes into account the latest data privacy and security laws as well as the pros, cons, and challenges of AI, and we hope to see those principles considered as policymakers debate the rules around AI. As these conversations continue in capitals around the globe, we’re providing members with an EU + UK AI policy update.

AI in the EU and SME considerations

In 2021, the European Commission (EC) introduced the first cross-sectoral regulatory framework for AI within the EU, with the stated aim of fostering secure and ethical AI technologies. Despite initial resistance from some Member States, the EU’s legislative bodies approved a final version of the AI Act in early 2024.

The EU’s AI Act is a broad regulatory framework reminiscent of the General Data Protection Regulation (GDPR), with influence that extends well beyond EU borders. While the Act generally aligns with our recommended approach, it unfortunately bans certain uses of AI outright. It does this by sorting AI applications into prohibited, high-risk, and limited-risk categories and setting stringent compliance requirements for AI providers, including self-assessment, adherence to accountability principles, and significant penalties for non-compliance.

It is worth noting that small and medium-sized enterprises (SMEs) are directly addressed in the EU’s AI Act. SMEs in the EU are defined as entities with fewer than 250 employees and either annual turnover not exceeding €50 million or a balance sheet total not exceeding €43 million. The Act offers support to SMEs through advice, financial aid, and exemptions for free and open-source AI components. It also creates regulatory sandboxes to provide a risk-free testing environment, with SMEs receiving priority access and reduced compliance requirements for high-risk AI systems. Despite these adjustments, however, many SMEs will inevitably still face significant compliance burdens that will, in comparison to other markets, place EU SMEs at a disadvantage. This highlights the need for further accommodation as the AI Act is finalised and implemented to ensure an equitable path to compliance for all enterprises.

AI in the UK and SME considerations

In February 2024, building on feedback from its 2023 AI white paper, the UK Government unveiled its latest thinking on AI regulation. In line with our recommendations, which emphasise the safe and ethical development of artificial intelligence, the UK’s approach recognises that AI will affect every part of the economy and encourages regulators to explain how they will manage AI in their respective sectors.

The UK’s strategy includes a significant investment exceeding £100 million to foster AI innovation and enhance the technical capabilities of regulators. This investment could be vital for the UK’s SMEs, providing resources to help them navigate the evolving AI landscape confidently. Separately, the government’s allocation of £10 million to augment regulators’ AI expertise would do much to help UK policymakers understand demonstrated risks and harms, how AI fits into their existing efforts to protect consumers and companies, and whether AI is indeed a separate and distinct modality requiring new layers of regulation.

The additional announcement of new AI research hubs, bolstered by an £80 million investment, aims to catalyse transformative innovation across the UK, offering SMEs opportunities to engage in cutting-edge research and development. Through initiatives like the AI and Digital Hub, the government’s proactive engagement with SMEs underscores a strategic focus on ensuring they are well-positioned to leverage AI technologies for growth and innovation.

It’s important to note just how different the UK government’s approach to AI is from the EU’s. In an effort to protect both innovation and consumers, the UK has chosen to address AI risks through existing regulatory frameworks, which apply regardless of whether AI is involved. The EU’s approach, by contrast, essentially creates a new layer of requirements on top of many existing technology-neutral ones.

What to expect moving forward

Post-adoption, the EU’s AI Act will enter into force 20 days after its official publication, with phased application beginning in 2026. Specific provisions, including prohibitions and obligations for general-purpose AI (GPAI) models, will take effect between six and 36 months after entry into force. As we often see in the EU, how the law is operationalised will truly determine whether it fosters AI innovation while protecting consumers, and we will continue to bring the SME viewpoint forward in this process.

The UK’s 2024 roadmap for AI regulation, with its focus on refining policy positions, enhancing regulatory collaboration, and promoting AI opportunities, is at an earlier phase than the EU’s. With key regulators due to outline their strategic approaches to AI by 30 April 2024, a number of further steps will need to take place before policy changes are established in the UK, which will provide our community with important points of input.

As this pioneering legislation sets a significant precedent, it has the potential to either help or harm the SMEs driving the global digital evolution. We will continue to advocate for harms-based and scaled risk management concepts across the EU and UK, emphasising the importance of considering SMEs’ unique needs and challenges in the evolving AI regulatory landscape. Early policies regulating AI must be crafted with care, ensuring they foster innovation while maintaining ethical standards and inclusivity across all levels of business and society. While we are excited to see governments across the globe engaging with policies around artificial intelligence, much work remains to be done on AI policy.