It seems like everyone is talking about the “AI bubble” right now: whether we’re in one, whether it’s about to burst, or whether we’re only at the beginning of something much larger. From Brussels to London to Washington, the same headline keeps appearing, prompting the same question: what happens next?
For independent app developers and small tech businesses, the bubble question misses the point. The real issue isn’t whether AI is overhyped. It’s whether the next phase of AI development will remain open to the people building the practical, everyday tools we have all come to rely on. Small teams are already applying AI to improve accessibility, translate educational content, detect fraud, coordinate transit, streamline workflows, and help families manage care. These solutions are where AI most directly improves daily life, and, unfortunately, they are most at risk if compliance expectations, regulatory complexity, and access to foundational AI systems are shaped primarily around the needs and resources of the largest companies. The future of AI will be determined by the policy choices made now and by how regions around the world write the rules that decide who gets to build, deploy, and scale this emerging tech.
Whether you believe we’re in an AI bubble or not, the real story is how governments are writing the rules that will decide who can participate in this next wave of innovation. We’re breaking down what that looks like across the EU, UK, and United States, and what small tech companies need to know as they navigate compliance and growth globally.
European Union: Implementation Will Determine Who Can Participate
The EU’s AI Act will shape how AI tools are built and deployed globally, and for small innovators, implementation will determine whether participation remains possible. But AI adoption challenges don’t exist in isolation. Though the AI Pact and the Code of Practice are intended as guidance, they often add new layers of complexity on top of the AI Act, while the Digital Markets Act (DMA) has delayed when developers can introduce new AI features and services.
What small tech needs:
- Risk-based categories work well, but they must be enforced clearly and transparently to distinguish everyday tools from truly high-risk systems.
- Practical guidance that makes compliance achievable rather than prohibitively expensive.
- Compliance obligations that scale proportionately with product size and use case.
United Kingdom: Flexibility Helps, But Only If Expectations Are Clear
The UK’s AI Opportunities Action Plan sets out a national roadmap to scale the AI economy through new growth zones, expanded computing power, and a National Data Library that can unlock new public and private sector uses of AI. It aims to close the country’s skills gap, attract investment across industries, and position the UK as a global AI leader. But that vision can only come to life with standard-essential patent (SEP) reform. Some of the most promising uses of AI depend on data generated from standards-driven device ecosystems (Internet of Things, wearables, patient-generated health information, etc.). Small tech innovators need a transparent and predictable SEP licensing framework that lets them leverage the power of technological standards without the risk of an eleventh-hour shakedown from a SEP holder. Without licensing reform, the UK’s AI ambitions risk stalling.
What small tech needs:
- Coordination and simplification across regulators to ensure predictability and minimize compliance costs.
- Consistent guidance so developers don’t have to re-architect products mid-development.
- SEP licensing reform to provide the transparency, fairness, and cost predictability that determine whether AI innovation can scale.
Without this alignment, “flexibility” becomes guesswork, which slows deployment.
United States: Patchwork Is Not a Strategy
With no national risk-based AI framework, states are advancing their own rules, creating fragmented compliance mandates that hit small companies hardest. The fragmentation is especially challenging when rules govern how AI systems must be built rather than how they are used, creating the risk that companies will need to maintain a different model version for each state.
The patchwork challenge also connects to another systemic issue: the growing AI talent gap. Many U.S. companies rely on highly specialized technical workers in AI development, and recent H-1B visa changes have made it even harder for small developers to hire and retain the talent needed to compete globally. Without alignment between AI policy and workforce policy, the United States risks choking off innovation and falling behind its foreign competitors.
What small tech needs:
- A federal baseline to protect consumers while supporting innovation.
- Compliance obligations that scale with the level of risk, so resources go to innovation rather than navigating legal variability.
- H-1B fixes that address the AI talent gap and keep U.S. startups competitive.
Moving Forward
Across every region, the future of AI will be shaped less by speculation or market cycles than by how governments design and align their rules. From the EU’s complex framework for AI and competition regulation to the UK’s ambitions under its AI Opportunities Action Plan, to the fragmented approach and growing talent gap in the United States, one pattern is clear: when regulation and opportunity fall out of sync, small tech is the first to lose.
The independent innovators building the most human-centered AI tools need clarity, proportionality, and policies that enable voluntary, private sector-led standards development so they can keep creating solutions that drive progress in every sector. We’re continuing to engage policymakers to ensure that AI policy supports the startups and small tech companies shaping this future. To get involved in our AI advocacy or share your perspective, contact Brad Simonich if you’re based in the U.S. or EU, or Stephen Tulip if you’re in the UK.