As federal negotiators scramble to assemble a comprehensive AI legislative package—one that would ideally preempt the mounting state-by-state patchwork threatening to fragment America’s AI ecosystem—there’s pressure to sweeten the deal with provisions addressing children’s safety online. The impulse is understandable, given reports of AI chatbots like Meta’s having “sensual” conversations with minors. But if the App Store Accountability Act (ASAA), itself a Meta-backed bill, ends up as the “children’s safety” component of any AI package, Congress will have smuggled a poison pill into what should be pro-innovation legislation.

The Preemption Project Makes Sense. The ASAA Add-On Doesn’t.

Preempting state and local AI mandates is sound policy. AI developers and deployers—especially the small businesses ACT | The App Association (ACT) represents—shouldn’t have to navigate 50 different regulatory schemes just to bring a productivity tool or predictive analytics product to market. A federal framework that establishes baseline rules while blocking similarly scoped state requirements would give innovators the clarity they need to invest, build, and compete.

But grafting ASAA onto that framework would undermine the entire project. ASAA isn’t a targeted child safety measure. It’s a sweeping mandate that treats app stores as general-purpose age gatekeepers for every application they distribute—including the vast universe of AI tools that pose no age-specific risks. The bill would require app stores to implement age verification systems and demand that developers build their own modules to receive and track age information. Worse, ASAA would replace the parental consent tools app stores already provide with a new system that relies on developers to receive and manage parental consent. For an AI package meant to accelerate American innovation, this would be legislative self-sabotage.

ASAA Treats Every AI Tool Like an Age Risk, No Matter What It Does.

Here’s the core problem: ASAA doesn’t distinguish between social media platforms designed to maximize teenage engagement and a small business’s AI-powered scheduling assistant. It doesn’t care whether an app collects zero user data or hosts no user-generated content. If it’s distributed through an app store, it gets swept into the same compliance dragnet. That means every developer building AI tools—voice transcription services, inventory management bots, accessibility aids—would suddenly face age verification obligations and certification requirements built for an entirely different threat model.

This makes no sense. A chatbot that helps restaurant owners optimize supply chains doesn’t present the same risks as a platform designed to addict adolescents to endless scrolling. Yet ASAA would burden both identically, forcing developers to implement age checks and navigate liability exposure even when their tools have nothing to do with children. As I noted during the FTC’s age verification workshop, requiring blanket age verification across all software tools—regardless of their design, data practices, or risk profile—is regulatory overkill that stifles innovation without meaningfully protecting kids.

For small businesses, this is catastrophic. The developers we represent don’t have compliance teams standing by to retrofit age verification into products that were never designed with minors in mind. They don’t have legal departments to stand up new Children’s Online Privacy Protection Act (COPPA) compliance processes. Notably, the Federal Trade Commission (FTC) adopted new guidance purporting to allow operators to collect information free from COPPA constraints for the narrow purpose of age verification, but that guidance does nothing to relieve the small businesses that ASAA would force-feed “actual knowledge” of a child’s under-13 status. These startups and entrepreneurs certainly can’t afford the risk of getting it wrong, facing potential enforcement actions, or being kicked off distribution channels because they’re late to the compliance party.

An Open Goal for China.

The geopolitical stakes make this even worse. Congress keeps saying it wants America to lead in AI. But saddling U.S. developers with ASAA’s compliance burdens—while Chinese competitors face no such constraints—is legislative malpractice. App stores are the most efficient distribution channel for reaching global users, especially for small businesses. Turning them into compliance chokepoints doesn’t just slow American innovation; it hands Beijing a competitive advantage. If accessing AI tools through U.S. platforms becomes bureaucratically nightmarish while alternative channels remain open, guess where developers and users will migrate?

ASAA’s defenders might argue that age verification is a small price to pay for child safety. But that assumes the bill actually targets real harms rather than creating broad, indiscriminate mandates. Effective child protection policy should be risk-based, focused on services where age-related dangers actually exist. ASAA is the opposite: a blunt instrument that treats every app as presumptively dangerous and every developer as presumptively liable.

What Belongs in an AI Package Instead.

If negotiators want to address children’s safety within an AI legislative framework, they should look to narrow, well-scoped provisions that target risks current law fails to reach. For example, measures empowering parents and guardians to protect their kids from AI chatbot companions are appropriately tailored to the risks presented without setting AI progress back several steps. Charting a course on this issue is not impossible, but the effort must keep a light-touch approach at its core, or America’s AI policy risks becoming a self-defeating endeavor.