As policymakers consider new online safety proposals, the U.S. Federal Trade Commission (FTC) recently convened a workshop to examine the evolving landscape of age verification, estimation, and assurance tools, legislative proposals centered around their use, and the interplay between emerging age authentication frameworks and existing regulations, such as the Children’s Online Privacy Protection Act (COPPA) Rule. During the workshop, FTC attorneys moderated a wide-ranging discussion featuring industry and policy leaders:

  • Graham Dufault, General Counsel, ACT | The App Association (ACT)
  • Emily Cashman Kirstein, Child Safety Policy Manager, Google
  • Antigone Davis, Vice President and Global Head of Safety, Meta
  • Amy Lawrence, Chief Privacy Officer and Head of Legal, SuperAwesome
  • Nick Rossi, Director, Federal Government Affairs, Apple
  • Robin Tombs, CEO and Co-founder, Yoti

Throughout this post, we use a few specific terms and want to clearly define the differences between them:

Age assurance: An umbrella term for approaches to determine or estimate a user’s age online with varying assurance levels, data requirements, and privacy implications.

Age verification: Determining a user’s exact age using authoritative data sources that identify the individual and associate the identity with a date of birth, such as a driver’s license, passport, or birth certificate. This is the most accurate form of age assurance because it involves the use of direct evidence of an individual’s age. However, verification also presents the highest privacy risks, especially in digital contexts, since identification documents must be uploaded instead of merely being shown to an individual in real life.

Age estimation: Estimating a user’s age based on direct evidence of “inherent features” or behaviors that vary with age. A common example of age estimation is analyzing an image of an individual to estimate their age using algorithms that look for specific attributes associated with age ranges. Another example is analyzing a user’s account history or social media activity.

Age inference: Estimating a user’s age based on secondary evidence. For example, possession of a credit card can indicate that a user is over a certain age if credit cards may only be issued to individuals over a certain age in the relevant jurisdiction. Thus, the possession of a credit card is secondary evidence that the person is over a given age, since the credit card issuer can be inferred to have obtained direct evidence of the individual’s age.

Self-declaration: Relying on age information self-reported by users or provided by a parent or guardian.
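The inference mechanism described above can be sketched in a few lines of code. This is a minimal illustration, not a real implementation: the jurisdiction lookup table and the minimum cardholder ages in it are hypothetical assumptions for the sake of the example.

```python
# Sketch of age inference from secondary evidence (credit card possession).
# The lookup table below is an illustrative assumption, not legal data.
MIN_CARDHOLDER_AGE = {"US": 18, "UK": 18}


def inferred_minimum_age(has_credit_card: bool, jurisdiction: str):
    """Return the minimum age we can infer, or None if nothing can be inferred.

    Possession of a credit card is only secondary evidence: it suggests the
    issuer verified the holder's age, but tells us nothing about exact age.
    """
    if has_credit_card and jurisdiction in MIN_CARDHOLDER_AGE:
        return MIN_CARDHOLDER_AGE[jurisdiction]
    return None


print(inferred_minimum_age(True, "US"))   # 18
print(inferred_minimum_age(False, "US"))  # None
```

Note that the result is a floor, not an exact age, which is precisely what distinguishes inference from verification.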

Throughout the discussion, panelists explored the practical and legal challenges developers face under broad age authentication mandates, the privacy and security tradeoffs inherent in collecting age information at scale, the tools currently available to parents, and the importance of designing flexible, risk-based approaches that protect children online without undermining parents or imposing unnecessary burdens on low-risk services.

At the start of the discussion, Graham highlighted ACT’s diverse membership, which operates across a variety of industry verticals, including digital health, education, cybersecurity, and agriculture. As he noted, many member companies’ offerings do not present the kind of age-related risks that would justify heavy-handed regulation, such as the App Store Accountability Act (ASAA). This proposal would impose a blanket requirement for app stores to collect users’ ages, put them into age categories, and share that data with every app on the stores, regardless of their content or services. For small tech companies, such a mandate would require receiving and managing sensitive personal information and absorbing complex compliance obligations under laws such as COPPA. For example, SwineTech, a U.S.-based agricultural software company, built an app called PigFlow that helps pig farmers manage their operations. As Graham pointed out, even though PigFlow is not designed for or directed to children, it would still be swept into the ASAA’s regulatory scheme without a corresponding online safety benefit.

Turning to the question of which services should seek age information, Graham urged policymakers to ground any requirements in demonstrable risk. Given the privacy and security concerns inherent to the process of verifying users’ ages, he cautioned that strict verification should be reserved for circumstances where the stakes are high enough to warrant doing so, such as where there are foreseeable, concrete risks to online safety, mental health, or reputation. As panelists discussed, apps like SwineTech’s PigFlow pose no such risks and should not be required to implement strict verification, while higher-risk services, such as those offering adult content or algorithmically driven feeds, may need age information to prevent exposure to inappropriate content. From Meta’s perspective, Antigone suggested that a risk-based approach may not be sufficient given that the open nature of app stores means minors could download any app and access unprotected features. However, presuming all apps to be high risk, regardless of their content or functionality, would impose unnecessary privacy, security, and compliance burdens on all app developers without a commensurate online safety benefit. She also floated extending parental authorization tools used for in-app purchases to app downloads, though, as Nick pointed out, existing parental controls already give parents that ability.

The discussion repeatedly underscored that deploying age verification at scale presents a fundamental tradeoff: collecting and handling sensitive personal information may enable age-appropriate protective measures, but it also creates significant privacy and security risks. As Graham made clear, for low-risk services, such as SwineTech’s PigFlow, speculative harms do not justify mandating the collection of sensitive personal information at scale. Emily similarly emphasized that age authentication efforts should be proportionate to the actual risks found in an online activity. For example, users should not have to upload a government ID to use a weather app. Some panelists noted that age verification may offer companies certain business benefits, including avoiding reputational harm and building trust in their brand. Moreover, as Antigone pointed out, requiring only apps that offer age-differentiated experiences to conduct age verification may disincentivize developers from offering those features. However, Graham cautioned that, while “requiring a class of apps to receive a signal might create a disincentive to that class of apps, I’m a little bit more worried about requiring all apps to receive a signal and therefore disincentivizing them to put an app on the app store in the first place.”

While panelists emphasized a whole-of-ecosystem approach to online safety, many proposals would undermine that goal by misaligning accountability in ways that fail parents, burden developers, and let the biggest platforms off the hook. For example, as Nick explained, the ASAA would require age signals to be transmitted to every app developer, regardless of whether a parent wants apps to have their child’s data or whether the developer has any use for it. It would also force developers to build infrastructure to receive those signals and comply with regulations triggered by actual knowledge of a user’s age, such as COPPA. Such obligations would apply even if their app is not directed to children. This approach may also frustrate developers’ own online safety mechanisms, which can be tailored to the specific app experiences or contexts. As Nick pointed out, “developers are in a position, as the ones who are creating and serving content within their apps, to have important context about their app and their users.”

The conversation then shifted to how companies can ensure that age authentication tools perform as intended. Graham stressed that credible age authentication requires rigorous, measurable evaluation. For example, are false positives and false negatives evenly distributed? What does their balance reveal about system performance? Examining those error rates will provide a clear picture of a tool’s effectiveness. He further emphasized the importance of measuring completion rates to identify friction points and assess where users are liable to abandon or circumvent an age gate. For companies building or procuring these tools, Graham pointed to ISO/IEC 27566 as a foundational framework for standardization that enables buyers and sellers to meet shared expectations and concrete performance benchmarks.
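The metrics Graham describes are straightforward to compute. The sketch below shows the arithmetic with made-up counts; the numbers are illustrative assumptions, not benchmarks from any real age-assurance tool, and the "positive" convention (tool classifies a user as an adult) is our own labeling choice.

```python
# Hypothetical evaluation of an age-assurance tool. All counts are invented.

def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate).

    Here a "positive" means the tool classified the user as an adult, so a
    false positive is a minor waved through the gate, and a false negative
    is an adult wrongly blocked. Uneven rates across these two directions
    reveal how the tool's errors are distributed.
    """
    fpr = fp / (fp + tn)  # share of minors classified as adults
    fnr = fn / (fn + tp)  # share of adults classified as minors
    return fpr, fnr


def completion_rate(started, completed):
    """Share of users who finished the age check once they began it;
    low values point to friction, abandonment, or circumvention."""
    return completed / started


fpr, fnr = error_rates(tp=940, fp=30, tn=970, fn=60)
print(f"false positive rate: {fpr:.1%}")
print(f"false negative rate: {fnr:.1%}")
print(f"completion rate: {completion_rate(1000, 870):.1%}")
```

Tracking both error directions separately matters: a tool can look accurate overall while letting through an unacceptable share of minors, or while blocking enough adults to drive users toward circumvention.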

Finally, panelists underscored that age authentication should complement existing parental controls and family engagement instead of replacing them with a rigid, one-size-fits-all mandate. Graham pointed to the range of tools ACT members are already developing or using to empower parents, including web filters, app download and purchase restrictions, activity trackers, screen time limits, and communication controls. He argued that effective online safety policy requires preserving parental agency, expanding awareness of existing tools, and ensuring policymakers understand the practical burdens families face when navigating the digital ecosystem. By contrast, a prescriptive age verification mandate risks curtailing parents’ flexibility, constraining developers’ ability to build software responsive to parents’ needs, and locking in rules that may become obsolete as the app ecosystem evolves.

As Graham made clear throughout the discussion, effective online safety regulation must balance protection with privacy, flexibility, and innovation. A one-size-fits-all mandate risks undermining parental agency, innovation, and privacy without delivering meaningful online safety gains. He urged policymakers to instead pursue flexible, risk-based frameworks that empower parents, protect privacy, and align accountability with demonstrable risks.