What the recent Tea app breach teaches us about trust and the cost of age verification mandates for developers and kids alike
What happens when an app built for safety becomes the source of harm? The now-infamous app Tea, designed to help women protect themselves while dating men, asked users to upload selfies holding their government-issued ID. That voluntary safety feature resulted in the collection of some of the most sensitive personal data an app can hold.
Then, Tea was breached. Twice. Over 130,000 images, posts, and messages were exposed, including photo IDs listing home addresses and sensitive messages about access to reproductive care. Victims are now being harassed, doxed, and threatened online, enduring psychological and emotional abuse as well as threats of physical harm.
Data breaches like this aren’t rare; they’re routine. And what happened with Tea is exactly the kind of breach we risk institutionalizing through the age verification mandates being implemented or considered across the United States, the United Kingdom, and other countries. As these mandates spread, future breaches are inevitable, and they will expose not only adults but also children, revealing home addresses, personal identifiers, and private messages that put them in harm’s way and leave families to deal with the dangerous, real-world consequences.
What Is Age Verification All About?
Age verification proposals and rules are meant to protect children online by theoretically limiting their access to harmful or inappropriate content. In practice, these rules often mean collecting troves of sensitive personal information, such as government-issued or school IDs, facial scans, or behavioral data, turning curated online marketplaces (COMs), browsers, apps, and websites into enticing targets for cybercriminals.
In the United States, we’re seeing a wave of proposals that would require some form of age verification and data collection, including the federal Kids Online Safety Act (KOSA), the App Store Accountability Act (ASAA), and a growing number of state-level laws. Similar measures are appearing around the globe, from the European Union to Australia, and are already in force in the United Kingdom. The UK’s Online Safety Act (OSA) forces any service that might expose minors to harmful content to run “highly effective” age checks or face massive fines or even shutdown. That net is wide and covers social media, gaming, and search. Shortly after the law went into force, we saw the UK’s OSA age verification mandates fail in three critical ways: 1) they don’t eliminate danger, they just create new dangers; 2) they ask families to trade their children’s privacy for a false sense of security; and 3) they leave technologists bearing the liability for a system no one can guarantee will hold.
For developers, these poorly designed mandates are a nightmare scenario: being roped into collecting and safeguarding highly sensitive personal data they never wanted in the first place.
What Age Verification Mandates Do to the Developer Ecosystem
Around the world, age verification mandates are becoming more common, and they tend to reach developers in three main ways: 1) through targeted rules for specific categories like adult content sites, 2) through broad requirements placed directly on nearly all online services, and 3) through mandates on distribution platforms, like app stores. The last category, app store mandates, often causes the biggest headaches for developers.
At first glance, they might seem like a compromise between the two other kinds of mandates, but in practice, they can operate like blanket mandates on every app, no matter the content or audience. In the United States, Texas, Louisiana, and Utah have new laws modeled after ASAA, requiring app stores to verify the age of users before allowing downloads. This affects all developers, including those whose products don’t serve children or don’t involve age-sensitive content, and requires them to build systems that handle age category flags and parental consent indicators. Once those flags exist, they can trigger additional obligations under federal laws like the Children’s Online Privacy Protection Act (COPPA), requiring developers to identify child users, link them to parental accounts, and obtain verifiable parental consent (VPC) for certain features. The kicker: the parental consent-to-download flag required under these state laws doesn’t actually satisfy COPPA’s VPC standard, meaning developers end up with the extra work and liability without meeting the federal requirement. One estimate puts the initial cost for small businesses to comply with app store age verification mandates at up to $280 billion, not counting the ongoing expense and risk of handling sensitive personal data many companies never expected, or wanted, to collect.
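To make that double burden concrete, here is a minimal TypeScript sketch of how a developer might have to consume an app store’s age signal under an ASAA-style law. Every field name here is hypothetical; no real store SDK is referenced. The key point is the last one: the store’s consent-to-download flag cannot stand in for COPPA’s verifiable parental consent.

```typescript
// Hypothetical shape of the age signal an app store might send a developer
// under an ASAA-style law. These field names are illustrative only and do
// not come from any real store SDK.
interface StoreAgeSignal {
  ageCategory: "under13" | "13to15" | "16to17" | "adult" | "unknown";
  parentalConsentToDownload: boolean; // consent to download, NOT COPPA VPC
}

interface AppUserState {
  // COPPA's verifiable parental consent (VPC) must be collected separately;
  // the store's download-consent flag does not satisfy it.
  coppaVpcObtained: boolean;
}

// Decide whether an age-gated feature (e.g., personal data collection)
// can be enabled for this user.
function canEnableAgeGatedFeature(
  signal: StoreAgeSignal,
  user: AppUserState
): boolean {
  // Adults need no parental gate.
  if (signal.ageCategory === "adult") return true;

  // For known or possible minors, the store's flag alone is never enough:
  // the developer still owes its own VPC flow under COPPA.
  return user.coppaVpcObtained;
}
```

Note that even in this sketch, the developer ends up building and maintaining a parallel consent system on top of the store’s flag, which is exactly the extra work and liability described above.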
Other times, the mandate lands directly on the developer, as we’re seeing in the UK, where even the smallest teams must implement age checks and then store, transmit, and protect highly sensitive personal data they never set out to collect. In this context, “developer” can mean not just an app creator, but also a browser or gaming platform that operates the service itself, while a COM refers specifically to the central marketplace or store distributing multiple developers’ products. In the EU and Australia, similar proposals have been floated or implemented, and, as in the UK, these mandates often capture services with no child audience at all. And unlike frameworks such as COPPA in the United States, which already impose strict obligations on services directed to children, these new rules apply the same invasive requirements to services with no child-directed design or audience, effectively importing the compliance burden without the contextual safeguards that made the original laws workable.
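For developers who have no choice but to comply, the least dangerous posture is to keep as little as possible. Below is a minimal sketch, assuming a hypothetical third-party verifier (the endpoint and response shape are invented for illustration): the service persists only a yes/no result and a timestamp, never the ID image or face scan, which is exactly the kind of data that made the Tea breach so damaging.

```typescript
// A minimal data-minimization sketch for a developer forced to run age
// checks. The provider, endpoint, and response shape are hypothetical;
// the point is what gets persisted: a boolean and a timestamp, never
// the underlying ID image or face scan.
interface AgeCheckRecord {
  isOver18: boolean;
  checkedAt: string; // ISO 8601 timestamp
}

async function runAgeCheck(sessionToken: string): Promise<AgeCheckRecord> {
  // Hypothetical third-party verifier: the raw ID or face scan stays with
  // the provider and is never returned to, or stored by, this service.
  const res = await fetch("https://age-verifier.example.com/v1/check", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ session: sessionToken }),
  });
  if (!res.ok) throw new Error(`Age check failed: HTTP ${res.status}`);

  const body = (await res.json()) as { isOver18: boolean };

  // Persist only the minimum needed to prove a check happened.
  return { isOver18: body.isOver18, checkedAt: new Date().toISOString() };
}
```

Even this minimized record is new sensitive data the developer must now secure indefinitely, which is the compliance burden the paragraph above describes.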
Once a developer integrates an age verification system, they’ve entered the data protection business, instantly making their app a more appealing target for cybercriminals. Big players can absorb compliance costs and even monetize the new data; small tech shoulders the risk while dominant players take the profit. A single misstep can mean regulatory penalties, suspension from the COM, or irreparable damage to user trust. For independent developers and small tech entrepreneurs, the cost of getting it wrong is often higher than the cost of walking away entirely.
Lessons Learned
Unfortunately, the Tea breach wasn’t an outlier; it was a warning shot. Surveillance dressed up as safety doesn’t keep kids out of harm’s way. Instead, it strips families of their privacy and leaves developers to take the fall. Responsible technologists like our members shouldn’t have to choose between building safe products and complying with the law, yet global age verification mandates force exactly that choice.
Recent age verification mandates don’t stop harmful content from being created, monetized, or amplified; they just offload the liability onto others. If lawmakers are serious about protecting children rather than lining the pockets of companies like Meta, they need to pass a strong federal privacy law, crack down on the data collection practices that actually exploit kids, and stop pretending that surrendering personal information is the price of children’s safety.