At CES 2025, an insightful panel featuring Federal Trade Commission (FTC) Commissioners Rebecca Slaughter and Melissa Holyoak, along with former FTC Commissioners Christine Wilson and Julie Brill, tackled the evolving regulatory landscape for artificial intelligence (AI). They examined the FTC's role in AI oversight, particularly through the unfair or deceptive acts or practices (UDAP) half of its authority (the other half covers unfair methods of competition, or UMC). While the agency plays a crucial role in protecting consumers from fraud and deception, the panel raised concerns about how far enforcement policy could go before stifling innovation.
Recently, the FTC has been honing its application of UDAP authority to AI developers, prompting questions about the scope of its purview. The panelists noted that some cases are straightforward: when a company lies about what its AI product can do and the elements of a deception case under the FTC Act are clearly met, there is little disagreement that UDAP applies. However, a clear line is forming between two distinct schools of thought on where the FTC's authority over AI should and should not apply. For example, one recent enforcement action, which held an AI company responsible for potential third-party misuse of its tool even though the tool was neutral and served legitimate purposes, split the Commission. Supporters of this approach argue that the FTC must bar AI tools from the market if they could foreseeably be used to perpetrate deceptive practices. Under such a legal standard, AI companies would either need to stop producing certain tools altogether or ensure their technologies cannot possibly facilitate consumer harm, which can be prohibitively expensive. These supporters advocate for proactively foreclosing the introduction of new technologies that bring new risks, urging regulators to act before widespread harm occurs.
Others warn that holding AI companies liable for downstream use of their technologies could create uncertainty and discourage technological advancement. There is reason to believe the FTC is moving toward this position and away from its previous, more interventionist posture: Commissioner Holyoak and now-Chairman Ferguson both dissented in Rytr when they were in the minority. Unclear guidelines may make companies hesitant to develop AI tools for fear of enforcement or regulatory action. On this view, government intervention to address AI should target actual harm, not hypothetical risks, and liability should rest primarily with those who misuse the technology rather than with its developers.
The FTC plays a crucial role in addressing UDAP conduct, including when it involves AI applications. However, where its enforcement actions seek to hold liable the producer of items that are not inherently harmful but may be used to harm others, there is a significant risk that the agency oversteps its congressional mandate. By using informal regulatory tactics, such as sweeping investigations and preemptive enforcement measures, the FTC may be shaping AI governance in ways that bypass traditional legislative processes. Regulatory uncertainty can stifle market confidence and slow down beneficial innovation.
Striking a balance between consumer protection and AI-driven innovation remains a key challenge. As U.S. AI innovators face increasingly strong global competition, policymakers must weigh the risks of AI-related harm against the dangers of regulatory overreach. AI oversight should focus on clear cases of deception or fraud, ensuring that enforcement actions rely on concrete evidence rather than theoretical concerns. Otherwise, regulatory uncertainty risks slowing AI progress at a time when competition and innovation are accelerating.