Generative artificial intelligence (GAI) has been shown to support small business innovation in several ways. For example, small companies use GAI to reduce time and cost barriers, improve efficiency, open new avenues for expansion, and otherwise enable innovation. Yet potential consequences of using GAI as a commercial tool, including the unintentional facilitation of malicious and nonconsensual use of someone’s likeness, have given rise to concerns, prompting legislative proposals aimed at reducing the risks associated with GAI technologies. But the rush to legislate against new challenges in AI may inadvertently suppress innovation and stunt the growth of the small business community ACT | The App Association represents.

Previously, we summarized a Senate-side effort to develop a federal right of publicity: the draft Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act. The right of publicity, which protects against unauthorized commercial use of an individual’s identity, exists in most U.S. states. Yet the rapid development of GAI has prompted lawmakers to consider establishing federal rights, focusing more on creating new property rights than on safeguarding existing privacy rights. The NO FAKES Act has yet to be introduced in the Senate and could benefit from significant improvements to align with U.S. stakeholders’ interests, including those rooted in the First Amendment. Unfortunately, a similar bill introduced in the House goes a step further, developing a broad federal right of publicity, an expansion that could do more harm than good.

On January 10, 2024, lawmakers in the House of Representatives announced the introduction of a new bill, the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act of 2024 (No AI FRAUD Act). The bill is intended to safeguard the property rights associated with an individual’s likeness and voice by targeting the unauthorized use of technological tools it calls “personalized cloning services.” However, these terms are broadly defined. A “personalized cloning service” is defined as an “algorithm, software, tool, or other technology, service, or device the primary purpose or function of which is to produce one or more digital voice replicas or digital depictions of particular, identified individuals.” This definition could encompass most consumer electronic devices, including smartphones and tablets. The definitions of “digital voice replica” and “digital depiction” are similarly broad and could include any reproduction of a person’s likeness or voice. The No AI FRAUD Act thus extends beyond the scope of the NO FAKES Act, which specifically aimed to impose greater liability for unauthorized digital reproductions of an individual’s image, voice, and visual likeness in audiovisual works or sound recordings.

Significantly, by establishing a property right, the legislation’s provision for the transferability of such broadly defined rights to one’s likeness may exacerbate existing challenges with the right of publicity. While the bill does restrict transfers to cases where the individual is represented by legal counsel, is at least 18 years old or otherwise has court approval, or is covered by a collective bargaining agreement, these measures fail to adequately address concerns like the imbalance of negotiating power among parties. Consequently, the bill falls far short of guaranteeing fair licensing agreements, and such “freely transferable” federal rights are unlikely to serve individuals’ interests.

Although the No AI FRAUD Act acknowledges First Amendment rights, it lacks specific exclusions, such as those found in parallel state laws, and contains only broad protective language accompanied by certain “factors” to be considered. U.S. litigation norms indicate that ambiguities surrounding liability in free speech cases will leave courts with the ongoing challenge of clearly defining legal boundaries. Against this backdrop, the No AI FRAUD Act, with its broad definitions of “personalized cloning services,” “digital voice replica,” and “digital depiction,” introduces expansive and uncertain new causes of action and is likely to significantly increase litigation risks for small businesses. For instance, a developer whose app features AI-generated content from third parties, over which the developer has no direct oversight, may become a target for legal action if that content becomes contentious. Such a dynamic threatens to constrain the legitimate application of AI technologies in content creation and dissemination, stifling innovation and hindering the growth of businesses that rely on these modern tools.

The No AI FRAUD Act’s broad stipulations and harsh allocation of liability pose an undue threat to the legitimate use of AI in content creation and call for more nuanced federal guidance and legislation. Congress must consider a narrower, more balanced proposal aimed at creating a federal right of publicity that safeguards name, image, voice, and digital likeness without hindering innovation, including by App Association members. In an era of rapidly advancing AI technologies, it is crucial to consider how new government intervention could unintentionally undermine the very objectives legislation seeks to achieve. Instead, we encourage policymakers to first focus on bolstering existing laws and to consider how any proposals affect rights grounded in the Constitution.