Early choices in AI policy will shape how the technology is deployed across the economy. Policymakers are responding to public concerns around safety, accountability, and trust. But the AI ecosystem is dynamic and still evolving, with infrastructure providers, model developers, and many application developers building tools for numerous uses.
That matters for policy design. Rules that treat AI as a single category or assume a static market tend to slow deployment and raise compliance costs where much of the societally beneficial innovation is happening, especially for small businesses.
Good AI policy focuses on harmful conduct (rather than the underlying technology), uses existing legal frameworks where they fit, and addresses genuine gaps with targeted rules. Bad policy defines the technology broadly, attaches sweeping liability, and relies on rules that are easy to write but hard to interpret and enforce.
New York’s Senate Bill S7263 is a perfect example of what not to do.
A concrete example of a broader problem
S7263 creates liability for chatbot proprietors, broadly defined as “any person, business, company, organization, institution or government entity that owns, operates or deploys a chatbot system,” when a chatbot provides substantive responses, information, or advice, or takes actions, that would amount to unauthorized practice if performed by a human, across a range of licensed professions and legal services.
The intent is straightforward. Policymakers want to prevent harm from tools that may mislead users or operate in ways that resemble professional services without appropriate safeguards.
The bill builds a liability framework around whether a chatbot provides “substantive” responses that could be interpreted as the unauthorized practice of a licensed profession. It also applies broadly to any “proprietor,” defined to include those who own, operate, or deploy chatbot functionality into products and services, while excluding third-party licensors.
On paper, this approach appears to address risk. However, in practice, it introduces uncertainty at the point where products are designed and deployed.
Where implementation becomes the issue
The term “substantive” is central to the bill, but it is not clearly defined in a way that translates into product decisions.
Developers are left to interpret whether common interactions fall within its scope. A parent noticing a rash on their child may ask what symptoms should trigger urgent care. A tenant may ask what a lease clause means in plain English after receiving a notice. A small business owner may ask whether a renovation requires an architect, an engineer, or a contractor before starting a project. These are ordinary threshold questions that help people understand what is happening and prepare for next steps. They are the kinds of questions that make widely available chatbots so useful to people trying to inform their own thinking and analysis.
When the law is unclear, the product response is predictable. Companies narrow features, filter more aggressively, or avoid offering the tool altogether. That is how teams manage legal risk when rules are open-ended and enforcement is uncertain.
The impact on access and real-world use
In many situations, users are not choosing between a chatbot and a licensed professional. They are choosing between a chatbot and fragmented sources of information.
A tenant facing a notice may otherwise piece together advice from search results, forum posts, and text messages with friends. A patient reviewing a lab result at night may turn to online forums or scattered articles before their doctor’s office opens. A first-time small business owner trying to understand permits may move between city websites, PDFs, and informal advice from other business owners.
These sources vary widely in quality and reliability. Chatbots can play a narrower but still useful role. They can explain terminology, organize information, and help users identify what questions to ask before seeking professional advice. A caregiver comparing treatment options for an elderly parent may use a chatbot to understand basic terms before speaking with a clinician. A worker trying to understand their employment classification may use a chatbot to prepare for a conversation with a lawyer or agency.
Reducing the availability of these tools does not remove the underlying need. It changes how people attempt to meet it.
Notably, the bill’s definition of “proprietor” reaches beyond large AI companies to startups and small businesses that integrate chat tools into their products. Those firms have tighter margins and fewer legal resources. As a result, broad liability rules hit smaller developers first, leading to scaled-back features, fewer new use cases, and slower deployment.
What does a better approach to AI policymaking look like?
S7263 reflects a broader problem in AI governance: a tendency to regulate the technology in the abstract rather than the specific ways it is used in practice.
A more effective approach starts from a different premise. AI systems are general-purpose tools that support a wide range of applications, from low-risk informational support to high-stakes decision-making. As we have outlined in our AI policy principles, policymaking should reflect those differences.
First, policy should focus on harmful conduct, not the mere use of AI. The relevant question is not whether a chatbot is involved, but whether a specific use creates a meaningful risk of harm, such as misrepresentation, impersonation of licensed professionals, or advice that users reasonably rely on in place of professional judgment.
This bill takes a different approach. It ties liability to whether a chatbot’s output resembles licensed professional activity, regardless of whether harm occurs or risk is meaningfully present. That framing creates uncertainty and captures a wide range of benign uses.
Second, obligations should be allocated across the AI value chain based on who can mitigate risk. The AI ecosystem includes infrastructure providers, model developers, and application developers. Not all actors have visibility into or control over how systems are used in practice. Assigning liability broadly at the point of deployment, without regard to control or context, creates incentives to limit functionality rather than improve safety.
Third, laws must be clear enough to translate into product design decisions. Terms like “substantive response” are easy to state but difficult to operationalize. Developers need to know what is permitted, what requires safeguards, and what crosses the line. Without that clarity, the default response is to over-filter or withdraw features altogether.
Fourth, policy should preserve space for low-risk, informational uses that expand access. Many chatbot interactions are not a substitute for professional services. They help users understand terminology, organize information, and prepare for next steps. These use cases can improve outcomes, particularly for individuals who may not otherwise have immediate access to professional advice. Policymaking should avoid collapsing these uses into the same category as high-risk decision-making.
Finally, policymakers should build on existing legal frameworks where possible. Longstanding doctrines around consumer protection, misrepresentation, and the unauthorized practice of professions already address many of the underlying harms. The challenge is not always the absence of rules, but how those rules are applied in the context of new technologies.
Taken together, this approach supports a more durable model of AI governance, one that targets real risks, provides workable compliance pathways, and preserves the ability of small businesses to build and deploy useful tools.