At the IAPP AI Governance Conference in Boston in mid-September 2025, ACT | The App Association’s Brian Scarpelli moderated a timely and thought-provoking panel titled “How Should Liability Be Distributed in the AI Value Chain?” The discussion brought together legal, technical, and policy experts to unpack one of the most pressing questions in artificial intelligence today: who bears responsibility for AI outcomes across the value chain?
Panelists Jodi Daniel (Wilson Sonsini), Steve Herman (Fishman Haygood LLP), and Scott Weiner (NeuEon, Inc.) dove into how AI systems are developed and deployed today. They explored how accountability for safety and efficacy can and should shift among developers, deployers, and end users—depending on each actor’s knowledge and ability to mitigate identified risks.
With strong audience engagement, the session illuminated key uncertainties surrounding AI liability, including:
- How fault and causation should be attributed when autonomous behavior contributes to harm;
- What constitutes “reasonable care” in AI design, training, and deployment;
- How to apportion responsibility when multiple parties indirectly contribute to an AI-driven incident;
- Whether AI should be treated as a product (implying strict liability) or a service (implying a negligence standard);
- Who bears responsibility when users reasonably rely on faulty AI advice; and
- How AI affects discovery and expert testimony in litigation.
While these and other questions continue to evolve under state and federal tort law, Brian highlighted that the App Association’s AI Roles and Interdependencies Framework provides a useful foundation for understanding how responsibility might be fairly distributed throughout the AI ecosystem. Developed with input from small business members, the framework delineates the distinct yet interconnected roles of AI actors—from foundational model developers to end users—and proposes principles for assigning responsibility based on each actor’s knowledge and capacity to mitigate risk.
This technology-neutral model gives policymakers a practical tool for promoting responsible AI innovation without constraining small businesses’ ability to compete. As governments worldwide evaluate new AI liability regimes, the App Association and its members continue to advocate for the perspectives of the small, creative companies driving much of today’s AI progress. We urge policymakers to work with the small business community to create frameworks that advance transparency, ensure proportionate accountability, and foster mutual understanding across the AI value chain. Such a collaborative approach will go a long way toward securing a more trustworthy and innovative AI future.
Learn more: AI Roles and Interdependencies Framework