In the last few months, Congress has held no fewer than four hearings in different committees on artificial intelligence (AI). Each committee has examined a different aspect of the issue, and members and witnesses have presented a number of policy solutions to address the questions AI poses. While policymakers in both the House and Senate have championed several different approaches, the strongest policies have a few things in common. As we’ve watched these hearings, we’ve seen a few important themes emerge on next steps for AI.

The first and most important step Congress can take to regulate AI successfully is to pass a national privacy law. We have been advocating for a national privacy law for years, and this year App Association President Morgan Reed testified on the importance of privacy legislation from the healthcare perspective at a hearing of the House Energy and Commerce Committee. As AI has moved to the forefront of the national conversation, our writing has focused on the importance of privacy and security in AI (read part one and part two of that blog series). We have also written more generally about what small businesses need in a privacy law. With all this history in mind, we applaud the Energy and Commerce Committee for focusing on the importance of comprehensive privacy legislation to address the privacy and security risks AI can present. Many aspects of a law like last Congress’ American Data Privacy and Protection Act (ADPPA, H.R. 8152, 117th Cong.) would address the concerns AI has highlighted. We urge Congress to pass such a national privacy law as soon as possible.

Another key point that has come up in several hearings is that federal agencies already have much of the authority they need to regulate and enforce rules that apply to conduct involving the use of AI. As Amba Kak, executive director of the AI Now Institute, said in the Energy and Commerce Committee’s recent hearing, “there is no AI-shaped exemption to the laws on the books.” This summary is on point. The Federal Trade Commission (FTC), Federal Communications Commission (FCC), Food and Drug Administration (FDA), and others already enforce laws that apply to a variety of entities and commercial activities, and none of those entities or activities can escape these agencies’ reach just because they use AI to perform a task or inform their decisions. But that doesn’t mean it’s necessarily clear to those federal agencies how the law should apply in seemingly novel circumstances where AI is involved, and they are all developing the depth of expertise necessary to evaluate AI systems successfully. Congress should support this ongoing work and capacity building rather than introducing additional friction through increased red tape. Many innovative technologies, especially those in healthcare with a software component, already spend a significant amount of time in clearance processes at the FDA, only to start over with extremely similar processes at the Centers for Medicare & Medicaid Services. We don’t want this situation repeated in arenas outside healthcare.

Members on both the House and the Senate sides have talked about the importance of transparency in AI, especially in the algorithms and data used to train AI models. In the recent Senate Commerce Committee hearing on transparency in AI, senators and witnesses alike pointed out the importance of the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF). This voluntary framework breaks the AI risk management task into straightforward components: “Govern, Map, Measure, and Manage.” It also emphasizes that success in each of these components requires AI developers and users to cultivate trust and employ transparency, illuminating a way forward for responsible AI implementation. The NIST AI RMF is one way to evaluate and manage the risks of various uses of AI. We agree with Senator Klobuchar’s statements in the hearing that developers and users have a responsibility to mitigate risk, and we agree that different levels of inherent risk of harm merit different levels of testing and assessment. A risk-based approach will help developers determine the right level of scrutiny for their products.

Overall, Congress is doing a good job working through the more difficult issues related to AI. We hope that members of both chambers will continue to prioritize privacy legislation, build capacity in existing federal agencies, and pursue a risk-based approach to AI regulation. Small businesses can’t afford to innovate at the speed of government, so they need Congress to establish rules that provide a solid foundation for innovation while protecting consumers.