The onslaught of innovative technologies built on artificial intelligence (AI), and the vast amounts of data AI processes and produces, has sparked a flurry of legislative activity across the nation. With more than 400 state-level proposals aimed at regulating AI deployment and an additional 500 state-level proposals addressing various facets of privacy, it’s evident that policymakers are grappling with the complex relationship between these two realms. Yet amid this fervor, a crucial question remains largely unaddressed: what will come first, robust national privacy regulations or interventions aimed at an assortment of other risks, many of which fall under AI governance?

It’s akin to the age-old chicken-and-egg debate, but this time we’ll bear witness to the result. Will lawmakers prioritize safeguarding individuals’ privacy rights and create guidelines for business compliance before delving into the largely as-yet-unknown risks of AI in all of the other contexts in which it will be used? Or will they forge ahead with regulating AI itself, hoping to mitigate privacy concerns along the way?

Here’s a hot take:

Any state or country that hasn’t established comprehensive privacy legislation ought to hit the pause button on AI regulation efforts. Why? Because effective AI governance inherently relies on a solid foundation of privacy protection. Without clear guidelines on how personal data should be collected, stored, and used, attempts to regulate AI usage are akin to building a house on shaky ground.

AI Governance v. Privacy Legislation

These days, our data-driven economy crosses state lines, whether you’re a mom-and-pop car repair shop or a craft maker on Etsy. It is increasingly difficult to sustain a system in which individuals’ privacy rights and companies’ obligations differ significantly depending on which state they call home. Digital privacy is a national issue, which makes federal policymakers key players here. Congress has thus far encountered insurmountable challenges in adopting a national privacy framework, but it is not giving up. In the latest development, leaders of the House Energy and Commerce Committee and the Senate Commerce Committee released the American Privacy Rights Act (APRA), a draft bill that builds on the significant framework that advanced with strong support last year but didn’t make it all the way through. Whether this draft strikes the careful balance Congress needs remains to be seen, and we are analyzing it carefully.

Imagine attempting to craft legislation to govern AI algorithms while ignoring the critical issues of bias, discrimination, and data misuse. It’s like trying to fix a leaky faucet without first turning off the water supply. Any regulatory framework for AI must be underpinned by robust privacy laws to ensure that individuals’ rights are protected in an increasingly data-driven world.

Furthermore, comprehensive privacy legislation should not only safeguard individual rights but also provide much-needed clarity for businesses, particularly startups and small to medium-sized businesses like our members. These companies often lack the resources to navigate a fragmented patchwork of privacy regulations, a patchwork that is already being stitched together as 16 states adopt their own comprehensive privacy laws. A uniform set of guidelines ensures a level playing field, fostering innovation while upholding ethical standards.

But what exactly does comprehensive privacy legislation entail? At its core, it should empower individuals with more meaningful control over their personal information, granting them the right to access, correct, and delete data held by companies. It should also impose strict obligations on organizations regarding data security, reasonable data minimization, and transparency, fostering trust between businesses and individuals.

Privacy legislation should extend beyond traditional notions of privacy to encompass emerging technologies like AI and biometrics. Retrofitting existing laws to accommodate new advancements is not enough; we need forward-thinking, flexible laws that anticipate future challenges while providing ample room for the entry of new firms and technologies.

In Conclusion

Of course, drafting and enacting comprehensive privacy legislation is no small feat. It requires collaboration among policymakers, industry stakeholders, and advocacy groups to strike the right balance between innovation and protection. Any regulatory framework or ethical guidelines for AI must recognize the pivotal role of data and address the potential risks associated with its misuse. Consider the chicken-and-egg analogy a cautionary tale: just as the well-being of the egg depends on its protective shell, the responsible and ethical use of AI relies on a solid foundation of data privacy. The alternative, rushing into AI regulation without addressing underlying privacy concerns, is a recipe for disaster.

The relationship between AI and privacy is symbiotic. Attempting to regulate one without considering the other is a futile endeavor. As policymakers navigate this intricate landscape, they must prioritize privacy rights as the legal foundation undergirding any potential future interventions in AI governance. Only then can we harness the transformative potential of AI while safeguarding individuals in the digital age.