Artificial Intelligence (AI) isn’t just a buzzword; it’s a serious topic for tech innovators and policymakers around the world. As the app economy embraces this complex and evolving technology, lawmakers and regulators in various regions are racing to be among the first to issue rules and guidelines for AI. But before these new rules go any further, the voice of our small business members must be considered.

We’ve made a few changes and updates to our AI principles since we first released them back in 2021, many of them inspired by new privacy laws and the ChatGPT boom that brought AI to the forefront of global consciousness. Based on those changes and the timeline over which they were applied, it’s clear that if policymakers slap together a heavy set of rules based on what we know today, those rules will be outdated within five to 10 years at best. The recommendations we’ve published below provide a flexible, thoughtful approach to AI that considers the latest data privacy and security laws as well as the pros, cons, and challenges of AI in fields as diverse as healthcare, education, software development, and cybersecurity. You can find our AI Principles, along with our members’ insights and expertise, in their entirety here.

AI Principles

  1. Understand the technology

There’s no one-size-fits-all definition of AI, since it comes in so many forms, but breaking it down into basic categories and further defining those is an essential first step. Policymakers should understand that AI is a large umbrella covering a wide range of tools that can be divided in many ways, including into these three categories: 1) Machine Learning, 2) Deep Learning, and 3) Generative AI.
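
For readers who want that three-way split made concrete, here is a minimal, illustrative Python sketch; it is not part of the published principles. The toy dataset, the scikit-learn models, and the word-level bigram chain standing in for generative AI are all illustrative assumptions, chosen only to contrast how each category behaves.

```python
# Illustrative sketch only: toy stand-ins for the three AI categories above.
import random

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# 1) Machine Learning: a classical statistical model learns a decision
#    rule directly from labeled examples.
ml_model = LogisticRegression(max_iter=1000).fit(X, y)

# 2) Deep Learning: a neural network (tiny here) learns layered
#    representations of the same labeled data.
dl_model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                         random_state=0).fit(X, y)

# 3) Generative AI: rather than labeling inputs, the model produces new
#    content. A word-level bigram chain is a toy stand-in for an LLM.
corpus = ("policymakers should understand the technology before "
          "they regulate the technology").split()
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(start, length=6):
    """Sample a short word sequence from the bigram table."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print("ML training accuracy:", ml_model.score(X, y))
print("DL training accuracy:", dl_model.score(X, y))
print("Generated text:", generate("policymakers"))
```

The point of the contrast: the first two systems map inputs to labels (with deep learning distinguished by its layered architecture), while the third produces new content, which is why the three categories raise different policy questions.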

“I think a lot of people need education about the way that AI works … But I think once people understand it more, and once there’s more … I think it will have a net benefit to society.” – Suzanne Borders, CEO and Co-Founder, BadVR

  2. Understand how current law applies to AI

Federal, state, and local laws that prohibit harmful conduct apply equally to activities involving AI. We see this with the Federal Trade Commission (FTC) Act’s prohibition on unfair or deceptive practices. The use of AI does not exempt companies from adhering to these legal standards. It’s crucial for both federal and state agencies, as well as Congress, to carefully consider how these current laws intersect with AI’s unique challenges to avoid creating redundant or conflicting new regulations.


  3. Quality assurance and oversight

Policy frameworks for AI should adopt risk-based approaches to align with standards of safety, efficacy, and equity. Risk management should focus on distributing and mitigating liability, and on incentivizing those in the value chain who are best positioned to minimize risks. Efforts should center on ensuring AI is safe and equitable and on encouraging AI developers to use rigorous procedures and documentation.

“Ensuring things like compatibility and data transfer security compliance is crucial. A similar framework or compliance process for AI will eventually be needed.” – Tomas Navratil, Project Manager and Digital Strategist, LucidCircus

  4. Thoughtful design

Policy frameworks should foster AI systems that are grounded in practical workflows and user-centric design, aiming to improve service delivery and efficiency for both consumers and businesses. Collaboration among users, developers, and other stakeholders is crucial to reflect diverse perspectives in AI development and to ensure its success.


  5. Access and affordability

Because scaling AI systems may require significant resources, policy frameworks should foster the creation and growth of accessible and affordable AI products. Policymakers should also ensure that frameworks avoid policies and language that would limit the use of AI technology in accessibility use cases.


  6. Research and transparency

Policy frameworks must back AI research and development by ensuring adequate funding and enhancing the capability of innovators and researchers to access and analyze diverse data sources. Emphasizing research on AI transparency, including its costs and benefits, and fostering stakeholder collaboration is essential for managing AI-related risks effectively.

“Artificial intelligence has been used in coding for the longest time. We use AI to review our code, ensuring we avoid common human errors before releasing it to even our test environments.” – Parag Shah, Founder, Vēmos


  7. Modernized privacy and security frameworks

Policy frameworks need to tackle privacy and consent issues arising from AI’s use of sensitive data, ensuring robust privacy controls for consumers. These frameworks should balance data protection with the progression of AI, avoiding excessive restrictions on data processing, while enforcing data minimization, consent requirements, and consumer rights.


  8. Bias

Addressing bias and errors in AI, especially in machine learning systems, is critical. Developers and users should commit to diversity and equity best practices to mitigate biases affecting marginalized groups. Regulatory bodies need to examine data origins and bias in AI development and use, ensuring AI does not harm users or lead to unlawful discrimination.

“What we do here at Shtudy is help tech talent from diverse communities get jobs at top tech companies. And a big challenge across the board, across the entire recruitment and staffing industry, is how to overcome biases throughout the hiring process.” – Geno Miller, Founder and CEO, Shtudy

  9. Ethics

AI success hinges on ethical usage, and policy frameworks should reinforce ethical norms at all stages of AI development and use. This includes adhering to international human rights standards, ensuring inclusivity across diverse groups, and protecting sensitive user information, thereby benefiting consumers broadly.


  10. Education

Policy frameworks need to focus on educating users and consumers about AI, including aspects like copyright law and intellectual property (IP) licensing in AI contexts. Additionally, academic curricula should advance understanding and ethical application of AI solutions, while also promoting AI success stories and engaging stakeholders to adapt to new AI opportunities and challenges.

  11. Intellectual property

Safeguarding IP rights is essential for AI’s progress. Policymakers, while crafting AI governance strategies, must consider the applicability of existing legal protections in AI-related contexts and ensure that compliance rules and mandates do not undermine IP or trade secrets.


“Anything that’s AI-generated is noncopyrightable. Not all of it is defensible. You have no rights and no protection to your work. Anybody can take it and run with it. And you won’t be able to get any protections or income defenses.” – Marc Fischer, CEO and Co-Founder, Dogtown Media

Moving Forward  


App Association members have been using AI for decades, and even with the more advanced technologies in use today, one thing still rings true: humans are always responsible for AI’s output. As policymakers around the world continue to explore AI applications, use cases, and regulatory responses to the technology, we hope to see a mindful approach that embodies our AI principles, empowering the innovators creating this technology and protecting the people who use it.

“We’ve been using it forever, the type of AI that’s popular today … We have a bunch of companies in the portfolio that are using AI, and it’s not in the areas you’d expect them to.” – Stephen Forte, Co-Founder and Managing Partner, Fresco Capital