This blog is part of a series on three important documents the European Commission (EC) released on February 19, 2020, to outline its digital strategy and policy objectives.
TL;DR: The European Commission’s White Paper on Artificial Intelligence (AI)
The white paper on AI is part of the digital package the EC released in February. It sets out policy options to develop and deploy trustworthy AI while addressing its risks. These policy options include:
- Aligning Member State approaches to AI and eliminating barriers to the digital single market
- Using a risk-based and sector-specific approach to regulate AI
- Determining high-risk applications and sectors in advance
- Ensuring that high-risk AI systems comply with safety, fairness, and data protection requirements by assessing them ahead of market release and imposing new regulatory requirements
- Leveraging the size of the European market to export the EU’s approach to AI around the world
While this white paper is not a regulatory document, we appreciate the Commission’s attention to supporting innovative small and medium enterprises (SMEs), and we applaud its risk-based approach. However, it’s crucial that the EU’s approach, while comprehensive, continues to allow for development and innovation, especially by SMEs. Several aspects, such as SME support, the high-risk assessment, and the record-keeping requirements, lack detail, and their potential impacts will need to be carefully examined. The App Association looks forward to sharing feedback from our members during the public consultation period.
The White Paper on AI is one of three documents the EC released as part of its “Shaping Europe’s Digital Future” strategy on February 19, 2020. EC President Ursula von der Leyen initially promised legislation on AI within the first 100 days of her Commission; this white paper is meant to be the first step in that direction.
Breaking it Down: A European Approach to Excellence and Trust in Artificial Intelligence
The EC has structured this white paper around two main building blocks:
- Creating “ecosystems of excellence” by aligning AI efforts along the entire value chain at the European, national, and regional levels through a comprehensive policy framework.
- Creating “ecosystems of trust” through a regulatory framework for AI that ensures a human-centric approach and compliance with EU rules, especially for high-risk AI applications.
An Ecosystem of Excellence: A Policy Framework for AI
The EC wants to build what it calls “an ecosystem of excellence” to support the development and uptake of AI across the EU economy and public administration, based on several areas of action. The EC intends to:
- Review the 2018 Coordinated Plan on AI to ensure efficient cooperation between Member States. A first revision of the plan is expected to be adopted at the end of 2020.
- Address fragmentation in the research sector and facilitate the establishment of AI testing centres that combine public and private instruments.
- Update the Digital Education Action Plan and the Reinforced Skills Agenda, and introduce specialized AI master’s programmes to strengthen digital education and attract global talent.
- Create one AI-specialized Digital Innovation Hub per Member State and strengthen the AI-On-Demand platform to foster cooperation between SMEs. To ease access to finance for SMEs and start-ups, the EC plans to build on its pilot investment fund in AI and blockchain (100 million EUR in Q1 2020) and scale up the InvestEU guarantee for AI.
- Use a broad-based public-private partnership (PPP) to ensure private sector involvement in the innovation and research agenda and adequate levels of co-investment.
- Establish an “Adopt AI Program” to support the public procurement of AI systems, based on sector dialogues on the development and adoption of AI in healthcare and public services.
- Allocate 4 billion EUR under the Digital Europe programme to boost high-performance and quantum computing to improve access to and management of data.
- Collaborate with other countries on AI, based on an approach that respects EU rules and values such as fundamental rights, human dignity, pluralism, inclusion, non-discrimination, and privacy/data protection.
An Ecosystem of Trust: A Regulatory Framework for AI
The EC identifies a lack of trust as the main reason why the broader uptake of AI is lagging. To address this, it wants to establish a clear European regulatory framework that builds trust in AI for consumers and businesses. The EC finds that the main risks relate to the application of rules designed to protect fundamental rights, such as privacy and non-discrimination, as well as to safety and liability.
The Commission identifies several characteristics of AI that make it difficult to ensure compliance and to enforce rules protecting fundamental rights: the black-box effect (opacity of algorithms), complexity, unpredictability, and autonomous behaviour. Businesses and enforcement authorities face legal uncertainty without stringent safety provisions. For individuals harmed by AI technologies, the lack of clear liability requirements may make it difficult to obtain compensation.
While several EU rules already protect fundamental and consumer rights, the EC proposes adjusting them to address the AI-specific risks.
- Effective Application and Enforcement of Existing EU and National Legislation: Considering the opaqueness of AI, adjusting/clarifying existing legislation on liability or fundamental rights may be necessary to ensure efficient enforcement.
- Limitations of Scope of Existing EU Legislation: Currently, it is unclear whether EU product legislation covers stand-alone software and services, so it may not cover AI technologies.
- Changing Functionality of AI Systems: Existing legislation does not adequately address how the inclusion of AI or machine learning into products can modify their functioning over time and the risks that come with such modifications.
- Uncertainty Regarding the Allocation of Responsibilities Between Different Economic Operators in the Supply Chain: EU product safety rules become unclear when AI is added to a product, after it has been placed on the market, by a party other than the producer. EU rules apply to the producer, while others in the supply chain are governed by national liability rules.
- Changes to the Concept of Safety: EU legislation doesn’t explicitly address the risks of using AI in products and services such as cyber threats, personal security risks, loss of connectivity, etc.
To ensure that the EU’s legal framework is ready for future technological developments, the EC concludes that a common EU-wide approach and new AI-specific legislation may be needed. This new regulatory framework is likely to follow a risk-based approach while trying to avoid disproportionate burdens for SMEs. To determine whether an AI application is “high-risk,” the EC proposes reviewing two cumulative criteria:
- The sector in which the application is being employed, based on a clear and exhaustive list of sectors in which “significant risks” are expected to occur, such as healthcare, transportation, energy, and the public sector.
- The use of the AI application itself, assessing the risk that a flaw in the application could cause injury or death, as well as material or immaterial damage.
For example, an appointment-booking system in a doctor’s office may be deployed in a high-risk sector (healthcare), but a flaw in the AI application would most likely not result in significant damages. On the other hand, there may be exceptions where the use of AI is considered high-risk per se regardless of the sector, such as using AI for recruitment processes or biometric identification.
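The cumulative test described above can be sketched in a few lines. This is a hypothetical illustration only: the sector list, the per-se use list, and the function name are our own placeholders, not the EC’s final definitions.

```python
# Hypothetical sketch of the white paper's cumulative "high-risk" test.
# Sector and per-se use lists are illustrative, not the EC's final lists.
HIGH_RISK_SECTORS = {"healthcare", "transportation", "energy", "public sector"}
PER_SE_HIGH_RISK_USES = {"recruitment", "biometric identification"}

def is_high_risk(sector: str, use: str, use_poses_significant_risk: bool) -> bool:
    """Both criteria must normally be met (cumulative test), except for
    uses considered high-risk per se, regardless of sector."""
    if use in PER_SE_HIGH_RISK_USES:
        return True
    return sector in HIGH_RISK_SECTORS and use_poses_significant_risk

# A doctor's appointment-booking system: high-risk sector, but a low-risk use.
print(is_high_risk("healthcare", "appointment booking", False))  # False
# Recruitment: high-risk per se, whatever the sector.
print(is_high_risk("retail", "recruitment", False))              # True
```

The key design point is that the two criteria are conjunctive by default, so a benign application in a sensitive sector is not automatically swept in.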
Any mandatory requirements included in a potential new regulatory framework would only apply to high-risk AI applications. Such requirements are likely to include:
- Training AI only on data sets that are sufficiently broad and representative, cover all relevant scenarios, and respect privacy and personal data protections.
- Keeping records regarding training/testing data sets, including the selection and characteristics of the data (in some cases the data sets themselves), and documentation of programming and training methodologies. Such records must be made available to authorities upon request.
- Providing clear information regarding the AI system’s capabilities and limitations, its intended purposes and expected level of accuracy, as well as informing citizens when they are interacting with AI rather than a human being.
- Ensuring that AI systems are robust and accurate, that outcomes are reproducible, and that the system can deal with attacks, errors, and/or inconsistencies during all life cycle phases.
- Guaranteeing some form of human oversight throughout the AI system’s lifecycle.
The EC may consider specific requirements for facial recognition. Current rules require biometric identification to be duly justified, proportionate, and subject to adequate safeguards. The Commission plans to launch a broad dialogue with Member States to determine the specific circumstances in which such use may be justified.
In a future regulatory framework, the Commission would like each requirement to be addressed to those actors that are in the best position to address potential risks. For example, during the development stage, this will be the developers, while during use phases the deployer may be subject to the relevant requirement. Requirements will apply to everyone who offers services and products in the EU market, no matter where they are located.
Before a high-risk application goes to market, the EC would require a conformity assessment. Fortunately, the EC is willing to provide some support structure for SMEs to lighten the burden this may place on them. For AI applications that are not high-risk, a voluntary labeling scheme may be established. Those that subject themselves to the requirements voluntarily would be awarded a label of quality, signaling the EU-approved trustworthiness of their products and services.
The EC proposes a European governance structure on AI in the form of a framework for cooperation of national competent authorities to avoid fragmentation. This structure could function as an information and best practices exchange, advisory on standardization activity and certification, and facilitate the implementation of the new legal framework. To ensure maximum stakeholder participation, the structure would consult consumers, businesses, researchers, and civil society on the implementation and development of the framework.
Finally, the EC asserts that AI should work for people and the good of society through an approach that supports the development and uptake of trustworthy AI.
We appreciate the Commission’s attention to supporting innovative SMEs, and we applaud its risk-based approach. While certain applications may require stricter regulation, many others may not need such strict rules. It remains unclear who will determine whether an application is high-risk or low-risk, and how; the risk framework requires further refinement and nuance. The record-keeping requirements’ potential to encourage data localization practices and forced disclosure of source code is also concerning and may need to be addressed in future legislation.
This white paper sets out the EU’s ambitions related to AI and indicates what may be included in future AI legislation. It’s a thoughtful foundation for working towards a regulatory framework for AI. The EC’s concerns about fundamental rights and its suggestion of an ex-ante review mark the “European approach” to regulation. Along with our member companies, we look forward to our continuing participation in this important policy conversation.