The EU has once again beaten the rest of the world to the regulatory punch, this time on artificial intelligence. After six months of negotiations, the European Commission, the Council of the European Union, and the European Parliament have reached a political agreement on the AI Act. The parties are still negotiating the final text of the Act, and the provisional agreement has not yet been published, but here are the highlights:
Applicability. The Act is focused on use cases, so it will apply to AI developers as well as businesses that deploy third-party AI tools. If a business places an AI system on the market or puts it into service in the EU, or if the system’s outputs are intended to be used in the EU, the Act will apply – regardless of where the business is located.
Penalties. The penalties for violating the AI Act are predictably steep: the higher of 7% of global annual turnover or €35 million for prohibited use cases, and the higher of 3% of global annual turnover or €15 million for violations of other provisions of the Act.
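For a rough sense of how the higher-of cap plays out, here is a minimal illustrative sketch; the turnover figure is hypothetical, and the actual fine in any case would be set by regulators under the final text:

```python
# Illustrative only: the "whichever is higher" penalty cap under the provisional agreement,
# computed for a hypothetical company with EUR 1 billion in global annual turnover.

def max_penalty(turnover_eur: float, pct: float, fixed_cap_eur: float) -> float:
    """Return the maximum fine: a percentage of global annual turnover
    or a fixed amount, whichever is higher."""
    return max(pct * turnover_eur, fixed_cap_eur)

turnover = 1_000_000_000  # hypothetical global annual turnover

# Prohibited use cases: 7% of turnover or EUR 35 million
print(max_penalty(turnover, 0.07, 35_000_000))  # 70000000.0 -> the 7% figure governs

# Other violations: 3% of turnover or EUR 15 million
print(max_penalty(turnover, 0.03, 15_000_000))  # 30000000.0 -> the 3% figure governs
```

For smaller companies the arithmetic flips: below €500 million in turnover, the fixed €35 million cap exceeds the 7% figure and becomes the binding maximum.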
Risk-Based Approach. The Act classifies AI use cases by level of risk – unacceptable, high, or minimal – and imposes requirements accordingly.
Unacceptably Risky Use Cases. The Act prohibits these use cases outright. This category includes the following:
- Manipulation of human behavior to circumvent users’ free will;
- Social scoring;
- Certain applications of predictive policing;
- Emotion recognition systems used in the workplace or educational institutions; and
- With some exceptions related to serious crimes, real-time remote biometric identification for law enforcement purposes in public.
High-Risk Use Cases. A use case will qualify as high risk if it poses a significant risk to an individual’s “health, safety, fundamental rights, environment, democracy and the rule of law.” This includes AI used in medical devices, vehicles, human resources, education, elections, critical infrastructure, law enforcement, administration of justice, biometric identification, and anything that profiles individuals.
Requirements for high-risk uses include:
- A pre-launch conformity assessment to demonstrate compliance with trustworthy AI principles (data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness);
- Registration in a public EU database of high-risk AI systems; and
- Fundamental rights impact assessments.
Foundation Models. The Act deviates from its use-case focus when it comes to foundation models, or General Purpose AI (GPAI). The Act imposes a variety of transparency requirements on all GPAI developers, obligating them to share technical documentation and training data summaries with downstream developers. The Act also requires developers to put in place policies to ensure they respect copyright law when training their models, and it imposes labelling requirements on certain AI-generated content.
Similar to the White House’s Executive Order, the Act imposes special requirements on GPAIs trained with large amounts of computing power, given that they may pose “systemic risks.” (The Act’s compute threshold – reportedly 10^25 floating-point operations, versus the Executive Order’s 10^26 – is lower, which means that current foundation models are likely to be covered.) Developers of this subset of GPAIs will be required to assess and mitigate risks, report serious incidents, conduct state-of-the-art tests and model evaluations, ensure cybersecurity, and provide information on the energy consumption of their models.
Minimal-Risk Use Cases. For minimal-risk use cases, the Act states that businesses can voluntarily adhere to transparency requirements and codes of conduct.
Effective Date. As with the GDPR, there will be a grace period before the law is enforceable:
- Restrictions on prohibited AI systems will go into effect 6 months after the Act enters into force;
- General Purpose AI requirements will become enforceable in 12 months; and
- The rest of the AI Act will go into effect in 2 years.
Exemptions. There are several notable exemptions in the Act. Free and open-source models are not covered, unless the model is a GPAI that poses systemic risk. The Act also doesn’t apply to research and development activities before an AI system is placed on the market. Nor does it apply to AI systems used exclusively for military, national security, or defense purposes.
Again, the text of the Act isn’t final, so a lot of the details remain to be seen. But just as the GDPR changed the game for privacy law, we can be sure the AI Act will do the same for AI.