https://artificialintelligenceact.eu/high-level-summary

The AI Act classifies AI according to its risk:

  • Unacceptable-risk AI is prohibited (e.g. social scoring systems and manipulative AI).
  • Most of the text addresses high-risk AI systems, which are regulated.
  • A smaller section covers limited-risk AI systems, which are subject to lighter transparency obligations: developers and deployers must ensure that end-users are aware they are interacting with AI (e.g. chatbots and deepfakes).
  • Minimal-risk AI is unregulated (including the majority of AI applications on the EU single market as of 2021, such as AI-enabled video games and spam filters; this is changing with generative AI).

The majority of obligations fall on providers (developers) of high-risk AI systems.

  • Obligations apply to providers that intend to place on the market or put into service high-risk AI systems in the EU, regardless of whether they are based in the EU or a third country.
  • They also apply to third-country providers where the high-risk AI system’s output is used in the EU.

Users are natural or legal persons who deploy an AI system in a professional capacity, not affected end-users.

  • Users (deployers) of high-risk AI systems have some obligations, though fewer than providers (developers).
  • This applies to users located in the EU, and to third-country users where the AI system’s output is used in the EU.

General-purpose AI (GPAI):

  • All GPAI model providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of the content used for training.
  • Providers of free and open-licence GPAI models only need to comply with the Copyright Directive and publish the training data summary, unless their models present a systemic risk.
  • All providers of GPAI models that present a systemic risk – open or closed – must also conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections.