AI Act (non-official text, based on the provisional agreement)

Title VIII: Post-Market Monitoring, Information Sharing, Market Surveillance

Article 61: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data, which may be provided by deployers or which may be collected through other sources, on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.

Article 62: Reporting of Serious Incidents

1. Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market surveillance authorities of the Member States where that incident occurred.

1a. As a general rule, the period for the reporting referred to in paragraph 1 shall take account of the severity of the serious incident.

1b. The notification referred to in paragraph 1 shall be made immediately after the provider has established a causal link between the AI system and the serious incident, or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider or, where applicable, the deployer becomes aware of the serious incident.

Article 63: Market Surveillance and Control of AI Systems in the Union Market

1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. However, for the purpose of the effective enforcement of this Regulation:
(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Article 2(1) of this Regulation;
(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.

Article 63a: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

1. Where an AI system is based on a general purpose AI model, and the model and the system are developed by the same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system with the obligations under this Regulation. To carry out these monitoring and supervision tasks, the AI Office shall have all the powers of a market surveillance authority within the meaning of Regulation (EU) 2019/1020.

Article 63b: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

1. Market surveillance authorities shall have the competence and powers to ensure that testing in real world conditions is in accordance with this Regulation.

2. Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory sandbox under Article 54, the market surveillance authorities shall verify compliance with the provisions of Article 54a as part of their supervisory role for the AI regulatory sandbox.

Article 64: Powers of Authorities Protecting Fundamental Rights

3. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation in accessible language and format, when access to that documentation is necessary for effectively fulfilling their mandate within the limits of their jurisdiction.

Article 65: Procedure for Dealing with AI Systems Presenting a Risk at National Level

1. AI systems presenting a risk shall be understood as a product presenting a risk as defined in Article 3, point 19 of Regulation (EU) 2019/1020, insofar as risks to the health or safety or to fundamental rights of persons are concerned.

2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation.

Article 65a: Procedure for Dealing with AI Systems Classified by the Provider as Non-High-Risk in Application of Annex III

1. Where a market surveillance authority has sufficient reasons to consider that an AI system classified by the provider as non-high-risk in application of Annex III is high-risk, the market surveillance authority shall carry out an evaluation of the AI system concerned in respect of its classification as a high-risk AI system, based on the conditions set out in Annex III and the Commission guidelines.

2. Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action within a period it may prescribe.

Article 66: Union Safeguard Procedure

1. Where, within three months of receipt of the notification referred to in Article 65(5), or 30 days in the case of non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5, objections are raised by the market surveillance authority of a Member State against a measure taken by another market surveillance authority, or where the Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall evaluate the national measure.

Article 67: Compliant AI Systems Which Present a Risk

1. Where, having performed an evaluation under Article 65, after consulting the relevant national public authority referred to in Article 64(3), the market surveillance authority of a Member State finds that although a high-risk AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to fundamental rights, or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, without undue delay, within a period it may prescribe.

Article 68: Formal Non-Compliance

1. Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned, within a period it may prescribe:
(a) the CE marking has been affixed in violation of Article 49;
(b) the CE marking has not been affixed;
(ea) the registration in the EU database has not been carried out;
(eb) where applicable, the authorised representative has not been appointed;

Article 68a: Right to Lodge a Complaint with a Market Surveillance Authority

1. Without prejudice to other administrative or judicial remedies, complaints to the relevant market surveillance authority may be submitted by any natural or legal person having grounds to consider that there has been an infringement of the provisions of this Regulation.

2. In accordance with Regulation (EU) 2019/1020, complaints shall be taken into account for the purpose of conducting market surveillance activities and shall be handled in line with the dedicated procedures established therefor by the market surveillance authorities.

Article 68c: A Right to Explanation of Individual Decision-Making

1. Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system listed in Annex III, with the exception of systems listed under point 2, and which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety and fundamental rights, shall have the right to request from the deployer clear and meaningful explanations on the role of the AI system in the decision-making procedure and the main elements of the decision taken.

Article 68d: Amendment to Directive (EU) 2020/1828

In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council[1a], the following point is added: “(67a) Regulation xxxx/xxxx of the European Parliament and of the Council [laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (OJ L …)]”.

[1a] Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).

Article 68e: Reporting of Breaches and Protection of Reporting Persons

Directive (EU) 2019/1937 of the European Parliament and of the Council shall apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches.

Article 68f: Enforcement of Obligations on Providers of General Purpose AI Models

1. The Commission shall have exclusive powers to supervise and enforce Chapter/Title [general purpose AI models], taking into account the procedural guarantees by virtue of Article H. The Commission shall entrust the implementation of these tasks to the European AI Office, without prejudice to the powers of organisation of the Commission and the division of competences between Member States and the Union based on the Treaties.

2. Without prejudice to Article 63a(3), market surveillance authorities may request the Commission to exercise the powers laid down in this Chapter, where this is necessary and proportionate to assist with the fulfilment of their tasks under this Regulation.

Article 68g: Monitoring Actions

1. For the purposes of carrying out the tasks assigned to it under this Chapter, the AI Office may take the necessary actions to monitor the effective implementation of and compliance with this Regulation by providers of general purpose AI models, including adherence to approved codes of practice.

2. Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. A complaint shall be duly reasoned and shall at least indicate:

Article 68h: Alerts of Systemic Risks by the Scientific Panel

1. The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that:
(a) a general purpose AI model poses a concrete identifiable risk at Union level; or
(b) a general purpose AI model meets the requirements referred to in Article 52a [Classification of General Purpose AI Models with Systemic Risk].

2. Upon such a qualified alert, the Commission, through the AI Office and after having informed the AI Board, may exercise the powers laid down in this Chapter for the purpose of assessing the matter.

Article 68i: Power to Request Documentation and Information

1. The Commission may request the provider of the general purpose AI model concerned to provide the documentation drawn up by the provider in accordance with Articles 52c [Obligations for Providers of General Purpose AI Models] and 52d [Obligations on Providers of General Purpose AI Models with Systemic Risk], or any additional information that is necessary for the purpose of assessing compliance of the provider with this Regulation.

2. Before the request for information is sent, the AI Office may initiate a structured dialogue with the provider of the general purpose AI model.

Article 68j: Power to Conduct Evaluations

1. The AI Office, after consulting the Board, may conduct evaluations of the general purpose AI model concerned:
(a) to assess compliance of the provider with the obligations under this Regulation, where the information gathered pursuant to Article 68i [Power to Request Documentation and Information] is insufficient; or
(b) to investigate systemic risks at Union level of general purpose AI models with systemic risk, in particular following a qualified report from the scientific panel in accordance with point (c) of Article 68f [Enforcement of Obligations on Providers of General Purpose AI Models and General Purpose AI Models with Systemic Risk].

Article 68k: Power to Request Measures

1. Where necessary and appropriate, the Commission may request providers to:
(a) take appropriate measures to comply with the obligations set out in Title VIIIa, Chapter 2 [Obligations for Providers of General Purpose AI Models];
(b) implement mitigation measures, where the evaluation carried out in accordance with Article 68j [Power to Conduct Evaluations] has given rise to serious and substantiated concern of a systemic risk at Union level;

Article 68m: Procedural Rights of Economic Operators of the General Purpose AI Model

Article 18 of Regulation (EU) 2019/1020 shall apply by analogy to providers of the general purpose AI model, without prejudice to more specific procedural rights provided for in this Regulation.

Article 68a: EU AI Testing Support Structures in the Area of Artificial Intelligence

1. The Commission shall designate one or more EU AI testing support structures to perform the tasks listed under Article 21(6) of Regulation (EU) 2019/1020 in the area of artificial intelligence.

2. Without prejudice to the tasks referred to in paragraph 1, the EU AI testing support structures shall also provide independent technical or scientific advice at the request of the Board, the Commission, or market surveillance authorities.
