AI Act — non-official text, made searchable, based on the provisional agreement

Title III: High-Risk AI Systems

Article 6: Classification Rules for High-Risk AI Systems

1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex II;

Article 7: Amendments to Annex III

1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying use cases of high-risk AI systems where both of the following conditions are fulfilled: (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

Article 8: Compliance with the Requirements

1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account their intended purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. 2a. Where a product contains an artificial intelligence system, to which the requirements of this Regulation as well as requirements of the Union harmonisation legislation listed in Annex II, Section A apply, providers shall be responsible for ensuring that their product is fully compliant with all applicable requirements required under the Union harmonisation legislation.

Article 9: Risk Management System

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems. 2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps: (a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;
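The iterative identify-analyse-review cycle described above can be sketched as a minimal risk register. The `Risk` fields, the 1-5 scales and the severity-times-likelihood ranking are illustrative assumptions for this sketch, not anything the Regulation prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    # hypothetical schema for illustration; the Act does not prescribe one
    description: str
    affected_interest: str   # e.g. "health", "safety", "fundamental rights"
    severity: int            # assumed 1-5 scale
    likelihood: int          # assumed 1-5 scale

    def score(self) -> int:
        # simple severity x likelihood ranking, a common risk-matrix heuristic
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    risks: list = field(default_factory=list)

    def identify(self, risk: Risk) -> None:
        # step (a): record a known or reasonably foreseeable risk
        self.risks.append(risk)

    def prioritised(self) -> list:
        # re-ranked on every lifecycle iteration, reflecting the
        # "continuous iterative process" of paragraph 2
        return sorted(self.risks, key=lambda r: r.score(), reverse=True)
```

A register like this would be re-reviewed and re-ranked at each systematic update rather than filled in once.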

Article 10: Data and Data Governance

1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 whenever such datasets are used. 2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices appropriate for the intended purpose of the AI system.
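A deterministic train/validation/test split is one elementary data-management practice of the kind paragraph 2 alludes to. The 80/10/10 ratios and the `split_dataset` helper are hypothetical; real Article 10 compliance also covers relevance, representativeness and examination for possible biases, none of which this sketch addresses.

```python
import random

def split_dataset(records, train=0.8, val=0.1, seed=0):
    """Deterministic train/validation/test partition of a dataset.

    The fixed seed makes the split reproducible, which supports the
    documentation and governance practices Article 10 calls for.
    """
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train)
    n_val = int(n * val)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```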

Article 11: Technical Documentation

1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date. The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements.

Article 12: Record-Keeping

1. High-risk AI systems shall technically allow for the automatic recording of events (‘logs’) over the duration of the lifetime of the system. 2. In order to ensure a level of traceability of the AI system’s functioning that is appropriate to the intended purpose of the system, logging capabilities shall enable the recording of events relevant for: 2a. (i) identification of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or in a substantial modification;
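One way to read the logging obligation is as an append-only, timestamped event record with a flag for risk-relevant events. The event taxonomy below (`anomaly`, `override`, `model_update`) is an invented illustration, not a classification drawn from the Act.

```python
import time

# hypothetical event types that might indicate a risk within the meaning
# of Article 65(1) or a substantial modification
RISK_RELEVANT_TYPES = {"anomaly", "override", "model_update"}

def log_event(log: list, event_type: str, detail: str) -> dict:
    """Append a timestamped record to an append-only event log."""
    record = {
        "ts": time.time(),                 # automatic timestamp
        "type": event_type,
        "detail": detail,
        "risk_relevant": event_type in RISK_RELEVANT_TYPES,
    }
    log.append(record)
    return record
```

In practice such logs would be written to durable storage rather than an in-memory list, so they survive for the lifetime of the system.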

Article 13: Transparency and Provision of Information to Deployers

1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer set out in Chapter 3 of this Title. 2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.

Article 14: Human Oversight

1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use. 2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
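A common engineering pattern for such oversight is a confidence-gated human-in-the-loop review. This sketch assumes a scalar confidence score and a `human_review` callback, both hypothetical, and shows only one of the oversight measures Article 14 contemplates (others include the ability to intervene in, or interrupt, the system).

```python
def decide(confidence: float, threshold: float, human_review):
    """Route low-confidence outputs to a natural person for review.

    High-confidence outputs pass through automatically; everything below
    the threshold is escalated to the human overseer via the callback.
    """
    if confidence >= threshold:
        return "auto_accept"
    return human_review(confidence)
```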

Article 15: Accuracy, Robustness and Cybersecurity

1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle. 1a. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in paragraph 1 of this Article and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of benchmarks and measurement methodologies.
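The most basic of the performance metrics a provider might declare is plain classification accuracy. The sketch below is generic, not drawn from the Act; robustness would typically be measured by re-evaluating the same metric on perturbed inputs.

```python
def accuracy(y_true, y_pred):
    """Share of predictions that match the reference labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("label and prediction lists must be equal length")
    # count exact matches and normalise by the number of samples
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```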

Article 16: Obligations of Providers of High-Risk AI Systems

Providers of high-risk AI systems shall: (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title; (aa) indicate their name, registered trade name or registered trade mark, the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable; (b) have a quality management system in place which complies with Article 17;

Article 17: Quality Management System

1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects: (a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;

Article 18: Documentation Keeping

1. The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities: (a) the technical documentation referred to in Article 11; (b) the documentation concerning the quality management system referred to in Article 17; (c) the documentation concerning the changes approved by notified bodies where applicable; (d) the decisions and other documents issued by the notified bodies where applicable;

Article 20: Automatically Generated Logs

1. Providers of high-risk AI systems shall keep the logs, referred to in Article 12(1), automatically generated by their high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least 6 months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of personal data.
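The minimum retention period can be enforced with a simple date comparison. The 183-day approximation of "six months" and the `may_delete` helper are assumptions for illustration; applicable Union or national law may require a longer period.

```python
from datetime import date, timedelta

# "at least six months", approximated here as 183 days
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: date, today: date,
               retention: timedelta = MIN_RETENTION) -> bool:
    """Return True only once the retention period has fully elapsed."""
    return today - log_created >= retention
```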

Article 21: Corrective Actions and Duty of Information

Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the deployers, the authorised representative and importers accordingly.

Article 23: Cooperation with Competent Authorities

1. Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in a language which can be easily understood by the authority in an official Union language determined by the Member State concerned. 1a. Upon a reasoned request by a national competent authority providers shall also give the requesting national competent authority, as applicable, access to the logs referred to in Article 12(1) automatically generated by the high-risk AI system to the extent such logs are under their control.

Article 25: Authorised Representatives

1. Prior to making their systems available on the Union market providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union. 1b. The provider shall enable its authorised representative to perform its tasks under this Regulation. 2. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union determined by the national competent authority.

Article 26: Obligations of Importers

1. Before placing a high-risk AI system on the market, importers of such system shall ensure that such a system is in conformity with this Regulation by verifying that: (a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of that AI system; (b) the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV; (c) the system bears the required CE conformity marking and is accompanied by the EU declaration of conformity and instructions of use;

Article 27: Obligations of Distributors

1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by a copy of the EU declaration of conformity and instructions of use, and that the provider and the importer of the system, as applicable, have complied with their obligations set out in Article 16, points (aa) and (b), and Article 26(3) respectively.

Article 28: Responsibilities Along the AI Value Chain

1. Any distributor, importer, deployer or other third party shall be considered a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: (a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are allocated otherwise;

Article 29: Obligations of Deployers of High-Risk AI Systems

1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article. 1a. Deployers shall assign human oversight to natural persons who have the necessary competence, training and authority, as well as the necessary support. 1b. To the extent deployers exercise control over the high-risk AI system, they shall ensure that the natural persons assigned to ensure human oversight of the high-risk AI systems have the necessary competence, training and authority as well as the necessary support.

Article 29a: Fundamental Rights Impact Assessment for High-Risk AI Systems

1. Prior to deploying a high-risk AI system as defined in Article 6(2), with the exception of AI systems intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law or private operators providing public services and operators deploying high-risk systems referred to in Annex III, point 5, (b) and (d), shall perform an assessment of the impact on fundamental rights that the use of the system may produce.

Article 30: Notifying Authorities

1. Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. These procedures shall be developed in cooperation between the notifying authorities of all Member States. 2. Member States may decide that the assessment and monitoring referred to in paragraph 1 shall be carried out by a national accreditation body within the meaning of and in accordance with Regulation (EC) No 765/2008.

Article 31: Application of a Conformity Assessment Body for Notification

1. Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established. 2. The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the types of AI systems for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 33.

Article 32: Notification Procedure

1. Notifying authorities may only notify conformity assessment bodies which have satisfied the requirements laid down in Article 33. 2. Notifying authorities shall notify the Commission and the other Member States using the electronic notification tool developed and managed by the Commission of each conformity assessment body referred to in paragraph 1. 3. The notification referred to in paragraph 2 shall include full details of the conformity assessment activities, the conformity assessment module or modules and the types of AI systems concerned and the relevant attestation of competence.

Article 33: Requirements Relating to Notified Bodies

1. A notified body shall be established under national law of a Member State and have legal personality. 2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks, as well as suitable cybersecurity requirements. 3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall be such as to ensure that there is confidence in the performance by and in the results of the conformity assessment activities that the notified bodies conduct.

Article 33a: Presumption of Conformity with Requirements Relating to Notified Bodies

Where a conformity assessment body demonstrates its conformity with the criteria laid down in the relevant harmonised standards or parts thereof, the references of which have been published in the Official Journal of the European Union, it shall be presumed to comply with the requirements set out in Article 33 in so far as the applicable harmonised standards cover those requirements.

Article 34: Subsidiaries of and Subcontracting by Notified Bodies

1. Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 33 and shall inform the notifying authority accordingly. 2. Notified bodies shall take full responsibility for the tasks performed by subcontractors or subsidiaries wherever these are established. 3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider.

Article 34a: Operational Obligations of Notified Bodies

1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures referred to in Article 43. 2. Notified bodies shall perform their activities while avoiding unnecessary burdens for providers, and taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the high-risk AI system in question. In so doing, the notified body shall nevertheless respect the degree of rigour and the level of protection required for the compliance of the high-risk AI system with the requirements of this Regulation.

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

1. The Commission shall assign an identification number to notified bodies. It shall assign a single number, even where a body is notified under several Union acts. 2. The Commission shall make publicly available the list of the bodies notified under this Regulation, including the identification numbers that have been assigned to them and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date.

Article 36: Changes to Notifications

-1. The notifying authority shall notify the Commission and the other Member States of any relevant changes to the notification of a notified body via the electronic notification tool referred to in Article 32(2). -1a. The procedures described in Articles 31 and 32 shall apply to extensions of the scope of the notification. For changes to the notification other than extensions of its scope, the procedures laid down in the following paragraphs shall apply.

Article 37: Challenge to the Competence of Notified Bodies

1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 33 and their applicable responsibilities. 2. The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification or the maintenance of the competence of the notified body concerned.

Article 38: Coordination of Notified Bodies

1. The Commission shall ensure that, with regard to high-risk AI systems, appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures pursuant to this Regulation are put in place and properly operated in the form of a sectoral group of notified bodies. 2. Notifying authorities shall ensure that the bodies notified by them participate in the work of that group, directly or by means of designated representatives.

Article 39: Conformity Assessment Bodies of Third Countries

Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided that they meet the requirements in Article 33 or they ensure an equivalent level of compliance.

Article 40: Harmonised Standards and Standardisation Deliverables

High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) 1025/2012 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title or, as applicable, with the requirements set out in [Chapter on GPAI], to the extent those standards cover those requirements.

Article 41: Common Specifications

1. The Commission is empowered to adopt, after consulting the Advisory Forum referred to in Article 58, implementing acts in accordance with the examination procedure referred to in Article 74(2) establishing common specifications for the requirements set out in Chapter 2 of this Title or, as applicable, with requirements set out in Article [GPAI Chapter], for AI systems within the scope of this Regulation, where the following conditions have been fulfilled:

Article 42: Presumption of Conformity with Certain Requirements

1. High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, contextual or functional setting within which they are intended to be used shall be presumed to be in compliance with the respective requirements set out in Article 10(4). 2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements.

Article 43: Conformity Assessment

1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following procedures: (a) the conformity assessment procedure based on internal control referred to in Annex VI; or

Article 44: Certificates

1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be easily understood by the relevant authorities in the Member State in which the notified body is established. 2. Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by Annex II and four years for AI systems covered by Annex III. On application by the provider, the validity of a certificate may be extended for further periods, each not exceeding five years for AI systems covered by Annex II and four years for AI systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment procedures.

Article 46: Information Obligations of Notified Bodies

1. Notified bodies shall inform the notifying authority of the following: (a) any Union technical documentation assessment certificates, any supplements to those certificates, quality management system approvals issued in accordance with the requirements of Annex VII; (b) any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII; (c) any circumstances affecting the scope of or conditions for notification;

Article 47: Derogation from Conformity Assessment Procedure

1. By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may authorise the placing on the market or putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection and the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period of time while the necessary conformity assessment procedures are being carried out, taking into account the exceptional reasons justifying the derogation.

Article 48: EU Declaration of Conformity

1. The provider shall draw up a written, machine-readable, physical or electronically signed EU declaration of conformity for each high-risk AI system and keep it at the disposal of the national competent authorities for 10 years after the high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the high-risk AI system for which it has been drawn up.
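A machine-readable declaration could, for instance, be rendered as JSON. The field names below are invented for illustration; the mandatory content of the declaration is laid down in Annex V, not here.

```python
import json

def draw_up_declaration(system_name: str, provider: str,
                        standards: list) -> str:
    """Render a hypothetical machine-readable declaration of conformity.

    The declaration must identify the high-risk AI system it covers;
    the remaining keys are illustrative placeholders.
    """
    decl = {
        "system": system_name,               # identifies the AI system
        "provider": provider,
        "harmonised_standards": standards,   # standards applied, if any
    }
    return json.dumps(decl, indent=2)
```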

Article 49: CE Marking of Conformity

1. The CE marking of conformity shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008. 1a. For high-risk AI systems provided digitally, a digital CE marking shall be used only if it can be easily accessed via the interface from which the AI system is accessed or via an easily accessible machine-readable code or other electronic means. 2. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems.

Article 51: Registration

1. Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of high-risk AI systems referred to in Annex III point 2, the provider or, where applicable, the authorised representative shall register themselves and their system in the EU database referred to in Article 60. 1a. Before placing on the market or putting into service an AI system for which the provider has concluded that it is not high-risk in application of the procedure under Article 6(2a), the provider or, where applicable, the authorised representative shall register themselves and that system in the EU database referred to in Article 60.
