Title III: High-Risk AI Systems

Article 6: Classification Rules for High-Risk AI Systems

1. Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex II;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.

2a. By derogation from paragraph 2, AI systems shall not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This shall be the case where one or more of the following criteria are fulfilled:

(a) the AI system is intended to perform a narrow procedural task;

(b) the AI system is intended to improve the result of a previously completed human activity;

(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or

(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Notwithstanding the first subparagraph of this paragraph, an AI system shall always be considered high-risk if it performs profiling of natural persons.

2b. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such a provider shall be subject to the registration obligation set out in Article 51(1a). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

2c. The Commission shall, after consulting the AI Board, and no later than [18 months] after the entry into force of this Regulation, provide guidelines specifying the practical implementation of this Article, together with a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems, pursuant to Article 82b.

2d. The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the criteria laid down in points (a) to (d) of the first subparagraph of paragraph 2a. The Commission may adopt delegated acts adding new criteria to those laid down in points (a) to (d) of the first subparagraph of paragraph 2a, or modifying them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but do not pose a significant risk of harm to the health, safety and fundamental rights of natural persons.
The Commission shall adopt delegated acts deleting any of the criteria laid down in the first subparagraph of paragraph 2a where there is concrete and reliable evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights in the Union. Any amendment to the criteria laid down in points (a) to (d) of the first subparagraph of paragraph 2a shall not decrease the overall level of protection of health, safety and fundamental rights in the Union. When adopting such delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1) and shall take account of market and technological developments.
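Taken together, paragraphs 1, 2 and 2a define a decision procedure for classification. The following Python sketch is a minimal, non-authoritative rendering of one reading of that logic: every attribute name below is a hypothetical stand-in, and in practice each flag (for example, whether a derogation criterion under points (a) to (d) of paragraph 2a is fulfilled) is the result of a substantive legal assessment, not a simple boolean input.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    # Hypothetical flags mirroring the Article 6 tests; none of these
    # names appear in the Regulation itself.
    annex_ii_safety_component: bool        # paragraph 1(a)
    third_party_conformity_required: bool  # paragraph 1(b)
    annex_iii_use_case: bool               # paragraph 2 / Annex III
    performs_profiling: bool               # paragraph 2a, second subparagraph
    derogation_criterion_met: bool         # any of 2a points (a) to (d)


def is_high_risk(system: AISystem) -> bool:
    # Paragraph 1: high-risk if the system is a safety component of (or is
    # itself) a product covered by Annex II legislation AND that product
    # must undergo a third-party conformity assessment.
    if system.annex_ii_safety_component and system.third_party_conformity_required:
        return True
    # Paragraph 2: Annex III use cases are high-risk by default.
    if system.annex_iii_use_case:
        # Second subparagraph of 2a: profiling of natural persons is
        # always high-risk, regardless of the derogation criteria.
        if system.performs_profiling:
            return True
        # Paragraph 2a derogation: not high-risk if at least one of
        # criteria (a) to (d) is fulfilled (no significant risk of harm).
        return not system.derogation_criterion_met
    return False
```

Note that under paragraph 2b, a provider relying on the paragraph 2a derogation must still document its assessment before placing the system on the market or putting it into service, and remains subject to registration under Article 51(1a); the sketch models only the classification outcome, not those accompanying obligations.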